Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction
Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose
2017-01-01
Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GBLUP, GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
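The contrast between the linear GBLUP kernel and the nonlinear Gaussian kernel described above can be illustrated with a minimal sketch. This is not the authors' code: the marker matrix, the bandwidth, and the ridge-type shrinkage value are all hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.choice([-1, 0, 1], size=(50, 500)).astype(float)  # hypothetical marker matrix (hybrids x SNPs)
X -= X.mean(axis=0)                                        # center marker codes

# Linear (GBLUP) kernel: genomic relationship matrix G = XX'/p
G = X @ X.T / X.shape[1]

# Nonlinear Gaussian kernel: K = exp(-h * d^2), with d^2 scaled by its median
D2 = squareform(pdist(X, metric="sqeuclidean"))
K_gauss = np.exp(-1.0 * D2 / np.median(D2[D2 > 0]))        # bandwidth h = 1 chosen arbitrarily

def kernel_predict(K, y, train, test, lam=1.0):
    """Predict test phenotypes from a kernel via ridge-type shrinkage (lam is hypothetical)."""
    alpha = np.linalg.solve(K[np.ix_(train, train)] + lam * np.eye(len(train)), y[train])
    return K[np.ix_(test, train)] @ alpha

y = G @ rng.normal(size=50) + rng.normal(scale=0.5, size=50)  # simulated phenotype
train, test = np.arange(40), np.arange(40, 50)
print(kernel_predict(G, y, train, test)[:3])
print(kernel_predict(K_gauss, y, train, test)[:3])
```

Either kernel can be plugged into the same GBLUP-type predictor; only the relationship matrix changes, which is what the model-method combinations in the study exploit.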
An Empirical Approach to Predicting Effects of Climate Change on Stream Water Chemistry
NASA Astrophysics Data System (ADS)
Olson, J. R.; Hawkins, C. P.
2014-12-01
Climate change may affect stream solute concentrations by three mechanisms: dilution associated with increased precipitation, evaporative concentration associated with increased temperature, and changes in solute inputs associated with changes in climate-driven weathering. We developed empirical models predicting base-flow water chemistry from watershed geology, soils, and climate for 1975 individual stream sites across the conterminous USA. We then predicted future solute concentrations (2065 and 2099) by applying down-scaled global climate model predictions to these models. The electrical conductivity model (EC, model R2 = 0.78) predicted mean increases in EC of 19 μS/cm by 2065 and 40 μS/cm by 2099. However, predicted responses for individual streams ranged from a 43% decrease to a 4x increase. Streams with the greatest predicted decreases occurred in the southern Rocky Mountains and Mid-West, whereas southern California and Sierra Nevada streams showed the greatest increases. Generally, streams in dry areas underlain by non-calcareous rocks were predicted to be the most vulnerable to increases in EC associated with climate change. Predicted changes in other water chemistry parameters (e.g., Acid Neutralization Capacity (ANC), SO4, and Ca) were similar to EC, although the magnitude of ANC and SO4 change was greater. Predicted changes in ANC and SO4 are in general agreement with those changes already observed in seven locations with long term records.
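A hedged sketch of the general workflow the abstract describes: fit an empirical model of baseline electrical conductivity from watershed and climate predictors, then re-apply it with perturbed future climate inputs. The predictor names, the random-forest model choice, and the climate deltas below are illustrative assumptions, not the study's actual specification.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
# Hypothetical site-level predictors (geology, soils, climate)
sites = pd.DataFrame({
    "calcareous_frac": rng.uniform(0, 1, n),
    "soil_clay_pct": rng.uniform(5, 60, n),
    "mean_precip_mm": rng.uniform(200, 2000, n),
    "mean_temp_c": rng.uniform(2, 24, n),
})
# Synthetic EC response, only so the sketch runs end to end
ec = (300 * sites.calcareous_frac + 2 * sites.soil_clay_pct
      - 0.1 * sites.mean_precip_mm + 8 * sites.mean_temp_c
      + rng.normal(0, 30, n)).clip(lower=10)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(sites, ec)

# Apply assumed downscaled climate changes for a future period
future = sites.copy()
future["mean_precip_mm"] *= 1.05   # assumed +5% precipitation
future["mean_temp_c"] += 2.5       # assumed +2.5 C warming

delta_ec = model.predict(future) - model.predict(sites)
print(f"mean predicted EC change: {delta_ec.mean():.1f} uS/cm")
```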
NASA Astrophysics Data System (ADS)
Li, Chenghai; Miao, Jiaming; Yang, Kexin; Guo, Xiasheng; Tu, Juan; Huang, Pintong; Zhang, Dong
2018-05-01
Although predicting temperature variation is important for designing treatment plans for thermal therapies, research in this area is yet to investigate the applicability of prevalent thermal conduction models, such as the Pennes equation, the thermal wave model of bio-heat transfer, and the dual phase lag (DPL) model. To address this shortcoming, we heated a tissue phantom and ex vivo bovine liver tissues with focused ultrasound (FU), measured the temperature response, and compared the results with those predicted by these models. The findings show that, for a homogeneous-tissue phantom, the initial temperature increase is accurately predicted by the Pennes equation at the onset of FU irradiation, although the prediction deviates from the measured temperature with increasing FU irradiation time. For heterogeneous liver tissues, the predicted response is closer to the measured temperature for the non-Fourier models, especially the DPL model. Furthermore, the DPL model accurately predicts the temperature response in biological tissues because it increases the phase lag, which characterizes microstructural thermal interactions. These findings should help to establish more precise clinical treatment plans for thermal therapies.
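For orientation, the bio-heat models named above are commonly written in the following textbook form; this is a standard presentation, not the exact parameterization used in the study.

```latex
% One common form of the Pennes bio-heat equation (perfusion + source terms):
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T) + w_b c_b (T_a - T) + Q_{\text{met}} + Q_{\text{FU}}

% Dual phase lag (DPL) constitutive relation that replaces Fourier's law:
\mathbf{q} + \tau_q \frac{\partial \mathbf{q}}{\partial t}
  = -k \left( \nabla T + \tau_T \frac{\partial (\nabla T)}{\partial t} \right)
```

Here w_b c_b (T_a - T) is the perfusion heat sink, Q_FU stands in for the focused-ultrasound heat deposition, and τ_q and τ_T are the heat-flux and temperature-gradient phase lags; setting τ_T = 0 gives the thermal wave model, and τ_q = τ_T = 0 recovers classical Fourier conduction as used in the Pennes equation.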
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Mai, Manuel; Wang, Kun; Huber, Greg; Kirby, Michael; Shattuck, Mark D; O'Hern, Corey S
2015-01-01
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e., the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
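A minimal sketch of the kind of pipeline described: simulate "virtual patients" from a toy pathogen-immune ODE with parameters drawn at a given coefficient of variation v, then classify the late outcome from early time points with logistic regression. The ODE, parameter values, and the median-split discretization of the outcome are illustrative simplifications, not the paper's models.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def immune_ode(t, y, r, k, a):
    """Toy pathogen (P) / immune effector (I) system; not the paper's exact equations."""
    P, I = y
    dP = r * P * (1.0 - P) - a * P * I
    dI = k * P * I - 0.1 * I
    return [dP, dI]

def virtual_patient(v):
    # parameters drawn around nominal values with coefficient of variation v
    r, k, a = [m * max(1.0 + v * rng.standard_normal(), 0.05) for m in (1.0, 0.8, 1.5)]
    sol = solve_ivp(immune_ode, (0, 50), [0.01, 0.1], args=(r, k, a),
                    t_eval=np.linspace(0, 50, 101))
    return sol.y[:, :10].ravel(), sol.y[0, -1]   # early-time features, final pathogen load

sims = [virtual_patient(v=0.3) for _ in range(300)]
X = np.array([s[0] for s in sims])
final_load = np.array([s[1] for s in sims])
labels = (final_load > np.median(final_load)).astype(int)  # discretized outcome for the sketch

Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)      # one-vs-all extends to >2 outcome classes
print("early-time prediction accuracy:", clf.score(Xte, yte))
```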
Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P
2017-05-22
PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age of 40. PREDICT v2 is an improved prognostication and treatment benefit model compared with v1. The online version should continue to aid clinical decision making in women with early breast cancer.
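A hedged sketch of the model-refitting idea: replace categorical tumour size/node variables with smooth fractional-polynomial-style terms inside a Cox model. The data frame, the chosen polynomial powers, and the column names are illustrative, and the lifelines package is used here purely for convenience rather than being the authors' software.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "age": rng.uniform(25, 80, n),
    "size_mm": rng.uniform(2, 60, n),
    "nodes": rng.integers(0, 15, n),
})

# Fractional-polynomial-style transforms (powers chosen for illustration only)
df["size_sqrt"] = np.sqrt(df.size_mm)
df["nodes_log"] = np.log(df.nodes + 1)
df["age_c"] = df.age - 55
df["age_c2"] = df.age_c ** 2          # allows higher risk at both young and old ages

# Synthetic survival times just to make the sketch runnable
lin = 0.03 * df.size_sqrt + 0.4 * df.nodes_log + 0.0005 * df.age_c2
df["time"] = rng.exponential(1.0 / (0.02 * np.exp(lin)))
df["event"] = (df.time < 15).astype(int)       # administrative censoring at 15 years
df["time"] = df.time.clip(upper=15)

covs = ["size_sqrt", "nodes_log", "age_c", "age_c2"]
cph = CoxPHFitter().fit(df[covs + ["time", "event"]], duration_col="time", event_col="event")
cph.print_summary(decimals=2)
```

Because the transformed covariates are continuous, predicted risk now changes smoothly with tumour size and node count instead of jumping at category boundaries.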
Automatically updating predictive modeling workflows support decision-making in drug design.
Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O
2016-09-01
Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles, the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention in reducing model errors is by combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, "abcd" model and variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare multimodel scheme MM-1 with optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model or VIC model) with heteroscedastic error variance as well as from a hydrologic model that exhibits different structure than that of the candidate models (i.e., "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally for all the models, whereas MM-O always assigns higher weights to the candidate model that performs best over the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
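A compact sketch contrasting a static "optimal" combination (MM-O-style: one set of weights from a least-squares fit over the calibration period) with a state-contingent combination in the spirit of MM-1 (weights estimated separately within predictor-state bins). The two candidate "models" here are synthetic stand-ins, not the abcd or VIC models, and the tercile binning is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
T = 240                                    # months of calibration data
obs = rng.gamma(2.0, 50.0, T)              # synthetic observed streamflow
state = rng.uniform(0, 1, T)               # predictor state (e.g., antecedent wetness)

# Two imperfect candidate models: one biased in wet states, one in dry states
m1 = obs * (1 + 0.1 * rng.standard_normal(T)) + 20 * state
m2 = obs * (1 + 0.1 * rng.standard_normal(T)) + 20 * (1 - state)
M = np.column_stack([m1, m2])

# MM-O style: a single set of non-negative weights over the whole calibration period
w_static, _ = nnls(M, obs)

# MM-1 style: weights contingent on predictor-state terciles
bins = np.digitize(state, np.quantile(state, [1 / 3, 2 / 3]))
w_by_bin = {b: nnls(M[bins == b], obs[bins == b])[0] for b in np.unique(bins)}

pred_static = M @ w_static
pred_dynamic = np.array([M[t] @ w_by_bin[bins[t]] for t in range(T)])
for name, p in [("MM-O", pred_static), ("MM-1-like", pred_dynamic)]:
    rmse = np.sqrt(np.mean((p - obs) ** 2))
    print(f"{name} calibration RMSE: {rmse:.1f}")
```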
Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin
2016-11-01
Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (NCS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-12-09
In crop breeding, interest in predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed-model using moving-means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models.
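A small sketch of the spatial-adjustment step described: compute a moving mean of neighboring plot phenotypes and use it as a covariate before genomic prediction. The field layout, window size, and the simple regression-based adjustment are illustrative assumptions, not the authors' exact mixed-model formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(5)
rows, cols = 16, 24                               # hypothetical field layout
genetic = rng.normal(0, 1, rows * cols)           # true genetic values (unknown in practice)
trend = np.add.outer(np.linspace(0, 2, rows), np.linspace(0, 1, cols))  # smooth field trend
yield_obs = genetic.reshape(rows, cols) + trend + rng.normal(0, 0.3, (rows, cols))

# Moving mean of neighboring plots (5x5 window) as the spatial covariate
moving_mean = uniform_filter(yield_obs, size=5, mode="nearest")

# Regress the covariate out to obtain spatially adjusted phenotypes
y = yield_obs.ravel()
x = moving_mean.ravel()
beta = np.polyfit(x, y, 1)
adjusted = y - np.polyval(beta, x) + y.mean()

print("correlation with true genetic value, raw:     ",
      round(np.corrcoef(y, genetic)[0, 1], 2))
print("correlation with true genetic value, adjusted:",
      round(np.corrcoef(adjusted, genetic)[0, 1], 2))
```

The adjusted phenotypes, rather than the raw plot values, would then be passed to the genomic selection model (GBLUP, Gaussian kernel, etc.).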
Thematic and spatial resolutions affect model-based predictions of tree species distribution.
Liang, Yu; He, Hong S; Fraser, Jacob S; Wu, ZhiWei
2013-01-01
Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different combinations of thematic (different numbers of land types) and spatial resolutions, and then statistically examined the differences in species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution.
von Ruesten, Anne; Steffen, Annika; Floegel, Anna; van der A, Daphne L.; Masala, Giovanna; Tjønneland, Anne; Halkjaer, Jytte; Palli, Domenico; Wareham, Nicholas J.; Loos, Ruth J. F.; Sørensen, Thorkild I. A.; Boeing, Heiner
2011-01-01
Objective To investigate trends in obesity prevalence in recent years and to predict the obesity prevalence in 2015 in European populations. Methods Data of 97 942 participants from seven cohorts involved in the European Prospective Investigation into Cancer and Nutrition (EPIC) study participating in the Diogenes project (named as “Diogenes cohort” in the following) with weight measurements at baseline and follow-up were used to predict future obesity prevalence with logistic linear and non-linear (leveling off) regression models. In addition, linear and leveling off models were fitted to the EPIC-Potsdam dataset with five weight measures during the observation period to find out which of these two models might provide the more realistic prediction. Results During a mean follow-up period of 6 years, the obesity prevalence in the Diogenes cohort increased from 13% to 17%. The linear prediction model predicted an overall obesity prevalence of about 30% in 2015, whereas the leveling off model predicted a prevalence of about 20%. In the EPIC-Potsdam cohort, the shape of obesity trend favors a leveling off model among men (R2 = 0.98), and a linear model among women (R2 = 0.99). Conclusion Our data show an increase in obesity prevalence since the 1990s, and predictions for 2015 suggest a sizeable further increase in European populations. However, the estimates from the leveling off model were considerably lower. PMID:22102897
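A brief sketch of the two competing trend forms mentioned (linear versus leveling off), fitted with scipy; the example prevalence values, the logistic saturating form, and the starting parameters are assumptions for demonstration, not the EPIC data or the study's exact specification.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical obesity prevalence observations (year, proportion)
years = np.array([1994, 1998, 2002, 2005, 2008])
prev = np.array([0.13, 0.145, 0.155, 0.162, 0.17])
t = years - years[0]

def linear(t, a, b):
    return a + b * t

def leveling_off(t, upper, k, t0):
    # logistic curve that saturates at `upper`
    return upper / (1.0 + np.exp(-k * (t - t0)))

p_lin, _ = curve_fit(linear, t, prev)
p_lev, _ = curve_fit(leveling_off, t, prev, p0=[0.2, 0.2, 4.0], maxfev=10000)

t2015 = 2015 - years[0]
print("linear model, 2015 prevalence:       %.3f" % linear(t2015, *p_lin))
print("leveling-off model, 2015 prevalence: %.3f" % leveling_off(t2015, *p_lev))
```

The point of the comparison is that extrapolations diverge: the linear form keeps rising indefinitely, while the leveling-off form approaches its upper asymptote.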
A model of litter size distribution in cattle.
Bennett, G L; Echternkamp, S E; Gregory, K E
1998-07-01
Genetic increases in twinning of cattle could result in increased frequency of triplet or higher-order births. There are no estimates of the incidence of triplets in populations with genetic levels of twinning over 40% because these populations either have not existed or have not been documented. A model of the distribution of litter size in cattle is proposed. Empirical estimates of ovulation rate distribution in sheep were combined with biological hypotheses about the fate of embryos in cattle. Two phases of embryo loss were hypothesized. The first phase is considered to be preimplantation. Losses in this phase occur independently (i.e., the loss of one embryo does not affect the loss of the remaining embryos). The second phase occurs after implantation. The loss of one embryo in this stage results in the loss of all embryos. Fewer than 5% triplet births are predicted when 50% of births are twins and triplets. Above 60% multiple births, increased triplets accounted for most of the increase in litter size. Predictions were compared with data from 5,142 calvings by 14 groups of heifers and cows with average litter sizes ranging from 1.14 to 1.36 calves. The predicted number of triplets was not significantly different (chi2 = 16.85, df = 14) from the observed number. The model also predicted differences in conception rates. A cow ovulating two ova was predicted to have the highest conception rate in a single breeding cycle. As mean ovulation rate increased, predicted conception to one breeding cycle increased. Conception to two or three breeding cycles decreased as mean ovulation increased because late-pregnancy failures increased. An alternative model of the fate of ova in cattle based on embryo and uterine competency predicts very similar proportions of singles, twins, and triplets but different conception rates. The proposed model of litter size distribution in cattle accurately predicts the proportion of triplets found in cattle with genetically high twinning rates. This model can be used in projecting efficiency changes resulting from genetically increasing the twinning rate in cattle.
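The two-phase embryo-loss hypothesis lends itself to a small probability calculation: ovulation number is drawn from a distribution, each embryo survives the preimplantation phase independently (binomial), and a post-implantation failure loses the whole litter. The ovulation-rate distribution and loss probabilities below are made-up values for illustration, not the paper's estimates.

```python
from math import comb

# Hypothetical distribution of ovulation number (1, 2 or 3 ova) and loss probabilities
p_ova = {1: 0.45, 2: 0.45, 3: 0.10}
p_embryo_survive = 0.75      # phase 1: each embryo survives independently
p_litter_survive = 0.90      # phase 2: all-or-none loss after implantation

litter_dist = {}             # probability of calving 0, 1, 2 or 3 calves
for n_ova, p_n in p_ova.items():
    for k in range(n_ova + 1):
        # k embryos survive phase 1 (binomial), then the whole litter survives phase 2 or not
        p_k = comb(n_ova, k) * p_embryo_survive**k * (1 - p_embryo_survive)**(n_ova - k)
        if k == 0:
            litter_dist[0] = litter_dist.get(0, 0) + p_n * p_k
        else:
            litter_dist[k] = litter_dist.get(k, 0) + p_n * p_k * p_litter_survive
            litter_dist[0] = litter_dist.get(0, 0) + p_n * p_k * (1 - p_litter_survive)

conceived = 1 - litter_dist[0]
print("conception rate per cycle:", round(conceived, 3))
for k in (1, 2, 3):
    print(f"P({k} calves | calving): {litter_dist[k] / conceived:.3f}")
```

Shifting the ovulation-rate distribution upward in this calculation raises the predicted twin and triplet shares while lowering multi-cycle conception, which is the qualitative trade-off the model describes.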
Confidence in the predictive capability of a PBPK model is increased when the model is demonstrated to predict multiple pharmacokinetic outcomes from diverse studies under different exposure conditions. We previously showed that our multi-route human BDCM PBPK model adequately (w...
Noh, Wonjung; Seomun, Gyeongae
2015-06-01
This study was conducted to develop key performance indicators (KPIs) for home care nursing (HCN) based on a balanced scorecard, and to construct a performance prediction model of strategic objectives using the Bayesian Belief Network (BBN). This methodological study included four steps: establishment of KPIs, performance prediction modeling, development of a performance prediction model using BBN, and simulation of a suggested nursing management strategy. An HCN expert group and a staff group participated. The content validity index was analyzed using STATA 13.0, and BBN was analyzed using HUGIN 8.0. We generated a list of KPIs composed of 4 perspectives, 10 strategic objectives, and 31 KPIs. In the validity test of the performance prediction model, the factor with the greatest variance for increasing profit was maximum cost reduction of HCN services. The factor with the smallest variance for increasing profit was a minimum image improvement for HCN. During sensitivity analysis, the probability of the expert group did not affect the sensitivity. Furthermore, simulation of a 10% image improvement predicted the most effective way to increase profit. KPIs of HCN can estimate financial and non-financial performance. The performance prediction model for HCN will be useful to improve performance.
Edwards, Stefan M.; Sørensen, Izel F.; Sarup, Pernille; Mackay, Trudy F. C.; Sørensen, Peter
2016-01-01
Predicting individual quantitative trait phenotypes from high-resolution genomic polymorphism data is important for personalized medicine in humans, plant and animal breeding, and adaptive evolution. However, this is difficult for populations of unrelated individuals when the number of causal variants is low relative to the total number of polymorphisms and causal variants individually have small effects on the traits. We hypothesized that mapping molecular polymorphisms to genomic features such as genes and their gene ontology categories could increase the accuracy of genomic prediction models. We developed a genomic feature best linear unbiased prediction (GFBLUP) model that implements this strategy and applied it to three quantitative traits (startle response, starvation resistance, and chill coma recovery) in the unrelated, sequenced inbred lines of the Drosophila melanogaster Genetic Reference Panel. Our results indicate that subsetting markers based on genomic features increases the predictive ability relative to the standard genomic best linear unbiased prediction (GBLUP) model. Both models use all markers, but GFBLUP allows differential weighting of the individual genetic marker relationships, whereas GBLUP weighs the genetic marker relationships equally. Simulation studies show that it is possible to further increase the accuracy of genomic prediction for complex traits using this model, provided the genomic features are enriched for causal variants. Our GFBLUP model using prior information on genomic features enriched for causal variants can increase the accuracy of genomic predictions in populations of unrelated individuals and provides a formal statistical framework for leveraging and evaluating information across multiple experimental studies to provide novel insights into the genetic architecture of complex traits. PMID:27235308
Characterization of the 2012-044C Briz-M Upper Stage Breakup
NASA Technical Reports Server (NTRS)
Hamilton, Joseph A.; Matney, Mark
2013-01-01
The NASA breakup model prediction was close to the observed population for catalog objects. The NASA breakup model predicted a larger population than was observed for objects under 10 cm. The stare technique produces low observation counts, but is readily comparable to model predictions. Customized stare parameters (Az, El, Range) were effective in increasing the opportunities for HAX to observe the debris cloud. Other techniques to increase observation count will be considered for future breakup events.
Predicting thermally stressful events in rivers with a strategy to evaluate management alternatives
Maloney, K.O.; Cole, J.C.; Schmid, M.
2016-01-01
Water temperature is an important factor in river ecology. Numerous models have been developed to predict river temperature. However, many were not designed to predict thermally stressful periods. Because such events are rare, traditionally applied analyses are inappropriate. Here, we developed two logistic regression models to predict thermally stressful events in the Delaware River at the US Geological Survey gage near Lordville, New York. One model predicted the probability of an event >20.0 °C, and a second predicted an event >22.2 °C. Both models were strong (independent test data sensitivity 0.94 and 1.00, specificity 0.96 and 0.96), predicting 63 of 67 events in the >20.0 °C model and all 15 events in the >22.2 °C model. Both showed negative relationships with released volume from the upstream Cannonsville Reservoir and positive relationships with the difference between air temperature and the previous day's water temperature at Lordville. We further predicted how increasing release volumes from Cannonsville Reservoir affected the probabilities of correctly predicted events. For the >20.0 °C model, an increase of 0.5 to a proportionally adjusted release (that accounts for other sources) resulted in 35.9% of events in the training data falling below cutoffs; increasing this adjustment by 1.0 resulted in 81.7% falling below cutoffs. For the >22.2 °C model, these adjustments resulted in 71.1% and 100.0% of events falling below cutoffs. Results from these analyses can help managers make informed decisions on alternative release scenarios.
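A hedged sketch of a logistic event model of this general form: probability of exceeding a temperature threshold as a function of release volume and the air-water temperature difference, with sensitivity/specificity at a cutoff and a simple what-if increase in release. The data, coefficients, and 0.5 cutoff are illustrative, not the study's fitted values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(7)
n = 800
release = rng.uniform(0.1, 2.0, n)            # proportionally adjusted release volume (illustrative)
temp_diff = rng.normal(0, 4, n)               # air temp minus previous day's water temp (deg C)

# Synthetic labels: exceedance more likely with low releases and large temperature differences
logit = -1.0 - 2.0 * release + 0.6 * temp_diff
event = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

features = np.column_stack([release, temp_diff])
clf = LogisticRegression().fit(features, event)

# Sensitivity and specificity at a 0.5 probability cutoff
pred = clf.predict(features)
tn, fp, fn, tp = confusion_matrix(event, pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")

# What-if: increase the adjusted release by +0.5 and recompute event probabilities
p_base = clf.predict_proba(features)[:, 1]
p_plus = clf.predict_proba(np.column_stack([release + 0.5, temp_diff]))[:, 1]
print("events pushed below the 0.5 cutoff:",
      int(((p_base >= 0.5) & (p_plus < 0.5)).sum()))
```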
Demonstrating the improvement of predictive maturity of a computational model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemez, Francois M; Unal, Cetin; Atamturktur, Huriye S
2010-01-01
We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.
Assessment of prediction skill in equatorial Pacific Ocean in high resolution model of CFS
NASA Astrophysics Data System (ADS)
Arora, Anika; Rao, Suryachandra A.; Pillai, Prasanth; Dhakate, Ashish; Salunke, Kiran; Srivastava, Ankur
2018-01-01
The effect of increasing atmospheric resolution on the prediction skill of the El Niño-Southern Oscillation phenomenon in the climate forecast system model is explored in this paper. Improvement in prediction skill for sea surface temperature (SST) and winds at all leads, compared to the low resolution model, is observed in the tropical Indo-Pacific basin. The high resolution model is able to capture extreme events reasonably well. As a result, the signal-to-noise ratio is improved in the high resolution model. However, the spring predictability barrier (SPB) for summer months in the Nino 3 and Nino 3.4 regions is stronger in the high resolution model, in spite of improvement in overall prediction skill and dynamics everywhere else. The anomaly correlation coefficient of SST with observations in the Nino 3.4 region, targeting boreal summer months at lead times of 3-8 months, decreased in the high resolution model compared with its lower resolution counterpart. It is noted that the higher variance of winds predicted in the spring season over the central equatorial Pacific, compared to the observed variance of winds, results in a stronger than normal response in the subsurface ocean and hence increases the SPB for boreal summer months in the high resolution model.
Cestari, Andrea
2013-01-01
Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also in clinical practice with the introduction of several nomograms dealing with the main fields of onco-urology.
Modeling and Characterization of a Graphite Nanoplatelet/Epoxy Composite
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Chasiotis, I.; Chen, Q.; Gates, T. S.
2004-01-01
A micromechanical modeling procedure is developed to predict the viscoelastic properties of a graphite nanoplatelet/epoxy composite as a function of volume fraction and nanoplatelet diameter. The predicted storage and loss moduli from the model are compared to measured values from the same material using Dynamical Mechanical Analysis, nanoindentation, and tensile tests. In most cases, the model and experiments indicate that for increasing volume fractions of nanoplatelets, both the storage and loss moduli increase. Also, in most cases, the model and experiments indicate that as the nanoplatelet diameter is increased, the storage and loss moduli decrease and increase, respectively.
Zhou, D; Bui, K; Sostek, M; Al-Huniti, N
2016-05-01
Naloxegol, a peripherally acting μ-opioid receptor antagonist for the treatment of opioid-induced constipation, is a substrate for cytochrome P450 (CYP) 3A4/3A5 and the P-glycoprotein (P-gp) transporter. By integrating in silico, preclinical, and clinical pharmacokinetic (PK) findings, minimal and full physiologically based pharmacokinetic (PBPK) models were developed to predict the drug-drug interaction (DDI) potential for naloxegol. The models reasonably predicted the observed changes in naloxegol exposure with ketoconazole (increase of 13.1-fold predicted vs. 12.9-fold observed), diltiazem (increase of 2.8-fold predicted vs. 3.4-fold observed), rifampin (reduction of 76% predicted vs. 89% observed), and quinidine (increase of 1.2-fold predicted vs. 1.4-fold observed). The moderate CYP3A4 inducer efavirenz was predicted to reduce naloxegol exposure by ∼50%, whereas weak CYP3A inhibitors were predicted to minimally affect exposure. In summary, the PBPK models reasonably estimated interactions with various CYP3A modulators and can be used to guide dosing in clinical practice when naloxegol is coadministered with such agents. © 2016 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
Final Technical Report: Increasing Prediction Accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Bruce Hardison; Hansen, Clifford; Stein, Joshua
2015-12-01
PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
Assessment of Arctic and Antarctic Sea Ice Predictability in CMIP5 Decadal Hindcasts
NASA Technical Reports Server (NTRS)
Yang, Chao-Yuan; Liu, Jiping (Inventor); Hu, Yongyun; Horton, Radley M.; Chen, Liqi; Cheng, Xiao
2016-01-01
This paper examines the ability of coupled global climate models to predict decadal variability of Arctic and Antarctic sea ice. We analyze decadal hindcasts/predictions of 11 Coupled Model Intercomparison Project Phase 5 (CMIP5) models. Decadal hindcasts exhibit a large multimodel spread in the simulated sea ice extent, with some models deviating significantly from the observations as the predicted ice extent quickly drifts away from the initial constraint. The anomaly correlation analysis between the decadal hindcast and observed sea ice suggests that in the Arctic, for most models, the areas showing significant predictive skill become broader as lead time increases. This area expansion is largely because nearly all the models are capable of predicting the observed decreasing Arctic sea ice cover. Sea ice extent in the North Pacific has better predictive skill than that in the North Atlantic (particularly at a lead time of 3-7 years), but there is a reemerging predictive skill in the North Atlantic at a lead time of 6-8 years. In contrast to the Arctic, Antarctic sea ice decadal hindcasts do not show broad predictive skill at any timescales, and there is no obvious improvement linking the areal extent of significant predictive skill to lead time increase. This might be because nearly all the models predict a retreating Antarctic sea ice cover, opposite to the observations. For the Arctic, the predictive skill of the multi-model ensemble mean outperforms most models and the persistence prediction at longer timescales, which is not the case for the Antarctic. Overall, for the Arctic, initialized decadal hindcasts show improved predictive skill compared to uninitialized simulations, although this improvement is not present in the Antarctic.
Bernecker, Samantha L; Rosellini, Anthony J; Nock, Matthew K; Chiu, Wai Tat; Gutierrez, Peter M; Hwang, Irving; Joiner, Thomas E; Naifeh, James A; Sampson, Nancy A; Zaslavsky, Alan M; Stein, Murray B; Ursano, Robert J; Kessler, Ronald C
2018-04-03
High rates of mental disorders, suicidality, and interpersonal violence early in the military career have raised interest in implementing preventive interventions with high-risk new enlistees. The Army Study to Assess Risk and Resilience in Servicemembers (STARRS) developed risk-targeting systems for these outcomes based on machine learning methods using administrative data predictors. However, administrative data omit many risk factors, raising the question whether risk targeting could be improved by adding self-report survey data to prediction models. If so, the Army may gain from routinely administering surveys that assess additional risk factors. The STARRS New Soldier Survey was administered to 21,790 Regular Army soldiers who agreed to have survey data linked to administrative records. As reported previously, machine learning models using administrative data as predictors found that small proportions of high-risk soldiers accounted for high proportions of negative outcomes. Other machine learning models using self-report survey data as predictors were developed previously for three of these outcomes: major physical violence and sexual violence perpetration among men and sexual violence victimization among women. Here we examined the extent to which this survey information increases prediction accuracy, over models based solely on administrative data, for those three outcomes. We used discrete-time survival analysis to estimate a series of models predicting first occurrence, assessing how model fit improved and concentration of risk increased when adding the predicted risk score based on survey data to the predicted risk score based on administrative data. The addition of survey data improved prediction significantly for all outcomes. In the most extreme case, the percentage of reported sexual violence victimization among the 5% of female soldiers with highest predicted risk increased from 17.5% using only administrative predictors to 29.4% adding survey predictors, a 67.9% proportional increase in prediction accuracy. Other proportional increases in concentration of risk ranged from 4.8% to 49.5% (median = 26.0%). Data from an ongoing New Soldier Survey could substantially improve accuracy of risk models compared to models based exclusively on administrative predictors. Depending upon the characteristics of interventions used, the increase in targeting accuracy from survey data might offset survey administration costs.
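A sketch of the comparison described, reduced to a single-period logistic simplification of the discrete-time hazard setup: a model using an administrative risk score alone versus one adding a survey-based score, followed by a "concentration of risk" check in the top 5% of predicted risk. The scores and outcome are simulated placeholders, not STARRS data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(8)
n = 20000
admin_score = rng.normal(0, 1, n)                        # risk score from administrative data
survey_score = 0.5 * admin_score + rng.normal(0, 1, n)   # partially overlapping survey score

# Simulated first-occurrence outcome driven by both scores
logit = -4.0 + 0.8 * admin_score + 0.8 * survey_score
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

m_admin = LogisticRegression().fit(admin_score.reshape(-1, 1), y)
m_both = LogisticRegression().fit(np.column_stack([admin_score, survey_score]), y)

p_admin = m_admin.predict_proba(admin_score.reshape(-1, 1))[:, 1]
p_both = m_both.predict_proba(np.column_stack([admin_score, survey_score]))[:, 1]

def top5_concentration(p, y):
    """Share of observed events among the 5% of individuals with highest predicted risk."""
    top = p >= np.quantile(p, 0.95)
    return y[top].mean()

print("log loss, admin only:      ", round(log_loss(y, p_admin), 4))
print("log loss, admin + survey:  ", round(log_loss(y, p_both), 4))
print("event rate in top 5%, admin only:     ", round(top5_concentration(p_admin, y), 3))
print("event rate in top 5%, admin + survey: ", round(top5_concentration(p_both, y), 3))
```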
Barrett, Matthew JP; Suresh, Vinod
2013-01-01
Neural activation triggers a rapid, focal increase in blood flow and thus oxygen delivery. Local oxygen consumption also increases, although not to the same extent as oxygen delivery. This 'uncoupling' enables a number of widely-used functional neuroimaging techniques; however, the physiologic mechanisms that govern oxygen transport under these conditions remain unclear. Here, we explore this dynamic process using a new mathematical model. Motivated by experimental observations and previous modeling, we hypothesized that functional recruitment of capillaries has an important role during neural activation. Using conventional mechanisms alone, the model predictions were inconsistent with in vivo measurements of oxygen partial pressure. However, dynamically increasing net capillary permeability, a simple description of functional recruitment, led to predictions consistent with the data. Increasing permeability in all vessel types had the same effect, but two alternative mechanisms were unable to produce predictions consistent with the data. These results are further evidence that conventional models of oxygen transport are not sufficient to predict dynamic experimental data. The data and modeling suggest that it is necessary to include a mechanism that dynamically increases net vascular permeability. While the model cannot distinguish between the different possibilities, we speculate that functional recruitment could have this effect in vivo. PMID:23673433
Christine A. Vogt; Greg Winter; Jeremy S. Fried
2005-01-01
Social science models are increasingly needed as a framework for explaining and predicting how members of the public respond to the natural environment and their communities. The theory of reasoned action is widely used in human dimensions research on natural resource problems and work is ongoing to increase the predictive power of models based on this theory. This...
He, Y J; Li, X T; Fan, Z Q; Li, Y L; Cao, K; Sun, Y S; Ouyang, T
2018-01-23
Objective: To construct a dynamic enhanced MR based predictive model for early assessing pathological complete response (pCR) to neoadjuvant therapy in breast cancer, and to evaluate the clinical benefit of the model by using decision curve. Methods: From December 2005 to December 2007, 170 patients with breast cancer treated with neoadjuvant therapy were identified and their MR images before neoadjuvant therapy and at the end of the first cycle of neoadjuvant therapy were collected. A logistic regression model was used to detect independent factors for predicting pCR and construct the predictive model accordingly, then receiver operating characteristic (ROC) curve and decision curve were used to evaluate the predictive model. Results: ΔArea(max) and Δslope(max) were independent predictive factors for pCR, OR = 0.942 (95% CI: 0.918-0.967) and 0.961 (95% CI: 0.940-0.987), respectively. The area under ROC curve (AUC) for the constructed model was 0.886 (95% CI: 0.820-0.951). Decision curve showed that in the range of the threshold probability above 0.4, the predictive model presented increased net benefit as the threshold probability increased. Conclusions: The constructed predictive model for pCR is of potential clinical value, with an AUC>0.85. Meanwhile, decision curve analysis indicates the constructed predictive model has net benefit from 3 to 8 percent in the likely range of probability threshold from 80% to 90%.
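The decision-curve evaluation mentioned can be reproduced in a few lines: net benefit at a threshold probability p_t is TP/n - FP/n * p_t/(1 - p_t), compared against treat-all and treat-none strategies. The outcomes and predicted probabilities below are simulated placeholders for the MR-based model, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 170
y = rng.binomial(1, 0.3, n)                              # 1 = pathological complete response
p_hat = np.clip(0.3 + 0.35 * (y - 0.3) + rng.normal(0, 0.2, n), 0.01, 0.99)  # mock model output

def net_benefit(y, p_hat, p_t):
    """Net benefit of treating everyone whose predicted probability exceeds p_t."""
    treat = p_hat >= p_t
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / len(y) - fp / len(y) * p_t / (1 - p_t)

for p_t in (0.4, 0.6, 0.8):
    nb_model = net_benefit(y, p_hat, p_t)
    nb_all = y.mean() - (1 - y.mean()) * p_t / (1 - p_t)   # treat-everyone strategy
    print(f"threshold {p_t:.1f}: model NB = {nb_model:.3f}, "
          f"treat-all NB = {nb_all:.3f}, treat-none NB = 0")
```

A model is clinically useful over the threshold range where its net benefit exceeds both the treat-all and treat-none curves, which is how the abstract's "above 0.4" statement should be read.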
Pirdavani, Ali; Brijs, Tom; Bellemans, Tom; Kochan, Bruno; Wets, Geert
2013-01-01
Travel demand management (TDM) consists of a variety of policy measures that affect the transportation system's effectiveness by changing travel behavior. The primary objective of implementing such TDM strategies is not to improve traffic safety, although their impact on traffic safety should not be neglected. The main purpose of this study is to evaluate the traffic safety impact of conducting a fuel-cost increase scenario (i.e. increasing the fuel price by 20%) in Flanders, Belgium. Since TDM strategies are usually conducted at an aggregate level, crash prediction models (CPMs) should also be developed at a geographically aggregated level. Therefore zonal crash prediction models (ZCPMs) are considered to present the association between observed crashes in each zone and a set of predictor variables. To this end, an activity-based transportation model framework is applied to produce exposure metrics which will be used in prediction models. This allows us to conduct a more detailed and reliable assessment while TDM strategies are inherently modeled in the activity-based models unlike traditional models in which the impact of TDM strategies is assumed. The crash data used in this study consist of fatal and injury crashes observed between 2004 and 2007. The network and socio-demographic variables are also collected from other sources. In this study, different ZCPMs are developed to predict the number of injury crashes (NOCs) (disaggregated by different severity levels and crash types) for both the null and the fuel-cost increase scenario. The results show a considerable traffic safety benefit of conducting the fuel-cost increase scenario apart from its impact on the reduction of the total vehicle kilometers traveled (VKT). A 20% increase in fuel price is predicted to reduce the annual VKT by 5.02 billion (11.57% of the total annual VKT in Flanders), which causes the total NOCs to decline by 2.83%. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Yenan; Zhong, Ping-an; Xu, Bin; Zhu, Feilin; Fu, Jisi
2017-06-01
Using climate models with high performance to predict future climate changes can increase the reliability of results. In this paper, six global climate models selected from the Coupled Model Intercomparison Project Phase 5 (CMIP5) under the Representative Concentration Pathway (RCP) 4.5 scenario were compared with measured data during the baseline period (1960-2000) to evaluate their simulation performance for precipitation. Since the results of single climate models are often biased and highly uncertain, we examine the back propagation (BP) neural network and arithmetic mean method in assembling the precipitation of multi models. The delta method was used to calibrate the result of single model and multimodel ensembles by arithmetic mean method (MME-AM) during the validation period (2001-2010) and the predicting period (2011-2100). We then use the single models and multimodel ensembles to predict the future precipitation process and spatial distribution. The result shows that the BNU-ESM model has the highest simulation effect among all the single models. The multimodel assembled by the BP neural network (MME-BP) has a good simulation performance on the annual average precipitation process, and the deterministic coefficient during the validation period is 0.814. The simulation capability on spatial distribution of precipitation is: calibrated MME-AM > MME-BP > calibrated BNU-ESM. The future precipitation predicted by all models tends to increase as the time period increases. The order of average increase amplitude of each season is: winter > spring > summer > autumn. These findings can provide useful information for decision makers to make climate-related disaster mitigation plans.
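A hedged sketch of the ensemble step: combine several models' precipitation series either by a simple arithmetic mean (MME-AM-like) or by a small back-propagation neural network trained against observations over a baseline period (MME-BP-like). The synthetic series, network size, and train/validation split are illustrative, not the CMIP5 data or the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
months = 480                                   # baseline period, monthly values
obs = 60 + 40 * np.sin(np.arange(months) * 2 * np.pi / 12) + rng.gamma(2, 10, months)

# Six synthetic GCM precipitation series with different scale biases and offsets
gcms = np.column_stack([obs * s + rng.normal(0, 15, months) + b
                        for s, b in [(0.8, 20), (1.1, -5), (0.9, 10),
                                     (1.2, -15), (1.0, 5), (0.7, 25)]])

train, valid = slice(0, 360), slice(360, None)

mme_am = gcms.mean(axis=1)                     # arithmetic-mean ensemble
mme_bp = make_pipeline(                        # BP-network ensemble
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
).fit(gcms[train], obs[train])

for name, pred in [("MME-AM", mme_am[valid]), ("MME-BP", mme_bp.predict(gcms[valid]))]:
    rmse = np.sqrt(np.mean((pred - obs[valid]) ** 2))
    print(f"{name} validation RMSE: {rmse:.1f} mm")
```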
A Prospective Test of Cognitive Vulnerability Models of Depression With Adolescent Girls
Bohon, Cara; Stice, Eric; Burton, Emily; Fudell, Molly; Nolen-Hoeksema, Susan
2009-01-01
This study sought to provide a more rigorous prospective test of two cognitive vulnerability models of depression with longitudinal data from 496 adolescent girls. Results supported the cognitive vulnerability model in that stressors predicted future increases in depressive symptoms and onset of clinically significant major depression for individuals with a negative attributional style, but not for those with a positive attributional style, although these effects were small. This model appeared to be specific to depression, in that it did not predict future increases in bulimia nervosa or substance abuse symptoms. In contrast, results did not support the integrated cognitive vulnerability self-esteem model that asserts stressors should only predict increased depression for individuals with a confluence of negative attributional style and low self-esteem, and this model did not appear to be specific to depression. PMID:18328873
Integrating in silico models to enhance predictivity for developmental toxicity.
Marzo, Marco; Kulkarni, Sunil; Manganaro, Alberto; Roncaglioni, Alessandra; Wu, Shengde; Barton-Maclaren, Tara S; Lester, Cathy; Benfenati, Emilio
2016-08-31
Application of in silico models to predict developmental toxicity has demonstrated limited success, particularly when employed as a single source of information. It is acknowledged that modelling the complex outcomes related to this endpoint is a challenge; however, such models have been developed and reported in the literature. The current study explored the possibility of integrating the selected public domain models (CAESAR, SARpy and P&G model) with the selected commercial modelling suites (Multicase, Leadscope and Derek Nexus) to assess if there is an increase in overall predictive performance. The results varied according to the data sets used to assess performance, which improved upon model integration relative to individual models. Moreover, because different models are based on different specific developmental toxicity effects, integration of these models increased the applicable chemical and biological spaces. It is suggested that this approach reduces uncertainty associated with in silico predictions by achieving a consensus among a battery of models. The use of tools to assess the applicability domain also improves the interpretation of the predictions. This has been verified in the case of the software VEGA, which makes freely available QSAR models with a measurement of the applicability domain. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Wenhong; Fu, Rong; Dickinson, Robert E.
2006-01-01
The global climate models for the Intergovernmental Panel on Climate Change Fourth Assessment Report (IPCC AR4) predict very different changes of rainfall over the Amazon under the SRES A1B scenario for global climate change. Five of the eleven models predict an increase of annual rainfall, three models predict a decrease of rainfall, and the other three models predict no significant changes in the Amazon rainfall. We have further examined two models. The UKMO-HadCM3 model predicts an El Niño-like sea surface temperature (SST) change and warming in the northern tropical Atlantic which appear to enhance atmospheric subsidence and consequently reduce clouds over the Amazon. The resultant increase of surface solar absorption causes a stronger surface sensible heat flux and thus reduces relative humidity of the surface air. These changes decrease the rate and length of wet season rainfall and surface latent heat flux. This decreased wet season rainfall leads to drier soil during the subsequent dry season, which in turn can delay the transition from the dry to wet season. GISS-ER predicts a weaker SST warming in the western Pacific and the southern tropical Atlantic which increases moisture transport and hence rainfall in the Amazon. In the southern Amazon and Nordeste where the strongest rainfall increase occurs, the resultant higher soil moisture supports a higher surface latent heat flux during the dry and transition season and leads to an earlier wet season onset.
Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin
The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
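A minimal sketch of the general ingredient named above, a Kalman filter producing one-step-ahead predictions from a linear-Gaussian state-space model. This is not the authors' filter: their parameters come from a filter-based expectation-maximization step, whereas the transition coefficient and noise variances below are simply assumed, and the irradiance-like signal is synthetic.

```python
# Scalar linear-Gaussian state-space model with a Kalman filter used for
# one-step-ahead prediction. Parameters a, q, r are assumed, not estimated.
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.95, 0.5, 1.0                   # state transition, process/obs noise variances
T = 200
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):                       # simulate a toy irradiance-like signal
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

m, P = 0.0, 1.0                             # filtered mean and variance
preds = np.zeros(T)
for t in range(T):
    m_pred, P_pred = a * m, a * a * P + q   # one-step-ahead prediction
    preds[t] = m_pred
    K = P_pred / (P_pred + r)               # Kalman gain
    m = m_pred + K * (y[t] - m_pred)        # measurement update
    P = (1 - K) * P_pred

print("RMSE of one-step predictions:", np.sqrt(np.mean((preds - y) ** 2)))
```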
Modelling obesity trends in Australia: unravelling the past and predicting the future.
Hayes, A J; Lung, T W C; Bauman, A; Howard, K
2017-01-01
Modelling is increasingly being used to predict the epidemiology of obesity progression and its consequences. The aims of this study were: (a) to present and validate a model for prediction of obesity among Australian adults and (b) to use the model to project the prevalence of obesity and severe obesity by 2025. Individual level simulation was combined with survey estimation techniques to model the changing population body mass index (BMI) distribution over time. The model input population was derived from a nationally representative survey in 1995, representing over 12 million adults. Simulations were run for 30 years. The model was validated retrospectively and then used to predict obesity and severe obesity by 2025 among different aged cohorts and at a whole population level. The changing BMI distribution over time was well predicted by the model, and the projected prevalence of weight status groups agreed with population level data in 2008, 2012 and 2014. The model predicts more growth in obesity among younger than older adult cohorts. Projections at a whole population level were that healthy weight will decline, overweight will remain steady, but obesity and severe obesity prevalence will continue to increase beyond 2016. Adult obesity prevalence was projected to increase from 19% in 1995 to 35% by 2025. Severe obesity (BMI>35), which was only around 5% in 1995, was projected to be 13% by 2025, two to three times the 1995 levels. The projected rise in obesity and severe obesity will have more substantial cost and healthcare system implications than in previous decades. Having a robust epidemiological model is key to predicting these long-term costs and health outcomes into the future.
Moore, John R; Watt, Michael S
2015-08-01
Wind is the major abiotic disturbance in New Zealand's planted forests, but little is known about how the risk of wind damage may be affected by future climate change. We linked a mechanistic wind damage model (ForestGALES) to an empirical growth model for radiata pine (Pinus radiata D. Don) and a process-based growth model (cenw) to predict the risk of wind damage under different future emissions scenarios and assumptions about the future wind climate. The cenw model was used to estimate site productivity for constant CO2 concentration at 1990 values and for assumed increases in CO2 concentration from current values to those expected during 2040 and 2090 under the B1 (low), A1B (mid-range) and A2 (high) emission scenarios. Stand development was modelled for different levels of site productivity, contrasting silvicultural regimes and sites across New Zealand. The risk of wind damage was predicted for each regime and emission scenario combination using the ForestGALES model. The sensitivity to changes in the intensity of the future wind climate was also examined. Results showed that increased tree growth rates under the different emissions scenarios had the greatest impact on the risk of wind damage. The increase in risk was greatest for stands growing at high stand density under the A2 emissions scenario with increased CO2 concentration. The increased productivity under this scenario resulted in increased tree height, without a corresponding increase in diameter, leading to more slender trees that were predicted to be at greater risk from wind damage. The risk of wind damage was further increased by the modest increases in the extreme wind climate that are predicted to occur. These results have implications for the development of silvicultural regimes that are resilient to climate change and also indicate that future productivity gains may be offset by greater losses from disturbances. © 2015 John Wiley & Sons Ltd.
Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.
2009-01-01
An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was obvious. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.
Predictive models of moth development
USDA-ARS?s Scientific Manuscript database
Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
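A minimal sketch of the degree-day accumulation such phenology models are built on, using the simple average method. The 10 °C lower developmental threshold and the temperature series are illustrative assumptions, not the cranberry fruitworm's published values.

```python
# Cumulative degree-days by the simple average method (assumed base temperature).
import numpy as np

def degree_days(t_min, t_max, base=10.0):
    """Daily DD = max(0, (Tmin + Tmax)/2 - base), accumulated over days."""
    daily_mean = (np.asarray(t_min, float) + np.asarray(t_max, float)) / 2.0
    return np.cumsum(np.maximum(0.0, daily_mean - base))

t_min = [8, 10, 12, 11, 14]
t_max = [18, 22, 25, 24, 27]
print(degree_days(t_min, t_max))   # cumulative degree-days per day
```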
Decadal climate predictions improved by ocean ensemble dispersion filtering
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
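A minimal sketch of the idea behind an ensemble dispersion filter as described above: at fixed intervals, each ensemble member's ocean state is pulled toward the ensemble mean. The relaxation weight, the toy "state" arrays, and the stand-in for model integration are all assumptions, not the Earth system model setup.

```python
# Nudge each member of a toy ensemble toward the ensemble mean at seasonal steps.
import numpy as np

rng = np.random.default_rng(2)
n_members, n_grid = 10, 1000
state = rng.normal(size=(n_members, n_grid))         # toy ensemble of ocean states

def dispersion_filter(state, alpha=0.5):
    """Pull every member a fraction alpha of the way toward the ensemble mean."""
    mean = state.mean(axis=0, keepdims=True)
    return (1.0 - alpha) * state + alpha * mean

for season in range(8):                               # e.g. two years of seasonal steps
    state += rng.normal(scale=0.1, size=state.shape)  # stand-in for model integration
    state = dispersion_filter(state)

print("ensemble spread after filtering:", state.std(axis=0).mean())
```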
Rose, Rachel H; Turner, David B; Neuhoff, Sibylle; Jamei, Masoud
2017-07-01
Following a meal, a transient increase in splanchnic blood flow occurs that can result in increased exposure to orally administered high-extraction drugs. Typically, physiologically based pharmacokinetic (PBPK) models have incorporated this increase in blood flow as a time-invariant fed/fasted ratio, but this approach is unable to explain the extent of increased drug exposure. A model for the time-varying increase in splanchnic blood flow following a moderate- to high-calorie meal (TV-QSplanch) was developed to describe the observed data for healthy individuals. This was integrated within a PBPK model and used to predict the contribution of increased splanchnic blood flow to the observed food effect for two orally administered high-extraction drugs, propranolol and ibrutinib. The model predicted geometric mean fed/fasted AUC and Cmax ratios of 1.24 and 1.29 for propranolol, which were within the range of published values (within 1.0-1.8-fold of values from eight clinical studies). For ibrutinib, the predicted geometric mean fed/fasted AUC and Cmax ratios were 2.0 and 1.84, respectively, which was within 1.1-fold of the reported fed/fasted AUC ratio but underestimated the reported Cmax ratio by up to 1.9-fold. For both drugs, the interindividual variability in fed/fasted AUC and Cmax ratios was underpredicted. This suggests that the postprandial change in splanchnic blood flow is a major mechanism of the food effect for propranolol and ibrutinib but is insufficient to fully explain the observations. The proposed model is anticipated to improve the prediction of food effect for high-extraction drugs, but should be considered with other mechanisms.
Dinosaur Fossils Predict Body Temperatures
Allen, Andrew P; Charnov, Eric L
2006-01-01
Perhaps the greatest mystery surrounding dinosaurs concerns whether they were endotherms, ectotherms, or some unique intermediate form. Here we present a model that yields estimates of dinosaur body temperature based on ontogenetic growth trajectories obtained from fossil bones. The model predicts that dinosaur body temperatures increased with body mass from approximately 25 °C at 12 kg to approximately 41 °C at 13,000 kg. The model also successfully predicts observed increases in body temperature with body mass for extant crocodiles. These results provide direct evidence that dinosaurs were reptiles that exhibited inertial homeothermy. PMID:16817695
Dinosaur fossils predict body temperatures.
Gillooly, James F; Allen, Andrew P; Charnov, Eric L
2006-07-01
Perhaps the greatest mystery surrounding dinosaurs concerns whether they were endotherms, ectotherms, or some unique intermediate form. Here we present a model that yields estimates of dinosaur body temperature based on ontogenetic growth trajectories obtained from fossil bones. The model predicts that dinosaur body temperatures increased with body mass from approximately 25 degrees C at 12 kg to approximately 41 degrees C at 13,000 kg. The model also successfully predicts observed increases in body temperature with body mass for extant crocodiles. These results provide direct evidence that dinosaurs were reptiles that exhibited inertial homeothermy.
Mathematical model to predict drivers' reaction speeds.
Long, Benjamin L; Gillespie, A Isabella; Tanaka, Martin L
2012-02-01
Mental distractions and physical impairments can increase the risk of accidents by affecting a driver's ability to control the vehicle. In this article, we developed a linear mathematical model that can be used to quantitatively predict drivers' performance over a variety of possible driving conditions. Predictions were not limited only to conditions tested, but also included linear combinations of these test conditions. Two groups of 12 participants were evaluated using a custom drivers' reaction speed testing device to evaluate the effect of cell phone talking, texting, and a fixed knee brace on the components of drivers' reaction speed. Cognitive reaction time was found to increase by 24% for cell phone talking and 74% for texting. The fixed knee brace increased musculoskeletal reaction time by 24%. These experimental data were used to develop a mathematical model to predict reaction speed for an untested condition, talking on a cell phone with a fixed knee brace. The model was verified by comparing the predicted reaction speed to measured experimental values from an independent test. The model predicted full braking time within 3% of the measured value. Although only a few influential conditions were evaluated, we present a general approach that can be expanded to include other types of distractions, impairments, and environmental conditions.
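A minimal sketch of the additive-effects idea described above: reaction-time components are scaled by the reported percentage increases and combined linearly to predict an untested condition (talking on a phone while wearing a knee brace). The baseline component times are illustrative assumptions, not the study's measured values.

```python
# Combine scaled reaction-time components to predict an untested condition.
baseline_cognitive_s = 0.50      # assumed baseline cognitive reaction time (s)
baseline_musculo_s = 0.30        # assumed baseline musculoskeletal reaction time (s)

cognitive_talking = baseline_cognitive_s * 1.24   # +24% for cell phone talking
musculo_brace = baseline_musculo_s * 1.24         # +24% for fixed knee brace

predicted_full_braking_s = cognitive_talking + musculo_brace
print(f"predicted reaction time, talking + brace: {predicted_full_braking_s:.3f} s")
```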
Testing mechanistic models of growth in insects.
Maino, James L; Kearney, Michael R
2015-11-22
Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory from that of many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models. Unlike the other mechanistic models, our growth model predicts energy reserves per biomass to increase with age, which implies a higher production efficiency and energy density of biomass in later instars. These predictions are tested against data compiled from the literature whereby it is confirmed that insects increase their production efficiency (by 24 percentage points) and energy density (by 4 J mg⁻¹) between hatching and the attainment of full size. The model suggests that insects achieve greater production efficiencies and enhanced growth rates by increasing specific assimilation and increasing energy reserves per biomass, which are less costly to maintain than structural biomass. Our findings illustrate how the explanatory and predictive power of mechanistic growth models comes from their grounding in underlying biological processes. © 2015 The Author(s).
Plant water potential improves prediction of empirical stomatal models.
Anderegg, William R L; Wolf, Adam; Arango-Velez, Adriana; Choat, Brendan; Chmura, Daniel J; Jansen, Steven; Kolb, Thomas; Li, Shan; Meinzer, Frederick; Pita, Pilar; Resco de Dios, Víctor; Sperry, John S; Wolfe, Brett T; Pacala, Stephen
2017-01-01
Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
Na, Okpin; Cai, Xiao-Chuan; Xi, Yunping
2017-01-01
The prediction of chloride-induced corrosion is very important because it governs the durable service life of concrete structures. To simulate the durability performance of concrete structures more realistically, complex scientific methods and more accurate material models are needed. In order to obtain robust predictions of corrosion initiation time and to resolve the thin layer from the concrete surface to the reinforcement, a large number of fine meshes is also used. The purpose of this study is to propose a more realistic physical model of coupled hygro-chemo transport and to implement the model with a parallel finite element algorithm. Furthermore, a microclimate model with environmental humidity and seasonal temperature is adopted. As a result, a prediction model for chloride diffusion under unsaturated conditions was developed with parallel algorithms and applied to an existing bridge to validate the model with multiple boundary conditions. As the number of processors increased, the computational time decreased up to an optimal number of processors; beyond that point, the computational time increased because the communication time between the processors increased. The framework of the present model can be extended to simulate multi-species de-icing salt ingress into non-saturated concrete structures in future work. PMID:28772714
Blecha, Kevin A.; Alldredge, Mat W.
2015-01-01
Animal space use studies using GPS collar technology are increasingly incorporating behavior based analysis of spatio-temporal data in order to expand inferences of resource use. GPS location cluster analysis is one such technique applied to large carnivores to identify the timing and location of feeding events. For logistical and financial reasons, researchers often implement predictive models for identifying these events. We present two separate improvements for predictive models that future practitioners can implement. Thus far, feeding prediction models have incorporated a small range of covariates, usually limited to spatio-temporal characteristics of the GPS data. Using GPS-collared cougars (Puma concolor), we include activity sensor data as an additional covariate to increase prediction performance of feeding presence/absence. Integral to the predictive modeling of feeding events is a ground-truthing component, in which GPS location clusters are visited by human observers to confirm the presence or absence of feeding remains. Failing to account for sources of ground-truthing false-absences can bias the number of predicted feeding events to be low. Thus we account for some ground-truthing error sources directly in the model with covariates and when applying model predictions. Accounting for these errors resulted in a 10% increase in the number of clusters predicted to be feeding events. Using a double-observer design, we show that the ground-truthing false-absence rate is relatively low (4%) using a search delay of 2–60 days. Overall, we provide two separate improvements to the GPS cluster analysis techniques that can be expanded upon and implemented in future studies interested in identifying feeding behaviors of large carnivores. PMID:26398546
NASA Technical Reports Server (NTRS)
Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris
2000-01-01
The normal impedance of perforated plate acoustic liners, including the effect of bias flow, was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes with the inclusion of end corrections to handle finite length effects. These models assumed incompressible and compressible flows, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%. The predicted resistance results showed better agreement with experiments for the higher percent open area samples. The agreement also tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used to predict impedance results. The model predictions were closer to the experimental resistance results for the 2% to 3% open area samples. The predictions tended to deteriorate as bias flow was increased. The reactance results were well predicted by the models for the higher percent open areas, but deteriorated as the percent open area was lowered (5%) and bias flow was increased. The incompressible model was fitted to the experimental database. The fit was performed using an optimization routine that found the optimal set of multiplication coefficients to the non-dimensional groups that minimized the least squares slope error between predictions and experiments. The result of the fit indicated that terms not associated with bias flow required a greater degree of correction than the terms associated with the bias flow. The fitted model improved agreement with experiments by nearly 15% for the low percent open area (5%) samples when compared to the unfitted model. The fitted model and the unfitted model performed equally well for the higher percent open areas (10% and 15%).
Modeling of a resonant heat engine
NASA Astrophysics Data System (ADS)
Preetham, B. S.; Anderson, M.; Richards, C.
2012-12-01
A resonant heat engine in which the piston assembly is replaced by a sealed elastic cavity is modeled and analyzed. A nondimensional lumped-parameter model is derived and used to investigate the factors that control the performance of the engine. The thermal efficiency predicted by the model agrees with that predicted from the relation for the Otto cycle based on compression ratio. The predictions show that for a fixed mechanical load, increasing the heat input results in increased efficiency. The output power and power density are shown to depend on the loading for a given heat input. The loading condition for maximum output power is different from that required for maximum power density.
Four Major South Korea's Rivers Using Deep Learning Models.
Lee, Sangmok; Lee, Donghyun
2018-06-24
Harmful algal blooms are an annual phenomenon that cause environmental damage, economic losses, and disease outbreaks. A fundamental solution to this problem is still lacking, thus, the best option for counteracting the effects of algal blooms is to improve advance warnings (predictions). However, existing physical prediction models have difficulties setting a clear coefficient indicating the relationship between each factor when predicting algal blooms, and many variable data sources are required for the analysis. These limitations are accompanied by high time and economic costs. Meanwhile, artificial intelligence and deep learning methods have become increasingly common in scientific research; attempts to apply the long short-term memory (LSTM) model to environmental research problems are increasing because the LSTM model exhibits good performance for time-series data prediction. However, few studies have applied deep learning models or LSTM to algal bloom prediction, especially in South Korea, where algal blooms occur annually. Therefore, we employed the LSTM model for algal bloom prediction in four major rivers of South Korea. We conducted short-term (one week) predictions by employing regression analysis and deep learning techniques on a newly constructed water quality and quantity dataset drawn from 16 dammed pools on the rivers. Three deep learning models (multilayer perceptron, MLP; recurrent neural network, RNN; and long short-term memory, LSTM) were used to predict chlorophyll-a, a recognized proxy for algal activity. The results were compared to those from OLS (ordinary least squares) regression analysis and actual data based on the root mean square error (RMSE). The LSTM model showed the highest prediction rate for harmful algal blooms and all deep learning models outperformed the OLS regression analysis. Our results reveal the potential for predicting algal blooms using LSTM and deep learning.
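A minimal sketch of a one-week-ahead chlorophyll-a predictor built with an LSTM, in the spirit of the study above. The synthetic series, the eight-week input window, the layer sizes and the training settings are illustrative assumptions, not the authors' dataset or configuration.

```python
# Windowed time-series regression with a small LSTM (TensorFlow/Keras).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)
series = rng.normal(size=500).cumsum()            # stand-in for a chlorophyll-a series
window = 8                                        # 8 past weeks -> next week

X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                                  # (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-50], y[:-50], epochs=5, verbose=0)

rmse = np.sqrt(np.mean((model.predict(X[-50:], verbose=0).ravel() - y[-50:]) ** 2))
print("hold-out RMSE:", rmse)
```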
Risk prediction model: Statistical and artificial neural network approach
NASA Astrophysics Data System (ADS)
Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim
2017-04-01
Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches and the development and validation process of risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.
Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; Kennedy, Angel; Joseph, David J; Denham, James W
2016-08-01
Most predictive models are not sufficiently validated for prospective use. We performed independent external validation of published predictive models for urinary dysfunctions following radiotherapy of the prostate. Multivariable models developed to predict atomised and generalised urinary symptoms, both acute and late, were considered for validation using a dataset representing 754 participants from the TROG 03.04-RADAR trial. Endpoints and features were harmonised to match the predictive models. The overall performance, calibration and discrimination were assessed. 14 models from four publications were validated. The discrimination of the predictive models in an independent external validation cohort, measured using the area under the receiver operating characteristic (ROC) curve, ranged from 0.473 to 0.695, generally lower than in internal validation. 4 models had ROC >0.6. Shrinkage was required for all predictive models' coefficients ranging from -0.309 (prediction probability was inverse to observed proportion) to 0.823. Predictive models which include baseline symptoms as a feature produced the highest discrimination. Two models produced a predicted probability of 0 and 1 for all patients. Predictive models vary in performance and transferability illustrating the need for improvements in model development and reporting. Several models showed reasonable potential but efforts should be increased to improve performance. Baseline symptoms should always be considered as potential features for predictive models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
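A minimal sketch of two of the external-validation diagnostics named above: discrimination measured as the area under the ROC curve, and a calibration slope (the "shrinkage" factor) obtained by regressing outcomes on the linear predictor of the published model. The published coefficients and the validation data here are synthetic assumptions, not the TROG 03.04-RADAR cohort.

```python
# External validation sketch: ROC AUC and calibration slope for a fixed model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(10)
n = 754
X = rng.normal(size=(n, 3))                       # harmonised features
published_coefs = np.array([0.8, -0.5, 0.3])      # assumed coefficients being validated
lin_pred = X @ published_coefs - 0.2
outcome = (rng.random(n) < 1 / (1 + np.exp(-0.6 * lin_pred))).astype(int)

print("AUC:", roc_auc_score(outcome, lin_pred))
cal = LogisticRegression().fit(lin_pred.reshape(-1, 1), outcome)
print("calibration slope (shrinkage):", cal.coef_[0, 0])   # <1 suggests overfitting
```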
Developing models for the prediction of hospital healthcare waste generation rate.
Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe
2016-01-01
An increase in the number of health institutions, along with the frequent use of disposable medical products, has contributed to an increase in the healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generation beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, there is no mathematical model developed for Ethiopian hospitals to predict the healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of the healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition and generation rate and to develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R² = 0.965), and a weak one with the number of outpatients (R² = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching and private). In these models, the numbers of inpatients and outpatients were revealed to be significant factors for the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies at different hospitals. Therefore, different models were developed based on the types of hospitals. © The Author(s) 2015.
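A minimal sketch of the kind of linear model described above: waste generation regressed on inpatient and outpatient counts. The numbers are synthetic placeholders, not the Ethiopian hospital data, and the coefficients carry no real-world meaning.

```python
# Ordinary least squares regression of waste mass on patient counts.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
inpatients = rng.integers(50, 300, size=60)
outpatients = rng.integers(100, 800, size=60)
waste_kg = 1.2 * inpatients + 0.1 * outpatients + rng.normal(0, 15, size=60)

X = np.column_stack([inpatients, outpatients])
model = LinearRegression().fit(X, waste_kg)
print("R^2:", model.score(X, waste_kg))
print("kg of waste per extra inpatient/outpatient:", model.coef_)
```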
Heflin, Laura E.; Makowsky, Robert; Taylor, J. Christopher; Williams, Michael B.; Lawrence, Addison L.; Watts, Stephen A.
2016-01-01
Juvenile Lytechinus variegatus (ca. 3.95± 0.54 g) were fed one of 10 formulated diets with different protein (ranging from 11- 43%) and carbohydrate (12 or 18%; brackets determined from previous studies) levels. Urchins (n= 16 per treatment) were fed a daily sub-satiation ration equivalent to 2.0% of average body weight for 10 weeks. Our objective was (1) to create predictive models of growth, production and efficiency outcomes and (2) to generate economic analysis models in relation to these dietary outcomes for juvenile L. variegatus held in culture. At dietary protein levels below ca. 30%, models for most growth and production outcomes predicted increased rates of growth and production among urchins fed diets containing 18% dietary carbohydrate levels as compared to urchins fed diets containing 12% dietary carbohydrate. For most outcomes, growth and production was predicted to increase with increasing level of dietary protein up to ca. 30%, after which, no further increase in growth and production were predicted. Likewise, dry matter production efficiency was predicted to increase with increasing protein level up to ca. 30%, with urchins fed diets with 18% carbohydrate exhibiting greater efficiency than those fed diets with 12% carbohydrate. The energetic cost of dry matter production was optimal at protein levels less than those required for maximal weight gain and gonad production, suggesting an increased energetic cost (decreased energy efficiency) is required to increase gonad production relative to somatic growth. Economic analysis models predict when cost of feed ingredients are low, the lowest cost per gram of wet weight gain will occur at 18% dietary carbohydrate and ca. 25- 30% dietary protein. In contrast, lowest cost per gram of wet weight gain will occur at 12% dietary carbohydrate and ca. 35- 40% dietary protein when feed ingredient costs are high or average. For both 18 and 12% levels of dietary carbohydrate, cost per gram of wet weight gain is predicted to be maximized at low dietary protein levels, regardless of feed ingredient costs. These models will compare dietary requirements and growth outcomes in relation to economic costs and provide insight for future commercialization of sea urchin aquaculture. PMID:28082753
Heflin, Laura E; Makowsky, Robert; Taylor, J Christopher; Williams, Michael B; Lawrence, Addison L; Watts, Stephen A
2016-10-01
Juvenile Lytechinus variegatus (ca. 3.95± 0.54 g) were fed one of 10 formulated diets with different protein (ranging from 11- 43%) and carbohydrate (12 or 18%; brackets determined from previous studies) levels. Urchins (n= 16 per treatment) were fed a daily sub-satiation ration equivalent to 2.0% of average body weight for 10 weeks. Our objective was (1) to create predictive models of growth, production and efficiency outcomes and (2) to generate economic analysis models in relation to these dietary outcomes for juvenile L. variegatus held in culture. At dietary protein levels below ca. 30%, models for most growth and production outcomes predicted increased rates of growth and production among urchins fed diets containing 18% dietary carbohydrate levels as compared to urchins fed diets containing 12% dietary carbohydrate. For most outcomes, growth and production was predicted to increase with increasing level of dietary protein up to ca. 30%, after which, no further increase in growth and production were predicted. Likewise, dry matter production efficiency was predicted to increase with increasing protein level up to ca. 30%, with urchins fed diets with 18% carbohydrate exhibiting greater efficiency than those fed diets with 12% carbohydrate. The energetic cost of dry matter production was optimal at protein levels less than those required for maximal weight gain and gonad production, suggesting an increased energetic cost (decreased energy efficiency) is required to increase gonad production relative to somatic growth. Economic analysis models predict when cost of feed ingredients are low, the lowest cost per gram of wet weight gain will occur at 18% dietary carbohydrate and ca. 25- 30% dietary protein. In contrast, lowest cost per gram of wet weight gain will occur at 12% dietary carbohydrate and ca. 35- 40% dietary protein when feed ingredient costs are high or average. For both 18 and 12% levels of dietary carbohydrate, cost per gram of wet weight gain is predicted to be maximized at low dietary protein levels, regardless of feed ingredient costs. These models will compare dietary requirements and growth outcomes in relation to economic costs and provide insight for future commercialization of sea urchin aquaculture.
Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C
2018-06-01
Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
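A minimal sketch of a G-BLUP-style predictor, the baseline method stress-tested above: a genomic relationship matrix built from centered marker genotypes is used as a precomputed kernel in ridge regression. The simulated markers, the toy additive trait and the regularization strength are assumptions, not the Drosophila data or the authors' implementation.

```python
# G-BLUP-style genomic prediction via a genomic relationship matrix (GRM).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(5)
n, p = 300, 2000
geno = rng.binomial(2, 0.3, size=(n, p)).astype(float)     # 0/1/2 marker matrix
Z = geno - geno.mean(axis=0)                                # center each marker
G = Z @ Z.T / p                                             # genomic relationship matrix
pheno = Z[:, :50].sum(axis=1) * 0.1 + rng.normal(size=n)    # toy additive trait

train, test = np.arange(250), np.arange(250, 300)
gblup = KernelRidge(alpha=1.0, kernel="precomputed")
gblup.fit(G[np.ix_(train, train)], pheno[train])
pred = gblup.predict(G[np.ix_(test, train)])
print("prediction accuracy (r):", np.corrcoef(pred, pheno[test])[0, 1])
```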
Aguiar, Fabio S; Almeida, Luciana L; Ruffino-Netto, Antonio; Kritski, Afranio Lineu; Mello, Fernanda Cq; Werneck, Guilherme L
2012-08-07
Tuberculosis (TB) remains a public health issue worldwide. The lack of specific clinical symptoms to diagnose TB makes the correct decision to admit patients to respiratory isolation a difficult task for the clinician. Isolation of patients without the disease is common and increases health costs. Decision models for the diagnosis of TB in patients attending hospitals can increase the quality of care and decrease costs, without the risk of hospital transmission. We present a model for predicting pulmonary TB in hospitalized patients in a high prevalence area in order to contribute to a more rational use of isolation rooms without increasing the risk of transmission. Cross sectional study of patients admitted to CFFH from March 2003 to December 2004. A classification and regression tree (CART) model was generated and validated. The area under the ROC curve (AUC), sensitivity, specificity, and positive and negative predictive values were used to evaluate the performance of the model. Validation of the model was performed with a different sample of patients admitted to the same hospital from January to December 2005. We studied 290 patients admitted with clinical suspicion of TB. Diagnosis was confirmed in 26.5% of them. Pulmonary TB was present in 83.7% of the patients with TB (62.3% with positive sputum smear) and HIV/AIDS was present in 56.9% of patients. The validated CART model showed sensitivity, specificity, positive predictive value and negative predictive value of 60.00%, 76.16%, 33.33%, and 90.55%, respectively. The AUC was 79.70%. The CART model developed for these hospitalized patients with clinical suspicion of TB had fair to good predictive performance for pulmonary TB. The most important variable for prediction of TB diagnosis was the chest radiograph result. Prospective validation is still necessary, but our model offers an alternative for decision making on whether to isolate patients with clinical suspicion of TB in tertiary health facilities in countries with limited resources.
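A minimal sketch of a CART-style classifier evaluated with the metrics reported above (sensitivity, specificity, AUC). The synthetic predictors, such as an abnormal chest radiograph flag, and the outcome prevalence are assumptions, not the hospital cohort.

```python
# Decision tree (CART) classifier with sensitivity, specificity and ROC AUC.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(6)
n = 290
xray_abnormal = rng.integers(0, 2, size=n)                  # assumed radiograph flag
cough_weeks = rng.integers(0, 12, size=n)                   # assumed symptom duration
tb = (rng.random(n) < 0.15 + 0.3 * xray_abnormal).astype(int)

X = np.column_stack([xray_abnormal, cough_weeks])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:200], tb[:200])

prob = tree.predict_proba(X[200:])[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(tb[200:], pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(tb[200:], prob))
```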
The Role of Multimodel Combination in Improving Streamflow Prediction
NASA Astrophysics Data System (ADS)
Arumugam, S.; Li, W.
2008-12-01
Model errors are an inevitable part of any prediction exercise. One approach that is currently gaining attention to reduce model errors is to optimally combine multiple models to develop improved predictions. The rationale behind this approach primarily lies in the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictability. In this study, we present a new approach to combine multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated by two different objective functions, to develop multimodel streamflow predictions. The performance of the multimodel predictions is compared with individual model predictions using correlation, root mean square error and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions the multimodel predictions result in improved predictions, we evaluate the proposed algorithm by testing it against streamflow generated from a known model (the 'abcd' model or the VIC model) with errors being homoscedastic or heteroscedastic. Results from the study show that streamflow simulated from individual models performed better than multimodels under almost no model error. Under increased model error, the multimodel consistently performed better than the single model prediction in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins in North Carolina as well as in two arid basins in Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach better predicts the observed streamflow in comparison to single model predictions.
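A minimal sketch of the simplest form of the weighting idea: multimodel weights obtained by regressing observed streamflow on the individual model simulations, evaluated with a Nash-Sutcliffe efficiency. This is a stand-in for the more elaborate predictor-state-conditional weighting described above; the synthetic "abcd"-like and VIC-like simulations are assumptions.

```python
# Least-squares multimodel combination of two streamflow simulations.
import numpy as np

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 50.0, size=120)                   # observed monthly streamflow
sim_abcd = obs + rng.normal(0, 20, size=120)           # 'abcd'-like simulation
sim_vic = 0.8 * obs + rng.normal(0, 15, size=120)      # VIC-like simulation

A = np.column_stack([sim_abcd, sim_vic])
weights, *_ = np.linalg.lstsq(A, obs, rcond=None)
multimodel = A @ weights

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

print("weights:", weights)
print("NSE single (abcd, VIC):", nse(sim_abcd, obs), nse(sim_vic, obs))
print("NSE multimodel:", nse(multimodel, obs))
```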
Approximating prediction uncertainty for random forest regression models
John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne
2016-01-01
Machine learning approaches such as random forest have increased for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
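A minimal sketch of one common way to approximate per-prediction uncertainty for a random forest regressor: use the spread of the individual trees' predictions. This illustrates the general idea only and is not presented as the specific method of the paper above; data and hyperparameters are assumptions.

```python
# Per-point uncertainty proxy from the spread of individual tree predictions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
X = rng.uniform(-3, 3, size=(400, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.2, size=400)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:300], y[:300])

per_tree = np.stack([t.predict(X[300:]) for t in rf.estimators_])  # (trees, samples)
mean_pred = per_tree.mean(axis=0)
std_pred = per_tree.std(axis=0)        # crude uncertainty estimate per prediction
print("first five predictions +/- spread:")
for m, s in zip(mean_pred[:5], std_pred[:5]):
    print(f"  {m:.2f} +/- {s:.2f}")
```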
NASA Astrophysics Data System (ADS)
Zho, Chen-Chen; Farr, Erik P.; Glover, William J.; Schwartz, Benjamin J.
2017-08-01
We use one-electron non-adiabatic mixed quantum/classical simulations to explore the temperature dependence of both the ground-state structure and the excited-state relaxation dynamics of the hydrated electron. We compare the results for both the traditional cavity picture and a more recent non-cavity model of the hydrated electron and make definite predictions for distinguishing between the different possible structural models in future experiments. We find that the traditional cavity model shows no temperature-dependent change in structure at constant density, leading to a predicted resonance Raman spectrum that is essentially temperature-independent. In contrast, the non-cavity model predicts a blue-shift in the hydrated electron's resonance Raman O-H stretch with increasing temperature. The lack of a temperature-dependent ground-state structural change of the cavity model also leads to a prediction of little change with temperature of both the excited-state lifetime and hot ground-state cooling time of the hydrated electron following photoexcitation. This is in sharp contrast to the predictions of the non-cavity model, where both the excited-state lifetime and hot ground-state cooling time are expected to decrease significantly with increasing temperature. These simulation-based predictions should be directly testable by the results of future time-resolved photoelectron spectroscopy experiments. Finally, the temperature-dependent differences in predicted excited-state lifetime and hot ground-state cooling time of the two models also lead to different predicted pump-probe transient absorption spectroscopy of the hydrated electron as a function of temperature. We perform such experiments and describe them in Paper II [E. P. Farr et al., J. Chem. Phys. 147, 074504 (2017)], and find changes in the excited-state lifetime and hot ground-state cooling time with temperature that match well with the predictions of the non-cavity model. In particular, the experiments reveal stimulated emission from the excited state with an amplitude and lifetime that decreases with increasing temperature, a result in contrast to the lack of stimulated emission predicted by the cavity model but in good agreement with the non-cavity model. Overall, until ab initio calculations describing the non-adiabatic excited-state dynamics of an excess electron with hundreds of water molecules at a variety of temperatures become computationally feasible, the simulations presented here provide a definitive route for connecting the predictions of cavity and non-cavity models of the hydrated electron with future experiments.
Corron, Louise; Marchal, François; Condemi, Silvana; Chaumoître, Kathia; Adalian, Pascal
2017-01-01
Juvenile age estimation methods used in forensic anthropology generally lack methodological consistency and/or statistical validity. Considering this, a standard approach using nonparametric Multivariate Adaptive Regression Splines (MARS) models was tested to predict age from iliac biometric variables of male and female juveniles from Marseilles, France, aged 0-12 years. Models using unidimensional (length and width) and bidimensional iliac data (module and surface) were constructed on a training sample of 176 individuals and validated on an independent test sample of 68 individuals. Results show that MARS prediction models using iliac width, module and area give overall better and statistically valid age estimates. These models integrate punctual nonlinearities of the relationship between age and osteometric variables. By constructing valid prediction intervals whose size increases with age, MARS models take into account the normal increase of individual variability. MARS models can qualify as a practical and standardized approach for juvenile age estimation. © 2016 American Academy of Forensic Sciences.
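A minimal sketch of the hinge-function idea behind MARS-style regression: age is regressed on an iliac measurement through basis functions max(0, x - k) fitted by least squares. Real MARS selects knots and interactions adaptively with a forward/backward procedure; here the knots, the variable name and the synthetic data are all illustrative assumptions, and the sketch is a simplified stand-in rather than the authors' models.

```python
# Piecewise-linear (hinge basis) regression of age on a skeletal measurement.
import numpy as np

rng = np.random.default_rng(9)
ilium_width_mm = rng.uniform(30, 120, size=200)                       # assumed variable
age_years = 0.12 * np.maximum(0, ilium_width_mm - 35) ** 1.1 + rng.normal(0, 0.8, 200)

def hinge_basis(x, knots):
    """Design matrix: intercept, x, and max(0, x - k) for each knot."""
    cols = [np.ones_like(x), x] + [np.maximum(0.0, x - k) for k in knots]
    return np.column_stack(cols)

knots = [50.0, 80.0, 100.0]                                           # fixed, not adaptive
X = hinge_basis(ilium_width_mm, knots)
coef, *_ = np.linalg.lstsq(X, age_years, rcond=None)

new_width = np.array([45.0, 75.0, 110.0])
print("predicted ages:", hinge_basis(new_width, knots) @ coef)
```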
Sperm competition games: a general model for precopulatory male-male competition.
Parker, Geoff A; Lessells, Catherine M; Simmons, Leigh W
2013-01-01
Reproductive males face a trade-off between expenditure on precopulatory male-male competition--increasing the number of females that they secure as mates--and sperm competition--increasing their fertilization success with those females. Previous sperm allocation models have focused on scramble competition in which males compete by searching for mates and the number of matings rises linearly with precopulatory expenditure. However, recent studies have emphasized contest competition involving precopulatory expenditure on armaments, where winning contests may be highly dependent on marginal increases in relative armament level. Here, we develop a general model of sperm allocation that allows us to examine the effect of all forms of precopulatory competition on sperm allocation patterns. The model predicts that sperm allocation decreases if either the "mate-competition loading," a, or the number of males competing for each mating, M, increases. Other predictions remain unchanged from previous models: (i) expenditure per ejaculate should increase and then decrease, and (ii) total postcopulatory expenditure should increase, as the level of sperm competition increases. A negative correlation between a and M is biologically plausible, and may buffer deviations from the previous models. There is some support for our predictions from comparative analyses across dung beetle species and frog populations. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
USDA-ARS?s Scientific Manuscript database
Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates for selection. Originally these models were developed without considering genotype × environment interaction (GE). Several authors have proposed extensions of the canonical GS model that accomm...
Immunogenicity of therapeutic proteins: the use of animal models.
Brinks, Vera; Jiskoot, Wim; Schellekens, Huub
2011-10-01
Immunogenicity of therapeutic proteins lowers patient well-being and drastically increases therapeutic costs. Preventing immunogenicity is an important issue to consider when developing novel therapeutic proteins and applying them in the clinic. Animal models are increasingly used to study immunogenicity of therapeutic proteins. They are employed as predictive tools to assess different aspects of immunogenicity during drug development and have become vital in studying the mechanisms underlying immunogenicity of therapeutic proteins. However, the use of animal models needs critical evaluation. Because of species differences, predictive value of such models is limited, and mechanistic studies can be restricted. This review addresses the suitability of animal models for immunogenicity prediction and summarizes the insights in immunogenicity that they have given so far.
Brightness perception of unrelated self-luminous colors.
Withouck, Martijn; Smet, Kevin A G; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Koenderink, Jan; Hanselaer, Peter
2013-06-01
The perception of brightness of unrelated self-luminous colored stimuli of the same luminance has been investigated. The Helmholtz-Kohlrausch (H-K) effect, i.e., an increase in brightness perception due to an increase in saturation, is clearly observed. This brightness perception is compared with the calculated brightness according to six existing vision models, color appearance models, and models based on the concept of equivalent luminance. Although these models included the H-K effect and half of them were developed to work with unrelated colors, none of the models seemed to be able to fully predict the perceived brightness. A tentative solution to increase the prediction accuracy of the color appearance model CAM97u, developed by Hunt, is presented.
Validation of a probabilistic post-fire erosion model
Pete Robichaud; William J. Elliot; Sarah A. Lewis; Mary Ellen Miller
2016-01-01
Post-fire increases of runoff and erosion often occur and land managers need tools to be able to project the increased risk. The Erosion Risk Management Tool (ERMiT) uses the Water Erosion Prediction Project (WEPP) model as the underlying processor. ERMiT predicts the probability of a given amount of hillslope sediment delivery from a single rainfall or...
McGowan, Conor P.; Allan, Nathan; Servoss, Jeff; Hedwall, Shaula J.; Wooldridge, Brian
2017-01-01
Assessment of a species' status is a key part of management decision making for endangered and threatened species under the U.S. Endangered Species Act. Predicting the future state of the species is an essential part of species status assessment, and projection models can play an important role in developing predictions. We built a stochastic simulation model that incorporated parametric and environmental uncertainty to predict the probable future status of the Sonoran desert tortoise in the southwestern United States and North Central Mexico. Sonoran desert tortoise was a Candidate species for listing under the Endangered Species Act, and decision makers wanted to use model predictions in their decision making process. The model accounted for future habitat loss and possible effects of climate change induced droughts to predict future population growth rates, abundances, and quasi-extinction probabilities. Our model predicts that the population will likely decline over the next few decades, but there is very low probability of quasi-extinction less than 75 years into the future. Increases in drought frequency and intensity may increase extinction risk for the species. Our model helped decision makers predict and characterize uncertainty about the future status of the species in their listing decision. We incorporated complex ecological processes (e.g., climate change effects on tortoises) in transparent and explicit ways tailored to support decision making processes related to endangered species.
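A minimal sketch of a stochastic projection with parametric and environmental uncertainty, in the spirit of the species status model above: a mean growth rate is drawn per replicate (parametric uncertainty), yearly growth varies around it (environmental variation), and quasi-extinction probability is the fraction of replicates falling below a threshold. Every number here is an assumption, not a tortoise parameter.

```python
# Monte Carlo population projection with two layers of uncertainty.
import numpy as np

rng = np.random.default_rng(11)
n_reps, n_years = 1000, 75
start_n, quasi_ext = 2000, 200

below = 0
for _ in range(n_reps):
    mean_r = rng.normal(-0.005, 0.01)        # parametric uncertainty in mean growth
    n = start_n
    for _ in range(n_years):
        r = rng.normal(mean_r, 0.05)         # environmental (year-to-year) variation
        n *= np.exp(r)
    below += n < quasi_ext

print("quasi-extinction probability:", below / n_reps)
```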
Intrinsic dimensionality predicts the saliency of natural dynamic scenes.
Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt
2012-06-01
Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.
van Strien, Maarten J; Keller, Daniela; Holderegger, Rolf; Ghazoul, Jaboury; Kienast, Felix; Bolliger, Janine
2014-03-01
For conservation managers, it is important to know whether landscape changes lead to increasing or decreasing gene flow. Although the discipline of landscape genetics assesses the influence of landscape elements on gene flow, no studies have yet used landscape-genetic models to predict gene flow resulting from landscape change. A species that has already been severely affected by landscape change is the large marsh grasshopper (Stethophyma grossum), which inhabits moist areas in fragmented agricultural landscapes in Switzerland. From transects drawn between all population pairs within maximum dispersal distance (< 3 km), we calculated several measures of landscape composition as well as some measures of habitat configuration. Additionally, a complete sampling of all populations in our study area allowed incorporating measures of population topology. These measures together with the landscape metrics formed the predictor variables in linear models with gene flow as response variable (FST and mean pairwise assignment probability). With a modified leave-one-out cross-validation approach, we selected the model with the highest predictive accuracy. With this model, we predicted gene flow under several landscape-change scenarios, which simulated construction, rezoning or restoration projects, and the establishment of a new population. For some landscape-change scenarios, significant increase or decrease in gene flow was predicted, while for others little change was forecast. Furthermore, we found that the measures of population topology strongly increase model fit in landscape genetic analysis. This study demonstrates the use of predictive landscape-genetic models in conservation and landscape planning.
Ribeiro, Ilda Patrícia; Caramelo, Francisco; Esteves, Luísa; Menoita, Joana; Marques, Francisco; Barroso, Leonor; Miguéis, Jorge; Melo, Joana Barbosa; Carreira, Isabel Marques
2017-10-24
The head and neck squamous cell carcinoma (HNSCC) population consists mainly of high-risk for recurrence and locally advanced stage patients. Increased knowledge of the HNSCC genomic profile can improve early diagnosis and treatment outcomes. The development of models to identify consistent genomic patterns that distinguish HNSCC patients that will recur and/or develop metastasis after treatment is of utmost importance to decrease mortality and improve survival rates. In this study, we used array comparative genomic hybridization data from HNSCC patients to implement a robust model to predict HNSCC recurrence/metastasis. This predictive model showed a good accuracy (>80%) and was validated in an independent population from TCGA data portal. This predictive genomic model comprises chromosomal regions from 5p, 6p, 8p, 9p, 11q, 12q, 15q and 17p, where several upstream and downstream members of signaling pathways that lead to an increase in cell proliferation and invasion are mapped. The introduction of genomic predictive models in clinical practice might contribute to a more individualized clinical management of the HNSCC patients, reducing recurrences and improving patients' quality of life. The power of this genomic model to predict the recurrence and metastases development should be evaluated in other HNSCC populations.
Mathewson, Paul D; Moyer-Horner, Lucas; Beever, Erik A; Briscoe, Natalie J; Kearney, Michael; Yahn, Jeremiah M; Porter, Warren P
2017-03-01
How climate constrains species' distributions through time and space is an important question in the context of conservation planning for climate change. Despite increasing awareness of the need to incorporate mechanism into species distribution models (SDMs), mechanistic modeling of endotherm distributions remains limited in this literature. Using the American pika (Ochotona princeps) as an example, we present a framework whereby mechanism can be incorporated into endotherm SDMs. Pika distribution has repeatedly been found to be constrained by warm temperatures, so we used Niche Mapper, a mechanistic heat-balance model, to convert macroclimate data to pika-specific surface activity time in summer across the western United States. We then explored the difference between using a macroclimate predictor (summer temperature) and using a mechanistic predictor (predicted surface activity time) in SDMs. Both approaches accurately predicted pika presences in current and past climate regimes. However, the activity models predicted 8-19% less habitat loss in response to annual temperature increases of ~3-5 °C predicted in the region by 2070, suggesting that pikas may be able to buffer some climate change effects through behavioral thermoregulation that can be captured by mechanistic modeling. Incorporating mechanism added value to the modeling by providing increased confidence in areas where different modeling approaches agreed and providing a range of outcomes in areas of disagreement. It also provided a more proximate variable relating animal distribution to climate, allowing investigations into how unique habitat characteristics and intraspecific phenotypic variation may allow pikas to exist in areas outside those predicted by generic SDMs. Only a small number of easily obtainable data are required to parameterize this mechanistic model for any endotherm, and its use can improve SDM predictions by explicitly modeling a widely applicable direct physiological effect: climate-imposed restrictions on activity. This more complete understanding is necessary to inform climate adaptation actions, management strategies, and conservation plans. © 2016 John Wiley & Sons Ltd.
Preclinical models used for immunogenicity prediction of therapeutic proteins.
Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim
2013-07-01
All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and in rare cases to serious and sometimes life threatening side-effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge on the immune mechanisms underlying immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize aspects of immunogenicity that these models predict and explore the merits and the limitations of each of the models.
Sink fast and swim harder! Round-trip cost-of-transport for buoyant divers.
Miller, Patrick J O; Biuw, Martin; Watanabe, Yuuki Y; Thompson, Dave; Fedak, Mike A
2012-10-15
Efficient locomotion between prey resources at depth and oxygen at the surface is crucial for breath-hold divers to maximize time spent in the foraging layer, and thereby net energy intake rates. The body density of divers, which changes with body condition, determines the apparent weight (buoyancy) of divers, which may affect round-trip cost-of-transport (COT) between the surface and depth. We evaluated alternative predictions from external-work and actuator-disc theory of how non-neutral buoyancy affects round-trip COT to depth, and the minimum COT speed for steady-state vertical transit. Not surprisingly, the models predict that one-way COT decreases (increases) when buoyancy aids (hinders) one-way transit. At extreme deviations from neutral buoyancy, gliding at terminal velocity is the minimum COT strategy in the direction aided by buoyancy. In the transit direction hindered by buoyancy, the external-work model predicted that minimum COT speeds would not change at greater deviations from neutral buoyancy, but minimum COT speeds were predicted to increase under the actuator disc model. As previously documented for grey seals, we found that vertical transit rates of 36 elephant seals increased in both directions as body density deviated from neutral buoyancy, indicating that actuator disc theory may more closely predict the power requirements of divers affected by gravity than an external work model. For both models, minor deviations from neutral buoyancy did not affect minimum COT speed or round-trip COT itself. However, at body-density extremes, both models predict that savings in the aided direction do not fully offset the increased COT imposed by the greater thrusting required in the hindered direction.
Li, Xuehua; Zhao, Wenxing; Li, Jing; Jiang, Jingqiu; Chen, Jianji; Chen, Jingwen
2013-08-01
To assess the persistence and fate of volatile organic compounds in the troposphere, the rate constants for the reaction with ozone (kO3) are needed. As kO3 values are only available for hundreds of compounds, and experimental determination of kO3 is costly and time-consuming, it is of importance to develop predictive models on kO3. In this study, a total of 379 log kO3 values at different temperatures were used to develop and validate a model for the prediction of kO3, based on quantum chemical descriptors, Dragon descriptors and structural fragments. Molecular descriptors were screened by stepwise multiple linear regression, and the model was constructed by partial least-squares regression. The cross-validation coefficient Q²cum of the model is 0.836, and the external validation coefficient Q²ext is 0.811, indicating that the model has high robustness and good predictive performance. The most significant descriptor explaining log kO3 is the BELm2 descriptor with connectivity information weighted atomic masses. kO3 increases with increasing BELm2, and decreases with increasing ionization potential. The applicability domain of the proposed model was visualized by the Williams plot. The developed model can be used to predict kO3 at different temperatures for a wide range of organic chemicals, including alkenes, cycloalkenes, haloalkenes, alkynes, oxygen-containing compounds, nitrogen-containing compounds (except primary amines) and aromatic compounds. Copyright © 2013 Elsevier Ltd. All rights reserved.
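A minimal sketch of the QSPR workflow described here, descriptor screening followed by partial least-squares regression on log kO3, is given below. The descriptor matrix is synthetic, scikit-learn's forward selection stands in for the stepwise multiple linear regression used by the authors, and the cross-validated R² is only a rough stand-in for Q²cum.

```python
# Sketch of a QSPR workflow: descriptor screening followed by PLS regression.
# Synthetic descriptors; not the Dragon/quantum-chemical descriptors of the study.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n, p = 379, 40                       # 379 log kO3 values, 40 candidate descriptors
X = rng.normal(size=(n, p))
log_ko3 = X[:, 0] - 0.5 * X[:, 3] + 0.3 * X[:, 7] + rng.normal(scale=0.3, size=n)

# Step 1: screen descriptors (forward selection in place of stepwise MLR).
screen = SequentialFeatureSelector(LinearRegression(), n_features_to_select=8,
                                   direction="forward", cv=5)
X_sel = screen.fit_transform(X, log_ko3)

# Step 2: fit a PLS model on the retained descriptors.
pls = PLSRegression(n_components=3).fit(X_sel, log_ko3)

# Cross-validated predictive coefficient (rough stand-in for Q2cum).
q2 = r2_score(log_ko3, cross_val_predict(pls, X_sel, log_ko3, cv=5))
print(f"fitted R2 = {pls.score(X_sel, log_ko3):.3f}, cross-validated Q2 = {q2:.3f}")
```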
Mauya, Ernest William; Hansen, Endre Hofstad; Gobakken, Terje; Bollandsås, Ole Martin; Malimbwi, Rogers Ernest; Næsset, Erik
2015-12-01
Airborne laser scanning (ALS) has recently emerged as a promising tool to acquire auxiliary information for improving aboveground biomass (AGB) estimation in sample-based forest inventories. Under design-based and model-assisted inferential frameworks, the estimation relies on a model that relates the auxiliary ALS metrics to AGB estimated on ground plots. The size of the field plots has been identified as one source of model uncertainty because of the so-called boundary effects, which increase with decreasing plot size. Recent research in tropical forests has aimed to quantify the boundary effects on model prediction accuracy, but evidence of the consequences for the final AGB estimates is lacking. In this study we analyzed the effect of field plot size on model prediction accuracy and its implications when used in a model-assisted inferential framework. The results showed that the prediction accuracy of the model improved as the plot size increased. The adjusted R2 increased from 0.35 to 0.74, while the relative root mean square error decreased from 63.6 to 29.2%. Indicators of boundary effects were identified and confirmed to have significant effects on the model residuals. Variance estimates of model-assisted mean AGB relative to corresponding variance estimates of pure field-based AGB decreased with increasing plot size in the range from 200 to 3000 m2. The variance ratio of field-based estimates relative to model-assisted variance ranged from 1.7 to 7.7. This study showed that the relative improvement in precision of AGB estimation with increasing field-plot size was greater for an ALS-assisted inventory than for a pure field-based inventory.
LaBeau, Meredith B.; Robertson, Dale M.; Mayer, Alex S.; Pijanowski, Bryan C.; Saad, David A.
2013-01-01
Increased phosphorus (P) loadings threaten the health of the world’s largest freshwater resource, the Laurentian Great Lakes (GL). To understand the linkages between land use and P delivery, we coupled two spatially explicit models, the landscape-scale SPARROW P fate and transport watershed model and the Land Transformation Model (LTM) land use change model, to predict future P export from nonpoint and point sources caused by changes in land use. According to LTM predictions over the period 2010–2040, the GL region of the U.S. may experience a doubling of urbanized areas and agricultural areas may increase by 10%, due to biofuel feedstock cultivation. These land use changes are predicted to increase P loadings from the U.S. side of the GL basin by 3.5–9.5%, depending on the Lake watershed and development scenario. The exception is Lake Ontario, where loading is predicted to decrease by 1.8% for one scenario, due to population losses in the drainage area. Overall, urban expansion is estimated to increase P loadings by 3.4%. Agricultural expansion associated with predicted biofuel feedstock cultivation is predicted to increase P loadings by an additional 2.4%. Watersheds that export P most efficiently and thus are the most vulnerable to increases in P sources tend to be found along southern Lake Ontario, southeastern Lake Erie, western Lake Michigan, and southwestern Lake Superior where watershed areas are concentrated along the coastline with shorter flow paths. In contrast, watersheds with high soil permeabilities, fractions of land underlain by tile drains, and long distances to the GL are less vulnerable.
Load Carriage Capacity of the Dismounted Combatant - A Commanders’ Guide
2012-10-01
predictive model has been used throughout this document to predict the physiological burden (i.e., energy cost) of representative load carriage...scenarios. As a general guide, this model indicates that a 10 kg increase in external load is metabolically equivalent (i.e., energy cost) to an increase...larger increases in energy cost for a load carriage task. The multi-factorial nature of human load carriage capacity makes it difficult to set
Sebok, Angelia; Wickens, Christopher D
2017-03-01
The objectives were to (a) implement theoretical perspectives regarding human-automation interaction (HAI) into model-based tools to assist designers in developing systems that support effective performance and (b) conduct validations to assess the ability of the models to predict operator performance. Two key concepts in HAI, the lumberjack analogy and black swan events, have been studied extensively. The lumberjack analogy describes the effects of imperfect automation on operator performance. In routine operations, an increased degree of automation supports performance, but in failure conditions, increased automation results in more significantly impaired performance. Black swans are the rare and unexpected failures of imperfect automation. The lumberjack analogy and black swan concepts have been implemented into three model-based tools that predict operator performance in different systems. These tools include a flight management system, a remotely controlled robotic arm, and an environmental process control system. Each modeling effort included a corresponding validation. In one validation, the software tool was used to compare three flight management system designs, which were ranked in the same order as predicted by subject matter experts. The second validation compared model-predicted operator complacency with empirical performance in the same conditions. The third validation compared model-predicted and empirically determined time to detect and repair faults in four automation conditions. The three model-based tools offer useful ways to predict operator performance in complex systems. The three tools offer ways to predict the effects of different automation designs on operator performance.
Predictive modeling of surimi cake shelf life at different storage temperatures
NASA Astrophysics Data System (ADS)
Wang, Yatong; Hou, Yanhua; Wang, Quanfu; Cui, Bingqing; Zhang, Xiangyu; Li, Xuepeng; Li, Yujin; Liu, Yuanping
2017-04-01
An Arrhenius model for shelf-life prediction based on the TBARS index was established in this study. The results showed that AV, POV, COV, and TBARS changed significantly as temperature increased, and the reaction rate constant k was obtained from a first-order reaction kinetics model. A secondary model was then fitted based on the Arrhenius equation. TBARS gave the best fitting accuracy in both the primary and secondary model fits (R2 ≥ 0.95). The verification test indicated that the relative error between the shelf life predicted by the model and the actual value was within ±10%, suggesting the model could predict the shelf life of surimi cake.
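A minimal sketch of the two-stage fitting described above, first-order kinetics at each storage temperature followed by an Arrhenius secondary model, is shown below. All TBARS values, temperatures, and the acceptability limit are invented for illustration; they are not the study's data.

```python
# Sketch: primary first-order kinetics + secondary Arrhenius model for shelf life.
# All numbers are illustrative, not the surimi-cake data from the study.
import numpy as np

R = 8.314  # J mol-1 K-1
days = np.array([0, 5, 10, 15, 20], dtype=float)
tbars = {  # assumed TBARS values (mg MDA/kg) at three storage temperatures (degC)
    4:  np.array([0.20, 0.24, 0.29, 0.35, 0.42]),
    15: np.array([0.20, 0.29, 0.42, 0.61, 0.88]),
    25: np.array([0.20, 0.37, 0.68, 1.25, 2.30]),
}

# Primary model: ln(TBARS) = ln(TBARS0) + k * t, so the slope gives k at each T.
ks, inv_T = [], []
for temp_c, y in tbars.items():
    k, _ = np.polyfit(days, np.log(y), 1)
    ks.append(k)
    inv_T.append(1.0 / (temp_c + 273.15))

# Secondary model: ln k = ln A - Ea / (R T).
slope, intercept = np.polyfit(inv_T, np.log(ks), 1)
Ea, lnA = -slope * R, intercept
print(f"Ea = {Ea / 1000:.1f} kJ/mol (illustrative)")

# Shelf life at a given temperature = time for TBARS to reach an assumed limit.
def shelf_life(temp_c, tbars0=0.20, limit=1.0):
    k = np.exp(lnA - Ea / (R * (temp_c + 273.15)))
    return np.log(limit / tbars0) / k

print(f"predicted shelf life at 10 degC = {shelf_life(10):.1f} days")
```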
Tropical and Extratropical Cyclone Damages under Climate Change
NASA Astrophysics Data System (ADS)
Ranson, M.; Kousky, C.; Ruth, M.; Jantarasami, L.; Crimmins, A.; Tarquinio, L.
2014-12-01
This paper provides the first quantitative synthesis of the rapidly growing literature on future tropical and extratropical cyclone losses under climate change. We estimate a probability distribution for the predicted impact of changes in global surface air temperatures on future storm damages, using an ensemble of 296 estimates of the temperature-damage relationship from twenty studies. Our analysis produces three main empirical results. First, we find strong but not conclusive support for the hypothesis that climate change will cause damages from tropical cyclones and wind storms to increase, with most models (84 and 92 percent, respectively) predicting higher future storm damages due to climate change. Second, there is substantial variation in projected changes in losses across regions. Potential changes in damages are greatest in the North Atlantic basin, where the multi-model average predicts that a 2.5°C increase in global surface air temperature would cause hurricane damages to increase by 62 percent. The ensemble predictions for Western North Pacific tropical cyclones and European wind storms (extratropical cyclones) are approximately one third of that magnitude. Finally, our analysis shows that existing models of storm damages under climate change generate a wide range of predictions, ranging from moderate decreases to very large increases in losses.
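The synthesis step, turning an ensemble of temperature-damage estimates into a probability distribution of projected losses, can be illustrated with a simple resampling sketch. The per-estimate sensitivities below are random placeholders, not the 296 estimates compiled in the paper.

```python
# Sketch: bootstrap an ensemble of temperature-damage estimates into a
# distribution of projected damage changes. Placeholder numbers only.
import numpy as np

rng = np.random.default_rng(2)
# Assumed per-estimate sensitivities: % change in damages per degC of warming.
sensitivity = rng.normal(loc=20.0, scale=15.0, size=296)

warming = 2.5  # degC increase in global surface air temperature
boot = rng.choice(sensitivity, size=(10_000, sensitivity.size), replace=True)
damage_change = boot.mean(axis=1) * warming  # ensemble-mean % change per bootstrap draw

lo, med, hi = np.percentile(damage_change, [5, 50, 95])
share_increasing = (sensitivity > 0).mean()
print(f"median change = {med:.0f}% (90% interval {lo:.0f}% to {hi:.0f}%)")
print(f"fraction of estimates predicting higher damages: {share_increasing:.0%}")
```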
NASA Astrophysics Data System (ADS)
Knouft, J.; Chu, M. L.
2013-12-01
Natural flow regimes in aquatic systems sustain biodiversity and provide support for basic ecological processes. Nevertheless, the hydrology of aquatic systems is heavily impacted by human activities, including land use changes associated with urbanization. Small increases in urban expansion can greatly increase surface runoff while decreasing infiltration. These changes in land use can also affect aquifer recharge and alter streamflow, thus impacting water quality, aquatic biodiversity, and ecosystem productivity. However, there are few studies predicting the effects of various levels of urbanization on flow regimes and the subsequent impacts of these flow alterations on ecosystem endpoints at the watershed scale. We quantified the potential effects of varying degrees of urban expansion on the discharge, velocity, and water depth in the Big River watershed in eastern Missouri using a physically-based watershed model, MIKE-SHE, and a 1D hydrodynamic river model, MIKE-11. Five land cover scenarios corresponding to increasing levels of urban expansion were used to determine the sensitivity of flow in the Big River watershed to increasing urbanization. Results indicate that the frequency of low-flow events decreases as urban expansion increases, while the frequency of average and high-flow events increases as urbanization increases. We used current estimates of flow from the MIKE-SHE model to predict variation in fish species richness at 44 sites across the watershed based on standardized fish collections from each site. This model was then used with flow estimates from the urban expansion hydrological models to predict potential changes in fish species richness as urban areas increase. Responses varied among sites, with some areas predicted to experience increases in species richness while others are predicted to experience decreases in species richness. Taxonomic identity of species also appeared to influence results, with the number of species of Cyprinidae (minnows) expected to increase across the watershed, while the number of species of Centrarchidae (bass and sunfish) is expected to decrease across the watershed.
Genomic selection in a commercial winter wheat population.
He, Sang; Schulthess, Albert Wilhelm; Mirdita, Vilson; Zhao, Yusheng; Korzun, Viktor; Bothe, Reiner; Ebmeyer, Erhard; Reif, Jochen C; Jiang, Yong
2016-03-01
Genomic selection models can be trained using historical data, and filtering genotypes based on phenotyping intensity and a reliability criterion can increase prediction ability. We implemented genomic selection based on a large commercial population incorporating 2325 European winter wheat lines. Our objectives were (1) to study whether modeling epistasis besides additive genetic effects results in enhanced prediction ability of genomic selection, (2) to assess prediction ability when the training population comprised historical or less-intensively phenotyped lines, and (3) to explore the prediction ability in subpopulations selected based on the reliability criterion. We found a 5% increase in prediction ability when shifting from additive to additive plus epistatic effects models. In addition, only a marginal loss from 0.65 to 0.50 in accuracy was observed using the data collected from 1 year to predict genotypes of the following year, revealing that stable genomic selection models can be accurately calibrated to predict subsequent breeding stages. Moreover, prediction ability was maximized when the genotypes evaluated in a single location were excluded from the training set but subsequently decreased again when the phenotyping intensity was increased above two locations, suggesting that the update of the training population should be performed considering all the selected genotypes but excluding those evaluated in a single location. The genomic prediction ability was substantially higher in subpopulations selected based on the reliability criterion, indicating that phenotypic selection for highly reliable individuals could be directly replaced by applying genomic selection to them. We empirically conclude that there is a high potential to assist commercial wheat breeding programs employing genomic selection approaches.
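The additive versus additive-plus-epistatic comparison can be sketched with kernel prediction on a genomic relationship matrix, taking the epistatic kernel as the Hadamard product of the additive kernel with itself. The markers are simulated, the kernel weights and shrinkage are arbitrary, and this is not the mixed-model machinery used in the study.

```python
# Sketch: GBLUP-style kernel prediction with an additive kernel (G) and an
# additive + epistatic kernel (G and G*G). Simulated markers, arbitrary shrinkage.
import numpy as np

rng = np.random.default_rng(3)
n, p = 300, 1000                       # lines, SNP markers
M = rng.binomial(2, 0.3, size=(n, p)).astype(float)
Z = M - M.mean(axis=0)                 # centered marker matrix
G = Z @ Z.T / p                        # additive genomic relationship
H = G * G                              # first-order epistatic kernel (Hadamard product)

beta = rng.normal(scale=0.05, size=p)
y = Z @ beta + rng.normal(scale=1.0, size=n)   # simulated phenotype

train = np.arange(n) < 240
test = ~train

def kernel_predict(K, lam=1.0):
    """Predict test records from a kernel: y_hat = K_ts (K_tt + lam I)^-1 y_train."""
    Ktt = K[np.ix_(train, train)] + lam * np.eye(train.sum())
    alpha = np.linalg.solve(Ktt, y[train])
    return K[np.ix_(test, train)] @ alpha

for name, K in [("additive (G)", G), ("additive + epistatic", 0.7 * G + 0.3 * H)]:
    acc = np.corrcoef(kernel_predict(K), y[test])[0, 1]
    print(f"{name:22s} predictive correlation = {acc:.2f}")
```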
Smeers, Inge; Decorte, Ronny; Van de Voorde, Wim; Bekaert, Bram
2018-05-01
DNA methylation is a promising biomarker for forensic age prediction. A challenge that has emerged in recent studies is the fact that prediction errors become larger with increasing age due to interindividual differences in epigenetic ageing rates. This phenomenon of non-constant variance or heteroscedasticity violates an assumption of the often used method of ordinary least squares (OLS) regression. The aim of this study was to evaluate alternative statistical methods that do take heteroscedasticity into account in order to provide more accurate, age-dependent prediction intervals. A weighted least squares (WLS) regression is proposed as well as a quantile regression model. Their performances were compared against an OLS regression model based on the same dataset. Both models provided age-dependent prediction intervals which account for the increasing variance with age, but WLS regression performed better in terms of success rate in the current dataset. However, quantile regression might be a preferred method when dealing with a variance that is not only non-constant, but also not normally distributed. Ultimately the choice of which model to use should depend on the observed characteristics of the data. Copyright © 2018 Elsevier B.V. All rights reserved.
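Both alternatives evaluated in this study, weighted least squares with weights reflecting the age-dependent variance and quantile regression for age-dependent prediction intervals, can be sketched with statsmodels. The methylation-like predictor is simulated with variance that grows with age; the variable names and weighting scheme are illustrative assumptions.

```python
# Sketch: WLS and quantile regression for age prediction with heteroscedastic errors.
# Simulated DNA-methylation-like predictor; not the study's markers or weights.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
age = rng.uniform(18, 80, n)
meth = 0.3 + 0.005 * age + rng.normal(scale=0.01 + 0.0015 * (age - 18), size=n)
df = pd.DataFrame({"age": age, "meth": meth})

# OLS first, then estimate how residual spread grows with the predictor and reuse
# the inverse of that fitted variance as WLS weights.
X = sm.add_constant(df["meth"])
ols = sm.OLS(df["age"], X).fit()
log_res2 = np.log(ols.resid ** 2)
var_fit = sm.OLS(log_res2, X).fit()
weights = 1.0 / np.exp(var_fit.fittedvalues)
wls = sm.WLS(df["age"], X, weights=weights).fit()
print(wls.params)

# Quantile regression: 5th and 95th percentile fits give age-dependent intervals.
q05 = smf.quantreg("age ~ meth", df).fit(q=0.05)
q95 = smf.quantreg("age ~ meth", df).fit(q=0.95)
new = pd.DataFrame({"meth": [0.45]})
lo = np.asarray(q05.predict(new))[0]
hi = np.asarray(q95.predict(new))[0]
print(f"90% prediction interval: {lo:.1f} to {hi:.1f} years")
```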
Climate change and the eco-hydrology of fire: Will area burned increase in a warming western USA?
Donald McKenzie; Jeremy S. Littell
2017-01-01
Wildfire area is predicted to increase with global warming. Empirical statistical models and process-based simulations agree almost universally. The key relationship for this unanimity, observed at multiple spatial and temporal scales, is between drought and fire. Predictive models often focus on ecosystems in which this relationship appears to be particularly strong,...
NASA Astrophysics Data System (ADS)
Davis, Tom R.; Harasti, David; Smith, Stephen D. A.; Kelaher, Brendan P.
2016-11-01
Climate change induced sea level rise will affect shallow estuarine habitats, which are already under threat from multiple anthropogenic stressors. Here, we present the results of modelling to predict potential impacts of climate change associated processes on seagrass distributions. We use a novel application of relative environmental suitability (RES) modelling to examine relationships between variables of physiological importance to seagrasses (light availability, wave exposure, and current flow) and seagrass distributions within 5 estuarine embayments. Models were constructed separately for Posidonia australis and Zostera muelleri subsp. capricorni using seagrass data from Port Stephens estuary, New South Wales, Australia. Subsequent testing of models used independent datasets from four other estuarine embayments (Wallis Lake, Lake Illawarra, Merimbula Lake, and Pambula Lake) distributed along 570 km of the east Australian coast. Relative environmental suitability models provided adequate predictions for seagrass distributions within Port Stephens and the other estuarine embayments, indicating that they may have broad regional application. Under the predictions of RES models, both sea level rise and increased turbidity are predicted to cause substantial seagrass losses in deeper estuarine areas, resulting in a net shoreward movement of seagrass beds. Seagrass species distribution models developed in this study provide a valuable tool to predict future shifts in estuarine seagrass distributions, allowing identification of areas for protection, monitoring and rehabilitation.
Poststroke Fatigue: Who Is at Risk for an Increase in Fatigue?
van Eijsden, Hanna Maria; van de Port, Ingrid Gerrie Lambert; Visser-Meily, Johanna Maria August; Kwakkel, Gert
2012-01-01
Background. Several studies have examined determinants related to post-stroke fatigue. However, it is unclear which determinants can predict an increase in poststroke fatigue over time. Aim. This prospective cohort study aimed to identify determinants which predict an increase in post-stroke fatigue. Methods. A total of 250 patients with stroke were examined at inpatient rehabilitation discharge (T0) and 24 weeks later (T1). Fatigue was measured using the Fatigue Severity Scale (FSS). An increase in post-stroke fatigue was defined as an increase in the FSS score beyond the 95% limits of the standard error of measurement of the FSS (i.e., 1.41 points) between T0 and T1. Candidate determinants included personal factors, stroke characteristics, physical, cognitive, and emotional functions, and activities and participation and were assessed at T0. Factors predicting an increase in fatigue were identified using forward multivariate logistic regression analysis. Results. The only independent predictor of an increase in post-stroke fatigue was FSS (OR 0.50; 0.38–0.64, P < 0.001). The model including FSS at baseline correctly predicted 7.9% of the patients who showed increased fatigue at T1. Conclusion. The prognostic model to predict an increase in fatigue after stroke has limited predictive value, but baseline fatigue is the most important independent predictor. Overall, fatigue levels remained stable over time. PMID:22028989
Temperature gradient interaction chromatography of polymers: A molecular statistical model.
Radke, Wolfgang; Lee, Sekyung; Chang, Taihyun
2010-11-01
A new model describing the retention in temperature gradient interaction chromatography of polymers is developed. The model predicts that polymers might elute in temperature gradient interaction chromatography in either increasing or decreasing order of molar mass, or even nearly independently of it, depending on the rate of the temperature increase relative to the flow rate. This is in contrast to solvent gradient elution, where polymers elute either in order of increasing molar mass or independently of molar mass. The predictions of the newly developed model were verified with literature data as well as new experimental data. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nitrate removal in stream ecosystems measured by 15N addition experiments: Total uptake
Hall, R.O.; Tank, J.L.; Sobota, D.J.; Mulholland, P.J.; O'Brien, J. M.; Dodds, W.K.; Webster, J.R.; Valett, H.M.; Poole, G.C.; Peterson, B.J.; Meyer, J.L.; McDowell, W.H.; Johnson, S.L.; Hamilton, S.K.; Grimm, N. B.; Gregory, S.V.; Dahm, Clifford N.; Cooper, L.W.; Ashkenas, L.R.; Thomas, S.M.; Sheibley, R.W.; Potter, J.D.; Niederlehner, B.R.; Johnson, L.T.; Helton, A.M.; Crenshaw, C.M.; Burgin, A.J.; Bernot, M.J.; Beaulieu, J.J.; Arango, C.P.
2009-01-01
We measured uptake length of 15NO3- in 72 streams in eight regions across the United States and Puerto Rico to develop quantitative predictive models on controls of NO3- uptake length. As part of the Lotic Intersite Nitrogen eXperiment II project, we chose nine streams in each region corresponding to natural (reference), suburban-urban, and agricultural land uses. Study streams spanned a range of human land use to maximize variation in NO3- concentration, geomorphology, and metabolism. We tested a causal model predicting controls on NO3- uptake length using structural equation modeling. The model included concomitant measurements of ecosystem metabolism, hydraulic parameters, and nitrogen concentration. We compared this structural equation model to multiple regression models which included additional biotic, catchment, and riparian variables. The structural equation model explained 79% of the variation in log uptake length (SWtot). Uptake length increased with specific discharge (Q/w) and increasing NO3- concentrations, showing a loss in removal efficiency in streams with high NO3- concentration. Uptake lengths shortened with increasing gross primary production, suggesting autotrophic assimilation dominated NO3- removal. The fraction of catchment area as agriculture and suburban-urban land use weakly predicted NO3- uptake in bivariate regression, and did improve prediction in a set of multiple regression models. Adding land use to the structural equation model showed that land use indirectly affected NO3- uptake lengths via directly increasing both gross primary production and NO3- concentration. Gross primary production shortened SWtot, while increasing NO3- lengthened SWtot resulting in no net effect of land use on NO3- removal. © 2009.
NASA Technical Reports Server (NTRS)
Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John
2011-01-01
A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data were first used for validation, followed by more complex reacting base flow validation.
Carlisle, D.M.; Hawkins, C.P.
2008-01-01
Inferences drawn from regional bioassessments could be strengthened by integrating data from different monitoring programs. We combined data from the US Geological Survey National Water-Quality Assessment (NAWQA) program and the US Environmental Protection Agency Wadeable Streams Assessment (WSA) to expand the scope of an existing River InVertebrate Prediction and Classification System (RIVPACS)-type predictive model and to assess the biological condition of streams across the western US in a variety of landuse classes. We used model-derived estimates of taxon-specific probabilities of capture and observed taxon occurrences to identify taxa that were absent from sites where they were predicted to occur (decreasers) and taxa that were present at sites where they were not predicted to occur (increasers). Integration of 87 NAWQA reference sites increased the scope of the existing WSA predictive model to include larger streams and later season sampling. Biological condition at 336 NAWQA test sites was significantly (p < 0.001) associated with basin land use and tended to be lower in basins with intensive landuse modification (e.g., mixed, urban, and agricultural basins) than in basins with relatively undisturbed land use (e.g., forested basins). Of the 437 taxa observed among reference and test sites, 180 (41%) were increasers or decreasers. In general, decreasers had a different set of ecological traits (functional traits or tolerance values) than did increasers. We could predict whether a taxon was a decreaser or an increaser based on just a few traits, e.g., desiccation resistance, timing of larval development, habit, and thermal preference, but we were unable to predict the type of basin land use from trait states present in invertebrate assemblages. Refined characterization of traits might be required before bioassessment data can be used routinely to aid in the diagnoses of the causes of biological impairment. © 2008 by The North American Benthological Society.
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and theoretically investigated the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values for the components of the biodiesel blend at any two different temperatures. We observe that the density of the blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard error of estimate (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15%) obtained using the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
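A minimal sketch of the proposed approach, anchoring each component's density at two temperatures, assuming a linear temperature dependence, and combining components with Kay's mixing rule, is given below. The anchor densities are rough illustrative numbers, not the data used in the paper.

```python
# Sketch: blend density as a function of temperature and biodiesel volume fraction,
# using linear density-temperature fits per component and Kay's mixing rule.
# Anchor densities are illustrative, not the paper's measurements.

def linear_density(t1, rho1, t2, rho2):
    """Return rho(T) built from two (temperature, density) anchor points."""
    slope = (rho2 - rho1) / (t2 - t1)
    return lambda temp: rho1 + slope * (temp - t1)

# Assumed anchor points (degC, g/cm3) for a vegetable-oil biodiesel and diesel.
rho_biodiesel = linear_density(15.0, 0.885, 60.0, 0.853)
rho_diesel = linear_density(15.0, 0.840, 60.0, 0.808)

def blend_density(temp_c, vol_pct_biodiesel):
    """Kay's mixing rule: volume-fraction-weighted sum of component densities."""
    x = vol_pct_biodiesel / 100.0
    return x * rho_biodiesel(temp_c) + (1.0 - x) * rho_diesel(temp_c)

for blend in (20, 50, 100):          # B20, B50, B100
    print(f"B{blend}: rho(40 degC) = {blend_density(40.0, blend):.4f} g/cm3")
```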
Ransom, Katherine M.; Nolan, Bernard T.; Traum, Jonathan A.; Faunt, Claudia; Bell, Andrew M.; Gronberg, Jo Ann M.; Wheeler, David C.; Zamora, Celia; Jurgens, Bryant; Schwarz, Gregory E.; Belitz, Kenneth; Eberts, Sandra; Kourakos, George; Harter, Thomas
2017-01-01
Intense demand for water in the Central Valley of California and related increases in groundwater nitrate concentration threaten the sustainability of the groundwater resource. To assess contamination risk in the region, we developed a hybrid, non-linear, machine learning model within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface. A database of 145 predictor variables representing well characteristics, historical and current field and landscape-scale nitrogen mass balances, historical and current land use, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age were assigned to over 6000 private supply and public supply wells measured previously for nitrate and located throughout the study area. The boosted regression tree (BRT) method was used to screen and rank variables to predict nitrate concentration at the depths of domestic and public well supplies. The novel approach included as predictor variables outputs from existing physically based models of the Central Valley. The top five most important predictor variables included two oxidation/reduction variables (probability of manganese concentration to exceed 50 ppb and probability of dissolved oxygen concentration to be below 0.5 ppm), field-scale adjusted unsaturated zone nitrogen input for the 1975 time period, average difference between precipitation and evapotranspiration during the years 1971–2000, and 1992 total landscape nitrogen input. Twenty-five variables were selected for the final model for log-transformed nitrate. In general, increasing probability of anoxic conditions and increasing precipitation relative to potential evapotranspiration had a corresponding decrease in nitrate concentration predictions. Conversely, increasing 1975 unsaturated zone nitrogen leaching flux and 1992 total landscape nitrogen input had an increasing relative impact on nitrate predictions. Three-dimensional visualization indicates that nitrate predictions depend on the probability of anoxic conditions and other factors, and that nitrate predictions generally decreased with increasing groundwater age.
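A simplified analogue of the boosted regression tree step, fitting a gradient-boosted model to log-transformed nitrate and ranking predictors by importance, is sketched below with scikit-learn. The data are synthetic and the column names only loosely echo a few of the study's predictor categories.

```python
# Sketch: boosted regression trees for log-nitrate with predictor ranking.
# Synthetic data; column names only loosely echo the study's predictor categories.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000
X = pd.DataFrame({
    "p_anoxic": rng.uniform(0, 1, n),          # probability of low dissolved oxygen
    "n_input_1975": rng.gamma(2.0, 20.0, n),   # assumed unsaturated-zone N input
    "precip_minus_et": rng.normal(0, 150, n),  # mm/yr
    "well_depth": rng.uniform(30, 500, n),     # m
})
log_no3 = (1.5 - 2.0 * X["p_anoxic"] + 0.01 * X["n_input_1975"]
           - 0.002 * X["precip_minus_et"] + rng.normal(scale=0.5, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, log_no3, random_state=0)
brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.02,
                                max_depth=3, subsample=0.7, random_state=0)
brt.fit(X_tr, y_tr)
print("holdout R2:", round(brt.score(X_te, y_te), 3))
for name, imp in sorted(zip(X.columns, brt.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:18s} importance = {imp:.3f}")
```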
Land-atmosphere coupling and climate prediction over the U.S. Southern Great Plains
NASA Astrophysics Data System (ADS)
Williams, Ian N.; Lu, Yaqiong; Kueppers, Lara M.; Riley, William J.; Biraud, Sebastien C.; Bagley, Justin E.; Torn, Margaret S.
2016-10-01
Biases in land-atmosphere coupling in climate models can contribute to climate prediction biases, but land models are rarely evaluated in the context of this coupling. We tested land-atmosphere coupling and explored effects of land surface parameterizations on climate prediction in a single-column version of the National Center for Atmospheric Research Community Earth System Model (CESM1.2.2) and an off-line Community Land Model (CLM4.5). The correlation between leaf area index (LAI) and surface evaporative fraction (ratio of latent to total turbulent heat flux) was substantially underpredicted compared to observations in the U.S. Southern Great Plains, while the correlation between soil moisture and evaporative fraction was overpredicted by CLM4.5. To estimate the impacts of these errors on climate prediction, we modified CLM4.5 by prescribing observed LAI, increasing soil resistance to evaporation, increasing minimum stomatal conductance, and increasing leaf reflectance. The modifications improved the predicted soil moisture-evaporative fraction (EF) and LAI-EF correlations in off-line CLM4.5 and reduced the root-mean-square error in summer 2 m air temperature and precipitation in the coupled model. The modifications had the largest effect on prediction during a drought in summer 2006, when a warm bias in daytime 2 m air temperature was reduced from +6°C to a smaller cold bias of -1.3°C, and a corresponding dry bias in precipitation was reduced from -111 mm to -23 mm. The role of vegetation in droughts and heat waves is underpredicted in CESM1.2.2, and improvements in land surface models can improve prediction of climate extremes.
Relationship between Quantitative CT Metrics and Health Status and Bode in COPD
Martinez, Carlos H.; Chen, Ya-Hong; Westgate, Phillip M.; Liu, Lyrica X.; Murray, Susan; Curtis, Jeffrey L.; Make, Barry J.; Kazerooni, Ella A.; Lynch, David A.; Marchetti, Nathaniel; Washko, George R.; Martinez, Fernando J.; Han, MeiLan K.
2013-01-01
Background: The value of quantitative computed tomography (QCT) to identify chronic obstructive pulmonary disease (COPD) phenotypes is increasingly appreciated. We hypothesized that QCT-defined emphysema and airway abnormalities relate to St. George's Respiratory Questionnaire (SGRQ) and BODE. Methods: 1,200 COPDGene subjects meeting GOLD criteria for COPD with QCT analysis were included. Total lung emphysema was measured using density mask technique with a -950 HU threshold. An automated program measured mean wall thickness (WT), wall area percent (WA%) and pi10 in six segmental bronchi. Separate multivariate analyses examined the relative influence of airway measures and emphysema on SGRQ and BODE. Results: In separate models predicting SGRQ score, a one unit standard deviation (SD) increase in each airway measure predicted higher SGRQ scores (for WT, 1.90 points higher, p=0.002; for WA%, 1.52 points higher, p=0.02; for pi10, 2.83 points higher, p<0.001). The comparable increase in SGRQ for a one unit SD increase in percent emphysema in these models was relatively weaker, significant only in the pi10 model (for percent emphysema, 1.45 points higher, p=0.01). In separate models predicting BODE, a one unit SD increase in each airway measure predicted higher BODE scores (for WT, 1.07 fold increase, p<0.001; for WA%, 1.20 fold increase, p<0.001; for pi10, 1.16 fold increase, p<0.001). In these models, emphysema more strongly influenced BODE (range 1.24-1.26 fold increase, p<0.001). Conclusion: Emphysema and airway disease both relate to clinically important parameters. The relative influence of airway disease is greater for SGRQ; the relative influence of emphysema is greater for BODE. PMID:22514236
Methodology for the evaluation of vascular surgery manpower in France.
Berger, L; Mace, J M; Ricco, J B; Saporta, G
2013-01-01
The French population is growing and ageing. It is expected to increase by 2.7% by 2020, and the number of individuals over 65 years of age is expected to increase by 3.3 million, a 33% increase, between 2005 and 2020. As the number of vascular surgery procedures is closely associated with the age of a population, it is anticipated that there will be a significant increase in the workload of vascular surgeons. A model is presented to predict changes in vascular surgery activity according to population ageing, including other parameters that could affect workload evolution. Three types of arterial procedures were studied: infrarenal abdominal aortic aneurysm (AAA) surgery, peripheral arterial occlusive disease (PAOD) procedures and carotid artery (CEA) procedures. Data were selected and extracted from the national PMSI (Medical Information System Program) database. Data obtained from 2000 were used to predict data based on an ageing population for 2008. From this model, a weighted index was defined for each group by comparing expected and observed workloads. According to the model, over this 8-year period, there was an overall increase in vascular procedures of 52.2%, with an increase of 89% in PAOD procedures. Between 2000 and 2009, the total increase was 58.0%, with 3.9% for AAA procedures, 101.7% for PAOD procedures and 13.2% for CEA procedures. The weighted model based on an ageing population and corrected by a weighted factor predicted this increase. This weighted model is able to predict the workload of vascular surgeons over the coming years. An ageing population and other factors could result in a significant increase in demand for vascular surgical services. Copyright © 2012 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression response surface equation (RSE) model. Data obtained from operations of a major airline for a passenger transport aircraft type to the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents more than a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.
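The two approaches compared in this paper, a response-surface (polynomial multi-regression) model and a nonlinear neural-network regression, can be contrasted on synthetic data as below. The features and their relationship to final approach speed are invented, and the error metric simply mirrors the paper's use of the standard deviation of prediction error.

```python
# Sketch: response-surface (polynomial) regression vs. a small neural network
# for landing-speed prediction. Features and their effects are invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 4000
gross_weight = rng.uniform(50, 80, n)      # tonnes (assumed)
headwind = rng.uniform(-5, 20, n)          # kt (assumed)
flap_setting = rng.choice([30, 40], n)
speed = (115 + 0.6 * gross_weight - 0.3 * headwind - 0.2 * (flap_setting - 30)
         + 0.01 * gross_weight * headwind + rng.normal(scale=3.0, size=n))

X = np.column_stack([gross_weight, headwind, flap_setting])
X_tr, X_te, y_tr, y_te = train_test_split(X, speed, random_state=0)

rse = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X_tr, y_tr)
nn = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                random_state=0)).fit(X_tr, y_tr)

for name, model in [("RSE (quadratic)", rse), ("neural network", nn)]:
    err_sd = np.std(y_te - model.predict(X_te))
    print(f"{name:16s} prediction-error SD = {err_sd:.2f} kt")
```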
Chang, Chia-Yuan; Rupp, Jonathan D; Reed, Matthew P; Hughes, Richard E; Schneider, Lawrence W
2009-11-01
In a previous study, the authors reported on the development of a finite-element model of the midsize male pelvis and lower extremities with lower-extremity musculature that was validated using PMHS knee-impact response data. Knee-impact simulations with this model were performed using forces from four muscles in the lower extremities associated with two-foot bracing reported in the literature to provide preliminary estimates of the effects of lower-extremity muscle activation on knee-thigh-hip injury potential in frontal impacts. The current study addresses a major limitation of these preliminary simulations by using the AnyBody three-dimensional musculoskeletal model to estimate muscle forces produced in 35 muscles in each lower extremity during emergency one-foot braking. To check the predictions of the AnyBody Model, activation levels of twelve major muscles in the hip and lower extremities were measured using surface EMG electrodes on 12 midsize-male subjects performing simulated maximum and 50% of maximum braking in a laboratory seating buck. Comparisons between test results and the predictions of the AnyBody Model when it was used to simulate these same braking tests suggest that the AnyBody model appropriately predicts agonistic muscle activations but underpredicts antagonistic muscle activations. Simulations of knee-to-knee-bolster impacts were performed by impacting the knees of the lower-extremity finite element model with and without the muscle forces predicted by the validated AnyBody Model. Results of these simulations confirm previous findings that muscle tension increases knee-impact force by increasing the effective mass of the KTH complex due to tighter coupling of muscle mass to bone. They also indicate that muscle activation preferentially couples mass distal to the hip, thereby accentuating the decrease in femur force from the knee to the hip. However, the reduction in force transmitted from the knee to the hip is offset by the increased force at the knee and by increased compressive forces at the hip due to activation of lower-extremity muscles. As a result, approximately 45% to 60% and 50% to 65% of the force applied to the knee is applied to the hip in the simulations without and with muscle tension, respectively. The simulation results suggest that lower-extremity muscle tension has little effect on the risk of hip injuries, but it increases the bending moments in the femoral shaft, thereby increasing the risk of femoral shaft fractures by 20%-40%. However, these findings may be affected by the inability of the AnyBody Model to appropriately predict antagonistic muscle forces.
Uncertainty analysis of a groundwater flow model in East-central Florida.
Sepúlveda, Nicasio; Doherty, John
2015-01-01
A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan Aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The "Null Space Monte Carlo" method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model's capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial or temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context. © 2014, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Saleh, F.; Garambois, P. A.; Biancamaria, S.
2017-12-01
Floods are considered the major natural threats to human societies across all continents. Consequences of floods in highly populated areas are more dramatic with losses of human lives and substantial property damage. This risk is projected to increase with the effects of climate change, particularly sea-level rise, increasing storm frequencies and intensities and increasing population and economic assets in such urban watersheds. Despite the advances in computational resources and modeling techniques, significant gaps exist in predicting complex processes and accurately representing the initial state of the system. Improving flood prediction models and data assimilation chains through satellite has become an absolute priority to produce accurate flood forecasts with sufficient lead times. The overarching goal of this work is to assess the benefits of the Surface Water Ocean Topography SWOT satellite data from a flood prediction perspective. The near real time methodology is based on combining satellite data from a simulator that mimics the future SWOT data, numerical models, high resolution elevation data and real-time local measurement in the New York/New Jersey area.
Multi-Node Thermal System Model for Lithium-Ion Battery Packs: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Ying; Smith, Kandler; Wood, Eric
Temperature is one of the main factors that controls degradation in lithium-ion batteries. Accurate knowledge and control of cell temperatures in a pack helps the battery management system (BMS) maximize cell utilization and ensure pack safety and service life. In a pack with arrays of cells, a cell's temperature is affected not only by its own thermal characteristics but also by its neighbors, the cooling system, and the pack configuration, which increase the noise level and the complexity of cell temperature prediction. This work proposes to model the thermal behavior of lithium-ion packs using a multi-node thermal network model, which predicts the cell temperatures by zones. The model was parametrized and validated using commercial lithium-ion battery packs.
McBride, Devin W.; Rodgers, Victor G. J.
2013-01-01
The activity coefficient is largely considered an empirical parameter that was traditionally introduced to correct the non-ideality observed in thermodynamic systems such as osmotic pressure. Here, the activity coefficient of free-solvent is related to physically realistic parameters and a mathematical expression is developed to directly predict the activity coefficients of free-solvent, for aqueous protein solutions up to near-saturation concentrations. The model is based on the free-solvent model, which has previously been shown to provide excellent prediction of the osmotic pressure of concentrated and crowded globular proteins in aqueous solutions up to near-saturation concentrations. Thus, this model uses only the independently determined, physically realizable quantities: mole fraction, solvent accessible surface area, and ion binding, in its prediction. Predictions are presented for the activity coefficients of free-solvent for near-saturated protein solutions containing either bovine serum albumin or hemoglobin. As a verification step, the predictability of the model for the activity coefficient of sucrose solutions was evaluated. The predicted activity coefficients of free-solvent are compared to the calculated activity coefficients of free-solvent based on osmotic pressure data. It is observed that the predicted activity coefficients are increasingly dependent on the solute-solvent parameters as the protein concentration increases to near-saturation concentrations. PMID:24324733
Kowall, Bernd; Rathmann, Wolfgang; Giani, Guido; Schipf, Sabine; Baumeister, Sebastian; Wallaschofski, Henri; Nauck, Matthias; Völzke, Henry
2013-04-01
Random glucose is widely used in routine clinical practice. We investigated whether this non-standardized glycemic measure is useful for individual diabetes prediction. The Study of Health in Pomerania (SHIP), a population-based cohort study in north-east Germany, included 3107 diabetes-free persons aged 31-81 years at baseline in 1997-2001. 2475 persons participated at 5-year follow-up and gave self-reports of incident diabetes. For the total sample and for subjects aged ≥50 years, statistical properties of prediction models with and without random glucose were compared. A basic model (including age, sex, diabetes of parents, hypertension and waist circumference) and a comprehensive model (additionally including various lifestyle variables and blood parameters, but not HbA1c) performed statistically significantly better after adding random glucose (e.g., the area under the receiver-operating curve (AROC) increased from 0.824 to 0.856 after adding random glucose to the comprehensive model in the total sample). Likewise, adding random glucose to prediction models which included HbA1c led to significant improvements of predictive ability (e.g., for subjects ≥50 years, AROC increased from 0.824 to 0.849 after adding random glucose to the comprehensive model+HbA1c). Random glucose is useful for individual diabetes prediction, and improves prediction models including HbA1c. Copyright © 2012 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
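The core comparison, whether adding random glucose to a clinical prediction model raises the area under the ROC curve (AROC), can be sketched with logistic regression. The cohort below is simulated and the predictors are a reduced, illustrative set, not the SHIP variables.

```python
# Sketch: compare ROC AUC of a diabetes-prediction model with and without
# random glucose. Simulated cohort; illustrative predictors only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 3000
age = rng.uniform(31, 81, n)
waist = rng.normal(95, 12, n)
glucose = rng.normal(5.6 + 0.01 * (waist - 95), 1.0, n)   # mmol/L, non-fasting (assumed)
logit = -10.5 + 0.04 * age + 0.03 * waist + 0.6 * glucose
incident_dm = rng.binomial(1, 1 / (1 + np.exp(-logit)))

basic = np.column_stack([age, waist])
extended = np.column_stack([age, waist, glucose])

for name, X in [("basic model", basic), ("basic + random glucose", extended)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, incident_dm, random_state=0,
                                              stratify=incident_dm)
    clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name:24s} AROC = {auc:.3f}")
```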
Rossow, Heidi A; Calvert, C Chris
2014-10-01
The goal of this research was to use a computational model of human metabolism to predict energy metabolism for lean and obese men. The model is composed of 6 state variables representing amino acids, muscle protein, visceral protein, glucose, triglycerides, and fatty acids (FAs). Differential equations represent carbohydrate, amino acid, and FA uptake and output by tissues based on ATP creation and use for both lean and obese men. Model parameterization is based on data from previous studies. Results from sensitivity analyses indicate that model predictions of resting energy expenditure (REE) and respiratory quotient (RQ) are dependent on FA and glucose oxidation rates with the highest sensitivity coefficients (0.6, 0.8 and 0.43, 0.15, respectively, for lean and obese models). Metabolizable energy (ME) is influenced by ingested energy intake with a sensitivity coefficient of 0.98, and a phosphate-to-oxygen ratio by FA oxidation rate and amino acid oxidation rate (0.32, 0.24 and 0.55, 0.65 for lean and obese models, respectively). Simulations of previously published studies showed that the model is able to predict ME ranging from 6.6 to 9.3 with 0% differences between published and model values, and RQ ranging from 0.79 to 0.86 with 1% differences between published and model values. REEs >7 MJ/d are predicted with 6% differences between published and model values. Glucose oxidation increases by ∼0.59 mol/d, RQ increases by 0.03, REE increases by 2 MJ/d, and heat production increases by 1.8 MJ/d in the obese model compared with lean model simulations. Increased FA oxidation results in higher changes in RQ and lower relative changes in REE. These results suggest that because fat mass is directly related to REE and rate of FA oxidation, body fat content could be used as a predictor of RQ. © 2014 American Society for Nutrition.
NASA Technical Reports Server (NTRS)
Smith, Arthur F.
1985-01-01
Results of static stability wind tunnel tests of three 62.2 cm (24.5 in) diameter models of the Prop-Fan are presented. Measurements of blade stresses were made with the Prop-Fans mounted on an isolated nacelle in an open 5.5 m (18 ft) wind tunnel test section with no tunnel flow. The tests were conducted in the United Technology Research Center Large Subsonic Wind Tunnel. Stall flutter was determined by regions of high stress, which were compared with predictions of boundaries of zero total viscous damping. The structural analysis used beam methods for the model with straight blades and finite element methods for the models with swept blades. Increasing blade sweep tends to suppress stall flutter. Comparisons with similar test data acquired at NASA/Lewis are good. Correlations between measured and predicted critical speeds for all the models are good. The trend of increased stability with increased blade sweep is well predicted. Calculated flutter boundaries generally coincide with tested boundaries. Stall flutter is predicted to occur in the third (torsion) mode. The straight blade test shows third mode response, while the swept blades respond in other modes.
NASA Astrophysics Data System (ADS)
Takagi, M.; Gyokusen, Koichiro; Saito, Akira
It was found that the atmospheric carbon dioxide (CO2) concentration in an urban canyon in Fukuoka city, Japan during August 1997 was about 30 µmol mol⁻¹ higher than that in the suburbs. When fully exposed to sunlight, the in situ rate of photosynthesis in single leaves of Ilex rotunda planted in the urban canyon was higher when the atmospheric CO2 concentration was elevated. A biochemically based model was able to predict the in situ rate of photosynthesis well. The model also predicted an increase in the daily CO2 exchange rate for leaves in the urban canyon with an increase in atmospheric CO2 concentration. However, in situ such an increase in the daily CO2 exchange rate may be offset by diminished sunlight, a higher air temperature and a lower relative humidity. Thus, the daily CO2 exchange rate predicted using the model based solely on the environmental conditions prevailing in the urban canyon was lower than that predicted based only on environmental factors found in the suburbs.
Gratitude depends on the relational model of communal sharing.
Simão, Cláudia; Seibt, Beate
2014-01-01
We studied the relation between benefits, perception of social relationships and gratitude. Across three studies, we provide evidence that benefits increase gratitude to the extent to which one applies a mental model of a communal relationship. In Study 1, the communal sharing relational model, and no other relational models, predicted the amount of gratitude participants felt after imagining receiving a benefit from a new acquaintance. In Study 2, participants recalled a large benefit they had received. Applying a communal sharing relational model increased feelings of gratitude for the benefit. In Study 3, we manipulated whether the participant or another person received a benefit from an unknown other. Again, we found that the extent of communal sharing perceived in the relationship with the stranger predicted gratitude. An additional finding of Study 2 was that communal sharing predicted future gratitude regarding the relational partner in a longitudinal design. To conclude, applying a communal sharing model predicts gratitude regarding concrete benefits and regarding the relational partner, presumably because one perceives the communal partner as motivated to meet one's needs. Finally, in Study 3, we found in addition that being the recipient of a benefit without opportunity to repay directly increased communal sharing, and indirectly increased gratitude. These circumstances thus seem to favor the attribution of communal norms, leading to a communal sharing representation and in turn to gratitude. We discuss the importance of relational models as mental representations of relationships for feelings of gratitude.
Gratitude Depends on the Relational Model of Communal Sharing
Simão, Cláudia; Seibt, Beate
2014-01-01
We studied the relation between benefits, perception of social relationships and gratitude. Across three studies, we provide evidence that benefits increase gratitude to the extent to which one applies a mental model of a communal relationship. In Study 1, the communal sharing relational model, and no other relational models, predicted the amount of gratitude participants felt after imagining receiving a benefit from a new acquaintance. In Study 2, participants recalled a large benefit they had received. Applying a communal sharing relational model increased feelings of gratitude for the benefit. In Study 3, we manipulated whether the participant or another person received a benefit from an unknown other. Again, we found that the extent of communal sharing perceived in the relationship with the stranger predicted gratitude. An additional finding of Study 2 was that communal sharing predicted future gratitude regarding the relational partner in a longitudinal design. To conclude, applying a communal sharing model predicts gratitude regarding concrete benefits and regarding the relational partner, presumably because one perceives the communal partner as motivated to meet one's needs. Finally, in Study 3, we found in addition that being the recipient of a benefit without opportunity to repay directly increased communal sharing, and indirectly increased gratitude. These circumstances thus seem to favor the attribution of communal norms, leading to a communal sharing representation and in turn to gratitude. We discuss the importance of relational models as mental representations of relationships for feelings of gratitude. PMID:24465933
The extension of total gain (TG) statistic in survival models: properties and applications.
Choodari-Oskooei, Babak; Royston, Patrick; Parmar, Mahesh K B
2015-07-01
The results of multivariable regression models are usually summarized in the form of parameter estimates for the covariates, goodness-of-fit statistics, and the relevant p-values. These statistics do not inform us about whether covariate information will lead to any substantial improvement in prediction. Predictive ability measures can be used for this purpose since they provide important information about the practical significance of prognostic factors. R²-type indices are the most familiar forms of such measures in survival models, but they all have limitations and none is widely used. In this paper, we extend the total gain (TG) measure, proposed for a logistic regression model, to survival models and explore its properties using simulations and real data. TG is based on the binary regression quantile plot, otherwise known as the predictiveness curve. Standardised TG ranges from 0 (no explanatory power) to 1 ('perfect' explanatory power). The results of our simulations show that unlike many of the other R²-type predictive ability measures, TG is independent of random censoring. It increases as the effect of a covariate increases and can be applied to different types of survival models, including models with time-dependent covariate effects. We also apply TG to quantify the predictive ability of multivariable prognostic models developed in several disease areas. Overall, TG performs well in our simulation studies and can be recommended as a measure to quantify the predictive ability in survival models.
Zhou, L; Lund, M S; Wang, Y; Su, G
2014-08-01
This study investigated genomic predictions across Nordic Holstein and Nordic Red using various genomic relationship matrices. Different sources of information, such as consistencies of linkage disequilibrium (LD) phase and marker effects, were used to construct the genomic relationship matrices (G-matrices) across these two breeds. A single-trait genomic best linear unbiased prediction (GBLUP) model and a two-trait GBLUP model were used for single-breed and two-breed genomic predictions. The data included 5215 Nordic Holstein bulls and 4361 Nordic Red bulls, which comprised three populations: Danish Red, Swedish Red and Finnish Ayrshire. The bulls were genotyped with a 50 000 SNP chip. Using the two-breed predictions with a joint Nordic Holstein and Nordic Red reference population, accuracies increased slightly for all traits in Nordic Red, but only for some traits in Nordic Holstein. Among the three subpopulations of Nordic Red, accuracies increased more for Danish Red than for Swedish Red and Finnish Ayrshire. This is because closer genetic relationships exist between Danish Red and Nordic Holstein. Among Danish Red, individuals with higher genomic relationship coefficients with Nordic Holstein showed greater increases in accuracy in the two-breed predictions. Weighting the two-breed G-matrices by LD phase consistencies, marker effects or both did not further improve accuracies of the two-breed predictions. © 2014 Blackwell Verlag GmbH.
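A minimal sketch of the single-breed GBLUP step is given below: a VanRaden-type genomic relationship matrix is built from a genotype matrix and used in the kernel-ridge form of GBLUP to predict genetic values for unphenotyped individuals. The genotypes, phenotypes and variance ratio are simulated and assumed, and the across-breed weighting of G-matrices discussed in the abstract is not implemented.

```python
import numpy as np

def vanraden_G(M):
    """Genomic relationship matrix from an (individuals x markers) 0/1/2
    genotype matrix, following the VanRaden (2008) construction."""
    p = M.mean(axis=0) / 2.0
    Z = M - 2.0 * p
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def gblup_predict(G, y_train, train_idx, test_idx, lam=1.0):
    """Kernel-ridge form of GBLUP: u_test = G[test, train] (G[train, train] + lam*I)^-1 (y - mean).
    lam is the residual-to-genetic variance ratio, assumed known here."""
    ybar = y_train.mean()
    K = G[np.ix_(train_idx, train_idx)]
    alpha = np.linalg.solve(K + lam * np.eye(len(train_idx)), y_train - ybar)
    return ybar + G[np.ix_(test_idx, train_idx)] @ alpha

# Toy example with simulated genotypes and phenotypes (not the Nordic data).
rng = np.random.default_rng(1)
M = rng.binomial(2, 0.3, size=(200, 500)).astype(float)
u = M @ rng.normal(0, 0.05, 500)                 # true genetic values
y = u + rng.normal(0, u.std(), 200)              # phenotypes with noise

G = vanraden_G(M)
train, test = np.arange(150), np.arange(150, 200)
pred = gblup_predict(G, y[train], train, test, lam=1.0)
print("accuracy (correlation):", np.corrcoef(pred, y[test])[0, 1].round(2))
```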
Using Toxicological Evidence from QSAR Models in Practice
The new generation of QSAR models provides supporting documentation in addition to the predicted toxicological value. Such information enables the toxicologist to explore the properties of chemical substances and to review and increase the reliability of toxicity predictions. Thi...
An integrative model of risk for high school disordered eating.
Davis, Heather A; Smith, Gregory T
2018-06-21
Binge eating and purging behaviors are associated with significant harm and distress among adolescents. The process by which these behaviors develop (often in the high school years) is not fully understood. We tested the Acquired Preparedness (AP) model of risk involving transactions among biological, personality, and psychosocial factors to predict binge eating and purging behavior in a sample of 1,906 children assessed in the spring of 5th grade (the last year of elementary school), the fall of 6th grade (the first year of middle school), spring of 6th grade, and spring of 10th grade (second year of high school). Pubertal onset in spring of 5th grade predicted increases in negative urgency, but not negative affect, in the fall of 6th grade. Negative urgency in the fall of 6th grade predicted increases in expectancies for reinforcement from eating in the spring of 6th grade, which in turn predicted increases in binge eating behavior in the spring of 10th grade. Negative affect in the fall of 6th grade predicted increases in thinness expectancies in the spring of 6th grade, which in turn predicted increases in purging in the spring of 10th grade. Results demonstrate similarities and differences in the development of these two different bulimic behaviors. Intervention efforts targeting the risk factors evident in this model may prove fruitful in the treatment of eating disorders characterized by binge eating and purging. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Seasonal prediction of winter haze days in the north central North China Plain
NASA Astrophysics Data System (ADS)
Yin, Zhicong; Wang, Huijun
2016-11-01
Recently, the winter (December-February) haze pollution over the north central North China Plain (NCP) has become severe. By treating the year-to-year increment as the predictand, two new statistical schemes were established using multiple linear regression (MLR) and a generalized additive model (GAM). By analyzing the associated increment of atmospheric circulation, seven leading predictors were selected to predict the upcoming winter haze days over the NCP (WHDNCP). After cross validation, the root mean square error and explained variance of the MLR (GAM) prediction model were 3.39 (3.38) and 53% (54%), respectively. For the final predicted WHDNCP, both of these models could successfully capture the interannual and interdecadal trends and the extrema. Independent prediction tests for 2014 and 2015 also confirmed the good predictive skill of the new schemes. The prediction bias of the MLR (GAM) model in 2014 and 2015 was 0.09 (-0.07) and -3.33 (-1.01), respectively. Compared to the MLR model, the GAM model had higher predictive skill in reproducing the rapid and continuous increase of WHDNCP after 2010.
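The year-to-year increment scheme can be outlined as follows: regress the increment of winter haze days on increments of the circulation predictors, then add the predicted increment to the previous winter's observed value. The Python sketch below does this on simulated series; the seven predictors and their coefficients are stand-ins, not the ones selected in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Sketch of the year-to-year increment approach: fit a multiple linear
# regression to DWHD(t) = WHD(t) - WHD(t-1) using increments of circulation
# predictors, then recover the forecast as WHD(t-1) + predicted increment.
# All series below are simulated, not the observed NCP haze-day record.
rng = np.random.default_rng(2)
n_years, n_pred = 35, 7
dX = rng.normal(size=(n_years, n_pred))              # predictor increments
beta = rng.normal(size=n_pred)
dWHD = dX @ beta + rng.normal(0, 1.0, n_years)       # haze-day increments
WHD = 25 + np.cumsum(dWHD)                           # the haze-day series itself

train = slice(0, n_years - 2)                        # hold out the last 2 winters
mlr = LinearRegression().fit(dX[train], dWHD[train])
for t in (n_years - 2, n_years - 1):
    forecast = WHD[t - 1] + mlr.predict(dX[t][None, :])[0]
    print(f"winter {t}: forecast {forecast:.1f}, observed {WHD[t]:.1f}")
```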
Incorporating uncertainty in predictive species distribution modelling.
Beale, Colin M; Lennon, Jack J
2012-01-19
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
NASA Astrophysics Data System (ADS)
Lehner, Flavio; Wood, Andrew W.; Llewellyn, Dagmar; Blatchford, Douglas B.; Goodbody, Angus G.; Pappenberger, Florian
2017-12-01
Seasonal streamflow predictions provide a critical management tool for water managers in the American Southwest. In recent decades, persistent prediction errors for spring and summer runoff volumes have been observed in a number of watersheds in the American Southwest. While mostly driven by decadal precipitation trends, these errors also relate to the influence of increasing temperature on streamflow in these basins. Here we show that incorporating seasonal temperature forecasts from operational global climate prediction models into streamflow forecasting models adds prediction skill for watersheds in the headwaters of the Colorado and Rio Grande River basins. Current dynamical seasonal temperature forecasts now show sufficient skill to reduce streamflow forecast errors in snowmelt-driven regions. Such predictions can increase the resilience of streamflow forecasting and water management systems in the face of continuing warming as well as decadal-scale temperature variability and thus help to mitigate the impacts of climate nonstationarity on streamflow predictability.
Snyder, James
2014-01-01
Objective Demonstrate multivariate multilevel survival analysis within a larger structural equation model. Test the 3 hypotheses that when confronted by a negative parent, child rates of angry, sad/fearful, and positive emotion will increase, decrease, and stay the same, respectively, for antisocial compared with normal children. This same pattern will predict increases in future antisocial behavior. Methods Parent–child dyads were videotaped in the fall of kindergarten in the laboratory and antisocial behavior ratings were obtained in the fall of kindergarten and third grade. Results Kindergarten antisocial behavior predicted lower rates of child sad/fearful and positive emotion, but did not predict child anger, in response to parent negativity. Lower rates of child positive emotion and higher rates of child neutral affect in response to parent negativity predicted increases in third-grade antisocial behavior. Conclusions The model is a useful analytic tool for studying rates of social behavior. Lack of positive affect or excess neutral affect may be a new risk factor for child antisocial behavior. PMID:24133296
Attention Modulates Spatial Precision in Multiple-Object Tracking.
Srivastava, Nisheeth; Vul, Ed
2016-01-01
We present a computational model of multiple-object tracking that makes trial-level predictions about the allocation of visual attention and the effect of this allocation on observers' ability to track multiple objects simultaneously. This model follows the intuition that increased attention to a location increases the spatial resolution of its internal representation. Using a combination of empirical and computational experiments, we demonstrate the existence of a tight coupling between cognitive and perceptual resources in this task: Low-level tracking of objects generates bottom-up predictions of error likelihood, and high-level attention allocation selectively reduces error probabilities in attended locations while increasing it at non-attended locations. Whereas earlier models of multiple-object tracking have predicted the big picture relationship between stimulus complexity and response accuracy, our approach makes accurate predictions of both the macro-scale effect of target number and velocity on tracking difficulty and micro-scale variations in difficulty across individual trials and targets arising from the idiosyncratic within-trial interactions of targets and distractors. Copyright © 2016 Cognitive Science Society, Inc.
Large eddy simulation of fine water sprays: comparative analysis of two models and computer codes
NASA Astrophysics Data System (ADS)
Tsoy, A. S.; Snegirev, A. Yu.
2015-09-01
The model and the computer code FDS, albeit widely used in engineering practice to predict fire development, are not sufficiently validated for fire suppression by fine water sprays. In this work, the effect of numerical resolution of the large scale turbulent pulsations on the accuracy of predicted time-averaged spray parameters is evaluated. Comparison of the simulation results obtained with the two versions of the model and code, as well as of the predicted and measured radial distributions of the liquid flow rate, revealed the need to apply monotonic and yet sufficiently accurate discrete approximations of the convective terms. Failure to do so delays jet break-up, otherwise induced by large turbulent eddies, thereby excessively focusing the predicted flow around its axis. The effect of the pressure drop in the spray nozzle is also examined; its increase was shown to cause only a weak increase of the evaporated fraction and vapor concentration despite the significant increase of flow velocity.
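The sensitivity to the discretisation of the convective terms can be illustrated with a one-dimensional linear advection problem: a first-order upwind scheme is monotone (no spurious extrema, at the price of numerical diffusion), whereas a central approximation generates oscillations at a sharp front. The sketch below is a generic numerical illustration, not the FDS discretisation itself.

```python
import numpy as np

# 1-D illustration of why the convective-term discretisation matters:
# first-order upwind is monotone (no new extrema) but diffusive, while a
# central approximation produces spurious oscillations at a sharp front
# (and, combined with forward Euler, is also linearly unstable).
nx, a, cfl = 200, 1.0, 0.4
dx = 1.0 / nx
dt = cfl * dx / a
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse, periodic domain

up, ce = u0.copy(), u0.copy()
for _ in range(60):
    up -= a * dt / dx * (up - np.roll(up, 1))                     # upwind
    ce -= a * dt / (2 * dx) * (np.roll(ce, -1) - np.roll(ce, 1))  # central

print("upwind  min/max:", up.min().round(3), up.max().round(3))
print("central min/max:", ce.min().round(3), ce.max().round(3))
```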
A New Approach to Predict the Fish Fillet Shelf-Life in Presence of Natural Preservative Agents.
Giuffrida, Alessandro; Giarratana, Filippo; Valenti, Davide; Muscolino, Daniele; Parisi, Roberta; Parco, Alessio; Marotta, Stefania; Ziino, Graziella; Panebianco, Antonio
2017-04-13
Three data sets concerning the behaviour of spoilage flora of fillets treated with natural preservative substances (NPS) were used to construct a new kind of mathematical predictive model. This model, unlike other ones, allows expressing the antibacterial effect of the NPS separately from the prediction of the growth rate. This approach, based on the introduction of a parameter into the predictive primary model, produced a good fitting of observed data and allowed characterising quantitatively the increase of shelf-life of fillets.
Acosta-Pech, Rocío; Crossa, José; de Los Campos, Gustavo; Teyssèdre, Simon; Claustres, Bruno; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino
2017-07-01
A new genomic model that incorporates genotype × environment interaction gave increased prediction accuracy of untested hybrid response for traits such as percent starch content, percent dry matter content and silage yield of maize hybrids. The prediction of hybrid performance (HP) is very important in agricultural breeding programs. In plant breeding, multi-environment trials play an important role in the selection of important traits, such as stability across environments, grain yield and pest resistance. Environmental conditions modulate gene expression causing genotype × environment interaction (G × E), such that the estimated genetic correlations of the performance of individual lines across environments summarize the joint action of genes and environmental conditions. This article proposes a genomic statistical model that incorporates G × E for general and specific combining ability for predicting the performance of hybrids in environments. The proposed model can also be applied to any other hybrid species with distinct parental pools. In this study, we evaluated the predictive ability of two HP prediction models using a cross-validation approach applied in extensive maize hybrid data, comprising 2724 hybrids derived from 507 dent lines and 24 flint lines, which were evaluated for three traits in 58 environments over 12 years; analyses were performed for each year. On average, genomic models that include the interaction of general and specific combining ability with environments have greater predictive ability than genomic models without interaction with environments (ranging from 12 to 22%, depending on the trait). We concluded that including G × E in the prediction of untested maize hybrids increases the accuracy of genomic models.
Oertel, Bruno Georg; Lötsch, Jörn
2013-01-01
The medical impact of pain is such that much effort is being applied to develop novel analgesic drugs directed towards new targets and to investigate the analgesic efficacy of known drugs. Ongoing research requires cost-saving tools to translate basic science knowledge into clinically effective analgesic compounds. In this review we have re-examined the prediction of clinical analgesia by human experimental pain models as a basis for model selection in phase I studies. The overall prediction of analgesic efficacy or failure of a drug correlated well between experimental and clinical settings. However, correct model selection requires more detailed information about which model predicts a particular clinical pain condition. We hypothesized that if an analgesic drug was effective in an experimental pain model and also a specific clinical pain condition, then that model might be predictive for that particular condition and should be selected for development as an analgesic for that condition. The validity of the prediction increases with an increase in the numbers of analgesic drug classes for which this agreement was shown. From available evidence, only five clinical pain conditions were correctly predicted by seven different pain models for at least three different drugs. Most of these models combine a sensitization method. The analysis also identified several models with low impact with respect to their clinical translation. Thus, the presently identified agreements and non-agreements between analgesic effects on experimental and on clinical pain may serve as a solid basis to identify complex sets of human pain models that bridge basic science with clinical pain research. PMID:23082949
A dynamic model for predicting growth in zinc-deficient stunted infants given supplemental zinc.
Wastney, Meryl E; McDonald, Christine M; King, Janet C
2018-05-01
Zinc deficiency limits infant growth and increases susceptibility to infections, which further compromises growth. Zinc supplementation improves the growth of zinc-deficient stunted infants, but the amount, frequency, and duration of zinc supplementation required to restore growth in an individual child is unknown. A dynamic model of zinc metabolism that predicts changes in weight and length of zinc-deficient, stunted infants with dietary zinc would be useful to define effective zinc supplementation regimens. The aims of this study were to develop a dynamic model for zinc metabolism in stunted, zinc-deficient infants and to use that model to predict the growth response when those infants are given zinc supplements. A model of zinc metabolism was developed using data on zinc kinetics, tissue zinc, and growth requirements for healthy 9-mo-old infants. The kinetic model was converted to a dynamic model by replacing the rate constants for zinc absorption and excretion with functions for these processes that change with zinc intake. Predictions of the dynamic model, parameterized for zinc-deficient, stunted infants, were compared with the results of 5 published zinc intervention trials. The model was then used to predict the results for zinc supplementation regimes that varied in the amount, frequency, and duration of zinc dosing. Model predictions agreed with published changes in plasma zinc after zinc supplementation. Predictions of weight and length agreed with 2 studies, but overpredicted values from a third study in which other nutrient deficiencies may have been growth limiting; the model predicted that zinc absorption was impaired in that study. The model suggests that frequent, smaller doses (5-10 mg Zn/d) are more effective for increasing growth in stunted, zinc-deficient 9-mo-old infants than are larger, less-frequent doses. The dose amount affects the duration of dosing necessary to restore and maintain plasma zinc concentration and growth.
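The structure of such a dynamic model, in which fixed rate constants for absorption and excretion are replaced by functions of zinc intake, can be sketched with a toy two-pool system integrated with SciPy. All pool sizes, rate functions and doses below are invented for illustration and are not the parameter values of the published model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative two-pool sketch (plasma + tissue zinc) in which fractional
# absorption falls and endogenous excretion rises with zinc intake, the kind
# of intake-dependent function the dynamic model uses in place of constants.
def rates(t, z, intake_mg_d):
    plasma, tissue = z
    absorbed = intake_mg_d * 0.6 / (1.0 + intake_mg_d / 5.0)   # saturating absorption
    excreted = 0.3 * plasma * (1.0 + intake_mg_d / 20.0)       # intake-dependent loss
    to_tissue, from_tissue = 0.8 * plasma, 0.05 * tissue
    dplasma = absorbed - excreted - to_tissue + from_tissue
    dtissue = to_tissue - from_tissue
    return [dplasma, dtissue]

for dose in (0.0, 5.0, 10.0):           # mg supplemental Zn per day (illustrative)
    sol = solve_ivp(rates, (0, 60), [2.0, 50.0], args=(3.0 + dose,), max_step=0.5)
    print(f"dose {dose:4.1f} mg/d -> plasma pool after 60 d: {sol.y[0, -1]:.2f} mg")
```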
Uncertainty analysis of a groundwater flow model in east-central Florida
Sepúlveda, Nicasio; Doherty, John E.
2014-01-01
A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The “Null Space Monte Carlo” method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model’s capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial/temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context.
Howe, P D; Bryant, S R; Shreeve, T G
2007-10-01
We use field observations in two geographic regions within the British Isles and regression and neural network models to examine the relationship between microhabitat use, thoracic temperatures and activity in a widespread lycaenid butterfly, Polyommatus icarus. We also make predictions for future activity under climate change scenarios. Individuals from a univoltine northern population initiated flight with significantly lower thoracic temperatures than individuals from a bivoltine southern population. Activity is dependent on body temperature and neural network models of body temperature are better at predicting body temperature than generalized linear models. Neural network models of activity with a sole input of predicted body temperature (using weather and microclimate variables) are good predictors of observed activity and were better predictors than generalized linear models. By modelling activity under climate change scenarios for 2080 we predict differences in activity in relation to both regional differences of climate change and differing body temperature requirements for activity in different populations. Under average conditions for low-emission scenarios there will be little change in the activity of individuals from central-southern Britain and a reduction in northwest Scotland from 2003 activity levels. Under high-emission scenarios, flight-dependent activity in northwest Scotland will increase the greatest, despite smaller predicted increases in temperature and decreases in cloud cover. We suggest that neural network models are an effective way of predicting future activity in changing climates for microhabitat-specialist butterflies and that regional differences in the thermoregulatory response of populations will have profound effects on how they respond to climate change.
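The two-stage neural-network approach, predicting thoracic temperature from weather and microclimate variables and then predicting flight activity from the predicted temperature, can be sketched with scikit-learn as below. The simulated data and the flight-temperature threshold are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stage 1: neural network predicts thoracic temperature from microclimate.
# Stage 2: a second model predicts activity from the predicted temperature.
rng = np.random.default_rng(3)
n = 600
air_t, solar, wind = rng.normal(16, 4, n), rng.uniform(0, 900, n), rng.gamma(2, 1.5, n)
thorax_t = air_t + 0.012 * solar - 0.8 * wind + rng.normal(0, 1, n)
active = (thorax_t + rng.normal(0, 1.5, n) > 22).astype(int)   # assumed flight threshold

X = np.column_stack([air_t, solar, wind])
temp_model = make_pipeline(StandardScaler(),
                           MLPRegressor((16, 8), max_iter=3000, random_state=0))
temp_model.fit(X[:400], thorax_t[:400])
t_hat = temp_model.predict(X)

act_model = MLPClassifier((8,), max_iter=3000, random_state=0)
act_model.fit(t_hat[:400, None], active[:400])
print("held-out activity accuracy:",
      act_model.score(t_hat[400:, None], active[400:]).round(2))
```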
2011-01-01
Background Genetic risk models could potentially be useful in identifying high-risk groups for the prevention of complex diseases. We investigated the performance of this risk stratification strategy by examining epidemiological parameters that impact the predictive ability of risk models. Methods We assessed sensitivity, specificity, and positive and negative predictive value for all possible risk thresholds that can define high-risk groups and investigated how these measures depend on the frequency of disease in the population, the frequency of the high-risk group, and the discriminative accuracy of the risk model, as assessed by the area under the receiver-operating characteristic curve (AUC). In a simulation study, we modeled genetic risk scores of 50 genes with equal odds ratios and genotype frequencies, and varied the odds ratios and the disease frequency across scenarios. We also performed a simulation of age-related macular degeneration risk prediction based on published odds ratios and frequencies for six genetic risk variants. Results We show that when the frequency of the high-risk group was lower than the disease frequency, positive predictive value increased with the AUC but sensitivity remained low. When the frequency of the high-risk group was higher than the disease frequency, sensitivity was high but positive predictive value remained low. When both frequencies were equal, both positive predictive value and sensitivity increased with increasing AUC, but higher AUC was needed to maximize both measures. Conclusions The performance of risk stratification is strongly determined by the frequency of the high-risk group relative to the frequency of disease in the population. The identification of high-risk groups with appreciable combinations of sensitivity and positive predictive value requires higher AUC. PMID:21797996
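The dependence of sensitivity and positive predictive value on the relative frequencies of the high-risk group and the disease can be reproduced with a simple binormal simulation: draw a risk score whose case-control separation yields a chosen AUC, label the top fraction of the population as high risk, and tabulate the two measures. The sketch below is a generic illustration, not the 50-gene risk-score simulation of the study.

```python
import numpy as np
from scipy.stats import norm

def risk_stratification(auc, disease_freq, highrisk_freq, n=1_000_000, seed=0):
    """Simulate a continuous risk score with a given AUC (binormal model) and
    report sensitivity and PPV when the top `highrisk_freq` fraction of the
    population is labelled high risk."""
    rng = np.random.default_rng(seed)
    y = rng.random(n) < disease_freq
    mu = np.sqrt(2) * norm.ppf(auc)          # case-control mean shift giving this AUC
    score = rng.normal(0, 1, n) + mu * y
    cutoff = np.quantile(score, 1 - highrisk_freq)
    high = score >= cutoff
    sens = (high & y).sum() / y.sum()
    ppv = (high & y).sum() / high.sum()
    return sens, ppv

for q in (0.05, 0.10, 0.20):                 # frequency of the high-risk group
    s, p = risk_stratification(auc=0.80, disease_freq=0.10, highrisk_freq=q)
    print(f"high-risk freq {q:.2f}: sensitivity {s:.2f}, PPV {p:.2f}")
```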
The origins of computer weather prediction and climate modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynch, Peter
2008-03-20
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
The origins of computer weather prediction and climate modeling
NASA Astrophysics Data System (ADS)
Lynch, Peter
2008-03-01
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
Deep learning architecture for air quality predictions.
Li, Xiang; Peng, Ling; Hu, Yuan; Shao, Jing; Chi, Tianhe
2016-11-01
With the rapid development of urbanization and industrialization, many developing countries are suffering from heavy air pollution. Governments and citizens have expressed increasing concern regarding air pollution because it affects human health and sustainable development worldwide. Current air quality prediction methods mainly use shallow models; however, these methods produce unsatisfactory results, which inspired us to investigate methods of predicting air quality based on deep architecture models. In this paper, a novel spatiotemporal deep learning (STDL)-based air quality prediction method that inherently considers spatial and temporal correlations is proposed. A stacked autoencoder (SAE) model is used to extract inherent air quality features, and it is trained in a greedy layer-wise manner. Compared with traditional time series prediction models, our model can predict the air quality of all stations simultaneously and shows temporal stability in all seasons. Moreover, a comparison with the spatiotemporal artificial neural network (STANN), autoregressive moving average (ARMA), and support vector regression (SVR) models demonstrates that the proposed method of performing air quality predictions has superior performance.
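A greedy layer-wise pretrained stacked autoencoder with a regression head that outputs all stations at once can be sketched in PyTorch as below. Layer sizes, training schedule and the simulated multi-station data are illustrative assumptions; the paper's STDL configuration is not reproduced.

```python
import torch
from torch import nn

# Greedy layer-wise pretraining of a stacked autoencoder on simulated
# multi-station readings, followed by fine-tuning with a linear readout
# that predicts the next-hour value for every station simultaneously.
torch.manual_seed(0)
n_stations, hidden = 12, [32, 16]
X = torch.randn(5000, n_stations)            # stand-in for historical readings
y = X @ torch.randn(n_stations, n_stations)  # stand-in for next-hour targets

encoders, inp = [], X
for h in hidden:                             # greedy layer-wise pretraining
    enc, dec = nn.Linear(inp.shape[1], h), nn.Linear(h, inp.shape[1])
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(torch.sigmoid(enc(inp))), inp)
        loss.backward()
        opt.step()
    encoders.append(enc)
    inp = torch.sigmoid(enc(inp)).detach()   # feed the codes to the next layer

# Fine-tune the stacked encoder with a regression head for all stations.
model = nn.Sequential(*[nn.Sequential(e, nn.Sigmoid()) for e in encoders],
                      nn.Linear(hidden[-1], n_stations))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    nn.functional.mse_loss(model(X), y).backward()
    opt.step()
print("train MSE:", nn.functional.mse_loss(model(X), y).item())
```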
Thiels, Cornelius A; Yu, Denny; Abdelrahman, Amro M; Habermann, Elizabeth B; Hallbeck, Susan; Pasupathy, Kalyan S; Bingener, Juliane
2017-01-01
Reliable prediction of operative duration is essential for improving patient and care team satisfaction, optimizing resource utilization and reducing cost. Current operative scheduling systems are unreliable and contribute to costly over- and underestimation of operative time. We hypothesized that the inclusion of patient-specific factors would improve the accuracy in predicting operative duration. We reviewed all elective laparoscopic cholecystectomies performed at a single institution between 01/2007 and 06/2013. Concurrent procedures were excluded. Univariate analysis evaluated the effect of age, gender, BMI, ASA, laboratory values, smoking, and comorbidities on operative duration. Multivariable linear regression models were constructed using the significant factors (p < 0.05). The patient factors model was compared to the traditional surgical scheduling system estimates, which uses historical surgeon-specific and procedure-specific operative duration. External validation was done using the ACS-NSQIP database (n = 11,842). A total of 1801 laparoscopic cholecystectomy patients met inclusion criteria. Female sex was associated with reduced operative duration (-7.5 min, p < 0.001 vs. male sex) while increasing BMI (+5.1 min BMI 25-29.9, +6.9 min BMI 30-34.9, +10.4 min BMI 35-39.9, +17.0 min BMI 40+, all p < 0.05 vs. normal BMI), increasing ASA (+7.4 min ASA III, +38.3 min ASA IV, all p < 0.01 vs. ASA I), and elevated liver function tests (+7.9 min, p < 0.01 vs. normal) were predictive of increased operative duration on univariate analysis. A model was then constructed using these predictive factors. The traditional surgical scheduling system was poorly predictive of actual operative duration (R² = 0.001) compared to the patient factors model (R² = 0.08). The model remained predictive on external validation (R² = 0.14). The addition of surgeon as a variable in the institutional model further improved predictive ability of the model (R² = 0.18). The use of routinely available pre-operative patient factors improves the prediction of operative duration during cholecystectomy.
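A patient-factors regression of operative duration can be outlined with scikit-learn: simulate the predictors reported as significant (sex, BMI, ASA class, liver function tests), fit a linear model, and compare its R² with a constant "historical average" baseline standing in for the traditional scheduling estimate. The data and effect sizes below are illustrative, not the institutional records.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Simulated cohort echoing (not reproducing) the reported direction of effects:
# female sex shorter, higher BMI/ASA and abnormal liver function tests longer.
rng = np.random.default_rng(4)
n = 1801
df = pd.DataFrame({
    "female": rng.binomial(1, 0.7, n),
    "bmi": rng.normal(30, 6, n),
    "asa3plus": rng.binomial(1, 0.3, n),
    "abnormal_lfts": rng.binomial(1, 0.2, n),
})
df["minutes"] = (70 - 7.5 * df.female + 1.2 * (df.bmi - 25).clip(0)
                 + 7.4 * df.asa3plus + 7.9 * df.abnormal_lfts
                 + rng.normal(0, 25, n))

X, y = df.drop(columns="minutes"), df["minutes"]
scheduler_estimate = np.full(n, y.mean())          # crude "historical average" baseline
model = LinearRegression().fit(X, y)
print("baseline R2:", r2_score(y, scheduler_estimate).round(3))
print("patient-factors model R2:", r2_score(y, model.predict(X)).round(3))
```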
Predicting human chronically paralyzed muscle force: a comparison of three mathematical models.
Frey Law, Laura A; Shields, Richard K
2006-03-01
Chronic spinal cord injury (SCI) induces detrimental musculoskeletal adaptations that adversely affect health status, ranging from muscle paralysis and skin ulcerations to osteoporosis. SCI rehabilitative efforts may increasingly focus on preserving the integrity of paralyzed extremities to maximize health quality using electrical stimulation for isometric training and/or functional activities. Subject-specific mathematical muscle models could prove valuable for predicting the forces necessary to achieve therapeutic loading conditions in individuals with paralyzed limbs. Although numerous muscle models are available, three modeling approaches were chosen that can accommodate a variety of stimulation input patterns. To our knowledge, no direct comparisons between models using paralyzed muscle have been reported. The three models include 1) a simple second-order linear model with three parameters and 2) two six-parameter nonlinear models (a second-order nonlinear model and a Hill-derived nonlinear model). Soleus muscle forces from four individuals with complete, chronic SCI were used to optimize each model's parameters (using an increasing and decreasing frequency ramp) and to assess the models' predictive accuracies for constant and variable (doublet) stimulation trains at 5, 10, and 20 Hz in each individual. Despite the large differences in modeling approaches, the mean predicted force errors differed only moderately (8-15% error; P=0.0042), suggesting physiological force can be adequately represented by multiple mathematical constructs. The two nonlinear models predicted specific force characteristics better than the linear model in nearly all stimulation conditions, with minimal differences between the two nonlinear models. Either nonlinear mathematical model can provide reasonable force estimates; individual application needs may dictate the preferred modeling strategy.
de Man-van Ginkel, Janneke M; Hafsteinsdóttir, Thóra B; Lindeman, Eline; Ettema, Roelof G A; Grobbee, Diederick E; Schuurmans, Marieke J
2013-09-01
The timely detection of post-stroke depression is complicated by a decreasing length of hospital stay. Therefore, the Post-stroke Depression Prediction Scale was developed and validated. The Post-stroke Depression Prediction Scale is a clinical prediction model for the early identification of stroke patients at increased risk for post-stroke depression. The study included 410 consecutive stroke patients who were able to communicate adequately. Predictors were collected within the first week after stroke. Between 6 to 8 weeks after stroke, major depressive disorder was diagnosed using the Composite International Diagnostic Interview. Multivariable logistic regression models were fitted. A bootstrap-backward selection process resulted in a reduced model. Performance of the model was expressed by discrimination, calibration, and accuracy. The model included a medical history of depression or other psychiatric disorders, hypertension, angina pectoris, and the Barthel Index item dressing. The model had acceptable discrimination, based on an area under the receiver operating characteristic curve of 0.78 (0.72-0.85), and calibration (P value of the U-statistic, 0.96). Transforming the model to an easy-to-use risk-assessment table, the lowest risk category (sum score, <-10) showed a 2% risk of depression, which increased to 82% in the highest category (sum score, >21). The clinical prediction model enables clinicians to estimate the degree of the depression risk for an individual patient within the first week after stroke.
Predicting ecological responses in a changing ocean: the effects of future climate uncertainty.
Freer, Jennifer J; Partridge, Julian C; Tarling, Geraint A; Collins, Martin A; Genner, Martin J
2018-01-01
Predicting how species will respond to climate change is a growing field in marine ecology, yet knowledge of how to incorporate the uncertainty from future climate data into these predictions remains a significant challenge. To help overcome it, this review separates climate uncertainty into its three components (scenario uncertainty, model uncertainty, and internal model variability) and identifies four criteria that constitute a thorough interpretation of an ecological response to climate change in relation to these parts (awareness, access, incorporation, communication). Through a literature review, the extent to which the marine ecology community has addressed these criteria in their predictions was assessed. Despite a high awareness of climate uncertainty, articles favoured the most severe emission scenario, and only a subset of climate models were used as input into ecological analyses. In the case of sea surface temperature, these models can have projections that are unrepresentative of a larger ensemble mean. Moreover, 91% of studies failed to incorporate the internal variability of a climate model into results. We explored the influence that the choice of emission scenario, climate model, and model realisation can have when predicting the future distribution of the pelagic fish, Electrona antarctica. Future distributions were highly influenced by the choice of climate model, and in some cases, internal variability was important in determining the direction and severity of the distribution change. Increased clarity and availability of processed climate data would facilitate more comprehensive explorations of climate uncertainty, and increase the quality and standard of marine prediction studies.
Lofgren, B.M.; Quinn, F.H.; Clites, A.H.; Assel, R.A.; Eberhardt, A.J.; Luukkonen, C.L.
2002-01-01
The results of general circulation model predictions of the effects of climate change from the Canadian Centre for Climate Modeling and Analysis (model CGCM1) and the United Kingdom Meteorological Office's Hadley Centre (model HadCM2) have been used to derive potential impacts on the water resources of the Great Lakes basin. These impacts can influence the levels of the Great Lakes and the volumes of channel flow among them, thus affecting their value for interests such as riparians, shippers, recreational boaters, and natural ecosystems. On one hand, a hydrological modeling suite using input data from the CGCM1 predicts large drops in lake levels, up to a maximum of 1.38 m on Lakes Michigan and Huron by 2090. This is due to a combination of a decrease in precipitation and an increase in air temperature that leads to an increase in evaporation. On the other hand, using input from HadCM2, rises in lake levels are predicted, up to a maximum of 0.35 m on Lakes Michigan and Huron by 2090, due to increased precipitation and a reduced increase in air temperature. An interest satisfaction model shows sharp decreases in the satisfaction of the interests of commercial navigation, recreational boating, riparians, and hydropower due to lake level decreases. Most interest satisfaction scores are also reduced by lake level increases. Drastic reductions in ice cover also result from the temperature increases such that under the CGCM1 predictions, most of Lake Erie has 96% of its winters ice-free by 2090. Assessment is also made of impacts on the groundwater-dependent region of Lansing, Michigan.
USDA-ARS?s Scientific Manuscript database
Simulation models are increasingly used to predict effects of conservation practices on transport of pesticides to water bodies. We used two models - the Agricultural Policy/Environmental eXtender (APEX) and the Riparian Ecosystem Management Model (REMM) to predict the movement of the herbicide, at...
Tomáš Václavík; Ross K. Meentemeyer
2009-01-01
Species distribution models (SDMs) based on statistical relationships between occurrence data and underlying environmental conditions are increasingly used to predict spatial patterns of biological invasions and prioritize locations for early detection and control of invasion outbreaks. However, invasive species distribution models (iSDMs) face special challenges...
Jaffe, B.E.; Rubin, D.M.
1996-01-01
The time-dependent response of sediment suspension to flow velocity was explored by modeling field measurements collected in the surf zone during a large storm. Linear and nonlinear models were created and tested using flow velocity as input and suspended-sediment concentration as output. A sequence of past velocities (velocity history), as well as velocity from the same instant as the suspended-sediment concentration, was used as input; this velocity history length was allowed to vary. The models also allowed for a lag between input (instantaneous velocity or end of velocity sequence) and output (suspended-sediment concentration). Predictions of concentration from instantaneous velocity or instantaneous velocity raised to a power (up to 8) using linear models were poor (correlation coefficients between predicted and observed concentrations were less than 0.10). Allowing a lag between velocity and concentration improved linear models (correlation coefficient of 0.30), with optimum lag time increasing with elevation above the seabed (from 1.5 s at 13 cm to 8.5 s at 60 cm). These lags are largely due to the time for an observed flow event to affect the bed and mix sediment upward. Using a velocity history further improved linear models (correlation coefficient of 0.43). The best linear model used 12.5 s of velocity history (approximately one wave period) to predict concentration. Nonlinear models gave better predictions than linear models, and, as with linear models, nonlinear models using a velocity history performed better than models using only instantaneous velocity as input. Including a lag time between the velocity and concentration also improved the predictions. The best model (correlation coefficient of 0.58) used 3 s (approximately a quarter wave period) of the cross-shore velocity squared, starting at 4.5 s before the observed concentration, to predict concentration. Using a velocity history increases the performance of the models by specifying a more complete description of the dynamical forcing of the flow (including accelerations and wave phase and shape) responsible for sediment suspension. Incorporating such a velocity history and a lag time into the formulation of the forcing for time-dependent models for sediment suspension in the surf zone will greatly increase our ability to predict suspended-sediment transport.
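The velocity-history idea can be sketched as a lagged linear regression: build a design matrix from a short window of past squared cross-shore velocities, offset by a fixed lag from the concentration sample, and fit by least squares. The series below are synthetic; the 3 s window and 4.5 s lag are simply the values quoted above, applied to made-up data at an assumed 2 Hz sampling rate.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Predict suspended-sediment concentration from a window of past squared
# velocities, allowing a lag between the end of the window and the sample.
rng = np.random.default_rng(5)
dt, n = 0.5, 4000                       # assumed 2 Hz samples
u = np.sin(2 * np.pi * np.arange(n) * dt / 10.0) + 0.3 * rng.normal(size=n)
c = np.convolve(u**2, np.ones(6) / 6, mode="full")[:n]          # fake "response"
c = np.roll(c, 9) + 0.1 * rng.normal(size=n)                    # ~4.5 s lag

def history_matrix(series, window, lag):
    rows = [series[i - window - lag:i - lag] for i in range(window + lag, len(series))]
    return np.array(rows)

window, lag = 6, 9                      # 3 s of history, 4.5 s lag at 2 Hz
X = history_matrix(u**2, window, lag)
y = c[window + lag:]
model = LinearRegression().fit(X, y)
r = np.corrcoef(model.predict(X), y)[0, 1]
print("correlation between predicted and observed concentration:", r.round(2))
```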
Integrating linear optimization with structural modeling to increase HIV neutralization breadth.
Sevy, Alexander M; Panda, Swetasudha; Crowe, James E; Meiler, Jens; Vorobeychik, Yevgeniy
2018-02-01
Computational protein design has been successful in modeling fixed backbone proteins in a single conformation. However, when modeling large ensembles of flexible proteins, current methods in protein design have been insufficient. Large barriers in the energy landscape are difficult to traverse while redesigning a protein sequence, and as a result current design methods only sample a fraction of available sequence space. We propose a new computational approach that combines traditional structure-based modeling using the Rosetta software suite with machine learning and integer linear programming to overcome limitations in the Rosetta sampling methods. We demonstrate the effectiveness of this method, which we call BROAD, by benchmarking the performance on increasing predicted breadth of anti-HIV antibodies. We use this novel method to increase predicted breadth of naturally-occurring antibody VRC23 against a panel of 180 divergent HIV viral strains and achieve 100% predicted binding against the panel. In addition, we compare the performance of this method to state-of-the-art multistate design in Rosetta and show that we can outperform the existing method significantly. We further demonstrate that sequences recovered by this method recover known binding motifs of broadly neutralizing anti-HIV antibodies. Finally, our approach is general and can be extended easily to other protein systems. Although our modeled antibodies were not tested in vitro, we predict that these variants would have greatly increased breadth compared to the wild-type antibody.
NASA Astrophysics Data System (ADS)
Bray, Casey D.; Battye, William; Aneja, Viney P.; Tong, Daniel; Lee, Pius; Tang, Youhua; Nowak, John B.
2017-08-01
Atmospheric ammonia (NH3) is not only a major precursor gas for fine particulate matter (PM2.5), but it also negatively impacts the environment through eutrophication and acidification. As the need for agriculture, the largest contributing source of NH3, increases, NH3 emissions will also increase. Therefore, it is crucial to accurately predict ammonia concentrations. The objective of this study is to determine how well the U.S. National Oceanic and Atmospheric Administration (NOAA) National Air Quality Forecast Capability (NAQFC) system predicts ammonia concentrations using their Community Multiscale Air Quality (CMAQ) model (v4.6). Model predictions of atmospheric ammonia are compared against measurements taken during the NOAA California Nexus (CalNex) field campaign that took place between May and July of 2010. Additionally, the model predictions were also compared against ammonia measurements obtained from the Tropospheric Emission Spectrometer (TES) on the Aura satellite. The results of this study showed that the CMAQ model tended to under predict concentrations of NH3. When comparing the CMAQ model with the CalNex measurements, the model under predicted NH3 by a factor of 2.4 (NMB = -58%). However, the ratio of the median measured NH3 concentration to the median of the modeled NH3 concentration was 0.8. When compared with the TES measurements, the model under predicted concentrations of NH3 by a factor of 4.5 (NMB = -77%), with a ratio of the median retrieved NH3 concentration to the median of the modeled NH3 concentration of 3.1. Because the model was the least accurate over agricultural regions, it is likely that the major source of error lies within the agricultural emissions in the National Emissions Inventory. In addition to this, the lack of the use of bidirectional exchange of NH3 in the model could also contribute to the observed bias.
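The normalized mean bias and the ratio of medians used above are straightforward to compute; the sketch below shows both for a handful of made-up observation-model pairs (not the CalNex or TES values).

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs) * 100%, a standard evaluation metric."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * (model - obs).sum() / obs.sum()

obs  = np.array([12.0, 30.0, 5.0, 48.0, 20.0])   # ppbv, made-up observations
cmaq = np.array([ 6.0, 11.0, 3.0, 19.0,  9.0])   # ppbv, made-up model values

print("NMB:", round(normalized_mean_bias(cmaq, obs), 1), "%")
print("ratio of medians (obs/model):", round(np.median(obs) / np.median(cmaq), 1))
```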
Climate warming causes life-history evolution in a model for Atlantic cod (Gadus morhua).
Holt, Rebecca E; Jørgensen, Christian
2014-01-01
Climate change influences the marine environment, with ocean warming being the foremost driving factor governing changes in the physiology and ecology of fish. At the individual level, increasing temperature influences bioenergetics and numerous physiological and life-history processes, which have consequences for the population level and beyond. We provide a state-dependent energy allocation model that predicts temperature-induced adaptations for life histories and behaviour for the North-East Arctic stock (NEA) of Atlantic cod (Gadus morhua) in response to climate warming. The key constraint is temperature-dependent respiratory physiology, and the model includes a number of trade-offs that reflect key physiological and ecological processes. Dynamic programming is used to find an evolutionarily optimal strategy of foraging and energy allocation that maximizes expected lifetime reproductive output given constraints from physiology and ecology. The optimal strategy is then simulated in a population, where survival, foraging behaviour, growth, maturation and reproduction emerge. Using current forcing, the model reproduces patterns of growth, size-at-age, maturation, gonad production and natural mortality for NEA cod. The predicted climate responses are positive for this stock; under a 2°C warming, the model predicted increased growth rates and a larger asymptotic size. Maturation age was unaffected, but gonad weight was predicted to more than double. Predictions for a wider range of temperatures, from 2 to 7°C, show that temperature responses were gradual; fish were predicted to grow faster and increase reproductive investment at higher temperatures. An emergent pattern of higher risk acceptance and increased foraging behaviour was also predicted. Our results provide important insight into the effects of climate warming on NEA cod by revealing the underlying mechanisms and drivers of change. We show how temperature-induced adaptations of behaviour and several life-history traits are not only mediated by physiology but also by trade-offs with survival, which has consequences for conservation physiology.
Climate warming causes life-history evolution in a model for Atlantic cod (Gadus morhua)
Holt, Rebecca E.; Jørgensen, Christian
2014-01-01
Climate change influences the marine environment, with ocean warming being the foremost driving factor governing changes in the physiology and ecology of fish. At the individual level, increasing temperature influences bioenergetics and numerous physiological and life-history processes, which have consequences for the population level and beyond. We provide a state-dependent energy allocation model that predicts temperature-induced adaptations for life histories and behaviour for the North-East Arctic stock (NEA) of Atlantic cod (Gadus morhua) in response to climate warming. The key constraint is temperature-dependent respiratory physiology, and the model includes a number of trade-offs that reflect key physiological and ecological processes. Dynamic programming is used to find an evolutionarily optimal strategy of foraging and energy allocation that maximizes expected lifetime reproductive output given constraints from physiology and ecology. The optimal strategy is then simulated in a population, where survival, foraging behaviour, growth, maturation and reproduction emerge. Using current forcing, the model reproduces patterns of growth, size-at-age, maturation, gonad production and natural mortality for NEA cod. The predicted climate responses are positive for this stock; under a 2°C warming, the model predicted increased growth rates and a larger asymptotic size. Maturation age was unaffected, but gonad weight was predicted to more than double. Predictions for a wider range of temperatures, from 2 to 7°C, show that temperature responses were gradual; fish were predicted to grow faster and increase reproductive investment at higher temperatures. An emergent pattern of higher risk acceptance and increased foraging behaviour was also predicted. Our results provide important insight into the effects of climate warming on NEA cod by revealing the underlying mechanisms and drivers of change. We show how temperature-induced adaptations of behaviour and several life-history traits are not only mediated by physiology but also by trade-offs with survival, which has consequences for conservation physiology. PMID:27293671
Kesorn, Kraisak; Ongruk, Phatsavee; Chompoosri, Jakkrawarn; Phumee, Atchara; Thavara, Usavadee; Tawatsin, Apiwat; Siriyasatien, Padet
2015-01-01
Background In the past few decades, several researchers have proposed highly accurate prediction models that have typically relied on climate parameters. However, climate factors can be unreliable and can lower the effectiveness of prediction when they are applied in locations where climate factors do not differ significantly. The purpose of this study was to improve a dengue surveillance system in areas with similar climate by exploiting the infection rate in the Aedes aegypti mosquito and using the support vector machine (SVM) technique for forecasting the dengue morbidity rate. Methods and Findings Areas with high incidence of dengue outbreaks in central Thailand were studied. The proposed framework consisted of the following three major parts: 1) data integration, 2) model construction, and 3) model evaluation. We discovered that the Ae. aegypti female and larvae mosquito infection rates were significantly positively associated with the morbidity rate. Thus, the increasing infection rate of female mosquitoes and larvae led to a higher number of dengue cases, and the prediction performance increased when those predictors were integrated into a predictive model. In this research, we applied the SVM with the radial basis function (RBF) kernel to forecast the high morbidity rate and take precautions to prevent the development of pervasive dengue epidemics. The experimental results showed that the introduced parameters significantly increased the prediction accuracy to 88.37% when used on the test set data, and these parameters led to the highest performance compared to state-of-the-art forecasting models. Conclusions The infection rates of the Ae. aegypti female mosquitoes and larvae improved the morbidity rate forecasting efficiency better than the climate parameters used in classical frameworks. We demonstrated that the SVM-R-based model has high generalization performance and obtained the highest prediction performance compared to classical models as measured by the accuracy, sensitivity, specificity, and mean absolute error (MAE). PMID:25961289
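As a rough illustration of the modelling step described above, the following sketch fits a support vector regression with an RBF kernel to forecast a morbidity rate from mosquito infection-rate predictors. It is a minimal, hedged example using scikit-learn; the file and column names (dengue_surveillance.csv, female_infection_rate, larvae_infection_rate, morbidity_rate) are hypothetical placeholders, not the study's actual data pipeline.

```python
# Minimal sketch of an SVM (RBF kernel) morbidity-rate forecaster, assuming
# a CSV with hypothetical columns; not the authors' actual pipeline.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("dengue_surveillance.csv")                   # hypothetical file
X = df[["female_infection_rate", "larvae_infection_rate"]]    # hypothetical predictors
y = df["morbidity_rate"]                                      # hypothetical target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Standardize features and tune the RBF kernel hyperparameters by cross-validation.
model = GridSearchCV(
    make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    param_grid={"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 1.0]},
    cv=5,
)
model.fit(X_train, y_train)
print("test MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```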
Improving Genomic Prediction in Cassava Field Experiments Using Spatial Analysis.
Elias, Ani A; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2018-01-04
Cassava ( Manihot esculenta Crantz) is an important staple food in sub-Saharan Africa. Breeding experiments were conducted at the International Institute of Tropical Agriculture in cassava to select elite parents. Taking into account the heterogeneity in the field while evaluating these trials can increase the accuracy in estimation of breeding values. We used an exploratory approach using the parametric spatial kernels Power, Spherical, and Gaussian to determine the best kernel for a given scenario. The spatial kernel was fit simultaneously with a genomic kernel in a genomic selection model. Predictability of these models was tested through a 10-fold cross-validation method repeated five times. The best model was chosen as the one with the lowest prediction root mean squared error compared to that of the base model having no spatial kernel. Results from our real and simulated data studies indicated that predictability can be increased by accounting for spatial variation irrespective of the heritability of the trait. In real data scenarios we observed that the accuracy can be increased by a median value of 3.4%. Through simulations, we showed that a 21% increase in accuracy can be achieved. We also found that Range (row) directional spatial kernels, mostly Gaussian, explained the spatial variance in 71% of the scenarios when spatial correlation was significant. Copyright © 2018 Elias et al.
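To make the kernel idea concrete, the sketch below builds a Gaussian spatial kernel from plot row/column coordinates and a linear genomic (GBLUP-type) kernel from markers, then fits their sum with kernel ridge regression on a precomputed kernel and cross-validates the RMSE. This is a simplified stand-in for the authors' mixed-model formulation; the simulated data, the range parameter, and the equal weighting of the two kernels are assumptions.

```python
# Hedged sketch: combine a Gaussian spatial kernel with a genomic kernel.
# Data are simulated; shapes assumed: coords (n x 2 positions), M (n x p markers), y (n phenotypes).
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 200, 500
coords = rng.uniform(0, 50, size=(n, 2))           # simulated row/column positions
M = rng.integers(0, 3, size=(n, p)).astype(float)  # simulated biallelic markers (0/1/2)
y = rng.normal(size=n)                             # simulated phenotype

Z = (M - M.mean(axis=0)) / (M.std(axis=0) + 1e-9)
K_gen = Z @ Z.T / p                                # linear (GBLUP-type) genomic kernel

D = cdist(coords, coords)                          # pairwise field distances
theta = np.median(D)                               # assumed range parameter
K_sp = np.exp(-(D / theta) ** 2)                   # Gaussian spatial kernel

K = K_gen + K_sp                                   # equal weighting is an assumption

# 10-fold cross-validation with a precomputed combined kernel.
errs = []
for tr, te in KFold(n_splits=10, shuffle=True, random_state=1).split(K):
    kr = KernelRidge(alpha=1.0, kernel="precomputed")
    kr.fit(K[np.ix_(tr, tr)], y[tr])
    pred = kr.predict(K[np.ix_(te, tr)])
    errs.append(np.sqrt(np.mean((y[te] - pred) ** 2)))
print("CV RMSE:", np.mean(errs))
```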
Prediction of climate change in Brunei Darussalam using statistical downscaling model
NASA Astrophysics Data System (ADS)
Hasan, Dk. Siti Nurul Ain binti Pg. Ali; Ratnayake, Uditha; Shams, Shahriar; Nayan, Zuliana Binti Hj; Rahman, Ena Kartina Abdul
2017-06-01
Climate is changing, and evidence suggests that its impacts will influence our everyday lives, including agriculture, the built environment, energy management, food security and water resources. Brunei Darussalam, located in the heart of Borneo, will be affected in terms of both precipitation and temperature. It is therefore crucial to understand and assess how key climate indicators such as temperature and precipitation are expected to vary in the future in order to minimise these impacts. This study assesses the application of a statistical downscaling model (SDSM) for downscaling General Circulation Model (GCM) results for maximum and minimum temperatures along with precipitation in Brunei Darussalam. It investigates future climate changes under numerous scenarios using Hadley Centre Coupled Model, version 3 (HadCM3), Canadian Earth System Model (CanESM2) and third-generation Coupled Global Climate Model (CGCM3) outputs. The SDSM outputs were improved by implementing bias correction and by using a monthly sub-model instead of an annual sub-model. The assessment shows that the monthly sub-model performed better than the annual sub-model and that the approach is suitable for generating maximum temperatures, minimum temperatures and precipitation for the future periods 2017-2046 and 2047-2076. All models and scenarios considered were consistent in predicting an increasing trend in maximum temperature, an increasing trend in minimum temperature and a decreasing trend in precipitation. The largest overall trend in Tmax was observed for CanESM2 under the Representative Concentration Pathway (RCP) 8.5 scenario, an increase of 0.014 °C per year; accordingly, the highest predicted increase in average maximum temperature by 2076 is 1.4 °C. The same model predicts an increasing Tmin trend of 0.004 °C per year, while the highest Tmin trend, 0.009 °C per year, is seen under the CGCM3-A2 scenario; the largest predicted change in Tmin by 2076 is therefore 0.9 °C. Precipitation showed a maximum decreasing trend of 12.7 mm per year. The CanESM2 output also indicates that precipitation will become more erratic, in some years reaching 4800 mm and in others falling to about 1800 mm. All GCMs considered consistently predict that Brunei is very likely to experience more warming and less frequent precipitation events, but with the possibility of intensified and extremely heavy rainfall in the future.
Chouinard, Maud-Christine; Robichaud-Ekstrand, Sylvie
2007-02-01
Several authors have questioned the transtheoretical model. Determining the predictive value of each cognitive-behavioural element within this model could explain the multiple successes reported in smoking cessation programmes. The purpose of this study was to predict point-prevalent smoking abstinence at 2 and 6 months, using the constructs of the transtheoretical model, when applied to a pooled sample of individuals who were hospitalized for a cardiovascular event. The study follows a predictive correlation design. Recently hospitalized patients (n=168) with cardiovascular disease were pooled from a randomized, controlled trial. Independent variables of the predictive transtheoretical model comprise stages and processes of change, pros and cons to quit smoking (decisional balance), self-efficacy, and social support. These were evaluated at baseline, 2 and 6 months. Compared to smokers, individuals who abstained from smoking at 2 and 6 months were more confident at baseline to remain non-smokers, perceived less pros and cons to continue smoking, utilized less consciousness raising and self-re-evaluation experiential processes of change, and received more positive reinforcement from their social network with regard to their smoke-free behaviour. Self-efficacy and stages of change at baseline were predictive of smoking abstinence after 6 months. Other variables found to be predictive of smoking abstinence at 6 months were an increase in self-efficacy; an increase in positive social support behaviour and a decrease of the pros within the decisional balance. The results partially support the predictive value of the transtheoretical model constructs in smoking cessation for cardiovascular disease patients.
Hu, Chen; Steingrimsson, Jon Arni
2018-01-01
A crucial component of making individualized treatment decisions is to accurately predict each patient's disease risk. In clinical oncology, disease risks are often measured through time-to-event data, such as overall survival and progression/recurrence-free survival, and are often subject to censoring. Risk prediction models based on recursive partitioning methods are becoming increasingly popular largely due to their ability to handle nonlinear relationships, higher-order interactions, and/or high-dimensional covariates. The most popular recursive partitioning methods are versions of the Classification and Regression Tree (CART) algorithm, which builds a simple interpretable tree structured model. With the aim of increasing prediction accuracy, the random forest algorithm averages multiple CART trees, creating a flexible risk prediction model. Risk prediction models used in clinical oncology commonly use both traditional demographic and tumor pathological factors as well as high-dimensional genetic markers and treatment parameters from multimodality treatments. In this article, we describe the most commonly used extensions of the CART and random forest algorithms to right-censored outcomes. We focus on how they differ from the methods for noncensored outcomes, and how the different splitting rules and methods for cost-complexity pruning impact these algorithms. We demonstrate these algorithms by analyzing a randomized Phase III clinical trial of breast cancer. We also conduct Monte Carlo simulations to compare the prediction accuracy of survival forests with more commonly used regression models under various scenarios. These simulation studies aim to evaluate how sensitive the prediction accuracy is to the underlying model specifications, the choice of tuning parameters, and the degrees of missing covariates.
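As an illustration of the forest extension to right-censored outcomes discussed above, the sketch below fits a random survival forest and scores it with the concordance index. It assumes the scikit-survival package is available and uses simulated covariates, event times, and censoring purely as stand-ins; it is not the trial analysis performed in the article.

```python
# Hedged sketch of a random survival forest for right-censored data (assumes scikit-survival).
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from sksurv.metrics import concordance_index_censored
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 400, 10
X = rng.normal(size=(n, p))
risk = X[:, 0] + 0.5 * X[:, 1]                     # true linear risk used to simulate times
event_time = rng.exponential(scale=np.exp(-risk))  # survival times depend on risk
censor_time = rng.exponential(scale=1.5, size=n)   # independent censoring
time = np.minimum(event_time, censor_time)
event = event_time <= censor_time

y = Surv.from_arrays(event=event, time=time)       # structured array (event indicator, time)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10, random_state=1)
rsf.fit(X_tr, y_tr)

cindex = concordance_index_censored(y_te["event"], y_te["time"], rsf.predict(X_te))[0]
print("test concordance index:", round(cindex, 3))
```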
Dependence of particle volume fraction on sound velocity and attenuation of EPDM composites.
Kim, K S; Lee, K I; Kim, H Y; Yoon, S W; Hong, S H
2007-05-01
The sound velocity and attenuation coefficient of EPDM (ethylene propylene diene monomer) composites filled with silicon carbide particles (SiCp) at various volume fractions (0-40%) were investigated experimentally and theoretically. The experiments used a through-transmission technique. For the theoretical predictions, mechanical property models such as the Reuss model and the coherent potential approximation (CPA) model were employed. The experimental results showed that the sound velocity decreased as the SiCp volume fraction increased up to 30% and then increased for the 40 vol% specimen. The attenuation coefficient increased with increasing SiCp volume fraction. The modified Reuss model with a longitudinal elastic modulus best predicted the experimental sound velocity and elastic modulus results.
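For reference, a minimal worked form of the iso-stress (Reuss) mixing rule used for such predictions is sketched below; M denotes the longitudinal (constrained) modulus of particle (p) and matrix (m), V_p the particle volume fraction, and c_L the longitudinal sound velocity. The exact modified form used by the authors may differ.

```latex
% Reuss (iso-stress) estimate of the composite longitudinal modulus,
% mixture density, and the resulting longitudinal sound velocity (a sketch).
\frac{1}{M_c} = \frac{V_p}{M_p} + \frac{1 - V_p}{M_m}, \qquad
\rho_c = V_p\,\rho_p + (1 - V_p)\,\rho_m, \qquad
c_L = \sqrt{\frac{M_c}{\rho_c}}
```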
Effect of tumor amplitude and frequency on 4D modeling of Vero4DRT system.
Miura, Hideharu; Ozawa, Shuichi; Hayata, Masahiro; Tsuda, Shintaro; Yamada, Kiyoshi; Nagata, Yasushi
2017-01-01
An important issue in indirect dynamic tumor tracking with the Vero4DRT system is the accuracy of the model predictions of the internal target position based on surrogate infrared (IR) marker measurement. We investigated the predictive uncertainty of 4D modeling using an external IR marker, focusing on the effect of the target and surrogate amplitudes and periods. A programmable respiratory motion table was used to simulate breathing-induced organ motion. Sinusoidal motion sequences were produced by a dynamic phantom with different amplitudes and periods; peak-to-peak amplitudes of 10-40 mm and periods of 2-8 s were considered. The 95th percentile 4D modeling error (4D-E95%) between the detected and predicted target positions (μ + 2SD) was calculated to quantify the 4D modeling error. 4D-E95% was linearly related to the target motion amplitude, with a coefficient of determination R2 = 0.99, and ranged from 0.21 to 0.88 mm. The 4D modeling error ranged from 1.49 to 0.14 mm and gradually decreased with increasing target motion period. We analyzed the predictive error in 4D modeling and the error due to the amplitude and period of the target: the 4D modeling error substantially increased with increasing amplitude and decreasing period of the target motion.
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of such algorithms for the corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations in Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that using the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly applied Gaussian model for censored Gaussian data, whereas for binary or ordinal data the superiority of the threshold model could not be confirmed.
Kärkkäinen, Hanni P.; Sillanpää, Mikko J.
2013-01-01
Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of such algorithms for the corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations in Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that using the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly applied Gaussian model for censored Gaussian data, whereas for binary or ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618
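The threshold idea underlying the record above can be illustrated with a short simulation: an unobserved Gaussian liability is modelled as marker effects plus noise, and the observed ordinal phenotype is obtained by cutting the liability at fixed thresholds. The sketch is a conceptual illustration only, with simulated markers and arbitrary cut points, not the authors' generalized expectation maximization algorithm.

```python
# Conceptual sketch of the threshold (liability) model for ordinal phenotypes.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 1000
M = rng.integers(0, 3, size=(n, p)).astype(float)   # marker genotypes coded 0/1/2
beta = rng.normal(scale=0.05, size=p)                # simulated marker effects

g = M @ beta                                         # genomic value
liability = g + rng.normal(size=n)                   # latent Gaussian liability

thresholds = [-0.5, 0.5]                             # illustrative cut points
y_ordinal = np.digitize(liability, thresholds)       # observed classes 0, 1, 2
y_binary = (liability > 0).astype(int)               # dichotomized case-control version

# Dichotomizing discards the ordering information that the threshold model retains:
print("class counts under ordinal coding:", np.bincount(y_ordinal))
print("class counts under binary coding:", np.bincount(y_binary))
```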
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
Lu, Jun-Qi; Wang, Shan; Yin, Jia; Wu, Shan; He, Yan; Zheng, Hui-Min; Sheng, Hua-Fang; Zhou, Hong-Wei
2017-03-20
To establish a machine learning model based on gut microbiota for predicting the level of trimethylamine N-oxide (TMAO) metabolism in vivo after choline intake, so as to provide guidance for individualized precision diets and evidence for screening populations at high risk of cardiovascular disease. We quantified plasma levels of TMAO in 18 healthy volunteers before and 8 h after a choline challenge (ingestion of two boiled eggs). The volunteers were divided into two groups with increased or decreased TMAO levels following the choline challenge. Fresh fecal samples were collected before taking fasting blood samples; the 16S rRNA V4 tags were amplified and the PCR products were sequenced on the Illumina HiSeq 2000 platform. The differences in gut microbiota between subjects with increased and decreased plasma TMAO were analyzed using QIIME. Based on the gut microbiota data and TMAO levels in the two groups, the prediction model was established using the machine learning random forest algorithm, and the validity of the model was tested using a validation dataset. A clear difference was found in the beta diversity of the gut microbiota between the subjects with increased and decreased plasma TMAO levels following the choline challenge. The area under the curve (AUC) of the model was 86.39% (95% CI: 72.7%-100%). On the validation dataset, the model showed a high probability of correctly predicting TMAO variation following the choline challenge. The model is feasible and reliable for predicting the level of TMAO metabolism in vivo based on gut microbiota.
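A minimal sketch of the prediction step, assuming a table of per-sample microbial relative abundances and a binary label for increased versus decreased TMAO (file and column names are hypothetical), is shown below; it uses a random forest and reports a cross-validated AUC rather than reproducing the authors' exact training/validation split.

```python
# Hedged sketch: random forest classifier on microbiome features with a cross-validated AUC.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

df = pd.read_csv("otu_table.csv")                # hypothetical: one row per subject
X = df.drop(columns=["tmao_increased"])          # hypothetical abundance features
y = df["tmao_increased"]                         # hypothetical 0/1 label

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(y, proba), 3))
```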
A New Scheme to Characterize and Identify Protein Ubiquitination Sites.
Nguyen, Van-Nui; Huang, Kai-Yao; Huang, Chien-Hsun; Lai, K Robert; Lee, Tzong-Yi
2017-01-01
Protein ubiquitination, involving the conjugation of ubiquitin on lysine residue, serves as an important modulator of many cellular functions in eukaryotes. Recent advancements in proteomic technology have stimulated increasing interest in identifying ubiquitination sites. However, most computational tools for predicting ubiquitination sites are focused on small-scale data. With an increasing number of experimentally verified ubiquitination sites, we were motivated to design a predictive model for identifying lysine ubiquitination sites for large-scale proteome dataset. This work assessed not only single features, such as amino acid composition (AAC), amino acid pair composition (AAPC) and evolutionary information, but also the effectiveness of incorporating two or more features into a hybrid approach to model construction. The support vector machine (SVM) was applied to generate the prediction models for ubiquitination site identification. Evaluation by five-fold cross-validation showed that the SVM models learned from the combination of hybrid features delivered a better prediction performance. Additionally, a motif discovery tool, MDDLogo, was adopted to characterize the potential substrate motifs of ubiquitination sites. The SVM models integrating the MDDLogo-identified substrate motifs could yield an average accuracy of 68.70 percent. Furthermore, the independent testing result showed that the MDDLogo-clustered SVM models could provide a promising accuracy (78.50 percent) and perform better than other prediction tools. Two cases have demonstrated the effective prediction of ubiquitination sites with corresponding substrate motifs.
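The sketch below shows one way to turn sequence windows around candidate lysines into amino acid composition (AAC) features and train an SVM with cross-validation. It is a simplified stand-in for the hybrid-feature models described above; the toy windows, labels, and the reduced fold count are assumptions made only so the example runs end to end.

```python
# Hedged sketch: amino acid composition (AAC) features + SVM for site prediction.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(window):
    """Fraction of each of the 20 amino acids in a sequence window."""
    counts = np.array([window.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(window), 1)

# Hypothetical toy data: sequence windows centred on a lysine and 0/1 labels.
windows = ["MKVLAKQGKDE", "AAKPLSKTGHR", "GGKDELMKVAA", "PLSKTGHRAAK"]
labels = [1, 0, 1, 0]

X = np.vstack([aac(w) for w in windows])
y = np.array(labels)

clf = SVC(kernel="rbf", C=10, gamma="scale")
scores = cross_val_score(clf, X, y, cv=2)   # five-fold in practice; two-fold for this tiny toy set
print("cross-validation accuracy:", scores.mean())
```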
Labor estimation by informational objective assessment (LEIOA) for preterm delivery prediction.
Malaina, Iker; Aranburu, Larraitz; Martínez, Luis; Fernández-Llebrez, Luis; Bringas, Carlos; De la Fuente, Ildefonso M; Pérez, Martín Blás; González, Leire; Arana, Itziar; Matorras, Roberto
2018-05-01
To introduce LEIOA, a new screening method to forecast which patients admitted to the hospital because of suspected threatened premature delivery will give birth in < 7 days, so that it can be used to assist in prognosis and treatment jointly with other clinical tools. From 2010 to 2013, 286 tocographies from women with gestational ages between 24 and 37 weeks were collected and studied. We then developed a new predictive model based on uterine contractions that combines the Generalized Hurst Exponent and the Approximate Entropy via logistic regression (the LEIOA model). We compared it with a model using exclusively obstetric variables and afterwards combined both to evaluate the gain. Finally, a cross validation was performed. The combination of LEIOA with the medical model resulted in an average increase in predictive values of 12% with respect to the medical model alone, giving a sensitivity of 0.937, a specificity of 0.747, a positive predictive value of 0.907 and a negative predictive value of 0.819. In addition, adding LEIOA reduced the percentage of cases incorrectly classified by the medical model by almost 50%. Given the significant increase in predictive parameters and the reduction of incorrectly classified cases when LEIOA was combined with the medical variables, we conclude that it could be a very useful tool to improve the estimation of the immediacy of preterm delivery.
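A minimal sketch of the final combination step, assuming the Generalized Hurst Exponent and Approximate Entropy have already been computed per tocography along with the obstetric variables (all file and column names are hypothetical), is given below; it fits a logistic regression and reports sensitivity, specificity, PPV and NPV on a held-out split.

```python
# Hedged sketch: logistic regression combining complexity features with obstetric variables.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

df = pd.read_csv("tocography_features.csv")                 # hypothetical file
X = df[["hurst_exponent", "approx_entropy", "cervical_length", "gestational_age"]]  # hypothetical
y = df["delivery_within_7_days"]                            # hypothetical 0/1 outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp))
print("NPV:", tn / (tn + fn))
```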
Chennu, Srivas; Noreika, Valdas; Gueorguiev, David; Shtyrov, Yury; Bekinschtein, Tristan A; Henson, Richard
2016-08-10
There is increasing evidence that human perception is realized by a hierarchy of neural processes in which predictions sent backward from higher levels result in prediction errors that are fed forward from lower levels, to update the current model of the environment. Moreover, the precision of prediction errors is thought to be modulated by attention. Much of this evidence comes from paradigms in which a stimulus differs from that predicted by the recent history of other stimuli (generating a so-called "mismatch response"). There is less evidence from situations where a prediction is not fulfilled by any sensory input (an "omission" response). This situation arguably provides a more direct measure of "top-down" predictions in the absence of confounding "bottom-up" input. We applied Dynamic Causal Modeling of evoked electromagnetic responses recorded by EEG and MEG to an auditory paradigm in which we factorially crossed the presence versus absence of "bottom-up" stimuli with the presence versus absence of "top-down" attention. Model comparison revealed that both mismatch and omission responses were mediated by increased forward and backward connections, differing primarily in the driving input. In both responses, modeling results suggested that the presence of attention selectively modulated backward "prediction" connections. Our results provide new model-driven evidence of the pure top-down prediction signal posited in theories of hierarchical perception, and highlight the role of attentional precision in strengthening this prediction. Human auditory perception is thought to be realized by a network of neurons that maintain a model of and predict future stimuli. Much of the evidence for this comes from experiments where a stimulus unexpectedly differs from previous ones, which generates a well-known "mismatch response." But what happens when a stimulus is unexpectedly omitted altogether? By measuring the brain's electromagnetic activity, we show that it also generates an "omission response" that is contingent on the presence of attention. We model these responses computationally, revealing that mismatch and omission responses only differ in the location of inputs into the same underlying neuronal network. In both cases, we show that attention selectively strengthens the brain's prediction of the future. Copyright © 2016 Chennu et al.
Han, Bing; Mao, Jialin; Chien, Jenny Y; Hall, Stephen D
2013-07-01
Ketoconazole is a potent CYP3A inhibitor used to assess the contribution of CYP3A to drug clearance and quantify the increase in drug exposure due to a strong inhibitor. Physiologically based pharmacokinetic (PBPK) models have been used to evaluate treatment regimens resulting in maximal CYP3A inhibition by ketoconazole but have reached different conclusions. We compare two PBPK models of the ketoconazole-midazolam interaction, model 1 (Chien et al., 2006) and model 2 implemented in Simcyp (version 11), to predict 16 published treatment regimens. With use of model 2, 41% of the study point estimates of area under the curve (AUC) ratio and 71% of the 90% confidence intervals were predicted within 1.5-fold of the observed, but these increased to 82 and 100%, respectively, with model 1. For midazolam, model 2 predicted a maximal midazolam AUC ratio of 8 and a hepatic fraction metabolized by CYP3A (f(m)) of 0.97, whereas model 1 predicted 17 and 0.90, respectively, which are more consistent with observed data. On the basis of model 1, ketoconazole (400 mg QD) for at least 3 days and substrate administration within 2 hours is required for maximal CYP3A inhibition. Ketoconazole treatment regimens that use 200 mg BID underestimate the systemic fraction metabolized by CYP3A (0.86 versus 0.90) for midazolam. The systematic underprediction also applies to CYP3A substrates with high bioavailability and long half-lives. The superior predictive performance of model 1 reflects the need for accumulation of ketoconazole at enzyme site and protracted inhibition. Model 2 is not recommended for inferring optimal study design and estimation of fraction metabolized by CYP3A.
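For orientation, the widely used static relation between the fraction of clearance mediated by CYP3A (f_m) and the fold increase in exposure under inhibition is sketched below; the full PBPK models compared above additionally account for gut-wall extraction and the time course of ketoconazole accumulation at the enzyme site, which is why their maximal AUC ratios are not recovered by this simplification.

```latex
% Static mechanistic sketch (a simplification, not the PBPK models compared in the study):
% [I] is the inhibitor concentration at the enzyme site and K_i its inhibition constant.
\mathrm{AUC\ ratio} \approx \frac{1}{\dfrac{f_m}{1 + [I]/K_i} + \left(1 - f_m\right)},
\qquad
\lim_{[I]/K_i \to \infty} \mathrm{AUC\ ratio} = \frac{1}{1 - f_m}
```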
O'Connell, Allan F.; Gardner, Beth; Oppel, Steffen; Meirinho, Ana; Ramírez, Iván; Miller, Peter I.; Louzao, Maite
2012-01-01
Knowledge about the spatial distribution of seabirds at sea is important for conservation. During marine conservation planning, logistical constraints preclude seabird surveys covering the complete area of interest and spatial distribution of seabirds is frequently inferred from predictive statistical models. Increasingly complex models are available to relate the distribution and abundance of pelagic seabirds to environmental variables, but a comparison of their usefulness for delineating protected areas for seabirds is lacking. Here we compare the performance of five modelling techniques (generalised linear models, generalised additive models, Random Forest, boosted regression trees, and maximum entropy) to predict the distribution of Balearic Shearwaters (Puffinus mauretanicus) along the coast of the western Iberian Peninsula. We used ship transect data from 2004 to 2009 and 13 environmental variables to predict occurrence and density, and evaluated predictive performance of all models using spatially segregated test data. Predicted distribution varied among the different models, although predictive performance varied little. An ensemble prediction that combined results from all five techniques was robust and confirmed the existence of marine important bird areas for Balearic Shearwaters in Portugal and Spain. Our predictions suggested additional areas that would be of high priority for conservation and could be proposed as protected areas. Abundance data were extremely difficult to predict, and none of five modelling techniques provided a reliable prediction of spatial patterns. We advocate the use of ensemble modelling that combines the output of several methods to predict the spatial distribution of seabirds, and use these predictions to target separate surveys assessing the abundance of seabirds in areas of regular use.
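The ensemble step described above can be sketched as a simple average of occurrence probabilities from several fitted models. The example below uses scikit-learn stand-ins (logistic regression for a GLM, random forest, and gradient boosting for boosted regression trees); GAM and MaxEnt components are omitted here, and the file names and covariates are hypothetical.

```python
# Hedged sketch: ensemble occurrence prediction by averaging probabilities across models.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

train = pd.read_csv("shearwater_transects.csv")        # hypothetical presence/absence data
grid = pd.read_csv("prediction_grid.csv")              # hypothetical environmental grid
features = ["sst", "chlorophyll", "depth", "distance_to_coast"]  # hypothetical covariates

X, y = train[features], train["presence"]
models = [
    LogisticRegression(max_iter=1000),                 # GLM stand-in
    RandomForestClassifier(n_estimators=500, random_state=0),
    GradientBoostingClassifier(random_state=0),        # boosted-trees stand-in
]
for m in models:
    m.fit(X, y)

# Unweighted ensemble mean of predicted occurrence probabilities on the grid.
grid["ensemble_probability"] = np.mean(
    [m.predict_proba(grid[features])[:, 1] for m in models], axis=0
)
print(grid[["ensemble_probability"]].describe())
```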
Baker, Stuart G; Schuit, Ewoud; Steyerberg, Ewout W; Pencina, Michael J; Vickers, Andrew; Moons, Karel G M; Mol, Ben W J; Lindeman, Karen S
2014-09-28
An important question in the evaluation of an additional risk prediction marker is how to interpret a small increase in the area under the receiver operating characteristic curve (AUC). Many researchers believe that a change in AUC is a poor metric because it increases only slightly with the addition of a marker with a large odds ratio. Because it is not possible on purely statistical grounds to choose between the odds ratio and AUC, we invoke decision analysis, which incorporates costs and benefits. For example, a timely estimate of the risk of later non-elective operative delivery can help a woman in labor decide if she wants an early elective cesarean section to avoid greater complications from possible later non-elective operative delivery. A basic risk prediction model for later non-elective operative delivery involves only antepartum markers. Because adding intrapartum markers to this risk prediction model increases AUC by 0.02, we questioned whether this small improvement is worthwhile. A key decision-analytic quantity is the risk threshold, here the risk of later non-elective operative delivery at which a patient would be indifferent between an early elective cesarean section and usual care. For a range of risk thresholds, we found that an increase in the net benefit of risk prediction requires collecting intrapartum marker data on 68 to 124 women for every correct prediction of later non-elective operative delivery. Because data collection is non-invasive, this test tradeoff of 68 to 124 is clinically acceptable, indicating the value of adding intrapartum markers to the risk prediction model. Copyright © 2014 John Wiley & Sons, Ltd.
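The decision-analytic quantities referred to above can be written compactly as follows, where p_t is the risk threshold, TP and FP are counts of true and false positives among n women, and the test tradeoff is the number of marker measurements needed per additional correct prediction; this is a hedged sketch of the standard net-benefit formulation, not a restatement of the authors' derivation.

```latex
% Net benefit of a risk prediction rule at risk threshold p_t, and the test tradeoff
% implied by the gain in net benefit from adding the new markers (a sketch).
\mathrm{NB}(p_t) = \frac{TP}{n} - \frac{FP}{n}\,\frac{p_t}{1 - p_t},
\qquad
\text{test tradeoff} \approx \frac{1}{\Delta \mathrm{NB}}
```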
Lambert, Emily; Pierce, Graham J; Hall, Karen; Brereton, Tom; Dunn, Timothy E; Wall, Dave; Jepson, Paul D; Deaville, Rob; MacLeod, Colin D
2014-06-01
There is increasing evidence that the distributions of a large number of species are shifting with global climate change as they track changing surface temperatures that define their thermal niche. Modelling efforts to predict species distributions under future climates have increased with concern about the overall impact of these distribution shifts on species ecology, and especially where barriers to dispersal exist. Here we apply a bio-climatic envelope modelling technique to investigate the impacts of climate change on the geographic range of ten cetacean species in the eastern North Atlantic and to assess how such modelling can be used to inform conservation and management. The modelling process integrates elements of a species' habitat and thermal niche, and employs "hindcasting" of historical distribution changes in order to verify the accuracy of the modelled relationship between temperature and species range. If this ability is not verified, there is a risk that inappropriate or inaccurate models will be used to make future predictions of species distributions. Of the ten species investigated, we found that while the models for nine could successfully explain current spatial distribution, only four had a good ability to predict distribution changes over time in response to changes in water temperature. Applied to future climate scenarios, the four species-specific models with good predictive abilities indicated range expansion in one species and range contraction in three others, including the potential loss of up to 80% of suitable white-beaked dolphin habitat. Model predictions allow identification of affected areas and the likely time-scales over which impacts will occur. Thus, this work provides important information on both our ability to predict how individual species will respond to future climate change and the applicability of predictive distribution models as a tool to help construct viable conservation and management strategies. © 2014 John Wiley & Sons Ltd.
Duthie, A. Bradley; Reid, Jane M.
2015-01-01
Avoiding inbreeding, and therefore avoiding inbreeding depression in offspring fitness, is widely assumed to be adaptive in systems with biparental reproduction. However, inbreeding can also confer an inclusive fitness benefit stemming from increased relatedness between parents and inbred offspring. Whether or not inbreeding or avoiding inbreeding is adaptive therefore depends on a balance between inbreeding depression and increased parent-offspring relatedness. Existing models of biparental inbreeding predict threshold values of inbreeding depression above which males and females should avoid inbreeding, and predict sexual conflict over inbreeding because these thresholds diverge. However, these models implicitly assume that if a focal individual avoids inbreeding, then both it and its rejected relative will subsequently outbreed. We show that relaxing this assumption of reciprocal outbreeding, and the assumption that focal individuals are themselves outbred, can substantially alter the predicted thresholds for inbreeding avoidance for focal males. Specifically, the magnitude of inbreeding depression below which inbreeding increases a focal male’s inclusive fitness increases with increasing depression in the offspring of a focal female and her alternative mate, and it decreases with increasing relatedness between a focal male and a focal female’s alternative mate, thereby altering the predicted zone of sexual conflict. Furthermore, a focal male’s inclusive fitness gain from avoiding inbreeding is reduced by indirect opportunity costs if his rejected relative breeds with another relative of his. By demonstrating that variation in relatedness and inbreeding can affect intra- and inter-sexual conflict over inbreeding, our models lead to novel predictions for family dynamics. Specifically, parent-offspring conflict over inbreeding might depend on the alternative mates of rejected relatives, and male-male competition over inbreeding might lead to mixed inbreeding strategies. Making testable quantitative predictions regarding inbreeding strategies occurring in nature will therefore require new models that explicitly capture variation in relatedness and inbreeding among interacting population members. PMID:25909185
A generalized procedure for the prediction of multicomponent adsorption equilibria
Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas
2015-04-07
Prediction of multicomponent adsorption equilibria has been investigated for several decades. While there are theories available to predict the adsorption behavior of ideal mixtures, there are few purely predictive theories to account for nonidealities in real systems. Most models available for dealing with nonidealities contain interaction parameters that must be obtained through correlation with binary-mixture data. However, as the number of components in a system grows, the number of parameters required increases exponentially. Here, a generalized procedure is proposed, as an extension of the predictive real adsorbed solution theory, for determining the parameters of any activity model, for any number of components, without correlation. This procedure is then combined with the adsorbed solution theory to predict the adsorption behavior of mixtures. As this method can be applied to any isotherm model and any activity model, it is referred to as the generalized predictive adsorbed solution theory.
Emura, Takeshi; Nakatochi, Masahiro; Matsui, Shigeyuki; Michimae, Hirofumi; Rondeau, Virginie
2017-01-01
Developing a personalized risk prediction model of death is fundamental for improving patient care and touches on the realm of personalized medicine. The increasing availability of genomic information and large-scale meta-analytic data sets for clinicians has motivated the extension of traditional survival prediction based on the Cox proportional hazards model. The aim of our paper is to develop a personalized risk prediction formula for death according to genetic factors and dynamic tumour progression status based on meta-analytic data. To this end, we extend the existing joint frailty-copula model to a model allowing for high-dimensional genetic factors. In addition, we propose a dynamic prediction formula to predict death given tumour progression events possibly occurring after treatment or surgery. For clinical use, we implement the computation software of the prediction formula in the joint.Cox R package. We also develop a tool to validate the performance of the prediction formula by assessing the prediction error. We illustrate the method with the meta-analysis of individual patient data on ovarian cancer patients.
Multi-nutrient, multi-group model of present and future oceanic phytoplankton communities
NASA Astrophysics Data System (ADS)
Litchman, E.; Klausmeier, C. A.; Miller, J. R.; Schofield, O. M.; Falkowski, P. G.
2006-11-01
Phytoplankton community composition profoundly affects patterns of nutrient cycling and the dynamics of marine food webs; therefore predicting present and future phytoplankton community structure is crucial to understand how ocean ecosystems respond to physical forcing and nutrient limitations. We develop a mechanistic model of phytoplankton communities that includes multiple taxonomic groups (diatoms, coccolithophores and prasinophytes), nutrients (nitrate, ammonium, phosphate, silicate and iron), light, and a generalist zooplankton grazer. Each taxonomic group was parameterized based on an extensive literature survey. We test the model at two contrasting sites in the modern ocean, the North Atlantic (North Atlantic Bloom Experiment, NABE) and subarctic North Pacific (ocean station Papa, OSP). The model successfully predicts general patterns of community composition and succession at both sites: In the North Atlantic, the model predicts a spring diatom bloom, followed by coccolithophore and prasinophyte blooms later in the season. In the North Pacific, the model reproduces the low chlorophyll community dominated by prasinophytes and coccolithophores, with low total biomass variability and high nutrient concentrations throughout the year. Sensitivity analysis revealed that the identity of the most sensitive parameters and the range of acceptable parameters differed between the two sites. We then use the model to predict community reorganization under different global change scenarios: a later onset and extended duration of stratification, with shallower mixed layer depths due to increased greenhouse gas concentrations; increase in deep water nitrogen; decrease in deep water phosphorus and increase or decrease in iron concentration. To estimate uncertainty in our predictions, we used a Monte Carlo sampling of the parameter space where future scenarios were run using parameter combinations that produced acceptable modern day outcomes and the robustness of the predictions was determined. Change in the onset and duration of stratification altered the timing and the magnitude of the spring diatom bloom in the North Atlantic and increased total phytoplankton and zooplankton biomass in the North Pacific. Changes in nutrient concentrations in some cases changed dominance patterns of major groups, as well as total chlorophyll and zooplankton biomass. Based on these scenarios, our model suggests that global environmental change will inevitably alter phytoplankton community structure and potentially impact global biogeochemical cycles.
Numerical and Qualitative Contrasts of Two Statistical Models ...
Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and products. This study provided an empirical and qualitative comparison of both models using 29 years of data for two discrete time series of chlorophyll-a (chl-a) in the Patuxent River estuary. Empirical descriptions of each model were based on predictive performance against the observed data, ability to reproduce flow-normalized trends with simulated data, and comparisons of performance with validation datasets. Between-model differences were apparent but minor and both models had comparable abilities to remove flow effects from simulated time series. Both models similarly predicted observations for missing data with different characteristics. Trends from each model revealed distinct mainstem influences of the Chesapeake Bay with both models predicting a roughly 65% increase in chl-a over time in the lower estuary, whereas flow-normalized predictions for the upper estuary showed a more dynamic pattern, with a nearly 100% increase in chl-a in the last 10 years. Qualitative comparisons highlighted important differences in the statistical structure, available products, and characteristics of the data and desired analysis. This manuscript describes a quantitative comparison of two recently-
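A key step shared by both approaches compared above is flow normalization: for each date, the fitted model is evaluated over the range of discharges historically observed for that season and the predictions are averaged, which removes flow-driven variation from the trend. A generic, hedged sketch of that averaging step is shown below; the data frame, column names, and the use of an arbitrary fitted regression model as a placeholder are assumptions, not either model's implementation.

```python
# Hedged sketch of flow normalization: average model predictions over observed discharges.
# `model` is any fitted regression with predictors (decimal_time, log_discharge, day_of_year).
import numpy as np
import pandas as pd

def flow_normalized_series(model, df):
    """Return flow-normalized chl-a predictions, one value per observation date."""
    out = []
    for _, row in df.iterrows():
        # Discharges observed in the same month across all years of record.
        q_pool = df.loc[df["month"] == row["month"], "log_discharge"].to_numpy()
        X = pd.DataFrame({
            "decimal_time": np.full(q_pool.size, row["decimal_time"]),
            "log_discharge": q_pool,
            "day_of_year": np.full(q_pool.size, row["day_of_year"]),
        })
        out.append(model.predict(X).mean())   # average over the discharge distribution
    return np.array(out)
```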
California's Snow Gun and its implications for mass balance predictions under greenhouse warming
NASA Astrophysics Data System (ADS)
Howat, I.; Snyder, M.; Tulaczyk, S.; Sloan, L.
2003-12-01
Precipitation has received limited treatment in glacier and snowpack mass balance models, largely due to the poor resolution and confidence of precipitation predictions relative to temperature predictions derived from atmospheric models. Most snow and glacier mass balance models rely on statistical or lapse rate-based downscaling of general or regional circulation models (GCMs and RCMs), essentially decoupling the sub-grid scale, orographically driven evolution of atmospheric heat and moisture. Such models invariably predict large losses in snow and ice volume under greenhouse warming. However, positive trends in the mass balance of glaciers in some warming maritime climates, as well as at high elevations of the Greenland Ice Sheet, suggest that increased precipitation may play an important role in snow- and glacier-climate interactions. Here, we present a half century of April snowpack data from the Sierra Nevada and Cascade mountains of California, USA. This high-density network of snow-course data indicates that a gain in winter snow accumulation at higher elevations has compensated for over 50% of the loss in snow volume at lower elevations and has led to glacier expansion on Mt. Shasta. These trends are concurrent with a region-wide increase in winter temperatures of up to 2 °C. They result from the orographic lifting and saturation of warmer, more humid air leading to increased precipitation at higher elevations. Previous studies have invoked such a "Snow Gun" effect to explain contemporaneous records of Tertiary ocean warming and rapid glacial expansion. The climatological context of California's "Snow Gun" effect is elucidated by correlation between the elevation distribution of April SWE observations and the phase of the Pacific Decadal Oscillation and the El Nino Southern Oscillation, both of which control the heat and moisture delivered to the U.S. Pacific coast. The existence of a significant "Snow Gun" effect presents two challenges to snow and glacier mass balance modeling. Firstly, the link between amplification of orographic precipitation and the temporal evolution of ocean-climate oscillations indicates that prediction of future mass balance trends requires consideration of the timing and amplitude of such oscillations. Only recently have ocean-atmosphere models begun to realistically produce such temporal variability. Secondly, the steepening snow mass-balance elevation gradient associated with the "Snow Gun" implies greater spatial variability in balance with warming. In a warming climate, orographic processes at a scale finer than the highest-resolution RCM (>20 km grid) become increasingly important and predictions based on lower elevations become increasingly inaccurate for higher elevations. Therefore, thermodynamic interaction between atmospheric heat, moisture and topography must be included in downscaling techniques. In order to demonstrate the importance of thermodynamic downscaling in mass balance predictions, we nest a high-resolution (100 m grid), coupled Orographic Precipitation and Surface Energy balance Model (OPSEM) into the RegCM2.5 RCM (40 km grid) and compare results. We apply this nesting technique to Mt. Shasta, California, an area of high topography (~4000 m) relative to its RegCM2.5 grid elevation (1289 m). These models compute average April snow volume under present and doubled-present atmospheric CO2 concentrations. While the RegCM2.5 regional model predicts an 83% decrease in April SWE, OPSEM predicts a 16% increase. These results indicate that thermodynamic interactions between the atmosphere and topography at sub-RCM-grid resolution must be considered in mass balance models.
Werneke, Mark W; Edmond, Susan; Deutscher, Daniel; Ward, Jason; Grigsby, David; Young, Michelle; McGill, Troy; McClenahan, Brian; Weinberg, Jon; Davidow, Amy L
2016-09-01
Study Design Retrospective cohort. Background Patient-classification subgroupings may be important prognostic factors explaining outcomes. Objectives To determine effects of adding classification variables (McKenzie syndrome and pain patterns, including centralization and directional preference; Symptom Checklist Back Pain Prediction Model [SCL BPPM]; and the Fear-Avoidance Beliefs Questionnaire subscales of work and physical activity) to a baseline risk-adjusted model predicting functional status (FS) outcomes. Methods Consecutive patients completed a battery of questionnaires that gathered information on 11 risk-adjustment variables. Physical therapists trained in Mechanical Diagnosis and Therapy methods classified each patient by McKenzie syndromes and pain pattern. Functional status was assessed at discharge by patient-reported outcomes. Only patients with complete data were included. Risk of selection bias was assessed. Prediction of discharge FS was assessed using linear stepwise regression models, allowing 13 variables to enter the model. Significant variables were retained in subsequent models. Model power (R(2)) and beta coefficients for model variables were estimated. Results Two thousand sixty-six patients with lumbar impairments were evaluated. Of those, 994 (48%), 10 (<1%), and 601 (29%) were excluded due to incomplete psychosocial data, McKenzie classification data, and missing FS at discharge, respectively. The final sample for analyses was 723 (35%). Overall R(2) for the baseline prediction FS model was 0.40. Adding classification variables to the baseline model did not result in significant increases in R(2). McKenzie syndrome or pain pattern explained 2.8% and 3.0% of the variance, respectively. When pain pattern and SCL BPPM were added simultaneously, overall model R(2) increased to 0.44. Although none of these increases in R(2) were significant, some classification variables were stronger predictors compared with some other variables included in the baseline model. Conclusion The small added prognostic capabilities identified when combining McKenzie or pain-pattern classifications with the SCL BPPM classification did not significantly improve prediction of FS outcomes in this study. Additional research is warranted to investigate the importance of classification variables compared with those used in the baseline model to maximize predictive power. Level of Evidence Prognosis, level 4. J Orthop Sports Phys Ther 2016;46(9):726-741. Epub 31 Jul 2016. doi:10.2519/jospt.2016.6266.
Harris, Ted D.; Graham, Jennifer L.
2017-01-01
Cyanobacterial blooms degrade water quality in drinking water supply reservoirs by producing toxic and taste-and-odor causing secondary metabolites, which ultimately cause public health concerns and lead to increased treatment costs for water utilities. There have been numerous attempts to create models that predict cyanobacteria and their secondary metabolites, most using linear models; however, linear models are limited by assumptions about the data and have had limited success as predictive tools. Thus, lake and reservoir managers need improved modeling techniques that can accurately predict large bloom events that have the highest impact on recreational activities and drinking-water treatment processes. In this study, we compared 12 unique linear and nonlinear regression modeling techniques to predict cyanobacterial abundance and the cyanobacterial secondary metabolites microcystin and geosmin using 14 years of physiochemical water quality data collected from Cheney Reservoir, Kansas. Support vector machine (SVM), random forest (RF), boosted tree (BT), and Cubist modeling techniques were the most predictive of the compared modeling approaches. SVM, RF, and BT modeling techniques were able to successfully predict cyanobacterial abundance, microcystin, and geosmin concentrations <60,000 cells/mL, 2.5 µg/L, and 20 ng/L, respectively. Only Cubist modeling predicted maxima concentrations of cyanobacteria and geosmin; no modeling technique was able to predict maxima microcystin concentrations. Because maxima concentrations are a primary concern for lake and reservoir managers, Cubist modeling may help predict the largest and most noxious concentrations of cyanobacteria and their secondary metabolites.
Olugasa, Babasola O; Odigie, Eugene A; Lawani, Mike; Ojo, Johnson F
2015-01-01
The objective was to develop a case-pattern model for Lassa fever (LF) among humans and derive predictors of the time-trend point distribution of LF cases in Liberia, in view of the prevailing under-reporting and public health challenge posed by the disease in the country. Retrospective data on the countrywide distribution of LF among humans over 5 years were used to train a time-trend model of the disease in Liberia. A time-trend quadratic model was selected due to its goodness-of-fit (R2 = 0.89, P < 0.05) and better performance compared with linear and exponential models. Model parameters were estimated by the least squares method to predict LF cases for a prospective 5-year period covering 2013-2017. The two-stage predictive model of the LF case pattern between 2013 and 2017 was characterized by a prospective decline within the south-coast county of Grand Bassa over the forecast period and an upward case trend within the northern county of Nimba. A case-specific exponential increase was predicted for the first 2 years (2013-2014), with a geometric increase over the next 3 years (2015-2017) in Nimba County. This paper describes a translational application of the space-time distribution pattern of LF epidemics reported in Liberia during 2008-2012, on which a predictive model was developed. We propose a computationally feasible two-stage space-time permutation approach to estimate the time-trend parameters and conduct predictive inference on LF in Liberia.
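The quadratic time-trend fit and five-year extrapolation described above can be sketched in a few lines; the annual case counts below are invented placeholders, not the Liberian surveillance data.

```python
# Hedged sketch: least-squares quadratic time-trend model with a 5-year forward projection.
import numpy as np

years = np.arange(2008, 2013)                 # retrospective training period
cases = np.array([12, 18, 25, 31, 44])        # hypothetical annual LF case counts

coeffs = np.polyfit(years, cases, deg=2)      # quadratic fit by least squares
trend = np.poly1d(coeffs)

r2 = 1 - np.sum((cases - trend(years)) ** 2) / np.sum((cases - cases.mean()) ** 2)
print("goodness of fit R^2:", round(r2, 2))

future = np.arange(2013, 2018)                # prospective 2013-2017 forecast
for yr, pred in zip(future, trend(future)):
    print(yr, round(float(pred), 1))
```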
Corridor Use Predicted from Behaviors at Habitat Boundaries.
Haddad, Nick M
1999-02-01
Through empirical studies and simulation, I demonstrate how simple behaviors can be used in lieu of detailed dispersal studies to predict the effects of corridors on interpatch movements. Movement paths of three butterfly species were measured in large (1.64 ha) experimental patches of open habitat, some of which were connected by corridors. Butterflies that "reflected" off boundaries between open patches and the surrounding forest also emigrated from patches through corridors at rates higher than expected from random movement. This was observed for two open-habitat species, Eurema nicippe and Phoebis sennae; however, edges and corridors had no effect on a habitat generalist, Papilio troilus. Behaviorally based simulation models, which departed from correlated random walks only at habitat boundaries, predicted that corridors increase interpatch movement rates of both open-habitat species. Models also predicted that corridors have proportionately greater effects as corridor width increases, that movement rates increase before leveling off as corridor width increases, and that corridor effects decrease as patch size increases. This study suggests that corridors direct movements of habitat-restricted species and that local behaviors may be used to predict the conservation potential of corridors in fragmented landscapes.
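The behaviourally based simulation idea, a correlated random walk that departs from random movement only at habitat boundaries, can be sketched as below; the step length, turning-angle spread, patch geometry, and reflection rule are illustrative assumptions rather than the parameter values estimated from the butterfly paths.

```python
# Hedged sketch: correlated random walk with reflection at an open-habitat patch boundary.
import numpy as np

rng = np.random.default_rng(0)
patch_radius = 70.0        # illustrative circular open patch (m)
step_length = 2.0          # illustrative step length (m)
turn_sd = np.deg2rad(30)   # spread of turning angles away from boundaries

x, y, heading = 0.0, 0.0, rng.uniform(0, 2 * np.pi)
path = [(x, y)]
for _ in range(1000):
    heading += rng.normal(0.0, turn_sd)          # correlated random walk step
    nx, ny = x + step_length * np.cos(heading), y + step_length * np.sin(heading)
    if np.hypot(nx, ny) > patch_radius:
        # "Reflect" off the forest boundary: turn back toward the patch centre.
        heading = np.arctan2(-y, -x) + rng.normal(0.0, turn_sd)
        nx, ny = x + step_length * np.cos(heading), y + step_length * np.sin(heading)
    x, y = nx, ny
    path.append((x, y))

print("steps simulated:", len(path) - 1, "final position:", (round(x, 1), round(y, 1)))
```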
Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model
NASA Astrophysics Data System (ADS)
Tang, Jingshi; Liu, Lin; Miao, Manqian
Tiangong-1 is China's test module for a future space station. It went through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted far in advance. As Tiangong-1 operates in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted 10-20 days ahead, the error in the a priori atmosphere model, if not properly corrected, can induce semi-major axis errors of up to a few kilometers and overall position errors of up to several thousand kilometers. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For long-term orbit prediction, observations are first accumulated; with a sufficiently long observation period, we are able to obtain a series of diurnal mean densities. This series captures the recent variation of the atmospheric density and can be analyzed for various periods. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that the densities predicted with this approach increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.
NASA Astrophysics Data System (ADS)
Escobar-Palafox, Gustavo; Gault, Rosemary; Ridgway, Keith
2011-12-01
Shaped Metal Deposition (SMD) is an additive manufacturing process that builds parts layer by layer from weld depositions. In this work, empirical models that predict part geometry (wall thickness and outer diameter) and some metallurgical aspects (i.e., surface texture and the portion of finer Widmanstätten microstructure) of the SMD process were developed. The models are based on an orthogonal fractional factorial design of experiments with four factors at two levels. The factors considered were energy level (a relationship between heat source power and the rate of raw material input), step size, programmed diameter and travel speed. The models were validated using previous builds; the prediction error for part geometry was under 11%. Several relationships between the factors and responses were identified. Current had a significant effect on wall thickness; thickness increased with increasing current. Programmed diameter had a significant effect on the percentage of shrinkage, which decreased with increasing component size. Surface finish decreased with decreasing step size and current.
Stated response to increased enforcement density and penalty size for speeding and driving unbelted.
Hössinger, Reinhard; Berger, Wolfgang J
2012-11-01
To what extent can traffic offences be reduced through stronger enforcement, higher penalties, and the provision of information to road users? This question was addressed with respect to the offences of "speeding" and "driving unbelted." Data were collected by a telephone survey of admitted speeders, followed by 438 face-to-face stated response interviews. Based on the data collected, separate statistical models were developed for the two offences. The models predict the behavioral effect of increasing enforcement density and/or penalty size as well as the additional effect of providing information to car drivers. All three factors are predicted to be effective in reducing speeding. According to the model, one additional enforcement event per year will cause a driver to reduce his current frequency of speeding by 5%. A penalty increase of 10 Euros is predicted to have the same effect. An announcement of stronger enforcement or higher fines is predicted to have an additional effect on behavior, independent of the actual magnitudes of increase in enforcement or fines. With respect to the use of a seat belt, however, neither an increase in enforcement density nor its announcement is predicted to have a significant effect on driver behavior. An increase in the penalty size is predicted to raise the stated wearing rate, which is already 90% in Austria. It seems that both the fear of punishment and the motivation for driving unbelted are limited, so that there is only a weak tradeoff between the two. This may apply to most traffic offences, with the exception of speeding, which accounts for over 80% of tickets alone, whereas all other offences account for less than 3% each. Copyright © 2012 Elsevier Ltd. All rights reserved.
Romañach, Stephanie; Watling, James I.; Fletcher, Robert J.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.
2014-01-01
Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on how uncertainty in future conditions affects model predictions, models may also be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.
Predicting School Enrollments Using the Modified Regression Technique.
ERIC Educational Resources Information Center
Grip, Richard S.; Young, John W.
This report is based on a study in which a regression model was constructed to increase accuracy in enrollment predictions. A model, known as the Modified Regression Technique (MRT), was used to examine K-12 enrollment over the past 20 years in 2 New Jersey school districts of similar size and ethnicity. To test the model's accuracy, MRT was…
Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh
2017-01-01
The increasing need to predict how climate change will impact wildlife species has exposed limitations in how well current approaches model important biological processes at scales at which those processes interact with climate. We used a comprehensive approach that combined recent advances in landscape and population modeling into dynamic-landscape metapopulation...
Refinement of the Arc-Habcap model to predict habitat effectiveness for elk
Lakhdar Benkobi; Mark A. Rumble; Gary C. Brundige; Joshua J. Millspaugh
2004-01-01
Wildlife habitat modeling is increasingly important for managers who need to assess the effects of land management activities. We evaluated the performance of a spatially explicit deterministic habitat model (Arc-Habcap) that predicts habitat effectiveness for elk. We used five years of radio-telemetry locations of elk from Custer State Park (CSP), South Dakota, to...
Modeling of exposure to carbon monoxide in fires
NASA Technical Reports Server (NTRS)
Cagliostro, D. E.
1980-01-01
A mathematical model is developed to predict carboxyhemoglobin concentrations in regions of the body for short exposures to carbon monoxide levels expected during escape from aircraft fires. The model includes the respiratory and circulatory dynamics of absorption and distribution of carbon monoxide and carboxyhemoglobin. Predictions of carboxyhemoglobin concentrations are compared to experimental values obtained for human exposures to constant high carbon monoxide levels. Predictions are within 20% of experimental values. For short exposure times, transient concentration effects are predicted. The effect of stress is studied and found to increase carboxyhemoglobin levels substantially compared to a rest state.
On the role of passion for work in burnout: a process model.
Vallerand, Robert J; Paquet, Yvan; Philippe, Frederick L; Charest, Julie
2010-02-01
The purpose of the present research was to test a model on the role of passion for work in professional burnout. This model posits that obsessive passion produces conflict between work and other life activities because the person cannot let go of the work activity. Conversely, harmonious passion is expected to prevent conflict while positively contributing to work satisfaction. Finally, conflict is expected to contribute to burnout, whereas work satisfaction should prevent its occurrence. This model was tested in 2 studies with nurses in 2 cultures. Using a cross-sectional design, Study 1 (n=97) provided support for the model with nurses from France. In Study 2 (n=258), a prospective design was used to further test the model with nurses from the Province of Quebec over a 6-month period. Results provided support for the model. Specifically, harmonious passion predicted an increase in work satisfaction and a decrease in conflict. Conversely, obsessive passion predicted an increase in conflict. In turn, work satisfaction and conflict predicted, respectively, decreases and increases in burnout over time. The results have important implications for theory and research on passion as well as burnout.
Comparison of statistical models for analyzing wheat yield time series.
Michel, Lucie; Makowski, David
2013-01-01
The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.
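A minimal sketch of one of the model classes compared above: an additive-trend Holt-Winters (double exponential smoothing) fit to a hypothetical national yield series, with a five-year forecast. The data and settings are illustrative only, not the FAO or French Ministry series used in the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical national wheat-yield series (t/ha), one value per year.
years = pd.Index(range(1961, 2011), name="year")
yield_tha = pd.Series(
    4.0 + 0.06 * np.arange(len(years)) + np.random.normal(0, 0.3, len(years)),
    index=years,
)

# Additive-trend Holt-Winters; no seasonal term is needed for annual data.
fit = ExponentialSmoothing(yield_tha, trend="add", seasonal=None).fit()
forecast = fit.forecast(5)   # yield predicted for the next five years
print(forecast)
```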
Exploring the social dimension of sandy beaches through predictive modelling.
Domínguez-Tejo, Elianny; Metternicht, Graciela; Johnston, Emma L; Hedge, Luke
2018-05-15
Sandy beaches are unique ecosystems increasingly exposed to human-induced pressures. Consistent with emerging frameworks promoting a holistic approach towards beach management is the need to improve the integration of social data into management practices. This paper aims to increase understanding of the links between demographics, community values, and preferred beach activities as key components of the social dimension of the beach environment. A mixed method approach was adopted to elucidate users' opinions on beach preferences and community values through a survey carried out in Manly Local Government Area in Sydney Harbour, Australia. A proposed conceptual model was used to frame demographic models (using age, education, employment, household income and residence status) as predictors of these two community responses. All possible regression-model combinations were compared using Akaike's information criterion. Best models were then used to calculate quantitative likelihoods of the responses, presented as heat maps. Findings concur with international research indicating the relevance of social and restful activities as important social links between the community and the beach environment. Participants' age was a significant variable in the four predictive models. The use of predictive models informed by demographics could potentially increase our understanding of interactions between the social and ecological systems of the beach environment, as a prelude to integrated beach management approaches. The research represents a practical demonstration of how demographic predictive models could support proactive approaches to beach management. Copyright © 2018 Elsevier Ltd. All rights reserved.
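The all-subsets comparison by Akaike's information criterion described above can be sketched as follows, here with a synthetic survey data set and a binary stand-in response (preference for a restful beach activity); the variable names and data-generating assumptions are hypothetical.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
# Hypothetical survey data: demographic predictors and a binary response
# (whether a respondent names a restful activity as their preferred beach activity).
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "education": rng.integers(0, 4, n),
    "employment": rng.integers(0, 2, n),
    "income": rng.integers(1, 6, n),
    "resident": rng.integers(0, 2, n),
})
df["restful"] = (rng.random(n) < 1 / (1 + np.exp(-0.03 * (df.age - 45)))).astype(int)

predictors = ["age", "education", "employment", "income", "resident"]
results = []
# Compare every possible predictor combination by AIC, as in the all-subsets approach above.
for k in range(1, len(predictors) + 1):
    for combo in itertools.combinations(predictors, k):
        X = sm.add_constant(df[list(combo)])
        fit = sm.Logit(df["restful"], X).fit(disp=0)
        results.append((fit.aic, combo))

best_aic, best_combo = min(results)
print(best_combo, round(best_aic, 1))
```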
Bibby, Chris; Hodgson, Murray
2017-01-01
The work reported here, part of a study on the performance and optimal design of interior natural-ventilation openings and silencers ("ventilators"), discusses the prediction of the acoustical performance of such ventilators and the factors that affect it. A wave-based numerical approach, the finite-element method (FEM), is applied. The development of an FEM technique for predicting ventilator diffuse-field transmission loss is presented. Model convergence is studied with respect to mesh, frequency sampling, and diffuse-field convergence. The modeling technique is validated by comparing predictions with analytical and experimental results. The transmission-loss performance of crosstalk silencers of four shapes, and the factors that affect it, are predicted and discussed. Performance increases with flow-path length for all silencer types. Adding elbows significantly increases high-frequency transmission loss but does not increase overall silencer performance, which is controlled by low-to-mid-frequency transmission loss.
A Global Model for Bankruptcy Prediction
Alaminos, David; del Castillo, Agustín; Fernández, Manuel Ángel
2016-01-01
The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy. PMID:27880810
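A minimal sketch of the kind of logistic-regression bankruptcy model described above, fitted to synthetic firm-level ratios; the predictor names, effect sizes, and hold-out evaluation are illustrative assumptions rather than the study's specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
# Hypothetical firm-level ratios (liquidity, leverage, profitability, size); names are assumptions.
X = rng.normal(size=(n, 4))
# Synthetic bankruptcy labels driven by leverage and (negative) profitability.
p = 1 / (1 + np.exp(-(-2.0 + 1.2 * X[:, 1] - 1.5 * X[:, 2])))
y = (rng.random(n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("coefficients:", clf.coef_)
print("hold-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```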
Czerniecki, Joseph M; Turner, Aaron P; Williams, Rhonda M; Thompson, Mary Lou; Landry, Greg; Hakimi, Kevin; Speckman, Rebecca; Norvell, Daniel C
2017-01-01
The objective of this study was the development of AMPREDICT-Mobility, a tool to predict the probability of independence in either basic or advanced (iBASIC or iADVANCED) mobility 1 year after dysvascular major lower extremity amputation. Two prospective cohort studies during consecutive 4-year periods (2005-2009 and 2010-2014) were conducted at seven medical centers. Multiple demographic and biopsychosocial predictors were collected in the periamputation period among individuals undergoing their first major amputation because of complications of peripheral arterial disease or diabetes. The primary outcomes were iBASIC and iADVANCED mobility, as measured by the Locomotor Capabilities Index. Combined data from both studies were used for model development and internal validation. Backwards stepwise logistic regression was used to develop the final prediction models. The discrimination and calibration of each model were assessed. Internal validity of each model was assessed with bootstrap sampling. Twelve-month follow-up was reached by 157 of 200 (79%) participants. Among these, 54 (34%) did not achieve iBASIC mobility, 103 (66%) achieved at least iBASIC mobility, and 51 (32%) also achieved iADVANCED mobility. Predictive factors associated with reduced odds of achieving iBASIC mobility were increasing age, chronic obstructive pulmonary disease, dialysis, diabetes, prior history of treatment for depression or anxiety, and very poor to fair self-rated health. Those who were white, were married, and had at least a high-school degree had a higher probability of achieving iBASIC mobility. The odds of achieving iBASIC mobility increased with increasing body mass index up to 30 kg/m² and decreased with increasing body mass index thereafter. The prediction model of iADVANCED mobility included the same predictors with the exception of diabetes, chronic obstructive pulmonary disease, and education level. Both models showed strong discrimination with C statistics of 0.85 and 0.82, respectively. The mean difference in predicted probabilities for those who did and did not achieve iBASIC and iADVANCED mobility was 33% and 29%, respectively. Tests for calibration and observed vs predicted plots suggested good fit for both models; however, the precision of the estimates of the predicted probabilities was modest. Internal validation through bootstrapping demonstrated some overoptimism of the original model development, with the optimism-adjusted C statistic for iBASIC and iADVANCED mobility being 0.74 and 0.71, respectively, and the discrimination slope 19% and 16%, respectively. AMPREDICT-Mobility is a user-friendly prediction tool that can inform the patient undergoing a dysvascular amputation and the patient's provider about the probability of independence in either basic or advanced mobility at each major lower extremity amputation level. Copyright © 2016 Society for Vascular Surgery. All rights reserved.
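The internal-validation step described above (bootstrap assessment of over-optimism in the C statistic) can be sketched as follows; this is a generic Harrell-style optimism correction for a logistic model on synthetic data, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """Bootstrap optimism correction of the C statistic (AUC) for a logistic model."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        Xb, yb = resample(X, y, random_state=int(rng.integers(1 << 30)))
        mb = LogisticRegression(max_iter=1000).fit(Xb, yb)
        auc_boot = roc_auc_score(yb, mb.predict_proba(Xb)[:, 1])  # performance in bootstrap sample
        auc_orig = roc_auc_score(y, mb.predict_proba(X)[:, 1])    # performance back on original data
        optimism.append(auc_boot - auc_orig)
    return apparent, apparent - np.mean(optimism)

# Illustration on synthetic data standing in for the amputation cohort.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
y = (rng.random(200) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
print(optimism_corrected_auc(X, y))
```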
Thermal regimes of Rocky Mountain lakes warm with climate change
Roberts, James J.; Fausch, Kurt D.; Schmidt, Travis S.; Walters, David M.
2017-01-01
Anthropogenic climate change is causing a wide range of stresses in aquatic ecosystems, primarily through warming thermal conditions. Lakes, in response to these changes, are experiencing increases in both summer temperatures and ice-free days. We used continuous records of lake surface temperature and air temperature to create statistical models of daily mean lake surface temperature to assess thermal changes in mountain lakes. These models were combined with downscaled climate projections to predict future thermal conditions for 27 high-elevation lakes in the southern Rocky Mountains. The models predict a 0.25°C·decade⁻¹ increase in mean annual lake surface temperature through the 2080s, which is greater than warming rates of streams in this region. Most striking is that on average, ice-free days are predicted to increase by 5.9 days·decade⁻¹, and summer mean lake surface temperature is predicted to increase by 0.47°C·decade⁻¹. Both could profoundly alter the length of the growing season and potentially change the structure and function of mountain lake ecosystems. These results highlight the changes expected of mountain lakes and stress the importance of incorporating climate-related adaptive strategies in the development of resource management plans. PMID:28683083
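A minimal sketch of the modelling chain described above: regress daily mean lake surface temperature on air temperature, then drive the fitted model with a warmer projected air-temperature series. The data, the warming increment, and the simple one-predictor form are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical paired daily records for one lake: air and lake surface temperature (°C).
rng = np.random.default_rng(3)
days = pd.date_range("2010-06-01", periods=400, freq="D")
air = 10 + 8 * np.sin(2 * np.pi * (days.dayofyear - 200) / 365) + rng.normal(0, 2, len(days))
lake = 4 + 0.6 * air + rng.normal(0, 1, len(days))
df = pd.DataFrame({"air": air, "lake": lake})

# Simple statistical model of daily mean lake surface temperature from air temperature,
# the kind of regression that can then be driven by downscaled climate projections.
fit = smf.ols("lake ~ air", data=df).fit()

# Apply the model to a projected warmer air-temperature series (assumed +0.25 °C per decade, 7 decades).
future_air = df["air"] + 0.25 * 7
future_lake = fit.predict(pd.DataFrame({"air": future_air}))
print("mean lake warming (°C):", round(float(future_lake.mean() - df["lake"].mean()), 2))
```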
Single Droplet Combustion of Decane in Microgravity: Experiments and Numerical Modeling
NASA Technical Reports Server (NTRS)
Dietrich, D. L.; Struk, P. M.; Ikegam, M.; Xu, G.
2004-01-01
This paper presents experimental data on single droplet combustion of decane in microgravity and compares the results to a numerical model. The primary independent experimental variables are the ambient pressure, oxygen mole fraction, droplet size (over a relatively small range), and ignition energy. The droplet history (D² history) is non-linear, with the burning rate constant increasing throughout the test. The average burning rate constant, consistent with classical theory, increased with increasing ambient oxygen mole fraction and was nearly independent of pressure, initial droplet size, and ignition energy. The flame typically increased in size initially, and then decreased in size, in response to the shrinking droplet. The flame standoff increased linearly for the majority of the droplet lifetime. The flame surrounding the droplet extinguished at a finite droplet size at lower ambient pressures and an oxygen mole fraction of 0.15. The extinction droplet size increased with decreasing pressure. The model is transient and assumes spherical symmetry, constant thermo-physical properties (specific heat, thermal conductivity, and species Lewis number), and single-step chemistry. The model includes gas-phase radiative loss and a spherically symmetric, transient liquid phase. The model accurately predicts the droplet and flame histories of the experiments. Good agreement requires that the ignition in the experiment be reasonably approximated in the model and that the model accurately predict the pre-ignition vaporization of the droplet. The model does not accurately predict the dependence of extinction droplet diameter on pressure, a result of the simplified chemistry in the model. The transient flame behavior suggests the potential importance of fuel vapor accumulation. The model results, however, show that the fractional mass consumption rate of fuel in the flame relative to fuel vaporized is close to 1.0 for all but the lowest ambient oxygen mole fractions.
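The "D² history" discussed above refers to the classical d-squared law of droplet combustion. The sketch below evaluates that textbook relation for an assumed burning rate constant; it is not the paper's transient numerical model.

```python
import numpy as np

def d2_law_history(d0_mm, k_mm2_per_s, t_s):
    """Classical d-squared law: D^2(t) = D0^2 - K*t, with K the burning rate constant."""
    d2 = d0_mm**2 - k_mm2_per_s * np.asarray(t_s, dtype=float)
    return np.sqrt(np.clip(d2, 0.0, None))

# Example: 1.5 mm decane droplet with an assumed burning rate constant of 0.8 mm^2/s.
t = np.linspace(0.0, 3.0, 31)
d = d2_law_history(1.5, 0.8, t)
burnout_time = 1.5**2 / 0.8   # time at which D^2 reaches zero under the quasi-steady law
print(round(burnout_time, 2), "s")
```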
Ability of crime, demographic and business data to forecast areas of increased violence.
Bowen, Daniel A; Mercer Kollar, Laura M; Wu, Daniel T; Fraser, David A; Flood, Charles E; Moore, Jasmine C; Mays, Elizabeth W; Sumner, Steven A
2018-05-24
Identifying geographic areas and time periods of increased violence is of considerable importance in prevention planning. This study compared the performance of multiple data sources to prospectively forecast areas of increased interpersonal violence. We used 2011-2014 data from a large metropolitan county on interpersonal violence (homicide, assault, rape and robbery) and forecasted violence at the level of census block-groups and over a one-month moving time window. Inputs to a Random Forest model included historical crime records from the police department, demographic data from the US Census Bureau, and administrative data on licensed businesses. Among 279 block groups, a model utilizing all data sources was found to prospectively improve the identification of the top 5% most violent block-group months (positive predictive value = 52.1%; negative predictive value = 97.5%; sensitivity = 43.4%; specificity = 98.2%). Predictive modelling with simple inputs can help communities more efficiently focus violence prevention resources geographically.
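A rough sketch of the forecasting setup described above: a Random Forest classifier trained on synthetic block-group-month features and evaluated with the same four metrics (PPV, NPV, sensitivity, specificity). The feature set, sample sizes, and labels are stand-ins, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 279 * 36   # illustrative number of block-group months
# Hypothetical features: historical crime counts, census demographics, licensed-business counts.
X = rng.normal(size=(n, 12))
score = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n)
y = (score > np.quantile(score, 0.95)).astype(int)   # top 5% most violent block-group months

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("PPV:", tp / (tp + fp), "NPV:", tn / (tn + fn))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```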
Sediment transport through self-adjusting, bedrock-walled waterfall plunge pools
NASA Astrophysics Data System (ADS)
Scheingross, Joel S.; Lamb, Michael P.
2016-05-01
Many waterfalls have deep plunge pools that are often partially or fully filled with sediment. Sediment fill may control plunge-pool bedrock erosion rates, partially determine habitat availability for aquatic organisms, and affect sediment routing and debris flow initiation. Currently, there exists no mechanistic model to describe sediment transport through waterfall plunge pools. Here we develop an analytical model to predict steady-state plunge-pool depth and sediment-transport capacity by combining existing jet theory with sediment transport mechanics. Our model predicts plunge-pool sediment-transport capacity increases with increasing river discharge, flow velocity, and waterfall drop height and decreases with increasing plunge-pool depth, radius, and grain size. We tested the model using flume experiments under varying waterfall and plunge-pool geometries, flow hydraulics, and sediment size. The model and experiments show that through morphodynamic feedbacks, plunge pools aggrade to reach shallower equilibrium pool depths in response to increases in imposed sediment supply. Our theory for steady-state pool depth matches the experiments with an R2 value of 0.8, with discrepancies likely due to model simplifications of the hydraulics and sediment transport. Analysis of 75 waterfalls suggests that the water depths in natural plunge pools are strongly influenced by upstream sediment supply, and our model provides a mass-conserving framework to predict sediment and water storage in waterfall plunge pools for sediment routing, habitat assessment, and bedrock erosion modeling.
A Complete Procedure for Predicting and Improving the Performance of HAWT's
NASA Astrophysics Data System (ADS)
Al-Abadi, Ali; Ertunç, Özgür; Sittig, Florian; Delgado, Antonio
2014-06-01
A complete procedure for predicting and improving the performance of the horizontal axis wind turbine (HAWT) has been developed. The first process is predicting the power extracted by the turbine and the derived rotor torque, which should be identical to that of the drive unit. The BEM method and a developed post-stall treatment for resolving stall-regulated HAWTs are incorporated in the prediction. For that, a modified stall-regulated prediction model, which can predict HAWT performance over the operating range of oncoming wind velocity, is derived from existing models. The model involves radius and chord, which makes it more general for predicting the performance of HAWTs of different scales and rotor shapes. The second process is modifying the rotor shape by an optimization process, which can be applied to any existing HAWT, to improve its performance. A gradient-based optimization is used to adjust the chord and twist angle distributions of the rotor blade to increase power extraction while keeping the drive torque constant, so that the same drive unit can be kept. The final process is testing the modified turbine to predict its enhanced performance. The procedure is applied to the NREL Phase VI 10 kW turbine as a baseline. The study has demonstrated the applicability of the developed model for predicting the performance of the baseline as well as the optimized turbine. In addition, the optimization method has shown that the power coefficient can be increased while keeping the same design rotational speed.
Data Prediction for Public Events in Professional Domains Based on Improved RNN- LSTM
NASA Astrophysics Data System (ADS)
Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan
2018-02-01
Traditional data-prediction services for emergency or non-periodic events usually cannot generate satisfying results or fulfill the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved model, an LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model, which combines RNN-LSTM with a priori information on public events. In prediction tasks, the model is able to determine trends, and its accuracy is validated. The model achieves better performance and prediction results than the previous one. Using a priori information can increase the accuracy of prediction; LSTM can better adapt to changes in the time sequence; and LSTM can be widely applied to the same type of prediction tasks as well as other prediction tasks related to time sequences.
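A minimal sketch of an LSTM regressor whose input sequence carries an extra channel of a priori event information, in the spirit of the model described above; the architecture, the toy data, and the training settings are assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class EventLSTM(nn.Module):
    """LSTM regressor: the target series is augmented with an extra channel of
    a priori event information (e.g. an indicator of an upcoming public event)."""
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next value from the last hidden state

# Toy training loop on synthetic sequences (value channel + event-indicator channel).
torch.manual_seed(0)
x = torch.randn(64, 30, 2)
y = 0.8 * x[:, -1, :1] + 0.5 * x[:, -1, 1:]   # hypothetical target
model = EventLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```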
Tong, Juxiu; Hu, Bill X; Yang, Jinzhong; Zhu, Yan
2016-06-01
The mixing layer theory is not suitable for predicting solute transfer from initially saturated soil to surface runoff water under controlled drainage conditions. By coupling the mixing layer theory model with the numerical model Hydrus-1D, a hybrid solute transfer model is proposed to predict solute transfer from an initially saturated soil into surface water under controlled drainage conditions. The model can also account for increasing ponding-water depth on the soil surface before surface runoff begins. Data on solute concentrations in surface runoff and drainage water from a sand experiment are used as the reference experiment. The parameters for the water flow and solute transfer model and the mixing layer depth under the controlled drainage condition are identified. Based on these identified parameters, the model is applied to another initially saturated sand experiment, with constant and time-increasing mixing layer depths after surface runoff, under a controlled drainage condition with a lower drainage height at the bottom. The simulation results agree well with the observed data. The results suggest that the hybrid model can accurately simulate solute transfer from initially saturated soil into surface runoff under controlled drainage conditions. It was also found that the prediction with an increasing mixing layer depth is better than that with a constant depth in the experiment with the lower drainage condition. Since a lower drainage condition and a deeper ponded water depth result in a later runoff start time, more solute from the mixing layer is needed to supply the surface water, and the larger rate of change corresponds to an increasing mixing layer depth.
Additional challenges for uncertainty analysis in river engineering
NASA Astrophysics Data System (ADS)
Berends, Koen; Warmink, Jord; Hulscher, Suzanne
2016-04-01
The management of rivers for improving safety, shipping and environment requires conscious effort on the part of river managers. River engineers design hydraulic works to tackle various challenges, from increasing flow conveyance to ensuring minimal water depths for environmental flow and inland shipping. Last year saw the completion of such large-scale river engineering in the 'Room for the River' programme for the Dutch Rhine River system, in which several dozen human interventions were built to increase flood safety. Engineering works in rivers are not completed in isolation from society. Rather, their benefits (increased safety, landscaping beauty) and their disadvantages (expropriation, hindrance) directly affect inhabitants. Therefore river managers are required to carefully defend their plans. The effect of engineering works on river dynamics is evaluated using hydraulic river models. Two-dimensional numerical models based on the shallow water equations provide the predictions necessary to make decisions on designs and future plans. However, like all environmental models, these predictions are subject to uncertainty. In recent years progress has been made in the identification of the main sources of uncertainty for hydraulic river models. Two of the most important sources are boundary conditions and hydraulic roughness (Warmink et al. 2013). The result of these sources of uncertainty is that the identification of a single, deterministic prediction model is a non-trivial task. This is a well-understood problem in other fields as well (most notably hydrology) and is known as equifinality. However, the particular case of human intervention modelling with hydraulic river models compounds the equifinality problem. The model that provides the reference baseline situation is usually identified through calibration and afterwards modified for the engineering intervention. This results in two distinct models, the evaluation of which yields the effect of the proposed intervention. The implicit assumption underlying such analysis is that both models are commensurable. We hypothesize that they are commensurable only to a certain extent. In an idealised study we have demonstrated that prediction performance loss should be expected with increasingly large engineering works. When accounting for parametric uncertainty of floodplain roughness in model identification, we see uncertainty bounds for predicted effects of interventions increase with increasing intervention scale. Calibration of these types of models therefore seems to have a shelf-life, beyond which calibration no longer improves prediction. Therefore a qualification scheme for model use is required that can be linked to model validity. In this study, we characterize model use along three dimensions: extrapolation (using the model with different external drivers), extension (using the model for different output or indicators) and modification (using modified models). Such use of models is expected to have implications for the applicability of surrogate modelling for efficient uncertainty analysis as well, which is recommended for future research. Warmink, J. J.; Straatsma, M. W.; Huthoff, F.; Booij, M. J. & Hulscher, S. J. M. H. 2013. Uncertainty of design water levels due to combined bed form and vegetation roughness in the Dutch river Waal. Journal of Flood Risk Management 6, 302-318. DOI: 10.1111/jfr3.12014
Penn, Lauren A.; Qian, Meng; Zhang, Enhan; Ng, Elise; Shao, Yongzhao; Berwick, Marianne; Lazovich, DeAnn; Polsky, David
2014-01-01
Background Identifying individuals at increased risk for melanoma could potentially improve public health through targeted surveillance and early detection. Studies have separately demonstrated significant associations between melanoma risk, melanocortin receptor (MC1R) polymorphisms, and indoor ultraviolet light (UV) exposure. Existing melanoma risk prediction models do not include these factors; therefore, we investigated their potential to improve the performance of a risk model. Methods Using 875 melanoma cases and 765 controls from the population-based Minnesota Skin Health Study we compared the predictive ability of a clinical melanoma risk model (Model A) to an enhanced model (Model F) using receiver operating characteristic (ROC) curves. Model A used self-reported conventional risk factors including mole phenotype categorized as “none”, “few”, “some” or “many” moles. Model F added MC1R genotype and measures of indoor and outdoor UV exposure to Model A. We also assessed the predictive ability of these models in subgroups stratified by mole phenotype (e.g. nevus-resistant (“none” and “few” moles) and nevus-prone (“some” and “many” moles)). Results Model A (the reference model) yielded an area under the ROC curve (AUC) of 0.72 (95% CI = 0.69, 0.74). Model F was improved with an AUC = 0.74 (95% CI = 0.71–0.76, p<0.01). We also observed substantial variations in the AUCs of Models A & F when examined in the nevus-prone and nevus-resistant subgroups. Conclusions These results demonstrate that adding genotypic information and environmental exposure data can increase the predictive ability of a clinical melanoma risk model, especially among nevus-prone individuals. PMID:25003831
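The model comparison described above (clinical Model A versus enhanced Model F, compared by area under the ROC curve) can be sketched as follows on synthetic case-control data; the covariate names and effect sizes are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n = 1640   # roughly the case-control total reported above
# Hypothetical covariates: conventional risk factors, MC1R risk-allele count, indoor/outdoor UV.
conventional = rng.normal(size=(n, 5))
mc1r = rng.integers(0, 3, size=(n, 1)).astype(float)
uv = rng.normal(size=(n, 2))
risk = conventional[:, 0] + 0.4 * mc1r[:, 0] + 0.3 * uv[:, 0]
y = (rng.random(n) < 1 / (1 + np.exp(-risk + 1))).astype(int)

model_a = conventional                               # clinical model
model_f = np.hstack([conventional, mc1r, uv])        # enhanced model

for name, X in [("Model A", model_a), ("Model F", model_f)]:
    p = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5,
                          method="predict_proba")[:, 1]
    print(name, "cross-validated AUC:", round(roc_auc_score(y, p), 3))
```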
Effect of Phosphate, Fluoride, and Nitrate on Gibbsite Dissolution Rate and Solubility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herting, Daniel L.
2014-01-29
Laboratory tests have been completed with simulated tank waste samples to investigate the effects of phosphate, fluoride, and nitrate on the dissolution rate and equilibrium solubility of gibbsite in sodium hydroxide solution at 22 and 40 °C. Results are compared to relevant literature data and to computer model predictions. The presence of sodium nitrate (3 M) caused a reduction in the rate of gibbsite dissolution in NaOH, but a modest increase in the equilibrium solubility of aluminum. The increase in solubility was not as large, though, as the increase predicted by the computer model. The presence of phosphate, either as sodium phosphate or sodium fluoride phosphate, had a negligible effect on the rate of gibbsite dissolution, but caused a slight increase in aluminum solubility. The magnitude of the increased solubility, relative to the increase caused by sodium nitrate, suggests that the increase is due to ionic strength (or water activity) effects, rather than being associated with the specific ion involved. The computer model predicted that phosphate would cause a slight decrease in aluminum solubility, suggesting some Al-PO4 interaction. No evidence was found of such an interaction.
Ngendahimana, David K.; Fagerholm, Cara L.; Sun, Jiayang; Bruckman, Laura S.
2017-01-01
Accelerated weathering exposures were performed on poly(ethylene-terephthalate) (PET) films. Longitudinal multi-level predictive models as a function of PET grades and exposure types were developed for the change in yellowness index (YI) and haze (%). Exposures with similar change in YI were modeled using a linear fixed-effects modeling approach. Due to the complex nature of haze formation, measurement uncertainty, and the differences in the samples’ responses, the change in haze (%) depended on individual samples’ responses and a linear mixed-effects modeling approach was used. When compared to fixed-effects models, the addition of random effects in the haze formation models significantly increased the variance explained. For both modeling approaches, diagnostic plots confirmed independence and homogeneity with normally distributed residual errors. Predictive R2 values for true prediction error and predictive power of the models demonstrated that the models were not subject to over-fitting. These models enable prediction under pre-defined exposure conditions for a given exposure time (or photo-dosage in case of UV light exposure). PET degradation under cyclic exposures combining UV light and condensing humidity is caused by photolytic and hydrolytic mechanisms causing yellowing and haze formation. Quantitative knowledge of these degradation pathways enable cross-correlation of these lab-based exposures with real-world conditions for service life prediction. PMID:28498875
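The fixed-effects versus mixed-effects modelling choice described above can be sketched with statsmodels as follows, using synthetic longitudinal haze measurements with sample-specific random intercepts and slopes; the data-generating values are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
# Hypothetical longitudinal exposure data: several PET samples measured repeatedly over time.
samples, times = 12, np.arange(0, 2000, 250)           # exposure time in hours
rows = []
for s in range(samples):
    slope = 0.002 + rng.normal(0, 0.0005)               # sample-specific haze growth rate
    for t in times:
        rows.append({"sample": s, "time": t, "haze": slope * t + rng.normal(0, 0.2)})
df = pd.DataFrame(rows)

# Fixed-effects-only fit versus a mixed-effects fit with a random intercept and slope per sample,
# mirroring the modelling choice described above for haze formation.
fixed = smf.ols("haze ~ time", data=df).fit()
mixed = smf.mixedlm("haze ~ time", data=df, groups=df["sample"], re_formula="~time").fit()
print("fixed-effects R2:", round(fixed.rsquared, 3))
print("mixed-model fixed slope:", round(mixed.params["time"], 5))
```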
Negative Self-Focused Cognitions Mediate the Effect of Trait Social Anxiety on State Anxiety
Schulz, Stefan M.; Alpers, Georg W.; Hofmann, Stefan G.
2008-01-01
The cognitive model of social anxiety predicts that negative self-focused cognitions increase anxiety when anticipating social threat. To test this prediction, 36 individuals were asked to anticipate and perform a public speaking task. During anticipation, negative self-focused cognitions or relaxation were experimentally induced while self-reported anxiety, autonomic arousal (heart rate, heart rate variability, skin conductance level), and acoustic eye-blink startle response were assessed. As predicted, negative self-focused cognitions mediated the effects of trait social anxiety on self-reported anxiety and heart rate variability during negative anticipation. Furthermore, trait social anxiety predicted increased startle amplitudes. These findings support a central assumption of the cognitive model of social anxiety. PMID:18321469
Nevers, Meredith B.; Whitman, Richard L.
2011-01-01
Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R2) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard (rather than the widely used default of 235 E. coli CFU/100 ml), together with predictive modeling, resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.
A Feature Fusion Based Forecasting Model for Financial Time Series
Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie
2014-01-01
Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in the area of prediction than two other similar models. PMID:24971455
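A rough sketch of the described pipeline (independent component analysis for feature extraction, canonical correlation analysis for feature fusion, and a support vector machine for next-day price prediction) on synthetic data; the component counts, SVR settings, and price series are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import FastICA
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n_days = 500
close = np.cumsum(rng.normal(0, 1, n_days)) + 100            # hypothetical closing-price series

# Feature set 1: lagged closing prices; feature set 2: stand-ins for the 39 technical indicators.
lags = np.column_stack([close[i:n_days - 10 + i] for i in range(10)])
technical = rng.normal(size=(lags.shape[0], 39))
target = close[10:]                                            # next-day closing price

ica_feats = FastICA(n_components=8, random_state=0).fit_transform(technical)

# Canonical correlation analysis fuses the two feature types into intrinsic features.
cca = CCA(n_components=4).fit(lags, ica_feats)
fused_a, fused_b = cca.transform(lags, ica_feats)
fused = np.hstack([fused_a, fused_b])

model = SVR(kernel="rbf", C=10.0).fit(fused[:-50], target[:-50])
pred = model.predict(fused[-50:])                              # out-of-sample next-day predictions
```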
ERIC Educational Resources Information Center
Branand, Brittany; Mashek, Debra; Wray-Lake, Laura; Coffey, John K.
2015-01-01
Consistent with predictions derived from the self-expansion model, this 3-year longitudinal study found that participation in more college groups during sophomore year predicted increases in inclusion of the college community in the self at the end of junior year, which further predicted increases in satisfaction with the college experience at the…
Predictability of the Indian Ocean Dipole in the coupled models
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao
2017-03-01
In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensemble and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in the predictions as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and initialization improvement, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
Ng, Kenney; Steinhubl, Steven R; deFilippi, Christopher; Dey, Sanjoy; Stewart, Walter F
2016-11-01
Using electronic health records data to predict events and onset of diseases is increasingly common. Relatively little is known, however, about the tradeoffs between data requirements and model utility. We examined the performance of machine learning models trained to detect prediagnostic heart failure in primary care patients using longitudinal electronic health records data. Model performance was assessed in relation to data requirements defined by the prediction window length (time before clinical diagnosis), the observation window length (duration of observation before the prediction window), the number of different data domains (data diversity), the number of patient records in the training data set (data quantity), and the density of patient encounters (data density). A total of 1684 incident heart failure cases and 13 525 sex-, age-category-, and clinic-matched controls were used for modeling. Model performance improved as (1) the prediction window length decreased, especially when <2 years; (2) the observation window length increased, but then leveled off after 2 years; (3) the training data set size increased, but then leveled off after 4000 patients; (4) more diverse data types were used, with the combination of diagnosis, medication order, and hospitalization data being most important, in that order; and (5) data were confined to patients who had ≥10 phone or face-to-face encounters in 2 years. These empirical findings suggest possible guidelines for the minimum amount and type of data needed to train effective disease onset predictive models using longitudinal electronic health records data. © 2016 American Heart Association, Inc.
Optimal flight initiation distance.
Cooper, William E; Frederick, William G
2007-01-07
Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where decreasing cost of remaining and increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.
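To make the optimality argument concrete, the sketch below maximizes an expected-fitness function of flight initiation distance numerically; the exponential risk and benefit curves and the constant escape cost are illustrative functional forms, not the forms used by the authors.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def expected_fitness(d, f0=1.0, b0=0.3, c0=0.05, risk_scale=5.0, benefit_scale=8.0):
    """Expected fitness as a function of flight initiation distance d (m).
    Illustrative assumptions: the risk of being killed falls with d, the benefit
    obtainable during the encounter falls with d (fleeing early forgoes foraging),
    and the escape cost is constant."""
    risk = np.exp(-d / risk_scale)
    benefit = b0 * np.exp(-d / benefit_scale)
    return (1.0 - risk) * (f0 + benefit) - c0

# The optimal flight initiation distance maximizes expected fitness (minimize the negative).
res = minimize_scalar(lambda d: -expected_fitness(d), bounds=(0.1, 60.0), method="bounded")
print("optimal flight initiation distance (m):", round(res.x, 2))
```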
Ágreda, Teresa; Águeda, Beatriz; Olano, José M; Vicente-Serrano, Sergio M; Fernández-Toirán, Marina
2015-09-01
Wild fungi play a critical role in forest ecosystems, and their collection is a relevant economic activity. Understanding the fungal response to climate is necessary in order to predict future fungal production in Mediterranean forests under climate change scenarios. We used a 15-year data set to model the relationship between climate and epigeous fungal abundance and productivity for the mycorrhizal and saprotrophic guilds in a Mediterranean pine forest. The models obtained were used to predict fungal productivity for the 2021-2080 period by means of regional climate change models. Simple models based on early spring temperature and summer-autumn rainfall could provide accurate estimates of fungal abundance and productivity. Models including rainfall and climatic water balance showed similar results and explanatory power for the analyzed 15-year period. However, their predictions for the 2021-2080 period diverged. Rainfall-based models predicted a maintenance of fungal yield, whereas water-balance-based models predicted a steady decrease of fungal productivity under a global warming scenario. Under Mediterranean conditions, fungi responded to weather conditions in two distinct periods, early spring and late summer-autumn, suggesting a bimodal pattern of growth. Saprotrophic and mycorrhizal fungi showed differences in their climatic controls. Increased atmospheric evaporative demand due to global warming might lead to a drop in fungal yields during the 21st century. © 2015 John Wiley & Sons Ltd.
Genomic selection for slaughter age in pigs using the Cox frailty model.
Santos, V S; Martins Filho, S; Resende, M D V; Azevedo, C F; Lopes, P S; Guimarães, S E F; Glória, L S; Silva, F F
2015-10-19
The aim of this study was to compare genomic selection methodologies using a linear mixed model and the Cox survival model. We used data from an F2 population of pigs, in which the response variable was the time in days from birth to the culling of the animal and the covariates were 238 markers [237 single nucleotide polymorphisms (SNPs) plus the halothane gene]. The data were corrected for fixed effects, and the accuracy of each method was determined based on the correlation of the ranks of predicted genomic breeding values (GBVs) from both models with the corrected phenotypic values. The analysis was repeated with a subset of the SNP markers with the largest absolute effects. The two models agreed in GBV prediction and in the estimation of marker effects for uncensored data under normality. However, when considering censored data, the Cox model with a normal random effect (S1) was more appropriate. Since there was no agreement between the linear mixed model and the imputed data (L2) for the prediction of genomic values and the estimation of marker effects, model S1 was considered superior, as it takes into account the latent variable and the censored data. Marker selection increased the correlations between the ranks of the GBVs predicted by the linear and Cox frailty models and the corrected phenotypic values, and 120 markers were required to increase the predictive ability for the trait analyzed.
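A minimal sketch of a penalized Cox proportional hazards fit on marker covariates with lifelines, standing in for the survival side of the comparison above; it omits the normal frailty term of model S1, and the synthetic marker data and penalty value are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(8)
n_animals, n_markers = 300, 20           # far fewer markers than the 238 used above, for brevity
snps = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)

# Synthetic days-to-culling with one influential marker and random censoring.
baseline = rng.exponential(scale=400, size=n_animals)
days = baseline * np.exp(-0.15 * snps[:, 0])
event = (rng.random(n_animals) < 0.8).astype(int)   # 1 = culled (observed), 0 = censored

df = pd.DataFrame(snps, columns=[f"snp{i}" for i in range(n_markers)])
df["duration"], df["event"] = days, event

# Penalized Cox fit (ridge penalty) so that many correlated markers can be handled together.
cph = CoxPHFitter(penalizer=0.5)
cph.fit(df, duration_col="duration", event_col="event")

# Rank animals by predicted hazard (lower hazard = longer predicted time to culling).
marker_cols = [f"snp{i}" for i in range(n_markers)]
gbv_rank = (-cph.predict_partial_hazard(df[marker_cols])).rank()
```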
Steen, Paul J.; Wiley, Michael J.; Schaeffer, Jeffrey S.
2010-01-01
Future alterations in land cover and climate are likely to cause substantial changes in the ranges of fish species. Predictive distribution models are an important tool for assessing the probability that these changes will cause increases or decreases in or the extirpation of species. Classification tree models that predict the probability of game fish presence were applied to the streams of the Muskegon River watershed, Michigan. The models were used to study three potential future scenarios: (1) land cover change only, (2) land cover change and a 3°C increase in air temperature by 2100, and (3) land cover change and a 5°C increase in air temperature by 2100. The analysis indicated that the expected change in air temperature and subsequent change in water temperatures would result in the decline of coldwater fish in the Muskegon watershed by the end of the 21st century while cool- and warmwater species would significantly increase their ranges. The greatest decline detected was a 90% reduction in the probability that brook trout Salvelinus fontinalis would occur in Bigelow Creek. The greatest increase was a 276% increase in the probability that northern pike Esox lucius would occur in the Middle Branch River. Changes in land cover are expected to cause large changes in a few fish species, such as walleye Sander vitreus and Chinook salmon Oncorhynchus tshawytscha, but not to drive major changes in species composition. Managers can alter stream environmental conditions to maximize the probability that species will reside in particular stream reaches through application of the classification tree models. Such models represent a good way to predict future changes, as they give quantitative estimates of the n-dimensional niches for particular species.
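A rough sketch of the classification-tree approach described above: a decision tree predicting the probability of species presence from synthetic reach-level predictors, then re-applied to the same reaches under an assumed +3 °C warming; the predictor names and the data-generating rule are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
n_reaches = 800
# Hypothetical stream-reach predictors: July mean water temperature (°C), % agricultural
# land cover in the catchment, and catchment area (km^2).
X = np.column_stack([
    rng.uniform(10, 26, n_reaches),
    rng.uniform(0, 80, n_reaches),
    rng.lognormal(3, 1, n_reaches),
])
# Synthetic 'coldwater species present' labels, concentrated in cold reaches.
y = ((X[:, 0] < 17) & (rng.random(n_reaches) < 0.8)).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Probability of presence today versus under a +3 °C warming scenario for the same reaches.
p_now = tree.predict_proba(X)[:, 1]
X_warm = X.copy()
X_warm[:, 0] += 3.0
p_2100 = tree.predict_proba(X_warm)[:, 1]
print("mean change in predicted presence probability:", round(float((p_2100 - p_now).mean()), 3))
```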
Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan
2015-10-01
Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies. © The Author(s) 2015.
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Kulkarni, Chetan S.
2016-01-01
As batteries become increasingly prevalent in complex systems such as aircraft and electric cars, monitoring and predicting battery state of charge and state of health becomes critical. In order to accurately predict the remaining battery power to support system operations for informed operational decision-making, age-dependent changes in dynamics must be accounted for. Using an electrochemistry-based model, we investigate how key parameters of the battery change as aging occurs, and develop models to describe aging through these key parameters. Using these models, we demonstrate how we can (i) accurately predict end-of-discharge for aged batteries, and (ii) predict the end-of-life of a battery as a function of anticipated usage. The approach is validated through an experimental set of randomized discharge profiles.
Rehnus, Maik; Bollmann, Kurt; Schmatz, Dirk R; Hackländer, Klaus; Braunisch, Veronika
2018-03-13
Alpine and Arctic species are considered to be particularly vulnerable to climate change, which is expected to cause habitat loss, fragmentation and, ultimately, extinction of cold-adapted species. However, the impact of climate change on glacial relict populations is not well understood, and specific recommendations for adaptive conservation management are lacking. We focused on the mountain hare (Lepus timidus) as a model species and modelled species distribution in combination with patch and landscape-based connectivity metrics. They were derived from graph-theory models to quantify changes in species distribution and to estimate the current and future importance of habitat patches for overall population connectivity. Models were calibrated based on 1,046 locations of species presence distributed across three biogeographic regions in the Swiss Alps and extrapolated according to two IPCC scenarios of climate change (RCP 4.5 & 8.5), each represented by three downscaled global climate models. The models predicted an average habitat loss of 35% (22%-55%) by 2100, mainly due to an increase in temperature during the reproductive season. An increase in habitat fragmentation was reflected in a 43% decrease in patch size, a 17% increase in the number of habitat patches and a 34% increase in inter-patch distance. However, the predicted changes in habitat availability and connectivity varied considerably between biogeographic regions: whereas the greatest habitat losses with an increase in inter-patch distance were predicted at the southern and northern edges of the species' Alpine distribution, the greatest increase in patch number and decrease in patch size are expected in the central Swiss Alps. Finally, both the number of isolated habitat patches and the number of patches crucial for maintaining the habitat network increased under the different variants of climate change. Focusing conservation action on the central Swiss Alps may help mitigate the predicted effects of climate change on population connectivity. © 2018 John Wiley & Sons Ltd.
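To make the connectivity side concrete, a minimal sketch of patch-based metrics with networkx is shown below: patches become nodes, patches closer than an assumed dispersal distance are linked, and component counts and inter-patch distances are compared between a current and a reduced patch set. Coordinates, threshold and metric choices are illustrative, not those of the mountain hare study.

# Minimal sketch: patch-based connectivity metrics with networkx (synthetic patches).
import itertools
import numpy as np
import networkx as nx

def connectivity_metrics(patches, dispersal_km=5.0):
    g = nx.Graph()
    g.add_nodes_from(range(len(patches)))
    for i, j in itertools.combinations(range(len(patches)), 2):
        d = np.linalg.norm(patches[i] - patches[j])
        if d <= dispersal_km:
            g.add_edge(i, j, weight=d)
    dists = [np.linalg.norm(a - b) for a, b in itertools.combinations(patches, 2)]
    return {
        "n_patches": len(patches),
        "n_components": nx.number_connected_components(g),
        "n_isolated": sum(1 for node in g if g.degree(node) == 0),
        "mean_interpatch_km": float(np.mean(dists)),
    }

rng = np.random.default_rng(2)
current = rng.uniform(0, 30, size=(40, 2))          # 40 patches, km coordinates
future = current[rng.random(40) > 0.35]             # scenario with ~35% of patches lost
print("current:", connectivity_metrics(current))
print("future :", connectivity_metrics(future))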
Predicting Insulin Absorption and Glucose Uptake during Exercise in Type 1 Diabetes
NASA Astrophysics Data System (ADS)
Frank, Spencer; Hinshaw, Ling; Basu, Rita; Szeri, Andrew; Basu, Ananda
2017-11-01
A dose of insulin infused into subcutaneous tissue has been shown to absorb more quickly during exercise, potentially causing hypoglycemia in persons with type 1 diabetes. We develop a model that relates exercise-induced physiological changes to enhanced insulin-absorption (k) and glucose uptake (GU). Drawing on concepts of the microcirculation we derive a relationship that reveals that k and GU are mainly determined by two physiological parameters that characterize the tissue: the tissue perfusion rate (Q) and the capillary permeability surface area (PS). Independently measured values of Q and PS from the literature are used in the model to make predictions of k and GU. We compare these predictions to experimental observations of healthy and diabetic patients that are given a meal followed by rest or exercise. The experiments show that during exercise insulin concentrations significantly increase and that glucose levels fall rapidly. The model predictions are consistent with the experiments and show that increases in Q and PS directly increase k and GU. This mechanistic understanding provides a basis for handling exercise in control algorithms for an artificial pancreas.
Cantiello, Francesco; Russo, Giorgio Ivan; Cicione, Antonio; Ferro, Matteo; Cimino, Sebastiano; Favilla, Vincenzo; Perdonà, Sisto; De Cobelli, Ottavio; Magno, Carlo; Morgia, Giuseppe; Damiano, Rocco
2016-04-01
To assess the performance of prostate health index (PHI) and prostate cancer antigen 3 (PCA3) when added to the PRIAS or Epstein criteria in predicting the presence of pathologically insignificant prostate cancer (IPCa) in patients who underwent radical prostatectomy (RP) but eligible for active surveillance (AS). An observational retrospective study was performed in 188 PCa patients treated with laparoscopic or robot-assisted RP but eligible for AS according to Epstein or PRIAS criteria. Blood and urinary specimens were collected before initial prostate biopsy for PHI and PCA3 measurements. Multivariate logistic regression analyses and decision curve analysis were carried out to identify predictors of IPCa using the updated ERSPC definition. At the multivariate analyses, the inclusion of both PCA3 and PHI significantly increased the accuracy of the Epstein multivariate model in predicting IPCa with an increase of 17 % (AUC = 0.77) and of 32 % (AUC = 0.92), respectively. The inclusion of both PCA3 and PHI also increased the predictive accuracy of the PRIAS multivariate model with an increase of 29 % (AUC = 0.87) and of 39 % (AUC = 0.97), respectively. DCA revealed that the multivariable models with the addition of PHI or PCA3 showed a greater net benefit and performed better than the reference models. In a direct comparison, PHI outperformed PCA3 performance resulting in higher net benefit. In a same cohort of patients eligible for AS, the addition of PHI and PCA3 to Epstein or PRIAS models improved their prognostic performance. PHI resulted in greater net benefit in predicting IPCa compared to PCA3.
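A minimal sketch of the modeling step, namely whether adding a marker to a base multivariable logistic model raises the AUC, is given below with scikit-learn; the synthetic variables (age, psa, phi) merely stand in for the Epstein/PRIAS covariates and for a marker such as PHI or PCA3.

# Minimal sketch: does adding a biomarker raise the AUC of a base logistic model?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 600
age = rng.normal(65, 7, n)
psa = rng.lognormal(1.5, 0.4, n)
phi = rng.normal(40, 12, n)
logit = -8 + 0.05 * age + 0.3 * psa + 0.06 * phi
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = insignificant cancer

X_base = np.column_stack([age, psa])
X_plus = np.column_stack([age, psa, phi])
Xb_tr, Xb_te, Xp_tr, Xp_te, y_tr, y_te = train_test_split(
    X_base, X_plus, y, test_size=0.3, random_state=0)

auc_base = roc_auc_score(y_te, LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
auc_plus = roc_auc_score(y_te, LogisticRegression(max_iter=1000).fit(Xp_tr, y_tr).predict_proba(Xp_te)[:, 1])
print(f"AUC base model: {auc_base:.2f}, base + marker: {auc_plus:.2f}")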
An application of quantile random forests for predictive mapping of forest attributes
E.A. Freeman; G.G. Moisen
2015-01-01
Increasingly, random forest models are used in predictive mapping of forest attributes. Traditional random forests output the mean prediction from the random trees. Quantile regression forests (QRF) is an extension of random forests developed by Nicolai Meinshausen that provides non-parametric estimates of the median predicted value as well as prediction quantiles. It...
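Since the entry only names the method, the following sketch shows the core idea of a quantile regression forest implemented on top of scikit-learn: record which training samples fall in each leaf and, for a new point, take empirical quantiles of the training responses that share its leaves. This is a simplified, unweighted approximation of Meinshausen's estimator, on synthetic data.

# Minimal sketch of a quantile regression forest (QRF) on top of scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, size=(400, 2))                    # e.g. two mapped predictors
y = 5 + 2 * X[:, 0] + rng.normal(0, 1 + 0.3 * X[:, 0])   # heteroscedastic response

rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0).fit(X, y)
train_leaves = rf.apply(X)                               # (n_train, n_trees) leaf ids

def predict_quantiles(x_new, quantiles=(0.05, 0.5, 0.95)):
    new_leaves = rf.apply(x_new.reshape(1, -1))[0]       # leaf id per tree for the query
    pooled = np.concatenate([y[train_leaves[:, t] == new_leaves[t]]
                             for t in range(len(new_leaves))])
    return np.quantile(pooled, quantiles)

print("5%, 50%, 95% prediction:", predict_quantiles(np.array([8.0, 3.0])))
print("mean prediction from plain RF:", rf.predict([[8.0, 3.0]])[0])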
Comas, Jorge; Benfeitas, Rui; Vilaprinyo, Ester; Sorribas, Albert; Solsona, Francesc; Farré, Gemma; Berman, Judit; Zorrilla, Uxue; Capell, Teresa; Sandmann, Gerhard; Zhu, Changfu; Christou, Paul; Alves, Rui
2016-09-01
Plant synthetic biology is still in its infancy. However, synthetic biology approaches have been used to manipulate and improve the nutritional and health value of staple food crops such as rice, potato and maize. With current technologies, production yields of the synthetic nutrients are a result of trial and error, and systematic rational strategies to optimize those yields are still lacking. Here, we present a workflow that combines gene expression and quantitative metabolomics with mathematical modeling to identify strategies for increasing production yields of nutritionally important carotenoids in the seed endosperm synthesized through alternative biosynthetic pathways in synthetic lines of white maize, which is normally devoid of carotenoids. Quantitative metabolomics and gene expression data are used to create and fit parameters of mathematical models that are specific to four independent maize lines. Sensitivity analysis and simulation of each model is used to predict which gene activities should be further engineered in order to increase production yields for carotenoid accumulation in each line. Some of these predictions (e.g. increasing Zmlycb/Gllycb will increase accumulated β-carotenes) are valid across the four maize lines and consistent with experimental observations in other systems. Other predictions are line specific. The workflow is adaptable to any other biological system for which appropriate quantitative information is available. Furthermore, we validate some of the predictions using experimental data from additional synthetic maize lines for which no models were developed. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.
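The line-specific pathway models are not reproduced in the abstract; the sketch below only illustrates the last step of the described workflow on a toy two-step pathway (precursor to lycopene to beta-carotene): integrate mass-action ODEs with scipy, then estimate by finite differences the sensitivity of accumulated beta-carotene to each assumed enzyme activity.

# Minimal sketch: toy mass-action carotenoid pathway and scaled sensitivity of the
# accumulated end product to each rate constant (illustrative, not the fitted maize models).
import numpy as np
from scipy.integrate import solve_ivp

def pathway(t, x, k_syn, k_lcy, k_deg):
    pre, lyc, bcar = x                        # precursor, lycopene, beta-carotene
    return [k_syn - k_lcy * pre,
            k_lcy * pre - 0.8 * lyc,          # 0.8 = assumed lycopene -> beta-carotene rate
            0.8 * lyc - k_deg * bcar]

def bcar_at_harvest(params, t_end=50.0):
    sol = solve_ivp(pathway, (0, t_end), [0, 0, 0], args=tuple(params), rtol=1e-8)
    return sol.y[2, -1]

base = np.array([1.0, 0.5, 0.1])              # k_syn, k_lcy, k_deg (arbitrary units)
y0 = bcar_at_harvest(base)
for i, name in enumerate(["k_syn", "k_lcy", "k_deg"]):
    bumped = base.copy()
    bumped[i] *= 1.01                          # +1% in one enzyme activity
    s = (bcar_at_harvest(bumped) - y0) / y0 / 0.01
    print(f"scaled sensitivity of beta-carotene to {name}: {s:+.2f}")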
Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.
Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh
2014-07-01
This study develops a procedure that is related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty that is associated with different model structures with varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performed two-stage Monte-Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then the estimation of CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model; WWQM) were compared based on data that were collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure because in this case, the more simplistic representation (first-order K-C model) of reality results in a higher uncertainty in the prediction made by the model. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management. Copyright © 2014 Elsevier Ltd. All rights reserved.
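A minimal sketch of the GLUE-style core of such a procedure, on a deliberately simple one-parameter decay model: sample parameter sets by Monte Carlo, keep the behavioral ones above a Nash-Sutcliffe threshold, and summarize the spread of their predictions with a coefficient of variation. The toy model, threshold and CV summary are placeholders for the wetland models and characteristic CV of the paper.

# Minimal sketch of a GLUE-type analysis: Monte-Carlo parameter sampling,
# behavioral-set selection by Nash-Sutcliffe efficiency, and a CV of predictions.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 40)

def model(k):                                   # toy first-order decay "wetland" model
    return 100 * np.exp(-k * t)

obs = model(0.35) + rng.normal(0, 3, t.size)    # synthetic observations

def nse(sim, obs):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

k_samples = rng.uniform(0.05, 1.0, 5000)        # stage 1: Monte-Carlo sampling
scores = np.array([nse(model(k), obs) for k in k_samples])
behavioral = k_samples[scores > 0.7]            # behavioral parameter sets

preds = np.array([model(k) for k in behavioral])   # stage 2: predictive ensemble
cv = preds.std(axis=0) / preds.mean(axis=0)        # CV of the model outcome at each time
print(f"{behavioral.size} behavioral sets; characteristic CV ~ {cv.mean():.3f}")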
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
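The models and the full set of signatures are not listed in these abstracts; the sketch below only shows the mechanics of a signature check, computing two simple, commonly used signatures (a runoff ratio and a high-flow/low-flow percentile ratio) for observed and simulated series and flagging whether the simulation reproduces them within an arbitrary 20% band.

# Minimal sketch: test whether a simulated hydrograph reproduces simple
# hydrological signatures within a tolerance band (illustrative signatures only).
import numpy as np

rng = np.random.default_rng(6)
days = 730
precip = rng.gamma(0.6, 6.0, days)                       # mm/day, synthetic forcing
kernel_obs = np.exp(-np.arange(30) / 5.0); kernel_obs /= kernel_obs.sum()
kernel_sim = np.exp(-np.arange(30) / 8.0); kernel_sim /= kernel_sim.sum()
obs_q = 0.45 * np.convolve(precip, kernel_obs, "same")   # "observed" runoff, mm/day
sim_q = 0.40 * np.convolve(precip, kernel_sim, "same")   # model simulation

def signatures(q, p):
    return {
        "runoff_ratio": q.sum() / p.sum(),
        "p95_over_p5_flow": np.percentile(q, 95) / max(np.percentile(q, 5), 1e-6),
    }

sig_obs, sig_sim = signatures(obs_q, precip), signatures(sim_q, precip)
for name in sig_obs:
    ok = abs(sig_sim[name] - sig_obs[name]) <= 0.2 * abs(sig_obs[name])   # 20% band
    status = "reproduced" if ok else "not reproduced"
    print(f"{name}: obs={sig_obs[name]:.2f} sim={sig_sim[name]:.2f} -> {status}")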
King, Zachary A; O'Brien, Edward J; Feist, Adam M; Palsson, Bernhard O
2017-01-01
The metabolic byproducts secreted by growing cells can be easily measured and provide a window into the state of a cell; they have been essential to the development of microbiology, cancer biology, and biotechnology. Progress in computational modeling of cells has made it possible to predict metabolic byproduct secretion with bottom-up reconstructions of metabolic networks. However, owing to a lack of data, it has not been possible to validate these predictions across a wide range of strains and conditions. Through literature mining, we were able to generate a database of Escherichia coli strains and their experimentally measured byproduct secretions. We simulated these strains in six historical genome-scale models of E. coli, and we report that the predictive power of the models has increased as they have expanded in size and scope. The latest genome-scale model of metabolism correctly predicts byproduct secretion for 35/89 (39%) of designs. The next-generation genome-scale model of metabolism and gene expression (ME-model) correctly predicts byproduct secretion for 40/89 (45%) of designs, and we show that ME-model predictions could be further improved through kinetic parameterization. We analyze the failure modes of these simulations and discuss opportunities to improve prediction of byproduct secretion. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
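As a self-contained illustration of the underlying computation (not the genome-scale E. coli reconstructions themselves), the sketch below runs flux balance analysis on a five-reaction toy network with scipy's linear programming: growth is maximized subject to steady-state stoichiometry, and a capacity limit on the growth branch forces overflow carbon into a secreted byproduct.

# Minimal sketch: flux balance analysis (FBA) on a toy network with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

# columns: uptake->A, A->B, A->C, B->biomass (growth), C->secretion
S = np.array([[1, -1, -1,  0,  0],    # metabolite A
              [0,  1,  0, -1,  0],    # metabolite B
              [0,  0,  1,  0, -1]])   # metabolite C (byproduct precursor)
bounds = [(10, 10), (0, None), (0, None), (0, 6), (0, None)]  # flux bounds, mmol/gDW/h
c = np.array([0, 0, 0, -1, 0])        # maximize growth = minimize -v_growth

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
v = res.x
print(f"growth flux: {v[3]:.1f}, predicted byproduct secretion flux: {v[4]:.1f}")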
Varying execution discipline to increase performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, P.L.; Maccabe, A.B.
1993-12-22
This research investigates the relationship between execution discipline and performance. The hypothesis has two parts: 1. Different execution disciplines exhibit different performance for different computations, and 2. These differences can be effectively predicted by heuristics. A machine model is developed that can vary its execution discipline. That is, the model can execute a given program using either the control-driven, data-driven or demand-driven execution discipline. This model is referred to as a "variable-execution-discipline" machine. The instruction set for the model is the Program Dependence Web (PDW). The first part of the hypothesis will be tested by simulating the execution of the machine model on a suite of computations, based on the Livermore Fortran Kernel (LFK) Test (a.k.a. the Livermore Loops), using all three execution disciplines. Heuristics are developed to predict relative performance. These heuristics predict (a) the execution time under each discipline for one iteration of each loop and (b) the number of iterations taken by that loop; then the heuristics use those predictions to develop a prediction for the execution of the entire loop. Similar calculations are performed for branch statements. The second part of the hypothesis will be tested by comparing the results of the simulated execution with the predictions produced by the heuristics. If the hypothesis is supported, then the door is open for the development of machines that can vary execution discipline to increase performance.
Task-Specific Response Strategy Selection on the Basis of Recent Training Experience
Fulvio, Jacqueline M.; Green, C. Shawn; Schrater, Paul R.
2014-01-01
The goal of training is to produce learning for a range of activities that are typically more general than the training task itself. Despite a century of research, predicting the scope of learning from the content of training has proven extremely difficult, with the same task producing narrowly focused learning strategies in some cases and broadly scoped learning strategies in others. Here we test the hypothesis that human subjects will prefer a decision strategy that maximizes performance and reduces uncertainty given the demands of the training task and that the strategy chosen will then predict the extent to which learning is transferable. To test this hypothesis, we trained subjects on a moving dot extrapolation task that makes distinct predictions for two types of learning strategy: a narrow model-free strategy that learns an input-output mapping for training stimuli, and a general model-based strategy that utilizes humans' default predictive model for a class of trajectories. When the number of distinct training trajectories is low, we predict better performance for the mapping strategy, but as the number increases, a predictive model is increasingly favored. Consonant with predictions, subject extrapolations for test trajectories were consistent with using a mapping strategy when trained on a small number of training trajectories and a predictive model when trained on a larger number. The general framework developed here can thus be useful both in interpreting previous patterns of task-specific versus task-general learning, as well as in building future training paradigms with certain desired outcomes. PMID:24391490
The global increase of noxious bloom occurrences has increased the need for phytoplankton management schemes. Such schemes require the ability to predict phytoplankton succession. Equilibrium Resources Competition theory, which is popular for predicting succession in lake systems...
Sweat loss prediction using a multi-model approach
NASA Astrophysics Data System (ADS)
Xu, Xiaojiang; Santee, William R.
2011-07-01
A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
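The combination rule itself is just an average of the two model outputs; the sketch below shows it with synthetic numbers together with the RMSD comparison used in the abstract.

# Minimal sketch: multi-model average (MMA) of two sweat-loss predictions and
# root mean square deviation (RMSD) against observations (synthetic values, grams).
import numpy as np

obs = np.array([520, 610, 480, 700, 655, 590.0])
pred_scenario = np.array([480, 650, 430, 760, 600, 560.0])   # rational model output
pred_hsda = np.array([560, 570, 520, 660, 700, 640.0])       # empirical model output
pred_mma = (pred_scenario + pred_hsda) / 2                   # multi-model average

rmsd = lambda p: np.sqrt(np.mean((p - obs) ** 2))
print(f"RMSD SCENARIO={rmsd(pred_scenario):.0f}, HSDA={rmsd(pred_hsda):.0f}, MMA={rmsd(pred_mma):.0f}")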
Salgado, J Cristian; Andrews, Barbara A; Ortuzar, Maria Fernanda; Asenjo, Juan A
2008-01-18
The prediction of the partition behaviour of proteins in aqueous two-phase systems (ATPS) using mathematical models based on their amino acid composition was investigated. The predictive models are based on the average surface hydrophobicity (ASH). The ASH was estimated by means of models that use the three-dimensional structure of proteins and by models that use only the amino acid composition of proteins. These models were evaluated for a set of 11 proteins with known experimental partition coefficients in four aqueous two-phase systems (polyethylene glycol (PEG) 4000/phosphate, sulfate, citrate and dextran), considering three levels of NaCl concentration (0.0% w/w, 0.6% w/w and 8.8% w/w). The results indicate that such prediction is feasible even though the quality of the prediction depends strongly on the ATPS and its operational conditions such as the NaCl concentration. The ATPS 0 model, which uses the three-dimensional structure, obtains similar results to those given by previous models based on variables measured in the laboratory. In addition it maintains the main characteristics of the hydrophobic resolution and intrinsic hydrophobicity reported before. Three mathematical models, ATPS I-III, based only on the amino acid composition were evaluated. The best results were obtained by the ATPS I model, which assumes that all of the amino acids are completely exposed. The performance of the ATPS I model follows the behaviour reported previously, i.e. its correlation coefficients improve as the NaCl concentration increases in the system and, therefore, the effect of the protein hydrophobicity prevails over other effects such as charge or size. Its best predictive performance was obtained for the PEG/dextran system at high NaCl concentration. An increase in the predictive capacity of at least 54.4% with respect to the models which use the three-dimensional structure of the protein was obtained for that system. In addition, the ATPS I model exhibits high correlation coefficients in that system, higher than 0.88 on average. The ATPS I model exhibited correlation coefficients higher than 0.67 for the rest of the ATPS at high NaCl concentration. Finally, we tested our best model, the ATPS I model, on the prediction of the partition coefficient of the protein invertase. We found that the predictive capacities of the ATPS I model are better in PEG/dextran systems, where the relative error of the prediction with respect to the experimental value is 15.6%.
Forecasting of monsoon heavy rains: challenges in NWP
NASA Astrophysics Data System (ADS)
Sharma, Kuldeep; Ashrit, Raghavendra; Iyengar, Gopal; Bhatla, R.; Rajagopal, E. N.
2016-05-01
The last decade has seen a tremendous improvement in the forecasting skill of numerical weather prediction (NWP) models. This is attributed to increased sophistication in NWP models, which resolve complex physical processes, advanced data assimilation, increased grid resolution and satellite observations. However, prediction of heavy rains is still a challenge, since the models exhibit large errors in rainfall amounts as well as in their spatial and temporal distribution. Two state-of-the-art NWP models have been investigated over the Indian monsoon region to assess their ability to predict heavy rainfall events. The unified model operational at the National Center for Medium Range Weather Forecasting (NCUM) and the unified model operational at the Australian Bureau of Meteorology (Australian Community Climate and Earth-System Simulator - Global, ACCESS-G) are used in this study. The recent (JJAS 2015) Indian monsoon season witnessed 6 depressions and 2 cyclonic storms, which resulted in heavy rains and flooding. The CRA (contiguous rain area) method of verification allows the decomposition of forecast errors into errors in rainfall volume, pattern and location. The case-by-case study using the CRA technique shows that the contributions to the rainfall errors from pattern and displacement are large, while the contribution from error in predicted rainfall volume is smallest.
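For readers unfamiliar with the verification method, the sketch below implements a CRA-style decomposition on a synthetic rain field: the total mean squared error is split into a displacement part (removed by optimally shifting the forecast), a volume part (the squared mean bias after shifting) and a pattern part (the remainder). The brute-force shift search and the Gaussian rain blobs are purely illustrative.

# Minimal sketch of a CRA-style error decomposition on a synthetic rain field:
# MSE_total = MSE_displacement + MSE_volume + MSE_pattern.
import numpy as np

rng = np.random.default_rng(7)
ny, nx = 60, 60
yy, xx = np.mgrid[0:ny, 0:nx]
blob = lambda cy, cx, amp: amp * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 80.0)
obs = blob(30, 30, 50.0)                       # observed rain maximum at (30, 30)
fcst = blob(34, 24, 40.0)                      # forecast: displaced and too weak

def mse(a, b):
    return np.mean((a - b) ** 2)

best_shift, mse_shift, fcst_shifted = (0, 0), mse(fcst, obs), fcst
for dy in range(-10, 11):                      # brute-force search for the best shift
    for dx in range(-10, 11):
        shifted = np.roll(np.roll(fcst, dy, axis=0), dx, axis=1)
        m = mse(shifted, obs)
        if m < mse_shift:
            best_shift, mse_shift, fcst_shifted = (dy, dx), m, shifted

mse_total = mse(fcst, obs)
mse_displacement = mse_total - mse_shift
mse_volume = (fcst_shifted.mean() - obs.mean()) ** 2
mse_pattern = mse_shift - mse_volume
print(f"best shift {best_shift}; total={mse_total:.2f} displacement={mse_displacement:.2f} "
      f"volume={mse_volume:.2f} pattern={mse_pattern:.2f}")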
Dermal uptake of phthalates from clothing: Comparison of model to human participant results.
Morrison, G C; Weschler, C J; Bekö, G
2017-05-01
In this research, we extend a model of transdermal uptake of phthalates to include a layer of clothing. When compared with experimental results, this model better estimates dermal uptake of diethylphthalate and di-n-butylphthalate (DnBP) than a previous model. The model predictions are consistent with the observation that previously exposed clothing can increase dermal uptake over that observed in bare-skin participants for the same exposure air concentrations. The model predicts that dermal uptake from clothing of DnBP is a substantial fraction of total uptake from all sources of exposure. For compounds that have high dermal permeability coefficients, dermal uptake is increased for (i) thinner clothing, (ii) a narrower gap between clothing and skin, and (iii) longer time intervals between laundering and wearing. Enhanced dermal uptake is most pronounced for compounds with clothing-air partition coefficients between 10^4 and 10^7. In the absence of direct measurements of cotton cloth-air partition coefficients, dermal exposure may be predicted using equilibrium data for compounds in equilibrium with cellulose and water, in combination with computational methods of predicting partition coefficients. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Hierarchical time series bottom-up approach for forecast the export value in Central Java
NASA Astrophysics Data System (ADS)
Mahkya, D. A.; Ulama, B. S.; Suhartono
2017-10-01
The purpose of this study is to obtain the best model for predicting the export value of Central Java using a hierarchical time series approach. Export value is one injection variable in a country's economy, meaning that if a country's export value increases, its economy grows further. Appropriate modeling is therefore needed to predict the export value, especially in Central Java. Export values in Central Java are grouped into 21 commodities, each with a different pattern. One time series approach that can handle this structure is a hierarchical approach, here applied bottom-up. The individual series at all levels are forecast using Autoregressive Integrated Moving Average (ARIMA), Radial Basis Function Neural Network (RBFNN), and hybrid ARIMA-RBFNN models. The best models are selected using the Symmetric Mean Absolute Percentage Error (sMAPE). The analysis shows that, for the export value of Central Java, the bottom-up approach with hybrid ARIMA-RBFNN modeling can be used for long-term predictions, while for short- and medium-term predictions the bottom-up approach with RBFNN modeling can be used. Overall, the bottom-up approach with RBFNN modeling gives the best result.
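A minimal sketch of the bottom-up step and the sMAPE criterion is given below, with a simple drift forecast standing in for the per-commodity ARIMA/RBFNN models; the number of commodities and all values are synthetic.

# Minimal sketch: bottom-up hierarchical forecasting. Forecast each commodity series
# separately, then sum the bottom-level forecasts to get the total export value forecast.
import numpy as np

rng = np.random.default_rng(8)
n_months, n_commodities = 60, 4
trend = np.linspace(10, 20, n_months)[:, None] * rng.uniform(0.5, 2.0, n_commodities)
series = trend + rng.normal(0, 1.0, (n_months, n_commodities))   # bottom-level series

train, test = series[:-12], series[-12:]

def drift_forecast(y, h):                       # naive forecast with linear drift
    slope = (y[-1] - y[0]) / (len(y) - 1)
    return y[-1] + slope * np.arange(1, h + 1)

fc_bottom = np.column_stack([drift_forecast(train[:, j], 12)
                             for j in range(n_commodities)])
fc_total_bottom_up = fc_bottom.sum(axis=1)      # bottom-up aggregate forecast
actual_total = test.sum(axis=1)

def smape(actual, forecast):
    return 100 * np.mean(2 * np.abs(forecast - actual) / (np.abs(actual) + np.abs(forecast)))

print(f"sMAPE of bottom-up total forecast: {smape(actual_total, fc_total_bottom_up):.1f}%")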
Sowande, O S; Oyewale, B F; Iyasere, O S
2010-06-01
The relationships between live weight and eight body measurements of West African Dwarf (WAD) goats were studied using 211 animals under farm conditions. The animals were categorized based on age and sex. Data obtained on height at withers (HW), heart girth (HG), body length (BL), head length (HL), and length of hindquarter (LHQ) were fitted into simple linear, allometric, and multiple-regression models to predict live weight from the body measurements according to age group and sex. Results showed that live weight, HG, BL, LHQ, HL, and HW increased with the age of the animals. In the multiple-regression model, HG and HL best fit the model for goat kids; HG, HW, and HL for goats aged 13-24 months; while HG, LHQ, HW, and HL best fit the model for goats aged 25-36 months. Coefficients of determination (R^2) for the linear and allometric models for predicting the live weight of WAD goats increased with age in all the body measurements, with HG being the most satisfactory single measurement for predicting the live weight of WAD goats. Sex had a significant influence on the models, with R^2 values consistently higher in females except for the models based on LHQ and HW.
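To illustrate the three model forms compared, the sketch below fits a simple linear, an allometric (log-log) and a multiple-regression model of live weight on synthetic heart girth and body length values; the coefficients are invented, not those of the WAD goat data.

# Minimal sketch: simple linear, allometric (log-log) and multiple regression
# models of live weight (LW) on body measurements, with R^2 for comparison.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n = 200
hg = rng.uniform(40, 75, n)                          # heart girth, cm
bl = rng.uniform(35, 65, n)                          # body length, cm
lw = 0.0002 * hg ** 2.4 * bl ** 0.5 * np.exp(rng.normal(0, 0.08, n))  # kg, synthetic

lin = LinearRegression().fit(hg.reshape(-1, 1), lw)                   # LW = a + b*HG
allo = LinearRegression().fit(np.log(hg).reshape(-1, 1), np.log(lw))  # ln LW = ln a + b ln HG
multi = LinearRegression().fit(np.column_stack([hg, bl]), lw)         # LW = a + b*HG + c*BL

print(f"linear R^2     = {lin.score(hg.reshape(-1, 1), lw):.2f}")
print(f"allometric R^2 = {allo.score(np.log(hg).reshape(-1, 1), np.log(lw)):.2f} (on log scale)")
print(f"multiple  R^2  = {multi.score(np.column_stack([hg, bl]), lw):.2f}")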
NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling
NASA Technical Reports Server (NTRS)
Rumsey, C. L.; Lee-Rausch, E. M.
2012-01-01
Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.
A Systems Model of Parkinson's Disease Using Biochemical Systems Theory.
Sasidharakurup, Hemalatha; Melethadathil, Nidheesh; Nair, Bipin; Diwakar, Shyam
2017-08-01
Parkinson's disease (PD), a neurodegenerative disorder, affects millions of people and has gained attention because of its clinical roles affecting behaviors related to motor and nonmotor symptoms. Although studies on PD from various aspects are becoming popular, few rely on predictive systems modeling approaches. Using Biochemical Systems Theory (BST), this article attempts to model and characterize dopaminergic cell death and understand pathophysiology of progression of PD. PD pathways were modeled using stochastic differential equations incorporating law of mass action, and initial concentrations for the modeled proteins were obtained from literature. Simulations suggest that dopamine levels were reduced significantly due to an increase in dopaminergic quinones and 3,4-dihydroxyphenylacetaldehyde (DOPAL) relating to imbalances compared to control during PD progression. Associating to clinically observed PD-related cell death, simulations show abnormal parkin and reactive oxygen species levels with an increase in neurofibrillary tangles. While relating molecular mechanistic roles, the BST modeling helps predicting dopaminergic cell death processes involved in the progression of PD and provides a predictive understanding of neuronal dysfunction for translational neuroscience.
Das, Rudra Narayan; Roy, Kunal
2014-06-01
Hazardous potential of ionic liquids is becoming an issue of high concern with increasing application of these compounds in various industrial processes. Predictive toxicological modeling on ionic liquids provides a rational assessment strategy and aids in developing suitable guidance for designing novel analogues. The present study attempts to explore the chemical features of ionic liquids responsible for their ecotoxicity towards the green algae Scenedesmus vacuolatus by developing mathematical models using extended topochemical atom (ETA) indices along with other categories of chemical descriptors. The entire study has been conducted with reference to the OECD guidelines for QSAR model development using predictive classification and regression modeling strategies. The best models from both the analyses showed that ecotoxicity of ionic liquids can be decreased by reducing chain length of cationic substituents and increasing hydrogen bond donor feature in cations, and replacing bulky unsaturated anions with simple saturated moiety having less lipophilic heteroatoms. Copyright © 2013 Elsevier Ltd. All rights reserved.
Quantifying the predictive consequences of model error with linear subspace analysis
White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.
2014-01-01
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
Factors affecting species distribution predictions: A simulation modeling experiment
Gordon C. Reese; Kenneth R. Wilson; Jennifer A. Hoeting; Curtis H. Flather
2005-01-01
Geospatial species sample data (e.g., records with location information from natural history museums or annual surveys) are rarely collected optimally, yet are increasingly used for decisions concerning our biological heritage. Using computer simulations, we examined factors that could affect the performance of autologistic regression (ALR) models that predict species...
Evaluation of a Social Contextual Model of Delinquency: A Cross-Study Replication.
ERIC Educational Resources Information Center
Scaramella, Laura V.; Conger, Rand D.; Spoth, Richard; Simons, Ronald L.
2002-01-01
Examined three theories for predicting risk for delinquency during adolescence with sixth- and seventh-grade students: an individual difference perspective, social interactional model, and social contextual approach. Found that lack of nurturant and involved parenting indirectly predicted delinquency by increasing antisocial behavior and deviant…
Interest is increasing in using biological community data to provide information on the specific types of anthropogenic influences impacting streams. We built empirical models that predict the level of six different types of stress with fish and benthic macroinvertebrate data as...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paini, Alicia, E-mail: alicia.paini@rdls.nestle.co; Nestle Research Center, PO Box 44, Lausanne; Punt, Ans
2010-05-15
Estragole has been shown to be hepatocarcinogenic in rodent species at high-dose levels. Translation of these results into the likelihood of formation of DNA adducts, mutation, and ultimately cancer upon more realistic low-dose exposures remains a challenge. Recently we have developed physiologically based biokinetic (PBBK) models for rat and human predicting bioactivation of estragole. These PBBK models, however, predict only kinetic characteristics. The present study describes the extension of the PBBK model to a so-called physiologically based biodynamic (PBBD) model predicting in vivo DNA adduct formation of estragole in rat liver. This PBBD model was developed using in vitro data on DNA adduct formation in rat primary hepatocytes exposed to 1'-hydroxyestragole. The model was extended by linking the area under the curve for 1'-hydroxyestragole formation predicted by the PBBK model to the area under the curve for 1'-hydroxyestragole in the in vitro experiments. The outcome of the PBBD model revealed a linear increase in DNA adduct formation with increasing estragole doses up to 100 mg/kg bw. Although DNA adduct formation of genotoxic carcinogens is generally seen as a biomarker of exposure rather than a biomarker of response, the PBBD model now developed is one step closer to the ultimate toxic effect of estragole than the PBBK model described previously. Comparison of the PBBD model outcome to available data showed that the model adequately predicts the dose-dependent level of DNA adduct formation. The PBBD model predicts DNA adduct formation at low levels of exposure up to a dose level shown to cause cancer in rodent bioassays, providing a proof of principle for modeling a toxicodynamic in vivo endpoint on the basis of solely in vitro experimental data.
The development of a model to predict BW gain of growing cattle fed grass silage-based diets.
Huuskonen, A; Huhtanen, P
2015-08-01
The objective of this meta-analysis was to develop and validate empirical equations predicting BW gain (BWG) and carcass traits of growing cattle from intake and diet composition variables. The modelling was based on treatment mean data from feeding trials in growing cattle, in which the nutrient supply was manipulated by wide ranges of forage and concentrate factors. The final dataset comprised 527 diets in 116 studies. The diets were mainly based on grass silage or grass silage partly or completely replaced by whole-crop silages, hay or straw. The concentrate feeds consisted of cereal grains, fibrous by-products and protein supplements. Mixed model regression analysis with a random study effect was used to develop prediction equations for BWG and carcass traits. The best-fit models included linear and quadratic effects of metabolisable energy (ME) intake per metabolic BW (BW0.75), linear effects of BW0.75, and dietary concentrations of NDF, fat and feed metabolisable protein (MP) as significant variables. Although diet variables had significant effects on BWG, their contribution to improve the model predictions compared with ME intake models was small. Feed MP rather than total MP was included in the final model, since it is less correlated to dietary ME concentration than total MP. None of the quadratic terms of feed variables was significant (P>0.10) when included in the final models. Further, additional feed variables (e.g. silage fermentation products, forage digestibility) did not have significant effects on BWG. For carcass traits, increased ME intake (ME/BW0.75) improved both dressing proportion (P0.10) effect on dressing proportion or carcass conformation score, but it increased (P<0.01) carcass fat score. The current study demonstrated that ME intake per BW0.75 was clearly the most important variable explaining the BWG response in growing cattle. The effect of increased ME supply displayed diminishing responses that could be associated with increased energy concentration of BWG, reduced diet metabolisability (proportion of ME of gross energy) and/or decreased efficiency of ME utilisation for growth with increased intake. Negative effects of increased dietary NDF concentration on BWG were smaller compared to responses that energy evaluation systems predict for energy retention. The present results showed only marginal effects of protein supply on BWG in growing cattle.
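A minimal sketch of the regression structure described (linear and quadratic ME-intake terms plus diet covariates, with a random study effect) is shown below using statsmodels' mixed linear model on synthetic treatment means; variable names follow the abstract, but all values and coefficients are invented.

# Minimal sketch: mixed-model regression of BW gain on ME intake (linear + quadratic)
# and diet covariates, with a random study intercept (statsmodels MixedLM).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 300
df = pd.DataFrame({
    "study": rng.integers(0, 60, n),
    "mei": rng.uniform(0.8, 1.6, n),       # ME intake per kg metabolic BW (MJ/d), synthetic
    "ndf": rng.uniform(300, 550, n),       # dietary NDF, g/kg DM
    "mp": rng.uniform(60, 110, n),         # feed metabolisable protein, g/kg DM
})
study_effect = rng.normal(0, 60, 60)[df["study"]]
df["bwg"] = (-400 + 1400 * df["mei"] - 250 * df["mei"] ** 2
             - 0.4 * df["ndf"] + 1.5 * df["mp"] + study_effect + rng.normal(0, 50, n))

model = smf.mixedlm("bwg ~ mei + I(mei**2) + ndf + mp", df, groups=df["study"])
result = model.fit()
print(result.summary())                    # fixed effects plus the study variance component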
NASA Astrophysics Data System (ADS)
Wichmann, Matthias C.; Groeneveld, Jürgen; Jeltsch, Florian; Grimm, Volker
2005-07-01
The predicted climate change causes deep concerns on the effects of increasing temperatures and changing precipitation patterns on species viability and, in turn, on biodiversity. Models of Population Viability Analysis (PVA) provide a powerful tool to assess the risk of species extinction. However, most PVA models do not take into account the potential effects of behavioural adaptations. Organisms might adapt to new environmental situations and thereby mitigate negative effects of climate change. To demonstrate such mitigation effects, we use an existing PVA model describing a population of the tawny eagle ( Aquila rapax) in the southern Kalahari. This model does not include behavioural adaptations. We develop a new model by assuming that the birds enlarge their average territory size to compensate for lower amounts of precipitation. Here, we found the predicted increase in risk of extinction due to climate change to be much lower than in the original model. However, this "buffering" of climate change by behavioural adaptation is not very effective in coping with increasing interannual variances. We refer to further examples of ecological "buffering mechanisms" from the literature and argue that possible buffering mechanisms should be given due consideration when the effects of climate change on biodiversity are to be predicted.
Using a knowledge-based planning solution to select patients for proton therapy.
Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R
2017-08-01
Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based-planning solution, uses plan-libraries to model and predict organ-at-risk (OAR) dose-volume-histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH-predictions and whether these could correctly identify patients for proton therapy. Model PROT and Model PHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based-plans (KBPs) were made for ten evaluation-patients. DVH-prediction accuracy was analyzed by comparing predicted-vs-achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if predicted Model PHOT mean dose minus predicted Model PROT mean dose (ΔPrediction) for combined OARs was ≥6Gy, and benchmarked using achieved KBP doses. Achieved and predicted Model PROT /Model PHOT mean dose R 2 was 0.95/0.98. Generally, achieved mean dose for Model PHOT /Model PROT KBPs was respectively lower/higher than predicted. Comparing Model PROT /Model PHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by <2Gy, on average. ΔPrediction≥6Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH-predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan-solution could improve results. Copyright © 2017 Elsevier B.V. All rights reserved.
Frequency-dependent variation in mimetic fidelity in an intraspecific mimicry system
Iserbyt, Arne; Bots, Jessica; Van Dongen, Stefan; Ting, Janice J.; Van Gossum, Hans; Sherratt, Thomas N.
2011-01-01
Contemporary theory predicts that the degree of mimetic similarity of mimics towards their model should increase as the mimic/model ratio increases. Thus, when the mimic/model ratio is high, then the mimic has to resemble the model very closely to still gain protection from the signal receiver. To date, empirical evidence of this effect is limited to a single example where mimicry occurs between species. Here, for the first time, we test whether mimetic fidelity varies with mimic/model ratios in an intraspecific mimicry system, in which signal receivers are the same species as the mimics and models. To this end, we studied a polymorphic damselfly with a single male phenotype and two female morphs, in which one morph resembles the male phenotype while the other does not. Phenotypic similarity of males to both female morphs was quantified using morphometric data for multiple populations with varying mimic/model ratios repeated over a 3 year period. Our results demonstrate that male-like females were overall closer in size to males than the other female morph. Furthermore, the extent of morphological similarity between male-like females and males, measured as Mahalanobis distances, was frequency-dependent in the direction predicted. Hence, this study provides direct quantitative support for the prediction that the mimetic similarity of mimics to their models increases as the mimic/model ratio increases. We suggest that the phenomenon may be widespread in a range of mimicry systems. PMID:21367784
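The similarity measure named in the abstract can be computed as in the sketch below: the Mahalanobis distance of each female morph's mean morphometrics from the male distribution, using scipy; traits and values are synthetic stand-ins, not the damselfly measurements.

# Minimal sketch: Mahalanobis distance of each female morph from the male
# phenotype, based on synthetic multivariate morphometric measurements.
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(11)
males = rng.multivariate_normal([30.0, 4.0, 21.0], np.diag([1.0, 0.04, 0.5]), 120)
malelike_females = rng.multivariate_normal([30.4, 4.1, 21.3], np.diag([1.0, 0.04, 0.5]), 60)
other_females = rng.multivariate_normal([32.5, 4.6, 23.0], np.diag([1.0, 0.04, 0.5]), 60)

vi = np.linalg.inv(np.cov(males, rowvar=False))        # inverse covariance of male traits
d_mimic = mahalanobis(malelike_females.mean(axis=0), males.mean(axis=0), vi)
d_other = mahalanobis(other_females.mean(axis=0), males.mean(axis=0), vi)
print(f"male-like females vs males: {d_mimic:.2f}; other morph vs males: {d_other:.2f}")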
Modeling of surface tension effects in venturi scrubbing
NASA Astrophysics Data System (ADS)
Ott, Robert M.; Wu, Tatsu K. L.; Crowder, Jerry W.
A modified model of venturi scrubber performance has been developed that addresses two effects of liquid surface tension: its effect on droplet size and its effect on particle penetration into the droplet. The predictions of the model indicate that, in general, collection efficiency increases with a decrease in liquid surface tension, but the range over which this increase is significant depends on the particle size and on the scrubber operating parameters. The predictions further indicate that the increases in collection efficiency are almost totally due to the effect of liquid surface tension on the mean droplet size, and that the collection efficiency is not significantly affected by the ability of the particle to penetrate the droplet.
Eco-genetic modeling of contemporary life-history evolution.
Dunlop, Erin S; Heino, Mikko; Dieckmann, Ulf
2009-10-01
We present eco-genetic modeling as a flexible tool for exploring the course and rates of multi-trait life-history evolution in natural populations. We build on existing modeling approaches by combining features that facilitate studying the ecological and evolutionary dynamics of realistically structured populations. In particular, the joint consideration of age and size structure enables the analysis of phenotypically plastic populations with more than a single growth trajectory, and ecological feedback is readily included in the form of density dependence and frequency dependence. Stochasticity and life-history trade-offs can also be implemented. Critically, eco-genetic models permit the incorporation of salient genetic detail such as a population's genetic variances and covariances and the corresponding heritabilities, as well as the probabilistic inheritance and phenotypic expression of quantitative traits. These inclusions are crucial for predicting rates of evolutionary change on both contemporary and longer timescales. An eco-genetic model can be tightly coupled with empirical data and therefore may have considerable practical relevance, in terms of generating testable predictions and evaluating alternative management measures. To illustrate the utility of these models, we present as an example an eco-genetic model used to study harvest-induced evolution of multiple traits in Atlantic cod. The predictions of our model (most notably that harvesting induces a genetic reduction in age and size at maturation, an increase or decrease in growth capacity depending on the minimum-length limit, and an increase in reproductive investment) are corroborated by patterns observed in wild populations. The predicted genetic changes occur together with plastic changes that could phenotypically mask the former. Importantly, our analysis predicts that evolutionary changes show little signs of reversal following a harvest moratorium. This illustrates how predictions offered by eco-genetic models can enable and guide evolutionarily sustainable resource management.
Hayn, Dieter; Kreiner, Karl; Ebner, Hubert; Kastner, Peter; Breznik, Nada; Rzepka, Angelika; Hofmann, Axel; Gombotz, Hans; Schreier, Günter
2017-06-14
Blood transfusion is a highly prevalent procedure in hospitalized patients and in some clinical scenarios it has lifesaving potential. However, in most cases transfusion is administered to hemodynamically stable patients with no benefit, but increased odds of adverse patient outcomes and substantial direct and indirect cost. Therefore, the concept of Patient Blood Management has increasingly gained importance to pre-empt and reduce transfusion and to identify the optimal transfusion volume for an individual patient when transfusion is indicated. It was our aim to describe how predictive modeling and machine learning tools applied to pre-operative data can be used to predict the amount of red blood cells to be transfused during surgery and to prospectively optimize blood ordering schedules. In addition, the data derived from the predictive models should be used to benchmark different hospitals concerning their blood transfusion patterns. 6,530 case records obtained for elective surgeries from 16 centers taking part in two studies conducted in 2004-2005 and 2009-2010 were analyzed. Transfused red blood cell volume was predicted using random forests. Separate models were trained for overall data, for each center and for each of the two studies. Important characteristics of different models were compared with one another. Our results indicate that predictive modeling applied prior to surgery can predict the transfused volume of red blood cells more accurately (correlation coefficient cc = 0.61) than state-of-the-art algorithms (cc = 0.39). We found significantly different patterns of feature importance a) in different hospitals and b) between study 1 and study 2. We conclude that predictive modeling can be used to benchmark the importance of different features on the models derived with data from different hospitals. This might help to optimize crucial processes in a specific hospital, even in other scenarios beyond Patient Blood Management.
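A minimal sketch of the prediction step, a random forest regressor trained on pre-operative variables and scored with the correlation coefficient reported in the abstract, is given below; the feature names are plausible placeholders rather than the study's case-record fields.

# Minimal sketch: random forest predicting transfused red blood cell volume from
# pre-operative variables, scored by the correlation coefficient (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(12)
n = 2000
hb = rng.normal(13.5, 1.8, n)                 # pre-operative haemoglobin, g/dL
weight = rng.normal(78, 14, n)                # body weight, kg
surgery_risk = rng.integers(0, 3, n)          # 0=low, 1=medium, 2=high (placeholder grouping)
rbc_ml = np.clip(900 - 55 * hb + 250 * surgery_risk + rng.normal(0, 120, n), 0, None)

X = np.column_stack([hb, weight, surgery_risk])
X_tr, X_te, y_tr, y_te = train_test_split(X, rbc_ml, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
cc = np.corrcoef(y_te, rf.predict(X_te))[0, 1]
print(f"correlation coefficient between predicted and transfused volume: cc = {cc:.2f}")
print("feature importances (hb, weight, surgery_risk):", np.round(rf.feature_importances_, 2))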
Hilkens, N A; Algra, A; Greving, J P
2016-01-01
ESSENTIALS: Prediction models may help to identify patients at high risk of bleeding on antiplatelet therapy. We identified existing prediction models for bleeding and validated them in patients with cerebral ischemia. Five prediction models were identified, all of which had some methodological shortcomings. Performance in patients with cerebral ischemia was poor. Background Antiplatelet therapy is widely used in secondary prevention after a transient ischemic attack (TIA) or ischemic stroke. Bleeding is the main adverse effect of antiplatelet therapy and is potentially life threatening. Identification of patients at increased risk of bleeding may help target antiplatelet therapy. This study sought to identify existing prediction models for intracranial hemorrhage or major bleeding in patients on antiplatelet therapy and evaluate their performance in patients with cerebral ischemia. We systematically searched PubMed and Embase for existing prediction models up to December 2014. The methodological quality of the included studies was assessed with the CHARMS checklist. Prediction models were externally validated in the European Stroke Prevention Study 2, comprising 6602 patients with a TIA or ischemic stroke. We assessed discrimination and calibration of included prediction models. Five prediction models were identified, of which two were developed in patients with previous cerebral ischemia. Three studies assessed major bleeding, one studied intracerebral hemorrhage and one gastrointestinal bleeding. None of the studies met all criteria of good quality. External validation showed poor discriminative performance, with c-statistics ranging from 0.53 to 0.64 and poor calibration. A limited number of prediction models is available that predict intracranial hemorrhage or major bleeding in patients on antiplatelet therapy. The methodological quality of the models varied, but was generally low. Predictive performance in patients with cerebral ischemia was poor. In order to reliably predict the risk of bleeding in patients with cerebral ischemia, development of a prediction model according to current methodological standards is needed. © 2015 International Society on Thrombosis and Haemostasis.
Fluschnik, Nina; Ojeda, Francisco; Zeller, Tanja; Jørgensen, Torben; Kuulasmaa, Kari; Becher, Peter Moritz; Sinning, Christoph; Blankenberg, Stefan; Westermann, Dirk
2018-01-01
Growth differentiation factor-15 (GDF-15), Cystatin C and C-reactive protein (CRP) have been discussed as biomarkers for prediction of cardiac diseases. The aim of this study was to investigate the predictive value of single and repeated measurements of GDF-15 compared to Cystatin C and CRP for incidence of heart failure (HF) and death due to coronary heart disease (CHD) in the general population. Levels of GDF-15, CRP and Cystatin C were determined in three repeated measurements collected 5 years apart in the DAN-MONICA (Danish-Multinational MONitoring of trends and determinants in Cardiovascular disease) cohort (participants at baseline n = 3785). Cox regression models adjusted for cardiovascular risk factors revealed significantly increased hazard ratios (HR) for GDF-15 for incident HF 1.36 (HR per interquartile range (IQR) increase, 95% confidence interval (CI): 1.16; 1.59) and for death from CHD 1.51 (HR per IQR increase, 95% CI: 1.31, 1.75) (both with p<0.001). Joint modeling of time-to-event and longitudinal GDF-15 over a median 27-year follow-up period showed that the marker evolution was positively associated with death of CHD (HR per IQR increase 3.02 95% CI: (2.26, 4.04), p < 0.001) and HF (HR per IQR increase 2.12 95% CI: (1.54, 2.92), p<0.001). However using Cox models with follow-up time starting at the time of the third examination, serial measurement of GDF-15, modeled as changes between the measurements, did not improve prediction over that of the most recent measurement. GDF-15 is a promising biomarker for prediction of HF and death due to CHD in the general population, which may provide prognostic information to already established clinical biomarkers. Repeated measurements of GDF-15 displayed only a slight improvement in the prediction of these endpoints compared to a single measurement.
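To show how a hazard ratio "per IQR increase" is obtained in practice, the sketch below fits a Cox proportional hazards model with lifelines after scaling the biomarker by its interquartile range; the data are simulated, and the joint longitudinal-survival model of the study is not reproduced.

# Minimal sketch: Cox proportional hazards model with the biomarker scaled by its
# interquartile range, so the fitted hazard ratio is "per IQR increase" (simulated data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(13)
n = 2000
gdf15 = rng.lognormal(6.8, 0.5, n)                       # biomarker, arbitrary units
age = rng.normal(55, 10, n)
rate = 0.002 * np.exp(0.0005 * (gdf15 - gdf15.mean()) + 0.03 * (age - 55))
time = rng.exponential(1 / rate)
event = time < 27                                        # administrative censoring at 27 years
time = np.minimum(time, 27)

iqr = np.subtract(*np.percentile(gdf15, [75, 25]))
df = pd.DataFrame({"gdf15_per_iqr": gdf15 / iqr, "age": age,
                   "time": time, "event": event.astype(int)})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)                                # HR per IQR increase for gdf15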
Fear and Loving in Las Vegas: Evolution, Emotion, and Persuasion.
Griskevicius, Vladas; Goldstein, Noah J; Mortensen, Chad R; Sundie, Jill M; Cialdini, Robert B; Kenrick, Douglas T
2009-06-01
How do arousal-inducing contexts, such as frightening or romantic television programs, influence the effectiveness of basic persuasion heuristics? Different predictions are made by three theoretical models: A general arousal model predicts that arousal should increase effectiveness of heuristics; an affective valence model predicts that effectiveness should depend on whether the context elicits positive or negative affect; an evolutionary model predicts that persuasiveness should depend on both the specific emotion that is elicited and the content of the particular heuristic. Three experiments examined how fear-inducing versus romantic contexts influenced the effectiveness of two widely used heuristics-social proof (e.g., "most popular") and scarcity (e.g., "limited edition"). Results supported predictions from an evolutionary model, showing that fear can lead scarcity appeals to be counter-persuasive, and that romantic desire can lead social proof appeals to be counter-persuasive. The findings highlight how an evolutionary theoretical approach can lead to novel theoretical and practical marketing insights.
Shetty, N; Løvendahl, P; Lund, M S; Buitenhuis, A J
2017-01-01
The present study explored the effectiveness of Fourier transform mid-infrared (FT-IR) spectral profiles as a predictor for dry matter intake (DMI) and residual feed intake (RFI). The partial least squares regression method was used to develop the prediction models. The models were validated using different external test sets, one randomly leaving out 20% of the records (validation A), the second randomly leaving out 20% of cows (validation B), and a third (for DMI prediction models) randomly leaving out one cow (validation C). The data included 1,044 records from 140 cows; 97 were Danish Holstein and 43 Danish Jersey. Results showed better accuracies for validation A compared with other validation methods. Milk yield (MY) contributed largely to DMI prediction; MY explained 59% of the variation and the validated model error root mean square error of prediction (RMSEP) was 2.24 kg. The model was improved by adding live weight (LW) as an additional predictor trait, where the accuracy R2 increased from 0.59 to 0.72 and error RMSEP decreased from 2.24 to 1.83 kg. When only the milk FT-IR spectral profile was used in DMI prediction, a lower prediction ability was obtained, with R2 = 0.30 and RMSEP = 2.91 kg. However, once the spectral information was added, along with MY and LW as predictors, model accuracy improved and R2 increased to 0.81 and RMSEP decreased to 1.49 kg. Prediction accuracies of RFI changed throughout lactation. The RFI prediction model for the early-lactation stage was better compared with across lactation or mid- and late-lactation stages, with R2 = 0.46 and RMSEP = 1.70. The most important spectral wavenumbers that contributed to DMI and RFI prediction models included fat, protein, and lactose peaks. Comparable prediction results were obtained when using infrared-predicted fat, protein, and lactose instead of full spectra, indicating that FT-IR spectral data do not add significant new information to improve DMI and RFI prediction models. Therefore, in practice, if full FT-IR spectral data are not stored, it is possible to achieve similar DMI or RFI prediction results based on standard milk control data. For DMI, the milk fat region was responsible for the major variation in milk spectra; for RFI, the major variation in milk spectra was within the milk protein region. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
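A minimal sketch of the partial least squares workflow described above, using scikit-learn with synthetic stand-ins for the FT-IR spectra, milk yield and live weight; all names, dimensions and coefficients are hypothetical, not taken from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n_records, n_wavenumbers = 1044, 300                    # records x FT-IR wavenumbers (hypothetical)
spectra = rng.normal(size=(n_records, n_wavenumbers))
milk_yield = rng.normal(30, 5, size=(n_records, 1))     # MY, kg/day
live_weight = rng.normal(600, 50, size=(n_records, 1))  # LW, kg
X = np.hstack([spectra, milk_yield, live_weight])       # spectra + MY + LW as predictors
dmi = 20 + 0.3 * milk_yield[:, 0] + 0.01 * live_weight[:, 0] + rng.normal(0, 2, n_records)

# "Validation A": randomly leave out 20% of the records.
X_train, X_test, y_train, y_test = train_test_split(X, dmi, test_size=0.2, random_state=0)

pls = PLSRegression(n_components=10)
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

rmsep = np.sqrt(mean_squared_error(y_test, y_pred))
print(f"R2 = {r2_score(y_test, y_pred):.2f}, RMSEP = {rmsep:.2f} kg")
```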
Abdel-Dayem, M S; Annajar, B B; Hanafi, H A; Obenauer, P J
2012-05-01
The increased cases of cutaneous leishmaniasis vectored by Phlebotomus papatasi (Scopoli) in Libya have driven considerable effort to develop a predictive model for the potential geographical distribution of this disease. We collected adult P. papatasi from 17 sites in Musrata and Yefern regions of Libya using four different attraction traps. Our trap results and literature records describing the distribution of P. papatasi were incorporated into a MaxEnt algorithm prediction model that used 22 environmental variables. The model showed a high performance (AUC = 0.992 and 0.990 for training and test data, respectively). High suitability for P. papatasi was predicted to be largely confined to the coast at altitudes <600 m. Regions south of 30° N latitude were calculated as unsuitable for this species. Jackknife analysis identified precipitation as having the most significant predictive power, while temperature and elevation variables were less influential. The National Leishmaniasis Control Program in Libya may find this information useful in their efforts to control zoonotic cutaneous leishmaniasis. Existing records are strongly biased toward a few geographical regions, and therefore, further sand fly collections are warranted that should include documentation of such factors as soil texture and humidity, land cover, and normalized difference vegetation index (NDVI) data to increase the model's predictive power.
Cheng, Jeffrey K.; Stoilov, Ivan; Mecham, Robert P.
2013-01-01
Decreased elastin in mice (Eln+/−) yields a functioning vascular system with elevated blood pressure and increased arterial stiffness that is morphologically distinct from wild-type mice (WT). Yet, function is retained enough that there is no appreciable effect on life span and some mechanical properties are maintained constant. It is not understood how the mouse modifies the normal developmental process to produce a functioning vascular system despite a deficiency in elastin. To quantify changes in mechanical properties, we have applied a fiber-based constitutive model to mechanical data from the ascending aorta during postnatal development of WT and Eln+/− mice. Results indicate that the fiber-based constitutive model is capable of distinguishing elastin amounts and identifying trends during development. We observe an increase in predicted circumferential stress contribution from elastin with age, which correlates with increased elastin amounts from protein quantification data. The model also predicts changes in the unloaded collagen fiber orientation with age, which must be verified in future work. In Eln+/− mice, elastin amounts are decreased at each age, along with the predicted circumferential stress contribution of elastin. Collagen amounts in Eln+/− aorta are comparable to WT, but the predicted circumferential stress contribution of collagen is increased. This may be due to altered organization or structure of the collagen fibers. Relating quantifiable changes in arterial mechanics with changes in extracellular matrix (ECM) protein amounts will help in understanding developmental remodeling and in producing treatments for human diseases affecting ECM proteins. PMID:22790326
Basler, Georg; Küken, Anika; Fernie, Alisdair R.; Nikoloski, Zoran
2016-01-01
Arguably, the biggest challenge of modern plant systems biology lies in predicting the performance of plant species, and crops in particular, upon different intracellular and external perturbations. Recently, an increased growth of Arabidopsis thaliana plants was achieved by introducing two different photorespiratory bypasses via metabolic engineering. Here, we investigate the extent to which these findings match the predictions from constraint-based modeling. To determine the effect of the employed metabolic network model on the predictions, we perform a comparative analysis involving three state-of-the-art metabolic reconstructions of A. thaliana. In addition, we investigate three scenarios with respect to experimental findings on the ratios of the carboxylation and oxygenation reactions of Ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO). We demonstrate that the condition-dependent growth phenotypes of one of the engineered bypasses can be qualitatively reproduced by each reconstruction, particularly upon considering the additional constraints with respect to the ratio of fluxes for the RuBisCO reactions. Moreover, our results lend support for the hypothesis of a reduced photorespiration in the engineered plants, and indicate that specific changes in CO2 exchange as well as in the proxies for co-factor turnover are associated with the predicted growth increase in the engineered plants. We discuss our findings with respect to the structure of the used models, the modeling approaches taken, and the available experimental evidence. Our study sets the ground for investigating other strategies for increase of plant biomass by insertion of synthetic reactions. PMID:27092301
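One of the key constraints discussed above is the ratio of the RuBisCO carboxylation and oxygenation fluxes. A minimal sketch of imposing such a fixed flux ratio in a constraint-based model with COBRApy is shown below; the SBML file name, the reaction IDs and the 3:1 ratio are placeholders, not values from the study.

```python
import cobra

# Load an Arabidopsis genome-scale reconstruction (file name is a placeholder).
model = cobra.io.read_sbml_model("arabidopsis_core.xml")

# Hypothetical reaction IDs for RuBisCO carboxylation and oxygenation.
v_c = model.reactions.get_by_id("RBC_h").flux_expression
v_o = model.reactions.get_by_id("RBO_h").flux_expression

# Constrain v_c : v_o to a fixed ratio (here 3:1), i.e. v_c - 3*v_o = 0.
ratio_constraint = model.problem.Constraint(v_c - 3.0 * v_o, lb=0, ub=0)
model.add_cons_vars(ratio_constraint)

# Maximize the biomass objective under the additional constraint.
solution = model.optimize()
print("Predicted growth rate:", solution.objective_value)
```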
Band, Leah R.; Fozard, John A.; Godin, Christophe; Jensen, Oliver E.; Pridmore, Tony; Bennett, Malcolm J.; King, John R.
2012-01-01
Over recent decades, we have gained detailed knowledge of many processes involved in root growth and development. However, with this knowledge come increasing complexity and an increasing need for mechanistic modeling to understand how those individual processes interact. One major challenge is in relating genotypes to phenotypes, requiring us to move beyond the network and cellular scales, to use multiscale modeling to predict emergent dynamics at the tissue and organ levels. In this review, we highlight recent developments in multiscale modeling, illustrating how these are generating new mechanistic insights into the regulation of root growth and development. We consider how these models are motivating new biological data analysis and explore directions for future research. This modeling progress will be crucial as we move from a qualitative to an increasingly quantitative understanding of root biology, generating predictive tools that accelerate the development of improved crop varieties. PMID:23110897
From documentation to prediction: Raising the bar for thermokarst research
Rowland, Joel C.; Coon, Ethan T.
2015-11-12
Here we report that to date the majority of published research on thermokarst has been directed at documenting its form, occurrence, and rates of occurrence. The fundamental processes driving thermokarst have long been largely understood. However, the detailed physical couplings between water, air, soil, and the thermal dynamics governing freeze-thaw and soil mechanics are less understood and not captured in models aimed at predicting the response of frozen soils to warming and thaw. As computational resources increase, more sophisticated mechanistic models can be applied; these show great promise as predictive tools. These models will be capable of simulating the response of soil deformation to thawing/freezing cycles and the long-term, non-recoverable response of the land surface to the loss of ice. At the same time, advances in remote sensing of permafrost environments also show promise in providing detailed and spatially extensive estimates of the rates and patterns of subsidence. These datasets provide key constraints to calibrate and evaluate the predictive power of mechanistic models. In conclusion, in the coming decade, these emerging technologies will greatly increase our ability to predict when, where, and how thermokarst will occur in a changing climate.
Wang, Q; Leil, T
2017-01-01
Rosuvastatin is a frequently used probe in transporter‐mediated drug‐drug interaction (DDI) studies. This report describes the development of a physiologically based pharmacokinetic (PBPK) model of rosuvastatin for prediction of pharmacokinetic (PK) DDIs. The rosuvastatin model predicted the observed single (i.v. and oral) and multiple dose PK profiles, as well as the impact of coadministration with transporter inhibitors. The predicted effects of rifampin and cyclosporine (6.58‐fold and 5.07‐fold increase in rosuvastatin area under the curve (AUC), respectively) were mediated primarily via inhibition of hepatic organic anion‐transporting polypeptide (OATP)1B1 (inhibition constant (Ki) ∼1.1 and 0.014 µM, respectively) and OATP1B3 (Ki ∼0.3 and 0.007 µM, respectively), with cyclosporine also inhibiting intestinal breast cancer resistance protein (BCRP; Ki ∼0.07 µM). The predicted effects of gemfibrozil and its metabolite were moderate (1.88‐fold increase in rosuvastatin AUC) and mediated primarily via inhibition of hepatic OATP1B1 and renal organic anion transporter 3. This model of rosuvastatin will be useful in prospectively predicting transporter‐mediated DDIs with novel pharmaceutical agents in development. PMID:28296193
Dissolved oxygen content prediction in crab culture using a hybrid intelligent method
Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang
2016-01-01
A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206
Dissolved oxygen content prediction in crab culture using a hybrid intelligent method.
Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang
2016-06-08
A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds.
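The hybrid model above combines RBFNN data fusion, an LSSVM and improved particle swarm optimization; none of those components are reproduced here. As a rough, hedged stand-in, the sketch below fits an RBF-kernel support vector regressor to synthetic dissolved-oxygen data and tunes its hyperparameters by grid search instead of IPSO; the sensor inputs and relationships are invented.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
n = 500
# Hypothetical fused sensor inputs: water temperature, pH, solar radiation, wind speed.
X = rng.normal(size=(n, 4))
do_content = 8 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.3, n)

X_train, X_test, y_train, y_test = train_test_split(X, do_content, test_size=0.2, random_state=0)

# A simple grid search over (C, gamma) replaces the IPSO parameter search of the paper.
grid = GridSearchCV(SVR(kernel="rbf"),
                    param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                    cv=5)
grid.fit(X_train, y_train)

rmse = np.sqrt(mean_squared_error(y_test, grid.predict(X_test)))
print("Best parameters:", grid.best_params_, "test RMSE:", round(rmse, 3))
```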
A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change.
Ashraf, M Irfan; Meng, Fan-Rui; Bourque, Charles P-A; MacLean, David A
2015-01-01
Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero means the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm(2) 5-year(-1) and volume: 0.0008 m(3) 5-year(-1)). Model variability described by root mean squared error (RMSE) in basal area prediction was 40.53 cm(2) 5-year(-1) and 0.0393 m(3) 5-year(-1) in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change. Artificial intelligence technology has substantial potential in forest modelling.
A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change
Ashraf, M. Irfan; Meng, Fan-Rui; Bourque, Charles P.-A.; MacLean, David A.
2015-01-01
Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero means the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm2 5-year-1 and volume: 0.0008 m3 5-year-1). Model variability described by root mean squared error (RMSE) in basal area prediction was 40.53 cm2 5-year-1 and 0.0393 m3 5-year-1 in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change. Artificial intelligence technology has substantial potential in forest modelling. PMID:26173081
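A short sketch of the three statistics reported above (model efficiency, BIAS and RMSE), computed for hypothetical observed and predicted 5-year basal area increments; the numbers are invented and the sign convention for BIAS (observed minus predicted) is an assumption.

```python
import numpy as np

def model_efficiency(obs, pred):
    """Nash-Sutcliffe-style efficiency: 1 is an ideal fit, values below 0 are worse than the mean."""
    obs, pred = np.asarray(obs), np.asarray(pred)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def bias(obs, pred):
    """Mean prediction error, here taken as observed minus predicted (assumed convention)."""
    return float(np.mean(np.asarray(obs) - np.asarray(pred)))

def rmse(obs, pred):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

# Hypothetical 5-year basal area growth (cm^2) for a handful of trees.
observed = [120.0, 95.0, 210.0, 60.0, 150.0]
predicted = [110.0, 100.0, 200.0, 70.0, 155.0]

print(model_efficiency(observed, predicted), bias(observed, predicted), rmse(observed, predicted))
```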
Rubikowska, Barbara; Bratkowski, Jakub; Ustrnul, Zbigniew; Vanwambeke, Sophie O.
2018-01-01
During 1999–2012, 77% of the cases of tick-borne encephalitis (TBE) were recorded in two out of 16 Polish provinces. However, historical data, mostly from national serosurveys, suggest that the disease could be undetected in many areas. The aim of this study was to identify which routinely-measured meteorological, environmental, and socio-economic factors are associated with human TBE risk across Poland, with a particular focus on areas reporting few cases, but where serosurveys suggest higher incidence. We fitted a zero-inflated Poisson model using data on TBE incidence recorded in 108 NUTS-5 administrative units in high-risk areas over the period 1999–2012. Subsequently we applied the best fitting model to all Polish municipalities. Keeping the remaining variables constant, the predicted rate increased with increasing air temperature over the previous 10–20 days, precipitation over the previous 20–30 days, forestation, forest edge density, forest road density, and unemployment. The predicted rate decreased with increasing distance from forests. The map of predicted rates was consistent with the established risk areas. It predicted, however, high rates in provinces considered TBE-free. We recommend raising awareness among physicians working in the predicted high-risk areas and considering routine use of household animal surveys for risk mapping. PMID:29617333
Stefanoff, Pawel; Rubikowska, Barbara; Bratkowski, Jakub; Ustrnul, Zbigniew; Vanwambeke, Sophie O; Rosinska, Magdalena
2018-04-04
During 1999–2012, 77% of the cases of tick-borne encephalitis (TBE) were recorded in two out of 16 Polish provinces. However, historical data, mostly from national serosurveys, suggest that the disease could be undetected in many areas. The aim of this study was to identify which routinely-measured meteorological, environmental, and socio-economic factors are associated with human TBE risk across Poland, with a particular focus on areas reporting few cases, but where serosurveys suggest higher incidence. We fitted a zero-inflated Poisson model using data on TBE incidence recorded in 108 NUTS-5 administrative units in high-risk areas over the period 1999–2012. Subsequently we applied the best fitting model to all Polish municipalities. Keeping the remaining variables constant, the predicted rate increased with increasing air temperature over the previous 10–20 days, precipitation over the previous 20–30 days, forestation, forest edge density, forest road density, and unemployment. The predicted rate decreased with increasing distance from forests. The map of predicted rates was consistent with the established risk areas. It predicted, however, high rates in provinces considered TBE-free. We recommend raising awareness among physicians working in the predicted high-risk areas and considering routine use of household animal surveys for risk mapping.
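A minimal sketch of fitting a zero-inflated Poisson model of case counts on climatic and environmental covariates with statsmodels; the covariates, data layout and coefficients are synthetic stand-ins for the lagged temperature, precipitation and forest variables described above, and the zero-inflation part here uses only an intercept, which may differ from the paper's specification.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 108  # e.g., one row per administrative unit (hypothetical layout)

# Synthetic covariates: lagged mean temperature, lagged precipitation, forest edge density.
X = np.column_stack([rng.normal(15, 3, n), rng.gamma(2, 20, n), rng.uniform(0, 5, n)])
X = sm.add_constant(X)

# Synthetic counts with excess zeros.
lam = np.exp(-2 + 0.08 * X[:, 1] + 0.005 * X[:, 2] + 0.1 * X[:, 3])
cases = rng.poisson(lam) * rng.binomial(1, 0.6, n)

zip_model = ZeroInflatedPoisson(cases, X, exog_infl=np.ones((n, 1)), inflation="logit")
result = zip_model.fit(method="bfgs", maxiter=500, disp=False)
print(result.summary())
# Exponentiated count-model coefficients from the summary can be read as rate ratios
# per unit increase in each covariate, analogous to the effects reported above.
```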
Biogeochemical modeling of CO2 and CH4 production in anoxic Arctic soil microcosms
NASA Astrophysics Data System (ADS)
Tang, Guoping; Zheng, Jianqiu; Xu, Xiaofeng; Yang, Ziming; Graham, David E.; Gu, Baohua; Painter, Scott L.; Thornton, Peter E.
2016-09-01
Soil organic carbon turnover to CO2 and CH4 is sensitive to soil redox potential and pH conditions. However, land surface models do not consider redox and pH in the aqueous phase explicitly, thereby limiting their use for making predictions in anoxic environments. Using recent data from incubations of Arctic soils, we extend the Community Land Model with coupled carbon and nitrogen (CLM-CN) decomposition cascade to include simple organic substrate turnover, fermentation, Fe(III) reduction, and methanogenesis reactions, and assess the efficacy of various temperature and pH response functions. Incorporating the Windermere Humic Aqueous Model (WHAM) enables us to approximately describe the observed pH evolution without additional parameterization. Although Fe(III) reduction is normally assumed to compete with methanogenesis, the model predicts that Fe(III) reduction raises the pH from acidic to neutral, thereby reducing environmental stress to methanogens and accelerating methane production when substrates are not limiting. The equilibrium speciation predicts a substantial increase in CO2 solubility as pH increases, and taking into account CO2 adsorption to surface sites of metal oxides further decreases the predicted headspace gas-phase fraction at low pH. Without adequate representation of these speciation reactions, as well as the impacts of pH, temperature, and pressure, the CO2 production from closed microcosms can be substantially underestimated based on headspace CO2 measurements only. Our results demonstrate the efficacy of geochemical models for simulating soil biogeochemistry and provide predictive understanding and mechanistic representations that can be incorporated into land surface models to improve climate predictions.
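The pH dependence of CO2 partitioning described above can be illustrated with standard carbonate equilibria. The sketch below computes the split of dissolved inorganic carbon among CO2(aq), bicarbonate and carbonate as a function of pH, using textbook 25 °C constants; it is a deliberate simplification of the WHAM-based speciation used in the study.

```python
import numpy as np

# Approximate 25 degC equilibrium constants (textbook freshwater values, assumed here).
K1 = 10 ** -6.35   # CO2(aq) + H2O <=> H+ + HCO3-
K2 = 10 ** -10.33  # HCO3-        <=> H+ + CO3^2-

def carbonate_fractions(pH):
    """Fractions of dissolved inorganic carbon present as CO2(aq), HCO3- and CO3^2-."""
    h = 10.0 ** -pH
    denom = 1.0 + K1 / h + K1 * K2 / h ** 2
    f_co2 = 1.0 / denom
    f_hco3 = (K1 / h) / denom
    f_co3 = (K1 * K2 / h ** 2) / denom
    return f_co2, f_hco3, f_co3

for pH in (4.5, 5.5, 6.5, 7.0):
    f_co2, f_hco3, f_co3 = carbonate_fractions(pH)
    print(f"pH {pH}: CO2(aq) {f_co2:.2f}, HCO3- {f_hco3:.2f}, CO3^2- {f_co3:.2f}")
```

Because only the CO2(aq) fraction exchanges with the headspace, a rising pH pulls a larger share of the total carbon into solution as bicarbonate, which is the qualitative effect the abstract describes.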
Yasin, Siti Munira; Retneswari, Masilamani; Moy, Foong Ming; Taib, Khairul Mizan; Isahak, Marzuki; Koh, David
2013-01-01
The role of the Transtheoretical Model (TTM) in predicting relapse is limited. We aimed to assess whether this model can be utilised to predict relapse during the action stage. The participants included 120 smokers who had abstained from smoking for at least 24 hours following a smoking cessation programme at two Malaysian universities. The smokers who relapsed perceived significantly greater advantages related to smoking and increasing doubt in their ability to quit. In contrast, former smokers with greater self-liberation and determination to abstain were less likely to relapse. The findings suggest that the TTM can be used to predict relapse among quitting smokers.
Borderline Personality Disorder Symptoms and Aggression: A Within-Person Process Model
Scott, Lori N.; Wright, Aidan G. C.; Beeney, Joseph E.; Lazarus, Sophie A.; Pilkonis, Paul A.; Stepp, Stephanie D.
2017-01-01
Theoretical and empirical work suggests that aggression in those with borderline personality disorder (BPD) occurs primarily in the context of emotional reactivity, especially anger and shame, in response to perceived rejection. Using intensive repeated measures, we examined a within-person process model in which perceived rejection predicts increases in aggressive urges and behaviors via increases in negative affect (indirect effect) and in which BPD symptoms exacerbate this process (moderated mediation). Participants were 117 emerging adult women (ages 18–24) with recent histories of aggressive behavior who were recruited from a community-based longitudinal study of at-risk youth. Personality disorder symptoms were assessed by semi-structured clinical interview, and aggressive urges, threats, and behaviors were measured in daily life during a three-week ecological momentary assessment (EMA) protocol. Multilevel path models revealed that within-person increases in perceived rejection predicted increases in negative affect, especially in women with greater BPD symptoms. In turn, increases in negative affect predicted increased likelihood of aggressive urges or behaviors. Further analysis revealed that BPD symptoms predicted greater anger and shame reactivity to perceived rejection, but not to criticism or insult. Additionally, only anger was associated with increases in aggression after controlling for other negative emotions. Whereas BPD symptoms exacerbated the link between perceived rejection and aggression via increases in negative affect (particularly anger), this process was attenuated in women with greater antisocial personality disorder (ASPD) symptoms. These findings suggest that anger reactivity to perceived rejection is one unique pathway, distinct from ASPD, by which BPD symptoms increase risk for aggression. PMID:28383936
Predicting Football Matches Results using Bayesian Networks for English Premier League (EPL)
NASA Astrophysics Data System (ADS)
Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab
2017-08-01
Modeling association football prediction has become increasingly popular in the last few years, and many different prediction models have been proposed with the aim of evaluating the attributes that lead a football team to lose, draw or win a match. Three types of approaches have been considered for predicting football match results: statistical approaches, machine learning approaches and Bayesian approaches. Lately, many studies on football prediction models have been produced using Bayesian approaches. This paper proposes Bayesian Networks (BNs) to predict the results of football matches in terms of home win (H), away win (A) and draw (D). The English Premier League (EPL) for the three seasons 2010-2011, 2011-2012 and 2012-2013 has been selected and reviewed. K-fold cross validation has been used for testing the accuracy of the prediction model. The required information about the football data is sourced from a legitimate site at http://www.football-data.co.uk. The BNs achieved a predictive accuracy of 75.09% on average across the three seasons. It is hoped that the results could be used as a benchmark for future research in predicting football match results.
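A hedged sketch of the evaluation protocol described above, k-fold cross-validation of a probabilistic classifier over H/D/A outcomes, using a categorical naive Bayes classifier as a simple stand-in for the Bayesian network and entirely synthetic match features; feature meanings and outcome frequencies are invented.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(4)
n_matches = 380 * 3  # three EPL seasons (hypothetical layout)

# Synthetic categorical features, e.g. binned home/away form and league position.
X = rng.integers(0, 3, size=(n_matches, 4))
# Outcomes: 0 = home win (H), 1 = draw (D), 2 = away win (A), skewed toward home wins.
y = rng.choice([0, 1, 2], size=n_matches, p=[0.46, 0.26, 0.28])

model = CategoricalNB()
scores = cross_val_score(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
print(f"Mean 10-fold accuracy: {scores.mean():.3f}")
```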
Phung, Dung; Talukder, Mohammad Radwanur Rahman; Rutherford, Shannon; Chu, Cordia
2016-10-01
To develop a prediction score scheme useful for prevention practitioners and authorities to implement dengue preparedness and controls in the Mekong Delta region (MDR). We applied a spatial scan statistic to identify high-risk dengue clusters in the MDR and used generalised linear-distributed lag models to examine climate-dengue associations using dengue case records and meteorological data from 2003 to 2013. The significant predictors were collapsed into categorical scales, and the β-coefficients of predictors were converted to prediction scores. The score scheme was validated for predicting dengue outbreaks using ROC analysis. The north-eastern MDR was identified as the high-risk cluster. A 1 °C increase in temperature at lag 1-4 and 5-8 weeks increased the dengue risk 11% (95% CI, 9-13) and 7% (95% CI, 6-8), respectively. A 1% rise in humidity increased dengue risk 0.9% (95% CI, 0.2-1.4) at lag 1-4 and 0.8% (95% CI, 0.2-1.4) at lag 5-8 weeks. Similarly, a 1-mm increase in rainfall increased dengue risk 0.1% (95% CI, 0.05-0.16) at lag 1-4 and 0.11% (95% CI, 0.07-0.16) at lag 5-8 weeks. The predicted scores performed with high accuracy in diagnosing the dengue outbreaks (96.3%). This study demonstrates the potential usefulness of a dengue prediction score scheme derived from complex statistical models for high-risk dengue clusters. We recommend a further study to examine the possibility of incorporating such a score scheme into the dengue early warning system in similar climate settings. © 2016 John Wiley & Sons Ltd.
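A minimal sketch of turning regression coefficients for categorised predictors into an integer prediction score and checking its discrimination with ROC analysis; the coefficient values, predictor categories and outbreak labels below are hypothetical, not those of the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical beta-coefficients (log rate ratios) for categorised lagged predictors.
betas = {"temp_high_lag1_4": 0.45, "rain_high_lag5_8": 0.30, "humidity_high_lag1_4": 0.15}

# Convert coefficients to integer points by scaling to the smallest effect.
unit = min(betas.values())
points = {k: int(round(v / unit)) for k, v in betas.items()}  # e.g. 3, 2, 1 points

rng = np.random.default_rng(5)
n = 200
X = {k: rng.integers(0, 2, n) for k in betas}                 # 0/1 exposure indicators
score = sum(points[k] * X[k] for k in betas)

# Synthetic outbreak labels loosely driven by the same predictors.
prob = 1 / (1 + np.exp(-(-2 + sum(betas[k] * X[k] for k in betas))))
outbreak = rng.binomial(1, prob)

print("Points:", points, " AUC:", round(roc_auc_score(outbreak, score), 3))
```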
Prospects for Genomic Selection in Cassava Breeding.
Wolfe, Marnin D; Del Carpio, Dunia Pino; Alabi, Olumide; Ezenwaka, Lydia C; Ikeogu, Ugochukwu N; Kayondo, Ismail S; Lozano, Roberto; Okeke, Uche G; Ozimati, Alfred A; Williams, Esuma; Egesi, Chiedozie; Kawuki, Robert S; Kulakow, Peter; Rabbi, Ismail Y; Jannink, Jean-Luc
2017-11-01
Cassava (Manihot esculenta Crantz) is a clonally propagated staple food crop in the tropics. Genomic selection (GS) has been implemented at three breeding institutions in Africa to reduce cycle times. Initial studies provided promising estimates of predictive abilities. Here, we expand on previous analyses by assessing the accuracy of seven prediction models for seven traits in three prediction scenarios: cross-validation within populations, cross-population prediction and cross-generation prediction. We also evaluated the impact of increasing the training population (TP) size by phenotyping progenies selected either at random or with a genetic algorithm. Cross-validation results were mostly consistent across programs, with nonadditive models predicting about 10% better on average. Cross-population accuracy was generally low (mean = 0.18) but prediction of cassava mosaic disease increased up to 57% in one Nigerian population when data from another related population were combined. Accuracy across generations was poorer than within-generation accuracy, as expected, but accuracy for dry matter content and mosaic disease severity should be sufficient for rapid-cycling GS. Selection of a prediction model made some difference across generations, but increasing TP size was more important. With a genetic algorithm, selection of one-third of progeny could achieve an accuracy equivalent to phenotyping all progeny. We are in the early stages of GS for this crop but the results are promising for some traits. General guidelines that are emerging are that TPs need to continue to grow but phenotyping can be done on a cleverly selected subset of individuals, reducing the overall phenotyping burden. Copyright © 2017 Crop Science Society of America.
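A rough sketch of a GBLUP-style genomic prediction of the kind evaluated above: build a genomic relationship matrix from a marker matrix and use it as a precomputed kernel in ridge regression, with cross-validation accuracy taken as the correlation between predicted and observed values. The marker data, scaling convention (VanRaden-style) and phenotype are synthetic assumptions, not the cassava data sets.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
n_clones, n_markers = 400, 2000
M = rng.integers(0, 3, size=(n_clones, n_markers)).astype(float)  # 0/1/2 marker dosages

# Genomic relationship matrix (centered markers, VanRaden-style scaling).
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

# Synthetic phenotype (e.g., dry matter content) with an additive genetic signal.
u = Z @ rng.normal(0, 0.05, n_markers)
y = u + rng.normal(0, np.std(u), n_clones)

accs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(G):
    model = KernelRidge(alpha=1.0, kernel="precomputed")
    model.fit(G[np.ix_(train, train)], y[train])
    pred = model.predict(G[np.ix_(test, train)])
    accs.append(np.corrcoef(pred, y[test])[0, 1])

print("Mean cross-validation accuracy (r):", round(float(np.mean(accs)), 3))
```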
Are prediction models for Lynch syndrome valid for probands with endometrial cancer?
Backes, Floor J; Hampel, Heather; Backes, Katherine A; Vaccarello, Luis; Lewandowski, George; Bell, Jeffrey A; Reid, Gary C; Copeland, Larry J; Fowler, Jeffrey M; Cohn, David E
2009-01-01
Currently, three prediction models are used to predict a patient's risk of having Lynch syndrome (LS). These models have been validated in probands with colorectal cancer (CRC), but not in probands presenting with endometrial cancer (EMC). Thus, the aim was to determine the performance of these prediction models in women with LS presenting with EMC. Probands with EMC and LS were identified. Personal and family history was entered into three prediction models, PREMM(1,2), MMRpro, and MMRpredict. Probabilities of mutations in the mismatch repair genes were recorded. Accurate prediction was defined as a model predicting at least a 5% chance of a proband carrying a mutation. From 562 patients prospectively enrolled in a clinical trial of patients with EMC, 13 (2.2%) were shown to have LS. Nine patients had a mutation in MSH6, three in MSH2, and one in MLH1. MMRpro predicted that 3 of 9 patients with an MSH6, 3 of 3 with an MSH2, and 1 of 1 patient with an MLH1 mutation could have LS. For MMRpredict, EMC coded as "proximal CRC" predicted 5 of 5, and as "distal CRC" three of five. PREMM(1,2) predicted that 4 of 4 with an MLH1 or MSH2 could have LS. Prediction of LS in probands presenting with EMC using current models for probands with CRC works reasonably well. Further studies are needed to develop models that include questions specific to patients with EMC with a greater age range, as well as placing increased emphasis on prediction of LS in probands with MSH6 mutations.
NASA Astrophysics Data System (ADS)
Alessandri, A.; De Felice, M.; Catalano, F.; Lee, J. Y.; Wang, B.; Lee, D. Y.; Yoo, J. H.; Weisheimer, A.
2017-12-01
By initiating a novel cooperation between the European and the Asian-Pacific climate-prediction communities, this work demonstrates the potential of gathering together their Multi-Model Ensembles (MMEs) to obtain useful climate predictions at the seasonal time-scale. MMEs are powerful tools in dynamical climate prediction as they account for the overconfidence and the uncertainties related to single-model ensembles, and increasing benefit is expected as the independence of the contributing Seasonal Prediction Systems (SPSs) increases. In this work we combine the two MME SPSs independently developed by the European (ENSEMBLES) and by the Asian-Pacific (APCC/CliPAS) communities by establishing an unprecedented partnership. To this aim, all the possible MME combinations obtained by putting together the 5 models from ENSEMBLES and the 11 models from APCC/CliPAS have been evaluated. The Grand ENSEMBLES-APCC/CliPAS MME enhances significantly the skill in predicting 2m temperature and precipitation. Our results show that, in general, the best combinations of SPSs are obtained by mixing ENSEMBLES and APCC/CliPAS models and that only a limited number of SPSs is required to obtain the maximum performance. The selection of best-performing models usually differs depending on the region/phenomenon under consideration, so that all models are useful in some cases. It is shown that the incremental performance contribution tends to be higher when adding one model from ENSEMBLES to APCC/CliPAS MMEs and vice versa, confirming that the benefit of using MMEs amplifies as the independence of the contributing models increases. To verify the above results for a real world application, the Grand MME is used to predict energy demand over Italy as provided by TERNA (Italian Transmission System Operator) for the period 1990-2007. The results demonstrate the useful application of MME seasonal predictions for energy demand forecasting over Italy. A significant enhancement of the potential economic value of forecasting energy demand is shown when using the best combinations from the Grand MME, compared with the maximum value obtained from the best combinations of each of the two contributing MMEs. The above results are discussed in a Clim Dyn paper (Alessandri et al., 2017; doi:10.1007/s00382-016-3372-4).
Calibration and prediction of removal function in magnetorheological finishing.
Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng
2010-01-20
A calibrated and predictive model of the removal function has been established based on the analysis of a magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in a MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material is different from the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, applying this model to the MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately to make the MRF figuring process deterministic and controllable. Therefore, all the results indicate that the calibrated and predictive model of the removal function can improve the finishing determinacy and increase the model applicability in a MRF process.
Influences of misprediction costs on solar flare prediction
NASA Astrophysics Data System (ADS)
Huang, Xin; Wang, HuaNing; Dai, XingHua
2012-10-01
The misprediction costs of flaring and non-flaring samples are different for different applications of solar flare prediction. Hence, solar flare prediction is considered a cost-sensitive problem. A cost-sensitive solar flare prediction model is built by modifying the basic decision tree algorithm. The inconsistency rate with an exhaustive search strategy is used to determine the optimal combination of magnetic field parameters in an active region. These selected parameters are applied as the inputs of the solar flare prediction model. The performance of the cost-sensitive solar flare prediction model is evaluated for different thresholds of solar flares. It is found that, as the cost of wrongly predicting flaring samples as non-flaring samples increases, more flaring samples are correctly predicted and more non-flaring samples are wrongly predicted, and a larger cost is required for higher solar flare thresholds. This can serve as a guideline for choosing a proper cost to meet the requirements of different applications.
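A minimal sketch of the cost-sensitive idea described above, using class weights on a scikit-learn decision tree as a stand-in for the authors' modified decision tree algorithm; the magnetic-field features, labels and cost values are synthetic placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 5))  # hypothetical active-region magnetic field parameters
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 1.5).astype(int)  # 1 = flaring

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for cost in (1, 5, 20):  # increasing cost of missing a flaring sample
    tree = DecisionTreeClassifier(max_depth=5, class_weight={0: 1, 1: cost}, random_state=0)
    tree.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
    print(f"cost={cost}: correctly predicted flares={tp}, false alarms={fp}")
```

As the weight on the flaring class grows, more flaring samples are caught at the expense of more false alarms, mirroring the trade-off reported in the abstract.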
Wombacher, Kevin; Dai, Minhao; Matig, Jacob J; Harrington, Nancy Grant
2018-03-22
The aim was to identify salient behavioral determinants related to STI testing among college students by testing a model based on the integrative model of behavioral prediction (IMBP). Participants were 265 undergraduate students from a large university in the southeastern US. We used formative and survey research to test an IMBP-based model that explores the relationships between determinants and STI testing intention and behavior. Results of path analyses supported a model in which attitudinal beliefs predicted intention and intention predicted behavior. Normative beliefs and behavioral control beliefs were not significant in the model; however, select individual normative and control beliefs were significantly correlated with intention and behavior. Attitudinal beliefs are the strongest predictor of STI testing intention and behavior. Future efforts to increase STI testing rates should identify and target salient attitudinal beliefs.
Modeling the spatiotemporal dynamics of light and heat propagation for in vivo optogenetics
Stujenske, Joseph M.; Spellman, Timothy; Gordon, Joshua A.
2015-01-01
Despite the increasing use of optogenetics in vivo, the effects of direct light exposure to brain tissue are understudied. Of particular concern is the potential for heat induced by prolonged optical stimulation. We demonstrate that high intensity light, delivered through an optical fiber, is capable of elevating firing rate locally, even in the absence of opsin expression. Predicting the severity and spatial extent of any temperature increase during optogenetic stimulation is therefore of considerable importance. Here we describe a realistic model that simulates light and heat propagation during optogenetic experiments. We validated the model by comparing predicted and measured temperature changes in vivo. We further demonstrate the utility of this model by comparing predictions for various wavelengths of light and fiber sizes, as well as testing methods for reducing heat effects on neural targets in vivo. PMID:26166563
Harris, Julianne E.; Hightower, Joseph E.
2012-01-01
American shad Alosa sapidissima are in decline in their native range, and modeling possible management scenarios could help guide their restoration. We developed a density-dependent, deterministic, stage-based matrix model to predict the population-level results of transporting American shad to suitable spawning habitat upstream of dams on the Roanoke River, North Carolina and Virginia. We used data on sonic-tagged adult American shad and oxytetracycline-marked American shad fry both above and below dams on the Roanoke River with information from other systems to estimate a starting population size and vital rates. We modeled the adult female population over 30 years under plausible scenarios of adult transport, effective fecundity (egg production), and survival of adults (i.e., to return to spawn the next year) and juveniles (from spawned egg to age 1). We also evaluated the potential effects of increased survival for adults and juveniles. The adult female population size in the Roanoke River was estimated to be 5,224. With no transport, the model predicted a slow population increase over the next 30 years. Predicted population increases were highest when survival was improved during the first year of life. Transport was predicted to benefit the population only if high rates of effective fecundity and juvenile survival could be achieved. Currently, transported adults and young are less likely to successfully out-migrate than individuals below the dams, and the estimated adult population size is much smaller than either of two assumed values of carrying capacity for the lower river; therefore, transport is not predicted to help restore the stock under present conditions. Research on survival rates, density-dependent processes, and the impacts of structures to increase out-migration success would improve evaluation of the potential benefits of access to additional spawning habitat for American shad.
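A minimal sketch of projecting a female-only, stage-based (Lefkovitch) matrix model forward 30 years, as in the study above; the stages, vital rates and starting abundances for the first two stages are invented for illustration, and only the 5,224 adult females value is taken from the abstract.

```python
import numpy as np

# Stages: age-0 (spawned egg to age 1), juvenile, adult female (all rates hypothetical).
# Top row holds effective fecundity of adults; lower rows hold survival/transition rates.
A = np.array([
    [0.0,   0.0,  3000.0],  # female eggs produced per adult female per year
    [5e-4,  0.0,  0.0],     # survival from spawned egg to age 1
    [0.0,   0.30, 0.60],    # juvenile-to-adult transition and adult survival
])

n = np.array([2e6, 3e4, 5224.0])  # starting abundances (adult value from the abstract)

for year in range(30):
    n = A @ n

growth_rate = np.max(np.real(np.linalg.eigvals(A)))  # asymptotic lambda
print(f"Adult females after 30 years: {n[2]:.0f}, lambda = {growth_rate:.3f}")
```

Transport or survival scenarios of the kind evaluated in the paper would be explored by perturbing individual matrix entries and re-running the projection.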
NASA Astrophysics Data System (ADS)
Zhu, Linqi; Zhang, Chong; Zhang, Chaomo; Wei, Yang; Zhou, Xueqing; Cheng, Yuan; Huang, Yuyang; Zhang, Le
2018-06-01
There is increasing interest in shale gas reservoirs due to their abundant reserves. As a key evaluation criterion, the total organic carbon content (TOC) of a reservoir reflects its hydrocarbon generation potential. Existing TOC calculation models are not very accurate and leave room for improvement. In this paper, an integrated hybrid neural network (IHNN) model is proposed for predicting the TOC. It addresses an inherent limitation of existing algorithms, whose TOC information comes mainly from low-TOC intervals, where the TOC is easy to evaluate, while prediction is needed where it is not. By comparing prediction models established on 132 rock samples from the shale gas reservoir in the Jiaoshiba area, it can be seen that the accuracy of the proposed IHNN model is much higher than that of the other prediction models. The mean square error for samples not used to establish the models was reduced from 0.586 to 0.442. The results show that TOC prediction becomes easier once the log-based prediction has been improved. Furthermore, this paper puts forward the next research direction for the prediction model. The IHNN algorithm can help evaluate the TOC of a shale gas reservoir.
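The IHNN itself is not reproduced here; as a hedged illustration of TOC regression from wireline logs, the sketch below trains a plain multilayer perceptron on synthetic log responses. The curve names, relationships and sample split are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(8)
n = 132  # rock samples, as in the abstract
# Hypothetical log responses: gamma ray, bulk density, resistivity (log10), sonic.
logs = np.column_stack([rng.normal(150, 30, n), rng.normal(2.55, 0.1, n),
                        rng.normal(1.5, 0.4, n), rng.normal(80, 10, n)])
toc = 2 + 0.01 * (logs[:, 0] - 150) - 8 * (logs[:, 1] - 2.55) + rng.normal(0, 0.4, n)

X_tr, X_te, y_tr, y_te = train_test_split(logs, toc, test_size=0.25, random_state=0)

mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
mlp.fit(X_tr, y_tr)
print("Test MSE:", round(mean_squared_error(y_te, mlp.predict(X_te)), 3))
```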
Large-scale structure prediction by improved contact predictions and model quality assessment.
Michel, Mirco; Menéndez Hurtado, David; Uziela, Karolis; Elofsson, Arne
2017-07-15
Accurate contact predictions can be used for predicting the structure of proteins. Until recently these methods were limited to very big protein families, decreasing their utility. However, recent progress by combining direct coupling analysis with machine learning methods has made it possible to predict accurate contact maps for smaller families. To what extent these predictions can be used to produce accurate models of the families is not known. We present the PconsFold2 pipeline that uses contact predictions from PconsC3, the CONFOLD folding algorithm and model quality estimations to predict the structure of a protein. We show that the model quality estimation significantly increases the number of models that reliably can be identified. Finally, we apply PconsFold2 to 6379 Pfam families of unknown structure and find that PconsFold2 can, with an estimated 90% specificity, predict the structure of up to 558 Pfam families of unknown structure. Out of these, 415 have not been reported before. Datasets as well as models of all the 558 Pfam families are available at http://c3.pcons.net/ . All programs used here are freely available. arne@bioinfo.se. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Comparison of Statistical Models for Analyzing Wheat Yield Time Series
Michel, Lucie; Makowski, David
2013-01-01
The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha−1 year−1 in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale. PMID:24205280
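A short sketch of fitting a Holt-type exponential smoothing model to a national wheat-yield series, assuming statsmodels is available; the yield values below are invented, and only a trend component is used since annual yield series have no seasonal cycle.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Invented national wheat yields (t/ha), 55 consecutive years with a slowly flattening trend.
rng = np.random.default_rng(9)
t = np.arange(55)
yields = 2.0 + 0.06 * t - 0.0004 * t ** 2 + rng.normal(0, 0.2, 55)

# Holt's linear-trend exponential smoothing (no seasonal component for annual data).
model = ExponentialSmoothing(yields, trend="add", seasonal=None).fit()
print(model.forecast(5))  # predicted yields for the next five years
```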
Aycock, Kenneth I; Campbell, Robert L; Manning, Keefe B; Craven, Brent A
2017-06-01
Inferior vena cava (IVC) filters are medical devices designed to provide a mechanical barrier to the passage of emboli from the deep veins of the legs to the heart and lungs. Despite decades of development and clinical use, IVC filters still fail to prevent the passage of all hazardous emboli. The objective of this study is to (1) develop a resolved two-way computational model of embolus transport, (2) provide verification and validation evidence for the model, and (3) demonstrate the ability of the model to predict the embolus-trapping efficiency of an IVC filter. Our model couples computational fluid dynamics simulations of blood flow to six-degree-of-freedom simulations of embolus transport and resolves the interactions between rigid, spherical emboli and the blood flow using an immersed boundary method. Following model development and numerical verification and validation of the computational approach against benchmark data from the literature, embolus transport simulations are performed in an idealized IVC geometry. Centered and tilted filter orientations are considered using a nonlinear finite element-based virtual filter placement procedure. A total of 2048 coupled CFD/6-DOF simulations are performed to predict the embolus-trapping statistics of the filter. The simulations predict that the embolus-trapping efficiency of the IVC filter increases with increasing embolus diameter and increasing embolus-to-blood density ratio. Tilted filter placement is found to decrease the embolus-trapping efficiency compared with centered filter placement. Multiple embolus-trapping locations are predicted for the IVC filter, and the trapping locations are predicted to shift upstream and toward the vessel wall with increasing embolus diameter. Simulations of the injection of successive emboli into the IVC are also performed and reveal that the embolus-trapping efficiency decreases with increasing thrombus load in the IVC filter. In future work, the computational tool could be used to investigate IVC filter design improvements, the effect of patient anatomy on embolus transport and IVC filter embolus-trapping efficiency, and, with further development and validation, optimal filter selection and placement on a patient-specific basis.
All-atom 3D structure prediction of transmembrane β-barrel proteins from sequences.
Hayat, Sikander; Sander, Chris; Marks, Debora S; Elofsson, Arne
2015-04-28
Transmembrane β-barrels (TMBs) carry out major functions in substrate transport and protein biogenesis but experimental determination of their 3D structure is challenging. Encouraged by successful de novo 3D structure prediction of globular and α-helical membrane proteins from sequence alignments alone, we developed an approach to predict the 3D structure of TMBs. The approach combines the maximum-entropy evolutionary coupling method for predicting residue contacts (EVfold) with a machine-learning approach (boctopus2) for predicting β-strands in the barrel. In a blinded test for 19 TMB proteins of known structure that have a sufficient number of diverse homologous sequences available, this combined method (EVfold_bb) predicts hydrogen-bonded residue pairs between adjacent β-strands at an accuracy of ∼70%. This accuracy is sufficient for the generation of all-atom 3D models. In the transmembrane barrel region, the average 3D structure accuracy [template-modeling (TM) score] of top-ranked models is 0.54 (ranging from 0.36 to 0.85), with a higher (44%) number of residue pairs in correct strand-strand registration than in earlier methods (18%). Although the nonbarrel regions are predicted less accurately overall, the evolutionary couplings identify some highly constrained loop residues and, for FecA protein, the barrel including the structure of a plug domain can be accurately modeled (TM score = 0.68). Lower prediction accuracy tends to be associated with insufficient sequence information and we therefore expect increasing numbers of β-barrel families to become accessible to accurate 3D structure prediction as the number of available sequences increases.
Predictive models of forest dynamics.
Purves, Drew; Pacala, Stephen
2008-06-13
Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
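A minimal sketch of the testing idea described above: score a prediction rule against the real catalog, then against many simulated memoryless (Poisson) catalogs, and report the fraction of simulations that do as well or better. As the abstract notes, a rigorous test must use simulations that reproduce the observed clustering; the uniform Poisson catalogs and synthetic "real" catalog below are only for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)
T = 3650.0                                    # days of observation
real_quakes = np.sort(rng.uniform(0, T, 60))  # stand-in for a real catalog (event days)
alarms = np.sort(rng.uniform(0, T, 20))       # alarm start times issued by the "method"
window = 10.0                                 # each alarm covers the next 10 days

def hits(quakes, alarms, window):
    """Number of earthquakes falling inside any alarm window."""
    return sum(np.any((q >= alarms) & (q <= alarms + window)) for q in quakes)

observed_hits = hits(real_quakes, alarms, window)

# Significance: how often do random Poisson catalogs score at least as well?
n_sim = 2000
sim_hits = np.array([hits(np.sort(rng.uniform(0, T, len(real_quakes))), alarms, window)
                     for _ in range(n_sim)])
p_value = np.mean(sim_hits >= observed_hits)
print(f"hits = {observed_hits}, p-value vs Poisson catalogs = {p_value:.3f}")
```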
Study on Fatigue Life of Aluminum Alloy Considering Fretting
NASA Astrophysics Data System (ADS)
Yang, Maosheng; Zhao, Hongqiang; Wang, Yunxiang; Chen, Xiaofei; Fan, Jiali
2018-01-01
To study the influence of fretting on aluminum alloy, a global finite element model considering fretting was built using the commercial code ABAQUS, with which a new model for predicting fretting fatigue life was developed based on friction work. The rationality and effectiveness of the model were validated by comparing experimental and predicted lives. Finally, factors influencing the fretting fatigue life of aerospace aluminum alloy were investigated with the model. The results revealed that fretting fatigue life decreased monotonically with increasing normal load and then became constant at higher pressures. At low normal load, fretting fatigue life was found to increase with increasing pad radius. At high normal load, however, the fretting fatigue life remained almost unchanged with changes in the fretting pad radius. The bulk stress amplitude had the dominant effect on fretting fatigue life. The fretting fatigue life diminished as the bulk stress amplitude increased.
Active optimal control strategies for increasing the efficiency of photovoltaic cells
NASA Astrophysics Data System (ADS)
Aljoaba, Sharif Zidan Ahmad
Energy consumption has increased drastically during the last century. Currently, the worldwide energy consumption is about 17.4 TW and is predicted to reach 25 TW by 2035. Solar energy has emerged as one of the potential renewable energy sources. Since its first physical recognition by Adams and Day in 1887, research in solar energy has developed continuously. This has led to many achievements and milestones that introduced it as one of the most reliable and sustainable energy sources. Recently, the International Energy Agency declared that solar energy is predicted to be one of the major electricity production energy sources by 2035. Enhancing the efficiency and lifecycle of photovoltaic (PV) modules leads to significant cost reduction. Reducing the temperature of the PV module improves its efficiency and enhances its lifecycle. To better understand PV module performance, it is important to study the interaction between the output power and the temperature. A model that is capable of predicting the PV module temperature and its effects on the output power, considering the individual contribution of the solar spectrum wavelengths, significantly advances PV module designs toward higher efficiency. In this work, a thermoelectrical model is developed to predict the effects of the solar spectrum wavelengths on PV module performance. The model is characterized and validated under real meteorological conditions, where experimental measurements of PV module temperature and output power are shown to agree with the predicted results. The model is used to validate the concept of active optical filtering. Since this model is wavelength-based, it is used to design an active optical filter for PV applications. Applying this filter to the PV module is expected to increase the output power of the module by filtering the spectrum wavelengths. The active filter performance is optimized, where different cutoff wavelengths are used to maximize the module output power. If the optimized active optical filter is applied to the PV module, the module efficiency is predicted to increase by about 1%. Different technologies are considered for physical implementation of the active optical filter.
Prediction of Spatiotemporal Patterns of Neural Activity from Pairwise Correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marre, O.; El Boustani, S.; Fregnac, Y.
We designed a model-based analysis to predict the occurrence of population patterns in distributed spiking activity. Using a maximum entropy principle with a Markovian assumption, we obtain a model that accounts for both spatial and temporal pairwise correlations among neurons. This model is tested on data generated with a Glauber spin-glass system and is shown to correctly predict the occurrence probabilities of spatiotemporal patterns significantly better than Ising models only based on spatial correlations. This increase of predictability was also observed on experimental data recorded in parietal cortex during slow-wave sleep. This approach can also be used to generate surrogates that reproduce the spatial and temporal correlations of a given data set.
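To make the pairwise maximum-entropy idea concrete, the sketch below assigns probabilities to binary spike patterns of a few neurons from an Ising-type model with fields h and couplings J. It is a minimal illustration with made-up parameters, not the authors' implementation; the spatiotemporal (Markovian) extension described above would add couplings between consecutive time bins.

```python
import itertools
import numpy as np

# Minimal sketch of a pairwise maximum-entropy (Ising) model over N binary neurons.
# h (fields) and J (pairwise couplings) are illustrative values, not fitted parameters.
N = 4
rng = np.random.default_rng(0)
h = rng.normal(0.0, 0.5, size=N)           # biases controlling firing rates
J = np.triu(rng.normal(0.0, 0.3, size=(N, N)), 1)   # use only i < j couplings

def energy(s, h, J):
    """Negative log-probability (up to a constant) of spike pattern s in {0,1}^N."""
    return -(h @ s) - s @ J @ s

# Enumerate all 2^N patterns to compute the partition function exactly (feasible for small N).
patterns = np.array(list(itertools.product([0, 1], repeat=N)))
E = np.array([energy(s, h, J) for s in patterns])
p = np.exp(-E)
p /= p.sum()

# Probability predicted for one particular spatial pattern, e.g. neurons 0 and 2 firing together.
target = np.array([1, 0, 1, 0])
idx = np.where((patterns == target).all(axis=1))[0][0]
print("P(pattern 1010) =", p[idx])
```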
Wind power application research on the fusion of the determination and ensemble prediction
NASA Astrophysics Data System (ADS)
Lan, Shi; Lina, Xu; Yuzhu, Hao
2017-07-01
A fused wind-speed product for the wind farm is designed using ensemble prediction wind-speed products from the European Centre for Medium-Range Weather Forecasts (ECMWF) and professional numerical wind power products based on Mesoscale Model 5 (MM5) and the Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. The single-valued forecast is formed by calculating different ensemble statistics of the Bayesian probabilistic forecast representing the uncertainty of the ECMWF ensemble prediction. An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and, based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, the optimal wind speed forecasting curve and its confidence interval are provided. The results show that the fused forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the existing 0-24 h deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3 % and the correlation coefficient (R) is increased by 12.5 %. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7 % and R is increased by 14.5 %. Additionally, the MAE did not increase with increasing forecast lead time.
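The ARIMA refinement step described above can be sketched with standard tooling. The snippet below is a hypothetical illustration on synthetic wind-speed data with an assumed (1,1,1) order, not the configuration used in the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic 6-hourly single-valued wind-speed forecast (m/s); stands in for the
# Bayesian ensemble statistic derived from the ECMWF members.
rng = np.random.default_rng(1)
wind = 8 + np.cumsum(rng.normal(0, 0.4, size=120))

# Fit a simple ARIMA model (order assumed for illustration) and forecast ahead,
# the step used here to increase the time resolution of the fused product.
model = ARIMA(wind, order=(1, 1, 1))
fit = model.fit()
print(fit.forecast(steps=8))   # next 8 steps of the refined wind-speed curve
```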
Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; ...
2015-10-06
Differential species responses to atmospheric CO2 concentration (Ca) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated Ca (eCa) will affect plant competition, or how composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eCa by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model. The model makes several novel predictions for the impact of eCa on plant community composition. Using resource use theory, the model predicts that eCa is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eCa may increase community evenness. Collectively, both theories suggest that eCa will favor coexistence and hence that species diversity should increase with eCa. Our theoretical analysis leads to a novel hypothesis for the impact of eCa on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eCa experiments.
Modeling of acoustic wave dissipation in gas hydrate-bearing sediments
NASA Astrophysics Data System (ADS)
Guerin, Gilles; Goldberg, David
2005-07-01
Recent sonic and seismic data in gas hydrate-bearing sediments have indicated strong waveform attenuation associated with a velocity increase, in apparent contradiction with conventional wave propagation theory. Understanding the reasons for such energy dissipation could help constrain the distribution and the amounts of gas hydrate worldwide from the identification of low amplitudes in seismic surveys. A review of existing models for wave propagation in frozen porous media, all based on Biot's theory, shows that previous formulations fail to predict any significant attenuation with increasing hydrate content. By adding physically based components to these models, such as cementation by elastic shear coupling, friction between the solid phases, and squirt flow, we are able to predict an attenuation increase associated with gas hydrate formation. The results of the model agree well with the sonic logging data recorded in the Mallik 5L-38 Gas Hydrate Research Well. Cementation between gas hydrate and the sediment grains is responsible for the increase in shear velocity. The primary mode of energy dissipation is found to be friction between gas hydrate and the sediment matrix, combined with an absence of inertial coupling between gas hydrate and the pore fluid. These results predict similar attenuation increase in hydrate-bearing formations over most of the sonic and seismic frequency range.
Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics.
Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2017-03-04
Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model in Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)₄ model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model performs better, not only in model fitting but also in forecasting. Furthermore, considering stability and modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of capturing the future trend: the number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before declining slowly and eventually dying out.
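For readers unfamiliar with grey modeling, the sketch below implements the classical GM(1,1) recursion on a short synthetic case series; the data and forecast horizon are illustrative only, and the periodic (PECGM) and Fourier (FGM) variants add correction terms on top of this core.

```python
import numpy as np

def gm11(x0, n_forecast=3):
    """Classical GM(1,1) grey model: fit on series x0 and forecast n_forecast steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]     # develop coefficient a, grey input b
    k = np.arange(len(x0) + n_forecast)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])     # inverse accumulation
    x0_hat[0] = x0[0]
    return x0_hat

# Illustrative annual case counts (not the Xinjiang surveillance data).
cases = [410, 455, 500, 560, 610, 680]
print(gm11(cases, n_forecast=3))   # fitted values followed by a 3-step forecast
```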
Anwar, Mohammad Y; Lewnard, Joseph A; Parikh, Sunil; Pitzer, Virginia E
2016-11-22
Malaria remains endemic in Afghanistan. National control and prevention strategies would be greatly enhanced through a better ability to forecast future trends in disease incidence. It is, therefore, of interest to develop a predictive tool for malaria patterns based on the current passive and affordable surveillance system in this resource-limited region. This study employs data from the Ministry of Public Health's monthly reports from January 2005 to September 2015. Malaria incidence in Afghanistan was forecasted using autoregressive integrated moving average (ARIMA) models in order to build a predictive tool for malaria surveillance. Environmental and climate data were incorporated to assess whether they improve the predictive power of the models. Two models were identified, each appropriate for different time horizons. For near-term forecasts, malaria incidence can be predicted based on the number of cases in the four previous months and 12 months prior (Model 1); for longer-term prediction, malaria incidence can be predicted using the rates 1 and 12 months prior (Model 2). Next, climate and environmental variables were incorporated to assess whether the predictive power of the proposed models could be improved. The enhanced vegetation index was found to increase the predictive accuracy of longer-term forecasts. Results indicate that ARIMA models can be applied to forecast malaria patterns in Afghanistan, complementing current surveillance systems. The models provide a means to better understand malaria dynamics in a resource-limited context with minimal data input, yielding forecasts that can be used for public health planning at the national level.
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
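A random walk Metropolis sampler of the kind used in this study can be written compactly. The sketch below targets a toy log-posterior for a single phenology-like parameter (a thermal-time requirement) with fabricated observations and priors; it illustrates the mechanism only, not the nine-model analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: observed days to heading; the toy model says heading occurs after a fixed
# thermal-time requirement theta divided by a known mean daily temperature.
obs_days = np.array([62.0, 65.0, 60.0, 63.0])
mean_temp = 15.0
sigma = 2.0   # assumed observation error (days)

def log_post(theta):
    """Gaussian likelihood around theta/mean_temp plus a weak normal prior."""
    pred = theta / mean_temp
    loglik = -0.5 * np.sum(((obs_days - pred) / sigma) ** 2)
    logprior = -0.5 * ((theta - 900.0) / 200.0) ** 2
    return loglik + logprior

# Random walk Metropolis: propose theta' = theta + N(0, step), accept with
# probability min(1, exp(log_post(theta') - log_post(theta))).
theta, step, samples = 900.0, 10.0, []
for _ in range(20000):
    prop = theta + rng.normal(0, step)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

samples = np.array(samples[5000:])                 # drop burn-in
print(samples.mean(), np.percentile(samples, [2.5, 97.5]))
```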
Susan J. Prichard; Eva C. Karau; Roger D. Ottmar; Maureen C. Kennedy; James B. Cronan; Clinton S. Wright; Robert E. Keane
2014-01-01
Reliable predictions of fuel consumption are critical in the eastern United States (US), where prescribed burning is frequently applied to forests and air quality is of increasing concern. CONSUME and the First Order Fire Effects Model (FOFEM), predictive models developed to estimate fuel consumption and emissions from wildland fires, have not been systematically...
The effect of model resolution in predicting meteorological parameters used in fire danger rating.
Jeanne L. Hoadley; Ken Westrick; Sue A. Ferguson; Scott L. Goodrick; Larry Bradshaw; Paul Werth
2004-01-01
Previous studies of model performance at varying resolutions have focused on winter storms or isolated convective events. Little attention has been given to the static high pressure situations that may lead to severe wildfire outbreaks. This study focuses on such an event so as to evaluate the value of increased model resolution for prediction of fire danger. The...
The effect of model resolution in predicting meteorological parameters used in fire danger rating
Jeanne L. Hoadley; Ken Westrick; Sue A. Ferguson; Scott L. Goodrick; Larry Bradshaw; Paul Werth
2004-01-01
Previous studies of model performance at varying resolutions have focused on winter storms or isolated convective events. Little attention has been given to the static high pressure situations that may lead to severe wildfire outbreaks. This study focuses on such an event so as to evaluate the value of increased model resolution for prediction of fire danger. The...
Extending the cost-benefit model of thermoregulation: high-temperature environments.
Vickers, Mathew; Manicom, Carryn; Schwarzkopf, Lin
2011-04-01
The classic cost-benefit model of ectothermic thermoregulation compares energetic costs and benefits, providing a critical framework for understanding this process (Huey and Slatkin 1976 ). It considers the case where environmental temperature (T(e)) is less than the selected temperature of the organism (T(sel)), and it predicts that, to minimize increasing energetic costs of thermoregulation as habitat thermal quality declines, thermoregulatory effort should decrease until the lizard thermoconforms. We extended this model to include the case where T(e) exceeds T(sel), and we redefine costs and benefits in terms of fitness to include effects of body temperature (T(b)) on performance and survival. Our extended model predicts that lizards will increase thermoregulatory effort as habitat thermal quality declines, gaining the fitness benefits of optimal T(b) and maximizing the net benefit of activity. Further, to offset the disproportionately high fitness costs of high T(e) compared with low T(e), we predicted that lizards would thermoregulate more effectively at high values of T(e) than at low ones. We tested our predictions on three sympatric skink species (Carlia rostralis, Carlia rubrigularis, and Carlia storri) in hot savanna woodlands and found that thermoregulatory effort increased as thermal quality declined and that lizards thermoregulated most effectively at high values of T(e).
Offspring Size and Reproductive Allocation in Harvester Ants.
Wiernasz, Diane C; Cole, Blaine J
2018-01-01
A fundamental decision that an organism must make is how to allocate resources to offspring, with respect to both size and number. The two major theoretical approaches to this problem, optimal offspring size and optimistic brood size models, make different predictions that may be reconciled by including how offspring fitness is related to size. We extended the reasoning of Trivers and Willard (1973) to derive a general model of how parents should allocate additional resources with respect to the number of males and females produced, and among individuals of each sex, based on the fitness payoffs of each. We then predicted how harvester ant colonies should invest additional resources and tested three hypotheses derived from our model, using data from 3 years of food supplementation bracketed by 6 years without food addition. All major results were predicted by our model: food supplementation increased the number of reproductives produced. Male, but not female, size increased with food addition; the greatest increases in male size occurred in colonies that made small females. We discuss how use of a fitness landscape improves quantitative predictions about allocation decisions. When parents can invest differentially in offspring of different types, the best strategy will depend on parental state as well as the effect of investment on offspring fitness.
Mass and stiffness estimation using mobile devices for structural health monitoring
NASA Astrophysics Data System (ADS)
Le, Viet; Yu, Tzuyang
2015-04-01
In the structural health monitoring (SHM) of civil infrastructure, dynamic methods using mass, damping, and stiffness for characterizing structural health have been a traditional and widely used approach. Changes in these system parameters over time indicate the progress of structural degradation or deterioration. In these methods, the capability of predicting system parameters is essential to their success. In this paper, research work on the development of a dynamic SHM method based on perturbation analysis is reported. The concept is to use externally applied mass to perturb an unknown system and measure the natural frequency of the system. Derived theoretical expressions for mass and stiffness prediction are experimentally verified with a building model. Dynamic responses of the building model perturbed by various masses in free vibration were experimentally measured by a mobile device (cell phone) to extract the natural frequency of the building model. A single-degree-of-freedom (SDOF) modeling approach was adopted, consistent with the use of a cell phone for measurement. The experimental results show that the percentage error of the predicted mass increases as the mass ratio increases, while the percentage error of the predicted stiffness decreases as the mass ratio increases. This work also demonstrated the potential use of mobile devices in the health monitoring of civil infrastructure.
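Under the SDOF assumption, the perturbation idea reduces to two equations: omega1^2 = k/m for the unperturbed system and omega2^2 = k/(m + dm) after adding a known mass dm, giving m = dm*omega2^2/(omega1^2 - omega2^2) and k = m*omega1^2. The sketch below solves these with made-up frequencies, not the paper's building-model measurements.

```python
import numpy as np

# SDOF perturbation: omega1^2 = k/m (original) and omega2^2 = k/(m + dm) (with added mass dm).
# Solving the pair gives  m = dm * omega2^2 / (omega1^2 - omega2^2)  and  k = m * omega1^2.
def estimate_mass_stiffness(f1_hz, f2_hz, dm):
    w1, w2 = 2 * np.pi * f1_hz, 2 * np.pi * f2_hz
    m = dm * w2**2 / (w1**2 - w2**2)
    k = m * w1**2
    return m, k

# Illustrative numbers: 2.50 Hz unperturbed, 2.38 Hz after adding 0.5 kg to the model.
m, k = estimate_mass_stiffness(2.50, 2.38, 0.5)
print(f"estimated mass = {m:.2f} kg, stiffness = {k:.1f} N/m")
```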
Sun, Xiangqing; Elston, Robert C; Barnholtz-Sloan, Jill S; Falk, Gary W; Grady, William M; Faulx, Ashley; Mittal, Sumeet K; Canto, Marcia; Shaheen, Nicholas J; Wang, Jean S; Iyer, Prasad G; Abrams, Julian A; Tian, Ye D; Willis, Joseph E; Guda, Kishore; Markowitz, Sanford D; Chandar, Apoorva; Warfe, James M; Brock, Wendy; Chak, Amitabh
2016-05-01
Barrett's esophagus is often asymptomatic and only a small portion of Barrett's esophagus patients are currently diagnosed and under surveillance. Therefore, it is important to develop risk prediction models to identify high-risk individuals with Barrett's esophagus. Familial aggregation of Barrett's esophagus and esophageal adenocarcinoma, and the increased risk of esophageal adenocarcinoma for individuals with a family history, raise the necessity of including genetic factors in the prediction model. Methods to determine risk prediction models using both risk covariates and ascertained family data are not well developed. We developed a Barrett's Esophagus Translational Research Network (BETRNet) risk prediction model from 787 singly ascertained Barrett's esophagus pedigrees and 92 multiplex Barrett's esophagus pedigrees, fitting a multivariate logistic model that incorporates family history and clinical risk factors. The eight risk factors, age, sex, education level, parental status, smoking, heartburn frequency, regurgitation frequency, and use of acid suppressant, were included in the model. The prediction accuracy was evaluated on the training dataset and an independent validation dataset of 643 multiplex Barrett's esophagus pedigrees. Our results indicate family information helps to predict Barrett's esophagus risk, and predicting in families improves both prediction calibration and discrimination accuracy. Our model can predict Barrett's esophagus risk for anyone with family members known to have, or not have, had Barrett's esophagus. It can predict risk for unrelated individuals without knowing any relatives' information. Our prediction model will shed light on effectively identifying high-risk individuals for Barrett's esophagus screening and surveillance, consequently allowing intervention at an early stage, and reducing mortality from esophageal adenocarcinoma. Cancer Epidemiol Biomarkers Prev; 25(5); 727-35. ©2016 AACR. ©2016 American Association for Cancer Research.
NASA Astrophysics Data System (ADS)
Poppett, Claire; Allington-Smith, Jeremy
2010-07-01
We investigate the FRD performance of a 150 μm core fibre for its suitability to the SIDE project [1]. This work builds on our previous work [2] (Paper 1), where we examined the dependence of FRD on length in fibres with a core size of 100 μm and proposed a new multi-component model to explain the results. To predict the FRD characteristics of a fibre, the most commonly used model is the adaptation by Carrasco and Parry [3] of the Gloge model [8], which quantifies the number of scattering defects within an optical fibre using a single parameter, d0. The model predicts many trends that are seen experimentally, for example, a decrease in FRD as core diameter increases, and also as wavelength increases. However, the model also predicts a strong dependence of FRD on length that is not seen experimentally. By adapting the single-fibre model to include a second fibre, we can quantify the amount of FRD due to stress caused by the method of termination. By fitting the model to experimental data we find that polishing the fibre induces a small increase in stress at the end of the fibre compared to a simple cleave technique.
Influence of microscopic strain heterogeneity on the formability of martensitic stainless steel
NASA Astrophysics Data System (ADS)
Bettanini, Alvise Miotti; Delannay, Laurent; Jacques, Pascal J.; Pardoen, Thomas; Badinier, Guillaume; Mithieux, Jean-Denis
2017-10-01
Both finite element modeling and mean field (Mori-Tanaka) modeling are used to predict the strain partitioning in the martensite-ferrite microstructure of an AISI 410 martensitic stainless steel. Numerical predictions reproduce experimental trends according to which macroscopic strength is increased when the dissolution of carbides leads to carbon enrichment of martensite. However, the increased strength contrast of ferrite and martensite favours strain localization and high stress triaxiality in ferrite, which in turn promotes ductile damage development.
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
Based on the idea of second-order curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is satisfactory.
Predicting Liver Transplant Capacity Using Discrete Event Simulation.
Toro-Díaz, Hector; Mayorga, Maria E; Barritt, A Sidney; Orman, Eric S; Wheeler, Stephanie B
2015-08-01
The number of liver transplants (LTs) performed in the US increased until 2006 but has since declined despite an ongoing increase in demand. This decline may be due in part to decreased donor liver quality and increasing discard of poor-quality livers. We constructed a discrete event simulation (DES) model informed by current donor characteristics to predict future LT trends through the year 2030. The data source for our model is the United Network for Organ Sharing database, which contains patient-level information on all organ transplants performed in the US. Previous analysis showed that liver discard is increasing and that discarded organs are more often from donors who are older, are obese, have diabetes, and donated after cardiac death. Given that the prevalence of these factors is increasing, the DES model quantifies the reduction in the number of LTs performed through 2030. In addition, the model estimates the total number of future donors needed to maintain the current volume of LTs and the effect of a hypothetical scenario of improved reperfusion technology. We also forecast the number of patients on the waiting list and compare this with the estimated number of LTs to illustrate the impact that decreased LTs will have on patients needing transplants. By altering assumptions about the future donor pool, this model can be used to develop policy interventions to prevent a further decline in this lifesaving therapy. To our knowledge, there are no similar predictive models of future LT use based on epidemiological trends. © The Author(s) 2014.
Predicting Liver Transplant Capacity Using Discrete Event Simulation
Diaz, Hector Toro; Mayorga, Maria; Barritt, A. Sidney; Orman, Eric S.; Wheeler, Stephanie B.
2014-01-01
The number of liver transplants (LTs) performed in the US increased until 2006, but has since declined despite an ongoing increase in demand. This decline may be due in part to decreased donor liver quality and increasing discard of poor quality livers. We constructed a Discrete Event Simulation (DES) model informed by current donor characteristics to predict future LT trends through the year 2030. The data source for our model is the United Network for Organ Sharing database, which contains patient level information on all organ transplants performed in the US. Previous analysis showed that liver discard is increasing and that discarded organs are more often from donors who are older, obese, have diabetes, and donated after cardiac death. Given that the prevalence of these factors is increasing, the DES model quantifies the reduction in the number of LTs performed through 2030. In addition, the model estimates the total number of future donors needed to maintain the current volume of LTs, and the effect of a hypothetical scenario of improved reperfusion technology. We also forecast the number of patients on the waiting list and compare this to the estimated number of LTs to illustrate the impact that decreased LTs will have on patients needing transplants. By altering assumptions about the future donor pool, this model can be used to develop policy interventions to prevent a further decline in this life saving therapy. To our knowledge, there are no similar predictive models of future LT use based on epidemiologic trends. PMID:25391681
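The discrete event simulation mechanism underlying these two records can be illustrated with a stripped-down event queue: donor arrivals, probabilistic discard, and transplants drawn from a waiting list. All rates and probabilities below are invented placeholders, not the UNOS-calibrated values from the study.

```python
import heapq
import random

# Minimal discrete event simulation: donor livers arrive as a Poisson-like stream,
# each is discarded with some probability (a stand-in for donor-quality effects),
# and accepted organs are transplanted to patients on a waiting list.
random.seed(0)
events = []                  # (time, kind) priority queue
heapq.heappush(events, (0.0, "donor"))
t_end, discard_prob, arrival_rate = 365.0, 0.25, 20.0   # illustrative values (days, -, donors/day)

waiting_list, transplants, discards = 12000, 0, 0
while events:
    t, kind = heapq.heappop(events)
    if t > t_end:
        break
    if kind == "donor":
        # schedule the next donor arrival
        heapq.heappush(events, (t + random.expovariate(arrival_rate), "donor"))
        if random.random() < discard_prob:
            discards += 1
        elif waiting_list > 0:
            waiting_list -= 1
            transplants += 1

print(f"transplants={transplants}, discards={discards}, still waiting={waiting_list}")
```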
Prediction of breast cancer risk by genetic risk factors, overall and by hormone receptor status.
Hüsing, Anika; Canzian, Federico; Beckmann, Lars; Garcia-Closas, Montserrat; Diver, W Ryan; Thun, Michael J; Berg, Christine D; Hoover, Robert N; Ziegler, Regina G; Figueroa, Jonine D; Isaacs, Claudine; Olsen, Anja; Viallon, Vivian; Boeing, Heiner; Masala, Giovanna; Trichopoulos, Dimitrios; Peeters, Petra H M; Lund, Eiliv; Ardanaz, Eva; Khaw, Kay-Tee; Lenner, Per; Kolonel, Laurence N; Stram, Daniel O; Le Marchand, Loïc; McCarty, Catherine A; Buring, Julie E; Lee, I-Min; Zhang, Shumin; Lindström, Sara; Hankinson, Susan E; Riboli, Elio; Hunter, David J; Henderson, Brian E; Chanock, Stephen J; Haiman, Christopher A; Kraft, Peter; Kaaks, Rudolf
2012-09-01
There is increasing interest in adding common genetic variants identified through genome wide association studies (GWAS) to breast cancer risk prediction models. First results from such models showed modest benefits in terms of risk discrimination. Heterogeneity of breast cancer as defined by hormone-receptor status has not been considered in this context. In this study we investigated the predictive capacity of 32 GWAS-detected common variants for breast cancer risk, alone and in combination with classical risk factors, and for tumours with different hormone receptor status. Within the Breast and Prostate Cancer Cohort Consortium, we analysed 6009 invasive breast cancer cases and 7827 matched controls of European ancestry, with data on classical breast cancer risk factors and 32 common gene variants identified through GWAS. Discriminatory ability with respect to breast cancer of specific hormone receptor-status was assessed with the age adjusted and cohort-adjusted concordance statistic (AUROC(a)). Absolute risk scores were calculated with external reference data. Integrated discrimination improvement was used to measure improvements in risk prediction. We found a small but steady increase in discriminatory ability with increasing numbers of genetic variants included in the model (difference in AUROC(a) going from 2.7% to 4%). Discriminatory ability for all models varied strongly by hormone receptor status. Adding information on common polymorphisms provides small but statistically significant improvements in the quality of breast cancer risk prediction models. We consistently observed better performance for receptor-positive cases, but the gain in discriminatory quality is not sufficient for clinical application.
Land-atmosphere coupling and climate prediction over the U.S. Southern Great Plains
NASA Astrophysics Data System (ADS)
Williams, I. N.; Lu, Y.; Kueppers, L. M.; Riley, W. J.; Biraud, S.; Bagley, J. E.; Torn, M. S.
2016-12-01
Biases in land-atmosphere coupling in climate models can contribute to climate prediction biases, but land models are rarely evaluated in the context of this coupling. We tested land-atmosphere coupling and explored effects of land surface parameterizations on climate prediction in a single-column version of the NCAR Community Earth System Model (CESM1.2.2) and an offline Community Land Model (CLM4.5). The correlation between leaf area index (LAI) and surface evaporative fraction (ratio of latent to total turbulent heat flux) was substantially underpredicted compared to observations in the U.S. Southern Great Plains, while the correlation between soil moisture and evaporative fraction was overpredicted by CLM4.5. These correlations were improved by prescribing observed LAI, increasing soil resistance to evaporation, increasing minimum stomatal conductance, and increasing leaf reflectance. The modifications reduced the root mean squared error (RMSE) in daytime 2 m air temperature from 3.6 °C to 2 °C in summer (JJA), and reduced RMSE in total JJA precipitation from 133 to 84 mm. The modifications had the largest effect on prediction of summer drought in 2006, when a warm bias in daytime 2 m air temperature was reduced from +6 °C to a smaller cold bias of -1.3 °C, and a corresponding dry bias in total JJA precipitation was reduced from -111 mm to -23 mm. Thus, the role of vegetation in droughts and heat waves is likely underpredicted in CESM1.2.2, and improvements in land surface models can improve prediction of climate extremes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hathaway, M.D.; Wood, J.R.
1997-10-01
CFD codes capable of utilizing multi-block grids provide the capability to analyze the complete geometry of centrifugal compressors. Attendant with this increased capability is potentially increased grid setup time and more computational overhead with the resultant increase in wall clock time to obtain a solution. If the increase in difficulty of obtaining a solution significantly improves the solution from that obtained by modeling the features of the tip clearance flow or the typical bluntness of a centrifugal compressor's trailing edge, then the additional burden is worthwhile. However, if the additional information obtained is of marginal use, then modeling of certain features of the geometry may provide reasonable solutions for designers to make comparative choices when pursuing a new design. In this spirit a sequence of grids were generated to study the relative importance of modeling versus detailed gridding of the tip gap and blunt trailing edge regions of the NASA large low-speed centrifugal compressor for which there is considerable detailed internal laser anemometry data available for comparison. The results indicate: (1) There is no significant difference in predicted tip clearance mass flow rate whether the tip gap is gridded or modeled. (2) Gridding rather than modeling the trailing edge results in better predictions of some flow details downstream of the impeller, but otherwise appears to offer no great benefits. (3) The pitchwise variation of absolute flow angle decreases rapidly up to 8% impeller radius ratio and much more slowly thereafter. Although some improvements in prediction of flow field details are realized as a result of analyzing the actual geometry there is no clear consensus that any of the grids investigated produced superior results in every case when compared to the measurements. However, if a multi-block code is available, it should be used, as it has the propensity for enabling better predictions than a single block code.
Multimodel predictive system for carbon dioxide solubility in saline formation waters.
Wang, Zan; Small, Mitchell J; Karamalidis, Athanasios K
2013-02-05
The prediction of carbon dioxide solubility in brine at conditions relevant to carbon sequestration (i.e., high temperature, pressure, and salt concentration (T-P-X)) is crucial when this technology is applied. Eleven mathematical models for predicting CO(2) solubility in brine are compared and considered for inclusion in a multimodel predictive system. Model goodness of fit is evaluated over the temperature range 304-433 K, pressure range 74-500 bar, and salt concentration range 0-7 m (NaCl equivalent), using 173 published CO(2) solubility measurements, particularly selected for those conditions. The performance of each model is assessed using various statistical methods, including the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Different models emerge as best fits for different subranges of the input conditions. A classification tree is generated using machine learning methods to predict the best-performing model under different T-P-X subranges, allowing development of a multimodel predictive system (MMoPS) that selects and applies the model expected to yield the most accurate CO(2) solubility prediction. Statistical analysis of the MMoPS predictions, including a stratified 5-fold cross validation, shows that MMoPS outperforms each individual model and increases the overall accuracy of CO(2) solubility prediction across the range of T-P-X conditions likely to be encountered in carbon sequestration applications.
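The model-routing step, learning which solubility model performs best in each T-P-X subrange, can be sketched with a standard classification tree. The snippet below uses synthetic conditions and a hypothetical labelling rule standing in for the AIC/BIC-based winners, so it is a schematic of the MMoPS idea rather than the published system.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic training set: conditions (T in K, P in bar, X in molal NaCl-equivalent)
# labelled with the index of the solubility model that fit best in that subrange.
rng = np.random.default_rng(0)
T = rng.uniform(304, 433, 300)
P = rng.uniform(74, 500, 300)
X = rng.uniform(0, 7, 300)
conditions = np.column_stack([T, P, X])
# Hypothetical labelling rule standing in for the AIC/BIC-based winner at each point.
best_model = np.where(X > 4, 2, np.where(T > 380, 1, 0))

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(conditions, best_model)
print(export_text(tree, feature_names=["T_K", "P_bar", "X_molal"]))

# At prediction time, route a new (T, P, X) condition to its recommended model.
print("recommended model index:", tree.predict([[350.0, 200.0, 5.5]])[0])
```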
Modeling ultrasound propagation through material of increasing geometrical complexity.
Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen
2018-06-01
Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
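Wiener deconvolution of a measured signal against a reference pulse, as used above to estimate the ultrasound acoustic response, can be expressed in a few lines of frequency-domain code. The example below uses a synthetic pulse and an assumed noise-to-signal constant purely to illustrate the operation.

```python
import numpy as np

def wiener_deconvolve(measured, reference, nsr=0.01):
    """Estimate the sample's response h from measured = reference * h + noise.

    Classic Wiener filter in the frequency domain: H = conj(R) * M / (|R|^2 + NSR),
    where R and M are FFTs of the reference and measured signals and NSR is an
    assumed noise-to-signal power ratio.
    """
    R = np.fft.rfft(reference)
    M = np.fft.rfft(measured)
    H = np.conj(R) * M / (np.abs(R) ** 2 + nsr)
    return np.fft.irfft(H, n=len(measured))

# Synthetic demonstration: a reference transducer pulse and a measured signal that is
# the pulse attenuated and delayed by the sample, plus noise.
rng = np.random.default_rng(3)
t = np.arange(1024)
reference = np.exp(-((t - 100) / 10.0) ** 2) * np.sin(0.3 * t)
true_response = np.zeros_like(reference)
true_response[40] = 0.6                           # attenuation 0.6, delay 40 samples
measured = np.convolve(reference, true_response)[: len(reference)]
measured += rng.normal(0, 0.005, size=measured.shape)

estimate = wiener_deconvolve(measured, reference)
print("recovered delay:", np.argmax(estimate), "amplitude ~", round(float(estimate.max()), 2))
```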
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network
Yu, Ying; Wang, Yirui; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest in more advanced forecasting methods leads us to innovate in forecasting methodology. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed for tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to remove the long-term linear trend, and then train the dendritic neural network model on the residual data to make a short-term prediction. As the results in this paper show, the SA-D model achieves considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also apply it to the data that other authors used with other models and compare the results. These comparisons likewise show that the SA-D model achieves good predictive performance in terms of the normalized mean square error, absolute percentage error, and correlation coefficient. PMID:28246527
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.
Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest in more advanced forecasting methods leads us to innovate in forecasting methodology. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed for tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to remove the long-term linear trend, and then train the dendritic neural network model on the residual data to make a short-term prediction. As the results in this paper show, the SA-D model achieves considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also apply it to the data that other authors used with other models and compare the results. These comparisons likewise show that the SA-D model achieves good predictive performance in terms of the normalized mean square error, absolute percentage error, and correlation coefficient.
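The hybrid scheme in the two records above (a seasonal ARIMA for the linear/seasonal part, a neural network trained on the residuals) can be sketched with standard tooling. The snippet below uses a small multilayer perceptron as a generic stand-in for the dendritic neuron model and synthetic quarterly data; the orders and architecture are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

# Synthetic quarterly tourism-arrivals series with trend, seasonality, and noise.
rng = np.random.default_rng(7)
n = 80
t = np.arange(n)
y = 100 + 1.5 * t + 20 * np.sin(2 * np.pi * t / 4) + rng.normal(0, 5, n)

# Step 1: SARIMA captures the linear trend and seasonal component.
sarima = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 0, 4)).fit(disp=False)
resid = y - sarima.fittedvalues

# Step 2: a neural network models the remaining (nonlinear) residual structure
# from lagged residuals; a dendritic neuron model would replace the MLP here.
lags = 4
X = np.column_stack([resid[i : n - lags + i] for i in range(lags)])
target = resid[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, target)

# Combined one-step-ahead forecast = SARIMA forecast + predicted residual.
next_resid = mlp.predict(resid[-lags:].reshape(1, -1))[0]
print(sarima.forecast(steps=1)[0] + next_resid)
```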
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-01
... Defense of Infrastructure Against Cyber Threats (PREDICT) Program AGENCY: Science and Technology... Defense of Infrastructure Against Cyber Threats (PREDICT) initiative. PREDICT is an initiative to... new models, technologies and products that support effective threat assessment and increase cyber...
NASA Astrophysics Data System (ADS)
Spracklen, D. V.; Logan, J. A.; Mickley, L. J.; Park, R. J.; Flannigan, M. D.; Westerling, A. L.
2006-12-01
Increased forest fire activity in the Western United States appears to be driven by increasing spring and summer temperatures. Here we make a first estimate of how climate-driven changes in fire activity will influence summertime organic carbon (OC) concentrations in the Western US. We use output from a general circulation model (GCM) combined with area burned regressions to predict how area burned will change between present day and 2050. Calculated area burned is used to create future emission estimates for the Western U.S. and we use a global chemical transport model (CTM) to predict future changes in OC concentrations. Stepwise linear regression is used to determine the best relationships between observed area burned for 1980-2004 and variables chosen from temperature, relative humidity, wind speed, rainfall and drought indices from the Canadian Fire Weather Index Model. Best predictors are ecosystem dependent but typically include mean summer temperature and mean drought code. In forest ecosystems of the Western U.S. our regressions explain 50-60% of the variance in annual area burned. Between 2000 and 2050 increases in temperature and reductions in precipitation, as predicted by the GISS GCM, cause mean area burned in the western U.S. to increase by 30-55%. We use the GEOS-Chem CTM to show that these increased emissions result in an increase in summertime western U.S. OC concentrations of 55% over current concentrations. Our results show that the predicted increase in future wild fires will have important consequences for western US air quality and visibility.
Predicting grain yield using canopy hyperspectral reflectance in wheat breeding data.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; de Los Campos, Gustavo; Alvarado, Gregorio; Suchismita, Mondal; Rutkoski, Jessica; González-Pérez, Lorena; Burgueño, Juan
2017-01-01
Modern agriculture uses hyperspectral cameras to obtain hundreds of reflectance data measured at discrete narrow bands to cover the whole visible light spectrum and part of the infrared and ultraviolet light spectra, depending on the camera. This information is used to construct vegetation indices (VI) (e.g., green normalized difference vegetation index or GNDVI, simple ratio or SRa, etc.) which are used for the prediction of primary traits (e.g., biomass). However, these indices only use some bands and are cultivar-specific; therefore they lose considerable information and are not robust for all cultivars. This study proposes models that use all available bands as predictors to increase prediction accuracy; we compared these approaches with eight conventional vegetation indices (VIs) constructed using only some bands. The data set we used comes from CIMMYT's global wheat program and comprises 1170 genotypes evaluated for grain yield (ton/ha) in five environments (Drought, Irrigated, EarlyHeat, Melgas and Reduced Irrigated); the reflectance data were measured in 250 discrete narrow bands ranging between 392 and 851 nm. The proposed models for the simultaneous analysis of all the bands were ordinary least squares (OLS), Bayes B, principal components with Bayes B, functional B-spline, functional Fourier and functional partial least squares. The results of these models were compared with those of OLS performed using each of the eight VIs individually and combined as predictors. We found that using all bands simultaneously increased prediction accuracy more than using VIs alone. The Splines and Fourier models had the best prediction accuracy for each of the nine time-points under study. Combining image data collected at different time-points led to a small increase in prediction accuracy relative to models that use data from a single time-point. Also, using bands with heritabilities larger than 0.5 only in Drought as predictor variables showed improvements in prediction accuracy.
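A conventional partial least squares regression on all bands already illustrates the contrast with index-based prediction described above. The sketch below uses simulated reflectance data and a simple two-band index as the baseline; it is a schematic of the comparison, not the CIMMYT analysis or its functional-regression models.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Simulated data: 200 plots x 250 narrow reflectance bands, plus grain yield (t/ha).
rng = np.random.default_rng(11)
bands = rng.normal(0.3, 0.05, size=(200, 250))
yield_tha = 4 + bands[:, 50] * 5 - bands[:, 180] * 3 + rng.normal(0, 0.3, 200)

# Vegetation-index-style baseline: a single normalized-difference index from two bands.
ndvi_like = (bands[:, 180] - bands[:, 50]) / (bands[:, 180] + bands[:, 50])
vi_score = cross_val_score(LinearRegression(), ndvi_like.reshape(-1, 1), yield_tha, cv=5).mean()

# Whole-spectrum model: PLS regression using all 250 bands as predictors.
pls_score = cross_val_score(PLSRegression(n_components=10), bands, yield_tha, cv=5).mean()

print(f"R^2 with a two-band index: {vi_score:.2f}; R^2 with all bands (PLS): {pls_score:.2f}")
```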
Gonzalez-Benecke, Carlos A; Teskey, Robert O; Dinon-Aldridge, Heather; Martin, Timothy A
2017-11-01
Climate projections from 20 downscaled global climate models (GCMs) were used with the 3-PG model to predict the future productivity and water use of planted loblolly pine (Pinus taeda) growing across the southeastern United States. Predictions were made using Representative Concentration Pathways (RCP) 4.5 and 8.5. These represent scenarios in which total radiative forcing stabilizes before 2100 (RCP 4.5) or continues increasing throughout the century (RCP 8.5). Thirty-six sites evenly distributed across the native range of the species were used in the analysis. These sites represent a range in current mean annual temperature (14.9-21.6°C) and precipitation (1,120-1,680 mm/year). The site index of each site, which is a measure of growth potential, was varied to represent different levels of management. The 3-PG model predicted that aboveground biomass growth and net primary productivity will increase by 10%-40% in many parts of the region in the future. At cooler sites, the relative growth increase was greater than at warmer sites. By running the model with the baseline [CO2] or the anticipated elevated [CO2], the effect of CO2 on growth was separated from that of other climate factors. The growth increase at warmer sites was due almost entirely to elevated [CO2]. The growth increase at cooler sites was due to a combination of elevated [CO2] and increased air temperature. Low site index stands had a greater relative increase in growth under the climate change scenarios than those with a high site index. Water use increased in proportion to increases in leaf area and productivity but precipitation was still adequate, based on the downscaled GCM climate projections. We conclude that an increase in productivity can be expected for a large majority of the planted loblolly pine stands in the southeastern United States during this century. © 2017 John Wiley & Sons Ltd.
Predictive power of food web models based on body size decreases with trophic complexity.
Jonsson, Tomas; Kaartinen, Riikka; Jonsson, Mattias; Bommarco, Riccardo
2018-05-01
Food web models parameterised using body size show promise to predict trophic interaction strengths (IS) and abundance dynamics. However, this remains to be rigorously tested in food webs beyond simple trophic modules, where indirect and intraguild interactions could be important and driven by traits other than body size. We systematically varied predator body size, guild composition and richness in microcosm insect webs and compared experimental outcomes with predictions of IS from models with allometrically scaled parameters. Body size was a strong predictor of IS in simple modules (r2 = 0.92), but with increasing complexity the predictive power decreased, with model IS being consistently overestimated. We quantify the strength of observed trophic interaction modifications, partition this into density-mediated vs. behaviour-mediated indirect effects and show that model shortcomings in predicting IS are related to the size of behaviour-mediated effects. Our findings encourage development of dynamical food web models explicitly including and exploring indirect mechanisms. © 2018 John Wiley & Sons Ltd/CNRS.
Yue, Xu; Mickley, Loretta J.; Logan, Jennifer A.; Kaplan, Jed O.
2013-01-01
We estimate future wildfire activity over the western United States during the mid-21st century (2046–2065), based on results from 15 climate models following the A1B scenario. We develop fire prediction models by regressing meteorological variables from the current and previous years together with fire indexes onto observed regional area burned. The regressions explain 0.25–0.60 of the variance in observed annual area burned during 1980–2004, depending on the ecoregion. We also parameterize daily area burned with temperature, precipitation, and relative humidity. This approach explains ~0.5 of the variance in observed area burned over forest ecoregions but shows no predictive capability in the semi-arid regions of Nevada and California. By applying the meteorological fields from 15 climate models to our fire prediction models, we quantify the robustness of our wildfire projections at mid-century. We calculate increases of 24–124% in area burned using regressions and 63–169% with the parameterization. Our projections are most robust in the southwestern desert, where all GCMs predict significant (p<0.05) meteorological changes. For forested ecoregions, more GCMs predict significant increases in future area burned with the parameterization than with the regressions, because the latter approach is sensitive to hydrological variables that show large inter-model variability in the climate projections. The parameterization predicts that the fire season lengthens by 23 days in the warmer and drier climate at mid-century. Using a chemical transport model, we find that wildfire emissions will increase summertime surface organic carbon aerosol over the western United States by 46–70% and black carbon by 20–27% at midcentury, relative to the present day. The pollution is most enhanced during extreme episodes: above the 84th percentile of concentrations, OC increases by ~90% and BC by ~50%, while visibility decreases from 130 km to 100 km in 32 Federal Class 1 areas in Rocky Mountains Forest. PMID:24015109
Prediction of near-term breast cancer risk using a Bayesian belief network
NASA Astrophysics Data System (ADS)
Zheng, Bin; Ramalingam, Pandiyarajan; Hariharan, Harishwaran; Leader, Joseph K.; Gur, David
2013-03-01
Accurately predicting near-term breast cancer risk is an important prerequisite for establishing an optimal personalized breast cancer screening paradigm. In previous studies, we investigated and tested the feasibility of developing a unique near-term breast cancer risk prediction model based on a new risk factor associated with bilateral mammographic density asymmetry between the left and right breasts of a woman using a single feature. In this study we developed a multi-feature based Bayesian belief network (BBN) that combines bilateral mammographic density asymmetry with three other popular risk factors, namely (1) age, (2) family history, and (3) average breast density, to further increase the discriminatory power of our cancer risk model. A dataset involving "prior" negative mammography examinations of 348 women was used in the study. Among these women, 174 had breast cancer detected and verified in the next sequential screening examinations, and 174 remained negative (cancer-free). A BBN was applied to predict the risk of each woman having cancer detected six to 18 months later following the negative screening mammography. The prediction results were compared with those using single features. The prediction accuracy was significantly increased when using the BBN. The area under the ROC curve increased from an AUC=0.70 to 0.84 (p<0.01), while the positive predictive value (PPV) and negative predictive value (NPV) also increased from a PPV=0.61 to 0.78 and an NPV=0.65 to 0.75, respectively. This study demonstrates that a multi-feature based BBN can more accurately predict the near-term breast cancer risk than with a single feature.
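The way a belief network combines several discrete risk factors into a posterior probability can be shown with a tiny hand-coded example. The network structure and all conditional probabilities below are invented for illustration; the study's BBN uses four features and data-derived probability tables.

```python
import itertools

# Toy belief network: Cancer -> {DensityAsymmetry, FamilyHistory}.
# All probabilities below are invented for illustration.
p_cancer = {True: 0.05, False: 0.95}
p_asym_given_cancer = {True: 0.70, False: 0.30}     # P(asymmetry=high | cancer status)
p_fh_given_cancer = {True: 0.30, False: 0.15}       # P(family history | cancer status)

def posterior_cancer(asym_high, family_history):
    """P(cancer | evidence) by enumeration over the single hidden variable."""
    joint = {}
    for cancer in (True, False):
        p = p_cancer[cancer]
        p *= p_asym_given_cancer[cancer] if asym_high else 1 - p_asym_given_cancer[cancer]
        p *= p_fh_given_cancer[cancer] if family_history else 1 - p_fh_given_cancer[cancer]
        joint[cancer] = p
    return joint[True] / (joint[True] + joint[False])

for asym, fh in itertools.product((True, False), repeat=2):
    print(f"asymmetry={asym}, family history={fh}: P(cancer) = {posterior_cancer(asym, fh):.3f}")
```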
Predicting future coexistence in a North American ant community
Bewick, Sharon; Stuble, Katharine L; Lessard, Jean-Phillipe; Dunn, Robert R; Adler, Frederick R; Sanders, Nathan J
2014-01-01
Global climate change will remodel ecological communities worldwide. However, as a consequence of biotic interactions, communities may respond to climate change in idiosyncratic ways. This makes predictive models that incorporate biotic interactions necessary. We show how such models can be constructed based on empirical studies in combination with predictions or assumptions regarding the abiotic consequences of climate change. Specifically, we consider a well-studied ant community in North America. First, we use historical data to parameterize a basic model for species coexistence. Using this model, we determine the importance of various factors, including thermal niches, food discovery rates, and food removal rates, to historical species coexistence. We then extend the model to predict how the community will restructure in response to several climate-related changes, such as increased temperature, shifts in species phenology, and altered resource availability. Interestingly, our mechanistic model suggests that increased temperature and shifts in species phenology can have contrasting effects. Nevertheless, for almost all scenarios considered, we find that the most subordinate ant species suffers most as a result of climate change. More generally, our analysis shows that community composition can respond to climate warming in nonintuitive ways. For example, in the context of a community, it is not necessarily the most heat-sensitive species that are most at risk. Our results demonstrate how models that account for niche partitioning and interspecific trade-offs among species can be used to predict the likely idiosyncratic responses of local communities to climate change. PMID:24963378
Evaluation of a black-footed ferret resource utilization function model
Eads, D.A.; Millspaugh, J.J.; Biggins, D.E.; Jachowski, D.S.; Livieri, T.M.
2011-01-01
Resource utilization function (RUF) models permit evaluation of potential habitat for endangered species; ideally such models should be evaluated before use in management decision-making. We evaluated the predictive capabilities of a previously developed black-footed ferret (Mustela nigripes) RUF. Using the population-level RUF, generated from ferret observations at an adjacent yet distinct colony, we predicted the distribution of ferrets within a black-tailed prairie dog (Cynomys ludovicianus) colony in the Conata Basin, South Dakota, USA. We evaluated model performance, using data collected during post-breeding spotlight surveys (2007-2008) by assessing model agreement via weighted compositional analysis and count-metrics. Compositional analysis of home range use and colony-level availability, and core area use and home range availability, demonstrated ferret selection of the predicted Very high and High occurrence categories in 2007 and 2008. Simple count-metrics corroborated these findings and suggested selection of the Very high category in 2007 and the Very high and High categories in 2008. Collectively, these results suggested that the RUF was useful in predicting occurrence and intensity of space use of ferrets at our study site, the 2 objectives of the RUF. Application of this validated RUF would increase the resolution of habitat evaluations, permitting prediction of the distribution of ferrets within distinct colonies. Additional model evaluation at other sites, on other black-tailed prairie dog colonies of varying resource configuration and size, would increase understanding of influences upon model performance and the general utility of the RUF. © 2011 The Wildlife Society.
Modeling physiological resistance in bacterial biofilms.
Cogan, N G; Cortez, Ricardo; Fauci, Lisa
2005-07-01
A mathematical model of the action of antimicrobial agents on bacterial biofilms is presented. The model includes the fluid dynamics in and around the biofilm, advective and diffusive transport of two chemical constituents and the mechanism of physiological resistance. Although the mathematical model applies in three dimensions, we present two-dimensional simulations for arbitrary biofilm domains and various dosing strategies. The model allows the prediction of the spatial evolution of bacterial population and chemical constituents as well as different dosing strategies based on the fluid motion. We find that the interaction between the nutrient and the antimicrobial agent can reproduce survival curves which are comparable to other model predictions as well as experimental results. The model predicts that exposing the biofilm to low concentration doses of antimicrobial agent for longer time is more effective than short time dosing with high antimicrobial agent concentration. The effects of flow reversal and the roughness of the fluid/biofilm are also investigated. We find that reversing the flow increases the effectiveness of dosing. In addition, we show that overall survival decreases with increasing surface roughness.
Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin
2015-01-01
Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. PMID:26590280
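The odds ratios reported above come from logistic regression of isolation outcomes on binary landscape predictors. The snippet below reproduces that computation on fabricated swab-level data (OR = exponentiated coefficient); it is a methodological illustration, not the New York State analysis.

```python
import numpy as np
import statsmodels.api as sm

# Fabricated swab-level data: binary predictors (near water, near pasture) and
# a binary outcome (L. monocytogenes isolated). Not the New York State data.
rng = np.random.default_rng(5)
n = 1000
near_water = rng.integers(0, 2, n)
near_pasture = rng.integers(0, 2, n)
logit_p = -2.0 + 1.1 * near_water + 1.0 * near_pasture
isolated = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([near_water, near_pasture]))
fit = sm.Logit(isolated, X).fit(disp=False)

# Odds ratios are the exponentiated coefficients of the binary predictors.
odds_ratios = np.exp(fit.params[1:])
print("OR(near water) = %.2f, OR(near pasture) = %.2f" % tuple(odds_ratios))
```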
Baldwin, Mark A; Clary, Chadd; Maletsky, Lorin P; Rullkoetter, Paul J
2009-10-16
Verified computational models represent an efficient method for studying the relationship between articular geometry, soft-tissue constraint, and patellofemoral (PF) mechanics. The current study was performed to evaluate an explicit finite element (FE) modeling approach for predicting PF kinematics in the natural and implanted knee. Experimental three-dimensional kinematic data were collected on four healthy cadaver specimens in their natural state and after total knee replacement in the Kansas knee simulator during a simulated deep knee bend activity. Specimen-specific FE models were created from medical images and CAD implant geometry, and included soft-tissue structures representing medial-lateral PF ligaments and the quadriceps tendon. Measured quadriceps loads and prescribed tibiofemoral kinematics were used to predict dynamic kinematics of an isolated PF joint between 10 degrees and 110 degrees femoral flexion. Model sensitivity analyses were performed to determine the effect of rigid or deformable patellar representations and perturbed PF ligament mechanical properties (pre-tension and stiffness) on model predictions and computational efficiency. Predicted PF kinematics from the deformable analyses showed average root mean square (RMS) differences for the natural and implanted states of less than 3.1 degrees and 1.7 mm for all rotations and translations. Kinematic predictions with rigid bodies increased average RMS values slightly to 3.7 degrees and 1.9 mm with a five-fold decrease in computational time. Two-fold increases and decreases in PF ligament initial strain and linear stiffness were found to most adversely affect kinematic predictions for flexion, internal-external tilt and inferior-superior translation in both natural and implanted states. The verified models could be used to further investigate the effects of component alignment or soft-tissue variability on natural and implant PF mechanics.
Kahmann, A; Anzanello, M J; Fogliatto, F S; Marcelo, M C A; Ferrão, M F; Ortiz, R S; Mariotti, K C
2018-04-15
Street cocaine is typically altered with several compounds that increase its harmful health-related side effects, most notably depression, convulsions, and severe damage to the cardiovascular system, lungs, and brain. Thus, determining the concentration of cocaine and adulterants in seized drug samples is important from both health and forensic perspectives. Although FTIR has been widely used to identify the fingerprint and concentration of chemical compounds, spectroscopy datasets usually comprise thousands of highly correlated wavenumbers which, when used as predictors in regression models, tend to undermine the predictive performance of multivariate techniques. In this paper, we propose an FTIR wavenumber selection method aimed at identifying FTIR spectra intervals that best predict the concentration of cocaine and adulterants (e.g. caffeine, phenacetin, levamisole, and lidocaine) in cocaine samples. To that end, the Mutual Information measure is integrated into a Quadratic Programming problem with the objective of minimizing the probability of retaining redundant wavenumbers, while maximizing the relationship between retained wavenumbers and compounds' concentrations. Optimization outputs guide the order of inclusion of wavenumbers in a predictive model, using a forward-based wavenumber selection method. After the inclusion of each wavenumber, parameters of three alternative regression models are estimated, and each model's prediction error is assessed through the Mean Average Error (MAE) measure; the recommended subset of retained wavenumbers is the one that minimizes the prediction error with maximum parsimony. Applying our propositions to a dataset of 115 cocaine samples, we obtained a best prediction model with an average MAE of 0.0502 while retaining only 2.29% of the original wavenumbers, increasing the predictive precision by 0.0359 when compared to a model using the complete set of wavenumbers as predictors. Copyright © 2018 Elsevier B.V. All rights reserved.
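A simplified sketch of the forward, MAE-guided wavenumber selection described above. The paper couples mutual information with a quadratic program to penalize redundant wavenumbers; here a plain mutual-information ranking stands in for that optimization step, and the spectra and concentrations are simulated placeholders.

```python
# Simplified sketch of MI-guided forward wavenumber selection with MAE-based
# stopping. The paper couples mutual information with a quadratic program to
# down-weight redundant wavenumbers; here a plain MI ranking stands in for
# that step. Arrays `spectra` (samples x wavenumbers) and `conc` (adulterant
# concentration) are hypothetical placeholders.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
spectra = rng.random((115, 600))            # 115 samples, 600 wavenumbers
conc = spectra[:, 40] * 0.6 + spectra[:, 310] * 0.3 + rng.normal(0, 0.02, 115)

mi = mutual_info_regression(spectra, conc, random_state=1)
order = np.argsort(mi)[::-1]                # inclusion order, highest MI first

best_mae, best_subset = np.inf, None
for k in range(1, 31):                      # grow the subset one wavenumber at a time
    cols = order[:k]
    pred = cross_val_predict(LinearRegression(), spectra[:, cols], conc, cv=5)
    mae = np.mean(np.abs(pred - conc))
    if mae < best_mae - 1e-4:               # keep only clearly better (parsimonious) subsets
        best_mae, best_subset = mae, cols
print(f"retained {len(best_subset)} wavenumbers, CV MAE = {best_mae:.4f}")
```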
Karelina, T; Voronova, V; Demin, O; Colice, G
2016-01-01
Emerging T‐helper type 2 (Th2) cytokine‐based asthma therapies, such as tralokinumab, lebrikizumab (anti‐interleukin (IL)‐13), and mepolizumab (anti‐IL‐5), have shown differences in their blood eosinophil (EOS) response. To better understand these effects, we developed a mathematical model of EOS dynamics. For the anti‐IL‐13 therapies, lebrikizumab and tralokinumab, the model predicted an increase of 30% and 10% in total and activated EOS in the blood, respectively, and a decrease in the total and activated EOS in the airways. The model predicted a rapid decrease in total and activated EOS levels in blood and airways for the anti‐IL‐5 therapy mepolizumab. All model‐based predictions were consistent with published clinical observations. The modeling approach provided insights into EOS response after treatment with Th2‐targeted therapies, and supports the hypothesis that an increase in blood EOS after anti‐IL‐13 therapy is part of the pharmacological action of these therapies. PMID:27885827
Asgharian, Bahman; Price, Owen; Oberdörster, Gunter
2006-06-01
Inhalation of particles generated as a result of thermal degradation from fire or smoke, as may occur on spacecraft, is of major health concern to space-faring countries. Knowledge of lung airflow and particle transport under different gravity environments is required to address this concern by providing information on particle deposition. Gravity affects deposition of particles in the lung in two ways. First, the airflow distribution among airways is changed in different gravity environments. Second, particle losses by sedimentation are enhanced with increasing gravity. In this study, a model of airflow distribution in the lung that accounts for the influence of gravity was used for a mathematical description of particle deposition in the human lung to calculate lobar, regional, and local deposition of particles in different gravity environments. The lung geometry used in the mathematical model contained five lobes that allowed the assessment of lobar ventilation distribution and variation of particle deposition. At zero gravity, it was predicted that all lobes of the lung expanded and contracted uniformly, independent of body position. Increased gravity in the upright position increased the expansion of the upper lobes and decreased expansion of the lower lobes. Despite a slight increase in predicted deposition of ultrafine particles in the upper lobes with decreasing gravity, deposition of ultrafine particles was generally predicted to be unaffected by gravity. Increased gravity increased predicted deposition of fine and coarse particles in the tracheobronchial region, but that led to a reduction or even elimination of deposition in the alveolar region for coarse particles. The results from this study show that existing mathematical models of particle deposition at 1 G can be extended to different gravity environments by simply correcting for a gravity constant. Controlled studies in astronauts on future space missions are needed to validate these predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, Mario, E-mail: mgsantoss@gmail.com; Freitas, Raul, E-mail: raulfreitas@portugalmail.com; Crespi, Antonio L., E-mail: aluis.crespi@gmail.com
2011-10-15
This study assesses the potential of an integrated methodology for predicting local trends in invasive exotic plant species (invasive richness) using indirect, regional information on human disturbance. The distribution of invasive plants was assessed in North Portugal using herbarium collections and local environmental, geophysical and socio-economic characteristics. Invasive richness response to anthropogenic disturbance was predicted using a dynamic model based on a sequential modeling process (stochastic dynamic methodology, StDM). Derived scenarios showed that invasive richness trends were clearly associated with ongoing socio-economic change. Simulations including scenarios of growing urbanization showed an increase in invasive richness, while simulations in municipalities with decreasing populations showed stable or decreasing levels of invasive richness. The model simulations demonstrate the interest and feasibility of using this methodology in disturbance ecology. Highlights: socio-economic data indicate human-induced disturbances; socio-economic development increases disturbance in ecosystems; disturbance promotes opportunities for invasive plants; increased opportunities promote richness of invasive plants; an increase in the richness of invasive plants changes natural ecosystems.
Dey, Cody J; Richardson, Evan; McGeachy, David; Iverson, Samuel A; Gilchrist, Hugh G; Semeniuk, Christina A D
2017-05-01
Climate change can influence interspecific interactions by differentially affecting species-specific phenology. In seasonal ice environments, there is evidence that polar bear predation of Arctic bird eggs is increasing because of earlier sea ice breakup, which forces polar bears into nearshore terrestrial environments where Arctic birds are nesting. Because polar bears can consume a large number of nests before becoming satiated, and because they can swim between island colonies, they could have dramatic influences on seabird and sea duck reproductive success. However, it is unclear whether nest foraging can provide an energetic benefit to polar bear populations, especially given the capacity of bird populations to redistribute in response to increasing predation pressure. In this study, we develop a spatially explicit agent-based model of the predator-prey relationship between polar bears and common eiders, a common and culturally important bird species for northern peoples. Our model is composed of two types of agents (polar bear agents and common eider hen agents) whose movements and decision heuristics are based on species-specific bioenergetic and behavioral ecological principles, and are influenced by historical and extrapolated sea ice conditions. Our model reproduces empirical findings that polar bear predation of bird nests is increasing and predicts an accelerating relationship between advancing ice breakup dates and the number of nests depredated. Despite increases in nest predation, our model predicts that polar bear body condition during the ice-free period will continue to decline. Finally, our model predicts that common eider nests will become more dispersed and will move closer to the mainland in response to increasing predation, possibly increasing their exposure to land-based predators and influencing the livelihood of local people that collect eider eggs and down. These results show that predator-prey interactions can have nonlinear responses to changes in climate and provides important predictions of ecological change in Arctic ecosystems. © 2016 John Wiley & Sons Ltd.
Longer guts and higher food quality increase energy intake in migratory swans.
van Gils, Jan A; Beekman, Jan H; Coehoorn, Pieter; Corporaal, Els; Dekkers, Ten; Klaassen, Marcel; van Kraaij, Rik; de Leeuw, Rinze; de Vries, Peter P
2008-11-01
1. Within the broad field of optimal foraging, it is increasingly acknowledged that animals often face digestive constraints rather than constraints on rates of food collection. This therefore calls for a formalization of how animals could optimize food absorption rates. 2. Here we generate predictions from a simple graphical optimal digestion model for foragers that aim to maximize their (true) metabolizable food intake over total time (i.e. including nonforaging bouts) under a digestive constraint. 3. The model predicts that such foragers should maintain a constant food retention time, even if gut length or food quality changes. For phenotypically flexible foragers, which are able to change the size of their digestive machinery, this means that an increase in gut length should go hand in hand with an increase in gross intake rate. It also means that better quality food should be digested more efficiently. 4. These latter two predictions are tested in a large avian long-distance migrant, the Bewick's swan (Cygnus columbianus bewickii), feeding on grasslands in its Dutch wintering quarters. 5. Throughout winter, free-ranging Bewick's swans, growing a longer gut and experiencing improved food quality, increased their gross intake rate (i.e. bite rate) and showed a higher digestive efficiency. These responses were in accordance with the model and suggest maintenance of a constant food retention time. 6. These changes doubled the birds' absorption rate. Had only food quality changed (and not gut length), then absorption rate would have increased by only 67%; absorption rate would have increased by only 17% had only gut length changed (and not food quality). 7. The prediction that gross intake rate should go up with gut length parallels the mechanism included in some proximate models of foraging that feeding motivation scales inversely to gut fullness. We plea for a tighter integration between ultimate and proximate foraging models.
Increased sediment oxygen flux in lakes and reservoirs: The impact of hypolimnetic oxygenation
NASA Astrophysics Data System (ADS)
Bierlein, Kevin A.; Rezvani, Maryam; Socolofsky, Scott A.; Bryant, Lee D.; Wüest, Alfred; Little, John C.
2017-06-01
Hypolimnetic oxygenation is an increasingly common lake management strategy for mitigating hypoxia/anoxia and associated deleterious effects on water quality. A common effect of oxygenation is increased oxygen consumption in the hypolimnion and predicting the magnitude of this increase is the crux of effective oxygenation system design. Simultaneous measurements of sediment oxygen flux (JO2) and turbulence in the bottom boundary layer of two oxygenated lakes were used to investigate the impact of oxygenation on JO2. Oxygenation increased JO2 in both lakes by increasing the bulk oxygen concentration, which in turn steepens the diffusive gradient across the diffusive boundary layer. At high flow rates, the diffusive boundary layer thickness decreased as well. A transect along one of the lakes showed JO2 to be spatially quite variable, with near-field and far-field JO2 differing by a factor of 4. Using these in situ measurements, physical models of interfacial flux were compared to microprofile-derived JO2 to determine which models adequately predict JO2 in oxygenated lakes. Models based on friction velocity, turbulence dissipation rate, and the integral scale of turbulence agreed with microprofile-derived JO2 in both lakes. These models could potentially be used to predict oxygenation-induced oxygen flux and improve oxygenation system design methods for a broad range of reservoir systems.
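A back-of-envelope sketch of the mechanism described above: a Fickian flux across the diffusive boundary layer, so that a higher bulk oxygen concentration steepens the gradient and a thinner boundary layer raises the flux further. The diffusivity, concentrations, and boundary-layer thicknesses are illustrative assumptions, not the lakes' measured values.

```python
# Back-of-envelope Fickian sketch of sediment oxygen flux across the diffusive
# boundary layer (DBL): J = D * (C_bulk - C_interface) / delta_DBL.
# All numbers below are hypothetical illustrations, not measured lake values.
D_O2 = 1.6e-9            # molecular diffusivity of O2 in water, m^2/s (approx.)
C_interface = 0.0        # mg/L, oxygen assumed fully consumed at the sediment surface

def sediment_o2_flux(c_bulk_mg_L, dbl_thickness_mm):
    """Diffusive O2 flux in mg m^-2 d^-1 for a given bulk concentration and DBL thickness."""
    gradient = (c_bulk_mg_L - C_interface) * 1e3 / (dbl_thickness_mm * 1e-3)  # mg/m^3 per m
    return D_O2 * gradient * 86400.0  # convert from per second to per day

# Oxygenation raises C_bulk (steeper gradient); strong flow also thins the DBL.
print(sediment_o2_flux(c_bulk_mg_L=4.0, dbl_thickness_mm=1.0))   # baseline
print(sediment_o2_flux(c_bulk_mg_L=10.0, dbl_thickness_mm=0.6))  # oxygenated, high flow
```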
Leve, Leslie D; Kim, Hyoun K; Pears, Katherine C
2005-10-01
Childhood temperament and family environment have been shown to predict internalizing and externalizing behavior; however, less is known about how temperament and family environment interact to predict changes in problem behavior. We conducted latent growth curve modeling on a sample assessed at ages 5, 7, 10, 14, and 17 (N = 337). Externalizing behavior decreased over time for both sexes, and internalizing behavior increased over time for girls only. Two childhood variables (fear/shyness and maternal depression) predicted boys' and girls' age-17 internalizing behavior, harsh discipline uniquely predicted boys' age-17 internalizing behavior, and maternal depression and lower family income uniquely predicted increases in girls' internalizing behavior. For externalizing behavior, an array of temperament, family environment, and Temperament x Family Environment variables predicted age-17 behavior for both sexes. Sex differences were present in the prediction of externalizing slopes, with maternal depression predicting increases in boys' externalizing behavior only when impulsivity was low, and harsh discipline predicting increases in girls' externalizing behavior only when impulsivity was high or when fear/shyness was low.
Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire
2016-01-01
This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...
Edwards, Joel; Othman, Maazuza; Burn, Stewart; Crossin, Enda
2016-10-01
The collection of source-separated kerbside municipal food waste (SSFW) is being incentivised in Australia; however, such a collection is likely to increase the fuel and time a collection truck fleet requires. Therefore, waste managers need to determine whether the incentives outweigh the cost. With literature scarcely describing the magnitude of the increase, and with local parameters playing a crucial role in accurately modelling kerbside collection, this paper develops a new general mathematical model that predicts the energy and time requirements of a collection regime whilst incorporating the unique variables of different jurisdictions. The model, Municipal solid waste collect (MSW-Collect), is validated and shown to be more accurate at predicting fuel consumption and trucks required than other common collection models. When predicting changes incurred for five different SSFW collection scenarios, results show that SSFW scenarios require an increase in fuel ranging from 1.38% to 57.59%. There is also a need for additional trucks across most SSFW scenarios tested. All SSFW scenarios are ranked and analysed with regard to fuel consumption; sensitivity analysis is conducted to test key assumptions. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Genomic Prediction Accounting for Residual Heteroskedasticity
Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.
2015-01-01
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950
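A sketch of the predictive-ability metric used above, i.e. the correlation between observed phenotypes and five-fold cross-validated predictions. Ridge regression on simulated SNP genotypes stands in for the hierarchical Bayesian WGP models of the study, so the data and numbers are purely illustrative.

```python
# Sketch of the "predictive ability" metric described above: the correlation
# between observed phenotypes and predictions from five-fold cross-validation.
# Ridge regression on SNP genotypes stands in for the hierarchical Bayesian
# WGP models of the study; the data are simulated placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(2)
n_animals, n_snps = 400, 2000
genotypes = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 coding
true_effects = rng.normal(0, 0.05, n_snps)
phenotype = genotypes @ true_effects + rng.normal(0, 1.0, n_animals)

cv = KFold(n_splits=5, shuffle=True, random_state=2)
pred = cross_val_predict(Ridge(alpha=500.0), genotypes, phenotype, cv=cv)

predictive_ability = np.corrcoef(phenotype, pred)[0, 1]
print(f"predictive ability (obs-vs-pred correlation): {predictive_ability:.3f}")
```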
In general, the accuracy of a predicted toxicity value increases with increasing similarity between the query chemical and the chemicals used to develop a QSAR model. A toxicity estimation methodology employing this finding has been developed. A hierarchical based clustering t...
Light aircraft sound transmission studies - Noise reduction model
NASA Technical Reports Server (NTRS)
Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.
1987-01-01
Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that the cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also presented. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room equation model predictions were considered good enough to be used for preliminary acoustical design studies.
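A hedged sketch of the reverberant-field power balance that a room-equation model of this kind relies on: at steady state the transmitted sound power equals the power absorbed inside the cabin. The reverberant-only form and all numbers are assumptions chosen for illustration, not the paper's exact formulation or its measured data.

```python
# Hedged sketch of a reverberant-field power balance for the cabin interior:
# at steady state the transmitted sound power equals the power absorbed by the
# cabin surfaces, giving <p^2> ~ 4 * rho * c * W_trans / (S * alpha).
# The reverberant-only form and the example numbers are assumptions, not the
# paper's exact room-equation formulation or measured data.
import math

RHO_C = 415.0  # characteristic impedance of air, rayl (approx.)
P_REF = 2e-5   # reference pressure, Pa

def cabin_spl(w_transmitted_watt, surface_area_m2, avg_absorption_coeff):
    """Interior sound pressure level (dB) from transmitted power and absorption."""
    absorption_area = surface_area_m2 * avg_absorption_coeff      # Sabine absorption, m^2
    p_squared = 4.0 * RHO_C * w_transmitted_watt / absorption_area
    return 10.0 * math.log10(p_squared / P_REF**2)

# Doubling the interior absorption lowers the predicted cabin level by ~3 dB.
print(cabin_spl(1e-4, surface_area_m2=12.0, avg_absorption_coeff=0.15))
print(cabin_spl(1e-4, surface_area_m2=12.0, avg_absorption_coeff=0.30))
```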
Mastrangelo, Giuseppe; Carta, Angela; Arici, Cecilia; Pavanello, Sofia; Porru, Stefano
2017-01-01
No etiological prediction model incorporating biomarkers is available to predict bladder cancer risk associated with occupational exposure to aromatic amines. The cases were 199 bladder cancer patients. Clinical, laboratory and genetic data were predictors in logistic regression models (full and short) in which the dependent variable was 1 for the 15 patients with aromatic amine-related bladder cancer and 0 otherwise. The receiver operating characteristics approach was adopted; the area under the curve was used to evaluate the discriminatory ability of the models. The area under the curve was 0.93 for the full model (including age, smoking and coffee habits, DNA adducts, 12 genotypes) and 0.86 for the short model (including smoking, DNA adducts, 3 genotypes). Using the "best cut-off" of predicted probability of a positive outcome, the percentage of cases correctly classified was 92% (full model) against 75% (short model). Cancers classified as "positive outcome" are those to be referred for evaluation by an occupational physician for etiological diagnosis; these patients were 28 (full model) or 60 (short model). Using 3 genotypes instead of 12 can double the number of patients with suspected aromatic amine-related cancer, thus increasing the costs of etiologic appraisal. Integrating clinical, laboratory and genetic factors, we developed the first etiologic prediction model for aromatic amine-related bladder cancer. Discriminatory ability was excellent, particularly for the full model, allowing individualized predictions. Validation of our model in external populations is essential for practical use in the clinical setting.
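A sketch of the receiver-operating-characteristic workflow described above: fit a logistic model, compute the AUC, pick a best cut-off on predicted probability, and flag cases for referral. The predictors and data are simulated placeholders, and Youden's J is assumed as the cut-off criterion since the abstract does not state which one was used.

```python
# Sketch of the ROC workflow described above: fit a logistic model, measure the
# area under the ROC curve, choose a "best cut-off" on predicted probability
# (here Youden's J, an assumption -- the paper does not state its criterion),
# and classify cases. Predictors and data are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
n = 199
X = np.column_stack([
    rng.normal(0, 1, n),          # e.g., standardized DNA adduct level
    rng.integers(0, 2, n),        # e.g., smoking (0/1)
    rng.integers(0, 2, n),        # e.g., a risk genotype (0/1)
])
y = (rng.random(n) < 1 / (1 + np.exp(-(-3.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1])))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]

auc = roc_auc_score(y, prob)
fpr, tpr, thresholds = roc_curve(y, prob)
best_cutoff = thresholds[np.argmax(tpr - fpr)]   # Youden's J statistic
flagged = prob >= best_cutoff                    # cases to refer for etiologic work-up

print(f"AUC = {auc:.2f}, best cut-off = {best_cutoff:.3f}, flagged = {flagged.sum()}")
```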
A PK-PD model-based assessment of sugammadex effects on coagulation parameters.
Bosch, Rolien; van Lierop, Marie-José; de Kam, Pieter-Jan; Kruithof, Annelieke C; Burggraaf, Jacobus; de Greef, Rik; Visser, Sandra A G; Johnson-Levonas, Amy O; Kleijn, Huub-Jan
2016-03-10
Exposure-response analyses of sugammadex on activated partial thromboplastin time (APTT) and prothrombin time international normalized ratio (PT(INR)) were performed using data from two clinical trials in which subjects were co-treated with anti-coagulants, providing a framework to predict these responses in surgical patients on thromboprophylactic doses of low molecular weight or unfractionated heparin. Sugammadex-mediated increases in APTT and PT(INR) were described with a direct effect model, and this relationship was similar in the presence or absence of anti-coagulant therapy in either healthy volunteers or surgical patients. In surgical patients on thromboprophylactic therapy, model-based predictions showed 13.1% and 22.3% increases in APTT and PT(INR), respectively, within 30 min after administration of 16 mg/kg sugammadex. These increases remain below thresholds seen following treatment with standard anti-coagulant therapy and were predicted to be short-lived, paralleling the rapid decline in sugammadex plasma concentrations. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
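A minimal sketch of a direct-effect exposure-response model of the type mentioned above, in which the coagulation-parameter increase tracks the instantaneous plasma concentration and therefore declines in parallel with drug elimination. The one-compartment kinetics and every parameter value are hypothetical, chosen only so that the peak effect lands near the reported ~13% APTT increase.

```python
# Minimal sketch of a direct-effect exposure-response model: the percent
# increase in a coagulation parameter (e.g., APTT) is taken to be a linear
# function of the instantaneous plasma concentration, so it declines in
# parallel with drug elimination. The one-compartment kinetics and every
# parameter value below are hypothetical, not the published model estimates.
import numpy as np

dose_mg_per_kg = 16.0
volume_L_per_kg = 0.2        # hypothetical apparent volume of distribution
half_life_h = 2.0            # hypothetical elimination half-life
slope_pct_per_ug_mL = 0.17   # hypothetical direct-effect slope

k_el = np.log(2) / half_life_h
c0 = dose_mg_per_kg / volume_L_per_kg        # ug/mL (mg/L), instantaneous IV bolus

t = np.linspace(0, 8, 9)                     # hours after dosing
conc = c0 * np.exp(-k_el * t)                # one-compartment decline
aptt_increase_pct = slope_pct_per_ug_mL * conc

for ti, ci, ei in zip(t, conc, aptt_increase_pct):
    print(f"t = {ti:3.0f} h  C = {ci:6.1f} ug/mL  APTT increase = {ei:4.1f} %")
```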
Laceulle, Odilia M; Ormel, Johan; Vollebergh, Wilma A M; van Aken, Marcel A G; Nederhof, Esther
2014-03-01
This study aimed to test the vulnerability model of the relationship between temperament and mental disorders using a large sample of adolescents from the TRacking Adolescents Individual Lives' Survey (TRAILS). The vulnerability model argues that particular temperaments can place individuals at risk for the development of mental health problems. Importantly, the model may imply that not only baseline temperament predicts mental health problems prospectively, but additionally, that changes in temperament predict corresponding changes in risk for mental health problems. Data were used from 1195 TRAILS participants. Adolescent temperament was assessed both at age 11 and at age 16. Onset of mental disorders between age 16 and 19 was assessed at age 19, by means of the World Health Organization Composite International Diagnostic Interview (WHO CIDI). Results showed that temperament at age 11 predicted future mental disorders, thereby providing support for the vulnerability model. Moreover, temperament change predicted future mental disorders above and beyond the effect of basal temperament. For example, an increase in frustration increased the risk of mental disorders proportionally. This study confirms, and extends, the vulnerability model. Consequences of both temperament and temperament change were general (e.g., changes in frustration predicted both internalizing and externalizing disorders) as well as dimension specific (e.g., changes in fear predicted internalizing but not externalizing disorders). These findings confirm previous studies, which showed that mental disorders have both unique and shared underlying temperamental risk factors. © 2013 The Authors. Journal of Child Psychology and Psychiatry © 2013 Association for Child and Adolescent Mental Health.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Zheng, Jianqiu; Xu, Xiaofeng
Soil organic carbon turnover to CO2 and CH4 is sensitive to soil redox potential and pH conditions. However, land surface models do not consider redox and pH in the aqueous phase explicitly, thereby limiting their use for making predictions in anoxic environments. Using recent data from incubations of Arctic soils, we extend the Community Land Model with coupled carbon and nitrogen (CLM-CN) decomposition cascade to include simple organic substrate turnover, fermentation, Fe(III) reduction, and methanogenesis reactions, and assess the efficacy of various temperature and pH response functions. Incorporating the Windermere Humic Aqueous Model (WHAM) enables us to approximately describe the observed pH evolution without additional parameterization. Though Fe(III) reduction is normally assumed to compete with methanogenesis, the model predicts that Fe(III) reduction raises the pH from acidic to neutral, thereby reducing environmental stress to methanogens and accelerating methane production when substrates are not limiting. Furthermore, the equilibrium speciation predicts a substantial increase in CO2 solubility as pH increases, and taking into account CO2 adsorption to surface sites of metal oxides further decreases the predicted headspace gas-phase fraction at low pH. Without adequate representation of these speciation reactions, as well as the impacts of pH, temperature, and pressure, the CO2 production from closed microcosms can be substantially underestimated based on headspace CO2 measurements only. Our results demonstrate the efficacy of geochemical models for simulating soil biogeochemistry and provide predictive understanding and mechanistic representations that can be incorporated into land surface models to improve climate predictions.
Tsukiji, Jun; Cho, Soo Jung; Echevarria, Ghislaine C.; Kwon, Sophia; Joseph, Phillip; Schenck, Edward J.; Naveed, Bushra; Prezant, David J.; Rom, William N.; Schmidt, Ann Marie; Weiden, Michael D.; Nolan, Anna
2014-01-01
Rationale: Metabolic syndrome, inflammatory and vascular injury markers measured in serum after WTC exposures predict abnormal FEV1. We hypothesized that elevated LPA levels predict FEV1
Saliba, Christopher M; Brandon, Scott C E; Deluzio, Kevin J
2017-05-24
Musculoskeletal models are increasingly used to estimate medial and lateral knee contact forces, which are difficult to measure in vivo. The sensitivity of contact force predictions to modeling parameters is important to the interpretation and implication of results generated by the model. The purpose of this study was to quantify the sensitivity of knee contact force predictions to simultaneous errors in frontal plane knee alignment and contact locations under different dynamic conditions. We scaled a generic musculoskeletal model for N=23 subjects' stature and radiographic knee alignment, then perturbed frontal plane alignment and mediolateral contact locations within experimentally-possible ranges of 10° to -10° and 10 to -10mm, respectively. The sensitivity of first peak, second peak, and mean medial and lateral knee contact forces to knee adduction angle and contact locations was modeled using linear regression. Medial loads increased, and lateral loads decreased, by between 3% and 6% bodyweight for each degree of varus perturbation. Shifting the medial contact point medially increased medial loads and decreased lateral loads by between 1% and 4% bodyweight per millimeter. This study demonstrates that realistic measurement errors of 5mm (contact distance) or 5° (frontal plane alignment) could result in a combined 50% BW error in subject specific contact force estimates. We also show that model sensitivity varies between subjects as a result of differences in gait dynamics. These results demonstrate that predicted knee joint contact forces should be considered as a range of possible values determined by model uncertainty. Copyright © 2017 Elsevier Ltd. All rights reserved.
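A sketch of the sensitivity-by-regression step described above: tabulate the perturbed alignment and contact-location inputs against predicted medial contact force and read the regression slopes as sensitivities in %BW per degree and per millimetre. The force values below are synthetic placeholders rather than musculoskeletal-model output.

```python
# Sketch of the sensitivity-analysis step described above: regress model-predicted
# medial contact force (%BW) on the frontal-plane alignment and contact-location
# perturbations and read the slopes as sensitivities. The "forces" below are
# synthetic placeholders, not output of the musculoskeletal model.
import numpy as np

rng = np.random.default_rng(4)
adduction_deg = rng.uniform(-10, 10, 200)      # frontal-plane alignment perturbation
contact_shift_mm = rng.uniform(-10, 10, 200)   # mediolateral contact-point perturbation

# Placeholder response consistent in sign with the reported trends
# (~3-6 %BW per degree varus, ~1-4 %BW per mm medial shift).
medial_force_bw = 200 + 4.5 * adduction_deg + 2.0 * contact_shift_mm + rng.normal(0, 5, 200)

X = np.column_stack([np.ones_like(adduction_deg), adduction_deg, contact_shift_mm])
coef, *_ = np.linalg.lstsq(X, medial_force_bw, rcond=None)

print(f"sensitivity to alignment:        {coef[1]:.2f} %BW per degree")
print(f"sensitivity to contact location: {coef[2]:.2f} %BW per mm")
```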
NASA Astrophysics Data System (ADS)
Quackenbush, A.
2015-12-01
Urban land cover and associated impervious surface area is expected to increase by as much as 50% over the next few decades across substantial portions of the United States. In combination with urban expansion, increases in temperature and changes in precipitation are expected to impact ecosystems through changes in productivity, disturbance and hydrological properties. In this study, we use the NASA Terrestrial Observation and Prediction System Biogeochemical Cycle (TOPS-BGC) model to explore the combined impacts of urbanization and climate change on hydrologic dynamics (snowmelt, runoff, and evapotranspiration) and vegetation carbon uptake (gross productivity). The model is driven using land cover predictions from the Spatially Explicit Regional Growth Model (SERGoM) to quantify projected changes in impervious surface area, and climate projections from the 30 arc-second NASA Earth Exchange Downscaled Climate Projection (NEX-DCP30) dataset derived from the CMIP5 climate scenarios. We present the modeling approach and an analysis of the ecosystem impacts projected to occur in the US, with an emphasis on protected areas in the Great Northern and Appalachian Landscape Conservation Cooperatives (LCC). Under the ensemble average of the CMIP5 models and land cover change scenarios for both representative concentration pathways (RCPs) 4.5 and 8.5, both LCCs are predicted to experience increases in maximum and minimum temperatures as well as annual average precipitation. In the Great Northern LCC, this is projected to lead to increased annual runoff, especially under RCP 8.5. Earlier melt of the winter snow pack and increased evapotranspiration, however, reduces summer streamflow and soil water content, leading to a net reduction in vegetation productivity across much of the Great Northern LCC, with stronger trends occurring under RCP 8.5. Increased runoff is also projected to occur in the Appalachian LCC under both RCP 4.5 and 8.5. However, under RCP 4.5, the model predicts that the warmer wetter conditions will lead to increases in vegetation productivity across much of the Appalachian LCC, while under RCP 8.5, the effects of increased precipitation are not enough to keep up with increases in evapotranspiration, leading to projected reductions in vegetation productivity for this LCC by the end of this century.
Predictive risk models for proximal aortic surgery
Díaz, Rocío; Pascual, Isaac; Álvarez, Rubén; Alperi, Alberto; Rozado, Jose; Morales, Carlos; Silva, Jacobo; Morís, César
2017-01-01
Predictive risk models help improve decision making, the information given to our patients, and quality control by comparing results between surgeons and between institutions. The use of these models promotes competitiveness and leads to increasingly better results. All these virtues are of utmost importance when the surgical operation entails high risk. Although proximal aortic surgery is less frequent than other cardiac surgery operations, this procedure itself is more challenging and technically demanding than other common cardiac surgery techniques. The aim of this study is to review the current status of predictive risk models for patients who undergo proximal aortic surgery, that is, aortic root replacement, supracoronary ascending aortic replacement or aortic arch surgery. PMID:28616348
MJO prediction using the sub-seasonal to seasonal forecast model of Beijing Climate Center
NASA Astrophysics Data System (ADS)
Liu, Xiangwen; Wu, Tongwen; Yang, Song; Li, Tim; Jie, Weihua; Zhang, Li; Wang, Zaizhi; Liang, Xiaoyun; Li, Qiaoping; Cheng, Yanjie; Ren, Hongli; Fang, Yongjie; Nie, Suping
2017-05-01
By conducting several sets of hindcast experiments using the Beijing Climate Center Climate System Model, which participates in the Sub-seasonal to Seasonal (S2S) Prediction Project, we systematically evaluate the model's capability in forecasting MJO and its main deficiencies. In the original S2S hindcast set, MJO forecast skill is about 16 days. Such a skill shows significant seasonal-to-interannual variations. It is found that the model-dependent MJO forecast skill is more correlated with the Indian Ocean Dipole (IOD) than with the El Niño-Southern Oscillation. The highest skill is achieved in autumn when the IOD attains its maturity. Extended skill is found when the IOD is in its positive phase. MJO forecast skill's close association with the IOD is partially due to the quickly strengthening relationship between MJO amplitude and IOD intensity as lead time increases to about 15 days, beyond which a rapid weakening of the relationship is shown. This relationship transition may cause the forecast skill to decrease quickly with lead time, and is related to the unrealistic amplitude and phase evolutions of predicted MJO over or near the equatorial Indian Ocean during anomalous IOD phases, suggesting a possible influence of exaggerated IOD variability in the model. The results imply that the upper limit of intraseasonal predictability is modulated by large-scale external forcing background state in the tropical Indian Ocean. Two additional sets of hindcast experiments with improved atmosphere and ocean initial conditions (referred to as S2S_IEXP1 and S2S_IEXP2, respectively) are carried out, and the results show that the overall MJO forecast skill is increased to 21-22 days. It is found that the optimization of initial sea surface temperature condition largely accounts for the increase of the overall MJO forecast skill, even though the improved initial atmosphere conditions also play a role. For the DYNAMO/CINDY field campaign period, the forecast skill increases to 27 days in S2S_IEXP2. Nevertheless, even with improved initialization, it is still difficult for the model to predict MJO propagation across the western hemisphere-western Indian Ocean area and across the eastern Indian Ocean-Maritime Continent area. Especially, MJO prediction is apparently limited by various interrelated deficiencies (e.g., overestimated IOD, shorter-than-observed MJO life cycle, Maritime Continent prediction barrier), due possibly to the model bias in the background moisture field over the eastern Indian Ocean and Maritime Continent. Thus, more efforts are needed to correct the deficiency in model physics in this region, in order to overcome the well-known Maritime Continent predictability barrier.
Mammographic density, breast cancer risk and risk prediction
Vachon, Celine M; van Gils, Carla H; Sellers, Thomas A; Ghosh, Karthik; Pruthi, Sandhya; Brandt, Kathleen R; Pankratz, V Shane
2007-01-01
In this review, we examine the evidence for mammographic density as an independent risk factor for breast cancer, describe the risk prediction models that have incorporated density, and discuss the current and future implications of using mammographic density in clinical practice. Mammographic density is a consistent and strong risk factor for breast cancer in several populations and across age at mammogram. Recently, this risk factor has been added to existing breast cancer risk prediction models, increasing the discriminatory accuracy with its inclusion, albeit slightly. With validation, these models may replace the existing Gail model for clinical risk assessment. However, absolute risk estimates resulting from these improved models are still limited in their ability to characterize an individual's probability of developing cancer. Promising new measures of mammographic density, including volumetric density, which can be standardized using full-field digital mammography, will likely result in a stronger risk factor and improve accuracy of risk prediction models. PMID:18190724
Dynamical Characteristics Common to Neuronal Competition Models
Shpiro, Asya; Curtu, Rodica; Rinzel, John; Rubin, Nava
2009-01-01
Models implementing neuronal competition by reciprocally inhibitory populations are widely used to characterize bistable phenomena such as binocular rivalry. We find common dynamical behavior in several models of this general type, which differ in their architecture in the form of their gain functions, and in how they implement the slow process that underlies alternating dominance. We focus on examining the effect of the input strength on the rate (and existence) of oscillations. In spite of their differences, all considered models possess similar qualitative features, some of which we report here for the first time. Experimentally, dominance durations have been reported to decrease monotonically with increasing stimulus strength (such as Levelt's “Proposition IV”). The models predict this behavior; however, they also predict that at a lower range of input strength dominance durations increase with increasing stimulus strength. The nonmonotonic dependency of duration on stimulus strength is common to both deterministic and stochastic models. We conclude that additional experimental tests of Levelt's Proposition IV are needed to reconcile models and perception. PMID:17065254
The subthalamic nucleus during decision-making with multiple alternatives.
Keuken, Max C; Van Maanen, Leendert; Bogacz, Rafal; Schäfer, Andreas; Neumann, Jane; Turner, Robert; Forstmann, Birte U
2015-10-01
Several prominent neurocomputational models predict that an increase of choice alternatives is modulated by increased activity in the subthalamic nucleus (STN). In turn, increased STN activity allows prolonged accumulation of information. At the same time, areas in the medial frontal cortex such as the anterior cingulate cortex (ACC) and the pre-SMA are hypothesized to influence the information processing in the STN. This study set out to test concrete predictions of STN activity in multiple-alternative decision-making using a multimodal combination of 7 Tesla structural and functional Magnetic Resonance Imaging, and ancestral graph (AG) modeling. The results are in line with the predictions in that increased STN activity was found with an increasing amount of choice alternatives. In addition, our study shows that activity in the ACC is correlated with activity in the STN without directly modulating it. This result sheds new light on the information processing streams between medial frontal cortex and the basal ganglia. © 2015 Wiley Periodicals, Inc.
Artificial Intelligence Systems as Prognostic and Predictive Tools in Ovarian Cancer.
Enshaei, A; Robson, C N; Edmondson, R J
2015-11-01
The ability to provide accurate prognostic and predictive information to patients is becoming increasingly important as clinicians enter an era of personalized medicine. For a disease as heterogeneous as epithelial ovarian cancer, conventional algorithms become too complex for routine clinical use. This study therefore investigated the potential for an artificial intelligence model to provide this information and compared it with conventional statistical approaches. The authors created a database comprising 668 cases of epithelial ovarian cancer during a 10-year period and collected data routinely available in a clinical environment. They also collected survival data for all the patients, then constructed an artificial intelligence model capable of comparing a variety of algorithms and classifiers alongside conventional statistical approaches such as logistic regression. The model was used to predict overall survival and demonstrated that an artificial neural network (ANN) algorithm was capable of predicting survival with high accuracy (93 %) and an area under the curve (AUC) of 0.74 and that this outperformed logistic regression. The model also was used to predict the outcome of surgery and again showed that ANN could predict outcome (complete/optimal cytoreduction vs. suboptimal cytoreduction) with 77 % accuracy and an AUC of 0.73. These data are encouraging and demonstrate that artificial intelligence systems may have a role in providing prognostic and predictive data for patients. The performance of these systems likely will improve with increasing data set size, and this needs further investigation.
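A sketch of the model comparison reported above, i.e. cross-validated AUC of a small neural network versus logistic regression on a binary survival endpoint. The features, labels, and network size are placeholders, not the study's 668-case database.

```python
# Sketch of the model comparison described above: cross-validated AUC of a small
# neural network versus logistic regression for a binary survival endpoint.
# Features, labels, and the network architecture are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 668                                   # cohort size reported in the abstract
X = rng.normal(size=(n, 10))              # placeholder clinical features (stage, grade, age, ...)
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]**2 - 1)))).astype(int)

ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=5))
logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

for name, clf in [("ANN", ann), ("logistic regression", logreg)]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.2f}")
```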
Modeling forest disturbance and recovery in secondary subtropical dry forests of Puerto Rico
NASA Astrophysics Data System (ADS)
Holm, J. A.; Shugart, H. H., Jr.; Van Bloem, S. J.
2015-12-01
Because of human pressures, the need to understand and predict the long-term dynamics of subtropical dry forests is urgent. Through modifications to the ZELIG vegetation demographic model, including the development of species- and site-specific parameters and internal modifications, the capability to predict forest change within the Guanica State Forest in Puerto Rico can now be accomplished. One objective was to test the capability of this new model (i.e. ZELIG-TROP) to predict successional patterns of secondary forests across a gradient of abandoned fields currently being reclaimed as forests. Model simulations found that abandoned fields that are on degraded lands have a delayed response to fully recover and reach a mature forest status during the simulated time period; 200 years. The forest recovery trends matched predictions published in other studies, such that attributes involving early resource acquisition (i.e. canopy height, canopy coverage, density) were the fastest to recover, but attributes used for structural development (i.e. biomass, basal area) were relatively slow in recovery. Biomass and basal area, two attributes that tend to increase during later successional stages, are significantly lower during the first 80-100 years of recovery compared to a mature forest, suggesting that the time scale of resilience in subtropical dry forests needs to be partially redefined. A second objective was to investigate the long and short-term effects of increasing hurricane disturbances on vegetation structure and dynamics, due to hurricanes playing an important role in maintaining dry forest structure in Puerto Rico. Hurricane disturbance simulations within ZELIG-TROP predicted that increasing hurricane intensity (i.e. up to 100% increase) did not lead to a large shift in long-term AGB or NPP. However, increased hurricane frequency did lead to a 5-40% decrease in AGB, and 32-50% increase in NPP, depending on the treatment. In addition, the modeling approach used here was able to track changes in litterfall, coarse woody debris, and other forest carbon components under various hurricane regimes, a critical step for understanding the future state of subtropical dry forests.
NASA Astrophysics Data System (ADS)
Cao, Qing; Nastac, Laurentiu; Pitts-Baggett, April; Yu, Qiulin
2018-03-01
A quick modeling analysis approach for predicting the slag-steel reaction and desulfurization kinetics in argon gas-stirred ladles has been developed in this study. The model consists of two uncoupled components: (i) a computational fluid dynamics (CFD) model for predicting the fluid flow and the characteristics of slag-steel interface, and (ii) a multicomponent reaction kinetics model for calculating the desulfurization evolution. The steel-slag interfacial area and mass transfer coefficients predicted by the CFD simulation are used as the processing data for the reaction model. Since the desulfurization predictions are uncoupled from the CFD simulation, the computational time of this uncoupled predictive approach is decreased by at least 100 times for each case study when compared with the CFD-reaction kinetics fully coupled model. The uncoupled modeling approach was validated by comparing the evolution of steel and slag compositions with the experimentally measured data during ladle metallurgical furnace (LMF) processing at Nucor Steel Tuscaloosa, Inc. Then, the validated approach was applied to investigate the effects of the initial steel and slag compositions, as well as different types of additions during the refining process on the desulfurization efficiency. The results revealed that the sulfur distribution ratio and the desulfurization reaction can be promoted by making Al and CaO additions during the refining process. It was also shown that by increasing the initial Al content in liquid steel, both Al oxidation and desulfurization rates rapidly increase. In addition, it was found that the variation of the initial Si content in steel has no significant influence on the desulfurization rate. Lastly, if the initial CaO content in slag is increased or the initial Al2O3 content is decreased in the fluid-slag compositional range, the desulfurization rate can be improved significantly during the LMF process.
Srinivasa Rao, Mathukumalli; Swathi, Pettem; Rama Rao, Chitiprolu Anantha; Rao, K. V.; Raju, B. M. K.; Srinivas, Karlapudi; Manimanjari, Dammu; Maheswari, Mandapaka
2015-01-01
The present study features the estimation of the number of generations of tobacco caterpillar, Spodoptera litura Fab., on peanut crop at six locations in India using MarkSim, which provides General Circulation Model (GCM) projections of daily maximum (T.max) and minimum (T.min) air temperatures from six models, viz. BCCR-BCM2.0, CNRM-CM3, CSIRO-Mk3.5, ECHams5, INCM-CM3.0 and MIROC3.2, along with an ensemble of the six, for three emission scenarios (A2, A1B and B1). These data were used to predict future pest scenarios following the growing degree days approach in four different climate periods, viz. Baseline (1975), Near Future (NF, 2020), Distant Future (DF, 2050) and Very Distant Future (VDF, 2080). It is predicted that more generations would occur during the three future climate periods, with significant variation among scenarios and models. Among the seven models, 1–2 additional generations were predicted during DF and VDF due to higher future temperatures in the CNRM-CM3, ECHams5 and CSIRO-Mk3.5 models. The temperature projections of these models indicated that the generation time would decrease by 18–22% over baseline. Analysis of variance (ANOVA) was used to partition the variation in the predicted number of generations and generation time of S. litura on peanut during the crop season. Geographical location explained 34% of the total variation in the number of generations, followed by time period (26%), model (1.74%) and scenario (0.74%). The remaining 14% of the variation was explained by interactions. The increased number of generations and the reduction in generation time across the six peanut-growing locations of India suggest that the incidence of S. litura may increase due to the projected increase in temperatures in future climate change periods. PMID:25671564
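A minimal sketch of the growing-degree-day bookkeeping behind the generation counts above. The base temperature and degree-days per generation are hypothetical placeholders, since the study's thermal constants for S. litura are not given in the abstract.

```python
# Minimal sketch of the growing-degree-day (GDD) approach for estimating the
# number of insect generations per crop season from daily Tmax/Tmin. The base
# temperature and degree-days required per generation are hypothetical
# placeholders; the study's own thermal constants are not given in the abstract.
import numpy as np

BASE_TEMP_C = 10.0        # hypothetical lower development threshold
DD_PER_GENERATION = 500.0 # hypothetical degree-days needed for one generation

def generations(tmax, tmin):
    """Number of generations in a season from daily max/min temperatures."""
    daily_mean = (np.asarray(tmax) + np.asarray(tmin)) / 2.0
    daily_gdd = np.clip(daily_mean - BASE_TEMP_C, 0.0, None)   # simple averaging method
    return daily_gdd.sum() / DD_PER_GENERATION

rng = np.random.default_rng(6)
days = 120                                         # approximate peanut crop season
tmin = rng.normal(22, 2, days)
tmax = tmin + rng.normal(10, 2, days)

baseline = generations(tmax, tmin)
warmer = generations(tmax + 2.0, tmin + 2.0)       # e.g., a projected +2 degC scenario
print(f"baseline: {baseline:.1f} generations, +2 degC: {warmer:.1f} generations")
```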
Yamazaki, Shinji; Johnson, Theodore R; Smith, Bill J
2015-10-01
An orally available multiple tyrosine kinase inhibitor, crizotinib (Xalkori), is a CYP3A substrate, moderate time-dependent inhibitor, and weak inducer. The main objectives of the present study were to: 1) develop and refine a physiologically based pharmacokinetic (PBPK) model of crizotinib on the basis of clinical single- and multiple-dose results, 2) verify the crizotinib PBPK model from crizotinib single-dose drug-drug interaction (DDI) results with multiple-dose coadministration of ketoconazole or rifampin, and 3) apply the crizotinib PBPK model to predict crizotinib multiple-dose DDI outcomes. We also focused on gaining insights into the underlying mechanisms mediating crizotinib DDIs using a dynamic PBPK model, the Simcyp population-based simulator. First, PBPK model-predicted crizotinib exposures adequately matched clinically observed results in the single- and multiple-dose studies. Second, the model-predicted crizotinib exposures sufficiently matched clinically observed results in the crizotinib single-dose DDI studies with ketoconazole or rifampin, resulting in the reasonably predicted fold-increases in crizotinib exposures. Finally, the predicted fold-increases in crizotinib exposures in the multiple-dose DDI studies were roughly comparable to those in the single-dose DDI studies, suggesting that the effects of crizotinib CYP3A time-dependent inhibition (net inhibition) on the multiple-dose DDI outcomes would be negligible. Therefore, crizotinib dose-adjustment in the multiple-dose DDI studies could be made on the basis of currently available single-dose results. Overall, we believe that the crizotinib PBPK model developed, refined, and verified in the present study would adequately predict crizotinib oral exposures in other clinical studies, such as DDIs with weak/moderate CYP3A inhibitors/inducers and drug-disease interactions in patients with hepatic or renal impairment. Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.
Modeling and Testing of the Viscoelastic Properties of a Graphite Nanoplatelet/Epoxy Composite
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Gates, Thomas S.
2005-01-01
In order to facilitate the interpretation of experimental data, a micromechanical modeling procedure is developed to predict the viscoelastic properties of a graphite nanoplatelet/epoxy composite as a function of volume fraction and nanoplatelet diameter. The predicted storage and loss moduli for the composite are compared to measured values from the same material using three test methods; Dynamical Mechanical Analysis, nanoindentation, and quasi-static tensile tests. In most cases, the model and experiments indicate that for increasing volume fractions of nanoplatelets, both the storage and loss moduli increase. Also, the results indicate that for nanoplatelet sizes above 15 microns, nanoindentation is capable of measuring properties of individual constituents of a composite system. Comparison of the predicted values to the measured data helps illustrate the relative similarities and differences between the bulk and local measurement techniques.
Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.
2013-01-01
Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944
Predictive Criteria to Study the Pathogenesis of Malaria-Associated ALI/ARDS in Mice
Ortolan, Luana S.; Sercundes, Michelle K.; Debone, Daniela; Hagen, Stefano C. F.; D' Império Lima, Maria Regina; Alvarez, José M.; Marinho, Claudio R. F.; Epiphanio, Sabrina
2014-01-01
Malaria-associated acute lung injury/acute respiratory distress syndrome (ALI/ARDS) often results in morbidity and mortality. Murine models to study malaria-associated ALI/ARDS have been described; we still lack a method of distinguishing which mice will develop ALI/ARDS before death. This work aimed to characterize malaria-associated ALI/ARDS in a murine model and to demonstrate the first method to predict whether mice are suffering from ALI/ARDS before death. DBA/2 mice infected with Plasmodium berghei ANKA developing ALI/ARDS or hyperparasitemia (HP) were compared using histopathology, PaO2 measurement, pulmonary X-ray, breathing capacity, lung permeability, and serum vascular endothelial growth factor (VEGF) levels according to either the day of death or the suggested predictive criteria. We proposed a model to predict malaria-associated ALI/ARDS using breathing patterns (enhanced pause and frequency respiration) and parasitemia as predictive criteria from mice whose cause of death was known to retrospectively diagnose the sacrificed mice as likely to die of ALI/ARDS as early as 7 days after infection. Using this method, we showed increased VEGF levels and increased lung permeability in mice predicted to die of ALI/ARDS. This proposed method for accurately identifying mice suffering from ALI/ARDS before death will enable the use of this model to study the pathogenesis of this disease. PMID:25276057
Empirical models for the prediction of ground motion duration for intraplate earthquakes
NASA Astrophysics Data System (ADS)
Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.
2017-07-01
Many empirical relationships for the earthquake ground motion duration were developed for interplate region, whereas only a very limited number of empirical relationships exist for intraplate region. Also, the existing relationships were developed based mostly on the scaled recorded interplate earthquakes to represent intraplate earthquakes. To the author's knowledge, none of the existing relationships for the intraplate regions were developed using only the data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships of earthquake ground motion duration (i.e., significant and bracketed) with earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites) using the data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion data consists of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. The non-linear mixed-effect (NLMEs) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to be decreased with an increase in the hypocentral distance and increased with an increase in the magnitude of the earthquake. The significant duration was found to be increased with the increase in the magnitude and hypocentral distance of the earthquake. Both significant and bracketed durations were predicted higher in rock sites than in soil sites. The predictive relationships developed herein are compared with the existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for a significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the existing relationships.
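A hedged sketch of fitting a simple duration relationship of the form ln(D) = c1 + c2*M + c3*ln(R) + c4*S by nonlinear least squares. This functional form is a common choice and an assumption here; the study itself used nonlinear mixed-effects models with a logistic component for zero bracketed durations, and the data below are simulated placeholders.

```python
# Hedged sketch of fitting a simple significant-duration model of the form
# ln(D) = c1 + c2*M + c3*ln(R) + c4*S (S = 1 for soil, 0 for rock). This
# functional form is a common choice and an assumption here; the study itself
# used nonlinear mixed-effects models plus a logistic component to handle zero
# bracketed durations. Data below are simulated placeholders.
import numpy as np
from scipy.optimize import curve_fit

def ln_duration(X, c1, c2, c3, c4):
    magnitude, distance_km, soil_flag = X
    return c1 + c2 * magnitude + c3 * np.log(distance_km) + c4 * soil_flag

rng = np.random.default_rng(7)
n = 600                                          # record count reported in the abstract
M = rng.uniform(3.0, 6.5, n)
R = rng.uniform(4.0, 1000.0, n)
S = rng.integers(0, 2, n).astype(float)
ln_D = ln_duration((M, R, S), -2.0, 0.6, 0.35, 0.2) + rng.normal(0, 0.4, n)

coef, _ = curve_fit(ln_duration, (M, R, S), ln_D)
print("fitted coefficients c1..c4:", np.round(coef, 3))
print("predicted significant duration (M=5.5, R=100 km, soil):",
      float(np.exp(ln_duration((5.5, 100.0, 1.0), *coef))), "s")
```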
Derivation of a formula to predict patient volume based on temperature at college football games.
Kman, Nicholas E; Russell, Gregory B; Bozeman, William P; Ehrman, Kevin; Winslow, James
2007-01-01
We sought to explore the relationship between temperature and spectator illness at Division I college football games by deriving a formula to predict the number of patrons seeking medical care based on the ambient temperature and the attendance of the game. A retrospective review was conducted of medical records from 47 Division I college football games at two outdoor stadiums from 2001 through 2005. Any person presenting for medical care was counted as a patient seen. Weather data were collected from the National Weather Service. A binomial model was fit to the spectator illness records using patients seen per attendee as the outcome measure, with temperature as the predictor. From this model, a formula was derived to estimate the number of patients needing medical attention based on the temperature and the number of spectators in attendance: Predicted # of patients = exp(-7.4383 - 0.24439 × T + 0.0156032 × T² - 0.000229196 × T³) × number of spectators, where T is the ambient temperature in °C; all factors were highly significant (p < 0.0001). The model suggests that as the temperature rises, the number of patients seeking medical attention will also increase. The formula shows that an increase in temperature from 20 to 21°C raises predicted patient encounters from 3.64 to 4.05 visits per 10,000 in attendance (an 11% increase). These results show that temperature is an important variable to consider when determining the medical resources needed to care for spectators at outdoor football games. Our model may help providers predict the number of spectators presenting for medical care based on the forecasted temperature and predicted attendance.
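The derived formula can be evaluated directly. The short sketch below implements it as quoted in the abstract and reproduces the 20 °C versus 21 °C example; the function and variable names are ours.

```python
import math

def predicted_patients(temp_c: float, attendance: int) -> float:
    """Expected number of patient encounters for a given temperature (deg C) and attendance."""
    rate = math.exp(-7.4383
                    - 0.24439 * temp_c
                    + 0.0156032 * temp_c ** 2
                    - 0.000229196 * temp_c ** 3)
    return rate * attendance

# Reproduces the example from the abstract: ~3.64 vs. ~4.05 per 10,000 attendees.
print(predicted_patients(20, 10_000), predicted_patients(21, 10_000))
```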
Choudhary, Gaurav; Jankowich, Matthew; Wu, Wen-Chih
2014-07-01
Although elevated pulmonary artery systolic pressure (PASP) is associated with heart failure (HF), whether PASP measurement can help predict future HF admissions is not known, especially in African Americans who are at increased risk for HF. We hypothesized that elevated PASP is associated with increased risk of HF admission and improves HF prediction in an African American population. We conducted a longitudinal analysis using the Jackson Heart Study cohort (n=3125; 32.2% men) with baseline echocardiography-derived PASP and follow-up for HF admissions. The hazard ratio for HF admission was estimated using a Cox proportional hazards model adjusted for variables in the Atherosclerosis Risk in Communities (ARIC) HF prediction model. During a median follow-up of 3.46 years, 3.42% of the cohort was admitted for HF. Subjects with HF had a higher PASP (35.6±11.4 versus 27.6±6.9 mm Hg; P<0.001). The hazard of HF admission increased with higher baseline PASP (adjusted hazard ratio per 10 mm Hg increase in PASP: 2.03; 95% confidence interval, 1.67-2.48; adjusted hazard ratio for highest [≥33 mm Hg] versus lowest quartile [<24 mm Hg] of PASP: 2.69; 95% confidence interval, 1.43-5.06) and remained significant irrespective of history of HF or preserved/reduced ejection fraction. Addition of PASP to the ARIC model resulted in a significant improvement in model discrimination (area under the curve=0.82 before versus 0.84 after; P=0.03) and improved net reclassification index (11-15%) using PASP as a continuous or dichotomous (cutoff=33 mm Hg) variable. Elevated PASP predicts HF admissions in African Americans and may aid in early identification of at-risk subjects for aggressive risk factor modification. © 2014 American Heart Association, Inc.
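For readers unfamiliar with the setup, the sketch below shows a generic Cox proportional hazards fit of time to HF admission on PASP and age using the lifelines package. The synthetic data, column names, and effect sizes are illustrative assumptions and do not reproduce the Jackson Heart Study analysis or its ARIC-model adjustment set.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
pasp = rng.normal(28.0, 7.0, n)                 # baseline PASP, mm Hg
age = rng.normal(55.0, 12.0, n)

# Assumed data-generating process: higher PASP and age shorten time to HF admission.
hazard = np.exp(0.07 * (pasp - 28.0) + 0.03 * (age - 55.0))
time = rng.exponential(10.0 / hazard)
event = (time < 5.0).astype(int)                # administrative censoring at 5 years
time = np.minimum(time, 5.0)

df = pd.DataFrame({"years": time, "hf_admission": event, "pasp": pasp, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="hf_admission")
print(cph.summary[["exp(coef)"]])               # per-unit hazard ratios (scale PASP by 10 mm Hg)
```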
Biogeochemical modeling of CO2 and CH4 production in anoxic Arctic soil microcosms
Tang, Guoping; Zheng, Jianqiu; Xu, Xiaofeng; ...
2016-09-12
Soil organic carbon turnover to CO2 and CH4 is sensitive to soil redox potential and pH conditions. However, land surface models do not explicitly consider redox and pH in the aqueous phase, thereby limiting their use for making predictions in anoxic environments. Using recent data from incubations of Arctic soils, we extend the Community Land Model with coupled carbon and nitrogen (CLM-CN) decomposition cascade to include simple organic substrate turnover, fermentation, Fe(III) reduction, and methanogenesis reactions, and assess the efficacy of various temperature and pH response functions. Incorporating the Windermere Humic Aqueous Model (WHAM) enables us to approximately describe the observed pH evolution without additional parameterization. Though Fe(III) reduction is normally assumed to compete with methanogenesis, the model predicts that Fe(III) reduction raises the pH from acidic to neutral, thereby reducing environmental stress to methanogens and accelerating methane production when substrates are not limiting. Furthermore, the equilibrium speciation predicts a substantial increase in CO2 solubility as pH increases, and taking into account CO2 adsorption to surface sites of metal oxides further decreases the predicted headspace gas-phase fraction at low pH. Without adequate representation of these speciation reactions, as well as the impacts of pH, temperature, and pressure, the CO2 production from closed microcosms can be substantially underestimated based on headspace CO2 measurements only. Our results demonstrate the efficacy of geochemical models for simulating soil biogeochemistry and provide predictive understanding and mechanistic representations that can be incorporated into land surface models to improve climate predictions.
Occupant-vehicle dynamics and the role of the internal model
NASA Astrophysics Data System (ADS)
Cole, David J.
2018-05-01
With the increasing need to reduce time and cost of vehicle development there is increasing advantage in simulating mathematically the dynamic interaction of a vehicle and its occupant. The larger design space arising from the introduction of automated vehicles further increases the potential advantage. The aim of the paper is to outline the role of the internal model hypothesis in understanding and modelling occupant-vehicle dynamics, specifically the dynamics associated with direction and speed control of the vehicle. The internal model is the driver's or passenger's understanding of the vehicle dynamics and is thought to be employed in the perception, cognition and action processes of the brain. The internal model aids the estimation of the states of the vehicle from noisy sensory measurements. It can also be used to optimise cognitive control action by predicting the consequence of the action; thus model predictive control (MPC) theory provides a foundation for modelling the cognition process. The stretch reflex of the neuromuscular system also makes use of the prediction of the internal model. Extensions to the MPC approach are described which account for: interaction with an automated vehicle; robust control; intermittent control; and cognitive workload. Further work to extend understanding of occupant-vehicle dynamic interaction is outlined. This paper is based on a keynote presentation given by the author to the 13th International Symposium on Advanced Vehicle Control (AVEC) conference held in Munich, September 2016.
Prediction of BP reactivity to talking using hybrid soft computing approaches.
Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar
2014-01-01
High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in the measurement of BP is appropriate in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with an artificial neural network (ANN), an adaptive neuro-fuzzy inference system (ANFIS), and a least squares support vector machine (LS-SVM) model to remove the multicollinearity effect among the anthropometric predictor variables. Statistical tests in terms of the coefficient of determination (R²), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA-based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity than the other models. This assessment demonstrates the importance and advantages of PCA-fused prediction models for the prediction of biological variables.
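A minimal sketch of the PCA-fused idea is given below, using scikit-learn's epsilon-SVR with an RBF kernel as a stand-in for the LS-SVM of the study. The synthetic anthropometric data and all parameter choices are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
n = 120
age = rng.normal(40.0, 12.0, n)
height = rng.normal(165.0, 8.0, n)              # cm
weight = rng.normal(70.0, 12.0, n)              # kg
bmi = weight / (height / 100.0) ** 2            # deliberately collinear with height and weight
ac = 0.25 * weight + rng.normal(10.0, 1.5, n)   # arm circumference, cm (assumed relation)
X = np.column_stack([age, height, weight, bmi, ac])
y = 5.0 + 0.10 * age + 0.20 * bmi + rng.normal(0.0, 2.0, n)   # assumed BP reactivity, mm Hg

# PCA removes the collinearity before the kernel regressor sees the data.
model = make_pipeline(StandardScaler(), PCA(n_components=3), SVR(kernel="rbf", C=10.0))
print("5-fold CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean().round(2))
```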
Modelling proteins' hidden conformations to predict antibiotic resistance
NASA Astrophysics Data System (ADS)
Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.
2016-10-01
TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods that are variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF) were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307
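The random forest workflow with an averaged-variable-importance (AVI) style ranking can be sketched as below: importances are averaged over repeated fits with different random seeds. The synthetic predictor matrix, the placeholder class labels, and the simple seed-averaging loop are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n, p = 400, 8
X = rng.normal(size=(n, p))                     # stand-ins for backscatter-derived predictors
# Placeholder four-class hardness labels weakly driven by the first two predictors.
y = np.clip((X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n)).round().astype(int) + 2, 0, 3)

importances = np.zeros(p)
for seed in range(20):                          # average importances over repeated fits
    rf = RandomForestClassifier(n_estimators=300, random_state=seed).fit(X, y)
    importances += rf.feature_importances_
importances /= 20.0
print("predictors ranked by averaged importance:", np.argsort(importances)[::-1])
```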
NASA Astrophysics Data System (ADS)
Bernardes, S.
2017-12-01
Outputs from coupled carbon-climate models show considerable variability in atmospheric and land fields over the 21st century, including changes in temperature and in the spatiotemporal distribution and quantity of precipitation over the planet. Reductions in water availability due to decreased precipitation and increased water demand by the atmosphere may reduce carbon uptake by critical ecosystems. Conversely, increases in atmospheric carbon dioxide have the potential to offset reductions in productivity. This work focuses on predicted responses of plants to environmental changes and on how plants will adjust their water use efficiency (WUE, plant production per unit of water lost by evapotranspiration) in the 21st century. Predicted changes in WUE were investigated using an ensemble of Earth System Models from the Coupled Model Intercomparison Project Phase 5 (CMIP5), flux tower data, and products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. Scenarios for climate futures used two representative concentration pathways, one with a carbon concentration peak in 2040 (RCP4.5) and one with rising emissions throughout the 21st century (RCP8.5). Model results included the periods 2006-2009 (predicted) and 1850-2005 (reference). IPCC SREX regions were used to compare modeled, flux, and satellite data and to address the significant intermodel variability observed for the CMIP5 ensemble (larger variability for RCP8.5, higher intermodel agreement in Southeast Asia, lower intermodel agreement in arid areas). An evaluation of model skill at the regional level supported model selection and the spatiotemporal analysis of changes in WUE. Departures of projected conditions from historical values are presented for both concentration pathways at global and regional levels, including latitudinal distributions. The models showed high sensitivity to the different concentration pathways, and increases in GPP and WUE were observed for most of the planet (increases consistently higher for RCP8.5). Higher latitudes in the northern hemisphere (boreal region) are predicted to experience larger increases in GPP and WUE, with changes in WUE usually tracking those in GPP. Models point to decreases in productivity and WUE mostly in the tropics, affecting tropical forests in the Amazon and in Central America.
NASA Technical Reports Server (NTRS)
Nese, Jon M.; Dutton, John A.
1993-01-01
The predictability of the weather and climatic states of a low-order moist general circulation model is quantified using a dynamic systems approach, and the effect of incorporating a simple oceanic circulation on predictability is evaluated. The predictability and the structure of the model attractors are compared using Liapunov exponents, local divergence rates, and the correlation and Liapunov dimensions. It was found that the activation of oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10 percent and decreases the variance of the largest local divergence rate by 20 percent. When an oceanic circulation develops, the average predictability of annually averaged states is improved by 25 percent and the variance of the largest local divergence rate decreases by 25 percent.
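The predictability diagnostics mentioned here follow from the largest Liapunov exponent, since the average error doubling time is ln 2 divided by that exponent. The sketch below estimates the exponent by the standard two-trajectory renormalization method for the Lorenz-63 system, which stands in (purely as an assumption for illustration) for the low-order moist general circulation model of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

d0, dt, n_steps = 1e-8, 0.1, 2000
a = np.array([1.0, 1.0, 20.0])
b = a + np.array([d0, 0.0, 0.0])                # nearby trajectory, offset by d0
log_growth = 0.0
for _ in range(n_steps):
    a = solve_ivp(lorenz, (0.0, dt), a, rtol=1e-9, atol=1e-12).y[:, -1]
    b = solve_ivp(lorenz, (0.0, dt), b, rtol=1e-9, atol=1e-12).y[:, -1]
    d = np.linalg.norm(b - a)
    log_growth += np.log(d / d0)
    b = a + (b - a) * (d0 / d)                  # renormalize the separation vector
lam = log_growth / (n_steps * dt)
print(f"largest Liapunov exponent ~ {lam:.2f}; error doubling time ~ {np.log(2) / lam:.2f} time units")
```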
Schummers, Laura; Himes, Katherine P; Bodnar, Lisa M; Hutcheon, Jennifer A
2016-09-21
Compelled by the intuitive appeal of predicting each individual patient's risk of an outcome, there is a growing interest in risk prediction models. While the statistical methods used to build prediction models are increasingly well understood, the literature offers little insight to researchers seeking to gauge a priori whether a prediction model is likely to perform well for their particular research question. The objective of this study was to inform the development of new risk prediction models by evaluating model performance under a wide range of predictor characteristics. Data from all births to overweight or obese women in British Columbia, Canada from 2004 to 2012 (n = 75,225) were used to build a risk prediction model for preeclampsia. The data were then augmented with simulated predictors of the outcome with pre-set prevalence values and univariable odds ratios. We built 120 risk prediction models that included known demographic and clinical predictors, and one, three, or five of the simulated variables. Finally, we evaluated standard model performance criteria (discrimination, risk stratification capacity, calibration, and Nagelkerke's R²) for each model. Findings from our models built with simulated predictors demonstrated the predictor characteristics required for a risk prediction model to adequately discriminate cases from non-cases and to adequately classify patients into clinically distinct risk groups. Several predictor characteristics can yield well performing risk prediction models; however, these characteristics are not typical of predictor-outcome relationships in many population-based or clinical data sets. Novel predictors must be both strongly associated with the outcome and prevalent in the population to be useful for clinical prediction modeling (e.g., one predictor with prevalence ≥20% and odds ratio ≥8, or 3 predictors with prevalence ≥10% and odds ratios ≥4). Area under the receiver operating characteristic curve values of >0.8 were necessary to achieve reasonable risk stratification capacity. Our findings provide a guide for researchers to estimate the expected performance of a prediction model before a model has been built based on the characteristics of available predictors.
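The core simulation idea, adding a synthetic binary predictor with a pre-set prevalence and odds ratio to a set of baseline predictors and then checking model discrimination, can be sketched as follows. The data-generating model and the numbers are illustrative assumptions, not the British Columbia cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n, prevalence, odds_ratio = 50_000, 0.20, 8.0
base = rng.normal(size=(n, 3))                  # stand-ins for demographic/clinical predictors
x_sim = rng.binomial(1, prevalence, n)          # simulated predictor with pre-set prevalence

# Assumed outcome model: weak baseline effects plus the simulated predictor at the stated OR.
lin = -3.5 + 0.2 * base.sum(axis=1) + np.log(odds_ratio) * x_sim
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

X = np.column_stack([base, x_sim])
proba = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
print("in-sample AUC:", round(roc_auc_score(y, proba), 3))
```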
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
NASA Astrophysics Data System (ADS)
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify the data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as a `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes. We then conduct Monte Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that uncertainty in HETT is relatively small for early times and increases with transit time; that uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; that introducing more data to a poor model structure may reduce predictive variance but does not reduce predictive bias; and that hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. The overall approach is as follows: a conceptual model is first developed from the field investigations; a complex model (the `virtual reality') is then developed based on that conceptual model; and this complex model serves as the basis for comparing simpler model structures, so that predictive uncertainty can be quantified relative to a known reference solution.
Parker, Scott L; Sivaganesan, Ahilan; Chotai, Silky; McGirt, Matthew J; Asher, Anthony L; Devin, Clinton J
2018-06-15
OBJECTIVE Hospital readmissions lead to a significant increase in the total cost of care in patients undergoing elective spine surgery. Understanding factors associated with an increased risk of postoperative readmission could facilitate a reduction in such occurrences. The aims of this study were to develop and validate a predictive model for 90-day hospital readmission following elective spine surgery. METHODS All patients undergoing elective spine surgery for degenerative disease were enrolled in a prospective longitudinal registry. All 90-day readmissions were prospectively recorded. For predictive modeling, all covariates were selected by choosing those variables that were significantly associated with readmission and by incorporating other relevant variables based on clinical intuition and the Akaike information criterion. Eighty percent of the sample was randomly selected for model development and 20% for model validation. Multiple logistic regression analysis was performed with Bayesian model averaging (BMA) to model the odds of 90-day readmission. Goodness of fit was assessed via the C-statistic, that is, the area under the receiver operating characteristic curve (AUC), using the training data set. Discrimination (predictive performance) was assessed using the C-statistic, as applied to the 20% validation data set. RESULTS A total of 2803 consecutive patients were enrolled in the registry, and their data were analyzed for this study. Of this cohort, 227 (8.1%) patients were readmitted to the hospital (for any cause) within 90 days postoperatively. Variables significantly associated with an increased risk of readmission were as follows (OR [95% CI]): lumbar surgery 1.8 [1.1-2.8], government-issued insurance 2.0 [1.4-3.0], hypertension 2.1 [1.4-3.3], prior myocardial infarction 2.2 [1.2-3.8], diabetes 2.5 [1.7-3.7], and coagulation disorder 3.1 [1.6-5.8]. These variables, in addition to others determined a priori to be clinically relevant, comprised 32 inputs in the predictive model constructed using BMA. The AUC value for the training data set was 0.77 for model development and 0.76 for model validation. CONCLUSIONS Identification of high-risk patients is feasible with the novel predictive model presented herein. Appropriate allocation of resources to reduce the postoperative incidence of readmission may reduce the readmission rate and the associated health care costs.
A strategy to establish Food Safety Model Repositories.
Plaza-Rodríguez, C; Thoens, C; Falenski, A; Weiser, A A; Appel, B; Kaesbohrer, A; Filter, M
2015-07-02
Transferring the knowledge of predictive microbiology into real world food manufacturing applications is still a major challenge for the whole food safety modelling community. To facilitate this process, a strategy for creating open, community driven and web-based predictive microbial model repositories is proposed. These collaborative model resources could significantly improve the transfer of knowledge from research into commercial and governmental applications and also increase efficiency, transparency and usability of predictive models. To demonstrate the feasibility, predictive models of Salmonella in beef previously published in the scientific literature were re-implemented using an open source software tool called PMM-Lab. The models were made publicly available in a Food Safety Model Repository within the OpenML for Predictive Modelling in Food community project. Three different approaches were used to create new models in the model repositories: (1) all information relevant for model re-implementation is available in a scientific publication, (2) model parameters can be imported from tabular parameter collections and (3) models have to be generated from experimental data or primary model parameters. All three approaches were demonstrated in the paper. The sample Food Safety Model Repository is available via: http://sourceforge.net/projects/microbialmodelingexchange/files/models and the PMM-Lab software can be downloaded from http://sourceforge.net/projects/pmmlab/. This work also illustrates that a standardized information exchange format for predictive microbial models, as the key component of this strategy, could be established by adoption of resources from the Systems Biology domain. Copyright © 2015. Published by Elsevier B.V.
Composite Stress Rupture: A New Reliability Model Based on Strength Decay
NASA Technical Reports Server (NTRS)
Reeder, James R.
2012-01-01
A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model will be shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which would enable the production of lighter structures.
An uncertain future for lightning
NASA Astrophysics Data System (ADS)
Murray, Lee T.
2018-03-01
The most commonly used method for representing lightning in global atmospheric models generally predicts lightning increases in a warmer world. A new scheme finds the opposite result, directly challenging the predictive skill of an old stalwart.
Finch, Bryson E; Marzooghi, Solmaz; Di Toro, Dominic M; Stubblefield, William A
2017-08-01
Crude oils are composed of an assortment of hydrocarbons, some of which are polycyclic aromatic hydrocarbons (PAHs). Polycyclic aromatic hydrocarbons are of particular interest due to their narcotic and potential phototoxic effects. Several studies have examined the phototoxicity of individual PAHs and fresh and weathered crude oils, and several models have been developed to predict PAH toxicity. Fingerprint analyses of oils have shown that PAHs in crude oils are predominantly alkylated. However, current models for estimating PAH phototoxicity assume toxic equivalence between unsubstituted (i.e., parent) and alkyl-substituted compounds. This approach may be incorrect if substantial differences in toxic potency exist between unsubstituted and substituted PAHs. The objective of the present study was to examine the narcotic and photo-enhanced toxicity of commercially available unsubstituted and alkylated PAHs to mysid shrimp (Americamysis bahia). Data were used to validate predictive models of phototoxicity based on the highest occupied molecular orbital-lowest unoccupied molecular orbital (HOMO-LUMO) gap approach and to develop relative effect potencies. Results demonstrated that photo-enhanced toxicity increased with increasing methylation and that phototoxic PAH potencies vary significantly among unsubstituted compounds. Overall, predictive models based on the HOMO-LUMO gap were relatively accurate in predicting phototoxicity for unsubstituted PAHs but are limited to qualitative assessments. Environ Toxicol Chem 2017;36:2043-2049. © 2017 SETAC.
NASA Astrophysics Data System (ADS)
Rylander, Marissa N.; Feng, Yusheng; Diller, Kenneth; Bass, J.
2005-04-01
Heat shock proteins (HSP) are critical components of a complex defense mechanism essential for preserving cell survival under adverse environmental conditions. Hyperthermia inevitably enhances tumor tissue viability, owing to HSP expression in regions where temperatures are insufficient to coagulate proteins, and is likely to increase the probability of cancer recurrence. Although hyperthermia therapy is commonly used in conjunction with radiotherapy, chemotherapy, and gene therapy to increase therapeutic effectiveness, the efficacy of these therapies can be substantially hindered due to HSP expression when hyperthermia is applied prior to these procedures. Therefore, in planning hyperthermia protocols, prediction of the HSP response of the tumor must be incorporated into the treatment plan to optimize the thermal dose delivery and permit prediction of overall tissue response. In this paper, we present a highly accurate, adaptive, finite element tumor model capable of predicting the HSP expression distribution and tissue damage region based on measured cellular data when hyperthermia protocols are specified. Cubic spline representations of HSP27 and HSP70, and Arrhenius damage models were integrated into the finite element model to enable prediction of the HSP expression and damage distribution in the tissue following laser heating. Application of the model can enable optimized treatment planning by controlling the tissue response to therapy based on accurate prediction of the HSP expression and cell damage distribution.
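The thermal-damage component referred to here is the classical Arrhenius damage integral, Omega(t) = integral of A·exp(-Ea/(R·T(tau))) dtau, with injury often summarized as 1 - exp(-Omega). The sketch below evaluates it for a hypothetical laser-heating temperature history; the coefficients shown are commonly cited protein-coagulation values and the temperature history is invented, so neither reproduces the tissue-specific parameters or finite element fields of the study.

```python
import numpy as np
from scipy.integrate import trapezoid

A = 3.1e98            # frequency factor, 1/s (commonly cited value; not the paper's fit)
Ea = 6.28e5           # activation energy, J/mol (commonly cited value; not the paper's fit)
R = 8.314             # universal gas constant, J/(mol K)

t = np.linspace(0.0, 600.0, 6001)                           # time, s
T = 310.0 + 17.0 * np.exp(-((t - 300.0) / 120.0) ** 2)      # assumed heating history, K (peak ~327 K)

omega = trapezoid(A * np.exp(-Ea / (R * T)), t)             # Arrhenius damage integral
print(f"Omega = {omega:.3g}, predicted injured fraction = {1.0 - np.exp(-omega):.3f}")
```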
Applicability of linear regression equation for prediction of chlorophyll content in rice leaves
NASA Astrophysics Data System (ADS)
Li, Yunmei
2005-09-01
A modeling approach is used to assess the applicability of the derived equations, which are capable of predicting the chlorophyll content of rice leaves at a given view direction. Two radiative transfer models, the PROSPECT model operating at the leaf level and the FCR model operating at the canopy level, are used in the study. The study consists of three steps: (1) simulation of bidirectional reflectance from the canopy with different leaf chlorophyll contents, leaf-area-index (LAI) values and understorey configurations; (2) establishment of prediction relations for chlorophyll content by stepwise regression; and (3) assessment of the applicability of these relations. The results show that the accuracy of prediction is affected by different understorey configurations; however, the accuracy tends to improve greatly as LAI increases.
Ecological prediction with nonlinear multivariate time-frequency functional data models
Yang, Wen-Hsi; Wikle, Christopher K.; Holan, Scott H.; Wildhaber, Mark L.
2013-01-01
Time-frequency analysis has become a fundamental component of many scientific inquiries. Due to improvements in technology, the amount of high-frequency signals that are collected for ecological and other scientific processes is increasing at a dramatic rate. In order to facilitate the use of these data in ecological prediction, we introduce a class of nonlinear multivariate time-frequency functional models that can identify important features of each signal as well as the interaction of signals corresponding to the response variable of interest. Our methodology is of independent interest and utilizes stochastic search variable selection to improve model selection and performs model averaging to enhance prediction. We illustrate the effectiveness of our approach through simulation and by application to predicting spawning success of shovelnose sturgeon in the Lower Missouri River.
Threshold models for genome-enabled prediction of ordinal categorical traits in plant breeding.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo; Eskridge, Kent; Crossa, José
2014-12-23
Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9-14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. Copyright © 2015 Montesinos-López et al.
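For orientation, the linear kernel underlying a GBLUP-type threshold model is the genomic relationship matrix built from marker genotypes; one common construction (VanRaden's) is sketched below with synthetic 0/1/2-coded SNPs. The ordinal (threshold/probit) fit itself, and the G×E and G×G×E extensions, are not reproduced here, and the synthetic genotypes are placeholders rather than the 46,347-SNP maize data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_lines, n_snps = 278, 1000                     # 278 lines as in the study; SNP count reduced here
M = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)   # synthetic 0/1/2 genotype calls

p = M.mean(axis=0) / 2.0                        # estimated allele frequencies
Z = M - 2.0 * p                                 # column-centered marker matrix
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))     # VanRaden (2008) genomic relationship matrix
print("G shape:", G.shape, "mean diagonal:", round(G.diagonal().mean(), 3))
```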
Integrated modelling of H-mode pedestal and confinement in JET-ILW
NASA Astrophysics Data System (ADS)
Saarelma, S.; Challis, C. D.; Garzotti, L.; Frassinetti, L.; Maggi, C. F.; Romanelli, M.; Stokes, C.; Contributors, JET
2018-01-01
A pedestal prediction model Europed is built on the existing EPED1 model by coupling it with core transport simulation using a Bohm-gyroBohm transport model to self-consistently predict JET-ILW power scan for hybrid plasmas that display weaker power degradation than the IPB98(y, 2) scaling of the energy confinement time. The weak power degradation is reproduced in the coupled core-pedestal simulation. The coupled core-pedestal model is further tested for a 3.0 MA plasma with the highest stored energy achieved in JET-ILW so far, giving a prediction of the stored plasma energy within the error margins of the measured experimental value. A pedestal density prediction model based on the neutral penetration is tested on a JET-ILW database giving a prediction with an average error of 17% from the experimental data when a parameter taking into account the fuelling rate is added into the model. However the model fails to reproduce the power dependence of the pedestal density implying missing transport physics in the model. The future JET-ILW deuterium campaign with increased heating power is predicted to reach plasma energy of 11 MJ, which would correspond to 11-13 MW of fusion power in equivalent deuterium-tritium plasma but with isotope effects on pedestal stability and core transport ignored.
NASA Astrophysics Data System (ADS)
Tian, Yingtao; Robson, Joseph D.; Riekehr, Stefan; Kashaev, Nikolai; Wang, Li; Lowe, Tristan; Karanika, Alexandra
2016-07-01
Laser welding of advanced Al-Li alloys has been developed to meet the increasing demand for light-weight and high-strength aerospace structures. However, welding of high-strength Al-Li alloys can be problematic due to the tendency for hot cracking. Finding suitable welding parameters and filler material for this combination currently requires extensive and costly trial and error experimentation. The present work describes a novel coupled model to predict hot crack susceptibility (HCS) in Al-Li welds. Such a model can be used to shortcut the weld development process. The coupled model combines finite element process simulation with a two-level HCS model. The finite element process model predicts thermal field data for the subsequent HCS hot cracking prediction. The model can be used to predict the influences of filler wire composition and welding parameters on HCS. The modeling results have been validated by comparing predictions with results from fully instrumented laser welds performed under a range of process parameters and analyzed using high-resolution X-ray tomography to identify weld defects. It is shown that the model is capable of accurately predicting the thermal field around the weld and the trend of HCS as a function of process parameters.
Nonlinear lymphangion pressure-volume relationship minimizes edema
Venugopal, Arun M.; Stewart, Randolph H.; Laine, Glen A.
2010-01-01
Lymphangions, the segments of lymphatic vessel between two valves, contract cyclically and actively pump, analogous to cardiac ventricles. Besides having a discernable systole and diastole, lymphangions have a relatively linear end-systolic pressure-volume relationship (with slope Emax) and a nonlinear end-diastolic pressure-volume relationship (with slope Emin). To counter increased microvascular filtration (causing increased lymphatic inlet pressure), lymphangions must respond to modest increases in transmural pressure by increasing pumping. To counter venous hypertension (causing increased lymphatic inlet and outlet pressures), lymphangions must respond to potentially large increases in transmural pressure by maintaining lymph flow. We therefore hypothesized that the nonlinear lymphangion pressure-volume relationship allows transition from a transmural pressure-dependent stroke volume to a transmural pressure-independent stroke volume as transmural pressure increases. To test this hypothesis, we applied a mathematical model based on the time-varying elastance concept typically applied to ventricles (the ratio of pressure to volume cycles periodically from a minimum, Emin, to a maximum, Emax). This model predicted that lymphangions increase stroke volume and stroke work with transmural pressure if Emin < Emax at low transmural pressures, but maintain stroke volume and stroke work if Emin = Emax at higher transmural pressures. Furthermore, at higher transmural pressures, stroke work is evenly distributed among a chain of lymphangions. Model predictions were tested by comparison to previously reported data. Model predictions were consistent with reported lymphangion properties and pressure-flow relationships of entire lymphatic systems. The nonlinear lymphangion pressure-volume relationship therefore minimizes edema resulting from both increased microvascular filtration and venous hypertension. PMID:20601461
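A minimal numerical sketch of the argument: with a linear end-systolic pressure-volume relationship and a nonlinear (exponential) end-diastolic relationship, stroke volume grows with transmural pressure at low pressures and flattens once the local end-diastolic slope approaches Emax. The functional forms and parameter values below are assumptions for illustration, not the values fitted in the paper, and inlet and outlet pressures are collapsed into a single transmural pressure for simplicity.

```python
import numpy as np

V0 = 0.5                                   # unstressed volume (arbitrary units)
Emax = 60.0                                # slope of the linear end-systolic PV relationship
a, b = 2.0, 1.5                            # assumed nonlinear EDPVR: P = a * (exp(b*(V - V0)) - 1)

def v_end_diastole(p_tm):
    """Invert the exponential end-diastolic PV relationship."""
    return V0 + np.log(p_tm / a + 1.0) / b

def v_end_systole(p_tm):
    """Invert the linear end-systolic PV relationship."""
    return V0 + p_tm / Emax

for p_tm in (1.0, 5.0, 15.0, 30.0, 60.0):
    sv = v_end_diastole(p_tm) - v_end_systole(p_tm)
    print(f"transmural pressure {p_tm:5.1f} -> stroke volume {sv:.3f}")
```

Running the loop shows stroke volume rising steeply at low transmural pressures and leveling off at high pressures, which is the qualitative transition the abstract describes.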
Adapting the Water Erosion Prediction Project (WEPP) model for forest applications
Shuhui Dun; Joan Q. Wu; William J. Elliot; Peter R. Robichaud; Dennis C. Flanagan; James R. Frankenberger; Robert E. Brown; Arthur C. Xu
2009-01-01
There has been an increasing public concern over forest stream pollution by excessive sedimentation due to natural or human disturbances. Adequate erosion simulation tools are needed for sound management of forest resources. The Water Erosion Prediction Project (WEPP) watershed model has proved useful in forest applications where Hortonian flow is the major form of...
The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eden, H.F.; Mooers, C.N.K.
1990-06-01
The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.
Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann
2003-01-01
Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.
A model to predict progression in brain-injured patients.
Tommasino, N; Forteza, D; Godino, M; Mizraji, R; Alvarez, I
2014-11-01
The study of brain death (BD) epidemiology and the acute brain injury (ABI) progression profile is important to improve public health programs, organ procurement strategies, and intensive care unit (ICU) protocols. The purpose of this study was to analyze the ABI progression profile among patients admitted to ICUs with a Glasgow Coma Score (GCS) ≤8, as well as establishing a prediction model of probability of death and BD. This was a retrospective analysis of prospective data that included all brain-injured patients with GCS ≤8 admitted to a total of four public and private ICUs in Uruguay (N = 1447). The independent predictor factors of death and BD were studied using logistic regression analysis. A hierarchical model consisting of 2 nested logit regression models was then created. With these models, the probabilities of death, BD, and death by cardiorespiratory arrest were analyzed. In the first regression, we observed that as the GCS decreased and age increased, the probability of death rose. Each additional year of age increased the probability of death by 0.014. In the second model, however, BD risk decreased with each year of age. The presence of swelling, mass effect, and/or space-occupying lesion increased BD risk for the same given GCS. In the presence of injuries compatible with intracranial hypertension, age behaved as a protective factor that reduced the probability of BD. Based on the analysis of the local epidemiology, a model to predict the probability of death and BD can be developed. The organ potential donation of a country, region, or hospital can be predicted on the basis of this model, customizing it to each specific situation.
Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.
Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret
2005-01-01
Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
Daily spillover to and from binge eating in first-year university females.
Barker, Erin T; Williams, Rebecca L; Galambos, Nancy L
2006-01-01
Coping models of binge eating propose that stress and/or negative affect trigger binge eating, which serves to shift attention to the binge and its consequences. The current study tested these general assumptions using 14-day daily diary data collected from 66 first-year university females. Hierarchical Generalized Linear Modeling results showed that increased stress, negative affect, and weight concerns were associated with an increased likelihood of reporting symptoms of binge eating within days. Elevated weight concerns predicted next-day binge eating and binge eating predicted greater next-day negative affect. Discussion focuses on implications for coping models of binge eating.
Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges including feature redundancy, unbalanced data, and small sample sizes have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were optimum predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, Synthetic Minority Over-sampling technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors in affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
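The configuration reported as optimal, PCA for feature reduction, the Synthetic Minority Over-sampling Technique (SMOTE) for the unbalanced endpoint, and a random forest classifier, can be wired together as sketched below with an imbalanced-learn pipeline so that oversampling happens only inside training folds. The synthetic feature matrix, endpoint, and hyperparameters are assumptions for illustration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = rng.normal(size=(112, 400))                 # 112 patients x synthetic "radiomic" features
y = rng.binomial(1, 0.25, 112)                  # unbalanced endpoint (e.g., recurrence)

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=10),
                    SMOTE(random_state=0),
                    RandomForestClassifier(n_estimators=500, random_state=0))
print("5-fold CV AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean().round(2))
```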
Multi-model comparison highlights consistency in predicted effect of warming on a semi-arid shrub
Renwick, Katherine M.; Curtis, Caroline; Kleinhesselink, Andrew R.; Schlaepfer, Daniel R.; Bradley, Bethany A.; Aldridge, Cameron L.; Poulter, Benjamin; Adler, Peter B.
2018-01-01
A number of modeling approaches have been developed to predict the impacts of climate change on species distributions, performance, and abundance. The stronger the agreement from models that represent different processes and are based on distinct and independent sources of information, the greater the confidence we can have in their predictions. Evaluating the level of confidence is particularly important when predictions are used to guide conservation or restoration decisions. We used a multi-model approach to predict climate change impacts on big sagebrush (Artemisia tridentata), the dominant plant species on roughly 43 million hectares in the western United States and a key resource for many endemic wildlife species. To evaluate the climate sensitivity of A. tridentata, we developed four predictive models, two based on empirically derived spatial and temporal relationships, and two that applied mechanistic approaches to simulate sagebrush recruitment and growth. This approach enabled us to produce an aggregate index of climate change vulnerability and uncertainty based on the level of agreement between models. Despite large differences in model structure, predictions of sagebrush response to climate change were largely consistent. Performance, as measured by change in cover, growth, or recruitment, was predicted to decrease at the warmest sites, but increase throughout the cooler portions of sagebrush's range. A sensitivity analysis indicated that sagebrush performance responds more strongly to changes in temperature than precipitation. Most of the uncertainty in model predictions reflected variation among the ecological models, raising questions about the reliability of forecasts based on a single modeling approach. Our results highlight the value of a multi-model approach in forecasting climate change impacts and uncertainties and should help land managers to maximize the value of conservation investments.
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
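The error-injection experiment can be sketched as follows: randomize the activities of a chosen fraction of modeling-set compounds and track fivefold cross-validated performance as that fraction grows. The random descriptors, the random forest learner, and the specific error ratios below are assumptions for illustration, not the curated data sets or QSAR workflows of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n, p = 800, 50
X = rng.normal(size=(n, p))                          # stand-ins for chemical descriptors
y = X[:, :5].sum(axis=1) + rng.normal(0.0, 0.5, n)   # assumed true structure-activity signal

for error_ratio in (0.0, 0.1, 0.2, 0.4):
    y_noisy = y.copy()
    idx = rng.choice(n, size=int(error_ratio * n), replace=False)
    y_noisy[idx] = rng.choice(y, size=idx.size)      # randomize the "questionable" activities
    r2 = cross_val_score(RandomForestRegressor(n_estimators=200), X, y_noisy, cv=5).mean()
    print(f"simulated error ratio {error_ratio:.0%}: 5-fold CV R^2 = {r2:.2f}")
```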
NASA Technical Reports Server (NTRS)
Kalluri, Sreeramesh
2013-01-01
Structural materials used in engineering applications are routinely subjected to repetitive mechanical loads in multiple directions under non-isothermal conditions. Over the past few decades, several multiaxial fatigue life estimation models (stress- and strain-based) have been developed for isothermal conditions. Historically, numerous fatigue life prediction models have also been developed for thermomechanical fatigue (TMF), predominantly for uniaxial mechanical loading conditions. Realistic structural components encounter multiaxial loads under non-isothermal conditions, which increases the potential for interaction of damage modes. A need therefore exists for mechanical testing and for the development and verification of life prediction models under such conditions.
NASA Technical Reports Server (NTRS)
Wakelyn, N. T.; Gregory, G. L.
1980-01-01
Data for one day of the 1977 southeastern Virginia urban plume study are compared with computer predictions from a traveling air parcel model using a contemporary photochemical mechanism with a minimal description of nonmethane hydrocarbon (NMHC) constitution and chemistry. With measured initial NOx and O3 concentrations and a current separate estimate of urban source loading input to the model, and for a variation of initial NMHC over a reasonable range, an ozone increase over the day is predicted from the photochemical simulation which is consistent with the flight path averaged airborne data.
Projected Impact of Climate Change on Hydrological Regimes in the Philippines
Kanamaru, Hideki; Keesstra, Saskia; Maroulis, Jerry; David, Carlos Primo C.; Ritsema, Coen J.
2016-01-01
The Philippines is one of the most vulnerable countries in the world to the potential impacts of climate change. To fully understand these potential impacts, especially on future hydrological regimes and water resources (2010-2050), 24 river basins located in the major agricultural provinces throughout the Philippines were assessed. Calibrated using existing historical interpolated climate data, the STREAM model was used to assess future river flows derived from three global climate models (BCM2, CNCM3 and MPEH5) under two plausible scenarios (A1B and A2) and then compared with baseline scenarios (20th century). Results predict a general increase in water availability for most parts of the country. For the A1B scenario, CNCM3 and MPEH5 models predict an overall increase in river flows and river flow variability for most basins, with higher flow magnitudes and flow variability, while an increase in peak flow return periods is predicted for the middle and southern parts of the country during the wet season. However, in the north, the prognosis is for an increase in peak flow return periods for both wet and dry seasons. These findings suggest a general increase in water availability for agriculture, however, there is also the increased threat of flooding and enhanced soil erosion throughout the country. PMID:27749908
The Greenland Ice Sheet's surface mass balance in a seasonally sea ice-free Arctic
NASA Astrophysics Data System (ADS)
Day, J. J.; Bamber, J. L.; Valdes, P. J.
2013-09-01
General circulation models predict a rapid decrease in sea ice extent with concurrent increases in near-surface air temperature and precipitation in the Arctic over the 21st century. This has led to suggestions that some Arctic land ice masses may experience an increase in accumulation due to enhanced evaporation from a seasonally sea ice-free Arctic Ocean. To investigate the impact of this phenomenon on Greenland Ice Sheet climate and surface mass balance (SMB), a regional climate model, HadRM3, was used to force an insolation-temperature melt SMB model. A set of experiments designed to investigate the role of sea ice independently from sea surface temperature (SST) forcing are described. In the warmer and wetter SI + SST simulation, Greenland experiences a 23% increase in winter SMB but 65% reduced summer SMB, resulting in a net decrease in the annual value. This study shows that sea ice decline contributes to the increased winter balance, causing 25% of the increase in winter accumulation; this is largest in eastern Greenland as the result of increased evaporation in the Greenland Sea. These results indicate that the seasonal cycle of Greenland's SMB will increase dramatically as global temperatures increase, with the largest changes in temperature and precipitation occurring in winter. This demonstrates that the accurate prediction of changes in sea ice cover is important for predicting Greenland SMB and ice sheet evolution.
Yao, Chen; Zhu, Xiaojin; Weigel, Kent A
2016-11-07
Genomic prediction for novel traits, which can be costly and labor-intensive to measure, is often hampered by low accuracy due to the limited size of the reference population. As an option to improve prediction accuracy, we introduced a semi-supervised learning strategy known as the self-training model, and applied this method to genomic prediction of residual feed intake (RFI) in dairy cattle. We describe a self-training model that is wrapped around a support vector machine (SVM) algorithm, which enables it to use data from animals with and without measured phenotypes. Initially, a SVM model was trained using data from 792 animals with measured RFI phenotypes. Then, the resulting SVM was used to generate self-trained phenotypes for 3000 animals for which RFI measurements were not available. Finally, the SVM model was re-trained using data from up to 3792 animals, including those with measured and self-trained RFI phenotypes. Incorporation of additional animals with self-trained phenotypes enhanced the accuracy of genomic predictions compared to that of predictions that were derived from the subset of animals with measured phenotypes. The optimal ratio of animals with self-trained phenotypes to animals with measured phenotypes (2.5, 2.0, and 1.8) and the maximum increase achieved in prediction accuracy measured as the correlation between predicted and actual RFI phenotypes (5.9, 4.1, and 2.4%) decreased as the size of the initial training set (300, 400, and 500 animals with measured phenotypes) increased. The optimal number of animals with self-trained phenotypes may be smaller when prediction accuracy is measured as the mean squared error rather than the correlation between predicted and actual RFI phenotypes. Our results demonstrate that semi-supervised learning models that incorporate self-trained phenotypes can achieve genomic prediction accuracies that are comparable to those obtained with models using larger training sets that include only animals with measured phenotypes. Semi-supervised learning can be helpful for genomic prediction of novel traits, such as RFI, for which the size of reference population is limited, in particular, when the animals to be predicted and the animals in the reference population originate from the same herd-environment.
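A compact sketch of the self-training wrapper described above, using synthetic genotypes and phenotypes rather than the dairy cattle data: an SVM is trained on animals with measured phenotypes, used to generate self-trained phenotypes for unlabeled animals, and then re-trained on the combined set. Sizes, kernel, and regularization are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_snp = 300
X_lab = rng.integers(0, 3, size=(792, n_snp)).astype(float)   # genotypes of labeled animals
y_lab = X_lab[:, :10].sum(axis=1) + rng.normal(size=792)      # measured RFI (synthetic)
X_unl = rng.integers(0, 3, size=(3000, n_snp)).astype(float)  # animals without phenotypes

base = SVR(kernel="rbf", C=10.0)
base.fit(X_lab, y_lab)                      # step 1: train on measured phenotypes
y_self = base.predict(X_unl)                # step 2: generate self-trained phenotypes

X_all = np.vstack([X_lab, X_unl])
y_all = np.concatenate([y_lab, y_self])
final = SVR(kernel="rbf", C=10.0).fit(X_all, y_all)   # step 3: re-train on the enlarged set
```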
Fear and Loving in Las Vegas: Evolution, Emotion, and Persuasion
Griskevicius, Vladas; Goldstein, Noah J.; Mortensen, Chad R.; Sundie, Jill M.; Cialdini, Robert B.; Kenrick, Douglas T.
2009-01-01
How do arousal-inducing contexts, such as frightening or romantic television programs, influence the effectiveness of basic persuasion heuristics? Different predictions are made by three theoretical models: A general arousal model predicts that arousal should increase effectiveness of heuristics; an affective valence model predicts that effectiveness should depend on whether the context elicits positive or negative affect; an evolutionary model predicts that persuasiveness should depend on both the specific emotion that is elicited and the content of the particular heuristic. Three experiments examined how fear-inducing versus romantic contexts influenced the effectiveness of two widely used heuristics—social proof (e.g., “most popular”) and scarcity (e.g., “limited edition”). Results supported predictions from an evolutionary model, showing that fear can lead scarcity appeals to be counter-persuasive, and that romantic desire can lead social proof appeals to be counter-persuasive. The findings highlight how an evolutionary theoretical approach can lead to novel theoretical and practical marketing insights. PMID:19727416
Flood loss model transfer: on the value of additional data
NASA Astrophysics Data System (ADS)
Schröter, Kai; Lüdtke, Stefan; Vogel, Kristin; Kreibich, Heidi; Thieken, Annegret; Merz, Bruno
2017-04-01
The transfer of models across geographical regions and flood events is a key challenge in flood loss estimation. Variations in local characteristics and continuous system changes require regional adjustments and continuous updating with current evidence. However, acquiring data on damage influencing factors is expensive and therefore assessing the value of additional data in terms of model reliability and performance improvement is of high relevance. The present study utilizes empirical flood loss data on direct damage to residential buildings available from computer aided telephone interviews that were carried out after the floods in 2002, 2005, 2006, 2010, 2011 and 2013 mainly in the Elbe and Danube catchments in Germany. Flood loss model performance is assessed for incrementally increased numbers of loss data which are differentiated according to region and flood event. Two flood loss modeling approaches are considered: (i) a multi-variable flood loss model approach using Random Forests and (ii) a uni-variable stage damage function. Both model approaches are embedded in a bootstrapping process which allows evaluating the uncertainty of model predictions. Predictive performance of both models is evaluated with regard to mean bias, mean absolute and mean squared errors, as well as hit rate and sharpness. Mean bias and mean absolute error give information about the accuracy of model predictions; mean squared error and sharpness about precision and hit rate is an indicator for model reliability. The results of incremental, regional and temporal updating demonstrate the usefulness of additional data to improve model predictive performance and increase model reliability, particularly in a spatial-temporal transfer setting.
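For illustration only, the sketch below places a multi-variable Random Forest loss model next to a uni-variable stage-damage function and wraps the former in a bootstrap to obtain prediction uncertainty, in the spirit of the two approaches named above. The variables (water depth, further damage-influencing factors, loss ratio) and their relationships are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 400
depth = rng.uniform(0, 3, n)                       # water depth [m]
other = rng.normal(size=(n, 4))                    # further damage-influencing factors
loss = np.clip(0.2 * depth + 0.05 * other[:, 0] + rng.normal(scale=0.05, size=n), 0, 1)

X = np.column_stack([depth, other])
boot_preds = []
for b in range(50):                                # bootstrap for predictive uncertainty
    idx = rng.integers(0, n, n)
    rf = RandomForestRegressor(n_estimators=100, random_state=b).fit(X[idx], loss[idx])
    boot_preds.append(rf.predict(X))
rf_mean, rf_sd = np.mean(boot_preds, axis=0), np.std(boot_preds, axis=0)

# uni-variable stage-damage function: a simple square-root curve fitted to depth only
coef = np.polyfit(np.sqrt(depth), loss, 1)
sdf_pred = np.polyval(coef, np.sqrt(depth))
print("RF mean abs. error:", np.mean(np.abs(rf_mean - loss)).round(3),
      "| stage-damage function mean abs. error:", np.mean(np.abs(sdf_pred - loss)).round(3))
```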
Mustonen, Kaisa-Riikka; Mykrä, Heikki; Marttila, Hannu; Sarremejane, Romain; Veijalainen, Noora; Sippel, Kalle; Muotka, Timo; Hawkins, Charles P
2018-06-01
Air temperature at the northernmost latitudes is predicted to increase steeply and precipitation to become more variable by the end of the 21st century, resulting in altered thermal and hydrological regimes. We applied five climate scenarios to predict the future (2070-2100) benthic macroinvertebrate assemblages at 239 near-pristine sites across Finland (ca. 1200 km latitudinal span). We used a multitaxon distribution model with air temperature and modeled daily flow as predictors. As expected, projected air temperature increased the most in northernmost Finland. Predicted taxonomic richness also increased the most in northern Finland, congruent with the predicted northwards shift of many species' distributions. Compositional changes were predicted to be high even without changes in richness, suggesting that species replacement may be the main mechanism causing climate-induced changes in macroinvertebrate assemblages. Northern streams were predicted to lose much of the seasonality of their flow regimes, causing potentially marked changes in stream benthic assemblages. Sites with the highest loss of seasonality were predicted to support future assemblages that deviate most in compositional similarity from the present-day assemblages. Macroinvertebrate assemblages were also predicted to change more in headwaters than in larger streams, as headwaters were particularly sensitive to changes in flow patterns. Our results emphasize the importance of focusing protection and mitigation on headwater streams with high-flow seasonality because of their vulnerability to climate change. © 2018 John Wiley & Sons Ltd.
A univariate model of river water nitrate time series
NASA Astrophysics Data System (ADS)
Worrall, F.; Burt, T. P.
1999-01-01
Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect". This memory effect expressed itself as an increase in the winter-summer difference in nitrate levels that was dependent upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels and predictions were tested against data held back from the model construction process - predictions gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
Anthony R. DeGange; Bruce G. Marcot; James Lawler; Torre Jorgenson; Robert Winfree
2013-01-01
We used a modeling framework and a recent ecological land classification and land cover map to predict how ecosystems and wildlife habitat in northwest Alaska might change in response to increasing temperature. Our results suggest modest increases in forest and tall shrub ecotypes in Northwest Alaska by the end of this century thereby increasing habitat for forest-...
Ariafar, M Nima; Buzrul, Sencer; Akçelik, Nefise
2016-03-01
Biofilm formation of Salmonella Virchow was monitored with respect to time at three different temperature (20, 25 and 27.5 °C) and pH (5.2, 5.9 and 6.6) values. As the temperature increased at a constant pH level, biofilm formation decreased while as the pH level increased at a constant temperature, biofilm formation increased. Modified Gompertz equation with high adjusted determination coefficient (Radj(2)) and low mean square error (MSE) values produced reasonable fits for the biofilm formation under all conditions. Parameters of the modified Gompertz equation could be described in terms of temperature and pH by use of a second order polynomial function. In general, as temperature increased maximum biofilm quantity, maximum biofilm formation rate and time of acceleration of biofilm formation decreased; whereas, as pH increased; maximum biofilm quantity, maximum biofilm formation rate and time of acceleration of biofilm formation increased. Two temperature (23 and 26 °C) and pH (5.3 and 6.3) values were used up to 24 h to predict the biofilm formation of S. Virchow. Although the predictions did not perfectly match with the data, reasonable estimates were obtained. In principle, modeling and predicting the biofilm formation of different microorganisms on different surfaces under various conditions could be possible.
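A sketch of fitting the modified Gompertz equation, y(t) = A exp(-exp(mu_m e / A (lambda - t) + 1)), to biofilm-formation data of the kind described above. The data points below are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, A, mu_m, lam):
    # A: maximum biofilm quantity, mu_m: maximum formation rate, lam: lag/acceleration time
    return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 48, 13)                                    # hours
y_obs = modified_gompertz(t, A=1.2, mu_m=0.08, lam=6.0)
y_obs = y_obs + np.random.default_rng(3).normal(scale=0.02, size=t.size)

popt, _ = curve_fit(modified_gompertz, t, y_obs, p0=[1.0, 0.1, 5.0])
A, mu_m, lam = popt
print(f"max biofilm quantity A={A:.2f}, max rate mu_m={mu_m:.3f}, acceleration time lam={lam:.1f} h")
```

In the study, each fitted parameter was further described as a second-order polynomial in temperature and pH; the same curve_fit machinery could be reused for that outer fit.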
Autonomy and social norms in a three factor grief model predicting perinatal grief in India.
Roberts, Lisa R; Lee, Jerry W
2014-01-01
Perinatal grief following stillbirth is a significant social and mental health burden. We examined associations among the following latent variables: autonomy, social norms, self-despair, strained coping, and acute grief-among poor, rural women in India who experienced stillbirth. A structural equation model was built and tested using quantitative data from 347 women of reproductive age in Chhattisgarh. Maternal acceptance of traditional social norms worsens self-despair and strained coping, and increases the autonomy granted to women. Greater autonomy increases acute grief. Greater despair and acute grief increase strained coping. Social and cultural factors were found to predict perinatal grief in India.
Gaussian Processes for Prediction of Homing Pigeon Flight Trajectories
NASA Astrophysics Data System (ADS)
Mann, Richard; Freeman, Robin; Osborne, Michael; Garnett, Roman; Meade, Jessica; Armstrong, Chris; Biro, Dora; Guilford, Tim; Roberts, Stephen
2009-12-01
We construct and apply a stochastic Gaussian Process (GP) model of flight trajectory generation for pigeons trained to home from specific release sites. The model shows increasing predictive power as the birds become familiar with the sites, mirroring the animal's learning process. We show how the increasing similarity between successive flight trajectories can be used to infer, with increasing accuracy, an idealised route that captures the repeated spatial aspects of the bird's flight. We subsequently use techniques associated with reduced-rank GP approximations to objectively identify the key waypoints used by each bird to memorise its idiosyncratic habitual route between the release site and the home loft.
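A minimal illustrative sketch, not the authors' implementation: a Gaussian Process regression of a two-dimensional flight path against progress along the route, which is the basic ingredient of the habitual-route model described above. The toy track, kernel choice, and length scale are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
s = np.linspace(0, 1, 80)[:, None]                     # normalized progress along the flight
track = np.column_stack([10 * s.ravel(),               # x [km]
                         np.sin(6 * s.ravel()) + 0.1 * rng.normal(size=80)])  # y [km]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-2),
                              normalize_y=True)
gp.fit(s, track)                                       # learn an idealised route from one flight
route_mean, route_sd = gp.predict(np.linspace(0, 1, 200)[:, None], return_std=True)
```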
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dholabhai, Pratik P., E-mail: pratik.dholabhai@asu.ed; Anwar, Shahriar, E-mail: anwar@asu.ed; Adams, James B., E-mail: jim.adams@asu.ed
A kinetic lattice Monte Carlo (KLMC) model is developed for investigating oxygen vacancy diffusion in praseodymium-doped ceria. The current approach uses a database of activation energies for oxygen vacancy migration, calculated from first principles, for various migration pathways in praseodymium-doped ceria. Since the first-principles calculations revealed significant vacancy-vacancy repulsion, we investigate the importance of that effect by conducting simulations with and without a repulsive interaction. Initially, as dopant concentrations increase, vacancy concentration and thus conductivity increase. However, at higher concentrations, vacancies interfere with and repel one another, and dopants trap vacancies, creating a 'traffic jam' that decreases conductivity, which is consistent with the experimental findings. The modeled effective activation energy for vacancy migration increased slightly with increasing dopant concentration, in qualitative agreement with experiment. The current methodology, comprising a blend of first-principles calculations and the KLMC model, provides a very powerful fundamental tool for predicting the optimal dopant concentration in ceria-related materials. Graphical abstract (image omitted): ionic conductivity in praseodymium-doped ceria as a function of dopant concentration, calculated using the kinetic lattice Monte Carlo vacancy-repelling model, which predicts the optimal composition for achieving maximum conductivity. Research highlights: the KLMC method calculates the time-dependent diffusion of oxygen vacancies; the KLMC vacancy-repelling model predicts a dopant concentration of approximately 15-20% to be optimal in praseodymium-doped ceria; at higher dopant concentrations, vacancies interfere and repel one another, and dopants trap vacancies; the activation energy for vacancy migration increases as a function of dopant content.
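As an illustration of the residence-time bookkeeping behind a kinetic Monte Carlo model of vacancy diffusion, the sketch below hops vacancies on a one-dimensional ring with a single Arrhenius barrier. The barrier, temperature, attempt frequency and lattice size are placeholders, vacancy-vacancy repulsion is ignored, and this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(5)
kT = 8.617333e-5 * 800            # eV at 800 K (Boltzmann constant in eV/K times T)
L, n_vac, steps = 200, 10, 5000
occ = np.zeros(L, bool)
occ[rng.choice(L, n_vac, replace=False)] = True    # vacancy sites
Ea = 0.6                          # eV, single migration barrier (placeholder)
nu0 = 1e13                        # attempt frequency [1/s]

t = 0.0
for _ in range(steps):
    vacs = np.flatnonzero(occ)
    # enumerate allowed hops: a vacancy exchanges with a left/right non-vacancy neighbour
    hops = [(v, (v + d) % L) for v in vacs for d in (-1, 1) if not occ[(v + d) % L]]
    rates = np.full(len(hops), nu0 * np.exp(-Ea / kT))
    total = rates.sum()
    i = rng.choice(len(hops), p=rates / total)     # pick a hop proportional to its rate
    src, dst = hops[i]
    occ[src], occ[dst] = False, True
    t += -np.log(rng.random()) / total             # advance the clock (residence-time algorithm)
print(f"simulated time: {t:.3e} s after {steps} hops")
```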
Modeling aircraft noise induced sleep disturbance
NASA Astrophysics Data System (ADS)
McGuire, Sarah M.
One of the primary impacts of aircraft noise on a community is its disruption of sleep. Aircraft noise increases the time to fall asleep and the number of awakenings, and decreases the amount of rapid eye movement and slow wave sleep. Understanding these changes in sleep may be important as they could increase the risk for developing next-day effects such as sleepiness and reduced performance and long-term health effects such as cardiovascular disease. There are models that have been developed to predict the effect of aircraft noise on sleep. However, most of these models only predict the percentage of the population that is awakened. Markov and nonlinear dynamic models have been developed to predict an individual's sleep structure during the night. However, both of these models have limitations. The Markov model only accounts for whether an aircraft event occurred, not the noise level or other sound characteristics of the event that may affect the degree of disturbance. The nonlinear dynamic models were developed to describe normal sleep regulation and do not have a noise effects component. In addition, the nonlinear dynamic models have slow dynamics which make it difficult to predict short duration awakenings which occur both spontaneously and as a result of nighttime noise exposure. The purpose of this research was to examine these sleep structure models to determine how they could be altered to predict the effect of aircraft noise on sleep. Different approaches for adding a noise-level dependence to the Markov model were explored, and the modified model was validated by comparing predictions to behavioral awakening data. In order to determine how to add faster dynamics to the nonlinear dynamic sleep models it was necessary to have a more detailed sleep stage classification than was available from visual scoring of sleep data. An automatic sleep stage classification algorithm was developed which extracts different features of polysomnography data including the occurrence of rapid eye movements, sleep spindles, and slow wave sleep. Using these features, an approach for classifying sleep stages every second during the night was developed. From observation of the results of the sleep stage classification, it was determined how to add faster dynamics to the nonlinear dynamic model. Slow and fast REM activity are modeled separately and the activity in the gamma frequency band of the EEG signal is used to model both spontaneous and noise-induced awakenings. The nonlinear model predicts changes in sleep structure similar to those found by other researchers and reported in the sleep literature and similar to those found in obtained survey data. To compare sleep disturbance model predictions, flight operations data from US airports were obtained and sleep disturbance in communities was predicted for different operations scenarios using the modified Markov model, the nonlinear dynamic model, and other aircraft noise awakening models. Similarities and differences in model predictions were evaluated in order to determine if the use of the developed sleep structure model leads to improved predictions of the impact of nighttime noise on communities.
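A toy sketch of the general idea of adding a noise-level dependence to a Markov sleep model, not the dissertation's actual formulation: the awakening transition probability is inflated as a logistic function of the maximum sound level of an aircraft event. The transition matrix, logistic parameters, and overflight schedule are invented.

```python
import numpy as np

states = ["Wake", "NREM", "REM"]
base_P = np.array([[0.80, 0.18, 0.02],     # baseline 30-s epoch transition matrix (hypothetical)
                   [0.02, 0.93, 0.05],
                   [0.03, 0.07, 0.90]])

def awakening_boost(L_max_dBA):
    """Extra probability of transitioning to Wake during an aircraft event (assumed form)."""
    return 0.25 / (1.0 + np.exp(-(L_max_dBA - 55.0) / 5.0))

def transition_matrix(L_max_dBA=None):
    P = base_P.copy()
    if L_max_dBA is not None:
        P[1:, 0] += awakening_boost(L_max_dBA)     # NREM/REM -> Wake more likely
        P[1:] /= P[1:].sum(axis=1, keepdims=True)  # renormalize rows
    return P

rng = np.random.default_rng(6)
state, night = 1, []
for epoch in range(960):                           # 8 h of 30-s epochs
    noise = 70.0 if epoch % 120 == 0 else None     # one overflight per hour (toy schedule)
    state = rng.choice(3, p=transition_matrix(noise)[state])
    night.append(states[state])
print("awakenings:", sum(1 for a, b in zip(night, night[1:]) if a != "Wake" and b == "Wake"))
```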
Projections of Future Summertime Ozone over the U.S.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfister, G. G.; Walters, Stacy; Lamarque, J. F.
This study uses a regional fully coupled chemistry-transport model to assess changes in surface ozone over the summertime U.S. between the present and a 2050 future time period at high spatial resolution (12 km grid spacing) under the SRES A2 climate and RCP8.5 anthropogenic precursor emission scenario. The impact of predicted changes in climate and global background ozone is estimated to increase surface ozone over most of the U.S.; the 5th-95th percentile range for daily 8-hour maximum surface ozone increases from 31-79 ppbV to 30-87 ppbV between the present and future time periods. The analysis of a set of meteorological drivers suggests that these will mostly add to increasing ozone, but the set of simulations conducted does not allow this effect to be separated from that of enhanced global background ozone. Statistically, the most robust positive feedbacks are through increased temperature, biogenic emissions and solar radiation. Stringent emission controls can counteract these feedbacks and, if considered, we estimate large reductions in surface ozone, with the 5th-95th percentile reduced to 27-55 ppbV. A comparison of the high-resolution projections to global model projections shows that even though the global model is biased high in surface ozone compared to the regional model and compared to observations, both the global and the regional model predict similar changes in ozone between the present and future time periods. However, on smaller spatial scales, the regional predictions show more pronounced changes between urban and rural regimes that cannot be resolved at the coarse resolution of the global model. In addition, the sign of the changes in overall ozone mixing ratios can differ between the global and the regional predictions in certain regions, such as the Western U.S. This study confirms the key role of emission control strategies in future air quality predictions and demonstrates the need for considering degradation of air quality with future climate change in emission policy making. It also illustrates the need for high-resolution modeling when the objective is to address regional and local air quality or establish links to human health and society.
Prediction skill of rainstorm events over India in the TIGGE weather prediction models
NASA Astrophysics Data System (ADS)
Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.
2017-12-01
Extreme rainfall events pose a serious threat of severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is very essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models. The models considered are the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centers for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, although with a bias in spatial distribution and intensity. Statistical parameters such as the mean error (ME, or bias), root mean square error (RMSE) and correlation coefficient (CC) were computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts are under-predicting. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution of the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24-hr forecast to the 48-hr forecast in all three models.
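A simple sketch of the verification statistics named above (mean error/bias, RMSE and correlation coefficient) applied to a multi-model ensemble mean; the arrays are synthetic stand-ins for gridded rainfall over a rainstorm region, not TIGGE output.

```python
import numpy as np

def verify(forecast, observed):
    err = forecast - observed
    me = err.mean()                                   # mean error (bias)
    rmse = np.sqrt((err ** 2).mean())                 # root mean square error
    cc = np.corrcoef(forecast.ravel(), observed.ravel())[0, 1]   # correlation coefficient
    return me, rmse, cc

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 20.0, size=(50, 50))                   # observed rainfall [mm/day]
members = obs + rng.normal(scale=15.0, size=(10, 50, 50))   # 10 ensemble members (toy)
mme_mean = members.mean(axis=0)                             # multi-model ensemble mean
print("ME=%.2f  RMSE=%.2f  CC=%.2f" % verify(mme_mean, obs))
```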
Zamaninezhad, Ladan; Hohmann, Volker; Büchner, Andreas; Schädler, Marc René; Jürgens, Tim
2017-02-01
This study introduces a speech intelligibility model for cochlear implant users with ipsilateral preserved acoustic hearing that aims at simulating the observed speech-in-noise intelligibility benefit when receiving simultaneous electric and acoustic stimulation (EA-benefit). The model simulates the auditory nerve spiking in response to electric and/or acoustic stimulation. The temporally and spatially integrated spiking patterns were used as the final internal representation of noisy speech. Speech reception thresholds (SRTs) in stationary noise were predicted for a sentence test using an automatic speech recognition framework. The model was employed to systematically investigate the effect of three physiologically relevant model factors on simulated SRTs: (1) the spatial spread of the electric field which co-varies with the number of electrically stimulated auditory nerves, (2) the "internal" noise simulating the deprivation of auditory system, and (3) the upper bound frequency limit of acoustic hearing. The model results show that the simulated SRTs increase monotonically with increasing spatial spread for fixed internal noise, and also increase with increasing the internal noise strength for a fixed spatial spread. The predicted EA-benefit does not follow such a systematic trend and depends on the specific combination of the model parameters. Beyond 300 Hz, the upper bound limit for preserved acoustic hearing is less influential on speech intelligibility of EA-listeners in stationary noise. The proposed model-predicted EA-benefits are within the range of EA-benefits shown by 18 out of 21 actual cochlear implant listeners with preserved acoustic hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
Generalized role for the cerebellum in encoding internal models: evidence from semantic processing.
Moberget, Torgeir; Gullesen, Eva Hilland; Andersson, Stein; Ivry, Richard B; Endestad, Tor
2014-02-19
The striking homogeneity of cerebellar microanatomy is strongly suggestive of a corresponding uniformity of function. Consequently, theoretical models of the cerebellum's role in motor control should offer important clues regarding cerebellar contributions to cognition. One such influential theory holds that the cerebellum encodes internal models, neural representations of the context-specific dynamic properties of an object, to facilitate predictive control when manipulating the object. The present study examined whether this theoretical construct can shed light on the contribution of the cerebellum to language processing. We reasoned that the cerebellum might perform a similar coordinative function when the context provided by the initial part of a sentence can be highly predictive of the end of the sentence. Using functional MRI in humans we tested two predictions derived from this hypothesis, building on previous neuroimaging studies of internal models in motor control. First, focal cerebellar activation-reflecting the operation of acquired internal models-should be enhanced when the linguistic context leads terminal words to be predictable. Second, more widespread activation should be observed when such predictions are violated, reflecting the processing of error signals that can be used to update internal models. Both predictions were confirmed, with predictability and prediction violations associated with increased blood oxygenation level-dependent signal in the posterior cerebellum (Crus I/II). Our results provide further evidence for cerebellar involvement in predictive language processing and suggest that the notion of cerebellar internal models may be extended to the language domain.
Putting reward in art: A tentative prediction error account of visual art
Van de Cruys, Sander; Wagemans, Johan
2011-01-01
The predictive coding model is increasingly and fruitfully used to explain a wide range of findings in perception. Here we discuss the potential of this model in explaining the mechanisms underlying aesthetic experiences. Traditionally art appreciation has been associated with concepts such as harmony, perceptual fluency, and the so-called good Gestalt. We observe that more often than not great artworks blatantly violate these characteristics. Using the concept of prediction error from the predictive coding approach, we attempt to resolve this contradiction. We argue that artists often destroy predictions that they have first carefully built up in their viewers, and thus highlight the importance of negative affect in aesthetic experience. However, the viewer often succeeds in recovering the predictable pattern, sometimes on a different level. The ensuing rewarding effect is derived from this transition from a state of uncertainty to a state of increased predictability. We illustrate our account with several example paintings and with a discussion of art movements and individual differences in preference. On a more fundamental level, our theorizing leads us to consider the affective implications of prediction confirmation and violation. We compare our proposal to other influential theories on aesthetics and explore its advantages and limitations. PMID:23145260
Lee, Hasup; Baek, Minkyung; Lee, Gyu Rie; Park, Sangwoo; Seok, Chaok
2017-03-01
Many proteins function as homo- or hetero-oligomers; therefore, attempts to understand and regulate protein functions require knowledge of protein oligomer structures. The number of available experimental protein structures is increasing, and oligomer structures can be predicted using the experimental structures of related proteins as templates. However, template-based models may have errors due to sequence differences between the target and template proteins, which can lead to functional differences. Such structural differences may be predicted by loop modeling of local regions or refinement of the overall structure. In CAPRI (Critical Assessment of PRotein Interactions) round 30, we used recently developed features of the GALAXY protein modeling package, including template-based structure prediction, loop modeling, model refinement, and protein-protein docking to predict protein complex structures from amino acid sequences. Out of the 25 CAPRI targets, medium and acceptable quality models were obtained for 14 and 1 target(s), respectively, for which proper oligomer or monomer templates could be detected. Symmetric interface loop modeling on oligomer model structures successfully improved model quality, while loop modeling on monomer model structures failed. Overall refinement of the predicted oligomer structures consistently improved the model quality, in particular in interface contacts. Proteins 2017; 85:399-407. © 2016 Wiley Periodicals, Inc.
Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics
Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2017-01-01
Echinococcosis, which can seriously harm human health and animal husbandry production, has become an endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model in Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)4 model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model has a higher performance ability, not only for model fitting, but also for forecasting. Furthermore, considering the stability and the modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of identifying the future tendency. The number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before eventually witnessing a slow decline, until it finally ends. PMID:28273856
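A compact sketch of the classical GM(1,1) grey model mentioned above, fitted to a short synthetic case series (the FGM(1,1) and PECGM(1,1) variants add Fourier or periodic corrections on top of this basic scheme). The case counts below are invented, not the Xinjiang data.

```python
import numpy as np

def gm11(x0, horizon):
    x1 = np.cumsum(x0)                                     # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                          # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]       # grey development and control coefficients
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a      # time-response function
    return np.r_[x0[0], np.diff(x1_hat)]                   # restore the series by differencing

cases = np.array([812., 834., 858., 880., 905., 931.])     # hypothetical annual case counts
print(gm11(cases, horizon=3).round(1))                     # fitted values plus a 3-step forecast
```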
ERIC Educational Resources Information Center
Wurm, Lee H.; Seaman, Sean R.
2008-01-01
Previous research has demonstrated that the subjective danger and usefulness of words affect lexical decision times. Usually, an interaction is found: Increasing danger predicts faster reaction times (RTs) for words low on usefulness, but increasing danger predicts slower RTs for words high on usefulness. The authors show the same interaction with…
Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai
2015-01-01
Objectives For successful adoption of legislation controlling registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. Particularly, methods used in predicting toxicities of chemical substances during acquisition of required data, ultimately become an economic method for future dealings with new substances. Although the need for such methods is gradually increasing, the-required information about reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured the 10 representative QSAR-based prediction models and its information that can make predictions about substances that are expected to be regulated. We used models that predict and confirm usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and relevant data, we prepared methods quantifying the scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared adequacies of the models using the Alternative non-testing method assessed for Registration, Evaluation, Authorization, and Restriction of Chemicals Substances scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared inclusion rates of substances expected to be regulated, to confirm the applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database. Based on this data, we aimed to construct a system that can be used with predicted toxicity results. Furthermore, by presenting the suitability of individual predicted results, we aimed to provide a foundation that could be used in actual assessments and regulations. PMID:26206368
Herd factors associated with dairy cow mortality.
McConnel, C; Lombard, J; Wagner, B; Kopral, C; Garry, F
2015-08-01
Summary studies of dairy cow removal indicate increasing levels of mortality over the past several decades. This poses a serious problem for the US dairy industry. The objective of this project was to evaluate associations between facilities, herd management practices, disease occurrence and death rates on US dairy operations through an analysis of the National Animal Health Monitoring System's Dairy 2007 survey. The survey included farms in 17 states that represented 79.5% of US dairy operations and 82.5% of the US dairy cow population. During the first phase of the study operations were randomly selected from a sampling list maintained by the National Agricultural Statistics Service. Only farms that participated in phase I and had 30 or more dairy cows were eligible to participate in phase II. In total, 459 farms had complete data for all selected variables and were included in this analysis. Univariable associations between dairy cow mortality and 162 a priori identified operation-level management practices or characteristics were evaluated. Sixty of the 162 management factors explored in the univariate analysis met initial screening criteria and were further evaluated in a multivariable model exploring more complex relationships. The final weighted, negative binomial regression model included six variables. Based on the incidence rate ratio, this model predicted 32.0% less mortality for operations that vaccinated heifers for at least one of the following: bovine viral diarrhea, infectious bovine rhinotracheitis, parainfluenza 3, bovine respiratory syncytial virus, Haemophilus somnus, leptospirosis, Salmonella, Escherichia coli or clostridia. The final multivariable model also predicted a 27.0% increase in mortality for operations from which a bulk tank milk sample tested ELISA positive for bovine leukosis virus. Additionally, an 18.0% higher mortality was predicted for operations that used necropsies to determine the cause of death for some proportion of dead dairy cows. The final model also predicted that increased proportions of dairy cows with clinical mastitis and infertility problems were associated with increased mortality. Finally, an increase in mortality was predicted to be associated with an increase in the proportion of lame or injured permanently removed dairy cows. In general terms, this model identified that mortality was associated with reproductive problems, non-infectious postpartum disease, infectious disease and infectious disease prevention, and information derived from postmortem evaluations. Ultimately, addressing excessive mortality levels requires a concerted effort that recognizes and appropriately manages the numerous and diverse underlying risks.
Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model
NASA Astrophysics Data System (ADS)
Morin, Mario A.; Ficarazzo, Francesco
2006-04-01
Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of size distribution of rock fragments have been developed. In this study, a blast fragmentation Monte Carlo-based simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact and jointed rock properties, the type and properties of explosives and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
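An illustrative Monte Carlo sketch in the spirit of the approach described above: the rock factor and powder factor are sampled from assumed distributions, the Kuz-Ram mean fragment size and a Rosin-Rammler distribution are computed for each draw, and the spread of the predicted passing fraction is reported. All input distributions, the uniformity index, and the blast parameters are placeholders, not the quarry data.

```python
import numpy as np

rng = np.random.default_rng(8)
n_sim = 10_000
A = rng.normal(7.0, 1.0, n_sim)              # rock factor (assumed distribution)
K = rng.normal(0.55, 0.05, n_sim)            # powder factor [kg/m^3] (assumed distribution)
Q = 120.0                                    # explosive mass per hole [kg]
RWS = 115.0                                  # relative weight strength (ANFO-referenced)
n_uni = 1.6                                  # Rosin-Rammler uniformity index (assumed)

# Kuz-Ram mean fragment size [cm]
x50 = A * K ** -0.8 * Q ** (1 / 6) * (115.0 / RWS) ** (19 / 20)
sieve = 50.0                                 # screen size [cm]
passing = 1.0 - np.exp(-0.693 * (sieve / x50) ** n_uni)   # Rosin-Rammler passing fraction
print(f"P(<{sieve:.0f} cm): median {np.median(passing):.2f}, "
      f"5th-95th percentile {np.percentile(passing, [5, 95]).round(2)}")
```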
Storkey, J; Holst, N; Bøjer, O Q; Bigongiali, F; Bocci, G; Colbach, N; Dorner, Z; Riemens, M M; Sartorato, I; Sønderskov, M; Verschwele, A
2015-04-01
A functional approach to predicting shifts in weed floras in response to management or environmental change requires the combination of data on weed traits with analytical frameworks that capture the filtering effect of selection pressures on traits. A weed traits database (WTDB) was designed, populated and analysed, initially using data for 19 common European weeds, to begin to consolidate trait data in a single repository. The initial choice of traits was driven by the requirements of empirical models of weed population dynamics to identify correlations between traits and model parameters. These relationships were used to build a generic model, operating at the level of functional traits, to simulate the impact of increasing herbicide and fertiliser use on virtual weeds along gradients of seed weight and maximum height. The model generated 'fitness contours' (defined as population growth rates) within this trait space in different scenarios, onto which two sets of weed species, defined as common or declining in the UK, were mapped. The effect of increasing inputs on the weed flora was successfully simulated; 77% of common species were predicted to have stable or increasing populations under high fertiliser and herbicide use, in contrast with only 29% of the species that have declined. Future development of the WTDB will aim to increase the number of species covered, incorporate a wider range of traits and analyse intraspecific variability under contrasting management and environments.
Emery, Noah N; Simons, Jeffrey S
2017-08-01
This study tested a model linking sensitivity to punishment (SP) and reward (SR) to marijuana use and problems via affect lability and poor control. A 6-month prospective design was used in a sample of 2,270 young-adults (64% female). The hypothesized SP × SR interaction did not predict affect lability or poor control, but did predict use likelihood at baseline. At low levels of SR, SP was associated with an increased likelihood of abstaining, which was attenuated as SR increased. SP and SR displayed positive main effects on both affect lability and poor control. Affect lability and poor control, in turn, mediated effects on the marijuana outcomes. Poor control predicted both increased marijuana use and, controlling for use level, greater intensity of problems. Affect lability predicted greater intensity of problems, but was not associated with use level. There were few prospective effects. SR consistently predicted greater marijuana use and problems. SP however, exhibited both risk and protective pathways. Results indicate that SP is associated with a decreased likelihood of marijuana use. However, once use is initiated SP is associated with increased risk of problems, in part, due to its effects on both affect and behavioral dysregulation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Can shoulder dystocia be reliably predicted?
Dodd, Jodie M; Catcheside, Britt; Scheil, Wendy
2012-06-01
To evaluate factors reported to increase the risk of shoulder dystocia, and to evaluate their predictive value at a population level. The South Australian Pregnancy Outcome Unit's population database from 2005 to 2010 was accessed to determine the occurrence of shoulder dystocia in addition to reported risk factors, including age, parity, self-reported ethnicity, presence of diabetes and infant birth weight. Odds ratios (and 95% confidence intervals) for shoulder dystocia were calculated for each risk factor, and the factors were then incorporated into a logistic regression model. Test characteristics for each variable in predicting shoulder dystocia were calculated. As a proportion of all births, the reported rate of shoulder dystocia increased significantly from 0.95% in 2005 to 1.38% in 2010 (P = 0.0002). Using a logistic regression model, induction of labour and infant birth weight greater than both 4000 and 4500 g were identified as significant independent predictors of shoulder dystocia. Risk factors, both alone and when incorporated into the logistic regression model, were poorly predictive of the occurrence of shoulder dystocia. While there are a number of factors associated with an increased risk of shoulder dystocia, none are of sufficient sensitivity or positive predictive value to allow their clinical use to reliably and accurately identify the occurrence of shoulder dystocia. © 2012 The Authors ANZJOG © 2012 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
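A sketch of the kind of calculation involved, using a simulated cohort rather than the South Australian registry: a logistic regression on two binary risk factors, followed by sensitivity and positive predictive value at an arbitrary screening threshold. The prevalences, coefficients, and threshold are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 20_000
induction = rng.random(n) < 0.25                  # induction of labour (assumed prevalence)
bw_over_4000 = rng.random(n) < 0.12               # birth weight > 4000 g (assumed prevalence)
logit = -4.8 + 0.5 * induction + 1.2 * bw_over_4000
dystocia = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated outcomes (~1% baseline risk)

X = np.column_stack([induction, bw_over_4000]).astype(float)
model = LogisticRegression().fit(X, dystocia)
flagged = model.predict_proba(X)[:, 1] > 0.02     # flag pregnancies above a 2% predicted risk

tp = np.sum(flagged & dystocia)
fp = np.sum(flagged & ~dystocia)
fn = np.sum(~flagged & dystocia)
print(f"sensitivity = {tp / (tp + fn):.2f}, positive predictive value = {tp / (tp + fp):.2f}")
```

Even with genuinely associated risk factors, the low prevalence of the outcome keeps the positive predictive value small, which is the point the abstract makes.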
Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinowski, Kathleen T.; Fischell Department of Bioengineering, University of Maryland, College Park, MD; McAvoy, Thomas J.
2012-04-01
Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
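A sketch comparing ordinary-least-squares and partial-least-squares models that infer an internal (tumor) position from external marker traces with added measurement noise, in the spirit of the comparison above. The breathing geometry, noise levels, and train/test split are invented and do not reproduce the log-file analysis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(9)
t = np.linspace(0, 60, 1500)                               # 60 s of breathing sampled at 25 Hz
breath = np.sin(2 * np.pi * 0.25 * t)                      # common respiratory signal
markers = np.column_stack([breath + 0.05 * rng.normal(size=t.size) for _ in range(3)])
tumor = 8.0 * breath[:, None] + 0.2 * rng.normal(size=(t.size, 1))   # SI tumor motion [mm]

train = slice(0, 750)                                      # train on the first half of the fraction
ols = LinearRegression().fit(markers[train], tumor[train])
pls = PLSRegression(n_components=2).fit(markers[train], tumor[train])
for name, model in [("OLS", ols), ("PLS", pls)]:
    rmse = np.sqrt(np.mean((model.predict(markers[750:]) - tumor[750:]) ** 2))
    print(f"{name} RMSE on the later half of the fraction: {rmse:.2f} mm")
```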
Hybrid-PIC Modeling of a High-Voltage, High-Specific-Impulse Hall Thruster
NASA Technical Reports Server (NTRS)
Smith, Brandon D.; Boyd, Iain D.; Kamhawi, Hani; Huang, Wensheng
2013-01-01
The primary life-limiting mechanism of Hall thrusters is the sputter erosion of the discharge channel walls by high-energy propellant ions. Because of the difficulty involved in characterizing this erosion experimentally, many past efforts have focused on numerical modeling to predict erosion rates and thruster lifespan, but those analyses were limited to Hall thrusters operating in the 200-400V discharge voltage range. Thrusters operating at higher discharge voltages (V(sub d) >= 500 V) present an erosion environment that may differ greatly from that of the lower-voltage thrusters modeled in the past. In this work, HPHall, a well-established hybrid-PIC code, is used to simulate NASA's High-Voltage Hall Accelerator (HiVHAc) at discharge voltages of 300, 400, and 500V as a first step towards modeling the discharge channel erosion. It is found that the model accurately predicts the thruster performance at all operating conditions to within 6%. The model predicts a normalized plasma potential profile that is consistent between all three operating points, with the acceleration zone appearing in the same approximate location. The expected trend of increasing electron temperature with increasing discharge voltage is observed. An analysis of the discharge current oscillations shows that the model predicts oscillations that are much greater in amplitude than those measured experimentally at all operating points, suggesting that the differences in oscillation amplitude are not strongly associated with discharge voltage.
Berthet, Pierre; Lansner, Anders
2014-01-01
Optogenetic stimulation of specific types of medium spiny neurons (MSNs) in the striatum has been shown to bias the selection of mice in a two choices task. This shift is dependent on the localisation and on the intensity of the stimulation but also on the recent reward history. We have implemented a way to simulate this increased activity produced by the optical flash in our computational model of the basal ganglia (BG). This abstract model features the direct and indirect pathways commonly described in biology, and a reward prediction pathway (RP). The framework is similar to Actor-Critic methods and to the ventral/dorsal distinction in the striatum. We thus investigated the impact on the selection caused by an added stimulation in each of the three pathways. We were able to reproduce in our model the bias in action selection observed in mice. Our results also showed that biasing the reward prediction is sufficient to create a modification in the action selection. However, we had to increase the percentage of trials with stimulation relative to that in experiments in order to impact the selection. We found that increasing only the reward prediction had a different effect if the stimulation in RP was action dependent (only for a specific action) or not. We further looked at the evolution of the change in the weights depending on the stage of learning within a block. A bias in RP impacts the plasticity differently depending on that stage but also on the outcome. It remains to experimentally test how the dopaminergic neurons are affected by specific stimulations of neurons in the striatum and to relate data to predictions of our model.
Enfield, Kyle B; Schafer, Katherine; Zlupko, Mike; Herasevich, Vitaly; Novicoff, Wendy M; Gajic, Ognjen; Hoke, Tracey R; Truwit, Jonathon D
2012-01-01
Hospitals are increasingly compared based on clinical outcomes adjusted for severity of illness. Multiple methods exist to adjust for differences between patients. The challenge for consumers of this information, both the public and healthcare providers, is interpreting differences in risk adjustment models, particularly when models differ in their use of administrative and physiologic data. We set out to examine how administrative and physiologic models compare to each other when applied to critically ill patients. We prospectively abstracted variables for a physiologic and an administrative model of mortality from two intensive care units in the United States. Predicted mortality was compared through Pearson's product-moment correlation coefficient and Bland-Altman analysis. A subgroup of patients admitted directly from the emergency department was analyzed to remove potential confounding from changes in condition prior to ICU admission. We included 556 patients from two academic medical centers in this analysis. The predicted mortalities from the administrative and physiologic models for the combined cohort were 15.3% (95% CI 13.7%, 16.8%) and 24.6% (95% CI 22.7%, 26.5%), respectively (t-test p-value < 0.001). The r(2) for these models was 0.297. The Bland-Altman plot suggests that at low predicted mortality there was good agreement; however, as mortality increased the models diverged. Similar results were found when analyzing the subgroup of patients admitted directly from the emergency department. When comparing the two hospitals, there was a statistical difference when using the administrative model but not the physiologic model. Unexplained mortality, defined as patients who died despite a predicted mortality of less than 10%, was a rare event by either model. In conclusion, while it has been shown that administrative models provide estimates of mortality that are similar to physiologic models in non-critically ill patients with pneumonia, our results suggest this finding cannot be applied globally to patients admitted to intensive care units. As patients and providers increasingly use publicly reported information in making health care decisions and referrals, it is critical that the provided information be understood. Our results suggest that severity of illness may influence the mortality index in administrative models. We suggest that when interpreting "report cards" or metrics, health care providers determine how the risk adjustment was made and how it compares to other risk adjustment models.
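A minimal sketch of the two comparisons used above: the Pearson correlation between the predicted mortalities of an administrative and a physiologic model, and a Bland-Altman summary (mean difference and limits of agreement). The predicted mortalities below are synthetic, not the study cohort.

```python
import numpy as np

rng = np.random.default_rng(10)
physiologic = np.clip(rng.beta(2, 6, 556), 0, 1)                       # physiologic model predictions
administrative = np.clip(physiologic * 0.6 + rng.normal(0, 0.08, 556), 0, 1)  # administrative model

r = np.corrcoef(administrative, physiologic)[0, 1]                     # Pearson correlation
diff = administrative - physiologic
bias, sd = diff.mean(), diff.std(ddof=1)                               # Bland-Altman bias and SD
print(f"r^2 = {r**2:.2f}; Bland-Altman bias = {bias:.3f}, "
      f"limits of agreement = [{bias - 1.96*sd:.3f}, {bias + 1.96*sd:.3f}]")
```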
NASA Astrophysics Data System (ADS)
Wulandari, D. W.; Kusratmoko, E.; Indra, T. L.
2018-05-01
Land use changes (LUC) resulting from the increasing human need for space are likely to destroy the hydrological function of a watershed, increase land degradation, stimulate erosion and drive the process of sedimentation. This study aimed to predict LUC during the period 1990 to 2030 in relation to sediment yield in the Cilutung and Cipeles Watersheds, West Java. LUC were simulated using a Cellular Automata-Markov Chain model, whereas land use composition in 2030 was predicted using the Land Change Modeler in the Idrisi Selva software. Elevation, slope, distance from road, distance from river, and distance from settlement were selected as driving factors for LUC in this study. Erosion and sediment yield were predicted using the WATEM/SEDEM model based on land use, rainfall, soil texture and topography. The results showed that the areas of forest and shrub declined slightly, by up to 5%, during the period 1990 to 2016, generally being converted into rice fields, settlements, non-irrigated fields and plantations. In addition, rice fields, settlements, and plantations were expected to increase substantially, by up to 50%, by 2030. Furthermore, the study also revealed that erosion and sediment yield tend to increase every year. This is likely associated with the LUC occurring in the Cipeles and Cilutung Watersheds.
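A sketch of the Markov-chain component of a CA-Markov land-use projection: an annual transition matrix is applied repeatedly to project the class composition forward to 2030. The class shares and transition probabilities here are invented for illustration, not the Cilutung/Cipeles results, and the cellular-automata spatial allocation step is omitted.

```python
import numpy as np

classes = ["forest", "shrub", "rice field", "settlement", "plantation", "other"]
share_2016 = np.array([0.28, 0.07, 0.25, 0.12, 0.18, 0.10])   # hypothetical class shares
# hypothetical annual transition probabilities (each row sums to 1)
P = np.array([
    [0.985, 0.003, 0.004, 0.003, 0.004, 0.001],
    [0.004, 0.975, 0.008, 0.005, 0.007, 0.001],
    [0.000, 0.000, 0.990, 0.007, 0.002, 0.001],
    [0.000, 0.000, 0.000, 1.000, 0.000, 0.000],
    [0.001, 0.000, 0.002, 0.004, 0.992, 0.001],
    [0.002, 0.002, 0.003, 0.004, 0.004, 0.985],
])
share_2030 = share_2016 @ np.linalg.matrix_power(P, 14)       # project 14 years forward
print(dict(zip(classes, share_2030.round(3))))
```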
An SOA model for toluene oxidation in the presence of inorganic aerosols.
Cao, Gang; Jang, Myoseon
2010-01-15
A predictive model for secondary organic aerosol (SOA) formation including both partitioning and heterogeneous reactions is explored for the SOA produced from the oxidation of toluene in the presence of inorganic seed aerosols. The predictive SOA model comprises the explicit gas-phase chemistry of toluene, gas-particle partitioning, and heterogeneous chemistry. The resulting products from the explicit gas phase chemistry are lumped into several classes of chemical species based on their vapor pressure and reactivity for heterogeneous reactions. Both the gas-particle partitioning coefficient and the heterogeneous reaction rate constant of each lumped gas-phase product are theoretically determined using group contribution and molecular structure-reactivity. In the SOA model, the predictive SOA mass is decoupled into partitioning (OM(P)) and heterogeneous aerosol production (OM(H)). OM(P) is estimated from the SOA partitioning model developed by Schell et al. (J. Geophys. Res. 2001, 106, 28275-28293 ) that has been used in a regional air quality model (CMAQ 4.7). OM(H) is predicted from the heterogeneous SOA model developed by Jang et al. (Environ. Sci. Technol. 2006, 40, 3013-3022 ). The SOA model is evaluated using a number of the experimental SOA data that are generated in a 2 m(3) indoor Teflon film chamber under various experimental conditions (e.g., humidity, inorganic seed compositions, NO(x) concentrations). The SOA model reasonably predicts not only the gas-phase chemistry, such as the ozone formation, the conversion of NO to NO(2), and the toluene decay, but also the SOA production. The model predicted that the OM(H) fraction of the total toluene SOA mass increases as NO(x) concentrations decrease: 0.73-0.83 at low NO(x) levels and 0.17-0.47 at middle and high NO(x) levels for SOA experiments with high initial toluene concentrations. Our study also finds a significant increase in the OM(H) mass fraction in the SOA generated with low initial toluene concentrations, compared to those with high initial toluene concentrations. On average, more than a 1-fold increase in OM(H) fraction is observed when the comparison is made between SOA experiments with 40 ppb toluene to those with 630 ppb toluene. Such an observation implies that heterogeneous reactions of the second-generation products of toluene oxidation can contribute considerably to the total SOA mass under atmospheric relevant conditions.
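The decoupling of the predicted SOA mass into OM(P) and OM(H) can be illustrated with a toy calculation: absorptive gas-particle partitioning is iterated for a few lumped products, and a simple heterogeneous conversion of the remaining gas-phase material contributes OM(H). The lumped concentrations, partitioning coefficients and conversion fractions below are invented placeholders, not the model's parameterization.

```python
import numpy as np

C_total = np.array([4.0, 2.0, 1.0])          # lumped oxidation products [ug m^-3] (placeholders)
Kp = np.array([0.5, 0.05, 0.01])             # partitioning coefficients [m^3 ug^-1] (placeholders)
M_seed = 1.0                                 # pre-existing absorbing mass [ug m^-3]

OM_P = 0.0
for _ in range(50):                          # fixed-point iteration on the absorbing mass
    M_o = M_seed + OM_P
    particle_fraction = Kp * M_o / (1.0 + Kp * M_o)
    OM_P = float(np.sum(C_total * particle_fraction))

k_het = np.array([0.3, 0.1, 0.02])           # fraction converted heterogeneously (placeholders)
OM_H = float(np.sum((C_total - C_total * particle_fraction) * k_het))
print(f"OM(P) = {OM_P:.2f}, OM(H) = {OM_H:.2f}, total SOA = {OM_P + OM_H:.2f} ug m^-3")
```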
Gambling and the Reasoned Action Model: Predicting Past Behavior, Intentions, and Future Behavior.
Dahl, Ethan; Tagler, Michael J; Hohman, Zachary P
2018-03-01
Gambling is a serious concern for society because it is highly addictive and is associated with a myriad of negative outcomes. The current study applied the Reasoned Action Model (RAM) to understand and predict gambling intentions and behavior. Although prior studies have taken a reasoned action approach to understand gambling, no prior study has fully applied the RAM or used the RAM to predict future gambling. Across two studies the RAM was used to predict intentions to gamble, past gambling behavior, and future gambling behavior. In study 1 the model significantly predicted intentions and past behavior in both a college student and Amazon Mechanical Turk sample. In study 2 the model predicted future gambling behavior, measured 2 weeks after initial measurement of the RAM constructs. This study stands as the first to show the utility of the RAM in predicting future gambling behavior. Across both studies, attitudes and perceived normative pressure were the strongest predictors of intentions to gamble. These findings provide increased understanding of gambling and inform the development of gambling interventions based on the RAM.
Oke, Tobi A.; Hager, Heather A.
2017-01-01
The fate of Northern peatlands under climate change is important because of their contribution to global carbon (C) storage. Peatlands are maintained via greater plant productivity (especially of Sphagnum species) than decomposition, and the processes involved are strongly mediated by climate. Although some studies predict that warming will relax constraints on decomposition, leading to decreased C sequestration, others predict increases in productivity and thus increases in C sequestration. We explored the lack of congruence between these predictions using single-species and integrated species distribution models as proxies for understanding the environmental correlates of North American Sphagnum peatland occurrence and how projected changes to the environment might influence these peatlands under climate change. Using Maximum entropy and BIOMOD modelling platforms, we generated single and integrated species distribution models for four common Sphagnum species in North America under current climate and a 2050 climate scenario projected by three general circulation models. We evaluated the environmental correlates of the models and explored the disparities in niche breadth, niche overlap, and climate suitability among current and future models. The models consistently show that Sphagnum peatland distribution is influenced by the balance between soil moisture deficit and temperature of the driest quarter-year. The models identify the east and west coasts of North America as the core climate space for Sphagnum peatland distribution. The models show that, at least in the immediate future, the area of suitable climate for Sphagnum peatland could expand. This result suggests that projected warming would be balanced effectively by the anticipated increase in precipitation, which would increase Sphagnum productivity. PMID:28426754
Lucas, Joseph E.; Bazemore, Taylor C.; Alo, Celan; Monahan, Patrick B.
2017-01-01
HMG-CoA reductase inhibitors (or “statins”) are important and commonly used medications to lower cholesterol and prevent cardiovascular disease. Nearly half of patients stop taking statin medications one year after they are prescribed, leading to higher cholesterol, increased cardiovascular risk, and costs due to excess hospitalizations. Identifying which patients are at highest risk for not adhering to long-term statin therapy is an important step towards individualizing interventions to improve adherence. Electronic health records (EHR) are an increasingly common source of data that are challenging to analyze but have potential for generating more accurate predictions of disease risk. The aim of this study was to build an EHR-based model for statin adherence and link this model to biologic and clinical outcomes in patients receiving statin therapy. We gathered EHR data from the Military Health System, which maintains administrative data for active duty members, retirees, and dependents of the United States armed forces who receive health care benefits. Data were gathered from patients who received their first statin prescription in 2005 or 2006. Baseline billing, laboratory, and pharmacy claims data were collected from the two years leading up to the first statin prescription and summarized using non-negative matrix factorization. Follow-up statin prescription refill data were used to define the adherence outcome (>80% of days covered). The factors that emerged from this model were then used to build cross-validated predictive models of 1) overall disease risk (using coalescent regression) and 2) statin adherence (using random forest regression). The predicted statin adherence for each patient was subsequently correlated with cholesterol lowering and hospitalizations for cardiovascular disease during the 5-year follow-up period using Cox regression. The analytical dataset included 138 731 individuals and 1840 potential baseline predictors that were reduced to 30 independent EHR “factors”. A random forest predictive model taking patient, statin prescription, predicted disease risk, and the EHR factors as potential inputs produced a cross-validated c-statistic of 0.736 for classifying statin non-adherence. The addition of the first refill to the model increased the c-statistic to 0.81. The predicted statin adherence was independently associated with greater cholesterol lowering (correlation = 0.14, p < 1e-20) and lower hospitalization for myocardial infarction, coronary artery disease, and stroke (hazard ratio = 0.84, p = 1.87E-06). Electronic health records data can be used to build a predictive model of statin adherence that also correlates with statins’ cardiovascular benefits. PMID:29155848
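A minimal sketch of the pipeline described above, assuming synthetic placeholder data: non-negative matrix factorization reduces a baseline EHR code matrix to 30 latent factors, and a random forest classifier is scored with a cross-validated c-statistic. The coalescent disease-risk model and the Cox follow-up analysis from the study are omitted.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical stand-in for the baseline EHR matrix: non-negative counts of
# billing/laboratory/pharmacy codes per patient (rows) over candidate predictors.
X_counts = rng.poisson(0.3, size=(2000, 500)).astype(float)
adherent = rng.integers(0, 2, size=2000)   # placeholder outcome: >80% of days covered

# Reduce the raw codes to a small number of latent EHR "factors" with NMF,
# then classify adherence with a random forest; report a cross-validated
# c-statistic (ROC AUC), mirroring the workflow sketched in the abstract.
model = make_pipeline(
    NMF(n_components=30, init="nndsvda", max_iter=500, random_state=0),
    RandomForestClassifier(n_estimators=300, random_state=0),
)
auc = cross_val_score(model, X_counts, adherent, cv=5, scoring="roc_auc")
print(f"cross-validated c-statistic: {auc.mean():.3f}")
```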
NASA Astrophysics Data System (ADS)
Concepción Ramos, Maria
2017-04-01
The aim of this research was to analyse the effect of rainfall distribution and intensity on soil erosion in vines cultivated in the Mediterranean under the projected climate change scenario. The simulations were done at plot scale using the WEPP model. Climatic data for the period 1996-2014 were obtained from a meteorological station located 6 km from the plot. Soil characteristics such as texture, organic matter content, water retention capacity and infiltration were analysed. Runoff and soil losses were measured at four locations within the plot over 4 years and used to calibrate and validate the model. Based on evidence recorded in the area, changes in rainfall intensity of 10 and 20% were considered for different rainfall distributions. The simulations were extended to the predicted changes for 2030, 2050 and 2070 based on HadGEM2-CC under the Representative Concentration Pathway (RCP) 8.5 scenario. The WEPP model provided a suitable prediction of seasonal runoff and erosion, as it simulated relatively well the runoff and erosion of the most important events, although some deficiencies were found for events that produced low runoff. The simulations confirmed that extreme events contribute about 70% of annual erosion rates, on average. The model responded to the precipitation changes predicted under the climate change scenario with a decrease in runoff and erosion, and with higher erosion rates for an increase in rainfall intensity. A 10% increase in rainfall intensity may imply erosion rates up to 22% greater for the 2030 scenario, and despite the predicted decrease in precipitation for the 2050 scenario, soil losses may be up to 40% greater than at present for some rainfall distributions and rainfall intensity increases of 20%. These findings show the need to consider rainfall intensity as one of the main driving factors when soil erosion rates under climate change are predicted. Keywords: extreme events, rainfall distribution, runoff, soil losses, vines, WEPP.
Assessing the accuracy of predictive models for numerical data: Not r nor r2, why not? Then what?
2017-01-01
Assessing the accuracy of predictive models is critical because predictive models have been increasingly used across various disciplines and predictive accuracy determines the quality of the resulting predictions. The Pearson product-moment correlation coefficient (r) and the coefficient of determination (r2) are among the most widely used measures for assessing predictive models for numerical data, although they are argued to be biased, insufficient and misleading. In this study, geometrical graphs were used to illustrate the quantities involved in the calculation of r and r2, and simulations were used to demonstrate the behaviour of r and r2 and to compare three accuracy measures under various scenarios. Relevant confusions about r and r2 have been clarified. The calculation of r and r2 is not based on the differences between the predicted and observed values. The existing error measures suffer various limitations and are unable to indicate accuracy on their own. Variance explained by predictive models based on cross-validation (VEcv) is free of these limitations and is a reliable accuracy measure. Legates and McCabe’s efficiency (E1) is also an alternative accuracy measure. The r and r2 do not measure accuracy and are incorrect accuracy measures. The existing error measures suffer limitations. VEcv and E1 are recommended for assessing accuracy. The application of these accuracy measures would encourage the development of accuracy-improved predictive models to generate predictions for evidence-informed decision-making. PMID:28837692
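The two recommended measures lend themselves to a short sketch. Below, VEcv and E1 are computed as commonly defined (variance explained by cross-validation predictions, and an absolute-error analogue); the made-up example shows a prediction that is perfectly correlated with the observations yet scores poorly on both measures, which is the point the abstract makes about r.

```python
import numpy as np

def vecv(observed, predicted):
    """Variance explained by cross-validation predictions (VEcv), in percent.
    `predicted` should come from cross-validation, not from refitting on all data."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    ss_err = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return (1.0 - ss_err / ss_tot) * 100.0

def e1(observed, predicted):
    """Legates and McCabe's efficiency E1, based on absolute errors."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return 1.0 - np.sum(np.abs(observed - predicted)) / np.sum(np.abs(observed - observed.mean()))

# Illustration with made-up values: a biased prediction can still have r = 1
# while VEcv and E1 expose the loss of accuracy.
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred = obs * 1.5 + 2.0                      # perfectly correlated but biased
print(np.corrcoef(obs, pred)[0, 1])         # r = 1.0
print(vecv(obs, pred), e1(obs, pred))       # both far below their maxima
```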
A multi-step reaction model for ignition of fully-dense Al-CuO nanocomposite powders
NASA Astrophysics Data System (ADS)
Stamatis, D.; Ermoline, A.; Dreizin, E. L.
2012-12-01
A multi-step reaction model is developed to describe heterogeneous processes occurring upon heating of an Al-CuO nanocomposite material prepared by arrested reactive milling. The reaction model couples a previously derived Cabrera-Mott oxidation mechanism describing initial, low-temperature processes and an aluminium oxidation model including formation of different alumina polymorphs at increased film thicknesses and higher temperatures. The reaction model is tuned using traces measured by differential scanning calorimetry. Ignition is studied for thin powder layers and individual particles using, respectively, heated filament (heating rates of 10³-10⁴ K s⁻¹) and laser ignition (heating rate ∼10⁶ K s⁻¹) experiments. The developed heterogeneous reaction model predicts a sharp temperature increase, which can be associated with ignition, when the laser power approaches the experimental ignition threshold. In experiments, particles ignited by the laser beam are observed to explode, indicating a substantial gas release accompanying ignition. For the heated filament experiments, the model predicts exothermic reactions at the temperatures at which ignition is observed experimentally; however, strong thermal contact between the metal filament and powder prevents the model from predicting the thermal runaway. It is suggested that oxygen gas release from decomposing CuO, as observed from particles exploding upon ignition in the laser beam, disrupts the thermal contact of the powder and filament; this phenomenon must be included in the filament ignition model to enable prediction of the temperature runaway.
Evaluation of an ensemble of genetic models for prediction of a quantitative trait.
Milton, Jacqueline N; Steinberg, Martin H; Sebastiani, Paola
2014-01-01
Many genetic markers have been shown to be associated with common quantitative traits in genome-wide association studies. Typically these associated genetic markers have small to modest effect sizes and individually they explain only a small amount of the variability of the phenotype. In order to build a genetic prediction model without fitting a multiple linear regression model with possibly hundreds of genetic markers as predictors, researchers often summarize the joint effect of risk alleles into a genetic score that is used as a covariate in the genetic prediction model. However, the prediction accuracy can be highly variable and selecting the optimal number of markers to be included in the genetic score is challenging. In this manuscript we present a strategy to build an ensemble of genetic prediction models from data and we show that the ensemble-based method makes the challenge of choosing the number of genetic markers more amenable. Using simulated data with varying heritability and number of genetic markers, we compare the predictive accuracy and inclusion of true positive and false positive markers of a single genetic prediction model and our proposed ensemble method. The results show that the ensemble of genetic models tends to include a larger number of genetic variants than a single genetic model and it is more likely to include all of the true genetic markers. This increased sensitivity is obtained at the price of a lower specificity that appears to minimally affect the predictive accuracy of the ensemble.
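The abstract does not spell out the ensemble construction, so the sketch below is only a generic illustration of the idea: genetic-score prediction models are built for several candidate marker-set sizes and their predictions are averaged, instead of selecting a single optimal number of markers. All data, effect estimates, and set sizes are simulated placeholders, not the authors' procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Simulated data: 100 markers, the first 20 truly associated with the trait.
n, m = 1000, 100
G = rng.integers(0, 3, size=(n, m)).astype(float)   # genotypes coded 0/1/2
beta = np.zeros(m); beta[:20] = rng.normal(0, 0.3, 20)
y = G @ beta + rng.normal(0, 1.0, n)
G_tr, G_te, y_tr, y_te = train_test_split(G, y, test_size=0.3, random_state=1)

# Per-marker effect estimates from simple single-marker regressions on training data.
eff = np.array([np.polyfit(G_tr[:, j], y_tr, 1)[0] for j in range(m)])
order = np.argsort(-np.abs(eff))            # markers ranked by estimated effect size

# Build one genetic-score model per candidate marker-set size, then average
# their predictions: a simple ensemble instead of picking one "best" size.
preds = []
for k in (5, 10, 20, 40, 80):
    top = order[:k]
    score_tr = G_tr[:, top] @ eff[top]
    score_te = G_te[:, top] @ eff[top]
    lm = LinearRegression().fit(score_tr.reshape(-1, 1), y_tr)
    preds.append(lm.predict(score_te.reshape(-1, 1)))

ensemble_pred = np.mean(preds, axis=0)
print("ensemble accuracy r =", np.corrcoef(ensemble_pred, y_te)[0, 1].round(3))
```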
NASA Astrophysics Data System (ADS)
Lee, Soon Hwan; Kim, Ji Sun; Lee, Kang Yeol; Shon, Keon Tae
2017-04-01
Air quality in Korea is worsening due to increasing particulate matter (PM). At present, the PM forecast is issued based on PM concentrations predicted by a numerical air quality prediction model. However, forecast accuracy is not as high as expected because of various uncertainties in the physical and chemical characteristics of PM. The purpose of this study was to develop a numerical-statistical ensemble model to improve the accuracy of PM10 concentration predictions. The numerical models used in this study are the three-dimensional atmospheric Weather Research and Forecasting (WRF) model and the Community Multiscale Air Quality (CMAQ) model. The target areas for the PM forecast are the Seoul, Busan, Daegu, and Daejeon metropolitan areas in Korea. The data used in model development are observed PM concentrations and CMAQ predictions over a 3-month period (March 1 - May 31, 2014). A dynamic-statistical technique for reducing the systematic error of the CMAQ predictions was applied through a dynamic linear model (DLM) based on Bayesian Kalman filtering. Applying the corrections generated from the dynamic linear model to the forecasting of PM concentrations improved accuracy, particularly at high PM concentrations, where the damage is relatively large.
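A minimal sketch of the kind of Kalman-filter bias correction a dynamic linear model applies to raw CMAQ output, assuming a scalar random-walk bias state and made-up forecast/observation series; the operational DLM in the study is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical series: CMAQ PM10 forecasts and matched observations with a
# slowly drifting systematic error.
T = 90
cmaq = 40 + 10 * np.sin(np.arange(T) / 7.0) + rng.normal(0, 3, T)
obs = cmaq + 8 + 0.1 * np.arange(T) + rng.normal(0, 5, T)

# Scalar dynamic linear model for the forecast bias b_t:
#   state:        b_t = b_{t-1} + w_t,   w_t ~ N(0, q)
#   observation:  (obs_t - cmaq_t) = b_t + v_t,   v_t ~ N(0, r)
q, r = 1.0, 25.0
b, P = 0.0, 100.0               # initial bias estimate and its variance
corrected = np.empty(T)
for t in range(T):
    # predict step (random walk), then correct the current raw forecast
    # using only information available before time t
    P = P + q
    corrected[t] = cmaq[t] + b
    # update step with the observed forecast error at time t
    K = P / (P + r)             # Kalman gain
    b = b + K * ((obs[t] - cmaq[t]) - b)
    P = (1 - K) * P

raw_rmse = np.sqrt(np.mean((obs - cmaq) ** 2))
cor_rmse = np.sqrt(np.mean((obs - corrected) ** 2))
print(f"RMSE raw CMAQ: {raw_rmse:.1f}, bias-corrected: {cor_rmse:.1f}")
```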
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, Jeffrey W., E-mail: jeffrey.fisher@fda.hhs.gov; Twaddle, Nathan C.; Vanlandingham, Michelle
A physiologically based pharmacokinetic (PBPK) model was developed for bisphenol A (BPA) in adult rhesus monkeys using intravenous (iv) and oral bolus doses of 100 μg d6-BPA/kg (). This calibrated PBPK adult monkey model for BPA was then evaluated against published monkey kinetic studies with BPA. Using two versions of the adult monkey model based on monkey BPA kinetic data from and , the aglycone BPA pharmacokinetics were simulated for human oral ingestion of 5 mg d16-BPA per person (Voelkel et al., 2002). Voelkel et al. were unable to detect the aglycone BPA in plasma, but were able to detect BPA metabolites. These human model predictions of the aglycone BPA in plasma were then compared to previously published PBPK model predictions obtained by simulating the Voelkel et al. kinetic study. Our human BPA model, using two parameter sets reflecting the two adult monkey studies, predicted lower aglycone levels in human serum than the previous human BPA PBPK model predictions in both cases. BPA was metabolized at all ages of monkey (PND 5 to adult) by the gut wall and liver. However, the hepatic metabolism of BPA and systemic clearance of its phase II metabolites appear to be slower in younger monkeys than adults. The use of the current non-human primate BPA model parameters provides more confidence in predicting aglycone BPA serum levels in humans after oral ingestion of BPA. Highlights: • A bisphenol A (BPA) PBPK model for the infant and adult monkey was constructed. • The hepatic metabolic rate of BPA increased with age of the monkey. • The systemic clearance rate of metabolites increased with age of the monkey. • Gut wall metabolism of orally administered BPA was substantial across all ages of monkeys. • Aglycone BPA plasma concentrations were predicted in humans given oral doses of deuterated BPA.
Genomic Prediction Accounting for Residual Heteroskedasticity.
Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M
2015-11-12
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. Copyright © 2016 Ou et al.
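For readers unfamiliar with the homoskedastic baseline being extended here, the sketch below implements plain GBLUP on simulated genotypes: a VanRaden-type genomic relationship matrix followed by BLUP of genomic values with assumed variance components. The heteroskedastic, heavy-tailed, and variable-selection extensions of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated genotypes (0/1/2) and phenotypes for a homoskedastic GBLUP baseline;
# the paper's extension adds environmentally driven residual variances.
n, m = 300, 1000
M = rng.binomial(2, 0.3, size=(n, m)).astype(float)
u_true = (M - M.mean(axis=0)) @ rng.normal(0, 0.05, m)
y = 10 + u_true + rng.normal(0, 1.0, n)

# VanRaden genomic relationship matrix from centered genotypes.
p = M.mean(axis=0) / 2.0
W = M - 2.0 * p
G = W @ W.T / (2.0 * np.sum(p * (1.0 - p)))

# GBLUP with assumed variance components (in practice estimated by REML/Bayes).
var_u, var_e = 1.0, 1.0
V = var_u * G + var_e * np.eye(n)
Vinv = np.linalg.inv(V)
ones = np.ones(n)
mu = (ones @ Vinv @ y) / (ones @ Vinv @ ones)     # GLS estimate of the mean
u_hat = var_u * G @ Vinv @ (y - mu * ones)        # BLUP of genomic values
print("accuracy cor(u_hat, u_true) =", np.corrcoef(u_hat, u_true)[0, 1].round(3))
```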
Predicting Trihalomethanes (THMs) in the New York City Water Supply
NASA Astrophysics Data System (ADS)
Mukundan, R.; Van Dreason, R.
2013-12-01
Chlorine, a commonly used disinfectant in most water supply systems, can combine with organic carbon to form disinfectant byproducts including carcinogenic trihalomethanes (THMs). We used water quality data from 24 monitoring sites within the New York City (NYC) water supply distribution system, measured between January 2009 and April 2012, to develop site-specific empirical models for predicting total trihalomethane (TTHM) levels. Terms in the model included various combinations of the following water quality parameters: total organic carbon, pH, specific conductivity, and water temperature. Reasonable estimates of TTHM levels were achieved with overall R2 of about 0.87 and predicted values within 5 μg/L of measured values. The relative importance of factors affecting TTHM formation was estimated by ranking the model regression coefficients. Site-specific models showed improved model performance statistics compared to a single model for the entire system most likely because the single model did not consider locational differences in the water treatment process. Although never out of compliance in 2011, the TTHM levels in the water supply increased following tropical storms Irene and Lee with 45% of the samples exceeding the 80 μg/L Maximum Contaminant Level (MCL) in October and November. This increase was explained by changes in water quality parameters, particularly by the increase in total organic carbon concentration and pH during this period.
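A rough sketch of the site-specific empirical modelling approach, fitting one ordinary least-squares TTHM regression per monitoring site on the four water-quality terms named above; all data, coefficients, and the example prediction are fabricated placeholders, not values from the NYC system.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Hypothetical monitoring records: one empirical TTHM model per site, fitted on
# the water-quality terms named in the abstract (TOC, pH, conductivity, temperature).
df = pd.DataFrame({
    "site": rng.integers(0, 24, 1200),
    "toc": rng.normal(1.8, 0.4, 1200),    # mg/L
    "ph": rng.normal(7.3, 0.2, 1200),
    "cond": rng.normal(75, 10, 1200),     # uS/cm
    "temp": rng.normal(12, 6, 1200),      # deg C
})
df["tthm"] = 5 + 20 * df.toc + 4 * df.ph + 0.9 * df.temp + rng.normal(0, 4, 1200)

site_models = {}
for site, grp in df.groupby("site"):
    X = grp[["toc", "ph", "cond", "temp"]].to_numpy()
    site_models[site] = LinearRegression().fit(X, grp["tthm"].to_numpy())

# Predict TTHM for a new sample at site 3 (values are placeholders).
new_sample = np.array([[2.4, 7.5, 80.0, 18.0]])
print("predicted TTHM (ug/L):", site_models[3].predict(new_sample).round(1))
```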
De Kauwe, Martin G; Medlyn, Belinda E; Zaehle, Sönke; Walker, Anthony P; Dietze, Michael C; Wang, Ying-Ping; Luo, Yiqi; Jain, Atul K; El-Masri, Bassil; Hickler, Thomas; Wårlind, David; Weng, Ensheng; Parton, William J; Thornton, Peter E; Wang, Shusen; Prentice, I Colin; Asao, Shinichi; Smith, Benjamin; McCarthy, Heather R; Iversen, Colleen M; Hanson, Paul J; Warren, Jeffrey M; Oren, Ram; Norby, Richard J
2014-01-01
Elevated atmospheric CO2 concentration (eCO2) has the potential to increase vegetation carbon storage if increased net primary production causes increased long-lived biomass. Model predictions of eCO2 effects on vegetation carbon storage depend on how allocation and turnover processes are represented. We used data from two temperate forest free-air CO2 enrichment (FACE) experiments to evaluate representations of allocation and turnover in 11 ecosystem models. Observed eCO2 effects on allocation were dynamic. Allocation schemes based on functional relationships among biomass fractions that vary with resource availability were best able to capture the general features of the observations. Allocation schemes based on constant fractions or resource limitations performed less well, with some models having unintended outcomes. Few models represent turnover processes mechanistically and there was wide variation in predictions of tissue lifespan. Consequently, models did not perform well at predicting eCO2 effects on vegetation carbon storage. Our recommendations to reduce uncertainty include: use of allocation schemes constrained by biomass fractions; careful testing of allocation schemes; and synthesis of allocation and turnover data in terms of model parameters. Data from intensively studied ecosystem manipulation experiments are invaluable for constraining models and we recommend that such experiments should attempt to fully quantify carbon, water and nutrient budgets. PMID:24844873
Isma’eel, Hussain A.; Sakr, George E.; Almedawar, Mohamad M.; Fathallah, Jihan; Garabedian, Torkom; Eddine, Savo Bou Zein
2015-01-01
Background High dietary salt intake is directly linked to hypertension and cardiovascular diseases (CVDs). Predicting behaviors regarding salt intake habits is vital to guide interventions and increase their effectiveness. We aim to compare the accuracy of an artificial neural network (ANN) based tool that predicts behavior from key knowledge questions along with clinical data in a high cardiovascular risk cohort relative to the least squares model (LSM) method. Methods We collected knowledge, attitude and behavior data on 115 patients. A behavior score was calculated to classify patients’ behavior towards reducing salt intake. The accuracy comparison between ANN and regression analysis was performed using the bootstrap technique with 200 iterations. Results Starting from a 69-item questionnaire, a reduced model was developed that included the eight knowledge items found to result in the highest accuracy of 62% (CI 58-67%). The best prediction accuracy in the full and reduced models was attained by ANN at 66% and 62%, respectively, compared to the full and reduced LSM at 40% and 34%, respectively. The average relative increase in accuracy of ANN over LSM in the full and reduced models was 82% and 102%, respectively. Conclusions Using ANN modeling, we can predict salt reduction behaviors with 66% accuracy. The statistical model has been implemented in an online calculator and can be used in clinics to estimate a patient’s behavior. This will support future research to further establish the clinical utility of this tool for guiding therapeutic salt reduction interventions in high cardiovascular risk individuals. PMID:26090333
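The comparison described above can be sketched generically as follows, with placeholder data standing in for the knowledge items and behaviour classes: an MLP classifier (the ANN) and a rounded least-squares regression (the LSM) are each scored by out-of-bag accuracy over 200 bootstrap resamples. The questionnaire items, network architecture, and accuracies reported by the authors are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LinearRegression
from sklearn.utils import resample

rng = np.random.default_rng(5)

# Placeholder data: 8 knowledge items plus clinical covariates, and a 3-level
# salt-intake behaviour class derived from a behaviour score.
X = rng.normal(size=(115, 10))
y = rng.integers(0, 3, size=115)

def bootstrap_accuracy(fit_predict, X, y, n_iter=200):
    """Out-of-bag accuracy averaged over bootstrap resamples."""
    accs = []
    for i in range(n_iter):
        idx = resample(np.arange(len(y)), random_state=i)     # sample with replacement
        oob = np.setdiff1d(np.arange(len(y)), idx)             # out-of-bag rows
        if oob.size == 0:
            continue
        accs.append(np.mean(fit_predict(X[idx], y[idx], X[oob]) == y[oob]))
    return np.mean(accs)

ann = lambda Xtr, ytr, Xte: MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                          random_state=0).fit(Xtr, ytr).predict(Xte)
lsm = lambda Xtr, ytr, Xte: np.clip(np.rint(
    LinearRegression().fit(Xtr, ytr).predict(Xte)), 0, 2).astype(int)

print("ANN bootstrap accuracy:", round(bootstrap_accuracy(ann, X, y), 2))
print("LSM bootstrap accuracy:", round(bootstrap_accuracy(lsm, X, y), 2))
```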
Radiomics biomarkers for accurate tumor progression prediction of oropharyngeal cancer
NASA Astrophysics Data System (ADS)
Hadjiiski, Lubomir; Chan, Heang-Ping; Cha, Kenny H.; Srinivasan, Ashok; Wei, Jun; Zhou, Chuan; Prince, Mark; Papagerakis, Silvana
2017-03-01
Accurate tumor progression prediction for oropharyngeal cancers is crucial for identifying patients who would best be treated with optimized treatment and therefore minimize the risk of under- or over-treatment. An objective decision support system that can merge the available radiomics, histopathologic and molecular biomarkers in a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate assessment of oropharyngeal tumor progression. In this study, we evaluated the feasibility of developing individual and combined predictive models based on quantitative image analysis from radiomics, histopathology and molecular biomarkers for oropharyngeal tumor progression prediction. With IRB approval, 31, 84, and 127 patients with head and neck CT (CT-HN), tumor tissue microarrays (TMAs) and molecular biomarker expressions, respectively, were collected. For 8 of the patients all 3 types of biomarkers were available and they were sequestered in a test set. The CT-HN lesions were automatically segmented using our level sets based method. Morphological, texture and molecular based features were extracted from CT-HN and TMA images, and selected features were merged by a neural network. The classification accuracy was quantified using the area under the ROC curve (AUC). Test AUCs of 0.87, 0.74, and 0.71 were obtained with the individual predictive models based on radiomics, histopathologic, and molecular features, respectively. Combining the radiomics and molecular models increased the test AUC to 0.90. Combining all 3 models increased the test AUC further to 0.94. This preliminary study demonstrates that the individual domains of biomarkers are useful and the integrated multi-domain approach is most promising for tumor progression prediction.
Predictive microbiology in a dynamic environment: a system theory approach.
Van Impe, J F; Nicolaï, B M; Schellekens, M; Martens, T; De Baerdemaeker, J
1995-05-01
The main factors influencing the microbial stability of chilled prepared food products, for which there is increasing consumer interest, are temperature, pH, and water activity. Unlike the pH and the water activity, the temperature may vary extensively throughout the complete production and distribution chain. The shelf life of this kind of food is usually limited due to spoilage by common microorganisms and the increased risk posed by food pathogens. In predicting the shelf life, mathematical models are a powerful tool to increase insight into the different subprocesses and their interactions. However, the predictive value of the sigmoidal functions reported in the literature to describe a bacterial growth curve as an explicit function of time is only guaranteed at a constant temperature within the temperature range of microbial growth. As a result, they are less appropriate in optimization studies of a whole production and distribution chain. In this paper a more general modeling approach, inspired by system theory concepts, is presented for cases in which, for instance, time-varying temperature profiles are to be taken into account. As a case study, we discuss a recently proposed dynamic model to predict microbial growth and inactivation under time-varying temperature conditions from a system theory point of view. Further, the validity of this methodology is illustrated with experimental data for Brochothrix thermosphacta and Lactobacillus plantarum. Finally, we propose some possible refinements of this model inspired by experimental results.
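As a minimal illustration of the dynamic (system-theoretic) view advocated above, the sketch below integrates exponential growth whose specific rate follows a square-root (Ratkowsky-type) secondary model driven by a fluctuating temperature profile. The parameters, the temperature profile, and the omission of lag and stationary phases are all simplifying assumptions, not the model discussed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Square-root (Ratkowsky) secondary model for the maximum specific growth rate,
# with illustrative parameters (b and T_min are hypothetical, not from the paper).
b, T_min = 0.03, 268.0            # rate coefficient and notional minimum temperature (K)

def mu_max(T_kelvin):
    return (b * max(T_kelvin - T_min, 0.0)) ** 2   # h^-1

def temperature(t_hours):
    """A fluctuating chill-chain temperature profile (hypothetical)."""
    return 276.0 + 3.0 * np.sin(2 * np.pi * t_hours / 24.0)

# Dynamic primary model in its simplest form: exponential growth driven by the
# instantaneous temperature (lag and stationary phases omitted for brevity).
def dlnN_dt(t, lnN):
    return [mu_max(temperature(t))]

sol = solve_ivp(dlnN_dt, t_span=(0.0, 120.0), y0=[np.log(1e2)], max_step=0.5)
print(f"predicted count after 120 h: {np.exp(sol.y[0, -1]):.2e} CFU/g")
```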
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohri, Nitin; Dicker, Adam P.; Lawrence, Yaacov Richard, E-mail: yaacovla@gmail.com
2012-05-01
Purpose: Hypofractionated radiotherapy (hRT) is being explored for a number of malignancies. The potential benefit of giving concurrent chemotherapy with hRT is not known. We sought to predict the effects of combined modality treatments by using mathematical models derived from laboratory data. Methods and Materials: Data from 26 published clonogenic survival assays for cancer cell lines with and without the use of radiosensitizing chemotherapy were collected. The first three data points of the RT arm of each assay were used to derive parameters for the linear quadratic (LQ) model, the multitarget (MT) model, and the generalized linear quadratic (gLQ) model. For each assay and model, the difference between the predicted and observed surviving fractions at the highest tested RT dose was calculated. The gLQ model was fitted to all the data from each RT cell survival assay, and the biologically equivalent doses in 2-Gy fractions (EQD2s) of clinically relevant hRT regimens were calculated. The increase in cell kill conferred by the addition of chemotherapy was used to estimate the EQD2 of hRT along with a radiosensitizing agent. For comparison, this was repeated using conventionally fractionated RT regimens. Results: At a mean RT dose of 8.0 Gy, the average errors for the LQ, MT, and gLQ models were 1.63, 0.83, and 0.56 log units, respectively, favoring the gLQ model (p < 0.05). Radiosensitizing chemotherapy increased the EQD2 of hRT schedules by an average of 28% to 82%, depending on disease site. This increase was similar to the gains predicted for the addition of chemotherapy to conventionally fractionated RT. Conclusions: Based on published in vitro assays, the gLQ equation is superior to the LQ and MT models in predicting cell kill at high doses of RT. Modeling exercises demonstrate that significant increases in biologically equivalent dose may be achieved with the addition of radiosensitizing agents to hRT. Clinical study of this approach is warranted.
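Two of the building blocks mentioned above, LQ cell survival and EQD2, are standard and easy to state; the sketch below evaluates them for an illustrative hypofractionated schedule. The alpha, beta, and schedule values are placeholders, and the gLQ model and the chemotherapy dose-boost estimation used in the study are not reproduced here.

```python
import numpy as np

def lq_surviving_fraction(dose_per_fraction, n_fractions, alpha, beta):
    """Linear-quadratic cell survival for n identical fractions."""
    d = dose_per_fraction
    return np.exp(-n_fractions * (alpha * d + beta * d ** 2))

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent total dose in 2-Gy fractions under the LQ model."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Illustrative numbers (alpha, beta and the schedule are placeholders):
alpha, beta = 0.3, 0.03            # Gy^-1, Gy^-2  ->  alpha/beta = 10 Gy
print(lq_surviving_fraction(8.0, 3, alpha, beta))   # hypofractionated 3 x 8 Gy
print(eqd2(24.0, 8.0, alpha / beta))                # its EQD2 for alpha/beta = 10 (36 Gy)
```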
Calus, M P L; de Haas, Y; Veerkamp, R F
2013-10-01
Genomic selection holds the promise to be particularly beneficial for traits that are difficult or expensive to measure, such that access to phenotypes on large daughter groups of bulls is limited. Instead, cow reference populations can be generated, potentially supplemented with existing information from the same or (highly) correlated traits available on bull reference populations. The objective of this study, therefore, was to develop a model to perform genomic predictions and genome-wide association studies based on a combined cow and bull reference data set, with the accuracy of the phenotypes differing between the cow and bull genomic selection reference populations. The developed bivariate Bayesian stochastic search variable selection model allowed for an unbalanced design by imputing residuals in the residual updating scheme for all missing records. The performance of this model is demonstrated on a real data example, where the analyzed trait, milk fat or protein yield, was either measured only on a cow or a bull reference population, or recorded on both. Our results showed that the developed bivariate Bayesian stochastic search variable selection model was able to analyze 2 traits, even though animals had measurements on only 1 of the 2 traits. The Bayesian stochastic search variable selection model yielded consistently higher accuracy for fat yield compared with a model without variable selection, both for the univariate and bivariate analyses, whereas the accuracy of both models was very similar for protein yield. The bivariate model identified several additional quantitative trait loci peaks compared with the single-trait models on either trait. In addition, the bivariate models showed a marginal increase in accuracy of genomic predictions for the cow traits (0.01-0.05), although a greater increase in accuracy is expected as the size of the bull population increases. Our results emphasize that the chosen prior values in Bayesian genomic prediction models are especially important in small data sets. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
M. Hurteau; M. North; T. Foines
2009-01-01
Climate change models for California's Sierra Nevada predict greater inter-annual variability in precipitation over the next 50 years. These increases in precipitation variability, coupled with increases in nitrogen deposition from fossil fuel consumption, are likely to result in increased productivity levels and significant increases in...
Selby, Edward A; Kranzler, Amy; Panza, Emily; Fehling, Kara B
2016-04-01
Influenced by chaos theory, the emotional cascade model proposes that rumination and negative emotion may promote each other in a self-amplifying cycle that increases over time. Accordingly, exponential-compounding effects may better describe the relationship between rumination and negative emotion when they occur in impulsive persons, and predict impulsive behavior. Forty-seven community and undergraduate participants who reported frequent engagement in impulsive behaviors monitored their ruminative thoughts and negative emotion multiple times daily for two weeks using digital recording devices. Hypotheses were tested using cross-lagged mixed model analyses. Findings indicated that rumination predicted subsequent elevations in rumination that lasted over extended periods of time. Rumination and negative emotion predicted increased levels of each other at subsequent assessments, and exponential functions for these associations were supported. Results also supported a synergistic effect between rumination and negative emotion, predicting larger elevations in subsequent rumination and negative emotion than when one variable alone was elevated. Finally, there were synergistic effects of rumination and negative emotion in predicting number of impulsive behaviors subsequently reported. These findings are consistent with the emotional cascade model in suggesting that momentary rumination and negative emotion progressively propagate and magnify each other over time in impulsive people, promoting impulsive behavior. © 2014 Wiley Periodicals, Inc.
Can plantar soft tissue mechanics enhance prognosis of diabetic foot ulcer?
Naemi, R; Chatzistergos, P; Suresh, S; Sundar, L; Chockalingam, N; Ramachandran, A
2017-04-01
To investigate whether assessment of the mechanical properties of plantar soft tissue can increase the accuracy of predicting diabetic foot ulceration (DFU). 40 patients with diabetic neuropathy and no DFU were recruited. Commonly assessed clinical parameters, along with plantar soft tissue stiffness and thickness, were measured at baseline using an ultrasound elastography technique. 7 patients developed foot ulceration during a 12-month follow-up. Logistic regression was used to identify parameters that contribute to predicting DFU incidence. The effect of using parameters related to the mechanical behaviour of plantar soft tissue on the specificity, sensitivity, prediction strength and accuracy of the predictive models for DFU was assessed. Patients with higher plantar soft tissue thickness and lower stiffness at the 1st metatarsal head area showed an increased risk of DFU. Adding plantar soft tissue stiffness and thickness to the model improved its specificity (by 3%), sensitivity (by 14%), prediction accuracy (by 5%) and prognosis strength (by 1%). The model containing all predictors was able to effectively (χ²(8, N = 40) = 17.55, P < 0.05) distinguish between the patients with and without DFU incidence. The mechanical properties of plantar soft tissue can be used to improve the predictability of DFU in moderate/high risk patients. Copyright © 2017 Elsevier B.V. All rights reserved.
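A schematic version of the modelling comparison, assuming fabricated data of the same shape as the cohort (40 patients, roughly 7 incident ulcers): logistic regression is fitted with and without the two tissue-mechanics predictors and in-sample sensitivity and specificity are reported. This is only an illustration of the workflow, not a re-analysis of the study data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(6)

# Placeholder cohort: routine clinical predictors, with plantar soft-tissue
# stiffness and thickness at the 1st metatarsal head added as extra columns.
n = 40
clinical = rng.normal(size=(n, 6))
stiffness = rng.normal(60, 15, n)
thickness = rng.normal(10, 2, n)
ulcer = rng.binomial(1, 0.18, n)            # roughly 7/40 incident ulcers

X_base = clinical
X_full = np.column_stack([clinical, stiffness, thickness])

for label, X in (("clinical only", X_base), ("clinical + tissue mechanics", X_full)):
    model = LogisticRegression(max_iter=1000).fit(X, ulcer)
    pred = model.predict(X)
    sens = recall_score(ulcer, pred, zero_division=0)          # sensitivity
    spec = recall_score(1 - ulcer, 1 - pred, zero_division=0)  # specificity
    print(f"{label}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```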
Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter
2017-08-10
A better understanding of the genetic architecture underlying complex traits (e.g., the distribution of causal variants and their effects) may aid genomic prediction. Here, we hypothesized that the genomic variants of complex traits might be enriched in a subset of genomic regions defined by genes grouped on the basis of "Gene Ontology" (GO), and that incorporating this independent biological information into genomic prediction models might improve their predictive ability. Four complex traits (i.e., milk, fat and protein yields, and mastitis) together with imputed sequence variants in Holstein (HOL) and Jersey (JER) cattle were analysed. We first carried out a post-GWAS analysis in a HOL training population to assess the degree of enrichment of the association signals in the gene regions defined by each GO term. We then extended the genomic best linear unbiased prediction model (GBLUP) to a genomic feature BLUP (GFBLUP) model, including an additional genomic effect quantifying the joint effect of a group of variants located in a genomic feature. The GBLUP model using a single random effect assumes that all genomic variants contribute to the genomic relationship equally, whereas GFBLUP attributes different weights to the individual genomic relationships in the prediction equation based on the estimated genomic parameters. Our results demonstrate that the immune-relevant GO terms were more associated with mastitis than with milk production, and several biologically meaningful GO terms improved the prediction accuracy with GFBLUP for the four traits, as compared with GBLUP. The improvement of the genomic prediction between breeds (the average increase across the four traits was 0.161) was more apparent than that within the HOL (the average increase across the four traits was 0.020). Our genomic feature modelling approaches provide a framework to simultaneously explore the genetic architecture and genomic prediction of complex traits by taking advantage of independent biological knowledge.
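In hedged notation (a paraphrase of the description above, not the authors' exact equations), the extension from GBLUP to GFBLUP adds one extra random genomic effect for the markers falling inside the genomic feature:

```latex
% Hedged paraphrase of the model comparison described in the abstract.
% G, G_f, and G_r are genomic relationship matrices built from all markers,
% feature markers, and remaining markers, respectively.
\begin{align*}
\text{GBLUP:}  \quad \mathbf{y} &= \mathbf{1}\mu + \mathbf{g} + \mathbf{e},
   & \mathbf{g} &\sim N(\mathbf{0},\, \mathbf{G}\sigma^2_g), \quad
     \mathbf{e} \sim N(\mathbf{0},\, \mathbf{I}\sigma^2_e), \\
\text{GFBLUP:} \quad \mathbf{y} &= \mathbf{1}\mu + \mathbf{f} + \mathbf{r} + \mathbf{e},
   & \mathbf{f} &\sim N(\mathbf{0},\, \mathbf{G}_f\sigma^2_f), \quad
     \mathbf{r} \sim N(\mathbf{0},\, \mathbf{G}_r\sigma^2_r).
\end{align*}
```

The size of the feature variance relative to the remaining genomic variance is what allows an informative GO term to reweight the genomic relationships used in prediction.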
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside with the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
Ji, Xiang; Liu, Li-Ming; Li, Hong-Qing
2014-11-01
Taking Jinjing Town in the Dongting Lake area as a case, this paper analyzed the evolution of rural landscape patterns by means of life cycle theory, simulated the evolution cycle curve, and calculated its evolution period; then, by combining this with a CA-Markov model, a complete prediction model was built based on the rules of rural landscape change. The results showed that the rural settlement and paddy landscapes of Jinjing Town would change most by 2020, with the rural settlement landscape increasing to 1194.01 hm2 and the paddy landscape greatly reduced to 3090.24 hm2. The quantitative and spatial prediction accuracies of the model were up to 99.3% and 96.4%, respectively, better than those of a single CA-Markov model. The prediction model of rural landscape pattern change proposed in this paper would be helpful for future rural landscape planning.
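The Markov half of a CA-Markov projection reduces to multiplying the current land-use composition by a transition-probability matrix; the sketch below shows that step with invented classes, areas, and probabilities (the cellular-automata spatial-allocation step is omitted).

```python
import numpy as np

# Minimal Markov-chain step of a CA-Markov projection. Numbers are illustrative,
# not the paper's calibrated transition matrix for Jinjing Town.
classes = ["settlement", "paddy", "forest", "other"]
area_now = np.array([800.0, 3600.0, 1500.0, 2100.0])     # hectares

# Row-stochastic transition probabilities over one projection interval.
P = np.array([
    [0.97, 0.01, 0.00, 0.02],
    [0.06, 0.90, 0.00, 0.04],
    [0.02, 0.03, 0.92, 0.03],
    [0.05, 0.02, 0.01, 0.92],
])
assert np.allclose(P.sum(axis=1), 1.0)

area_next = area_now @ P        # area flowing into each class next interval
for c, a in zip(classes, area_next):
    print(f"{c:>10s}: {a:7.1f} ha")
```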
Unscented Kalman Filter-Trained Neural Networks for Slip Model Prediction
Li, Zhencai; Wang, Yang; Liu, Zhen
2016-01-01
The purpose of this work is to investigate the accurate trajectory tracking control of a wheeled mobile robot (WMR) based on the slip model prediction. Generally, a nonholonomic WMR may increase the slippage risk, when traveling on outdoor unstructured terrain (such as longitudinal and lateral slippage of wheels). In order to control a WMR stably and accurately under the effect of slippage, an unscented Kalman filter and neural networks (NNs) are applied to estimate the slip model in real time. This method exploits the model approximating capabilities of nonlinear state–space NN, and the unscented Kalman filter is used to train NN’s weights online. The slip parameters can be estimated and used to predict the time series of deviation velocity, which can be used to compensate control inputs of a WMR. The results of numerical simulation show that the desired trajectory tracking control can be performed by predicting the nonlinear slip model. PMID:27467703
Irvine, Michael A; Konrad, Bernhard P; Michelow, Warren; Balshaw, Robert; Gilbert, Mark; Coombs, Daniel
2018-03-01
Increasing HIV testing rates among high-risk groups should lead to increased numbers of cases being detected. Coupled with effective treatment and behavioural change among individuals with detected infection, increased testing should also reduce onward incidence of HIV in the population. However, it can be difficult to predict the strengths of these effects and thus the overall impact of testing. We construct a mathematical model of an ongoing HIV epidemic in a population of gay, bisexual and other men who have sex with men. The model incorporates different levels of infection risk, testing habits and awareness of HIV status among members of the population. We introduce a novel Bayesian analysis that is able to incorporate potentially unreliable sexual health survey data along with firm clinical diagnosis data. We parameterize the model using survey and diagnostic data drawn from a population of men in Vancouver, Canada. We predict that increasing testing frequency will yield a small-scale but long-term impact on the epidemic in terms of new infections averted, as well as a large short-term impact on numbers of detected cases. These effects are predicted to occur even when a testing intervention is short-lived. We show that a short-lived but intensive testing campaign can potentially produce many of the same benefits as a campaign that is less intensive but of longer duration. © 2018 The Author(s).
NEXT Ion Thruster Thermal Model
NASA Technical Reports Server (NTRS)
VanNoord, Jonathan L.
2010-01-01
As the NEXT ion thruster progresses towards higher technology readiness, it is necessary to develop the tools that will support its implementation into flight programs. An ion thruster thermal model has been developed for the latest prototype model design to aid in predicting thruster temperatures for various missions. This model is comprised of two parts. The first part predicts the heating from the discharge plasma for various throttling points based on a discharge chamber plasma model. This model shows, as expected, that the internal heating is strongly correlated with the discharge power. Typically, the internal plasma heating increases with beam current and decreases slightly with beam voltage. The second is a model based on a finite difference thermal code used to predict the thruster temperatures. Both parts of the model will be described in this paper. This model has been correlated with a thermal development test on the NEXT Prototype Model 1 thruster with most predicted component temperatures within 5 to 10 C of test temperatures. The model indicates that heating, and hence current collection, is not based purely on the footprint of the magnet rings, but follows a 0.1:1:2:1 ratio for the cathode-to-conical-to-cylindrical-to-front magnet rings. This thermal model has also been used to predict the temperatures during the worst case mission profile that is anticipated for the thruster. The model predicts ample thermal margin for all of its components except the external cable harness under the hottest anticipated mission scenario. The external cable harness will be re-rated or replaced to meet the predicted environment.
NASA Astrophysics Data System (ADS)
Xiong, H.; Hamila, N.; Boisse, P.
2017-10-01
Pre-impregnated thermoplastic composites have recently attracted increasing interest in the automotive industry for their excellent mechanical properties and rapid-cycle manufacturing process. Modelling and numerical simulation of forming processes for composite parts with complex geometry are necessary to predict and optimize manufacturing practices, especially with regard to consolidation effects. A viscoelastic relaxation model is proposed to characterize the consolidation behavior of thermoplastic prepregs based on compaction tests over a range of temperatures. The intimate contact model is employed to predict the evolution of consolidation, which permits prediction of the void microstructure present through the prepreg. Within a hyperelastic framework, several simulation tests are run by combining a newly developed solid-shell finite element and the consolidation models.
NASA Astrophysics Data System (ADS)
Jepsen, S. M.; Harmon, T. C.; Ficklin, D. L.; Molotch, N. P.; Guan, B.
2018-01-01
Changes in long-term, montane actual evapotranspiration (ET) in response to climate change could impact future water supplies and forest species composition. For scenarios of atmospheric warming, predicted changes in long-term ET tend to differ between studies using space-for-time substitution (STS) models and integrated watershed models, and the influence of spatially varying factors on these differences is unclear. To examine this, we compared warming-induced (+2 to +6 °C) changes in ET simulated by an STS model and an integrated watershed model across zones of elevation, substrate available water capacity, and slope in the snow-influenced upper San Joaquin River watershed, Sierra Nevada, USA. We used the Soil Water and Assessment Tool (SWAT) for the watershed modeling and a Budyko-type relationship for the STS modeling. Spatially averaged increases in ET from the STS model increasingly surpassed those from the SWAT model in the higher elevation zones of the watershed, resulting in 2.3-2.6 times greater values from the STS model at the watershed scale. In sparse, deep colluvium or glacial soils on gentle slopes, the SWAT model produced ET increases exceeding those from the STS model. However, watershed areas associated with these conditions were too localized for SWAT to produce spatially averaged ET-gains comparable to the STS model. The SWAT model results nevertheless demonstrate that such soils on high-elevation, gentle slopes will form ET "hot spots" exhibiting disproportionately large increases in ET, and concomitant reductions in runoff yield, in response to warming. Predicted ET responses to warming from STS models and integrated watershed models may, in general, substantially differ (e.g., factor of 2-3) for snow-influenced watersheds exhibiting an elevational gradient in substrate water holding capacity and slope. Long-term water supplies in these settings may therefore be more resilient to warming than STS model predictions would suggest.
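The STS side of this comparison can be illustrated with one widely used Budyko curve; the sketch below evaluates long-term ET for a hypothetical subcatchment under current and warmed potential evapotranspiration. The specific Budyko-type relationship, PET values, and warming increment used in the study are not reproduced here.

```python
import numpy as np

# One common Budyko-type curve (Budyko, 1974):
#   ET/P = sqrt{ (PET/P) * tanh(P/PET) * [1 - exp(-PET/P)] }
def budyko_et(precip_mm, pet_mm):
    phi = pet_mm / precip_mm                      # aridity index PET/P
    return precip_mm * np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

P = 900.0                          # mm/yr, hypothetical subcatchment precipitation
PET_now, PET_warm = 700.0, 800.0   # hypothetical PET today and under warming

et_now, et_warm = budyko_et(P, PET_now), budyko_et(P, PET_warm)
print(f"ET now: {et_now:.0f} mm/yr, ET warmed: {et_warm:.0f} mm/yr, "
      f"change: {et_warm - et_now:+.0f} mm/yr")
```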
The Behaviour of Naturally Debonded Composites Due to Bending Using a Meso-Level Model
NASA Astrophysics Data System (ADS)
Lord, C. E.; Rongong, J. A.; Hodzic, A.
2012-06-01
Numerical simulations and analytical models are increasingly being sought for the design and behaviour prediction of composite materials. The use of high-performance composite materials is growing in both civilian and defence related applications. With this growth comes the necessity to understand and predict how these new materials will behave under their exposed environments. In this study, the displacement behaviour of naturally debonded composites under out-of-plane bending conditions has been investigated. An analytical approach has been developed to predict the displacement response behaviour. The analytical model supports multi-layered composites with full and partial delaminations. The model can be used to extract bulk effective material properties in which can be represented, later, as an ESL (Equivalent Single Layer). The friction between each of the layers is included in the analytical model and is shown to have distinct behaviour for these types of composites. Acceptable agreement was observed between the model predictions, the ANSYS finite element model, and the experiments.
Predicted effects of climate warming on the distribution of 50 stream fishes in Wisconsin, U.S.A.
Stewart, Jana S.; Lyons, John D.; Matt Mitro,
2010-01-01
Summer air and stream water temperatures are expected to rise in the state of Wisconsin, U.S.A., over the next 50 years. To assess potential climate warming effects on stream fishes, predictive models were developed for 50 common fish species using classification-tree analysis of 69 environmental variables in a geographic information system. Model accuracy was 56·0–93·5% in validation tests. Models were applied to all 86 898 km of stream in the state under four different climate scenarios: current conditions, limited climate warming (summer air temperatures increase 1° C and water 0·8° C), moderate warming (air 3° C and water 2·4° C) and major warming (air 5° C and water 4° C). With climate warming, 23 fishes were predicted to decline in distribution (three to extirpation under the major warming scenario), 23 to increase and four to have no change. Overall, declining species lost substantially more stream length than increasing species gained. All three cold-water and 16 cool-water fishes and four of 31 warm-water fishes were predicted to decline, four warm-water fishes to remain the same and 23 warm-water fishes to increase in distribution. Species changes were predicted to be most dramatic in small streams in northern Wisconsin that currently have cold to cool summer water temperatures and are dominated by cold-water and cool-water fishes, and least in larger and warmer streams and rivers in southern Wisconsin that are currently dominated by warm-water fishes. Results of this study suggest that even small increases in summer air and water temperatures owing to climate warming will have major effects on the distribution of stream fishes in Wisconsin.
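A toy version of the classification-tree workflow, with fabricated environmental predictors and a notional cold-water species: a tree is fitted, its cross-validated accuracy reported, and occurrence re-predicted after shifting the temperature columns by the moderate-warming increments quoted above. The real models use 69 GIS variables and are fitted separately for each of the 50 species.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)

# Placeholder stand-in for the GIS predictor table: summer air/water temperature
# plus other environmental variables per stream segment, and presence/absence
# of one species.
n = 5000
summer_air = rng.normal(22, 3, n)
summer_water = summer_air * 0.8 + rng.normal(0, 1, n)
other_env = rng.normal(size=(n, 4))
X = np.column_stack([summer_air, summer_water, other_env])
present = (summer_water < 20).astype(int)        # a notional cold-water species

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
print("cross-validated accuracy:",
      cross_val_score(tree, X, present, cv=5).mean().round(3))

# A warming scenario is applied by shifting the temperature columns
# (air +3 C, water +2.4 C) and re-predicting occurrence.
X_warm = X.copy()
X_warm[:, 0] += 3.0
X_warm[:, 1] += 2.4
occupied_now = tree.fit(X, present).predict(X).mean()
occupied_warm = tree.predict(X_warm).mean()
print(f"predicted occupancy: now {occupied_now:.2f}, moderate warming {occupied_warm:.2f}")
```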
Estimating wildfire risk on a Mojave Desert landscape using remote sensing and field sampling
Van Linn, Peter F.; Nussear, Kenneth E.; Esque, Todd C.; DeFalco, Lesley A.; Inman, Richard D.; Abella, Scott R.
2013-01-01
Predicting wildfires that affect broad landscapes is important for allocating suppression resources and guiding land management. Wildfire prediction in the south-western United States is of specific concern because of the increasing prevalence and severe effects of fire on desert shrublands and the current lack of accurate fire prediction tools. We developed a fire risk model to predict fire occurrence in a north-eastern Mojave Desert landscape. First we developed a spatial model using remote sensing data to predict fuel loads based on field estimates of fuels. We then modelled fire risk (interactions of fuel characteristics and environmental conditions conducive to wildfire) using satellite imagery, our model of fuel loads, and spatial data on ignition potential (lightning strikes and distance to roads), topography (elevation and aspect) and climate (maximum and minimum temperatures). The risk model was developed during a fire year at our study landscape and validated at a nearby landscape; model performance was accurate and similar at both sites. This study demonstrates that remote sensing techniques used in combination with field surveys can accurately predict wildfire risk in the Mojave Desert and may be applicable to other arid and semiarid lands where wildfires are prevalent.
DeGange, Anthony R.; Marcot, Bruce G.; Lawler, James; Jorgenson, Torre; Winfree, Robert
2014-01-01
We used a modeling framework and a recent ecological land classification and land cover map to predict how ecosystems and wildlife habitat in northwest Alaska might change in response to increasing temperature. Our results suggest modest increases in forest and tall shrub ecotypes in Northwest Alaska by the end of this century thereby increasing habitat for forest-dwelling and shrub-using birds and mammals. Conversely, we predict declines in several more open low shrub, tussock, and meadow ecotypes favored by many waterbird, shorebird, and small mammal species.
Belay, T K; Dagnachew, B S; Boison, S A; Ådnøy, T
2018-03-28
Milk infrared spectra are routinely used for phenotyping traits of interest through links developed between the traits and spectra. Predicted individual traits are then used in genetic analyses for estimated breeding value (EBV) or for phenotypic predictions using a single-trait mixed model; this approach is referred to as indirect prediction (IP). An alternative approach [direct prediction (DP)] is a direct genetic analysis of (a reduced dimension of) the spectra using a multitrait model to predict multivariate EBV of the spectral components and, ultimately, also to predict the univariate EBV or phenotype for the traits of interest. We simulated 3 traits under different genetic (low: 0.10 to high: 0.90) and residual (zero to high: ±0.90) correlation scenarios between the 3 traits and assumed the first trait is a linear combination of the other 2 traits. The aim was to compare the IP and DP approaches for predictions of EBV and phenotypes under the different correlation scenarios. We also evaluated relationships between performances of the 2 approaches and the accuracy of calibration equations. Moreover, the effect of using different regression coefficients estimated from simulated phenotypes (β_p), true breeding values (β_g), and residuals (β_r) on performance of the 2 approaches was evaluated. The simulated data contained 2,100 parents (100 sires and 2,000 cows) and 8,000 offspring (4 offspring per cow). Of the 8,000 observations, 2,000 were randomly selected and used to develop links between the first and the other 2 traits using partial least square (PLS) regression analysis. The different PLS regression coefficients, such as β_p, β_g, and β_r, were used in subsequent predictions following the IP and DP approaches. We used BLUP analyses for the remaining 6,000 observations using the true (co)variance components that had been used for the simulation. Accuracy of prediction (of EBV and phenotype) was calculated as a correlation between predicted and true values from the simulations. The results showed that accuracies of EBV prediction were higher in the DP than in the IP approach. The reverse was true for accuracy of phenotypic prediction when using β_p but not when using β_g and β_r, where accuracy of phenotypic prediction in the DP was slightly higher than in the IP approach. Within the DP approach, accuracies of EBV when using β_g were higher than when using β_p only at the low genetic correlation scenario. However, we found no differences in EBV prediction accuracy between the β_p and β_g in the IP approach. Accuracy of the calibration models increased with an increase in genetic and residual correlations between the traits. Performance of both approaches increased with an increase in accuracy of the calibration models. In conclusion, the DP approach is a good strategy for EBV prediction but not for phenotypic prediction, where the classical PLS regression-based equations or the IP approach provided better results. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
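As an illustration of the calibration step described above, the fragment below fits a partial least squares regression linking a primary trait to two predictor traits and then applies the fitted model to new records, as in the IP approach. This is a minimal sketch, not the authors' code: the simulated data, number of components, and variable names are assumptions for illustration only.

```python
# Minimal PLS calibration sketch (illustrative only; not the study's code or data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Hypothetical calibration set: trait 1 is a linear combination of traits 2 and 3 plus noise.
n_calib = 2000
X = rng.normal(size=(n_calib, 2))          # predictor traits (stand-ins for traits 2 and 3)
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=0.3, size=n_calib)

pls = PLSRegression(n_components=2)
pls.fit(X, y)                              # estimates the calibration coefficients (a beta_p analogue)

# Apply the calibration to new records, as in the indirect-prediction step.
X_new = rng.normal(size=(5, 2))
y_hat = pls.predict(X_new).ravel()
print(y_hat)
```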
Haase, Claudia M.; Holley, Sarah; Bloch, Lian; Verstaen, Alice; Levenson, Robert W.
2016-01-01
Objectively coded interpersonal emotional behaviors that emerged during a 15-minute marital conflict interaction predicted the development of physical symptoms in a 20-year longitudinal study of long-term marriages. Dyadic latent growth curve modeling showed that anger behavior predicted increases in cardiovascular symptoms and stonewalling behavior predicted increases in musculoskeletal symptoms. Both associations were found for husbands (although cross-lagged path models also showed some support for wives) and held after controlling for sociodemographic characteristics (age, education) and behaviors (i.e., exercise, smoking, alcohol consumption, caffeine consumption) known to influence health. Neither association was present at the start of the study; both emerged only over the ensuing 20 years. There was some support for the specificity of these relationships (i.e., stonewalling behavior did not predict cardiovascular symptoms; anger behavior did not predict musculoskeletal symptoms; neither symptom was predicted by fear or sadness behavior), with the anger-cardiovascular relationship emerging as most robust. Using cross-lagged path models to probe directionality of these associations, emotional behaviors predicted physical health symptoms over time (with some reverse associations found as well). These findings illuminate longstanding theoretical and applied issues concerning the association between interpersonal emotional behaviors and physical health and suggest opportunities for preventive interventions focused on specific emotions to help address major public health problems. PMID:27213730
Corneal cell culture models: a tool to study corneal drug absorption.
Dey, Surajit
2011-05-01
In recent times, there has been an ever increasing demand for ocular drugs to treat sight threatening diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. As more drugs are developed, there is a great need to test in vitro permeability of these drugs to predict their efficacy and bioavailability in vivo. Corneal cell culture models are the only tool that can predict drug absorption across ocular layers accurately and rapidly. Cell culture studies are also valuable in reducing the number of animals needed for in vivo studies which can increase the cost of the drug developmental process. Currently, rabbit corneal cell culture models are used to predict human corneal absorption due to the difficulty in human corneal studies. More recently, a three dimensional human corneal equivalent has been developed using three different cell types to mimic the human cornea. In the future, human corneal cell culture systems need to be developed to be used as a standardized model for drug permeation.
Satellite Observations and Chemistry Climate Models - A Meandering Path Towards Better Predictions
NASA Technical Reports Server (NTRS)
Douglass, Anne R.
2011-01-01
Knowledge of the chemical and dynamical processes that control the stratospheric ozone layer has grown rapidly since the 1970s, when the idea that human activity could deplete the ozone layer was first put forth. The concept of ozone depletion due to anthropogenic chlorine increase is simple; quantification of the effect is much more difficult. The future of stratospheric ozone is complicated because ozone is expected to increase for two reasons: the slow decrease in anthropogenic chlorine due to the Montreal Protocol and its amendments and stratospheric cooling caused by increases in carbon dioxide and other greenhouse gases. Prediction of future ozone levels requires three-dimensional models that represent physical, photochemical and radiative processes, i.e., chemistry climate models (CCMs). While laboratory kinetic and photochemical data are necessary inputs for a CCM, atmospheric measurements are needed both to reveal physical and chemical processes and for comparison with simulations to test the conceptual model that CCMs represent. Global measurements are available from various satellites including but not limited to the LIMS and TOMS instruments on Nimbus 7 (1979 - 1993), and various instruments on the Upper Atmosphere Research Satellite (1991 - 2005), Envisat (2002 - ongoing), Sci-Sat (2003 - ongoing) and Aura (2004 - ongoing). Every successful satellite instrument requires a physical concept for the measurement, knowledge of the physical and chemical properties of the molecules to be measured, and stellar engineering to design an instrument that will survive launch and operate for years with no opportunity for repair, while providing enough information that long-term trends can be separated from any instrument change. The on-going challenge is to use observations to decrease uncertainty in prediction. This talk will focus on two applications. The first considers transport diagnostics and implications for prediction of the eventual demise of the Antarctic ozone hole. The second focuses on the upper stratosphere, where ozone is predicted to increase both due to chlorine decrease and due to temperature decrease expected as a result of increased concentrations of CO2 and other greenhouse gases. Both applications show how diagnostics developed from global observations are being used to explain why the ozone response varies among CCM predictions for stratospheric ozone in the 21st century.
NASA Astrophysics Data System (ADS)
Eyarkai Nambi, Vijayaram; Thangavel, Kuladaisamy; Manickavasagan, Annamalai; Shahir, Sultan
2017-01-01
Prediction of ripeness level in climacteric fruits is essential for post-harvest handling. An index capable of predicting ripening level with minimum inputs would be highly beneficial to handlers, processors and researchers in the fruit industry. A study was conducted with Indian mango cultivars to develop a ripeness index and associated model. Changes in physicochemical, colour and textural properties were measured throughout the ripening period, and the period was classified into five stages (unripe, early ripe, partially ripe, ripe and over ripe). Multivariate regression techniques, including partial least squares regression, principal component regression and multiple linear regression, were compared for their predictive performance. A multiple linear regression model with 12 parameters was found most suitable for ripening prediction. A systematic variable reduction method was adopted to simplify the developed model, and good prediction was achieved with either 2 or 3 variables (total soluble solids, colour and acidity). Cross-validation was performed to increase robustness, and the proposed ripening index was found to be effective in predicting ripening stages. The three-variable model would be suitable for commercial applications where reasonable accuracies are sufficient. However, the 12-variable model can be used to obtain more precise results in research and development applications.
Predicting the High Redshift Galaxy Population for JWST
NASA Astrophysics Data System (ADS)
Flynn, Zoey; Benson, Andrew
2017-01-01
The James Webb Space Telescope will be launched in Oct 2018 with the goal of observing galaxies in the redshift range of z = 10 - 15. As redshift increases, the age of the Universe decreases, allowing us to study objects formed only a few hundred million years after the Big Bang. This will provide a valuable opportunity to test and improve current galaxy formation theory by comparing predictions for mass, luminosity, and number density to the observed data. We have made testable predictions with the semi-analytical galaxy formation model Galacticus. The code uses Markov Chain Monte Carlo methods to determine viable sets of model parameters that match current astronomical data. The resulting constrained model was then set to match the specifications of the JWST Ultra Deep Field Imaging Survey. Predictions utilizing up to 100 viable parameter sets were calculated, allowing us to assess the uncertainty in current theoretical expectations. We predict that the planned UDF will be able to observe a significant number of objects past redshift z > 9 but nothing at redshift z > 11. In order to detect these faint objects at redshifts z = 11-15 we need to increase exposure time by at least a factor of 1.66.
Effect of increases in energy-related labor forces upon retailing in Alabama
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robicheaux, R.A.
1983-06-01
The heightened mining employment that will result from increased extraction of coal from Alabama's Warrior Coal Basin will boost retail sales and employment. The Warrior Coal Basin counties (Fayette, Jefferson, Tuscaloosa and Walker) are heavily dependent upon coal mining as a source of employment and wages. Further, since the counties' economies grew increasingly dependent upon coal mining activities throughout the 1970s, it was believed that it would be possible to measure, with some acceptable level of reliability, the impact of the steadily rising mining activity upon the area's retailing sector. Therefore, a small scale econometric model was developed which represents the interrelationships among income, mining and trade employment and retail sales in the four-county Warrior Coal Basin area. The results of two versions of the model are presented. In the first version, area-wide retail sales are treated in the aggregate. In the second version, retail sales are disaggregated into twelve categories (e.g., food, apparel, furniture, etc.). The models were specified using 1960 to 1976 data. The mining employment growth scenario used in this report called for steady increases in mining employment that culminated in an employment level that is 4000 above the baseline employment projections by 1985. Both versions of the model predicted that cumulative real regional income would increase by $1.39 billion over seven years with the added mining employment. The predicted impacts on trade employment and real retail sales varied between the two models, however. The aggregate model predicts the addition of 7500 trade workers and an additional $1.35 billion in real retail sales. The disaggregate model suggests that food stores, automobile dealers, general merchandise stores, gas stations and lumber and building materials retailers would enjoy the greatest positive benefits.
Maintenance of equilibrium point control during an unexpectedly loaded rapid limb movement.
Simmons, R W; Richardson, C
1984-06-08
Two experiments investigated whether the equilibrium point hypothesis or the mass-spring model of motor control subserves positioning accuracy during spring loaded, rapid, bi-articulated movement. For intact preparations, the equilibrium point hypothesis predicts response accuracy to be determined by a mixture of afferent and efferent information, whereas the mass-spring model predicts positioning to be under a direct control system. Subjects completed a series of load-resisted training trials to a spatial target. The magnitude of a sustained spring load was unexpectedly increased on selected trials. Results indicated positioning accuracy and applied force varied with increases in load, which suggests that the original efferent commands are modified by afferent information during the movement as predicted by the equilibrium point hypothesis.
Lowrey, Chris E.; Longshore, Kathleen M.; Riddle, Brett R.; Mantooth, Stacy
2016-01-01
Although montane sky islands surrounded by desert scrub and shrub steppe comprise a large part of the biological diversity of the Basin and Range Province of southwestern North America, comprehensive ecological and population demographic studies for high-elevation small mammals within these areas are rare. Here, we examine the ecology and population parameters of the Palmer’s chipmunk (Tamias palmeri) in the Spring Mountains of southern Nevada, and present a predictive GIS-based distribution and probability of occurrence model at both home range and geographic spatial scales. Logistic regression analyses and Akaike Information Criterion model selection found variables of forest type, slope, and distance to water sources as predictive of chipmunk occurrence at the geographic scale. At the home range scale, increasing population density, decreasing overstory canopy cover, and decreasing understory canopy cover contributed to increased survival rates.
Criteria for predicting the formation of single-phase high-entropy alloys
Troparevsky, M. Claudia; Morris, James R.; Kent, Paul R.; ...
2015-03-15
High entropy alloys constitute a new class of materials whose very existence poses fundamental questions. Originally thought to be stabilized by the large entropy of mixing, these alloys have attracted attention due to their potential applications, yet no model capable of robustly predicting which combinations of elements will form a single-phase currently exists. Here we propose a model that, through the use of high-throughput computation of the enthalpies of formation of binary compounds, is able to confirm all known high-entropy alloys while rejecting similar alloys that are known to form multiple phases. Despite the increasing entropy, our model predicts thatmore » the number of potential single-phase multicomponent alloys decreases with an increasing number of components: out of more than two million possible 7-component alloys considered, fewer than twenty single-phase alloys are likely.« less
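The screening idea described in this record can be pictured as a simple pairwise filter: a candidate alloy is retained only if every constituent binary pair has a formation enthalpy inside an allowed window. The sketch below is a simplified reading of that idea; the enthalpy table, window bounds, and alloy compositions are hypothetical placeholders, not values or rules taken from the paper.

```python
# Simplified single-phase screening sketch (hypothetical enthalpies and window bounds).
from itertools import combinations

# Hypothetical binary formation enthalpies in meV/atom (illustrative numbers only).
H_BINARY = {("Co", "Cr"): -20, ("Co", "Fe"): -10, ("Co", "Ni"): -5,
            ("Cr", "Fe"): -15, ("Cr", "Ni"): -25, ("Fe", "Ni"): -12,
            ("Al", "Ni"): -300, ("Al", "Co"): -250, ("Al", "Cr"): -80, ("Al", "Fe"): -120}

def enthalpy(a, b):
    # Assumes every pair of interest is tabulated (placeholder table above).
    return H_BINARY.get((a, b), H_BINARY.get((b, a)))

def likely_single_phase(elements, lower=-100, upper=40):
    """Retain an alloy only if every binary pair's enthalpy lies inside the window."""
    return all(lower <= enthalpy(a, b) <= upper for a, b in combinations(sorted(elements), 2))

print(likely_single_phase(["Co", "Cr", "Fe", "Ni"]))        # True with these placeholder numbers
print(likely_single_phase(["Al", "Co", "Cr", "Fe", "Ni"]))  # False: Al pairs fall outside the window
```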
Core self-evaluations and Snyder's hope theory in persons with spinal cord injuries.
Smedema, Susan Miller; Chan, Jacob Yuichung; Phillips, Brian N
2014-11-01
The objective of the study was to evaluate a motivational model of core self-evaluations (CSE), hope (agency and pathways thinking), participation, and life satisfaction in persons with spinal cord injuries. A cross-sectional, correlational design with path analysis was used to evaluate the model. 187 adults with spinal cord injuries participated in this study. The results indicated an excellent fit between the data and the proposed model. Specifically, CSE was found to directly predict agency and pathways thinking, participation, and life satisfaction. CSE was also found to indirectly predict participation and life satisfaction through agency thinking. Although CSE contributes directly to participation and life satisfaction, it also has a unique role in increasing individuals' motivation to pursue goals, which also predicts participation and life satisfaction. Counseling interventions should be multifaceted and address the components of CSE to increase hope, participation, and life satisfaction. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, Stephen C.; Ratcliff, Matthew; McCormick, Robert
In some studies, a relationship has been observed between increasing ethanol content in gasoline and increased particulate matter (PM) emissions from vehicles equipped with spark ignition engines. The fundamental cause of the PM increase seen for moderate ethanol concentrations is not well understood. Ethanol features a greater heat of vaporization (HOV) than gasoline and also influences vaporization by altering the liquid and vapor composition throughout the distillation process. A droplet vaporization model was developed to explore ethanol's effect on the evaporation of aromatic compounds known to be PM precursors. The evolving droplet composition is modeled as a distillation process, with non-ideal interactions between oxygenates and hydrocarbons accounted for using UNIFAC group contribution theory. Predicted composition and distillation curves were validated by experiments. Detailed hydrocarbon analysis was applied to fuel samples and to distillate fractions, and used as input for the initial droplet composition. With composition calculated throughout the distillation, the changing HOV and other physical properties can be found using reference data. The droplet can thus be modeled in terms of energy transfer, which in turn provides the transient mass transfer, droplet temperature, and droplet diameter. Model predictions suggest that non-ideal vapor-liquid equilibrium along with an increase in HOV can alter the droplet composition evolution. Results predict that the presence of ethanol causes enrichment of the higher boiling fractions (T90+) in the aromatic components as well as lengthens the droplet lifetime. A simulation of the evaporation process in a transient environment as experienced within an engine cylinder predicts a decrease in mixing time of the heaviest fractions of the fuel prior to spark initiation, possibly explaining observations linking ethanol to PM.
Burke, Stephen C.; Ratcliff, Matthew; McCormick, Robert; ...
2017-03-28
In some studies, a relationship has been observed between increasing ethanol content in gasoline and increased particulate matter (PM) emissions from vehicles equipped with spark ignition engines. The fundamental cause of the PM increase seen for moderate ethanol concentrations is not well understood. Ethanol features a greater heat of vaporization (HOV) than gasoline and also influences vaporization by altering the liquid and vapor composition throughout the distillation process. A droplet vaporization model was developed to explore ethanol's effect on the evaporation of aromatic compounds known to be PM precursors. The evolving droplet composition is modeled as a distillation process, with non-ideal interactions between oxygenates and hydrocarbons accounted for using UNIFAC group contribution theory. Predicted composition and distillation curves were validated by experiments. Detailed hydrocarbon analysis was applied to fuel samples and to distillate fractions, and used as input for the initial droplet composition. With composition calculated throughout the distillation, the changing HOV and other physical properties can be found using reference data. The droplet can thus be modeled in terms of energy transfer, which in turn provides the transient mass transfer, droplet temperature, and droplet diameter. Model predictions suggest that non-ideal vapor-liquid equilibrium along with an increase in HOV can alter the droplet composition evolution. Results predict that the presence of ethanol causes enrichment of the higher boiling fractions (T90+) in the aromatic components as well as lengthens the droplet lifetime. A simulation of the evaporation process in a transient environment as experienced within an engine cylinder predicts a decrease in mixing time of the heaviest fractions of the fuel prior to spark initiation, possibly explaining observations linking ethanol to PM.
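The non-ideal vapor-liquid equilibrium referred to in the two records above is conventionally expressed with activity coefficients in a modified Raoult's law. The generic form below is provided for orientation and is not taken from the paper itself; in the described model the activity coefficients would come from the UNIFAC group contribution method.

```latex
% Modified Raoult's law with activity coefficients (generic form, not from the paper):
% y_i = vapor mole fraction, x_i = liquid mole fraction, P = total pressure,
% P_i^{sat}(T) = pure-component vapor pressure, \gamma_i = activity coefficient (e.g., from UNIFAC).
y_i \, P \;=\; x_i \, \gamma_i(\mathbf{x}, T) \, P_i^{\mathrm{sat}}(T)
```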
Baig, Sofia; Medlyn, Belinda E; Mercado, Lina M; Zaehle, Sönke
2015-12-01
The temperature dependence of the reaction kinetics of the Rubisco enzyme implies that, at the level of a chloroplast, the response of photosynthesis to rising atmospheric CO2 concentration (Ca) will increase with increasing air temperature. Vegetation models incorporating this interaction predict that the response of net primary productivity (NPP) to elevated CO2 (eCa) will increase with rising temperature and will be substantially larger in warm tropical forests than in cold boreal forests. We tested these model predictions against evidence from eCa experiments by carrying out two meta-analyses. Firstly, we tested for an interaction effect on growth responses in factorial eCa × temperature experiments. This analysis showed a positive, but nonsignificant interaction effect (95% CI for above-ground biomass response = -0.8, 18.0%) between eCa and temperature. Secondly, we tested field-based eCa experiments on woody plants across the globe for a relationship between the eCa effect on plant biomass and mean annual temperature (MAT). This second analysis showed a positive but nonsignificant correlation between the eCa response and MAT. The magnitude of the interactions between CO2 and temperature found in both meta-analyses was consistent with model predictions, even though both analyses gave nonsignificant results. Thus, we conclude that it is not possible to distinguish between the competing hypotheses of no interaction vs. an interaction based on Rubisco kinetics from the available experimental database. Experiments in a wider range of temperature zones are required. Until such experimental data are available, model predictions should aim to incorporate uncertainty about this interaction. © 2015 John Wiley & Sons Ltd.
Development of machine learning models for diagnosis of glaucoma.
Kim, Seong Jae; Cho, Kyong Jin; Oh, Sejong
2017-01-01
The study aimed to develop machine learning models that have strong prediction power and interpretability for diagnosis of glaucoma based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from the examination of retinal nerve fiber layer (RNFL) thickness and visual field (VF). We also developed synthesized features from original features. We then selected, through feature evaluation, the features best suited to classification (diagnosis). We used 100 cases of data as a test dataset and 399 cases of data as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). We repeatedly composed a learning model using the training dataset and evaluated it by using the validation dataset, and finally selected the learning model that produced the highest validation accuracy. We analyzed the quality of the models using several measures. The random forest model shows the best performance, and the C5.0, SVM, and KNN models show similar accuracy. In the random forest model, the classification accuracy is 0.98, sensitivity is 0.983, specificity is 0.975, and AUC is 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in distinguishing glaucomatous from healthy eyes, and can be used to predict glaucoma from unseen examination records. Clinicians can reference the prediction results to make better decisions. We may combine multiple learning models to increase prediction accuracy. The C5.0 model includes decision rules for prediction and can be used to explain the reasons for specific predictions.
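A compact sketch of the kind of workflow this record describes (train/validation split, a random forest classifier, and accuracy/sensitivity/specificity/AUC) is shown below. The feature matrix and class labels are simulated stand-ins, and the settings are not those used in the study.

```python
# Illustrative random-forest diagnosis sketch (simulated data, not the study's dataset).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=499, n_features=20, random_state=0)  # stand-in for RNFL/VF features
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=100, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

prob = clf.predict_proba(X_val)[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_val, pred).ravel()
print("accuracy   ", (tp + tn) / len(y_val))
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("AUC        ", roc_auc_score(y_val, prob))
```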
A model for the plastic flow of landslides
Savage, William Z.; Smith, William K.
1986-01-01
To further the understanding of the mechanics of landslide flow, we present a model that predicts many of the observed attributes of landslides. The model is based on an integration of the hyperbolic differential equations for stress and velocity fields in a two-dimensional, inclined, semi-infinite half-space of Coulomb plastic material under elevated pore pressure and gravity. Our landslide model predicts commonly observed features. For example, compressive (passive), plug, or extending (active) flow will occur under appropriate longitudinal strain rates. Also, the model predicts that longitudinal stresses increase elliptically with depth to the basal slide plane, and that stress and velocity characteristics, surfaces along which discontinuities in stress and velocity are propagated, are coincident. Finally, the model shows how thrust and normal faults develop at the landslide surface in compressive and extending flow.
Li, John; Maclehose, Rich; Smith, Kirk; Kaehler, Dawn; Hedberg, Craig
2011-01-01
Foodborne illness surveillance based on consumer complaints detects outbreaks by finding common exposures among callers, but this process is often difficult. Laboratory testing of ill callers could also help identify potential outbreaks. However, collection of stool samples from all callers is not feasible. Methods to help screen calls for etiology are needed to increase the efficiency of complaint surveillance systems and increase the likelihood of detecting foodborne outbreaks caused by Salmonella. Data from the Minnesota Department of Health foodborne illness surveillance database (2000 to 2008) were analyzed. Complaints with identified etiologies were examined to create a predictive model for Salmonella. Bootstrap methods were used to internally validate the model. Seventy-one percent of complaints in the foodborne illness database with known etiologies were due to norovirus. The predictive model had a good discriminatory ability to identify Salmonella calls. Three cutoffs for the predictive model were tested: one that maximized sensitivity, one that maximized specificity, and one that maximized predictive ability, providing sensitivities and specificities of 32 and 96%, 100 and 54%, and 89 and 72%, respectively. Development of a predictive model for Salmonella could help screen calls for etiology. The cutoff that provided the best predictive ability for Salmonella corresponded to a caller reporting diarrhea and fever with no vomiting, and five or fewer people ill. Screening calls for etiology would help identify complaints for further follow-up and result in identifying Salmonella cases that would otherwise go unconfirmed; in turn, this could lead to the identification of more outbreaks.
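To illustrate how complaint records could be screened for a likely Salmonella etiology at different cutoffs, the sketch below fits a logistic model on symptom indicators and reports sensitivity and specificity at several probability thresholds. The symptom variables mirror those named in the abstract, but the data, coefficients, and thresholds are illustrative assumptions, not the Minnesota model.

```python
# Illustrative etiology-screening sketch (synthetic data; not the Minnesota Department of Health model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
# Synthetic complaint indicators: diarrhea, fever, vomiting, and more than five people ill.
X = rng.integers(0, 2, size=(n, 4))
# Synthetic "Salmonella" labels loosely tied to diarrhea and fever, no vomiting, small group size.
logit = -2.5 + 1.5 * X[:, 0] + 1.2 * X[:, 1] - 1.0 * X[:, 2] - 0.5 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]

for cutoff in (0.1, 0.3, 0.5):          # lower cutoffs favor sensitivity, higher favor specificity
    pred = prob >= cutoff
    sens = (pred & y).sum() / y.sum()
    spec = (~pred & ~y).sum() / (~y).sum()
    print(f"cutoff {cutoff:.1f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```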
A prediction model for colon cancer surveillance data.
Good, Norm M; Suresh, Krithika; Young, Graeme P; Lockett, Trevor J; Macrae, Finlay A; Taylor, Jeremy M G
2015-08-15
Dynamic prediction models make use of patient-specific longitudinal data to update individualized survival probability predictions based on current and past information. Colonoscopy (COL) and fecal occult blood test (FOBT) results were collected from two Australian surveillance studies on individuals characterized as high-risk based on a personal or family history of colorectal cancer. Motivated by a Poisson process, this paper proposes a generalized nonlinear model with a complementary log-log link as a dynamic prediction tool that produces individualized probabilities for the risk of developing advanced adenoma or colorectal cancer (AAC). This model allows predicted risk to depend on a patient's baseline characteristics and time-dependent covariates. Information on the dates and results of COLs and FOBTs were incorporated using time-dependent covariates that contributed to patient risk of AAC for a specified period following the test result. These covariates serve to update a person's risk as additional COL, and FOBT test information becomes available. Model selection was conducted systematically through the comparison of Akaike information criterion. Goodness-of-fit was assessed with the use of calibration plots to compare the predicted probability of event occurrence with the proportion of events observed. Abnormal COL results were found to significantly increase risk of AAC for 1 year following the test. Positive FOBTs were found to significantly increase the risk of AAC for 3 months following the result. The covariates that incorporated the updated test results were of greater significance and had a larger effect on risk than the baseline variables. Copyright © 2015 John Wiley & Sons, Ltd.
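A generic form of the complementary log-log risk model described above, with time-dependent indicator covariates for recent test results, can be written as follows; the covariates and time windows are paraphrased from the abstract, not reproduced from the paper.

```latex
% Complementary log-log link for the probability p_i(t) of AAC in interval t (generic form):
\operatorname{cloglog}\{p_i(t)\}
  \;=\; \log\!\bigl[-\log\{1 - p_i(t)\}\bigr]
  \;=\; \mathbf{x}_i^{\top}\boldsymbol{\beta}
  \;+\; \gamma_{1}\,\mathbb{1}\{\text{abnormal COL within the past year}\}
  \;+\; \gamma_{2}\,\mathbb{1}\{\text{positive FOBT within the past 3 months}\}
```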
Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates
Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; ...
2013-03-07
In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
NASA Astrophysics Data System (ADS)
Xie, Yan; Li, Mu; Zhou, Jin; Zheng, Chang-zheng
2009-07-01
Agricultural machinery total power is an important index for reflecting and evaluating the level of agricultural mechanization. It is the power source of agricultural production and one of the main factors for enhancing comprehensive agricultural production capacity, expanding production scale and increasing farmers' income. Its demand is affected by natural, economic, technological, social and other "grey" factors. Therefore, grey system theory can be used to analyze the development of agricultural machinery total power. This paper introduces a method that uses a genetic algorithm to optimize the grey modeling process. The method combines the advantages of the grey prediction model with the global search capability of the genetic algorithm, making the prediction model more accurate. Using data from one province, a GM(1,1) model for predicting agricultural machinery total power was developed based on grey system theory and the genetic algorithm. The results indicate that the model can serve as an effective tool for predicting agricultural machinery total power.
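For readers unfamiliar with the grey model, the sketch below implements a plain GM(1,1) forecast by least squares. The genetic-algorithm tuning mentioned in the record (for example, of the background-value weighting) is omitted, and the input series is a made-up placeholder rather than the provincial data.

```python
# Plain GM(1,1) forecasting sketch (illustrative series; GA parameter tuning omitted).
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """Fit GM(1,1) to a positive series x0 and forecast n_ahead further values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]         # develop coefficient a, grey input b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response function
    x0_hat = np.diff(x1_hat, prepend=0.0)               # recover the original-scale series
    return x0_hat[len(x0):]

# Hypothetical total-power series (arbitrary units), one value per year.
print(gm11_forecast([152.1, 160.4, 170.2, 181.5, 190.3]))
```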
Morphodynamic data assimilation used to understand changing coasts
Plant, Nathaniel G.; Long, Joseph W.
2015-01-01
Morphodynamic data assimilation blends observations with model predictions and comes in many forms, including linear regression, Kalman filter, brute-force parameter estimation, variational assimilation, and Bayesian analysis. Importantly, data assimilation can be used to identify sources of prediction errors that lead to improved fundamental understanding. Overall, models incorporating data assimilation yield better information to the people who must make decisions impacting safety and wellbeing in coastal regions that experience hazards due to storms, sea-level rise, and erosion. We present examples of data assimilation associated with morphologic change. We conclude that enough morphodynamic predictive capability is available now to be useful to people, and that we will increase our understanding and the level of detail of our predictions through assimilation of observations and numerical-statistical models.
A log-sinh transformation for data normalization and variance stabilization
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.
2012-05-01
When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
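A small numeric sketch of the transformation pair (forward and inverse) is given below. The parameterization z = (1/b) ln sinh(a + b y) follows the form commonly cited for this paper, and the parameter values used here are arbitrary illustrations rather than fitted values.

```python
# Log-sinh transformation sketch; parameter values are arbitrary, for illustration only.
import numpy as np

def log_sinh(y, a, b):
    """Forward transform: z = (1/b) * ln(sinh(a + b*y))."""
    return np.log(np.sinh(a + b * y)) / b

def log_sinh_inverse(z, a, b):
    """Inverse transform: y = (arcsinh(exp(b*z)) - a) / b."""
    return (np.arcsinh(np.exp(b * z)) - a) / b

a, b = 0.01, 0.1
y = np.array([0.5, 2.0, 10.0, 50.0])        # e.g., positively skewed streamflow values
z = log_sinh(y, a, b)
print(z)
print(log_sinh_inverse(z, a, b))            # recovers y up to floating-point error
```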
Modelling the transmission of healthcare associated infections: a systematic review
2013-01-01
Background Dynamic transmission models are increasingly being used to improve our understanding of the epidemiology of healthcare-associated infections (HCAI). However, there has been no recent comprehensive review of this emerging field. This paper summarises how mathematical models have informed the field of HCAI and how methods have developed over time. Methods MEDLINE, EMBASE, Scopus, CINAHL plus and Global Health databases were systematically searched for dynamic mathematical models of HCAI transmission and/or the dynamics of antimicrobial resistance in healthcare settings. Results In total, 96 papers met the eligibility criteria. The main research themes considered were evaluation of infection control effectiveness (64%), variability in transmission routes (7%), the impact of movement patterns between healthcare institutes (5%), the development of antimicrobial resistance (3%), and strain competitiveness or co-colonisation with different strains (3%). Methicillin-resistant Staphylococcus aureus was the most commonly modelled HCAI (34%), followed by vancomycin resistant enterococci (16%). Other common HCAIs, e.g. Clostridum difficile, were rarely investigated (3%). Very few models have been published on HCAI from low or middle-income countries. The first HCAI model has looked at antimicrobial resistance in hospital settings using compartmental deterministic approaches. Stochastic models (which include the role of chance in the transmission process) are becoming increasingly common. Model calibration (inference of unknown parameters by fitting models to data) and sensitivity analysis are comparatively uncommon, occurring in 35% and 36% of studies respectively, but their application is increasing. Only 5% of models compared their predictions to external data. Conclusions Transmission models have been used to understand complex systems and to predict the impact of control policies. Methods have generally improved, with an increased use of stochastic models, and more advanced methods for formal model fitting and sensitivity analyses. Insights gained from these models could be broadened to a wider range of pathogens and settings. Improvements in the availability of data and statistical methods could enhance the predictive ability of models. PMID:23809195
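As a concrete illustration of the compartmental deterministic approach mentioned in this review, the sketch below integrates a simple two-state ward model (uncolonised and colonised patients with admission, discharge, and cross-transmission). The structure and parameter values are generic textbook-style assumptions, not a specific model from the reviewed papers.

```python
# Generic ward colonisation model sketch (illustrative parameters; not a model from the review).
from scipy.integrate import solve_ivp

N = 20.0        # ward size (patients)
beta = 0.08     # cross-transmission rate per colonised patient per day
mu = 0.1        # discharge rate per day (mean stay of 10 days)
sigma = 0.05    # fraction of admissions already colonised

def ward_model(t, y):
    S, C = y                                   # uncolonised, colonised patients
    admissions = mu * (S + C)                  # keep the ward at constant occupancy
    new_col = beta * S * C / N                 # cross-transmission within the ward
    dS = (1 - sigma) * admissions - new_col - mu * S
    dC = sigma * admissions + new_col - mu * C
    return [dS, dC]

sol = solve_ivp(ward_model, (0.0, 365.0), [N - 1.0, 1.0])
print("colonised patients after one year:", sol.y[1, -1])
```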
ERIC Educational Resources Information Center
Porter, Stephen R.
Annual funds face pressures to contact all alumni to maximize participation, but these efforts are costly. This paper uses a logistic regression model to predict likely donors among alumni from the College of Arts & Humanities at the University of Maryland, College Park. Alumni were grouped according to their predicted probability of donating…
Observed heavy precipitation increase confirms theory and early model
NASA Astrophysics Data System (ADS)
Fischer, E. M.; Knutti, R.
2016-12-01
Environmental phenomena are often first observed, and then explained or simulated quantitatively. The complexity and diversity of processes, the range of scales involved, and the lack of first principles to describe many processes make it challenging to predict conditions beyond the ones observed. Here we use the intensification of heavy precipitation as a counterexample, where seemingly complex and potentially computationally intractable processes to first order manifest themselves in simple ways: the intensification of heavy precipitation is now emerging in the observed record across many regions of the world, confirming both theory and a variety of model predictions made decades ago, before robust evidence arose from observations. We here compare heavy precipitation changes over Europe and the contiguous United States across station series and gridded observations, theoretical considerations and multi-model ensembles of GCMs and RCMs. We demonstrate that the observed heavy precipitation intensification aggregated over large areas agrees remarkably well with Clausius-Clapeyron scaling. The observed changes in heavy precipitation are consistent yet somewhat larger than predicted by very coarse resolution GCMs in the 1980s and simulated by the newest generation of GCMs and RCMs. For instance the number of days with very heavy precipitation over Europe has increased by about 45% in observations (years 1981-2013 compared to 1951-1980) and by about 25% in the model average in both GCMs and RCMs, although with substantial spread across models and locations. As the anthropogenic climate signal strengthens, there will be more opportunities to test climate predictions for other variables against observations and across a hierarchy of different models and theoretical concepts. *Fischer, E.M., and R. Knutti, 2016, Observed heavy precipitation increase confirms theory and early models, Nature Climate Change, in press.
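The Clausius-Clapeyron scaling invoked above follows from the temperature dependence of saturation vapour pressure; in the standard approximation the fractional increase is roughly 6-7% per kelvin of warming at near-surface temperatures, which is the scaling rate the record refers to.

```latex
% Clausius-Clapeyron relation for saturation vapour pressure e_s (standard form):
\frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} \;=\; \frac{L_v}{R_v T^{2}}
  \;\approx\; 6\text{--}7\%\ \mathrm{K^{-1}} \quad \text{for } T \approx 273\text{--}300\ \mathrm{K}
```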
Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models
The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...
Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
NASA Technical Reports Server (NTRS)
Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kahn, Ralph;
2016-01-01
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; ...
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from pre-industrial time. General Circulation Models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. Lastly, we suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kraucunas, Ian; Molina, Mario J.; Nenes, Athanasios; Penner, Joyce E.; Prather, Kimberly A.; Ramanathan, V.; Ramaswamy, Venkatachalam; Rasch, Philip J.; Ravishankara, A. R.; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-01-01
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol−cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol−cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol−cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty. PMID:27222566
Hermes, Helen E.; Teutonico, Donato; Preuss, Thomas G.; Schneckener, Sebastian
2018-01-01
The environmental fates of pharmaceuticals and the effects of crop protection products on non-target species are subjects that are undergoing intense review. Since measuring the concentrations and effects of xenobiotics on all affected species under all conceivable scenarios is not feasible, standard laboratory animals such as rabbits are tested, and the observed adverse effects are translated to focal species for environmental risk assessments. In that respect, mathematical modelling is becoming increasingly important for evaluating the consequences of pesticides in untested scenarios. In particular, physiologically based pharmacokinetic/toxicokinetic (PBPK/TK) modelling is a well-established methodology used to predict tissue concentrations based on the absorption, distribution, metabolism and excretion of drugs and toxicants. In the present work, a rabbit PBPK/TK model is developed and evaluated with data available from the literature. The model predictions include scenarios of both intravenous (i.v.) and oral (p.o.) administration of small and large compounds. The presented rabbit PBPK/TK model predicts the pharmacokinetics (Cmax, AUC) of the tested compounds with an average 1.7-fold error. This result indicates a good predictive capacity of the model, which enables its use for risk assessment modelling and simulations. PMID:29561908
Forecasting biodiversity in breeding birds using best practices
Taylor, Shawn D.; White, Ethan P.
2018-01-01
Biodiversity forecasts are important for conservation, management, and evaluating how well current models characterize natural systems. While the number of forecasts for biodiversity is increasing, there is little information available on how well these forecasts work. Most biodiversity forecasts are not evaluated to determine how well they predict future diversity, fail to account for uncertainty, and do not use time-series data that captures the actual dynamics being studied. We addressed these limitations by using best practices to explore our ability to forecast the species richness of breeding birds in North America. We used hindcasting to evaluate six different modeling approaches for predicting richness. Hindcasts for each method were evaluated annually for a decade at 1,237 sites distributed throughout the continental United States. All models explained more than 50% of the variance in richness, but none of them consistently outperformed a baseline model that predicted constant richness at each site. The best practices implemented in this study directly influenced the forecasts and evaluations. Stacked species distribution models and “naive” forecasts produced poor estimates of uncertainty and accounting for this resulted in these models dropping in the relative performance compared to other models. Accounting for observer effects improved model performance overall, but also changed the rank ordering of models because it did not improve the accuracy of the “naive” model. Considering the forecast horizon revealed that the prediction accuracy decreased across all models as the time horizon of the forecast increased. To facilitate the rapid improvement of biodiversity forecasts, we emphasize the value of specific best practices in making forecasts and evaluating forecasting methods. PMID:29441230
A new model for force generation by skeletal muscle, incorporating work-dependent deactivation
Williams, Thelma L.
2010-01-01
A model is developed to predict the force generated by active skeletal muscle when subjected to imposed patterns of lengthening and shortening, such as those that occur during normal movements. The model is based on data from isolated lamprey muscle and can predict the forces developed during swimming. The model consists of a set of ordinary differential equations, which are solved numerically. The model's first part is a simplified description of the kinetics of Ca2+ release from sarcoplasmic reticulum and binding to muscle protein filaments, in response to neural activation. The second part is based on A. V. Hill's mechanical model of muscle, consisting of elastic and contractile elements in series, the latter obeying known physiological properties. The parameters of the model are determined by fitting the appropriate mathematical solutions to data recorded from isolated lamprey muscle activated under conditions of constant length or rate of change of length. The model is then used to predict the forces developed under conditions of applied sinusoidal length changes, and the results compared with corresponding data. The most significant advance of this model is the incorporation of work-dependent deactivation, whereby a muscle that has been shortening under load generates less force after the shortening ceases than otherwise expected. In addition, the stiffness in this model is not constant but increases with increasing activation. The model yields a closer prediction to data than has been obtained before, and can thus prove an important component of investigations of the neural—mechanical—environmental interactions that occur during natural movements. PMID:20118315
Dynamic Smagorinsky model on anisotropic grids
NASA Technical Reports Server (NTRS)
Scotti, A.; Meneveau, C.; Fatica, M.
1996-01-01
Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model in a controlled fashion on such grids, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved when compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test-filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. But in all cases, the predictions were poor for scales smaller than that of the worst resolved direction.
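For reference, the eddy viscosity and the dynamically computed coefficient referred to above take the standard Smagorinsky and Germano-Lilly form, written here generically; the anisotropic-grid effects studied in the paper enter through the effective filter width Δ.

```latex
% Standard Smagorinsky eddy viscosity and dynamic (Germano-Lilly) coefficient:
\nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
(C_s \Delta)^2 = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}
```

Here L_ij is the resolved stress obtained from the Germano identity between the grid and test filters, M_ij is the corresponding modelled tensor, and the angle brackets denote the averaging used to stabilize the coefficient.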
Finite Element Model Development and Validation for Aircraft Fuselage Structures
NASA Technical Reports Server (NTRS)
Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.
2000-01-01
The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results. The increased frequency range results in a corresponding increase in the number of modes, modal density and spatial resolution requirements. In this study, conventional modal tests using accelerometers are complemented with Scanning Laser Doppler Velocimetry and Electro-Optic Holography measurements to further resolve the spatial response characteristics. Whenever possible, component and subassembly modal tests are used to validate the finite element models at lower levels of assembly. Normal mode predictions for different finite element representations of components and assemblies are compared with experimental results to assess the most accurate techniques for modeling aircraft fuselage type structures.
Atteia, Olivier; Höhener, Patrick
2010-08-15
Volatilization of toxic organic contaminants from groundwater to the soil surface is often considered an important pathway in risk analysis. Most of the risk models use simplified linear solutions that may overpredict the volatile flux. Although complex numerical models have been developed, their use is restricted to experienced users and for sites where field data are known in great detail. We present here a novel semianalytical model running on a spreadsheet that simulates the volatilization flux and vertical concentration profile in a soil based on the Van Genuchten functions. These widely used functions describe precisely the gas and water saturations and movement in the capillary fringe. The analytical model shows a good accuracy over several orders of magnitude when compared to a numerical model and laboratory data. The effect of barometric pumping is also included in the semianalytical formulation, although the model predicts that barometric pumping is often negligible. A sensitivity study predicts significant fluxes in sandy vadose zones and much smaller fluxes in other soils. Fluxes are linked to the dimensionless Henry's law constant H for H < 0.2 and increase by approximately 20% when temperature increases from 5 to 25 degrees C.
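The Van Genuchten retention function underlying the capillary-fringe description above has a standard closed form; the short sketch below computes water content versus pressure head. The parameter values are typical of a sand and are illustrative only, not those used in the paper.

```python
# Van Genuchten water retention sketch (typical sand parameters; illustrative only).
import numpy as np

def van_genuchten_theta(h, alpha=3.5, n=2.5, theta_r=0.05, theta_s=0.38):
    """Water content theta(h) for pressure head h in metres (negative above the water table)."""
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)  # effective saturation
    return theta_r + Se * (theta_s - theta_r)

heads = np.array([0.0, -0.1, -0.3, -1.0, -3.0])   # metres above the water table
print(van_genuchten_theta(heads))
```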
Mandija, Stefano; Sommer, Iris E. C.; van den Berg, Cornelis A. T.; Neggers, Sebastiaan F. W.
2017-01-01
Background Despite TMS wide adoption, its spatial and temporal patterns of neuronal effects are not well understood. Although progress has been made in predicting induced currents in the brain using realistic finite element models (FEM), there is little consensus on how a magnetic field of a typical TMS coil should be modeled. Empirical validation of such models is limited and subject to several limitations. Methods We evaluate and empirically validate models of a figure-of-eight TMS coil that are commonly used in published modeling studies, of increasing complexity: simple circular coil model; coil with in-plane spiral winding turns; and finally one with stacked spiral winding turns. We will assess the electric fields induced by all 3 coil models in the motor cortex using a computer FEM model. Biot-Savart models of discretized wires were used to approximate the 3 coil models of increasing complexity. We use a tailored MR based phase mapping technique to get a full 3D validation of the incident magnetic field induced in a cylindrical phantom by our TMS coil. FEM based simulations on a meshed 3D brain model consisting of five tissues types were performed, using two orthogonal coil orientations. Results Substantial differences in the induced currents are observed, both theoretically and empirically, between highly idealized coils and coils with correctly modeled spiral winding turns. Thickness of the coil winding turns affect minimally the induced electric field, and it does not influence the predicted activation. Conclusion TMS coil models used in FEM simulations should include in-plane coil geometry in order to make reliable predictions of the incident field. Modeling the in-plane coil geometry is important to correctly simulate the induced electric field and to correctly make reliable predictions of neuronal activation PMID:28640923
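The Biot-Savart approximation of a discretized coil winding mentioned in the Methods can be sketched as a straightforward sum over straight wire segments. The circular test loop, current, and field point below are arbitrary illustrations rather than the coil geometry used in the study.

```python
# Biot-Savart field of a discretized current loop (illustrative geometry, not the study's coil).
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def biot_savart(points, current, field_point):
    """Sum mu0*I/(4*pi) * dl x r / |r|^3 over straight segments joining consecutive points."""
    B = np.zeros(3)
    for p0, p1 in zip(points[:-1], points[1:]):
        dl = p1 - p0
        mid = 0.5 * (p0 + p1)
        r = field_point - mid
        B += MU0 * current / (4.0 * np.pi) * np.cross(dl, r) / np.linalg.norm(r) ** 3
    return B

# Single circular loop of radius 35 mm carrying 5 kA, evaluated 20 mm above its centre.
theta = np.linspace(0.0, 2.0 * np.pi, 501)
loop = np.column_stack([0.035 * np.cos(theta), 0.035 * np.sin(theta), np.zeros_like(theta)])
print(biot_savart(loop, 5e3, np.array([0.0, 0.0, 0.02])))
```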
Gutiérrez, Alvaro G.; Armesto, Juan J.; Díaz, M. Francisca; Huth, Andreas
2014-01-01
Increased droughts due to regional shifts in temperature and rainfall regimes are likely to affect forests in temperate regions in the coming decades. To assess their consequences for forest dynamics, we need predictive tools that couple hydrologic processes, soil moisture dynamics and plant productivity. Here, we developed and tested a dynamic forest model that predicts the hydrologic balance of North Patagonian rainforests on Chiloé Island, in temperate South America (42°S). The model incorporates the dynamic linkages between changing rainfall regimes, soil moisture and individual tree growth. Declining rainfall, as predicted for the study area, should mean up to 50% less summer rain by the year 2100. We analysed forest responses to increased drought using the proposed model, focusing on changes in evapotranspiration, soil moisture and forest structure (above-ground biomass and basal area). We compared the responses of a young stand (YS, ca. 60 years old) and an old-growth forest (OG, >500 years old) in the same area. Based on detailed field measurements of water fluxes, the model provides a reliable account of the hydrologic balance of these evergreen, broad-leaved rainforests. We found higher evapotranspiration in OG than in YS under the current climate. The increasing drought predicted for this century can reduce evapotranspiration in the OG by 15% compared to current values. A drier climate will alter forest structure, leading to a decrease in above-ground biomass of 27% of the current value in OG. The model presented here can be used to assess the potential impacts of climate change on forest hydrology and of other global change threats to future forests, such as fragmentation, introduction of exotic tree species, and changes in fire regimes. Our study expands the applicability of forest dynamics models in remote and hitherto overlooked regions of the world, such as southern temperate rainforests. PMID:25068869
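As a rough illustration of the soil moisture and evapotranspiration bookkeeping that such a coupled model must perform, a single-layer bucket water balance is sketched below; it is not the authors' model, and the soil capacity, forcing, and linear moisture limitation on ET are assumptions made only for this example.

```python
import numpy as np

def bucket_water_balance(rain, pet, capacity=150.0, s0=100.0):
    """Daily single-layer soil water balance (illustrative only, not the paper's model).

    rain     : daily rainfall [mm]
    pet      : daily potential evapotranspiration [mm]
    capacity : plant-available water capacity of the soil [mm]
    Actual ET scales linearly with relative soil moisture; water above
    capacity is lost to drainage/runoff.
    """
    s = s0
    soil, aet = [], []
    for p, e in zip(rain, pet):
        et = e * min(1.0, s / capacity)      # moisture-limited evapotranspiration
        s = min(capacity, max(0.0, s + p - et))
        soil.append(s)
        aet.append(et)
    return np.array(soil), np.array(aet)

# Hypothetical 90-day summer forcing, with and without a 50% rainfall reduction
rng = np.random.default_rng(1)
rain = rng.exponential(3.0, 90)              # mm/day
pet = np.full(90, 3.5)                       # mm/day
for label, r in [("current rainfall", rain), ("-50% rainfall", 0.5 * rain)]:
    soil, aet = bucket_water_balance(r, pet)
    print(label, "mean actual ET:", round(aet.mean(), 2), "mm/day")
```

Even this toy bookkeeping reproduces the qualitative behaviour reported above: reduced summer rain drains the soil store, which in turn suppresses actual evapotranspiration.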
Vena, Daniel; Yadollahi, A; Bradley, T Douglas
2014-01-01
Obstructive sleep apnea (OSA) is a common respiratory disorder among adults. Recently, we have shown that a sedentary lifestyle causes an increase in diurnal leg fluid volume (LFV), which can shift into the neck at night when lying down to sleep and increase OSA severity. The purpose of this work was to investigate various metrics that represent baseline fluid retention in the legs, examine their correlation with neck fluid volume (NFV), and develop a robust model for predicting fluid accumulation in the neck. In 13 healthy, awake, non-obese men, LFV and NFV were recorded continuously and simultaneously while standing for 5 minutes and then lying supine for 90 minutes. Simple regression was used to examine correlations of baseline LFV, baseline neck circumference (NC), and change in LFV with the outcome variables: change in NC (ΔNC) and in NFV (ΔNFV90) after lying supine for 90 minutes. An exhaustive grid search was implemented to find the combinations of input variables that best modeled the outcomes. We found strong positive correlations between baseline LFV (supine and standing) and ΔNFV90. Models developed for predicting ΔNFV90 included baseline standing LFV and baseline NC combined with the change in LFV after lying supine for 90 minutes. These correlations and the developed models suggest that a greater baseline LFV might contribute to increased fluid accumulation in the neck. These results provide further evidence that a sedentary lifestyle might play a role in the pathogenesis of OSA by increasing the baseline LFV. The best models for predicting ΔNC included baseline LFV and NC; they improved the accuracy of estimating ΔNC over individual predictors, suggesting that a combination of baseline fluid metrics is a good predictor of the change in NC while lying supine. Future work is aimed at adding baseline demographic features to improve model accuracy and, eventually, at using the model as a screening tool to predict the severity of OSA prior to sleep.
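The exhaustive grid search over combinations of input variables described above can be sketched with ordinary least squares and an adjusted R² criterion; the variable names and synthetic data below are placeholders for the measured baseline metrics, and the selection criterion is an assumption, since the abstract does not state the one used.

```python
import itertools
import numpy as np

def best_subset(X, y, names):
    """Exhaustively fit OLS on every non-empty subset of predictors and return
    the subset with the highest adjusted R^2 (a sketch of an exhaustive grid
    search, not the study's exact procedure)."""
    n = len(y)
    best = (-np.inf, None)
    for k in range(1, X.shape[1] + 1):
        for cols in itertools.combinations(range(X.shape[1]), k):
            A = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
            adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)  # penalize subset size
            if adj_r2 > best[0]:
                best = (adj_r2, [names[c] for c in cols])
    return best

# Hypothetical data standing in for baseline LFV, baseline NC, and change in LFV
rng = np.random.default_rng(2)
X = rng.standard_normal((13, 3))                 # 13 subjects, 3 candidate predictors
y = 0.8 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(13)  # stand-in for ΔNFV90
print(best_subset(X, y, ["baseline_LFV", "baseline_NC", "delta_LFV"]))
```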
2014-01-01
Introduction: Prolonged ventilation and failed extubation are associated with increased harm and cost. The added value of heart and respiratory rate variability (HRV and RRV) during spontaneous breathing trials (SBTs) for predicting extubation failure remains unknown. Methods: We enrolled 721 patients in a multicenter (12 sites), prospective, observational study, evaluating clinical estimates of the risk of extubation failure, physiologic measures recorded during SBTs, HRV and RRV recorded before and during the last SBT prior to extubation, and extubation outcomes. We excluded 287 patients because of protocol or technical violations or poor data quality. Measures of variability (97 HRV, 82 RRV) were calculated from electrocardiogram and capnography waveforms, followed by automated cleaning and variability analysis using Continuous Individualized Multiorgan Variability Analysis (CIMVA™) software. Repeated randomized subsampling with training, validation, and testing was used to derive and compare predictive models. Results: Of 434 patients with high-quality data, 51 (12%) failed extubation. Two HRV and eight RRV measures showed a statistically significant association with extubation failure (P < 0.0041, 5% false discovery rate). An ensemble average of five univariate logistic regression models using RRV during the SBT, yielding a probability of extubation failure (the WAVE score), demonstrated optimal predictive capacity. With repeated random subsampling and testing, the model showed a mean receiver operating characteristic area under the curve (ROC AUC) of 0.69, higher than heart rate (0.51), the rapid shallow breathing index (RSBI; 0.61), and respiratory rate (0.63). After a WAVE model was derived from all of the data, training-set performance demonstrated that the model increased its predictive power when applied to patients conventionally considered high risk: a WAVE score >0.5 in patients with an RSBI >105 and in patients with a perceived high risk of failure yielded fold increases in the risk of extubation failure of 3.0 (95% confidence interval (CI) 1.2 to 5.2) and 3.5 (95% CI 1.9 to 5.4), respectively. Conclusions: Altered HRV and RRV (during the SBT prior to extubation) are significantly associated with extubation failure. A predictive model using RRV during the last SBT provided optimal accuracy of prediction in all patients, with improved accuracy when combined with clinical impression or RSBI. This model requires a validation cohort to evaluate its accuracy and generalizability. Trial registration: ClinicalTrials.gov NCT01237886. Registered 13 October 2010. PMID:24713049
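The ensemble strategy described in the Results — averaging the predicted probabilities of several univariate logistic regression models and scoring discrimination with ROC AUC — can be sketched as follows; the synthetic features below merely stand in for RRV measures, so this illustrates the general approach rather than the WAVE model itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def ensemble_univariate_score(X_train, y_train, X_test):
    """Average the predicted failure probabilities of one univariate logistic
    regression per feature (illustrative sketch of a WAVE-style score)."""
    probs = []
    for j in range(X_train.shape[1]):
        clf = LogisticRegression().fit(X_train[:, [j]], y_train)
        probs.append(clf.predict_proba(X_test[:, [j]])[:, 1])
    return np.mean(probs, axis=0)

# Synthetic stand-ins for five respiratory rate variability measures
rng = np.random.default_rng(3)
n = 400
X = rng.standard_normal((n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n) > 1.5).astype(int)  # "extubation failure"
tr, te = slice(0, 300), slice(300, n)
score = ensemble_univariate_score(X[tr], y[tr], X[te])
print("ROC AUC:", round(roc_auc_score(y[te], score), 2))
```

In practice such a sketch would be embedded in repeated random subsampling (train/validate/test splits) as described above, with the held-out AUC compared against single-variable baselines such as heart rate or RSBI.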