Sample records for models predict substantial

  1. A fuzzy set preference model for market share analysis

    NASA Technical Reports Server (NTRS)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share prediction).
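
    A minimal sketch of the core idea only (not the authors' model): ordinal linguistic ratings are mapped to fuzzy membership grades and combined in an individual-level weighted linear model, giving overall preference scores from which a simple share prediction follows. The terms, weights, and products below are invented for illustration.

    ```python
    # Hypothetical fuzzy-preference sketch; membership grades, attributes, and
    # weights are invented, not taken from the article.
    import numpy as np

    # Membership grade of each linguistic rating in the fuzzy set "preferred".
    membership = {"very poor": 0.0, "poor": 0.25, "fair": 0.5, "good": 0.75, "excellent": 1.0}

    # One respondent's linguistic ratings of two product profiles.
    profiles = {
        "product_A": {"price": "good", "quality": "excellent", "styling": "fair"},
        "product_B": {"price": "excellent", "quality": "fair", "styling": "good"},
    }
    weights = {"price": 0.5, "quality": 0.3, "styling": 0.2}   # individual-level attribute weights

    scores = {name: sum(weights[a] * membership[r] for a, r in ratings.items())
              for name, ratings in profiles.items()}
    print("overall preferences:", {k: round(v, 2) for k, v in scores.items()})

    # A simple logit-style share rule: share proportional to exp(preference).
    vals = np.array(list(scores.values()))
    share = np.exp(vals) / np.exp(vals).sum()
    print("predicted shares:", dict(zip(scores, share.round(2))))
    ```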

  2. A Demonstration of Regression False Positive Selection in Data Mining

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2014-01-01

    Business analytics courses, such as marketing research, data mining, forecasting, and advanced financial modeling, have substantial predictive modeling components. The predictive modeling in these courses requires students to estimate and test many linear regressions. As a result, false positive variable selection ("type I errors") is…
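
    A small simulation in the spirit of the demonstration described above (the details are my own, not the article's): when a response is regressed on many pure-noise candidate predictors, roughly 5% of coefficients appear "significant" at alpha = 0.05, so selection over many candidates routinely admits spurious variables.

    ```python
    # Simulated false-positive selection: predictors and response are independent noise,
    # yet some coefficients pass the usual t-test threshold by chance alone.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_obs, n_candidates, n_trials, alpha = 100, 20, 200, 0.05

    false_positives = 0
    for _ in range(n_trials):
        X = rng.normal(size=(n_obs, n_candidates))   # noise "candidate" predictors
        y = rng.normal(size=n_obs)                   # response unrelated to X
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        false_positives += int((fit.pvalues[1:] < alpha).sum())

    print(f"avg spurious 'significant' predictors per regression: "
          f"{false_positives / n_trials:.2f} (expected about {n_candidates * alpha:.1f})")
    ```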

  3. The use of the logistic model in space motion sickness prediction

    NASA Technical Reports Server (NTRS)

    Lin, Karl K.; Reschke, Millard F.

    1987-01-01

    The one-equation and the two-equation logistic models were used to predict subjects' susceptibility to motion sickness in KC-135 parabolic flights using data from other ground-based motion sickness tests. The results show that the logistic models correctly predicted substantially more cases (by an average of 13 percent) in the data subset used for model building. Overall, the logistic models achieved 53 to 65 percent correct predictions of the three endpoint parameters, whereas the Bayes linear discriminant procedure ranged from 48 to 65 percent correct for the cross-validation sample.
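
    A hedged sketch of the kind of comparison described above, on synthetic data: a logistic model and a linear discriminant are cross-validated on invented "ground-based test" scores to predict a binary susceptibility endpoint. Nothing here reproduces the KC-135 study itself.

    ```python
    # Synthetic comparison of logistic regression vs. a linear discriminant classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 200
    ground_scores = rng.normal(size=(n, 3))                      # hypothetical ground-test scores
    susceptible = ((ground_scores @ np.array([0.8, 0.5, 0.0])
                    + rng.normal(scale=1.5, size=n)) > 0).astype(int)

    for name, model in [("logistic model", LogisticRegression()),
                        ("linear discriminant", LinearDiscriminantAnalysis())]:
        acc = cross_val_score(model, ground_scores, susceptible, cv=5).mean()
        print(f"{name}: {acc:.2f} cross-validated accuracy")
    ```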

  4. Consumer preference models: fuzzy theory approach

    NASA Astrophysics Data System (ADS)

    Turksen, I. B.; Wilson, I. A.

    1993-12-01

    Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).

  5. Performance and robustness of penalized and unpenalized methods for genetic prediction of complex human disease.

    PubMed

    Abraham, Gad; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael

    2013-02-01

    A central goal of medical genetics is to accurately predict complex disease from genotypes. Here, we present a comprehensive analysis of simulated and real data using lasso and elastic-net penalized support-vector machine models, a mixed-effects linear model, a polygenic score, and unpenalized logistic regression. In simulation, the sparse penalized models achieved lower false-positive rates and higher precision than the other methods for detecting causal SNPs. The common practice of prefiltering SNP lists for subsequent penalized modeling was examined and shown to substantially reduce the ability to recover the causal SNPs. Using genome-wide SNP profiles across eight complex diseases within cross-validation, lasso and elastic-net models achieved substantially better predictive ability in celiac disease, type 1 diabetes, and Crohn's disease, and had equivalent predictive ability in the rest, with the results in celiac disease strongly replicating between independent datasets. We investigated the effect of linkage disequilibrium on the predictive models, showing that the penalized methods leverage this information to their advantage, compared with methods that assume SNP independence. Our findings show that sparse penalized approaches are robust across different disease architectures, producing as good as or better phenotype predictions and variance explained. This has fundamental ramifications for the selection and future development of methods to genetically predict human disease. © 2012 WILEY PERIODICALS, INC.
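
    For the flavour of the penalized-versus-unpenalized comparison, here is a minimal sketch on synthetic genotypes with a few causal SNPs; it is not the authors' pipeline, and the sample sizes, penalties, and effect sizes are arbitrary.

    ```python
    # Synthetic genotype matrix (0/1/2 allele counts) with a sparse causal signal,
    # compared across lasso, elastic-net, and (nearly) unpenalized logistic regression.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    n, p, n_causal = 400, 200, 10
    genotypes = rng.integers(0, 3, size=(n, p)).astype(float)
    beta = np.zeros(p); beta[:n_causal] = 0.5                    # sparse causal effects
    liability = genotypes @ beta + rng.normal(scale=2.0, size=n)
    disease = (liability > np.median(liability)).astype(int)

    models = {
        "lasso (L1)":  LogisticRegression(penalty="l1", solver="saga", C=0.05, max_iter=5000),
        "elastic net": LogisticRegression(penalty="elasticnet", solver="saga",
                                          l1_ratio=0.5, C=0.05, max_iter=5000),
        "unpenalized": LogisticRegression(penalty="l2", C=1e6, max_iter=5000),  # effectively no penalty
    }
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)
        auc = cross_val_score(pipe, genotypes, disease, cv=3, scoring="roc_auc").mean()
        print(f"{name:12s} cross-validated AUC: {auc:.2f}")
    ```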

  6. The RAPIDD ebola forecasting challenge: Synthesis and lessons learnt.

    PubMed

    Viboud, Cécile; Sun, Kaiyuan; Gaffey, Robert; Ajelli, Marco; Fumanelli, Laura; Merler, Stefano; Zhang, Qian; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro

    2018-03-01

    Infectious disease forecasting is gaining traction in the public health community; however, limited systematic comparisons of model performance exist. Here we present the results of a synthetic forecasting challenge inspired by the West African Ebola crisis in 2014-2015 and involving 16 international academic teams and US government agencies, and compare the predictive performance of 8 independent modeling approaches. Challenge participants were invited to predict 140 epidemiological targets across 5 different time points of 4 synthetic Ebola outbreaks, each involving different levels of interventions and "fog of war" in outbreak data made available for predictions. Prediction targets included 1-4 week-ahead case incidences, outbreak size, peak timing, and several natural history parameters. With respect to weekly case incidence targets, ensemble predictions based on a Bayesian average of the 8 participating models outperformed any individual model and did substantially better than a null auto-regressive model. There was no relationship between model complexity and prediction accuracy; however, the top performing models for short-term weekly incidence were reactive models with few parameters, fitted to a short and recent part of the outbreak. Individual model outputs and ensemble predictions improved with data accuracy and availability; by the second time point, just before the peak of the epidemic, estimates of final size were within 20% of the target. The 4th challenge scenario - mirroring an uncontrolled Ebola outbreak with substantial data reporting noise - was poorly predicted by all modeling teams. Overall, this synthetic forecasting challenge provided a deep understanding of model performance under controlled data and epidemiological conditions. We recommend such "peace time" forecasting challenges as key elements to improve coordination and inspire collaboration between modeling groups ahead of the next pandemic threat, and to assess model forecasting accuracy for a variety of known and hypothetical pathogens. Published by Elsevier B.V.
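
    As a toy illustration of the ensembling idea (likelihood-weighted averaging of competing forecasts against a naive null), the sketch below uses entirely synthetic weekly incidence and three invented forecasters; it is not the challenge's actual Bayesian model averaging procedure.

    ```python
    # Toy forecast ensemble: weights proportional to each model's Gaussian likelihood
    # on early weeks, then a weighted average is scored on later weeks against a
    # "last observation carried forward" null forecast.
    import numpy as np

    rng = np.random.default_rng(3)
    truth = np.array([12, 30, 70, 150, 260, 310, 280, 200], dtype=float)   # weekly cases

    forecasts = {
        "model_A": truth * 1.10 + rng.normal(0, 10, truth.size),
        "model_B": truth * 0.85 + rng.normal(0, 15, truth.size),
        "model_C": truth + rng.normal(0, 40, truth.size),
    }
    null = np.r_[truth[0], truth[:-1]]                    # repeat last week's count

    train = slice(0, 4)                                   # weight models on the first four weeks
    loglik = {k: -0.5 * np.sum((f[train] - truth[train]) ** 2) / 30.0 ** 2
              for k, f in forecasts.items()}
    w = np.exp(np.array(list(loglik.values()))); w /= w.sum()
    ensemble = sum(wi * f for wi, f in zip(w, forecasts.values()))

    mae = lambda f: float(np.mean(np.abs(f[4:] - truth[4:])))   # score on held-out weeks
    print({k: round(mae(f), 1) for k, f in forecasts.items()})
    print("ensemble:", round(mae(ensemble), 1), "| null:", round(mae(null), 1))
    ```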

  7. Predicting dimensions of personality disorder from domains and facets of the Five-Factor Model.

    PubMed

    Reynolds, S K; Clark, L A

    2001-04-01

    We compared the utility of several trait models for describing personality disorder in a heterogeneous clinical sample (N = 94). Participants completed the Schedule for Nonadaptive and Adaptive Personality (SNAP; Clark, 1993b), a self-report measure that assesses traits relevant to personality disorder, and two measures of the Five-Factor Model: the Revised NEO Personality Inventory (NEO-PI-R; Costa and McCrae, 1992) and the Big Five Inventory (BFI; John, Donahue, & Kentle, 1991). Regression analyses indicated substantial overlap between the SNAP scales and the NEO-PI-R facets. In addition, use of the NEO-PI-R facets afforded substantial improvement over the Five-Factor Model domains in predicting interview-based ratings of DSM-IV personality disorder (American Psychiatric Association, 1994), such that the NEO facets and the SNAP scales demonstrated roughly equivalent levels of predictive power. Results support assessment of the full range of NEO-PI-R facets over the Five-Factor Model domains for both research and clinical use.

  8. Predicting post-fire tree mortality for 14 conifers in the Pacific Northwest, USA: Model evaluation, development, and thresholds

    Treesearch

    Lindsay M. Grayson; Robert A. Progar; Sharon M. Hood

    2017-01-01

    Fire is a driving force in the North American landscape and predicting post-fire tree mortality is vital to land management. Post-fire tree mortality can have substantial economic and social impacts, and natural resource managers need reliable predictive methods to anticipate potential mortality following fire events. Current fire mortality models are limited to a few...

  9. Using GPS, GIS, and Accelerometer Data to Predict Transportation Modes.

    PubMed

    Brondeel, Ruben; Pannier, Bruno; Chaix, Basile

    2015-12-01

    Active transportation is a substantial source of physical activity, which has a positive influence on many health outcomes. A survey of transportation modes for each trip is challenging, time-consuming, and requires substantial financial investments. This study proposes a passive collection method and the prediction of modes at the trip level using random forests. The RECORD GPS study collected real-life trip data from 236 participants over 7 d, including the transportation mode, global positioning system, geographical information systems, and accelerometer data. A prediction model of transportation modes was constructed using the random forests method. Finally, we investigated the performance of models on the basis of a limited number of participants/trips to predict transportation modes for a large number of trips. The full model had a correct prediction rate of 90%. A simpler model of global positioning system explanatory variables combined with geographical information systems variables performed nearly as well. Relatively good predictions could be made using a model based on the 991 trips of the first 30 participants. This study uses real-life data from a large sample set to test a method for predicting transportation modes at the trip level, thereby providing a useful complement to time unit-level prediction methods. By enabling predictions on the basis of a limited number of observations, this method may decrease the workload for participants/researchers and provide relevant trip-level data to investigate relations between transportation and health.
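
    A hedged sketch of the modelling step only: a random forest classifying trips into modes from a few invented trip-level features (speed, accelerometer counts, proximity to transit). The RECORD GPS feature set and data are not reproduced here.

    ```python
    # Random-forest classification of synthetic trips into four transportation modes.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n = 600
    mode_idx = rng.integers(0, 4, size=n)                 # 0 walk, 1 bike, 2 car, 3 transit
    modes = np.array(["walk", "bike", "car", "transit"])[mode_idx]

    median_speed = np.array([4, 15, 35, 25])[mode_idx] + rng.normal(0, 3, n)      # km/h (GPS)
    accel_counts = np.array([900, 600, 100, 200])[mode_idx] + rng.normal(0, 80, n)
    dist_to_transit = np.where(mode_idx == 3, 50, 400) + rng.normal(0, 60, n)     # metres (GIS)
    X = np.column_stack([median_speed, accel_counts, dist_to_transit])

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", round(cross_val_score(rf, X, modes, cv=5).mean(), 2))
    ```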

  10. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture.

    PubMed

    Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv

    2017-02-01

    Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57 K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracy of GEBV from all tested GS models was substantially higher than that of the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iteration 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%) and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and even substantially lower, to 0.22 to 0.25, when they were not full-sibs. Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher than estimates obtained from the traditional pedigree-based BLUP model for BCWD resistance. Overall, we found that, using a much smaller training sample size than in similar studies in livestock, GS can substantially improve the selection accuracy and genetic gains for this trait in a commercial rainbow trout breeding population.

  11. Utilizing multiple scale models to improve predictions of extra-axial hemorrhage in the immature piglet.

    PubMed

    Scott, Gregory G; Margulies, Susan S; Coats, Brittany

    2016-10-01

    Traumatic brain injury (TBI) is a leading cause of death and disability in the USA. To help understand and better predict TBI, researchers have developed complex finite element (FE) models of the head which incorporate many biological structures such as scalp, skull, meninges, brain (with gray/white matter differentiation), and vasculature. However, most models drastically simplify the membranes and substructures between the pia and arachnoid membranes. We hypothesize that substructures in the pia-arachnoid complex (PAC) contribute substantially to brain deformation following head rotation, and that when included in FE models accuracy of extra-axial hemorrhage prediction improves. To test these hypotheses, microscale FE models of the PAC were developed to span the variability of PAC substructure anatomy and regional density. The constitutive response of these models were then integrated into an existing macroscale FE model of the immature piglet brain to identify changes in cortical stress distribution and predictions of extra-axial hemorrhage (EAH). Incorporating regional variability of PAC substructures substantially altered the distribution of principal stress on the cortical surface of the brain compared to a uniform representation of the PAC. Simulations of 24 non-impact rapid head rotations in an immature piglet animal model resulted in improved accuracy of EAH prediction (to 94 % sensitivity, 100 % specificity), as well as a high accuracy in regional hemorrhage prediction (to 82-100 % sensitivity, 100 % specificity). We conclude that including a biofidelic PAC substructure variability in FE models of the head is essential for improved predictions of hemorrhage at the brain/skull interface.

  12. The transferability of safety-driven access management models for application to other sites.

    DOT National Transportation Integrated Search

    2001-01-01

    Several research studies have produced mathematical models that predict the safety impacts of selected access management techniques. Since new models require substantial resources to construct, this study evaluated five existing models with regard to...

  13. Data worth and prediction uncertainty for pesticide transport and fate models in Nebraska and Maryland, United States

    USGS Publications Warehouse

    Nolan, Bernard T.; Malone, Robert W.; Doherty, John E.; Barbash, Jack E.; Ma, Liwang; Shaner, Dale L.

    2015-01-01

    CONCLUSIONS: Although the observed data were sparse, they substantially reduced prediction uncertainty in unsampled regions of pesticide breakthrough curves. Nitrate evidently functioned as a surrogate for soil hydraulic data in well-drained loam soils conducive to conservative transport of nitrogen. Pesticide properties and macropore parameters could most benefit from improved characterization to further reduce model misfit and prediction uncertainty.

  14. Modeling of exposure to carbon monoxide in fires

    NASA Technical Reports Server (NTRS)

    Cagliostro, D. E.

    1980-01-01

    A mathematical model is developed to predict carboxyhemoglobin concentrations in regions of the body for short exposures to carbon monoxide levels expected during escape from aircraft fires. The model includes the respiratory and circulatory dynamics of absorption and distribution of carbon monoxide and carboxyhemoglobin. Predictions of carboxyhemoglobin concentrations are compared to experimental values obtained for human exposures to constant high carbon monoxide levels. Predictions are within 20% of experimental values. For short exposure times, transient concentration effects are predicted. The effect of stress is studied and found to increase carboxyhemoglobin levels substantially compared to a rest state.

  15. Biogeochemical modeling of CO2 and CH4 production in anoxic Arctic soil microcosms

    NASA Astrophysics Data System (ADS)

    Tang, Guoping; Zheng, Jianqiu; Xu, Xiaofeng; Yang, Ziming; Graham, David E.; Gu, Baohua; Painter, Scott L.; Thornton, Peter E.

    2016-09-01

    Soil organic carbon turnover to CO2 and CH4 is sensitive to soil redox potential and pH conditions. However, land surface models do not consider redox and pH in the aqueous phase explicitly, thereby limiting their use for making predictions in anoxic environments. Using recent data from incubations of Arctic soils, we extend the Community Land Model with coupled carbon and nitrogen (CLM-CN) decomposition cascade to include simple organic substrate turnover, fermentation, Fe(III) reduction, and methanogenesis reactions, and assess the efficacy of various temperature and pH response functions. Incorporating the Windermere Humic Aqueous Model (WHAM) enables us to approximately describe the observed pH evolution without additional parameterization. Although Fe(III) reduction is normally assumed to compete with methanogenesis, the model predicts that Fe(III) reduction raises the pH from acidic to neutral, thereby reducing environmental stress to methanogens and accelerating methane production when substrates are not limiting. The equilibrium speciation predicts a substantial increase in CO2 solubility as pH increases, and taking into account CO2 adsorption to surface sites of metal oxides further decreases the predicted headspace gas-phase fraction at low pH. Without adequate representation of these speciation reactions, as well as the impacts of pH, temperature, and pressure, the CO2 production from closed microcosms can be substantially underestimated based on headspace CO2 measurements only. Our results demonstrate the efficacy of geochemical models for simulating soil biogeochemistry and provide predictive understanding and mechanistic representations that can be incorporated into land surface models to improve climate predictions.

  16. Small angle X-ray scattering and cross-linking for data assisted protein structure prediction in CASP 12 with prospects for improved accuracy.

    PubMed

    Ogorzalek, Tadeusz L; Hura, Greg L; Belsom, Adam; Burnett, Kathryn H; Kryshtafovych, Andriy; Tainer, John A; Rappsilber, Juri; Tsutakawa, Susan E; Fidelis, Krzysztof

    2018-03-01

    Experimental data offers empowering constraints for structure prediction. These constraints can be used to filter equivalently scored models or more powerfully within optimization functions toward prediction. In CASP12, Small Angle X-ray Scattering (SAXS) and Cross-Linking Mass Spectrometry (CLMS) data, measured on an exemplary set of novel fold targets, were provided to the CASP community of protein structure predictors. As solution-based techniques, SAXS and CLMS can efficiently measure states of the full-length sequence in its native solution conformation and assembly. However, this experimental data did not substantially improve prediction accuracy judged by fits to crystallographic models. One issue, beyond intrinsic limitations of the algorithms, was a disconnect between crystal structures and solution-based measurements. Our analyses show that many targets had substantial percentages of disordered regions (up to 40%) or were multimeric or both. Thus, solution measurements of flexibility and assembly support variations that may confound prediction algorithms trained on crystallographic data and expecting globular fully-folded monomeric proteins. Here, we consider the CLMS and SAXS data collected, the information in these solution measurements, and the challenges in incorporating them into computational prediction. As improvement opportunities were only partly realized in CASP12, we provide guidance on how data from the full-length biological unit and the solution state can better aid prediction of the folded monomer or subunit. We furthermore describe strategic integrations of solution measurements with computational prediction programs with the aim of substantially improving foundational knowledge and the accuracy of computational algorithms for biologically-relevant structure predictions for proteins in solution. © 2018 Wiley Periodicals, Inc.

  17. Temperature variability is a key component in accurately forecasting the effects of climate change on pest phenology.

    PubMed

    Merrill, Scott C; Peairs, Frank B

    2017-02-01

    Models describing the effects of climate change on arthropod pest ecology are needed to help mitigate and adapt to forthcoming changes. Challenges arise because climate data are at resolutions that do not readily synchronize with arthropod biology. Here we explain how multiple sources of climate and weather data can be synthesized to quantify the effects of climate change on pest phenology. Predictions of phenological events differ substantially between models that incorporate scale-appropriate temperature variability and models that do not. As an illustrative example, we predicted adult emergence of a pest of sunflower, the sunflower stem weevil Cylindrocopturus adspersus (LeConte). Predictions of the timing of phenological events differed by an average of 11 days between models with different temperature variability inputs. Moreover, as temperature variability increases, developmental rates accelerate. Our work details a phenological modeling approach intended to help develop tools to plan for and mitigate the effects of climate change. Results show that selection of scale-appropriate temperature data is of more importance than selecting a climate change emission scenario. Predictions derived without appropriate temperature variability inputs will likely result in substantial phenological event miscalculations. Additionally, results suggest that increased temperature instability will lead to accelerated pest development. © 2016 Society of Chemical Industry.
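
    The scale-appropriate-variability point can be made with a toy degree-day calculation (the thresholds and parameters below are invented, not the weevil's): adding daily variability around the same seasonal mean shifts the predicted emergence date, because development accrues only above the lower threshold.

    ```python
    # Degree-day accumulation with and without daily temperature variability.
    import numpy as np

    rng = np.random.default_rng(6)
    days = np.arange(1, 181)                                   # days of the growing season
    mean_temp = 5 + 15 * np.sin(np.pi * days / 365)            # smooth seasonal mean (deg C)
    noisy_temp = mean_temp + rng.normal(0, 4, days.size)       # same mean, added variability

    lower_threshold, required_dd = 10.0, 250.0                 # hypothetical development thresholds

    def emergence_day(temps):
        dd = np.cumsum(np.clip(temps - lower_threshold, 0, None))   # degree-days above threshold
        return int(days[np.argmax(dd >= required_dd)]) if dd[-1] >= required_dd else None

    print("constant-mean temperatures:", emergence_day(mean_temp))
    print("with daily variability:    ", emergence_day(noisy_temp))
    ```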

  18. Super H-mode: theoretical prediction and initial observations of a new high performance regime for tokamak operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Philip B.; Solomon, Wayne M.; Burrell, Keith H.

    2015-07-21

    A new “Super H-mode” regime is predicted, which enables pedestal height and predicted fusion performance substantially higher than for H-mode operation. This new regime is predicted to exist by the EPED pedestal model, which calculates criticality constraints for peeling-ballooning and kinetic ballooning modes, and combines them to predict the pedestal height and width. EPED usually predicts a single (“H-mode”) pedestal solution for each set of input parameters; however, in strongly shaped plasmas above a critical density, multiple pedestal solutions are found, including the standard “H-mode” solution and a “Super H-mode” solution at substantially larger pedestal height and width. The Super H-mode regime is predicted to be accessible by controlling the trajectory of the density, and to increase fusion performance for ITER, as well as for DEMO designs with strong shaping. A set of experiments on DIII-D has identified the predicted Super H-mode regime, and finds pedestal height and width, and their variation with density, in good agreement with theoretical predictions from the EPED model. Finally, the very high pedestal enables operation at high global beta and high confinement, including the highest normalized beta achieved on DIII-D with a quiescent edge.

  19. Seasonal to interannual Arctic sea ice predictability in current global climate models

    NASA Astrophysics Data System (ADS)

    Tietsche, S.; Day, J. J.; Guemas, V.; Hurlin, W. J.; Keeley, S. P. E.; Matei, D.; Msadek, R.; Collins, M.; Hawkins, E.

    2014-02-01

    We establish the first intermodel comparison of seasonal to interannual predictability of present-day Arctic climate by performing coordinated sets of idealized ensemble predictions with four state-of-the-art global climate models. For Arctic sea ice extent and volume, there is potential predictive skill for lead times of up to 3 years, and potential prediction errors have similar growth rates and magnitudes across the models. Spatial patterns of potential prediction errors differ substantially between the models, but some features are robust. Sea ice concentration errors are largest in the marginal ice zone, and in winter they are almost zero away from the ice edge. Sea ice thickness errors are amplified along the coasts of the Arctic Ocean, an effect that is dominated by sea ice advection. These results give an upper bound on the ability of current global climate models to predict important aspects of Arctic climate.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branstator, Grant

    The overall aim of our project was to quantify and characterize predictability of the climate as it pertains to decadal time scale predictions. By predictability we mean the degree to which a climate forecast can be distinguished from the climate that exists at initial forecast time, taking into consideration the growth of uncertainty that occurs as a result of the climate system being chaotic. In our project we were especially interested in predictability that arises from initializing forecasts from some specific state, though we also contrast this predictability with predictability arising from forecasting the reaction of the system to external forcing – for example, changes in greenhouse gas concentration. Also, we put special emphasis on the predictability of prominent intrinsic patterns of the system because they often dominate system behavior. Highlights from this work include:

    • Development of novel methods for estimating the predictability of climate forecast models.

    • Quantification of the initial value predictability limits of ocean heat content and the overturning circulation in the Atlantic as they are represented in various state-of-the-art climate models. These limits varied substantially from model to model but on average were about a decade, with North Atlantic heat content tending to be more predictable than North Pacific heat content.

    • Comparison of predictability resulting from knowledge of the current state of the climate system with predictability resulting from estimates of how the climate system will react to changes in greenhouse gas concentrations. It turned out that knowledge of the initial state produces a larger impact on forecasts for the first 5 to 10 years of projections.

    • Estimation of the predictability of dominant patterns of ocean variability, including well-known patterns of variability in the North Pacific and North Atlantic. For the most part these patterns were predictable for 5 to 10 years.

    • Determination of especially predictable patterns in the North Atlantic. The most predictable of these retain predictability substantially longer than generic patterns, with some being predictable for two decades.

  1. The case for the relativistic hot big bang cosmology

    NASA Technical Reports Server (NTRS)

    Peebles, P. J. E.; Schramm, D. N.; Kron, R. G.; Turner, E. L.

    1991-01-01

    What has become the standard model in cosmology is described, and some highlights are presented of the now substantial range of evidence that most cosmologists believe convincingly establishes this model, the relativistic hot big bang cosmology. It is shown that this model has yielded a set of interpretations and successful predictions that substantially outnumber the elements used in devising the theory, with no well-established empirical contradictions. Brief speculations are made on how the open puzzles and work in progress might affect future developments in this field.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Zheng, Jianqiu; Xu, Xiaofeng

    Soil organic carbon turnover to CO2 and CH4 is sensitive to soil redox potential and pH conditions. However, land surface models do not consider redox and pH in the aqueous phase explicitly, thereby limiting their use for making predictions in anoxic environments. Using recent data from incubations of Arctic soils, we extend the Community Land Model with coupled carbon and nitrogen (CLM-CN) decomposition cascade to include simple organic substrate turnover, fermentation, Fe(III) reduction, and methanogenesis reactions, and assess the efficacy of various temperature and pH response functions. Incorporating the Windermere Humic Aqueous Model (WHAM) enables us to approximately describe the observed pH evolution without additional parameterization. Though Fe(III) reduction is normally assumed to compete with methanogenesis, the model predicts that Fe(III) reduction raises the pH from acidic to neutral, thereby reducing environmental stress to methanogens and accelerating methane production when substrates are not limiting. Furthermore, the equilibrium speciation predicts a substantial increase in CO2 solubility as pH increases, and taking into account CO2 adsorption to surface sites of metal oxides further decreases the predicted headspace gas-phase fraction at low pH. Without adequate representation of these speciation reactions, as well as the impacts of pH, temperature, and pressure, the CO2 production from closed microcosms can be substantially underestimated based on headspace CO2 measurements only. Our results demonstrate the efficacy of geochemical models for simulating soil biogeochemistry and provide predictive understanding and mechanistic representations that can be incorporated into land surface models to improve climate predictions.

  3. High-frequency predictions for number counts and spectral properties of extragalactic radio sources. New evidence of a break at mm wavelengths in spectra of bright blazar sources

    NASA Astrophysics Data System (ADS)

    Tucci, M.; Toffolatti, L.; de Zotti, G.; Martínez-González, E.

    2011-09-01

    We present models to predict high-frequency counts of extragalactic radio sources using physically grounded recipes to describe the complex spectral behaviour of blazars that dominate the mm-wave counts at bright flux densities. We show that simple power-law spectra are ruled out by high-frequency (ν ≥ 100 GHz) data. These data also strongly constrain models featuring the spectral breaks predicted by classical physical models for the synchrotron emission produced in jets of blazars. A model dealing with blazars as a single population is, at best, only marginally consistent with data coming from current surveys at high radio frequencies. Our most successful model assumes different distributions of break frequencies, νM, for BL Lacs and flat-spectrum radio quasars (FSRQs). The former objects have substantially higher values of νM, implying that the synchrotron emission comes from more compact regions; therefore, a substantial increase of the BL Lac fraction at high radio frequencies and at bright flux densities is predicted. Remarkably, our best model is able to give a very good fit to all the observed data on number counts and on distributions of spectral indices of extragalactic radio sources at frequencies above 5 and up to 220 GHz. Predictions for the forthcoming sub-mm blazar counts from Planck, at the highest HFI frequencies, and from Herschel surveys are also presented. Appendices are available in electronic form at http://www.aanda.org

  4. A new seasonal-deciduous spring phenology submodel in the Community Land Model 4.5: impacts on carbon and water cycling under future climate scenarios.

    PubMed

    Chen, Min; Melaas, Eli K; Gray, Josh M; Friedl, Mark A; Richardson, Andrew D

    2016-11-01

    A spring phenology model that combines photoperiod with accumulated heating and chilling to predict spring leaf-out dates is optimized using PhenoCam observations and coupled into the Community Land Model (CLM) 4.5. In head-to-head comparison (using satellite data from 2003 to 2013 for validation) for model grid cells over the Northern Hemisphere deciduous broadleaf forests (5.5 million km²), we found that the revised model substantially outperformed the standard CLM seasonal-deciduous spring phenology submodel at both coarse (0.9 × 1.25°) and fine (1 km) scales. The revised model also does a better job of representing recent (decadal) phenological trends observed globally by MODIS, as well as long-term trends (1950-2014) in the PEP725 European phenology dataset. Moreover, forward model runs suggested a stronger advancement (up to 11 days) of spring leaf-out by the end of the 21st century for the revised model. Trends toward earlier advancement are predicted for deciduous forests across the whole Northern Hemisphere boreal and temperate deciduous forest region for the revised model, whereas the standard model predicts earlier leaf-out in colder regions, but later leaf-out in warmer regions, and no trend globally. The earlier spring leaf-out predicted by the revised model resulted in enhanced gross primary production (up to 0.6 Pg C yr⁻¹) and evapotranspiration (up to 24 mm yr⁻¹) when results were integrated across the study region. These results suggest that the standard seasonal-deciduous submodel in CLM should be reconsidered, otherwise substantial errors in predictions of key land-atmosphere interactions and feedbacks may result. © 2016 John Wiley & Sons Ltd.

  5. Model of cohesive properties and structural phase transitions in non-metallic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majewski, J.A.; Vogl, P.

    1986-01-01

    We have developed a simple, yet microscopic and universal model for cohesive properties of solids. This model explains the physical mechanisms determining the chemical bonding and semiquantitatively predicts static and dynamic cohesive properties. It predicts a substantial softening of the long-wavelength transverse optical phonons across the pressure-induced phase transition from the zincblende to rocksalt structure in II-VI compounds. The origin of this softening is shown to be closely related to ferroelectricity.

  6. Biogeochemical modeling of CO2 and CH4 production in anoxic Arctic soil microcosms

    DOE PAGES

    Tang, Guoping; Zheng, Jianqiu; Xu, Xiaofeng; ...

    2016-09-12

    Soil organic carbon turnover to CO2 and CH4 is sensitive to soil redox potential and pH conditions. However, land surface models do not consider redox and pH in the aqueous phase explicitly, thereby limiting their use for making predictions in anoxic environments. Using recent data from incubations of Arctic soils, we extend the Community Land Model with coupled carbon and nitrogen (CLM-CN) decomposition cascade to include simple organic substrate turnover, fermentation, Fe(III) reduction, and methanogenesis reactions, and assess the efficacy of various temperature and pH response functions. Incorporating the Windermere Humic Aqueous Model (WHAM) enables us to approximately describe the observed pH evolution without additional parameterization. Though Fe(III) reduction is normally assumed to compete with methanogenesis, the model predicts that Fe(III) reduction raises the pH from acidic to neutral, thereby reducing environmental stress to methanogens and accelerating methane production when substrates are not limiting. Furthermore, the equilibrium speciation predicts a substantial increase in CO2 solubility as pH increases, and taking into account CO2 adsorption to surface sites of metal oxides further decreases the predicted headspace gas-phase fraction at low pH. Without adequate representation of these speciation reactions, as well as the impacts of pH, temperature, and pressure, the CO2 production from closed microcosms can be substantially underestimated based on headspace CO2 measurements only. Our results demonstrate the efficacy of geochemical models for simulating soil biogeochemistry and provide predictive understanding and mechanistic representations that can be incorporated into land surface models to improve climate predictions.

  7. Evaluation of Load Analysis Methods for NASA's GIII Adaptive Compliant Trailing Edge Project

    NASA Technical Reports Server (NTRS)

    Cruz, Josue; Miller, Eric J.

    2016-01-01

    The Air Force Research Laboratory (AFRL), NASA Armstrong Flight Research Center (AFRC), and FlexSys Inc. (Ann Arbor, Michigan) have collaborated to flight test the Adaptive Compliant Trailing Edge (ACTE) flaps. These flaps were installed on a Gulfstream Aerospace Corporation (GAC) GIII aircraft and tested at AFRC at various deflection angles over a range of flight conditions. External aerodynamic and inertial load analyses were conducted with the intention to ensure that the change in wing loads due to the deployed ACTE flap did not overload the existing baseline GIII wing box structure. The objective of this paper was to substantiate the analysis tools used for predicting wing loads at AFRC. Computational fluid dynamics (CFD) models and distributed mass inertial models were developed for predicting the loads on the wing. The analysis tools included TRANAIR (full potential) and CMARC (panel) models. Aerodynamic pressure data from the analysis codes were validated against static pressure port data collected in-flight. Combined results from the CFD predictions and the inertial load analysis were used to predict the normal force, bending moment, and torque loads on the wing. Wing loads obtained from calibrated strain gages installed on the wing were used for substantiation of the load prediction tools. The load predictions exhibited good agreement compared to the flight load results obtained from calibrated strain gage measurements.

  8. Regional Arctic sea-ice prediction: potential versus operational seasonal forecast skill

    NASA Astrophysics Data System (ADS)

    Bushuk, Mitchell; Msadek, Rym; Winton, Michael; Vecchi, Gabriel; Yang, Xiaosong; Rosati, Anthony; Gudgel, Rich

    2018-06-01

    Seasonal predictions of Arctic sea ice on regional spatial scales are a pressing need for a broad group of stakeholders; however, most assessments of predictability and forecast skill to date have focused on pan-Arctic sea-ice extent (SIE). In this work, we present the first direct comparison of perfect model (PM) and operational (OP) seasonal prediction skill for regional Arctic SIE within a common dynamical prediction system. This assessment is based on two complementary suites of seasonal prediction ensemble experiments performed with a global coupled climate model. First, we present a suite of PM predictability experiments with start dates spanning the calendar year, which are used to quantify the potential regional SIE prediction skill of this system. Second, we assess the system's OP prediction skill for detrended regional SIE using a suite of retrospective initialized seasonal forecasts spanning 1981-2016. In nearly all Arctic regions and for all target months, we find a substantial skill gap between PM and OP predictions of regional SIE. The PM experiments reveal that regional winter SIE is potentially predictable at lead times beyond 12 months, substantially longer than the skill of their OP counterparts. Both the OP and PM predictions display a spring prediction skill barrier for regional summer SIE forecasts, indicating a fundamental predictability limit for summer regional predictions. We find that a similar barrier exists for pan-Arctic sea-ice volume predictions, but is not present for predictions of pan-Arctic SIE. The skill gap identified in this work indicates a promising potential for future improvements in regional SIE predictions.

  9. The interpretation of hard X-ray polarization measurements in solar flares

    NASA Technical Reports Server (NTRS)

    Leach, J.; Emslie, A. G.; Petrosian, V.

    1983-01-01

    Observations of polarization of moderately hard X-rays in solar flares are reviewed and compared with the predictions of recent detailed modeling of hard X-ray bremsstrahlung production by non-thermal electrons. The recent advances in the complexity of the modeling lead to substantially lower predicted polarizations than in earlier models and more fully highlight how various parameters play a role in determining the polarization of the radiation field. The new predicted polarizations are comparable to those predicted by thermal modeling of solar flare hard X-ray production, and both are in agreement with the observations. In the light of these results, new polarization observations with current generation instruments are proposed which could be used to discriminate between non-thermal and thermal models of hard X-ray production in solar flares.

  10. Individualized prediction of lung-function decline in chronic obstructive pulmonary disease

    PubMed Central

    Zafari, Zafar; Sin, Don D.; Postma, Dirkje S.; Löfdahl, Claes-Göran; Vonk, Judith; Bryan, Stirling; Lam, Stephen; Tammemagi, C. Martin; Khakban, Rahman; Man, S.F. Paul; Tashkin, Donald; Wise, Robert A.; Connett, John E.; McManus, Bruce; Ng, Raymond; Hollander, Zsuszanna; Sadatsafavi, Mohsen

    2016-01-01

    Background: The rate of lung-function decline in chronic obstructive pulmonary disease (COPD) varies substantially among individuals. We sought to develop and validate an individualized prediction model for forced expiratory volume at 1 second (FEV1) in current smokers with mild-to-moderate COPD. Methods: Using data from a large long-term clinical trial (the Lung Health Study), we derived mixed-effects regression models to predict future FEV1 values over 11 years according to clinical traits. We modelled heterogeneity by allowing regression coefficients to vary across individuals. Two independent cohorts with COPD were used for validating the equations. Results: We used data from 5594 patients (mean age 48.4 yr, 63% men, mean baseline FEV1 2.75 L) to create the individualized prediction equations. There was significant between-individual variability in the rate of FEV1 decline, with the interval for the annual rate of decline that contained 95% of individuals being −124 to −15 mL/yr for smokers and −83 to 15 mL/yr for sustained quitters. Clinical variables in the final model explained 88% of variation around follow-up FEV1. The C statistic for predicting severity grades was 0.90. Prediction equations performed robustly in the 2 external data sets. Interpretation: A substantial part of individual variation in FEV1 decline can be explained by easily measured clinical variables. The model developed in this work can be used for prediction of future lung health in patients with mild-to-moderate COPD. Trial registration: Lung Health Study — ClinicalTrials.gov, no. NCT00000568; Pan-Canadian Early Detection of Lung Cancer Study — ClinicalTrials.gov, no. NCT00751660 PMID:27486205
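
    A minimal sketch of the modelling idea (random intercepts and random annual slopes per subject), fitted to simulated FEV1 trajectories; it is not the published Lung Health Study equation, and all numbers below are invented.

    ```python
    # Linear mixed model with per-subject random intercept and random time slope.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n_subj, n_visits = 100, 6
    subj = np.repeat(np.arange(n_subj), n_visits)
    year = np.tile(np.arange(n_visits), n_subj).astype(float)
    baseline = rng.normal(2.75, 0.40, n_subj)            # baseline FEV1 (L)
    decline = rng.normal(-0.060, 0.020, n_subj)          # individual annual decline (L/yr)
    fev1 = baseline[subj] + decline[subj] * year + rng.normal(0, 0.05, subj.size)
    df = pd.DataFrame({"fev1": fev1, "year": year, "subj": subj})

    fit = smf.mixedlm("fev1 ~ year", df, groups="subj", re_formula="~year").fit()
    print(fit.summary())   # fixed effects give the mean decline; random effects its spread
    ```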

  11. The effect of changes in space shuttle parameters on the NASA/MSFC multilayer diffusion model predictions of surface HCl concentrations

    NASA Technical Reports Server (NTRS)

    Glasser, M. E.; Rundel, R. D.

    1978-01-01

    A method for formulating these changes into the model input parameters was implemented using a preprocessor program run on a programmed data processor. The results indicate that any changes in the input parameters are small enough to be negligible in comparison to meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model will predict surface hydrogen chloride concentrations exceeding public safety levels.

  12. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes.

    PubMed

    Yates, Katherine L; Mellin, Camille; Caley, M Julian; Radford, Ben T; Meeuwig, Jessica J

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are not fully captured by remotely sensed data. As such, the use of remotely sensed data to model biodiversity represents a compromise between model performance and data availability.
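
    A hedged stand-in for the BRT step: gradient-boosted trees predicting species richness from a few synthetic habitat predictors (their names and effects are invented), with cross-validated performance and relative influence, analogous to the comparisons described above.

    ```python
    # Boosted regression trees on synthetic habitat predictors of species richness.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n = 500
    depth = rng.uniform(5, 80, n)                        # m, e.g. from multibeam bathymetry
    rugosity = rng.gamma(2.0, 0.5, n)                    # seafloor complexity index
    habitat_class = rng.integers(0, 3, n)                # observer-coded habitat category
    richness = (20 - 0.1 * depth + 4 * rugosity
                + np.array([0, 5, 9])[habitat_class] + rng.normal(0, 2, n))
    X = np.column_stack([depth, rugosity, habitat_class])

    brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
    print("cross-validated R^2:", round(cross_val_score(brt, X, richness, cv=5).mean(), 2))
    brt.fit(X, richness)
    print("relative influence:", dict(zip(["depth", "rugosity", "habitat_class"],
                                          brt.feature_importances_.round(2))))
    ```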

  13. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes

    PubMed Central

    Yates, Katherine L.; Mellin, Camille; Caley, M. Julian; Radford, Ben T.; Meeuwig, Jessica J.

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are not fully captured by remotely sensed data. As such, the use of remotely sensed data to model biodiversity represents a compromise between model performance and data availability. PMID:27333202

  14. Potential impact of initialization on decadal predictions as assessed for CMIP5 models

    NASA Astrophysics Data System (ADS)

    Branstator, Grant; Teng, Haiyan

    2012-06-01

    To investigate the potential for initialization to improve decadal range predictions, we quantify the initial value predictability of upper 300 m temperature in the two northern ocean basins for 12 models from Coupled Model Intercomparison Project phase 5 (CMIP5), and we contrast it with the forced predictability in Representative Concentration Pathways (RCP) 4.5 climate change projections. We use a recently introduced method that produces predictability estimates from long control runs. Many initial states are considered, and we find on average 1) initialization has the potential to improve skill in the first 5 years in the North Pacific and the first 9 years in the North Atlantic, and 2) the impact from initialization becomes secondary compared to the impact of RCP4.5 forcing after 6 1/2 and 8 years in the two basins, respectively. Model-to-model and spatial variations in these limits are, however, substantial.

  15. Cure modeling in real-time prediction: How much does it help?

    PubMed

    Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F

    2017-08-01

    Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016 BMC Biomedical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intending to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
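
    To make the cure-mixture idea concrete, the sketch below simulates right-censored event times in which a fraction of subjects is "cured" and never experiences the event, then recovers the cure fraction and Weibull parameters by maximum likelihood. It is a generic illustration under invented parameters, not the cited prediction procedure.

    ```python
    # Weibull cure-mixture: S(t) = pi + (1 - pi) * S0(t); fit (pi, shape, scale) by MLE.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    rng = np.random.default_rng(8)
    n, true_pi, shape, scale, admin_censor = 2000, 0.30, 1.5, 12.0, 36.0
    cured = rng.random(n) < true_pi
    t_event = weibull_min.rvs(shape, scale=scale, size=n, random_state=rng)
    time = np.where(cured, admin_censor, np.minimum(t_event, admin_censor))
    event = (~cured) & (t_event <= admin_censor)              # True = event observed

    def neg_loglik(params):
        logit_pi, log_k, log_lam = params
        pi = 1.0 / (1.0 + np.exp(-logit_pi)); k, lam = np.exp(log_k), np.exp(log_lam)
        f = weibull_min.pdf(time, k, scale=lam)               # density (events)
        s = weibull_min.sf(time, k, scale=lam)                # survival (censored)
        contrib = np.where(event, (1 - pi) * f, pi + (1 - pi) * s)
        return -np.log(np.clip(contrib, 1e-300, None)).sum()  # clip to avoid log(0)

    res = minimize(neg_loglik, x0=[0.0, 0.0, np.log(10.0)], method="Nelder-Mead")
    pi_hat, k_hat, lam_hat = 1 / (1 + np.exp(-res.x[0])), np.exp(res.x[1]), np.exp(res.x[2])
    print(f"estimated cure fraction {pi_hat:.2f}, shape {k_hat:.2f}, scale {lam_hat:.1f}")
    ```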

  16. Childhood maltreatment and adulthood poor sleep quality: a longitudinal study.

    PubMed

    Abajobir, Amanuel A; Kisely, Steve; Williams, Gail; Strathearn, Lane; Najman, Jake M

    2017-08-01

    Available evidence from cross-sectional studies suggests that childhood maltreatment may be associated with a range of sleep disorders. However, these studies have not controlled for potential individual-, familial- and environmental-level confounders. To determine the association between childhood maltreatment and lower sleep quality after adjusting for potential confounders. Data for the present study were obtained from a pre-birth cohort study of 3778 young adults (52.6% female) of the Mater Hospital-University of Queensland Study of Pregnancy follow up at a mean age of 20.6 years. The Mater Hospital-University of Queensland Study of Pregnancy is a prospective Australian pre-birth cohort study of mothers consecutively recruited during their first obstetric clinic visit at Brisbane's Mater Hospital in 1981-1983. Participants completed the Pittsburgh Sleep Quality Index at the 21-year follow up. We linked this dataset to agency-recorded substantiated cases of childhood maltreatment. A series of separate logistic regression models was used to test whether childhood maltreatment predicted lower sleep quality after adjustment for selected confounders. Substantiated physical abuse significantly predicted lower sleep quality in males. Single and multiple forms of childhood maltreatment, including age of maltreatment and number of substantiations, did not predict lower sleep quality in either gender in both crude and adjusted models. Not being married, living in a residential problem area, cigarette smoking and internalising were significantly associated with lower sleep quality in a fully adjusted model for the male-female combined sample. Childhood maltreatment does not appear to predict young adult poor sleep quality, with the exception of physical abuse for males. While childhood maltreatment has been found to predict a range of mental health problems, childhood maltreatment does not appear to predict sleep problems occurring in young adults. Poor sleep quality was accounted for by concurrent social disadvantage, cigarette smoking and internalising. © 2017 Royal Australasian College of Physicians.

  17. Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad

    2016-02-01

    Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
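
    A numpy-only sketch of the pairwise-ranking ingredient: latent head/tail embeddings for one predicate are scored by a dot product and trained with Bayesian Personalized Ranking updates so that observed links outrank randomly corrupted ones. The toy graph, embedding dimension, and learning rate are invented and far smaller than anything YAGO2-scale.

    ```python
    # BPR-style link scoring with latent embeddings on a toy graph.
    import numpy as np

    rng = np.random.default_rng(9)
    n_entities, dim, lr, reg, steps = 30, 8, 0.05, 0.01, 3000
    # Observed (head, tail) links for a single predicate.
    edges = [(i, (i + 1) % n_entities) for i in range(n_entities)] + \
            [(i, (i + 2) % n_entities) for i in range(0, n_entities, 3)]

    H = 0.1 * rng.normal(size=(n_entities, dim))   # head embeddings
    T = 0.1 * rng.normal(size=(n_entities, dim))   # tail embeddings
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    for _ in range(steps):
        h, t_pos = edges[rng.integers(len(edges))]
        t_neg = int(rng.integers(n_entities))                # randomly corrupted tail
        h_vec = H[h].copy()
        g = sigmoid(-(h_vec @ T[t_pos] - h_vec @ T[t_neg]))  # gradient weight of ln sigmoid(margin)
        H[h] += lr * (g * (T[t_pos] - T[t_neg]) - reg * H[h])
        T[t_pos] += lr * (g * h_vec - reg * T[t_pos])
        T[t_neg] += lr * (-g * h_vec - reg * T[t_neg])

    # An observed link should now score higher than a random non-link.
    print("score(0 -> 1): ", round(float(H[0] @ T[1]), 2))
    print("score(0 -> 15):", round(float(H[0] @ T[15]), 2))
    ```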

  18. IMPROVING CHEMICAL TRANSPORT MODEL PREDICTIONS OF ORGANIC AEROSOL: MEASUREMENT AND SIMULATION OF SEMIVOLATILE ORGANIC EMISSIONS FROM MOBILE AND NON-MOBILE SOURCES

    EPA Science Inventory

    Organic material contributes a significant fraction of PM2.5 mass across all regions of the United States, but state-of-the-art chemical transport models often substantially underpredict measured organic aerosol concentrations. Recent revisions to these models that...

  19. A New Hybrid Spatio-temporal Model for Estimating Daily Multi-year PM2.5 Concentrations Across Northeastern USA Using High Resolution Aerosol Optical Depth Data

    NASA Technical Reports Server (NTRS)

    Kloog, Itai; Chudnovsky, Alexandra A.; Just, Allan C.; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A.; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel

    2014-01-01

    The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus currently available 10 km) resolution AOD data. We developed and cross validated models to predict daily PM2.5 at a 1 × 1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003-2011, allowing us to better differentiate daily and long term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the area 1 × 1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then use generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Our model performance was excellent (mean out-of-sample R² = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R² = 0.87, R² = 0.87). In addition, our results revealed very little bias in the predicted concentrations (slope of predictions versus withheld observations = 0.99). Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region.
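    A greatly simplified sketch of the first-stage calibration form described above (day-specific random intercepts with fixed and random AOD and temperature slopes), using the statsmodels mixed-effects API. The file name and column names are hypothetical, and this illustrates the model structure only, not the authors' code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical calibration table: one row per monitor-day with collocated
# 1 km MAIAC AOD, temperature, and measured PM2.5 (columns: pm25, aod, temp, day).
df = pd.read_csv("pm25_aod_calibration.csv")

# Day-specific random intercepts plus random AOD and temperature slopes,
# alongside the corresponding fixed effects.
model = smf.mixedlm("pm25 ~ aod + temp", df, groups=df["day"], re_formula="~aod + temp")
result = model.fit()
print(result.summary())

# Stage-one predictions for grid cells with valid AOD would then feed the
# spatial-smoothing stage used when AOD is missing (not shown here).
```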

  20. A New Hybrid Spatio-Temporal Model For Estimating Daily Multi-Year PM2.5 Concentrations Across Northeastern USA Using High Resolution Aerosol Optical Depth Data.

    PubMed

    Kloog, Itai; Chudnovsky, Alexandra A; Just, Allan C; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel

    2014-10-01

    The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus currently available 10 km) resolution AOD data. We developed and cross validated models to predict daily PM2.5 at a 1×1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003-2011, allowing us to better differentiate daily and long term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the area 1×1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then use generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Our model performance was excellent (mean out-of-sample R² = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R² = 0.87, R² = 0.87). In addition, our results revealed very little bias in the predicted concentrations (slope of predictions versus withheld observations = 0.99). Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region.

  1. A New Hybrid Spatio-Temporal Model For Estimating Daily Multi-Year PM2.5 Concentrations Across Northeastern USA Using High Resolution Aerosol Optical Depth Data

    PubMed Central

    Kloog, Itai; Chudnovsky, Alexandra A.; Just, Allan C.; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A.; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel

    2017-01-01

    Background: The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. Methods: We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus currently available 10 km) resolution AOD data. We developed and cross validated models to predict daily PM2.5 at a 1×1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003–2011, allowing us to better differentiate daily and long term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the area 1×1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then use generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Results: Our model performance was excellent (mean out-of-sample R²=0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R²=0.87, R²=0.87). In addition, our results revealed very little bias in the predicted concentrations (slope of predictions versus withheld observations = 0.99). Conclusion: Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region. PMID:28966552

  2. A dual-process account of auditory change detection.

    PubMed

    McAnally, Ken I; Martin, Russell L; Eramudugolla, Ranmalee; Stuart, Geoffrey W; Irvine, Dexter R F; Mattingley, Jason B

    2010-08-01

    Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed objects with those predicted by change-detection models based on signal detection theory (SDT) and high-threshold theory (HTT). Detected changes were not identified as accurately as predicted by models based on either theory, suggesting that some changes are detected by a process that does not support change identification. Undetected changes were identified as accurately as predicted by the HTT model but much less accurately than predicted by the SDT models. The process underlying change detection was investigated further by determining receiver-operating characteristics (ROCs). ROCs did not conform to those predicted by either an SDT or an HTT model but were well modeled by a dual-process model that incorporated HTT and SDT components. The dual-process model also accurately predicted the rates at which detected and undetected changes were correctly identified.
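    As a hedged illustration of what a dual-process ROC can look like, the sketch below combines an all-or-none threshold component (detection with probability p, which could support identification) with a continuous Gaussian SDT component. The parameterization is generic and the values are illustrative; the paper's exact model specification is not given in this record:

```python
import numpy as np
from scipy.stats import norm

def dual_process_roc(d_prime, threshold_p, criteria):
    """Dual-process ROC: hit = p + (1 - p) * Phi(d' - c), false alarm = Phi(-c).
    threshold_p is the all-or-none (HTT-like) detection probability and d_prime
    the sensitivity of the continuous (SDT-like) component."""
    c = np.asarray(criteria, dtype=float)
    hits = threshold_p + (1.0 - threshold_p) * norm.cdf(d_prime - c)
    false_alarms = norm.cdf(-c)
    return false_alarms, hits

fa, hit = dual_process_roc(d_prime=1.0, threshold_p=0.3, criteria=np.linspace(-2, 3, 11))
print(np.round(fa, 2))
print(np.round(hit, 2))
```

    In this parameterization, the threshold component produces a non-zero y-intercept that a pure equal-variance SDT ROC lacks, which is the kind of qualitative signature such dual-process fits exploit.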

  3. Integrating models to predict regional haze from wildland fire.

    Treesearch

    D. McKenzie; S.M. O' Neill; N. Larkin; R.A. Norheim

    2006-01-01

    Visibility impairment from regional haze is a significant problem throughout the continental United States. A substantial portion of regional haze is produced by smoke from prescribed and wildland fires. Here we describe the integration of four simulation models, an array of GIS raster layers, and a set of algorithms for fire-danger calculations into a modeling...

  4. A Crowdsourcing Approach to Developing and Assessing Prediction Algorithms for AML Prognosis

    PubMed Central

    Noren, David P.; Long, Byron L.; Norel, Raquel; Rrhissorrakrai, Kahn; Hess, Kenneth; Hu, Chenyue Wendy; Bisberg, Alex J.; Schultz, Andre; Engquist, Erik; Liu, Li; Lin, Xihui; Chen, Gregory M.; Xie, Honglei; Hunter, Geoffrey A. M.; Norman, Thea; Friend, Stephen H.; Stolovitzky, Gustavo; Kornblau, Steven; Qutub, Amina A.

    2016-01-01

    Acute Myeloid Leukemia (AML) is a fatal hematological cancer. The genetic abnormalities underlying AML are extremely heterogeneous among patients, making prognosis and treatment selection very difficult. While clinical proteomics data have the potential to improve prognosis accuracy, the quantitative means to do so have yet to be developed. Here we report the results and insights gained from the DREAM 9 Acute Myeloid Leukemia Outcome Prediction Challenge (AML-OPC), a crowdsourcing effort designed to promote the development of quantitative methods for AML prognosis prediction. We identify the most accurate and robust models in predicting patient response to therapy, remission duration, and overall survival. We further investigate patient response to therapy, a clinically actionable prediction, and find that patients classified as resistant to therapy are harder to predict than responsive patients across the 31 models submitted to the challenge. The top two performing models, which held a high sensitivity to these patients, substantially utilized the proteomics data to make predictions. Using these models, we also identify which signaling proteins were useful in predicting patient therapeutic response. PMID:27351836

  5. Progress and Challenges in Subseasonal Prediction

    NASA Technical Reports Server (NTRS)

    Schubert, Siegfried

    2003-01-01

    While substantial advances have occurred over the last few decades in both weather and seasonal prediction, progress in improving predictions on subseasonal time scales (approximately 2 weeks to 2 months) has been slow. In this talk I will highlight some of the recent progress that has been made to improve forecasts on subseasonal time scales and outline the challenges that we face both from an observational and modeling perspective. The talk will be based primarily on the results and conclusions of a recent NASA-sponsored workshop that focused on the subseasonal prediction problem. One of the key conclusions of that workshop was that there is compelling evidence for predictability at forecast lead times substantially longer than two weeks, and that much of that predictability is currently untapped. Tropical diabatic heating and soil wetness were singled out as particularly important processes affecting predictability on these time scales. Predictability was also linked to various low-frequency atmospheric phenomena such as the annular modes in high latitudes (including their connections to the stratosphere), the Pacific/North American pattern, and the Madden-Julian Oscillation. I will end the talk by summarizing the recommendations and plans that have been put forward for accelerating progress on the subseasonal prediction problem.

  6. Chemical transport model simulations of organic aerosol in southern California: model evaluation and gasoline and diesel source contributions

    NASA Astrophysics Data System (ADS)

    Jathar, Shantanu H.; Woody, Matthew; Pye, Havala O. T.; Baker, Kirk R.; Robinson, Allen L.

    2017-03-01

    Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., the POA-SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data. Mobile sources were predicted to contribute 30-40% of the OA in southern California (half of which was SOA), making mobile sources the single largest source contributor to OA in southern California. The remainder of the OA was attributed to non-mobile anthropogenic sources (e.g., cooking, biomass burning), with biogenic sources contributing less than 5% of the total OA. Gasoline sources were predicted to contribute about 13 times more OA than diesel sources; this difference was driven by differences in SOA production. Model predictions highlighted the need to better constrain multi-generational oxidation reactions in chemical transport models.

  7. Analytical prediction of the interior noise for cylindrical models of aircraft fuselages for prescribed exterior noise fields. Phase 2: Models for sidewall trim, stiffened structures and cabin acoustics with floor partition

    NASA Technical Reports Server (NTRS)

    Pope, L. D.; Wilby, E. G.

    1982-01-01

    An airplane interior noise prediction model is developed to determine the important parameters associated with sound transmission into the interiors of airplanes, and to identify appropriate noise control methods. Models for stiffened structures and cabin acoustics with floor partition are developed. Validation studies are undertaken using three test articles: a ring-stringer stiffened cylinder, an unstiffened cylinder with floor partition, and a ring-stringer stiffened cylinder with floor partition and sidewall trim. The noise reductions of the three test articles are computed using the theoretical models and compared to measured values. A statistical analysis of the comparison data indicates that there is no bias in the predictions, although a substantial random error exists, so that a discrepancy of more than five or six dB can be expected for about one out of three predictions.

  8. Comparing two-zone models of dust exposure.

    PubMed

    Jones, Rachael M; Simmons, Catherine E; Boelter, Fred W

    2011-09-01

    The selection and application of mathematical models to work tasks is challenging. Previously, we developed and evaluated a semi-empirical two-zone model that predicts time-weighted average (TWA) concentrations (Ctwa) of dust emitted during the sanding of drywall joint compound. Here, we fit the emission rate and random air speed variables of a mechanistic two-zone model to testing event data and apply and evaluate the model using data from two field studies. We found that the fitted random air speed values and emission rate were sensitive to (i) the size of the near-field and (ii) the objective function used for fitting, but this did not substantially impact predicted dust Ctwa. The mechanistic model predictions were lower than the semi-empirical model predictions and measured respirable dust Ctwa at Site A but were within an acceptable range. At Site B, a 10.5 m3 room, the mechanistic model did not capture the observed difference between PBZ and area Ctwa. The model predicted uniform mixing and predicted dust Ctwa up to an order of magnitude greater than was measured. We suggest that applications of the mechanistic model be limited to contexts where the near-field volume is very small relative to the far-field volume.
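    For context, the mechanistic near-field/far-field (two-zone) mass balance referred to above is standard in the exposure-modeling literature; a hedged sketch with illustrative parameter values (not the fitted emission rate or random air speed from this study) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_zone(t, C, G, beta, Q, V_nf, V_ff):
    """Two-zone mass balance:
    V_nf * dC_nf/dt = G + beta * (C_ff - C_nf)
    V_ff * dC_ff/dt = beta * (C_nf - C_ff) - Q * C_ff
    G: emission rate (mg/min); beta: interzonal airflow (m3/min), which in
    practice is derived from the near-field geometry and random air speed;
    Q: room supply/exhaust flow (m3/min)."""
    C_nf, C_ff = C
    dC_nf = (G + beta * (C_ff - C_nf)) / V_nf
    dC_ff = (beta * (C_nf - C_ff) - Q * C_ff) / V_ff
    return [dC_nf, dC_ff]

# Illustrative parameters; V_ff of 9.5 m3 loosely mirrors a ~10.5 m3 room with a 1 m3 near field.
params = (5.0, 3.0, 10.0, 1.0, 9.5)          # G, beta, Q, V_nf, V_ff
sol = solve_ivp(two_zone, [0, 60], [0.0, 0.0], args=params, dense_output=True)
t = np.linspace(0, 60, 61)
C_nf, C_ff = sol.sol(t)
print("Approximate 60-min TWA, near field:", C_nf.mean(), "far field:", C_ff.mean())
```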

  9. Ensemble Canonical Correlation Prediction of Seasonal Precipitation Over the United States: Raising the Bar for Dynamical Model Forecasts

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.; Kim, Kyu-Myong; Shen, S. P.

    2001-01-01

    This paper presents preliminary results of an ensemble canonical correlation (ECC) prediction scheme developed at the Climate and Radiation Branch, NASA/Goddard Space Flight Center, for determining the potential predictability of regional precipitation and for climate downscaling studies. The scheme is tested on seasonal hindcasts of anomalous precipitation over the continental United States using global sea surface temperature (SST) for 1951-2000. To maximize the forecast skill derived from SST, the world ocean is divided into non-overlapping sectors. The canonical SST modes for each sector are used as the predictors for the ensemble hindcasts. Results show that the ECC yields a substantial (10-25%) increase in prediction skill for all regions of the US in every season compared to traditional CCA prediction schemes. For the boreal winter, the tropical Pacific contributes the largest potential predictability to precipitation in the southwestern and southeastern regions, while the North Pacific and the North Atlantic are responsible for the enhanced forecast skill in the Pacific Northwest, the northern Great Plains and the Ohio Valley. Most importantly, the ECC increases skill for summertime precipitation prediction and substantially reduces the spring predictability barrier over all regions of the US continent. Besides SST, the ECC is designed with the flexibility to include any number of predictor fields, such as soil moisture, snow cover and additional local observations. The enhanced ECC forecast skill provides a new benchmark for evaluating dynamical model forecasts.
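    The core prediction step of a CCA-based scheme can be sketched with scikit-learn; the toy arrays, shapes, and single-sector setup below are assumptions for illustration, not the ECC scheme's actual code, and the scheme described above would wrap this step in a loop over ocean sectors and combine the sector forecasts:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)

# Hypothetical training data: 49 years x 200 SST grid points for one ocean
# sector, and 49 years x 50 US precipitation grid points (anomalies in practice).
sst_train = rng.normal(size=(49, 200))
precip_train = rng.normal(size=(49, 50))
sst_new = rng.normal(size=(1, 200))        # predictor field for the forecast season

cca = CCA(n_components=3)                  # leading canonical modes as predictors
cca.fit(sst_train, precip_train)
precip_forecast = cca.predict(sst_new)     # one sector's contribution to the hindcast
print(precip_forecast.shape)

# An ensemble version would repeat this fit/predict step for each non-overlapping
# sector and combine (e.g., average) the resulting precipitation forecasts.
```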

  10. Study of an intraurban travel demand model incorporating commuter preference variables

    NASA Technical Reports Server (NTRS)

    Holligan, P. E.; Coote, M. A.; Rushmer, C. R.; Fanning, M. L.

    1971-01-01

    The model is based on the substantial travel data base for the nine-county San Francisco Bay Area, provided by the Metropolitan Transportation Commission. The model is of the abstract type, and makes use of commuter attitudes towards modes and simple demographic characteristics of zones in a region to predict interzonal travel by mode for the region. A characterization of the STOL/VTOL mode was extrapolated by means of a subjective comparison of its expected characteristics with those of modes characterized by the survey. Predictions of STOL demand were made for the Bay Area, and an aircraft network was developed to serve this demand. When this aircraft system is compared to the base-case system, the demand for STOL service has increased fivefold, and the resulting economics show considerable benefit from the increased scale of operations. In the previous study all systems required subsidy in varying amounts. The new system shows a substantial profit at an average fare of $3.55 per trip.

  11. SST-Forced Seasonal Simulation and Prediction Skill for Versions of the NCEP/MRF Model.

    NASA Astrophysics Data System (ADS)

    Livezey, Robert E.; Masutani, Michiko; Jil, Ming

    1996-03-01

    The feasibility of using a two-tier approach to provide guidance to operational long-lead seasonal prediction is explored. The approach includes first a forecast of global sea surface temperatures (SSTs) using a coupled general circulation model, followed by an atmospheric forecast using an atmospheric general circulation model (AGCM). For this exploration, ensembles of decade-long integrations of the AGCM driven by observed SSTs and ensembles of integrations of select cases driven by forecast SSTs have been conducted. The ability of the model in these sets of runs to reproduce observed atmospheric conditions has been evaluated with a multiparameter performance analysis. Results have identified performance and skill levels in the specified SST runs, for winters and springs over the Pacific/North America region, that are sufficient to impact operational seasonal predictions in years with major El Niño-Southern Oscillation (ENSO) episodes. Further, these levels were substantially reproduced in the forecast SST runs for 1-month leads and in many instances for up to one-season leads. In fact, overall the 0- and 1-month-lead forecasts of seasonal temperature over the United States for three falls and winters with major ENSO episodes were substantially better than corresponding official forecasts. Thus, there is considerable reason to develop a dynamical component for the official seasonal forecast process.

  12. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    PubMed

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
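    The record gives no code; as a rough illustration of the IPCW idea only (weights taken from a Kaplan-Meier estimate of the censoring distribution, in the spirit of Uno's estimator rather than the authors' exact formulation), a minimal NumPy sketch is:

```python
import numpy as np

def censoring_km(times, events):
    """Kaplan-Meier estimate of the censoring survival function G(t), obtained by
    treating censored observations (event == 0) as the outcome. Ties are handled
    only approximately in this sketch."""
    order = np.argsort(times, kind="stable")
    t_sorted, cens = times[order], 1 - events[order]
    at_risk = len(times) - np.arange(len(times))
    surv = np.concatenate(([1.0], np.cumprod(1.0 - cens / at_risk)))
    return lambda q: surv[np.searchsorted(t_sorted, q, side="left")]  # left-continuous G(t-)

def ipcw_cindex(times, events, risk_scores, tau=None):
    """IPCW-weighted concordance: comparable pairs (i, j) have an observed event
    for i before min(T_j, tau) and are weighted by 1 / G(T_i)^2; a pair is
    concordant when the higher-risk subject fails first."""
    times, events, risk = map(np.asarray, (times, events, risk_scores))
    tau = times.max() if tau is None else tau
    G = censoring_km(times, events)
    num = den = 0.0
    for i in np.where((events == 1) & (times < tau))[0]:
        w = 1.0 / G(times[i]) ** 2
        later = times > times[i]
        num += w * (np.sum(later & (risk[i] > risk)) + 0.5 * np.sum(later & (risk[i] == risk)))
        den += w * np.sum(later)
    return num / den

# Toy usage with synthetic data: higher risk scores correspond to shorter true times.
rng = np.random.default_rng(0)
t_true, c = rng.exponential(5, 200), rng.exponential(7, 200)
obs, ev = np.minimum(t_true, c), (t_true <= c).astype(int)
print(round(ipcw_cindex(obs, ev, risk_scores=-t_true + rng.normal(0, 1, 200)), 3))
```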

  13. Influence of vapor wall loss in laboratory chambers on yields of secondary organic aerosol

    PubMed Central

    Zhang, Xuan; Cappa, Christopher D.; Jathar, Shantanu H.; McVay, Renee C.; Ensberg, Joseph J.; Kleeman, Michael J.; Seinfeld, John H.

    2014-01-01

    Secondary organic aerosol (SOA) constitutes a major fraction of submicrometer atmospheric particulate matter. Quantitative simulation of SOA within air-quality and climate models—and its resulting impacts—depends on the translation of SOA formation observed in laboratory chambers into robust parameterizations. Worldwide data have been accumulating indicating that model predictions of SOA are substantially lower than ambient observations. Although possible explanations for this mismatch have been advanced, none has addressed the laboratory chamber data themselves. Losses of particles to the walls of chambers are routinely accounted for, but there has been little evaluation of the effects on SOA formation of losses of semivolatile vapors to chamber walls. Here, we experimentally demonstrate that such vapor losses can lead to substantially underestimated SOA formation, by factors as much as 4. Accounting for such losses has the clear potential to bring model predictions and observations of organic aerosol levels into much closer agreement. PMID:24711404

  14. Rising sea levels will reduce extreme temperature variations in tide-dominated reef habitats.

    PubMed

    Lowe, Ryan Joseph; Pivan, Xavier; Falter, James; Symonds, Graham; Gruber, Renee

    2016-08-01

    Temperatures within shallow reefs often differ substantially from those in the surrounding ocean; therefore, predicting future patterns of thermal stresses and bleaching at the scale of reefs depends on accurately predicting reef heat budgets. We present a new framework for quantifying how tidal and solar heating cycles interact with reef morphology to control diurnal temperature extremes within shallow, tidally forced reefs. Using data from northwestern Australia, we construct a heat budget model to investigate how frequency differences between the dominant lunar semidiurnal tide and diurnal solar cycle drive ~15-day modulations in diurnal temperature extremes. The model is extended to show how reefs with tidal amplitudes comparable to their depth, relative to mean sea level, tend to experience the largest temperature extremes globally. As a consequence, we reveal how even a modest sea level rise can substantially reduce temperature extremes within tide-dominated reefs, thereby partially offsetting the local effects of future ocean warming.

  15. The use of mathematical models to inform influenza pandemic preparedness and response

    PubMed Central

    Wu, Joseph T; Cowling, Benjamin J

    2011-01-01

    Summary Influenza pandemics have occurred throughout history and were associated with substantial excess mortality and morbidity. Mathematical models of infectious diseases permit quantitative description of epidemic processes based on the underlying biological mechanisms. Mathematical models have been widely used in the past decade to aid pandemic planning by allowing detailed predictions of the speed of spread of an influenza pandemic and the likely effectiveness of alternative control strategies. During the initial waves of the 2009 influenza pandemic, mathematical models were used to track the spread of the virus, predict the time course of the pandemic and assess the likely impact of large-scale vaccination. While mathematical modeling has made substantial contributions to influenza pandemic preparedness, its use as a real-time tool for pandemic control is currently limited by the lack of essential surveillance information such as serologic data. Mathematical modeling provided a useful framework for analyzing and interpreting surveillance data during the 2009 influenza pandemic, for highlighting limitations in existing pandemic surveillance systems, and for guiding how these systems should be strengthened in order to cope with future epidemics of influenza or other emerging infectious diseases. PMID:21727183
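    As a minimal, generic illustration of the kind of mechanistic transmission model discussed here (a basic SIR system with assumed pandemic-influenza-like parameters, not any specific model from the literature reviewed):

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classic SIR compartmental model in proportions: S + I + R = 1."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

# Assumed values: R0 = beta/gamma = 1.5, mean infectious period of 3 days.
beta, gamma = 0.5, 1.0 / 3.0
sol = solve_ivp(sir, [0, 180], [0.999, 0.001, 0.0], args=(beta, gamma),
                t_eval=np.linspace(0, 180, 181))
print("peak prevalence:", round(sol.y[1].max(), 3))
print("final attack rate:", round(sol.y[2][-1], 3))
```

    Models of this family, extended with age structure, interventions, and vaccination, are what underpin the speed-of-spread and control-strategy predictions described above.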

  16. Kobayashi-Kondo-Maskawa-'t Hooft interaction in pentaquarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dmitrasinovic, V.

    2005-05-01

    We review critically the predictions of pentaquarks in the quark model, in particular, those based on the flavor-spin-dependent (Glozman-Riska) hyperfine interaction and the color-spin (one-gluon-exchange Fermi-Breit) one. We include the antiquark interactions and find that: (1) the exotic SU(3) multiplets are not substantially affected in the flavor-spin model, whereas some of the nonexotic multiplets are; and (2) the variational upper bound on the Ξ⁻⁻-Θ⁺ mass difference in the color-spin hyperfine interaction model is substantially reduced. This leads us to the U_A(1) symmetry-breaking Kobayashi-Kondo-Maskawa-'t Hooft interaction. We discuss some of its phenomenological consequences for pentaquarks.

  17. Numerical Simulation of Bolide Entry with Ground Footprint Prediction

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian; Mathias, Donovan L.; Berger, Marsha J.

    2016-01-01

    As they decelerate through the atmosphere, meteors deposit mass, momentum and energy into the surrounding air at tremendous rates. The entry of such bolides produces strong blast waves that can propagate hundreds of kilometers and cause substantial terrestrial damage even when no ground impact occurs. We present a new simulation technique for airburst blast prediction using a fully conservative, Cartesian-mesh, finite-volume solver and investigate the ability of this method to model far-field propagation over hundreds of kilometers. The work develops mathematical models for the deposition of mass, momentum and energy into the atmosphere and presents verification and validation through canonical problems and the comparison of surface overpressures and blast arrival times with results in the literature for known bolides. The discussion also examines the effects of various approximations to the physics of bolide entry that can substantially decrease the computational expense of these simulations. We present parametric studies to quantify the influence of entry angle, burst height and other parameters on the ground footprint of the airburst, and these values are related to predictions from analytic and handbook methods.

  18. Performance of five surface energy balance models for estimating daily evapotranspiration in high biomass sorghum

    NASA Astrophysics Data System (ADS)

    Wagle, Pradeep; Bhattarai, Nishan; Gowda, Prasanna H.; Kakani, Vijaya G.

    2017-06-01

    Robust evapotranspiration (ET) models are required to predict water usage in a variety of terrestrial ecosystems under different geographical and agrometeorological conditions. As a result, several remote sensing-based surface energy balance (SEB) models have been developed to estimate ET over large regions. However, comparisons of the performance of several SEB models at the same site are limited. In addition, none of the SEB models has been evaluated for its ability to predict ET in rain-fed high biomass sorghum grown for biofuel production. In this paper, we evaluated the performance of five widely used single-source SEB models, namely the Surface Energy Balance Algorithm for Land (SEBAL), Mapping ET with Internalized Calibration (METRIC), the Surface Energy Balance System (SEBS), the Simplified Surface Energy Balance Index (S-SEBI), and the operational Simplified Surface Energy Balance (SSEBop), for estimating ET over a high biomass sorghum field during the 2012 and 2013 growing seasons. The predicted ET values were compared against eddy covariance (EC) measured ET (ETEC) for 19 cloud-free Landsat images. In general, S-SEBI, SEBAL, and SEBS performed reasonably well for the study period, while METRIC and SSEBop performed poorly. All SEB models substantially overestimated ET under extremely dry conditions because they underestimated sensible heat (H) and overestimated latent heat (LE) fluxes when partitioning the available energy. METRIC, SEBAL, and SEBS overestimated LE regardless of wet or dry periods. Consequently, the seasonal cumulative ET predicted by METRIC, SEBAL, and SEBS was higher than the seasonal cumulative ETEC in both seasons. In contrast, S-SEBI and SSEBop substantially underestimated ET under very wet conditions, and the seasonal cumulative ET predicted by S-SEBI and SSEBop was lower than the seasonal cumulative ETEC in the relatively wetter 2013 growing season. Our results indicate the need to include a soil moisture or plant water stress component in SEB models to improve their performance, especially in very dry or very wet environments.
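    All five SEB models ultimately partition available energy between sensible and latent heat; as a hedged illustration of that final step only (not of SEBAL, METRIC, SEBS, S-SEBI, or SSEBop themselves), latent heat is taken as the energy-balance residual and converted to an ET depth:

```python
def latent_heat_residual(rn, g, h):
    """Surface energy balance residual: LE = Rn - G - H (all fluxes in W m-2)."""
    return rn - g - h

def daily_et_mm(le_wm2, seconds_per_day=86400, lambda_v=2.45e6):
    """Convert a daily mean latent heat flux (W m-2) to an ET depth (mm/day),
    using a latent heat of vaporization of about 2.45 MJ kg-1."""
    return le_wm2 * seconds_per_day / lambda_v   # kg m-2 per day == mm/day

# Illustrative daily means (assumed values, not site data).
le = latent_heat_residual(rn=180.0, g=20.0, h=60.0)   # -> 100 W m-2
print(round(daily_et_mm(le), 2), "mm/day")            # about 3.5 mm/day
```

    The over- and underestimates reported above arise largely from how each model assigns H (and hence LE) under dry and wet extremes, which is why adding a soil moisture or water stress term is suggested.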

  19. Model assisted startup of anaerobic digesters fed with thermally hydrolysed activated sludge.

    PubMed

    Batstone, D J; Balthes, C; Barr, K

    2010-01-01

    This paper presents the use of the IWA ADM1 to predict and interpret results from two full-scale anaerobic digesters fed with thermal hydrolysate (waste activated sludge with a long upstream sludge age) from a Cambi hydrolysis process operating at 165°C and 6 bar-g. The first digester was fed conventionally, though intermittently, while the second was heavily diluted through a substantial part of the evaluation period (110 days). There were a number of important outcomes related to both model application and model predictions. The input and inert COD:VS mass ratios were very important, and were considerably higher than the 1.42 g g⁻¹ used for biomass throughout the IWA activated sludge and anaerobic digestion models. The input COD:VS ratio was 1.6 g g⁻¹, and the inert COD:VS ratio was 1.7 g g⁻¹. The model succeeded on a number of levels, including effective prediction of important outputs (degradability, gas flow and composition, and final solids), clarification of the substantial data scatter, prediction of recovery times during operationally poor periods, and cross-validation of the results between digester 1 and digester 2. Key failures in model performance were related to an early incorrect assumption of a COD:VS ratio of 1.42 g g⁻¹, and to intermittent high acetate levels, most likely caused by inhibition and rapid acclimatisation to ammonia. The acute free ammonia limit was found to be 0.008 M NH₃-N, while the chronic inhibition constant (K(I,NH₃,ac)) was 0.007 ± 0.001 M NH₃-N. Overall, this is a complex system, and application of the model added significant confidence to the initial operational decisions during an aggressive startup on an atypical feed.

  20. Geostatistical Prediction of Microbial Water Quality Throughout a Stream Network Using Meteorology, Land Cover, and Spatiotemporal Autocorrelation.

    PubMed

    Holcomb, David A; Messier, Kyle P; Serre, Marc L; Rowny, Jakob G; Stewart, Jill R

    2018-06-25

    Predictive modeling is promising as an inexpensive tool to assess water quality. We developed geostatistical predictive models of microbial water quality that empirically modeled spatiotemporal autocorrelation in measured fecal coliform (FC) bacteria concentrations to improve prediction. We compared five geostatistical models featuring different autocorrelation structures, fit to 676 observations from 19 locations in North Carolina's Jordan Lake watershed using meteorological and land cover predictor variables. Though stream distance metrics (with and without flow-weighting) failed to improve prediction over the Euclidean distance metric, incorporating temporal autocorrelation substantially improved prediction over the space-only models. We predicted FC throughout the stream network daily for one year, designating locations "impaired", "unimpaired", or "unassessed" if the probability of exceeding the state standard was ≥90%, ≤10%, or >10% but <90%, respectively. We could assign impairment status to more of the stream network on days any FC were measured, suggesting frequent sample-based monitoring remains necessary, though implementing spatiotemporal predictive models may reduce the number of concurrent sampling locations required to adequately assess water quality. Together, these results suggest that prioritizing sampling at different times and conditions using geographically sparse monitoring networks is adequate to build robust and informative geostatistical models of water quality impairment.

  1. Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.

    PubMed

    Verde, Pablo E

    2010-12-30

    In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity and can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. The meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. The new approach also allows substantial model checking, model diagnostics and model selection. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
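    To illustrate only the bivariate random-effects structure in the simplest way, the Monte Carlo sketch below draws study-level logit-sensitivity/logit-specificity pairs around assumed hyperparameters and summarizes the predictive distribution; the paper's actual Bayesian estimation of these hyperparameters is done in WinBUGS/R:

```python
import numpy as np

rng = np.random.default_rng(2)
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

# Assumed hyperparameters (not estimates from the paper): means and between-study
# spread on the logit scale, with a typical negative sensitivity/specificity correlation.
mu = np.array([1.5, 2.0])                     # logit-sensitivity, logit-specificity
sd = np.array([0.6, 0.8])
rho = -0.4
cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

# Predictive draws for a hypothetical new study.
draws = rng.multivariate_normal(mu, cov, size=20000)
pred_sens, pred_spec = expit(draws[:, 0]), expit(draws[:, 1])

print("predictive median sensitivity:", np.median(pred_sens).round(3))
print("95% predictive interval:", np.percentile(pred_sens, [2.5, 97.5]).round(3))
```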

  2. Modeling Global Ocean Biogeochemistry With Physical Data Assimilation: A Pragmatic Solution to the Equatorial Instability

    NASA Astrophysics Data System (ADS)

    Park, Jong-Yeon; Stock, Charles A.; Yang, Xiaosong; Dunne, John P.; Rosati, Anthony; John, Jasmin; Zhang, Shaoqing

    2018-03-01

    Reliable estimates of historical and current biogeochemistry are essential for understanding past ecosystem variability and predicting future changes. Efforts to translate improved physical ocean state estimates into improved biogeochemical estimates, however, are hindered by high biogeochemical sensitivity to transient momentum imbalances that arise during physical data assimilation. Most notably, the breakdown of geostrophic constraints on data assimilation in equatorial regions can lead to spurious upwelling, resulting in excessive equatorial productivity and biogeochemical fluxes. This hampers efforts to understand and predict the biogeochemical consequences of El Niño and La Niña. We develop a strategy to robustly integrate an ocean biogeochemical model with an ensemble coupled-climate data assimilation system used for seasonal to decadal global climate prediction. Addressing spurious vertical velocities requires two steps. First, we find that tightening constraints on atmospheric data assimilation maintains a better equatorial wind stress and pressure gradient balance. This reduces spurious vertical velocities, but those remaining still produce substantial biogeochemical biases. The remainder is addressed by imposing stricter fidelity to model dynamics over data constraints near the equator. We determine an optimal choice of model-data weights that removed spurious biogeochemical signals while benefitting from off-equatorial constraints that still substantially improve equatorial physical ocean simulations. Compared to the unconstrained control run, the optimally constrained model reduces equatorial biogeochemical biases and markedly improves the equatorial subsurface nitrate concentrations and hypoxic area. The pragmatic approach described herein offers a means of advancing earth system prediction in parallel with continued data assimilation advances aimed at fully considering equatorial data constraints.

  3. Modeling energy expenditure in children and adolescents using quantile regression

    PubMed Central

    Yang, Yunwen; Adolph, Anne L.; Puyau, Maurice R.; Vohra, Firoz A.; Zakeri, Issa F.

    2013-01-01

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). The study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obese children. First, QR models are developed to predict minute-by-minute awake EE at different quantile levels based on heart rate (HR) and physical activity (PA) accelerometry counts, and the child characteristics of age, sex, weight, and height. Second, the QR models are used to evaluate the covariate effects of weight, PA, and HR across the conditional EE distribution. QR and ordinary least squares (OLS) regressions are estimated in 109 children, aged 5–18 yr. QR modeling of EE outperformed OLS regression for both nonobese and obese populations. Average prediction errors for QR compared with OLS were not only smaller at the median τ = 0.5 (18.6 vs. 21.4%), but also substantially smaller at the tails of the distribution (10.2 vs. 39.2% at τ = 0.1 and 8.7 vs. 19.8% at τ = 0.9). Covariate effects of weight, PA, and HR on EE for the nonobese and obese children differed across quantiles (P < 0.05). The associations (linear and quadratic) of PA and HR with EE were stronger for the obese than the nonobese population (P < 0.05). In conclusion, QR provided more accurate predictions of EE compared with conventional OLS regression, especially at the tails of the distribution, and revealed substantially different covariate effects of weight, PA, and HR on EE in nonobese and obese children. PMID:23640591
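    A minimal sketch of fitting the same linear predictor at several quantile levels with statsmodels; the synthetic data frame and columns below are stand-ins for the study's minute-level EE, HR, PA, and anthropometric variables, so the printed coefficients are meaningless and only the model form is illustrated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

np.random.seed(0)
df = pd.DataFrame({
    "ee": np.random.gamma(2.0, 1.5, 5000),        # energy expenditure (stand-in)
    "hr": np.random.normal(100, 15, 5000),        # heart rate
    "pa": np.random.exponential(200, 5000),       # accelerometry counts
    "weight": np.random.normal(45, 12, 5000),
})

# Quantile regression at the tails and the median; covariate effects are free
# to differ across the conditional EE distribution.
for tau in (0.1, 0.5, 0.9):
    fit = smf.quantreg("ee ~ hr + pa + weight", df).fit(q=tau)
    print(f"tau={tau}:", fit.params.round(3).to_dict())

# An OLS fit of the same formula (smf.ols) gives the single mean-effect estimate
# used as the comparison in the QR-versus-OLS evaluation described above.
```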

  4. Ion mobilities in diatomic gases: measurement versus prediction with non-specular scattering models.

    PubMed

    Larriba, Carlos; Hogan, Christopher J

    2013-05-16

    Ion/electrical mobility measurements of nanoparticles and polyatomic ions are typically linked to particle/ion physical properties through either application of the Stokes-Millikan relationship or comparison to mobilities predicted from polyatomic models, which assume that gas molecules scatter specularly and elastically from rigid structural models. However, there is a discrepancy between these approaches; when specular, elastic scattering models (i.e., elastic-hard-sphere scattering, EHSS) are applied to polyatomic models of nanometer-scale ions with finite-sized impinging gas molecules, predictions are in substantial disagreement with the Stokes-Millikan equation. To rectify this discrepancy, we developed and tested a new approach for mobility calculations using polyatomic models in which non-specular (diffuse) and inelastic gas-molecule scattering is considered. Two distinct semiempirical models of gas-molecule scattering from particle surfaces were considered. In the first, which has been traditionally invoked in the study of aerosol nanoparticles, 91% of collisions are diffuse and thermally accommodating, and 9% are specular and elastic. In the second, all collisions are considered to be diffuse and accommodating, but the average speed of the gas molecules reemitted from a particle surface is 8% lower than the mean thermal speed at the particle temperature. Both scattering models attempt to mimic exchange between translational, vibrational, and rotational modes of energy during collision, as would be expected during collision between a nonmonoatomic gas molecule and a nonfrozen particle surface. The mobility calculation procedure was applied considering both hard-sphere potentials between gas molecules and the atoms within a particle and the long-range ion-induced dipole (polarization) potential. Predictions were compared to previous measurements in air near room temperature of multiply charged poly(ethylene glycol) (PEG) ions, which range in morphology from compact to highly linear, and singly charged tetraalkylammonium cations. It was found that both non-specular, inelastic scattering rules lead to excellent agreement between predictions and experimental mobility measurements (within 5% of each other) and that polarization potentials must be considered to make correct predictions for high-mobility particles/ions. Conversely, traditional specular, elastic scattering models were found to substantially overestimate the mobilities of both types of ions.

  5. On The Importance of Connecting Laboratory Measurements of Ice Crystal Growth with Model Parameterizations: Predicting Ice Particle Properties

    NASA Astrophysics Data System (ADS)

    Harrington, J. Y.

    2017-12-01

    Parameterizing the growth of ice particles in numerical models is at an interesting crossroads. Most parameterizations developed in the past, including some that I have developed, parse model ice into numerous categories based primarily on the growth mode of the particle. Models routinely possess smaller ice, snow crystals, aggregates, graupel, and hail. The snow and ice categories in some models are further split into subcategories to account for the various shapes of ice. There has been a relatively recent shift towards a new class of microphysical models that predict the properties of ice particles instead of using multiple categories and subcategories. Particle property models predict the physical characteristics of ice, such as aspect ratio, maximum dimension, effective density, rime density, effective area, and so forth. These models are attractive in the sense that particle characteristics evolve naturally in time and space without the need for numerous (and somewhat artificial) transitions among pre-defined classes. However, particle property models often require fundamental parameters that are typically derived from laboratory measurements. For instance, the evolution of particle shape during vapor depositional growth requires knowledge of the growth efficiencies for the various axes of the crystals, which in turn depend on surface parameters that can only be determined in the laboratory. The evolution of particle shapes and density during riming, aggregation, and melting requires data on the redistribution of mass across a crystal's axes as that crystal collects water drops, collects ice crystals, or melts. Predicting the evolution of particle properties based on laboratory-determined parameters has a substantial influence on the evolution of some cloud systems. Radiatively driven cirrus clouds show a broader range of competition between heterogeneous nucleation and homogeneous freezing when ice crystal properties are predicted. Even strongly convective squall lines are substantially influenced by the predicted particle properties: the more natural evolution of ice crystals during riming produces graupel-like particles with the sizes and fall speeds required for the formation of a classic transition zone and an extended stratiform precipitation region.

  6. Neonatal intensive care unit: predictive models for length of stay.

    PubMed

    Bender, G J; Koestler, D; Ombao, H; McCourt, M; Alskinis, B; Rubin, L P; Padbury, J F

    2013-02-01

    Hospital length of stay (LOS) is important to administrators and families of neonates admitted to the neonatal intensive care unit (NICU). A prediction model for NICU LOS was developed using the predictors birth weight, gestational age and two severity-of-illness tools, the score for neonatal acute physiology, perinatal extension (SNAPPE) and the morbidity assessment index for newborns (MAIN). Consecutive admissions (n=293) to a New England regional level III NICU were retrospectively collected. Multiple predictive models were compared for complexity and goodness-of-fit, coefficient of determination (R²) and predictive error. The optimal model was validated prospectively with consecutive admissions (n=615). Observed and expected LOS were compared. The MAIN models had the best Akaike information criterion, the highest R² (0.786) and the lowest predictive error. The best SNAPPE model underestimated LOS, with substantial variability, yet was fairly well calibrated by birthweight category. LOS was longer in the prospective cohort than in the retrospective cohort, without differences in birth weight, gestational age, MAIN or SNAPPE. LOS prediction is improved by accounting for severity of illness in the first week of life, beyond factors known at birth. Prospective validation of both the MAIN and SNAPPE models is warranted.

  7. SPATIO-TEMPORAL MODELING OF AGRICULTURAL YIELD DATA WITH AN APPLICATION TO PRICING CROP INSURANCE CONTRACTS

    PubMed Central

    Ozaki, Vitor A.; Ghosh, Sujit K.; Goodwin, Barry K.; Shirota, Ricardo

    2009-01-01

    This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited. PMID:19890450

  8. Towards feasible and effective predictive wavefront control for adaptive optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyneer, L A; Veran, J

    We have recently proposed Predictive Fourier Control, a computationally efficient and adaptive algorithm for predictive wavefront control that assumes frozen-flow turbulence. We summarize refinements to the state-space model that allow operation with arbitrary computational delays and reduce the computational cost of solving for new control. We present initial atmospheric characterization using observations with Gemini North's Altair AO system. These observations, taken over 1 year, indicate that frozen flow exists, contains substantial power, and is strongly detected 94% of the time.

  9. Improving optimal control of grid-connected lithium-ion batteries through more accurate battery and degradation modelling

    NASA Astrophysics Data System (ADS)

    Reniers, Jorn M.; Mulder, Grietus; Ober-Blöbaum, Sina; Howey, David A.

    2018-03-01

    The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). A respective improvement in RMS capacity prediction error from 11% to 5% is found by increasing the model accuracy. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that the revenue can be increased substantially while degradation can be reduced by using more realistic models. The estimated best case profit using a sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.

  10. Phenomenological Constitutive Modeling of High-Temperature Flow Behavior Incorporating Individual and Coupled Effects of Processing Parameters in Super-austenitic Stainless Steel

    NASA Astrophysics Data System (ADS)

    Roy, Swagata; Biswas, Srija; Babu, K. Arun; Mandal, Sumantra

    2018-05-01

    A novel constitutive model has been developed for predicting the flow response of super-austenitic stainless steel over a wide range of strains (0.05-0.6), temperatures (1173-1423 K) and strain rates (0.001-1 s⁻¹). Further, the predictability of this new model has been compared with the existing Johnson-Cook (JC) and modified Zerilli-Armstrong (M-ZA) models. The JC model is not well suited for flow prediction, as it exhibits a very high (~36%) average absolute error (δ) and a low (~0.92) correlation coefficient (R). In contrast, the M-ZA model demonstrates a relatively lower δ (~13%) and higher R (~0.96) for flow prediction. The incorporation of couplings of processing parameters in the M-ZA model leads to better predictions than the JC model. However, the flow analyses of the studied alloy reveal additional synergistic influences of strain and strain rate, as well as of strain, temperature, and strain rate, beyond those considered in the M-ZA model. Hence, the new phenomenological model has been formulated incorporating all the individual and synergistic effects of the processing parameters and a 'strain-shifting' parameter. The proposed model predicted the flow behavior of the alloy with much better correlation and generalization than the M-ZA model, as substantiated by its lower δ (~7.9%) and higher R (~0.99) of prediction.
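    For reference, the Johnson-Cook form evaluated above factorizes the flow stress into strain-hardening, strain-rate, and thermal-softening terms, sigma = (A + B*eps^n)(1 + C*ln(eps_dot*))(1 - T*^m). A hedged sketch with placeholder constants (the study's fitted parameters are not given in this record):

```python
import numpy as np

def johnson_cook_stress(strain, strain_rate, T, A, B, n, C, m,
                        ref_strain_rate=0.001, T_ref=1173.0, T_melt=1673.0):
    """Johnson-Cook flow stress (MPa):
    sigma = (A + B*strain**n) * (1 + C*ln(strain_rate/ref_strain_rate)) * (1 - T_star**m),
    with homologous temperature T_star = (T - T_ref) / (T_melt - T_ref).
    All constants here are placeholders, not fitted values for this alloy."""
    T_star = (T - T_ref) / (T_melt - T_ref)
    return (A + B * strain**n) \
        * (1.0 + C * np.log(strain_rate / ref_strain_rate)) \
        * (1.0 - T_star**m)

# Example evaluation inside the deformation window reported above.
print(johnson_cook_stress(strain=0.3, strain_rate=0.1, T=1273.0,
                          A=120.0, B=400.0, n=0.3, C=0.05, m=1.1))
```

    Because this JC form multiplies independent strain, strain-rate, and temperature terms, it cannot represent the coupled effects that the M-ZA model and the proposed formulation incorporate, which is the gap the new model targets.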

  11. Combining quantitative trait loci analysis with physiological models to predict genotype-specific transpiration rates.

    PubMed

    Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K

    2015-04-01

    Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype by environment interaction and a polygenic inheritance complicate the application of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.

  12. Forecasting model of Corylus, Alnus, and Betula pollen concentration levels using spatiotemporal correlation properties of pollen count.

    PubMed

    Nowosad, Jakub; Stach, Alfred; Kasprzyk, Idalia; Weryszko-Chmielewska, Elżbieta; Piotrowska-Weryszko, Krystyna; Puc, Małgorzata; Grewling, Łukasz; Pędziszewska, Anna; Uruska, Agnieszka; Myszkowska, Dorota; Chłopek, Kazimiera; Majkowska-Wojciechowska, Barbara

    The aim of the study was to create and evaluate models for predicting high levels of daily pollen concentration of Corylus, Alnus, and Betula using a spatiotemporal correlation of pollen count. For each taxon, a high pollen count level was established according to the first allergy symptoms during exposure. The dataset was divided into a training set and a test set, using a stratified random split. For each taxon and city, the model was built using a random forest method. Corylus models performed poorly. However, the study revealed the possibility of predicting with substantial accuracy the occurrence of days with high pollen concentrations of Alnus and Betula using past pollen count data from monitoring sites. These results can be used for building (1) simpler models, which require data only from aerobiological monitoring sites, and (2) combined meteorological and aerobiological models for predicting high levels of pollen concentration.
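    A hedged sketch of the workflow described above (stratified random split plus a random forest classifier), using scikit-learn; the feature file, column names, and hyperparameters are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical feature table: past pollen counts from several monitoring sites plus
# calendar features, with a binary label marking symptom-threshold "high" days.
df = pd.read_csv("betula_features.csv")
X, y = df.drop(columns="high_day"), df["high_day"]

# Stratified random split so both classes are represented in train and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=500, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```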

  13. Capture of the Sun's Oort cloud from stars in its birth cluster.

    PubMed

    Levison, Harold F; Duncan, Martin J; Brasser, Ramon; Kaufmann, David E

    2010-07-09

    Oort cloud comets are currently believed to have formed in the Sun's protoplanetary disk and to have been ejected to large heliocentric orbits by the giant planets. Detailed models of this process fail to reproduce all of the available observational constraints, however. In particular, the Oort cloud appears to be substantially more populous than the models predict. Here we present numerical simulations that show that the Sun captured comets from other stars while it was in its birth cluster. Our results imply that a substantial fraction of the Oort cloud comets, perhaps exceeding 90%, are from the protoplanetary disks of other stars.

  14. Chemical transport model simulations of organic aerosol in ...

    EPA Pesticide Factsheets

    Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., POA–SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data

  15. A model for the progressive failure of laminated composite structural components

    NASA Technical Reports Server (NTRS)

    Allen, D. H.; Lo, D. C.

    1991-01-01

    Laminated continuous fiber polymeric composites are capable of sustaining substantial load-induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years the authors have been developing one solution approach to this problem. In this approach the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which account for the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path-dependent constitutive model which has previously been implemented in a finite element computer code for analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper the model is utilized to demonstrate the ability to predict the load path dependence of the damage and stresses in plates subjected to fatigue loading.

  16. A RANDOM-SCAN DISPLAY OF PREDICTED SATELLITE POSITIONS.

    DTIC Science & Technology

    With the completion of the NRL evaluation of the experimental model of the Satellite Position Prediction and Display equipment (SPAD), efforts were...directed toward the design of an operational version of SPAD. Possible design and equipment configurations were proposed which would lead to a...substantial savings in cost and reduced equipment complexity. These designs involve the displaying of the SPAD information by means of a random scanning of

  17. Surrogate Analysis and Index Developer (SAID) tool

    USGS Publications Warehouse

    Domanski, Marian M.; Straub, Timothy D.; Landers, Mark N.

    2015-10-01

    The regression models created in SAID can be used in utilities that have been developed to work with the USGS National Water Information System (NWIS) and for the USGS National Real-Time Water Quality (NRTWQ) Web site. The real-time dissemination of predicted SSC and prediction intervals for each time step has substantial potential to improve understanding of sediment-related water quality and associated engineering and ecological management decisions.
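
    As an illustrative sketch only (not the SAID implementation), the snippet below fits an ordinary least-squares regression of suspended-sediment concentration (SSC) on a turbidity surrogate and reports a prediction interval for a new time step; variable names and values are hypothetical.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    turbidity = rng.uniform(5, 400, size=80)                 # hypothetical surrogate (FNU)
    ssc = 2.1 * turbidity + rng.normal(0, 25, size=80)       # hypothetical SSC (mg/L)

    X = sm.add_constant(turbidity)
    fit = sm.OLS(ssc, X).fit()

    # Predicted SSC and 95% prediction interval for a new turbidity reading.
    new_X = sm.add_constant(np.array([150.0]), has_constant="add")
    pred = fit.get_prediction(new_X).summary_frame(alpha=0.05)
    print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]])
    ```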

  18. Frailty Models for Familial Risk with Application to Breast Cancer.

    PubMed

    Gorfine, Malka; Hsu, Li; Parmigiani, Giovanni

    2013-12-01

    In evaluating familial risk for disease we have two main statistical tasks: assessing the probability of carrying an inherited genetic mutation conferring higher risk; and predicting the absolute risk of developing diseases over time, for those individuals whose mutation status is known. Despite substantial progress, much remains unknown about the role of genetic and environmental risk factors, about the sources of variation in risk among families that carry high-risk mutations, and about the sources of familial aggregation beyond major Mendelian effects. These sources of heterogeneity contribute substantial variation in risk across families. In this paper we present simple and efficient methods for accounting for this variation in familial risk assessment. Our methods are based on frailty models. We implemented them in the context of generalizing Mendelian models of cancer risk, and compared our approaches to others that do not consider heterogeneity across families. Our extensive simulation study demonstrates that when predicting the risk of developing a disease over time conditional on carrier status, accounting for heterogeneity results in a substantial improvement in the area under the curve of the receiver operating characteristic. On the other hand, the improvement for carriership probability estimation is more limited. We illustrate the utility of the proposed approach through the analysis of BRCA1 and BRCA2 mutation carriers in the Washington Ashkenazi Kin-Cohort Study of Breast Cancer.

  19. Economic development, flow of funds, and the equilibrium interaction of financial frictions.

    PubMed

    Moll, Benjamin; Townsend, Robert M; Zhorin, Victor

    2017-06-13

    We use a variety of different datasets from Thailand to study not only the extremes of micro and macro variables but also within-country flow of funds and labor migration. We develop a general equilibrium model that encompasses regional variation in the type of financial friction and calibrate it to measured variation in regional aggregates. The model predicts substantial capital and labor flows from rural to urban areas even though these differ only in the underlying financial regime. Predictions for micro variables not used directly provide a model validation. Finally, we estimate the impact of a policy of counterfactual, regional isolationism.

  20. Economic development, flow of funds, and the equilibrium interaction of financial frictions

    PubMed Central

    Moll, Benjamin; Townsend, Robert M.; Zhorin, Victor

    2017-01-01

    We use a variety of different datasets from Thailand to study not only the extremes of micro and macro variables but also within-country flow of funds and labor migration. We develop a general equilibrium model that encompasses regional variation in the type of financial friction and calibrate it to measured variation in regional aggregates. The model predicts substantial capital and labor flows from rural to urban areas even though these differ only in the underlying financial regime. Predictions for micro variables not used directly provide a model validation. Finally, we estimate the impact of a policy of counterfactual, regional isolationism. PMID:28592655

  1. Mutations in gp41 are correlated with coreceptor tropism but do not improve prediction methods substantially.

    PubMed

    Thielen, Alexander; Lengauer, Thomas; Swenson, Luke C; Dong, Winnie W Y; McGovern, Rachel A; Lewis, Marilyn; James, Ian; Heera, Jayvant; Valdez, Hernan; Harrigan, P Richard

    2011-01-01

    The main determinants of HIV-1 coreceptor usage are located in the V3-loop of gp120, although mutations in V2 and gp41 are also known. Incorporation of V2 is known to improve prediction algorithms; however, this has not been confirmed for gp41 mutations. Samples with V3 and gp41 genotypes and Trofile assay (Monogram Biosciences, South San Francisco, CA, USA) results were taken from the HOMER cohort (n=444) and from patients screened for the MOTIVATE studies (n=1,916; 859 with maraviroc outcome data). Correlations of mutations with tropism were assessed using Fisher's exact test and prediction models trained using support vector machines. Models were validated by cross-validation, by testing models from one dataset on the other, and by analysing virological outcome. Several mutations within gp41 were highly significant for CXCR4 usage; most strikingly an insertion occurring in 7.7% of HOMER-R5 and 46.3% of HOMER-X4 samples (MOTIVATE 5.7% and 25.2%, respectively). Models trained on gp41 sequence alone achieved relatively high areas under the receiver-operating characteristic curve (AUCs; HOMER 0.713 and MOTIVATE 0.736) that were almost as good as V3 models (0.773 and 0.884, respectively). However, combining the two regions improved predictions only marginally (0.813 and 0.902, respectively). Similar results were found when models were trained on HOMER and validated on MOTIVATE or vice versa. The difference in median log viral load decrease at week 24 between patients with R5 and X4 virus was 1.65 (HOMER 2.45 and MOTIVATE 0.79) for V3 models, 1.59 for gp41-models (2.42 and 0.83, respectively) and 1.58 for the combined predictor (2.44 and 0.86, respectively). Several mutations within gp41 showed strong correlation with tropism in two independent datasets. However, incorporating gp41 mutations into prediction models is not mandatory because they do not improve substantially on models trained on V3 sequences alone.
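
    A hedged sketch of the model-training step: a support vector machine scored by cross-validated AUC, assuming V3/gp41 sequence features have already been encoded numerically (the random features below are placeholders, not HOMER or MOTIVATE data).

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    # Placeholder encoded sequence features (e.g., one-hot positions) and tropism labels (1 = X4).
    X = rng.random((300, 40))
    y = rng.integers(0, 2, size=300)

    svm = SVC(kernel="linear", probability=True, random_state=0)
    auc = cross_val_score(svm, X, y, cv=5, scoring="roc_auc")
    print("mean cross-validated AUC:", auc.mean().round(3))
    ```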

  2. The Use of Mixed Effects Models for Obtaining Low-Cost Ecosystem Carbon Stock Estimates in Mangroves of the Asia-Pacific

    NASA Astrophysics Data System (ADS)

    Bukoski, J. J.; Broadhead, J. S.; Donato, D.; Murdiyarso, D.; Gregoire, T. G.

    2016-12-01

    Mangroves provide extensive ecosystem services that support both local livelihoods and international environmental goals, including coastal protection, water filtration, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects that seek to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through measurement, reporting and verification (MRV) activities. To streamline MRV activities in mangrove C forestry projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We use linear mixed-effects models to account for spatial correlation in modeling the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, and are found to explain a substantial proportion of variance within the estimation datasets. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm3 (14.1% of mean soil C). A substantial proportion of the variation in soil C, however, is explained by the random effects and thus the use of the SOC model may be most valuable for sites in which field measurements of soil C exist.
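
    A minimal sketch of a linear mixed-effects model with a site-level random intercept, in the spirit of the approach described above; the simulated basal-area and latitude values are placeholders, not the Asia-Pacific estimation dataset.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_sites, n_plots = 10, 12
    df = pd.DataFrame({
        "site": np.repeat([f"site{i}" for i in range(n_sites)], n_plots),
        "basal_area": rng.uniform(5, 45, n_sites * n_plots),
        "latitude": np.repeat(rng.uniform(-10, 25, n_sites), n_plots),
    })
    site_effect = np.repeat(rng.normal(0, 15, n_sites), n_plots)
    df["biomass_c"] = (30 + 2.5 * df["basal_area"] + 0.8 * df["latitude"]
                       + site_effect + rng.normal(0, 10, len(df)))

    # Random intercept by site; fixed effects include the basal area x latitude interaction.
    model = smf.mixedlm("biomass_c ~ basal_area + basal_area:latitude", df, groups=df["site"])
    print(model.fit().summary())
    ```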

  3. Using metal-ligand binding characteristics to predict metal toxicity: quantitative ion character-activity relationships (QICARs).

    PubMed Central

    Newman, M C; McCloskey, J T; Tatara, C P

    1998-01-01

    Ecological risk assessment can be enhanced with predictive models for metal toxicity. Modeling of published data was done under the simplifying assumption that intermetal trends in toxicity reflect relative metal-ligand complex stabilities. This idea has been invoked successfully since 1904 but has yet to be applied widely in quantitative ecotoxicology. Intermetal trends in toxicity were successfully modeled with ion characteristics reflecting metal binding to ligands for a wide range of effects. Most models were useful for predictive purposes based on an F-ratio criterion and cross-validation, but anomalous predictions did occur if speciation was ignored. In general, models for metals with the same valence (i.e., divalent metals) were better than those combining mono-, di-, and trivalent metals. The softness parameter (σp) and the absolute value of the log of the first hydrolysis constant (|log KOH|) were especially useful in model construction. Also, ΔE0 contributed substantially to several of the two-variable models. In contrast, quantitative attempts to predict metal interactions in binary mixtures based on metal-ligand complex stabilities were not successful. PMID:9860900

  4. Ab Initio structure prediction for Escherichia coli: towards genome-wide protein structure modeling and fold assignment

    PubMed Central

    Xu, Dong; Zhang, Yang

    2013-01-01

    Genome-wide protein structure prediction and structure-based function annotation have been a long-term goal in molecular biology but have not yet become possible due to difficulties in modeling distant-homology targets. We developed a hybrid pipeline combining ab initio folding and template-based modeling for genome-wide structure prediction, applied to the Escherichia coli genome. The pipeline was tested on 43 known sequences, where QUARK-based ab initio folding simulation generated models with a TM-score 17% higher than that of traditional comparative modeling methods. For 495 unknown hard sequences, 72 are predicted to have a correct fold (TM-score > 0.5) and 321 have a substantial portion of structure correctly modeled (TM-score > 0.35). 317 sequences can be reliably assigned to a SCOP fold family based on structural analogy to existing proteins in the PDB. The presented results, as a case study of E. coli, represent promising progress towards genome-wide structure modeling and fold family assignment using state-of-the-art ab initio folding algorithms. PMID:23719418

  5. Assimilation of Satellite to Improve Cloud Simulation in WRF Model

    NASA Astrophysics Data System (ADS)

    Park, Y. H.; Pour Biazar, A.; McNider, R. T.

    2012-12-01

    A simple approach has been introduced to improve cloud simulation spatially and temporally in a meteorological model. The first step of this approach is to use Geostationary Operational Environmental Satellite (GOES) observations to identify clouds and estimate the cloud structure. Then, by comparing GOES observations to the model cloud field, we identify areas in which the model has under-predicted or over-predicted clouds. Next, by introducing subsidence in areas with over-prediction and lifting in areas with under-prediction, erroneous clouds are removed and new clouds are formed. The technique estimates the vertical velocity needed for the cloud correction and then uses a one-dimensional variational scheme (1D-Var) to calculate the horizontal divergence components and the consequent horizontal wind components needed to sustain that vertical velocity. Finally, the new horizontal winds are provided as a nudging field to the model. This nudging provides the dynamical support needed to create or clear clouds in a sustainable manner. The technique was implemented and tested in the Weather Research and Forecasting (WRF) Model and resulted in substantial improvement in model-simulated clouds. Some of the results are presented here.

  6. Within-person variation in security of attachment: a self-determination theory perspective on attachment, need fulfillment, and well-being.

    PubMed

    La Guardia, J G; Ryan, R M; Couchman, C E; Deci, E L

    2000-09-01

    Attachment research has traditionally focused on individual differences in global patterns of attachment to important others. The current research instead focuses primarily on within-person variability in attachments across relational partners. It was predicted that within-person variability would be substantial, even among primary attachment figures of mother, father, romantic partner, and best friend. The prediction was supported in three studies. Furthermore, in line with self-determination theory, multilevel modeling and regression analyses showed that, at the relationship level, individuals' experience of fulfillment of the basic needs for autonomy, competence, and relatedness positively predicted overall attachment security, model of self, and model of other. Relations of both attachment and need satisfaction to well-being were also explored.

  7. A method for accounting for maintenance costs in flux balance analysis improves the prediction of plant cell metabolic phenotypes under stress conditions.

    PubMed

    Cheung, C Y Maurice; Williams, Thomas C R; Poolman, Mark G; Fell, David A; Ratcliffe, R George; Sweetlove, Lee J

    2013-09-01

    Flux balance models of metabolism generally utilize synthesis of biomass as the main determinant of intracellular fluxes. However, the biomass constraint alone is not sufficient to predict realistic fluxes in central heterotrophic metabolism of plant cells because of the major demand on the energy budget due to transport costs and cell maintenance. This major limitation can be addressed by incorporating transport steps into the metabolic model and by implementing a procedure that uses Pareto optimality analysis to explore the trade-off between ATP and NADPH production for maintenance. This leads to a method for predicting cell maintenance costs on the basis of the measured flux ratio between the oxidative steps of the oxidative pentose phosphate pathway and glycolysis. We show that accounting for transport and maintenance costs substantially improves the accuracy of fluxes predicted from a flux balance model of heterotrophic Arabidopsis cells in culture, irrespective of the objective function used in the analysis. Moreover, when the new method was applied to cells under control, elevated temperature and hyper-osmotic conditions, only elevated temperature led to a substantial increase in cell maintenance costs. It is concluded that the hyper-osmotic conditions tested did not impose a metabolic stress, in as much as the metabolic network is not forced to devote more resources to cell maintenance. © 2013 The Authors The Plant Journal © 2013 John Wiley & Sons Ltd.
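
    A toy flux-balance sketch (not the published Arabidopsis model) showing how a fixed ATP-maintenance flux can be imposed as an extra constraint before maximizing biomass; the stoichiometry and bounds below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy stoichiometric matrix (rows: metabolites A, ATP; columns: reactions).
    # v1: uptake -> A ; v2: A -> 2 ATP ; v3: ATP maintenance drain ; v4: biomass (A + 3 ATP ->)
    S = np.array([[ 1, -1,  0, -1],     # A balance
                  [ 0,  2, -1, -3]])    # ATP balance

    def max_biomass(atp_maintenance):
        bounds = [(0, 10),                     # uptake limited to 10
                  (0, None),
                  (atp_maintenance, None),     # maintenance flux forced to at least this value
                  (0, None)]
        # Maximize biomass flux v4 subject to steady state S v = 0.
        res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds, method="highs")
        return res.x[3]

    print("biomass flux, no maintenance:  ", max_biomass(0.0))
    print("biomass flux, with maintenance:", max_biomass(5.0))
    ```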

  8. Predictive Models of Acute Mountain Sickness after Rapid Ascent to Various Altitudes

    DTIC Science & Technology

    2013-01-01

    unclassified relational mountain medicine database containing individual ascent profiles, demographic and physiologic subject descriptors, and...course of AMS, and define the baseline demographics and physiologic descriptors that increase the risk of AMS. In addition, these models provide...substantiated this finding in unacclimatized women (24). Other physiologic differences between men and women (i.e., differences in endothelial

  9. Epidemiological modeling of invasion in heterogeneous landscapes: Spread of sudden oak death in California (1990-2030)

    Treesearch

    R.K. Meentemeyer; N.J. Cunniffe; A.R. Cook; J.A.N. Filipe; R.D. Hunter; D.M. Rizzo; C.A. Gilligan

    2011-01-01

    The spread of emerging infectious diseases (EIDs) in natural environments poses substantial risks to biodiversity and ecosystem function. As EIDs and their impacts grow, landscape- to regional-scale models of disease dynamics are increasingly needed for quantitative prediction of epidemic outcomes and design of practicable strategies for control. Here we use spatio-...

  10. Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds

    NASA Astrophysics Data System (ADS)

    Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea

    2013-04-01

    Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood warning delivery. Indeed, modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, runoff processes in the Singapore Marina Reservoir catchment have a very short time of concentration (roughly one hour), so observational data alone are nearly useless for runoff prediction and weather predictions are required. Unfortunately, radar nowcasting methods cannot provide long-term weather predictions, whereas numerical models are limited by their coarse spatial scale and are often unreliable because of the fast motion and limited spatial extent of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed or simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network supplying the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all candidate weather input variables provided by the WRF model. We explore different lead times to evaluate model reliability for longer-term predictions, as well as different time lags to see how past information could improve results. Results show that the proposed approach allows a significant improvement in prediction accuracy over the WRF model for the Singapore urban area.
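
    A hedged sketch of the data-driven step: tree-based feature ranking over candidate WRF state variables followed by a regression model for catchment rainfall at a chosen lead time. The feature matrix below is synthetic, and the selection rule (keep the top-k importances) is an assumption, not the paper's exact input variable selection algorithm.

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    rng = np.random.default_rng(4)
    n, p = 600, 20
    X = rng.normal(size=(n, p))                  # synthetic WRF state variables ("virtual sensors")
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.5, n)   # synthetic rainfall at lead time t+k

    # Rank candidate inputs by impurity-based importance, keep the top 5, refit on that subset.
    ranker = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X, y)
    top = np.argsort(ranker.feature_importances_)[::-1][:5]

    model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X[:, top], y)
    print("selected inputs:", top, " training R^2:", round(model.score(X[:, top], y), 3))
    ```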

  11. Estimating West Nile virus transmission period in Pennsylvania using an optimized degree-day model.

    PubMed

    Chen, Shi; Blanford, Justine I; Fleischer, Shelby J; Hutchinson, Michael; Saunders, Michael C; Thomas, Matthew B

    2013-07-01

    We provide calibrated degree-day models to predict potential West Nile virus (WNV) transmission periods in Pennsylvania. We begin by following the standard approach of treating the degree-days necessary for the virus to complete the extrinsic incubation period (EIP), and mosquito longevity, as constants. This approach failed to adequately explain virus transmission periods based on mosquito surveillance data from four locations (Harrisburg, Philadelphia, Pittsburgh, and Williamsport) in Pennsylvania from 2002 to 2008. Allowing the EIP and adult longevity to vary across time and space improved model fit substantially. The calibrated models increase the success rate of predicting the WNV transmission period in Pennsylvania to 70-80%, compared with less than 30% for the uncalibrated model. Model validation showed the optimized models to be robust in three of the locations, although errors remained for Philadelphia. These models and methods could provide useful tools to predict the WNV transmission period from surveillance datasets, assess potential WNV risk, and inform mosquito surveillance strategies.
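
    For illustration, a minimal degree-day calculation of the kind such models rest on: daily degree-days above a base temperature are accumulated until the total needed for the extrinsic incubation period is reached. The base temperature and threshold below are placeholders, not the calibrated Pennsylvania values.

    ```python
    def days_to_complete_eip(daily_mean_temps, base_temp=14.3, required_dd=109.0):
        """Return the day index on which accumulated degree-days first reach required_dd,
        or None if the threshold is never reached. Parameter values are placeholders."""
        accumulated = 0.0
        for day, temp in enumerate(daily_mean_temps, start=1):
            accumulated += max(0.0, temp - base_temp)
            if accumulated >= required_dd:
                return day
        return None

    # Hypothetical mid-summer daily mean temperatures (deg C).
    temps = [22.0, 24.5, 26.0, 23.5, 25.0, 27.5, 26.5, 24.0, 25.5, 26.0, 27.0, 25.0]
    print(days_to_complete_eip(temps))
    ```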

  12. Rising sea levels will reduce extreme temperature variations in tide-dominated reef habitats

    PubMed Central

    Lowe, Ryan Joseph; Pivan, Xavier; Falter, James; Symonds, Graham; Gruber, Renee

    2016-01-01

    Temperatures within shallow reefs often differ substantially from those in the surrounding ocean; therefore, predicting future patterns of thermal stresses and bleaching at the scale of reefs depends on accurately predicting reef heat budgets. We present a new framework for quantifying how tidal and solar heating cycles interact with reef morphology to control diurnal temperature extremes within shallow, tidally forced reefs. Using data from northwestern Australia, we construct a heat budget model to investigate how frequency differences between the dominant lunar semidiurnal tide and diurnal solar cycle drive ~15-day modulations in diurnal temperature extremes. The model is extended to show how reefs with tidal amplitudes comparable to their depth, relative to mean sea level, tend to experience the largest temperature extremes globally. As a consequence, we reveal how even a modest sea level rise can substantially reduce temperature extremes within tide-dominated reefs, thereby partially offsetting the local effects of future ocean warming. PMID:27540589

  13. Improvement of operational prediction system applied to the oil spill prediction in the Yellow Sea

    NASA Astrophysics Data System (ADS)

    Kim, C.; Cho, Y.; Choi, B.; Jung, K.

    2012-12-01

    A multi-nested operational prediction system for the Yellow Sea (YS) has been developed to predict the movement of spilled oil. Drifter trajectory simulations were performed to predict the path of the oil spilled in the MV Hebei Spirit accident, which occurred on 7 December 2007. The surface oil spill trajectories predicted by the numerical model without tidal forcing were markedly faster than those observed. However, the drifter speeds predicted by the model that included tides were satisfactorily improved, not only for motion within a tidal cycle but also for motion over subtidal periods. The subtidal flow of the simulation with tide was weaker than that without tide due to tidal stress. Tidal stress decelerated the southward subtidal flows driven by northwesterly wind along the Korean coast of the YS in winter. This result implies that tides must be included for accurate prediction of oil spill trajectories, not only for variation within a tidal cycle but also for longer-time-scale advection in tide-dominated areas.

  14. Relating Neuronal to Behavioral Performance: Variability of Optomotor Responses in the Blowfly

    PubMed Central

    Rosner, Ronny; Warzecha, Anne-Kathrin

    2011-01-01

    Behavioral responses of an animal vary even when they are elicited by the same stimulus. This variability is due to stochastic processes within the nervous system and to the changing internal states of the animal. To what extent does the variability of neuronal responses account for the overall variability at the behavioral level? To address this question we evaluate the neuronal variability at the output stage of the blowfly's (Calliphora vicina) visual system by recording from motion-sensitive interneurons mediating head optomotor responses. By means of a simple modelling approach representing the sensory-motor transformation, we predict head movements on the basis of the recorded responses of motion-sensitive neurons and compare the variability of the predicted head movements with that of the observed ones. Large gain changes of optomotor head movements have previously been shown to go along with changes in the animals' activity state. Our modelling approach substantiates that these gain changes are imposed downstream of the motion-sensitive neurons of the visual system. Moreover, since predicted head movements are clearly more reliable than those actually observed, we conclude that substantial variability is introduced downstream of the visual system. PMID:22066014

  15. Dynamic patterns and ecological impacts of declining ocean pH in a high-resolution multi-year dataset.

    PubMed

    Wootton, J Timothy; Pfister, Catherine A; Forester, James D

    2008-12-02

    Increasing global concentrations of atmospheric CO2 are predicted to decrease ocean pH, with potentially severe impacts on marine food webs, but empirical data documenting ocean pH over time are limited. In a high-resolution dataset spanning 8 years, pH at a north-temperate coastal site declined with increasing atmospheric CO2 levels and varied substantially in response to biological processes and physical conditions that fluctuate over multiple time scales. Applying a method to link environmental change to species dynamics via multispecies Markov chain models reveals strong links between in situ benthic species dynamics and variation in ocean pH, with calcareous species generally performing more poorly than noncalcareous species in years with low pH. The models project the long-term consequences of these dynamic changes, which predict substantial shifts in the species dominating the habitat as a consequence of both direct effects of reduced calcification and indirect effects arising from the web of species interactions. Our results indicate that pH decline is proceeding at a more rapid rate than previously predicted in some areas, and that this decline has ecological consequences for nearshore benthic ecosystems.

  16. Advancing Predictive Hepatotoxicity at the Intersection of Experimental, in Silico, and Artificial Intelligence Technologies.

    PubMed

    Fraser, Keith; Bruckner, Dylan M; Dordick, Jonathan S

    2018-06-18

    Adverse drug reactions, particularly those that result in drug-induced liver injury (DILI), are a major cause of drug failure in clinical trials and drug withdrawals. Hepatotoxicity-mediated drug attrition occurs despite substantial investments of time and money in developing cellular assays, animal models, and computational models to predict its occurrence in humans. Underperformance in predicting hepatotoxicity associated with drugs and drug candidates has been attributed to existing gaps in our understanding of the mechanisms involved in driving hepatic injury after these compounds perfuse and are metabolized by the liver. Herein we assess in vitro, in vivo (animal), and in silico strategies used to develop predictive DILI models. We address the effectiveness of several two- and three-dimensional in vitro cellular methods that are frequently employed in hepatotoxicity screens and how they can be used to predict DILI in humans. We also explore how humanized animal models can recapitulate human drug metabolic profiles and associated liver injury. Finally, we highlight the maturation of computational methods for predicting hepatotoxicity, the untapped potential of artificial intelligence for improving in silico DILI screens, and how knowledge acquired from these predictions can shape the refinement of experimental methods.

  17. Deep Visual Attention Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixations in free-viewing scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have made substantial improvements in human attention prediction, CNN-based attention models still need to be improved by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of these global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.

  18. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    USGS Publications Warehouse

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  19. How well can wave runup be predicted? comment on Laudier et al. (2011) and Stockdon et al. (2006)

    USGS Publications Warehouse

    Plant, Nathaniel G.; Stockdon, Hilary F.

    2015-01-01

    Laudier et al. (2011) suggested that there may be a systematic bias error in runup predictions using a model developed by Stockdon et al. (2006). Laudier et al. tested cases that sampled beach and wave conditions that differed from those used to develop the Stockdon et al. model. Based on our re-analysis, we found that in two of the three Laudier et al. cases the observed overtopping was actually consistent with the Stockdon et al. predictions. In these cases, the revised predictions indicated substantial overtopping with, in one case, a freeboard deficit of 1 m. In the third case, the revised prediction had a low likelihood of overtopping, which reflected a large uncertainty due to wave conditions that included a broad and bi-modal frequency distribution. The discrepancy between the Laudier et al. results and our re-analysis appears to be due, in part, to simplifications made by Laudier et al. when they implemented a reduced version of the Stockdon et al. model.
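
    For reference, a sketch of the 2% exceedance runup parameterization attributed to Stockdon et al. (2006), as it is commonly written in terms of deep-water wave height H0, peak period T, and foreshore slope beta; the coefficients are quoted from memory and should be verified against the original paper before use, and the storm conditions in the example are hypothetical.

    ```python
    import math

    def runup_r2(h0, period, beta_f, g=9.81):
        """Approximate 2% exceedance runup (m) following the commonly cited
        Stockdon et al. (2006) parameterization; verify coefficients before use."""
        l0 = g * period**2 / (2.0 * math.pi)          # deep-water wavelength
        setup = 0.35 * beta_f * math.sqrt(h0 * l0)    # wave-induced setup
        swash = math.sqrt(h0 * l0 * (0.563 * beta_f**2 + 0.004))
        return 1.1 * (setup + swash / 2.0)

    # Hypothetical storm conditions: 3 m waves, 12 s period, foreshore slope 0.08.
    print(round(runup_r2(3.0, 12.0, 0.08), 2), "m")
    ```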

  20. Predictive modeling of addiction lapses in a mobile health application.

    PubMed

    Chih, Ming-Yuan; Patton, Timothy; McTavish, Fiona M; Isham, Andrew J; Judkins-Fisher, Chris L; Atwood, Amy K; Gustafson, David H

    2014-01-01

    The chronically relapsing nature of alcoholism leads to substantial personal, family, and societal costs. Addiction-comprehensive health enhancement support system (A-CHESS) is a smartphone application that aims to reduce relapse. To offer targeted support to patients who are at risk of lapses within the coming week, a Bayesian network model to predict such events was constructed using responses on 2,934 weekly surveys (called the Weekly Check-in) from 152 alcohol-dependent individuals who recently completed residential treatment. The Weekly Check-in is a self-monitoring service, provided in A-CHESS, to track patients' recovery progress. The model showed good predictability, with the area under receiver operating characteristic curve of 0.829 in the 10-fold cross-validation and 0.912 in the external validation. The sensitivity/specificity table assists the tradeoff decisions necessary to apply the model in practice. This study moves us closer to the goal of providing lapse prediction so that patients might receive more targeted and timely support. © 2013.

  1. A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy.

    PubMed

    Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H

    2018-05-02

    A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Predictive Modeling of Addiction Lapses in a Mobile Health Application

    PubMed Central

    Chih, Ming-Yuan; Patton, Timothy; McTavish, Fiona M.; Isham, Andrew; Judkins-Fisher, Chris L.; Atwood, Amy K.; Gustafson, David H.

    2013-01-01

    The chronically relapsing nature of alcoholism leads to substantial personal, family, and societal costs. Addiction-Comprehensive Health Enhancement Support System (A-CHESS) is a smartphone application that aims to reduce relapse. To offer targeted support to patients who are at risk of lapses within the coming week, a Bayesian network model to predict such events was constructed using responses on 2,934 weekly surveys (called the Weekly Check-in) from 152 alcohol-dependent individuals who recently completed residential treatment. The Weekly Check-in is a self-monitoring service, provided in A-CHESS, to track patients’ recovery progress. The model showed good predictability, with the area under receiver operating characteristic curve of 0.829 in the 10-fold cross-validation and 0.912 in the external validation. The sensitivity/specificity table assists the tradeoff decisions necessary to apply the model in practice. This study moves us closer to the goal of providing lapse prediction so that patients might receive more targeted and timely support. PMID:24035143

  3. Predicting acidification recovery at the Hubbard Brook Experimental Forest, New Hampshire: evaluation of four models.

    PubMed

    Tominaga, Koji; Aherne, Julian; Watmough, Shaun A; Alveteg, Mattias; Cosby, Bernard J; Driscoll, Charles T; Posch, Maximilian; Pourmokhtarian, Afshin

    2010-12-01

    The performance and prediction uncertainty (owing to parameter and structural uncertainties) of four dynamic watershed acidification models (MAGIC, PnET-BGC, SAFE, and VSD) were assessed by systematically applying them to data from the Hubbard Brook Experimental Forest (HBEF), New Hampshire, where long-term records of precipitation and stream chemistry were available. In order to facilitate systematic evaluation, Monte Carlo simulation was used to randomly generate common model input data sets (n = 10,000) from parameter distributions; input data were subsequently translated among models to retain consistency. The model simulations were objectively calibrated against observed data (streamwater: 1963-2004, soil: 1983). The ensemble of calibrated models was used to assess future response of soil and stream chemistry to reduced sulfur deposition at the HBEF. Although both hindcast (1850-1962) and forecast (2005-2100) predictions were qualitatively similar across the four models, the temporal pattern of key indicators of acidification recovery (stream acid neutralizing capacity and soil base saturation) differed substantially. The range in predictions resulted from differences in model structure and their associated posterior parameter distributions. These differences can be accommodated by employing multiple models (ensemble analysis) but have implications for individual model applications.

  4. Predicting BRCA1 and BRCA2 gene mutation carriers: comparison of LAMBDA, BRCAPRO, Myriad II, and modified Couch models.

    PubMed

    Lindor, Noralane M; Lindor, Rachel A; Apicella, Carmel; Dowty, James G; Ashley, Amanda; Hunt, Katherine; Mincey, Betty A; Wilson, Marcia; Smith, M Cathie; Hopper, John L

    2007-01-01

    Models have been developed to predict the probability that a person carries a detectable germline mutation in the BRCA1 or BRCA2 genes. Their relative performance in a clinical setting is unclear. This study compared the performance characteristics of four BRCA1/BRCA2 gene mutation prediction models: LAMBDA, based on a checklist and scores developed from data on Ashkenazi Jewish (AJ) women; BRCAPRO, a Bayesian computer program; modified Couch tables based on regression analyses; and Myriad II tables collated by Myriad Genetics Laboratories. Family cancer history data were analyzed from 200 probands from the Mayo Clinic Familial Cancer Program, a multispecialty tertiary care group practice. All probands had clinical testing for BRCA1 and BRCA2 mutations conducted in a single laboratory. For each model, performance was assessed by the area under the receiver operating characteristic (ROC) curve and by tests of accuracy and dispersion. Cases "missed" by one or more models (model predicted less than 10% probability of mutation when a mutation was actually found) were compared across models. All models gave similar areas under the ROC curve of 0.71 to 0.76. All models except LAMBDA substantially under-predicted the numbers of carriers. All models were too dispersed. In terms of ranking, all prediction models performed reasonably well with similar performance characteristics. Model predictions were widely discrepant for some families. Review of cancer family histories by an experienced clinician continues to be vital to ensure that critical elements are not missed and that the most appropriate risk prediction figures are provided.

  5. Multitrait, Random Regression, or Simple Repeatability Model in High-Throughput Phenotyping Data Improve Genomic Prediction for Wheat Grain Yield.

    PubMed

    Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E

    2017-07-01

    High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability from multivariate pedigree and genomic models was substantially improved, by 70% on average, when secondary traits were included in both training and test populations. Additionally, (i) predictive abilities varied only slightly among the MT, RR, and SR models in this dataset, (ii) results indicated that including BLUPs of secondary traits from the MT model was best under severe drought, and (iii) the RR model was slightly better than the SR and MT models under the drought environment. Copyright © 2017 Crop Science Society of America.

  6. Methods for using groundwater model predictions to guide hydrogeologic data collection, with application to the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Tiedeman, C.R.; Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.

    2003-01-01

    Calibrated models of groundwater systems can provide substantial information for guiding data collection. This work considers using such models to guide hydrogeologic data collection for improving model predictions by identifying model parameters that are most important to the predictions. Identification of these important parameters can help guide collection of field data about parameter values and associated flow system features and can lead to improved predictions. Methods for identifying parameters important to predictions include prediction scaled sensitivities (PSS), which account for uncertainty on individual parameters as well as prediction sensitivity to parameters, and a new "value of improved information" (VOII) method presented here, which includes the effects of parameter correlation in addition to individual parameter uncertainty and prediction sensitivity. In this work, the PSS and VOII methods are demonstrated and evaluated using a model of the Death Valley regional groundwater flow system. The predictions of interest are advective transport paths originating at sites of past underground nuclear testing. Results show that for two paths evaluated the most important parameters include a subset of five or six of the 23 defined model parameters. Some of the parameters identified as most important are associated with flow system attributes that do not lie in the immediate vicinity of the paths. Results also indicate that the PSS and VOII methods can identify different important parameters. Because the methods emphasize somewhat different criteria for parameter importance, it is suggested that parameters identified by both methods be carefully considered in subsequent data collection efforts aimed at improving model predictions.

  7. Prediction of gestational age based on genome-wide differentially methylated regions.

    PubMed

    Bohlin, J; Håberg, S E; Magnus, P; Reese, S E; Gjessing, H K; Magnus, M C; Parr, C L; Page, C M; London, S J; Nystad, W

    2016-10-07

    We explored the association between gestational age and cord blood DNA methylation at birth and whether DNA methylation could be effective in predicting gestational age due to limitations with the presently used methods. We used data from the Norwegian Mother and Child Birth Cohort study (MoBa) with Illumina HumanMethylation450 data measured for 1753 newborns in two batches: MoBa 1, n = 1068; and MoBa 2, n = 685. Gestational age was computed using both ultrasound and the last menstrual period. We evaluated associations between DNA methylation and gestational age and developed a statistical model for predicting gestational age using MoBa 1 for training and MoBa 2 for predictions. The prediction model was additionally used to compare ultrasound and last menstrual period-based gestational age predictions. Furthermore, both CpGs and associated genes detected in the training models were compared to those detected in a published prediction model for chronological age. There were 5474 CpGs associated with ultrasound gestational age after adjustment for a set of covariates, including estimated cell type proportions, and Bonferroni-correction for multiple testing. Our model predicted ultrasound gestational age more accurately than it predicted last menstrual period gestational age. DNA methylation at birth appears to be a good predictor of gestational age. Ultrasound gestational age is more strongly associated with methylation than last menstrual period gestational age. The CpGs linked with our gestational age prediction model, and their associated genes, differed substantially from the corresponding CpGs and genes associated with a chronological age prediction model.
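
    As a hedged sketch of the prediction step (not the published MoBa model), the snippet below trains a cross-validated elastic-net regression of gestational age on CpG methylation beta values in one batch and evaluates it on a held-out batch; all data are simulated and the dimensions are arbitrary.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNetCV

    rng = np.random.default_rng(5)
    n_train, n_test, n_cpgs = 400, 200, 1000
    beta_train = rng.uniform(0, 1, (n_train, n_cpgs))    # simulated methylation beta values
    beta_test = rng.uniform(0, 1, (n_test, n_cpgs))
    coef = np.zeros(n_cpgs); coef[:50] = rng.normal(0, 2, 50)
    ga_train = 280 + beta_train @ coef + rng.normal(0, 5, n_train)   # gestational age in days
    ga_test = 280 + beta_test @ coef + rng.normal(0, 5, n_test)

    # Train on one batch (cf. MoBa 1), predict the other (cf. MoBa 2).
    model = ElasticNetCV(cv=5, random_state=0).fit(beta_train, ga_train)
    pred = model.predict(beta_test)
    print("median absolute error (days):", round(np.median(np.abs(pred - ga_test)), 1))
    ```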

  8. Analysis of turbulence and surface growth models on the estimation of soot level in ethylene non-premixed flames

    NASA Astrophysics Data System (ADS)

    Yunardi, Y.; Munawar, Edi; Rinaldi, Wahyu; Razali, Asbar; Iskandar, Elwina; Fairweather, M.

    2018-02-01

    Soot prediction in combustion systems has become a subject of attention, as many factors influence its accuracy. An accurate temperature prediction will likely yield better soot predictions, since the inception, growth and destruction of soot are all affected by temperature. This paper reports a study on the influence of turbulence closure and surface growth models on the prediction of soot levels in turbulent flames. The results show a substantial distinction between the temperature predictions obtained with the k-ε and Reynolds stress models for the two ethylene flames studied. Amongst the four types of surface growth rate model investigated, the assumption that the soot surface growth rate is proportional to the particle number density but independent of the surface area of the soot particles, f(As) = ρNs, yields the closest agreement with the radial data. Without any adjustment to the constants in the surface growth term, the other approaches, in which the surface growth is directly proportional to the surface area or to its square root, f(As) = As and f(As) = √As, result in an under-prediction of soot volume fraction. These results suggest that predictions of soot volume fraction are sensitive to the modelling of surface growth.

  9. Low-frequency sound propagation modeling over a locally-reacting boundary using the parabolic approximation

    NASA Technical Reports Server (NTRS)

    Robertson, J. S.; Siegman, W. L.; Jacobson, M. J.

    1989-01-01

    There is substantial interest in the analytical and numerical modeling of low-frequency, long-range atmospheric acoustic propagation. Ray-based models, because of frequency limitations, do not always give an adequate prediction of quantities such as sound pressure or intensity levels. However, the parabolic approximation method, widely used in ocean acoustics, and often more accurate than ray models for lower frequencies of interest, can be applied to acoustic propagation in the atmosphere. Modifications of an existing implicit finite-difference implementation for computing solutions to the parabolic approximation are discussed. A locally-reacting boundary is used together with a one-parameter impedance model. Intensity calculations are performed for a number of flow resistivity values in both quiescent and windy atmospheres. Variations in the value of this parameter are shown to have substantial effects on the spatial variation of the acoustic signal.

  10. A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change.

    PubMed

    Ashraf, M Irfan; Meng, Fan-Rui; Bourque, Charles P-A; MacLean, David A

    2015-01-01

    Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero mean the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm2 5-year-1 and volume: 0.0008 m3 5-year-1). Model variability described by root mean squared error (RMSE) was 40.53 cm2 5-year-1 in basal area prediction and 0.0393 m3 5-year-1 in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change. Artificial intelligence technology has substantial potential in forest modelling.
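
    A hedged sketch of the hybrid idea: a neural network trained on individual-tree records in which the predictors combine measured stand attributes with a process-model growth projection (here a made-up placeholder column), predicting 5-year basal-area increment. None of the values correspond to the Nova Scotia data or to JABOWA-3 output.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(6)
    n = 800
    dbh = rng.uniform(5, 60, n)                       # tree diameter (cm)
    stand_density = rng.uniform(300, 2500, n)         # stems per ha
    process_growth = 0.04 * dbh - 1e-5 * stand_density + rng.normal(0, 0.1, n)  # placeholder process-model signal
    ba_growth = 25 * process_growth + rng.normal(0, 2, n)    # synthetic 5-year BA increment (cm2)

    X = np.column_stack([dbh, stand_density, process_growth])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
    model.fit(X, ba_growth)
    print("training R^2:", round(model.score(X, ba_growth), 3))
    ```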

  11. A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change

    PubMed Central

    Ashraf, M. Irfan; Meng, Fan-Rui; Bourque, Charles P.-A.; MacLean, David A.

    2015-01-01

    Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero means the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm2 5-year-1 and volume: 0.0008 m3 5-year-1). Model variability described by root mean squared error (RMSE) in basal area prediction was 40.53 cm2 5-year-1 and 0.0393 m3 5-year-1 in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change. Artificial intelligence technology has substantial potential in forest modelling. PMID:26173081

  12. Acoustic test and analyses of three advanced turboprop models

    NASA Technical Reports Server (NTRS)

    Brooks, B. M.; Metzger, F. B.

    1980-01-01

    Results of acoustic tests of three 62.2 cm (24.5 inch) diameter models of the prop-fan (a small diameter, highly loaded, multi-bladed, variable pitch advanced turboprop) are presented. Results show that there is little difference in the noise produced by unswept and slightly swept designs. However, the model designed for noise reduction produces substantially less noise at test conditions simulating 0.8 Mach number cruise speed or at conditions simulating takeoff and landing. The acoustically designed model is also quieter in the near field at cruise conditions, and in the far field at takeoff and landing conditions it is 5 dB quieter than the unswept or slightly swept designs. Correlation between noise measurements and theoretical predictions as well as comparisons between measured and predicted acoustic pressure pulses generated by the prop-fan blades are discussed. The general characteristics of the pulses are predicted. Shadowgraph measurements were obtained which showed the location of bow and trailing waves.

  13. Biotic and abiotic factors predicting the global distribution and population density of an invasive large mammal

    PubMed Central

    Lewis, Jesse S.; Farnsworth, Matthew L.; Burdett, Chris L.; Theobald, David M.; Gray, Miranda; Miller, Ryan S.

    2017-01-01

    Biotic and abiotic factors are increasingly acknowledged to synergistically shape broad-scale species distributions. However, the relative importance of biotic and abiotic factors in predicting species distributions is unclear. In particular, biotic factors, such as predation and vegetation, including those resulting from anthropogenic land-use change, are underrepresented in species distribution modeling, but could improve model predictions. Using generalized linear models and model selection techniques, we used 129 estimates of population density of wild pigs (Sus scrofa) from 5 continents to evaluate the relative importance, magnitude, and direction of biotic and abiotic factors in predicting population density of an invasive large mammal with a global distribution. Incorporating diverse biotic factors, including agriculture, vegetation cover, and large carnivore richness, into species distribution modeling substantially improved model fit and predictions. Abiotic factors, including precipitation and potential evapotranspiration, were also important predictors. The predictive map of population density revealed wide-ranging potential for an invasive large mammal to expand its distribution globally. This information can be used to proactively create conservation/management plans to control future invasions. Our study demonstrates that the ongoing paradigm shift, which recognizes that both biotic and abiotic factors shape species distributions across broad scales, can be advanced by incorporating diverse biotic factors. PMID:28276519
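
    As a rough illustration of the GLM-with-model-selection workflow described above, the sketch below fits an abiotic-only and a combined biotic-plus-abiotic regression to synthetic data and compares them by AIC. The covariate names, the use of ordinary least squares on log density, and all numbers are assumptions for illustration, not the study's actual model.

      # Illustrative model comparison on synthetic data (hypothetical covariates).
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 129
      df = pd.DataFrame({
          "agriculture": rng.uniform(0, 1, n),
          "carnivore_richness": rng.integers(0, 6, n),
          "precipitation": rng.uniform(200, 2000, n),
      })
      # Synthetic log-density with both biotic and abiotic effects
      df["log_density"] = (0.8 * df["agriculture"]
                           - 0.15 * df["carnivore_richness"]
                           + 0.0005 * df["precipitation"]
                           + rng.normal(0, 0.3, n))

      abiotic = smf.ols("log_density ~ precipitation", data=df).fit()
      full = smf.ols("log_density ~ precipitation + agriculture + carnivore_richness", data=df).fit()
      print("AIC abiotic-only:", round(abiotic.aic, 1), " AIC full:", round(full.aic, 1))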

  14. Predicting the Geothermal Heat Flux in Greenland: A Machine Learning Approach

    NASA Astrophysics Data System (ADS)

    Rezvanbehbahani, Soroush; Stearns, Leigh A.; Kadivar, Amir; Walker, J. Doug; van der Veen, C. J.

    2017-12-01

    Geothermal heat flux (GHF) is a crucial boundary condition for making accurate predictions of ice sheet mass loss, yet it is poorly known in Greenland due to inaccessibility of the bedrock. Here we use a machine learning algorithm on a large collection of relevant geologic features and global GHF measurements and produce a GHF map of Greenland that we argue is within ˜15% accuracy. The main features of our predicted GHF map include a large region with high GHF in central-north Greenland surrounding the NorthGRIP ice core site, and hot spots in the Jakobshavn Isbræ catchment, upstream of Petermann Gletscher, and near the terminus of Nioghalvfjerdsfjorden glacier. Our model also captures the trajectory of Greenland movement over the Icelandic plume by predicting a stripe of elevated GHF in central-east Greenland. Finally, we show that our model can produce substantially more accurate predictions if additional measurements of GHF in Greenland are provided.
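
    A minimal sketch of the general approach (regress GHF on geologic predictors, cross-validate, then predict at unmeasured locations) using scikit-learn; the specific algorithm, features, and data here are assumptions, not those used in the study.

      # Hedged sketch with synthetic stand-in data.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n = 300
      X = rng.normal(size=(n, 4))          # e.g. crustal thickness, distance to ridge, ... (assumed)
      y = 55 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 3, n)   # GHF in mW m^-2 (synthetic)

      model = GradientBoostingRegressor(random_state=0)
      scores = cross_val_score(model, X, y, cv=5, scoring="r2")
      print("cross-validated R^2:", scores.round(2))

      model.fit(X, y)
      new_sites = rng.normal(size=(3, 4))
      print("predicted GHF (mW m^-2):", model.predict(new_sites).round(1))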

  15. Setting priorities for research on pollution reduction functions of agricultural buffers.

    PubMed

    Dosskey, Michael G

    2002-11-01

    The success of buffer installation initiatives and programs to reduce nonpoint source pollution of streams on agricultural lands will depend on the ability of local planners to locate and design buffers for specific circumstances with substantial and predictable results. Current predictive capabilities are inadequate, and major sources of uncertainty remain. An assessment of these uncertainties cautions that there is greater risk of overestimating buffer impact than underestimating it. Priorities for future research are proposed that will lead more quickly to major advances in predictive capabilities. Highest priority is given to work on the surface runoff filtration function, which is almost universally important to the amount of pollution reduction expected from buffer installation and for which there remain major sources of uncertainty in predicting the level of impact. Foremost uncertainties surround the extent and consequences of runoff flow concentration and pollutant accumulation. Other buffer functions, including filtration of groundwater nitrate and stabilization of channel erosion sources of sediments, may be important in some regions. However, uncertainty surrounds our ability to identify and quantify the extent of site conditions where buffer installation can substantially reduce stream pollution in these ways. Deficiencies in predictive models reflect gaps in experimental information as well as in technology to account for spatial heterogeneity of pollutant sources, pathways, and buffer capabilities across watersheds. Since completion of a comprehensive watershed-scale buffer model is probably far off, immediate needs call for simpler techniques to gauge the probable impacts of buffer installation at local scales.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., with the assistance of NREL's PV Manufacturing R&D program, have continued the advancement of CIGS production technology through the development of trajectory-oriented predictive/control models, fault-tolerance control, control-platform development, in-situ sensors, and process improvements. Modeling activities to date include the development of physics-based and empirical models for CIGS and sputter-deposition processing, implementation of model-based control, and application of predictive models to the construction of new evaporation sources and for control. Model-based control is enabled through implementation of reduced or empirical models into a control platform. Reliability improvement activities include implementation of preventive maintenance schedules; detection of failed sensors/equipment and reconfiguration to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which, in turn, has been enabled by control and reliability improvements due to this PV Manufacturing R&D program. This has resulted in substantial improvements of flexible CIGS PV module performance and efficiency.

  17. A predictive model for the tokamak density limit

    DOE PAGES

    Teng, Q.; Brennan, D. P.; Delgado-Aparicio, L.; ...

    2016-07-28

    We reproduce the Greenwald density limit in all tokamak experiments by using a phenomenologically correct model with parameters in the range of experiments. A simple model of equilibrium evolution and local power balance inside the island has been implemented to calculate the radiation-driven thermo-resistive tearing mode growth and explain the density limit. Strong destabilization of the tearing mode due to an imbalance of local Ohmic heating and radiative cooling in the island predicts the density limit within a few percent. Furthermore, we found that the density limit is a local edge limit and is weakly dependent on impurity densities. Our results are robust to a substantial variation in model parameters within the range of experiments.
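
    For reference, the empirical Greenwald limit that the model reproduces is conventionally written as below; this standard form is background, not a formula quoted in the abstract. Here n_G is the line-averaged electron density limit in units of 10^20 m^-3, I_p the plasma current in MA, and a the plasma minor radius in m.

      % Conventional form of the Greenwald density limit
      \[
        n_{G} \;=\; \frac{I_{p}}{\pi a^{2}}
      \]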

  18. Decomposition of heterogeneous organic matter and its long-term stabilization in soils

    USGS Publications Warehouse

    Sierra, Carlos A.; Harmon, Mark E.; Perakis, Steven S.

    2011-01-01

    Soil organic matter is a complex mixture of material with heterogeneous biological, physical, and chemical properties. Decomposition models represent this heterogeneity either as a set of discrete pools with different residence times or as a continuum of qualities. It is unclear, though, whether these two different approaches yield comparable predictions of organic matter dynamics. Here, we compare predictions from these two different approaches and propose an intermediate approach to study organic matter decomposition based on concepts from continuous models implemented numerically. We found that the disagreement between discrete and continuous approaches can be considerable depending on the degree of nonlinearity of the model and simulation time. The two approaches can diverge substantially for predicting long-term processes in soils. Based on our alternative approach, which is a modification of the continuous quality theory, we explored the temporal patterns that emerge by treating substrate heterogeneity explicitly. The analysis suggests that the pattern of carbon mineralization over time is highly dependent on the degree and form of nonlinearity in the model, mostly expressed as differences in microbial growth and efficiency for different substrates. Moreover, short-term stabilization and destabilization mechanisms operating simultaneously result in long-term accumulation of carbon characterized by low decomposition rates, independent of the characteristics of the incoming litter. We show that representation of heterogeneity in the decomposition process can lead to substantial improvements in our understanding of carbon mineralization and its long-term stability in soils.
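
    A toy numerical illustration of the discrete-pool representation discussed above: a three-pool, first-order decomposition model compared with a single effective pool. The pool fractions and rate constants are arbitrary assumptions chosen only to show the qualitative divergence at long simulation times.

      # Discrete multi-pool first-order decay vs. a single effective pool (toy parameters).
      import numpy as np

      t = np.linspace(0, 100, 201)                  # years
      pools = np.array([0.6, 0.3, 0.1])             # fraction of initial C per pool (assumed)
      k = np.array([0.5, 0.05, 0.002])              # decay rates (yr^-1) per pool (assumed)

      c_multi = (pools[:, None] * np.exp(-k[:, None] * t)).sum(axis=0)
      c_single = np.exp(-0.1 * t)                   # single effective rate, for comparison

      # The multi-pool mixture decays fast early and very slowly later, a pattern
      # a single first-order pool cannot reproduce over long simulation times.
      for yr in (1, 10, 100):
          i = np.searchsorted(t, yr)
          print(f"t={yr:>3} yr  multi-pool C={c_multi[i]:.3f}  single-pool C={c_single[i]:.3f}")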

  19. A simplified model of a mechanical cooling tower with both a fill pack and a coil

    NASA Astrophysics Data System (ADS)

    Van Riet, Freek; Steenackers, Gunther; Verhaert, Ivan

    2017-11-01

    Cooling accounts for a large amount of the global primary energy consumption in buildings and industrial processes. A substantial part of this cooling demand is produced by mechanical cooling towers. Simulations benefit the sizing and integration of cooling towers in overall cooling networks. However, for these simulations fast-to-calculate and easy-to-parametrize models are required. In this paper, a new model is developed for a mechanical draught cooling tower with both a cooling coil and a fill pack. The model needs manufacturers' performance data at only three operational states (at varying air and water flow rates) to be parametrized. The model predicts the cooled, outgoing water temperature. These predictions were compared with experimental data for a wide range of operational states. The model was able to predict the temperature with a maximum absolute error of 0.59°C. The relative error of cooling capacity was mostly between ±5%.

  20. The Relationship Between Social Support and Subjective Well-Being Across Age

    PubMed Central

    Salthouse, Timothy A.; Oishi, Shigehiro; Jeswani, Sheena

    2014-01-01

    The relationships among types of social support and different facets of subjective well-being (i.e., life satisfaction, positive affect, and negative affect) were examined in a sample of 1,111 individuals between the ages of 18 and 95. Using structural equation modeling, we found that life satisfaction was predicted by enacted and perceived support, positive affect was predicted by family embeddedness and provided support, and negative affect was predicted by perceived support. When personality variables were included in a subsequent model, the influence of the social support variables was generally reduced. Invariance analyses conducted across age groups indicated that there were no substantial differences in predictors of the different types of subjective well-being across age. PMID:25045200

  1. Peer group socialization of homophobic attitudes and behavior during adolescence.

    PubMed

    Poteat, V Paul

    2007-01-01

    A social developmental framework was applied to test for the socialization of homophobic attitudes and behavior within adolescent peer groups (Grades 7-11; aged 12-17 years). Substantial similarity within and differences across groups were documented. Multilevel models identified a group socializing contextual effect, predicting homophobic attitudes and behavior of individuals within the group 8 months later, even after controlling for the predictive effect of individuals' own previously reported attitudes and behavior. Several group characteristics moderated the extent to which individuals' previously reported attitudes predicted later attitudes. Findings indicate the need to integrate the concurrent assessment of individual and social factors to inform the construction of more comprehensive models of how prejudiced attitudes and behaviors develop and are perpetuated.

  2. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework

    PubMed Central

    Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-01-01

    Aim Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location Eastern North America (as an example). Methods Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698
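
    As a highly simplified, hedged illustration of combining a process-based estimate with presence/absence data (not the authors' hierarchical implementation), the sketch below treats a hypothetical physiological model's suitability score as a Beta prior on occurrence probability in a grid cell and updates it with hypothetical survey counts.

      # Conjugate Beta-Binomial blending of a model-based prior with survey data (toy numbers).
      import numpy as np
      from scipy import stats

      # Prior from a hypothetical physiological model: suitability ~0.7, moderate confidence
      prior_mean, prior_strength = 0.7, 20.0
      a0, b0 = prior_mean * prior_strength, (1 - prior_mean) * prior_strength

      # Hypothetical survey data for the cell: 3 presences in 12 visits
      presences, visits = 3, 12
      a_post, b_post = a0 + presences, b0 + (visits - presences)

      posterior = stats.beta(a_post, b_post)
      print("posterior mean occurrence probability:", round(posterior.mean(), 3))
      print("95% credible interval:", np.round(posterior.interval(0.95), 3))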

  3. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.

    PubMed

    Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-02-01

    Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location: Eastern North America (as an example). Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.

  4. Genetic-based prediction of disease traits: prediction is very difficult, especially about the future†

    PubMed Central

    Schrodi, Steven J.; Mukherjee, Shubhabrata; Shan, Ying; Tromp, Gerard; Sninsky, John J.; Callear, Amy P.; Carter, Tonia C.; Ye, Zhan; Haines, Jonathan L.; Brilliant, Murray H.; Crane, Paul K.; Smelser, Diane T.; Elston, Robert C.; Weeks, Daniel E.

    2014-01-01

    Translation of results from genetic findings to inform medical practice is a highly anticipated goal of human genetics. The aim of this paper is to review and discuss the role of genetics in medically-relevant prediction. Germline genetics presages disease onset and therefore can contribute prognostic signals that augment laboratory tests and clinical features. As such, the impact of genetic-based predictive models on clinical decisions and therapy choice could be profound. However, given that (i) medical traits result from a complex interplay between genetic and environmental factors, (ii) the underlying genetic architectures for susceptibility to common diseases are not well-understood, and (iii) replicable susceptibility alleles, in combination, account for only a moderate amount of disease heritability, there are substantial challenges to constructing and implementing genetic risk prediction models with high utility. In spite of these challenges, concerted progress has continued in this area with an ongoing accumulation of studies that identify disease predisposing genotypes. Several statistical approaches with the aim of predicting disease have been published. Here we summarize the current state of disease susceptibility mapping and pharmacogenetics efforts for risk prediction, describe methods used to construct and evaluate genetic-based predictive models, and discuss applications. PMID:24917882

  5. Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City.

    PubMed

    Yang, Wan; Olson, Donald R; Shaman, Jeffrey

    2016-11-01

    The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast.

  6. Using modelling to predict impacts of sea level rise and increased turbidity on seagrass distributions in estuarine embayments

    NASA Astrophysics Data System (ADS)

    Davis, Tom R.; Harasti, David; Smith, Stephen D. A.; Kelaher, Brendan P.

    2016-11-01

    Climate change induced sea level rise will affect shallow estuarine habitats, which are already under threat from multiple anthropogenic stressors. Here, we present the results of modelling to predict potential impacts of climate change associated processes on seagrass distributions. We use a novel application of relative environmental suitability (RES) modelling to examine relationships between variables of physiological importance to seagrasses (light availability, wave exposure, and current flow) and seagrass distributions within 5 estuarine embayments. Models were constructed separately for Posidonia australis and Zostera muelleri subsp. capricorni using seagrass data from Port Stephens estuary, New South Wales, Australia. Subsequent testing of models used independent datasets from four other estuarine embayments (Wallis Lake, Lake Illawarra, Merimbula Lake, and Pambula Lake) distributed along 570 km of the east Australian coast. Relative environmental suitability models provided adequate predictions for seagrass distributions within Port Stephens and the other estuarine embayments, indicating that they may have broad regional application. The RES models predict that both sea level rise and increased turbidity will cause substantial seagrass losses in deeper estuarine areas, resulting in a net shoreward movement of seagrass beds. Seagrass species distribution models developed in this study provide a valuable tool to predict future shifts in estuarine seagrass distributions, allowing identification of areas for protection, monitoring and rehabilitation.

  7. Ensemble Learning of QTL Models Improves Prediction of Complex Traits

    PubMed Central

    Bian, Yang; Holland, James B.

    2015-01-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and, by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383
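
    A conceptual sketch only (synthetic genotypes and generic sparse linear models rather than formal QTL mapping): thin a dense marker map into interleaved subsets, select markers and fit a model on each thinned map, then ensemble the predictions, in the spirit of the thinning-and-aggregating idea described above.

      # Thin-and-ensemble sketch on simulated marker data (all values are assumptions).
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(2)
      n_lines, n_markers = 400, 600
      X = rng.integers(0, 2, size=(n_lines, n_markers)).astype(float)   # inbred genotypes
      beta = np.zeros(n_markers)
      beta[rng.choice(n_markers, 15, replace=False)] = rng.normal(0, 1, 15)
      y = X @ beta + rng.normal(0, 1.0, n_lines)

      train, test = np.arange(0, 300), np.arange(300, n_lines)
      n_subsets = 5
      preds = []
      for s in range(n_subsets):
          cols = np.arange(s, n_markers, n_subsets)                     # every 5th marker -> thinned map
          model = LassoCV(cv=5).fit(X[np.ix_(train, cols)], y[train])   # marker selection per thinned map
          preds.append(model.predict(X[np.ix_(test, cols)]))

      ensemble = np.mean(preds, axis=0)
      corr = np.corrcoef(ensemble, y[test])[0, 1]
      print("ensemble prediction accuracy (r):", round(corr, 2))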

  8. A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling

    NASA Astrophysics Data System (ADS)

    Moore, Chandler; Akiki, Georges; Balachandar, S.

    2017-11-01

    This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model-form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using more DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
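
    A schematic sketch of the hybrid idea (a toy "physics" model and synthetic targets standing in for DNS data): fit a regressor to the residuals of the physics-based prediction and superimpose the two. Everything here is an assumption for illustration; it is not the PIEP model itself.

      # Physics prediction + learned residual correction = hybrid prediction (toy example).
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(3)
      n = 500
      features = rng.normal(size=(n, 3))                 # e.g. volume fraction, Re, neighbour offsets (assumed)
      true_force = 1.0 + 0.8 * features[:, 0] + 0.3 * np.sin(features[:, 1]) + 0.1 * features[:, 2] ** 2
      physics_pred = 1.0 + 0.8 * features[:, 0]          # physics part captures the leading effect only

      residual_model = RandomForestRegressor(n_estimators=200, random_state=0)
      residual_model.fit(features[:400], (true_force - physics_pred)[:400])

      hybrid_pred = physics_pred[400:] + residual_model.predict(features[400:])
      err_physics = np.mean((true_force[400:] - physics_pred[400:]) ** 2)
      err_hybrid = np.mean((true_force[400:] - hybrid_pred) ** 2)
      print("MSE physics-only:", round(err_physics, 4), " MSE hybrid:", round(err_hybrid, 4))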

  9. Estimating energy expenditure from heart rate in older adults: a case for calibration.

    PubMed

    Schrack, Jennifer A; Zipunnikov, Vadim; Goldsmith, Jeff; Bandeen-Roche, Karen; Crainiceanu, Ciprian M; Ferrucci, Luigi

    2014-01-01

    Accurate measurement of free-living energy expenditure is vital to understanding changes in energy metabolism with aging. The efficacy of heart rate as a surrogate for energy expenditure is rooted in the assumption of a linear function between heart rate and energy expenditure, but its validity and reliability in older adults remain unclear. The objective was to assess the validity and reliability of the linear function between heart rate and energy expenditure in older adults using different levels of calibration. Heart rate and energy expenditure were assessed across five levels of exertion in 290 adults participating in the Baltimore Longitudinal Study of Aging. Correlation and random effects regression analyses assessed the linearity of the relationship between heart rate and energy expenditure, and cross-validation models assessed predictive performance. Heart rate and energy expenditure were highly correlated (r=0.98) and linear regardless of age or sex. Intra-person variability was low but inter-person variability was high, with substantial heterogeneity of the random intercept (s.d.=0.372) despite similar slopes. Cross-validation models indicated that individual calibration data substantially improve the accuracy of energy expenditure predictions from heart rate, reducing the potential for considerable measurement bias. Although using five calibration measures provided the greatest reduction in the standard deviation of prediction errors (1.08 kcals/min), substantial improvement was also noted with two (0.75 kcals/min). These findings indicate standard regression equations may be used to make population-level inferences when estimating energy expenditure from heart rate in older adults, but caution should be exercised when making inferences at the individual level without proper calibration.
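
    A simplified sketch of individual calibration (made-up numbers, not study data): fit a per-person linear function of energy expenditure on heart rate from a few graded exertion levels, then use it to predict free-living energy expenditure from heart rate alone.

      # Per-person linear calibration of energy expenditure (EE) against heart rate (HR).
      import numpy as np

      # Calibration measurements for one hypothetical participant (HR in bpm, EE in kcal/min)
      hr_cal = np.array([65, 85, 100, 115, 130])
      ee_cal = np.array([1.2, 2.4, 3.5, 4.8, 6.1])

      slope, intercept = np.polyfit(hr_cal, ee_cal, 1)   # individual-level linear function
      print(f"EE = {intercept:.2f} + {slope:.3f} * HR")

      free_living_hr = np.array([72, 94, 110])
      print("predicted EE (kcal/min):", np.round(intercept + slope * free_living_hr, 2))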

  10. Sustainable exploitation and management of autogenic ecosystem engineers: application to oysters in Chesapeake Bay.

    PubMed

    Wilberg, Michael J; Wiedenmann, John R; Robinson, Jason M

    2013-06-01

    Autogenic ecosystem engineers are critically important parts of many marine and estuarine systems because of their substantial effect on ecosystem services. Oysters are of particular importance because of their capacity to modify coastal and estuarine habitats and the highly degraded status of their habitats worldwide. However, models to predict dynamics of ecosystem engineers have not previously included the effects of exploitation. We developed a linked population and habitat model for autogenic ecosystem engineers undergoing exploitation. We parameterized the model to represent eastern oyster (Crassostrea virginica) in upper Chesapeake Bay by selecting sets of parameter values that matched observed rates of change in abundance and habitat. We used the model to evaluate the effects of a range of management and restoration options including sustainability of historical fishing pressure, effectiveness of a newly enacted sanctuary program, and relative performance of two restoration approaches. In general, autogenic ecosystem engineers are expected to be substantially less resilient to fishing than an equivalent species that does not rely on itself for habitat. Historical fishing mortality rates in upper Chesapeake Bay for oysters were above the levels that would lead to extirpation. Reductions in fishing or closure of the fishery were projected to lead to long-term increases in abundance and habitat. For fisheries to become sustainable outside of sanctuaries, a substantial larval subsidy would be required from oysters within sanctuaries. Restoration efforts using high-relief reefs were predicted to allow recovery within a shorter period of time than low-relief reefs. Models such as ours, that allow for feedbacks between population and habitat dynamics, can be effective tools for guiding management and restoration of autogenic ecosystem engineers.

  11. Mechanistic quantitative structure-activity relationship model for the photoinduced toxicity of polycyclic aromatic hydrocarbons. 2: An empirical model for the toxicity of 16 polycyclic aromatic hydrocarbons to the duckweed Lemna gibba L. G-3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, X.D.; Krylov, S.N.; Ren, L.

    1997-11-01

    Photoinduced toxicity of polycyclic aromatic hydrocarbons (PAHs) occurs via photosensitization reactions (e.g., generation of singlet-state oxygen) and by photomodification (photooxidation and/or photolysis) of the chemicals to more toxic species. The quantitative structure-activity relationship (QSAR) described in the companion paper predicted, in theory, that photosensitization and photomodification additively contribute to toxicity. To substantiate this QSAR modeling exercise it was necessary to show that toxicity can be described by empirically derived parameters. The toxicity of 16 PAHs to the duckweed Lemna gibba was measured as inhibition of leaf production in simulated solar radiation (a light source with a spectrum similar to that of sunlight). A predictive model for toxicity was generated based on the theoretical model developed in the companion paper. The photophysical descriptors required of each PAH for modeling were efficiency of photon absorbance, relative uptake, quantum yield for triplet-state formation, and the rate of photomodification. The photomodification rates of the PAHs showed a moderate correlation to toxicity, whereas a derived photosensitization factor (PSF; based on absorbance, triplet-state quantum yield, and uptake) for each PAH showed only a weak, complex correlation to toxicity. However, summing the rate of photomodification and the PSF resulted in a strong correlation to toxicity that had predictive value. When the PSF and a derived photomodification factor (PMF; based on the photomodification rate and toxicity of the photomodified PAHs) were summed, an excellent explanatory model of toxicity was produced, substantiating the additive contributions of the two factors.

  12. Derivation and Validation of a Risk Standardization Model for Benchmarking Hospital Performance for Health-Related Quality of Life Outcomes after Acute Myocardial Infarction

    PubMed Central

    Arnold, Suzanne V.; Masoudi, Frederick A.; Rumsfeld, John S.; Li, Yan; Jones, Philip G.; Spertus, John A.

    2014-01-01

    Background Before outcomes-based measures of quality can be used to compare and improve care, they must be risk-standardized to account for variations in patient characteristics. Despite the importance of health-related quality of life (HRQL) outcomes among patients with acute myocardial infarction (AMI), no risk-standardized models have been developed. Methods and Results We assessed disease-specific HRQL using the Seattle Angina Questionnaire at baseline and 1 year later in 2693 unselected AMI patients from 24 hospitals enrolled in the TRIUMPH registry. Using 57 candidate sociodemographic, economic, and clinical variables present on admission, we developed a parsimonious, hierarchical linear regression model to predict HRQL. Eleven variables were independently associated with poor HRQL after AMI, including younger age, prior CABG, depressive symptoms, and financial difficulties (R2=20%). The model demonstrated excellent internal calibration and reasonable calibration in an independent sample of 1890 AMI patients in a separate registry, although the model slightly over-predicted HRQL scores in the higher deciles. Among the 24 TRIUMPH hospitals, 1-year unadjusted HRQL scores ranged from 67–89. After risk-standardization, variability in HRQL scores narrowed substantially (range=79–83), and the hospital performance group (bottom 20%/middle 60%/top 20%) changed for 14 of the 24 hospitals (58% reclassification with risk-standardization). Conclusions In this predictive model for HRQL after AMI, we identified risk factors, including economic and psychological characteristics, associated with HRQL outcomes. Adjusting for these factors substantially altered the rankings of hospitals as compared with unadjusted comparisons. Using this model to compare risk-standardized HRQL outcomes across hospitals may identify processes of care that maximize this important patient-centered outcome. PMID:24163068

  13. Nonidentifiability of population size from capture-recapture data with heterogeneous detection probabilities

    USGS Publications Warehouse

    Link, W.A.

    2003-01-01

    Heterogeneity in detection probabilities has long been recognized as problematic in mark-recapture studies, and numerous models have been developed to accommodate its effects. Individual heterogeneity is especially problematic, in that reasonable alternative models may predict essentially identical observations from populations of substantially different sizes. Thus even with very large samples, the analyst will not be able to distinguish among reasonable models of heterogeneity, even though these yield quite distinct inferences about population size. The problem is illustrated with models for closed and open populations.

  14. Peer Group Socialization of Homophobic Attitudes and Behavior during Adolescence

    ERIC Educational Resources Information Center

    Poteat, V. Paul

    2007-01-01

    A social developmental framework was applied to test for the socialization of homophobic attitudes and behavior within adolescent peer groups (Grades 7-11; aged 12-17 years). Substantial similarity within and differences across groups were documented. Multilevel models identified a group socializing contextual effect, predicting homophobic…

  15. Ecosystem resilience despite large-scale altered hydroclimatic conditions

    Treesearch

    Guillermo E. Ponce Campos; M. Susan Moran; Alfredo Huete; Yongguang Zhang; Cynthia Bresloff; Travis E. Huxman; Derek Eamus; David D. Bosch; Anthony R. Buda; Stacey A. Gunter; Tamara Heartsill Scalley; Stanley G. Kitchen; Mitchel P. McClaran; W. Henry McNab; Diane S. Montoya; Jack A. Morgan; Debra P. C. Peters; E. John Sadler; Mark S. Seyfried; Patrick J. Starks

    2013-01-01

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological model for many regions. Large-scale, warm droughts have recently occurred in North America, Africa, Europe, Amazonia and Australia, resulting in major effects on terrestrial ecosystems, carbon balance and food...

  16. The effects of streamline curvature and swirl on turbulent flows in curved ducts

    NASA Technical Reports Server (NTRS)

    Cheng, Chih-Hsiung; Farokhi, Saeed

    1990-01-01

    A technique for improving the numerical predictions of turbulent flows with the effect of streamline curvature is developed. Separated flows, the flow in a curved duct, and swirling flows are examples of flow fields where streamline curvature plays a dominant role. A comprehensive literature review on the effect of streamline curvature was conducted. New algebraic formulations for the eddy viscosity incorporating the kappa-epsilon turbulence model are proposed to account for various effects of streamline curvature. The loci of flow reversal of the separated flows over various backward-facing steps are employed to test the capability of the proposed turbulence model in capturing the effect of local curvature. The inclusion of the effect of longitudinal curvature in the proposed turbulence model is validated by predicting the distributions of the static pressure coefficients in an S-bend duct and in 180 degree turn-around ducts. The proposed turbulence model embedded with transverse curvature modification is substantiated by predicting the decay of the axial velocities in the confined swirling flows. The numerical predictions of different curvature effects by the proposed turbulence models are also reported.

  17. Fractional Brownian motion and multivariate-t models for longitudinal biomedical data, with application to CD4 counts in HIV-positive patients.

    PubMed

    Stirrup, Oliver T; Babiker, Abdel G; Carpenter, James R; Copas, Andrew J

    2016-04-30

    Longitudinal data are widely analysed using linear mixed models, with 'random slopes' models particularly common. However, when modelling, for example, longitudinal pre-treatment CD4 cell counts in HIV-positive patients, the incorporation of non-stationary stochastic processes such as Brownian motion has been shown to lead to a more biologically plausible model and a substantial improvement in model fit. In this article, we propose two further extensions. Firstly, we propose the addition of a fractional Brownian motion component, and secondly, we generalise the model to follow a multivariate-t distribution. These extensions are biologically plausible, and each demonstrated substantially improved fit on application to example data from the Concerted Action on SeroConversion to AIDS and Death in Europe study. We also propose novel procedures for residual diagnostic plots that allow such models to be assessed. Cohorts of patients were simulated from the previously reported and newly developed models in order to evaluate differences in predictions made for the timing of treatment initiation under different clinical management strategies. A further simulation study was performed to demonstrate the substantial biases in parameter estimates of the mean slope of CD4 decline with time that can occur when random slopes models are applied in the presence of censoring because of treatment initiation, with the degree of bias found to depend strongly on the treatment initiation rule applied. Our findings indicate that researchers should consider more complex and flexible models for the analysis of longitudinal biomarker data, particularly when there are substantial missing data, and that the parameter estimates from random slopes models must be interpreted with caution. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
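
    An illustrative sketch (not the authors' estimation code) of simulating a fractional Brownian motion path from its standard covariance function, Cov(B_H(s), B_H(t)) = 0.5 * (s^(2H) + t^(2H) - |t - s|^(2H)), via a Cholesky factorisation, e.g. to visualise candidate biomarker trajectories. The time grid and Hurst parameter are arbitrary assumptions.

      # Exact simulation of fractional Brownian motion on a fixed time grid.
      import numpy as np

      def simulate_fbm(times, hurst, rng):
          t = np.asarray(times, dtype=float)
          s, u = np.meshgrid(t, t)
          cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst) - np.abs(s - u) ** (2 * hurst))
          chol = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))   # small jitter for stability
          return chol @ rng.standard_normal(len(t))

      rng = np.random.default_rng(4)
      times = np.linspace(0.1, 5.0, 50)          # e.g. years since seroconversion (strictly > 0)
      path = simulate_fbm(times, hurst=0.7, rng=rng)
      print("first five increments:", np.round(np.diff(path)[:5], 3))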

  18. Genetic determinants of freckle occurrence in the Spanish population: Towards ephelides prediction from human DNA samples.

    PubMed

    Hernando, Barbara; Ibañez, Maria Victoria; Deserio-Cuesta, Julio Alberto; Soria-Navarro, Raquel; Vilar-Sastre, Inca; Martinez-Cadenas, Conrado

    2018-03-01

    Prediction of human pigmentation traits, one of the most differentiable externally visible characteristics among individuals, from biological samples represents a useful tool in the field of forensic DNA phenotyping. In spite of freckling being a relatively common pigmentation characteristic in Europeans, little is known about the genetic basis of this largely genetically determined phenotype in southern European populations. In this work, we explored the predictive capacity of eight freckle and sunlight sensitivity-related genes in 458 individuals (266 non-freckled controls and 192 freckled cases) from Spain. Four loci were associated with freckling (MC1R, IRF4, ASIP and BNC2), and female sex was also found to be a predictive factor for having a freckling phenotype in our population. After identifying the most informative genetic variants responsible for human ephelides occurrence in our sample set, we developed a DNA-based freckle prediction model using a multivariate regression approach. Once developed, the capabilities of the prediction model were tested by a repeated 10-fold cross-validation approach. The proportion of correctly predicted individuals using the DNA-based freckle prediction model was 74.13%. The implementation of sex into the DNA-based freckle prediction model slightly improved the overall prediction accuracy by 2.19% (76.32%). Further evaluation of the newly-generated prediction model was performed by assessing the model's performance in a new cohort of 212 Spanish individuals, reaching a classification success rate of 74.61%. Validation of this prediction model may be carried out in larger populations, including samples from different European populations. Further research to validate and improve this newly-generated freckle prediction model will be needed before its forensic application. Together with DNA tests already validated for eye and hair colour prediction, this freckle prediction model may lead to a substantially more detailed physical description of unknown individuals from DNA found at the crime scene. Copyright © 2017 Elsevier B.V. All rights reserved.
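
    A hedged sketch of the evaluation workflow (random stand-in genotypes; the locus names in the comments are placeholders, not the study's markers or effect sizes): a multivariate logistic model for freckling that includes sex, assessed by repeated 10-fold cross-validation with scikit-learn.

      # Logistic prediction model with repeated 10-fold cross-validation (synthetic data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

      rng = np.random.default_rng(5)
      n = 458
      X = np.column_stack([
          rng.integers(0, 3, n),     # placeholder risk-allele count, e.g. MC1R
          rng.integers(0, 3, n),     # placeholder, e.g. IRF4
          rng.integers(0, 3, n),     # placeholder, e.g. ASIP
          rng.integers(0, 3, n),     # placeholder, e.g. BNC2
          rng.integers(0, 2, n),     # sex (0 = male, 1 = female)
      ])
      logit = -1.5 + 0.9 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 4]   # assumed effects
      y = rng.random(n) < 1 / (1 + np.exp(-logit))                   # simulated case/control status

      model = LogisticRegression(max_iter=1000)
      cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
      acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
      print("mean cross-validated accuracy:", round(acc.mean(), 3))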

  19. 2016 International Land Model Benchmarking (ILAMB) Workshop Report

    NASA Technical Reports Server (NTRS)

    Hoffman, Forrest M.; Koven, Charles D.; Keppel-Aleks, Gretchen; Lawrence, David M.; Riley, William J.; Randerson, James T.; Ahlstrom, Anders; Abramowitz, Gabriel; Baldocchi, Dennis D.; Best, Martin J.; hide

    2016-01-01

    As earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of terrestrial biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistryclimate feedbacks and ecosystem processes in these models are essential for reducing the acknowledged substantial uncertainties in 21st century climate change projections.

  20. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models; and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of tests errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
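
    A worked sketch of the underlying arithmetic (illustrative likelihood-ratio values only): converting a pretest probability to a posttest probability with likelihood ratios under the naive independence assumption, which the regression-based methods above adjust for dependence between tests.

      # Pretest-to-posttest probability via likelihood ratios (independence Bayes form).
      def posttest_probability(pretest_prob, *likelihood_ratios):
          odds = pretest_prob / (1 - pretest_prob)
          for lr in likelihood_ratios:          # naive update: multiply odds by each raw LR
              odds *= lr
          return odds / (1 + odds)

      # e.g. pretest probability 0.20, two positive findings with assumed LRs 3.0 and 1.8
      print(round(posttest_probability(0.20, 3.0, 1.8), 3))   # about 0.574

      # A logistic-regression formulation effectively replaces the product of raw LRs with
      # shrunken (adjusted) LRs so that correlated findings are not double-counted.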

  1. Uncertainty analysis of the nonideal competitive adsorption-donnan model: effects of dissolved organic matter variability on predicted metal speciation in soil solution.

    PubMed

    Groenenberg, Jan E; Koopmans, Gerwin F; Comans, Rob N J

    2010-02-15

    Ion binding models such as the nonideal competitive adsorption-Donnan model (NICA-Donnan) and model VI successfully describe laboratory data of proton and metal binding to purified humic substances (HS). In this study model performance was tested in more complex natural systems. The speciation predicted with the NICA-Donnan model and the associated uncertainty were compared with independent measurements in soil solution extracts, including the free metal ion activity and fulvic (FA) and humic acid (HA) fractions of dissolved organic matter (DOM). Potentially important sources of uncertainty are the DOM composition and the variation in binding properties of HS. HS fractions of DOM in soil solution extracts varied between 14 and 63% and consisted mainly of FA. Moreover, binding parameters optimized for individual FA samples show substantial variation. Monte Carlo simulations show that uncertainties in predicted metal speciation, for metals with a high affinity for FA (Cu, Pb), are largely due to the natural variation in binding properties (i.e., the affinity) of FA. Predictions for metals with a lower affinity (Cd) are more prone to uncertainties in the fraction FA in DOM and the maximum site density (i.e., the capacity) of the FA. Based on these findings, suggestions are provided to reduce uncertainties in model predictions.

  2. Persistent Organic Pollutants in Norwegian Men from 1979 to 2007: Intraindividual Changes, Age–Period–Cohort Effects, and Model Predictions

    PubMed Central

    Breivik, Knut; Fuskevåg, Ole-Martin; Nieboer, Evert; Odland, Jon Øyvind; Sandanger, Torkjel Manning

    2013-01-01

    Background: Longitudinal monitoring studies of persistent organic pollutants (POPs) in human populations are important to better understand changes with time and age, and for future predictions. Objectives: We sought to describe serum POP time trends on an individual level, investigate age–period–cohort effects, and compare predicted polychlorinated biphenyl (PCB) concentrations to measured values. Methods: Serum was sampled in 1979, 1986, 1994, 2001, and 2007 from a cohort of 53 men in Northern Norway and analyzed for 41 POPs. Time period, age, and birth cohort effects were assessed by graphical analyses and mixed-effect models. We derived the predicted concentrations of four PCBs for each sampling year using the CoZMoMAN model. Results: The median decreases in summed serum POP concentrations (lipid-adjusted) in 1986, 1994, 2001, and 2007 relative to 1979 were –22%, –52%, –54%, and –68%, respectively. We observed substantial declines in all POP groups with the exception of chlordanes. Time period (reflected by sampling year) was the strongest descriptor of changes in PCB-153 concentrations. Predicted PCB-153 concentrations were consistent with measured concentrations in the study population. Conclusions: Our results suggest substantial intraindividual declines in serum concentrations of legacy POPs from 1979 to 2007 in men from Northern Norway. These changes are consistent with reduced environmental exposure during these 30 years and highlight the relation between historic emissions and POP concentrations measured in humans. Observed data and interpretations are supported by estimates from the CoZMoMAN emission-based model. A longitudinal decrease in concentrations with age was evident for all birth cohorts. Overall, our findings support the relevance of age–period–cohort effects to human biomonitoring of environmental contaminants. Citation: Nøst TH, Breivik K, Fuskevåg OM, Nieboer E, Odland JØ, Sandanger TM. 2013. Persistent organic pollutants in Norwegian men from 1979 to 2007: intraindividual changes, age–period–cohort effects, and model predictions. Environ Health Perspect 121:1292–1298; http://dx.doi.org/10.1289/ehp.1206317 PMID:24007675

  3. A nanomaterial release model for waste shredding using a Bayesian belief network

    NASA Astrophysics Data System (ADS)

    Shandilya, Neeraj; Ligthart, Tom; van Voorde, Imelda; Stahlmecke, Burkhard; Clavaguera, Simon; Philippot, Cecile; Ding, Yaobo; Goede, Henk

    2018-02-01

    The shredding of waste of electrical and electronic equipment (WEEE) and other products incorporating nanomaterials can lead to a substantial release of nanomaterials. Considering the uncertainty, complexity, and scarcity of experimental data on release, we present the development of a Bayesian belief network (BBN) model. This baseline model aims to give a first prediction of the release of nanomaterials (excluding nanofibers) during their mechanical shredding. With a focus on the description of the model development methodology, we characterize nanomaterial release in terms of number, size, mass, and composition of released particles. Through a sensitivity analysis of the model, we find that material-specific parameters, such as the affinity of nanomaterials to the matrix of the composite and their state of dispersion inside the matrix, can reduce nanomaterial release by up to 50%. Shredder-specific parameters, such as the number of shafts in a shredder and the input and output size of the material for shredding, could reduce it by up to 98%. Comparison with two experimental test cases shows promising predictive capacity of the model. As additional experimental data on nanomaterial release become available, the model is able to further adapt and update risk forecasts. When adapting the model with additional expert beliefs, experts should be selected using criteria such as substantial contribution to nanomaterial and/or particulate matter release-related scientific literature, the capacity and willingness to contribute to further development of the BBN model, and openness to accepting deviating opinions.

  4. A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2017-01-01

    Reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially by using new materials and processes. In high-quality capacitors, the time to inception of intrinsic wear-out failures now substantially exceeds the mission duration in most high-reliability applications. However, in capacitors with defects, degradation processes might accelerate substantially and cause infant mortality failures. In this work, a physical model that relates the presence of defects to reduction of breakdown voltages and decreasing times to failure has been suggested. The effect of the defect size has been analyzed using a thermal runaway model of failures. Adequacy of highly accelerated life testing (HALT) to predict reliability at normal operating conditions and limitations of voltage acceleration are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.

  5. Automatic Prediction of Conversion from Mild Cognitive Impairment to Probable Alzheimer’s Disease using Structural Magnetic Resonance Imaging

    PubMed Central

    Nho, Kwangsik; Shen, Li; Kim, Sungeun; Risacher, Shannon L.; West, John D.; Foroud, Tatiana; Jack, Clifford R.; Weiner, Michael W.; Saykin, Andrew J.

    2010-01-01

    Mild Cognitive Impairment (MCI) is thought to be a precursor to the development of early Alzheimer’s disease (AD). For early diagnosis of AD, the development of a model that is able to predict the conversion of amnestic MCI to AD is challenging. Using automatic whole-brain MRI analysis techniques and pattern classification methods, we developed a model to differentiate AD from healthy controls (HC), and then applied it to the prediction of MCI conversion to AD. Classification was performed using support vector machines (SVMs) together with a SVM-based feature selection method, which selected a set of most discriminating predictors for optimizing prediction accuracy. We obtained 90.5% cross-validation accuracy for classifying AD and HC, and 72.3% accuracy for predicting MCI conversion to AD. These analyses suggest that a classifier trained to separate HC vs. AD has substantial potential for predicting MCI conversion to AD. PMID:21347037
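
    A schematic sketch (synthetic features standing in for regional MRI measures): an SVM classifier with an SVM-based recursive feature elimination step, evaluated by cross-validation with scikit-learn. The pipeline, feature counts, and data are assumptions for illustration, not the study's implementation.

      # Linear SVM with SVM-based feature selection, cross-validated on synthetic data.
      import numpy as np
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(6)
      n, p = 200, 50                                          # subjects x regional measures (assumed)
      X = rng.normal(size=(n, p))
      y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, n)) > 0    # 1 = AD-like pattern (simulated)

      clf = make_pipeline(
          StandardScaler(),
          RFE(SVC(kernel="linear"), n_features_to_select=10),  # keep most discriminating features
          SVC(kernel="linear"),
      )
      acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
      print("10-fold cross-validation accuracy:", round(acc.mean(), 2))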

  6. Health-based risk adjustment: is inpatient and outpatient diagnostic information sufficient?

    PubMed

    Lamers, L M

    Adequate risk adjustment is critical to the success of market-oriented health care reforms in many countries. Currently used risk adjusters based on demographic and diagnostic cost groups (DCGs) do not reflect expected costs accurately. This study examines the simultaneous predictive accuracy of inpatient and outpatient morbidity measures and prior costs. DCGs, pharmacy cost groups (PCGs), and prior year's costs improve the predictive accuracy of the demographic model substantially. DCGs and PCGs seem complementary in their ability to predict future costs. However, this study shows that the combination of DCGs and PCGs still leaves room for cream skimming.

  7. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model

    PubMed Central

    Saul, Katherine R.; Hu, Xiao; Goehler, Craig M.; Vidt, Meghan E.; Daly, Melissa; Velisar, Anca; Murray, Wendy M.

    2014-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single- and multi-joint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms. PMID:24995410

  8. The effect of capturing the correct turbulence dissipation rate in BHR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwarzkopf, John Dennis; Ristorcelli, Raymond

    In this manuscript, we discuss the shortcoming of a quasi-equilibrium assumption made in the BHR closure model. Turbulence closure models generally assume fully developed turbulence, which is not applicable to 1) non-equilibrium turbulence (e.g. change in mean pressure gradient) or 2) laminar-turbulence transition flows. Based on DNS data, we show that the current BHR dissipation equation [modeled based on the fully developed turbulence phenomenology] does not capture important features of nonequilibrium flows. To demonstrate our thesis, we use the BHR equations to predict a non-equilibrium flow both with the BHR dissipation and the dissipation from DNS. We find that the prediction can be substantially improved, both qualitatively and quantitatively, with the correct dissipation rate. We conclude that a new set of nonequilibrium phenomenological assumptions must be used to develop a new model equation for the dissipation to accurately predict the turbulence time scale used by other models.

  9. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model.

    PubMed

    Saul, Katherine R; Hu, Xiao; Goehler, Craig M; Vidt, Meghan E; Daly, Melissa; Velisar, Anca; Murray, Wendy M

    2015-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single- and multi-joint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms.

  10. Underestimated AMOC Variability and Implications for AMV and Predictability in CMIP Models

    NASA Astrophysics Data System (ADS)

    Yan, Xiaoqin; Zhang, Rong; Knutson, Thomas R.

    2018-05-01

    The Atlantic Meridional Overturning Circulation (AMOC) has profound impacts on various climate phenomena. Using both observations and simulations from the Coupled Model Intercomparison Project Phases 3 and 5, here we show that most models underestimate the amplitude of low-frequency AMOC variability. We further show that stronger low-frequency AMOC variability leads to stronger linkages between the AMOC and key variables associated with the Atlantic multidecadal variability (AMV), and between the subpolar AMV signal and northern hemisphere surface air temperature. Low-frequency extratropical northern hemisphere surface air temperature variability might increase with the amplitude of low-frequency AMOC variability. Atlantic decadal predictability is much higher in models with stronger low-frequency AMOC variability and much lower in models with weaker or without AMOC variability. Our results suggest that simulating realistic low-frequency AMOC variability is very important, both for simulating realistic linkages between AMOC and AMV-related variables and for achieving substantially higher Atlantic decadal predictability.

  11. Subtropical high predictability establishes a promising way for monsoon and tropical storm predictions.

    PubMed

    Wang, Bin; Xiang, Baoqiang; Lee, June-Yi

    2013-02-19

    Monsoon rainfall and tropical storms (TSs) impose great impacts on society, yet their seasonal predictions are far from successful. The western Pacific Subtropical High (WPSH) is a prime circulation system affecting East Asian summer monsoon (EASM) and western North Pacific TS activities, but the sources of its variability and predictability have not been established. Here we show that the WPSH variation faithfully represents fluctuations of EASM strength (r = -0.92), the total TS days over the subtropical western North Pacific (r = -0.81), and the total number of TSs impacting East Asian coasts (r = -0.76) during 1979-2009. Our numerical experiment results establish that the WPSH variation is primarily controlled by central Pacific cooling/warming and a positive atmosphere-ocean feedback between the WPSH and the Indo-Pacific warm pool oceans. With a physically based empirical model and the state-of-the-art dynamical models, we demonstrate that the WPSH is highly predictable; this predictability creates a promising way for prediction of monsoon and TS. Predictions using the WPSH predictability not only yield substantially improved skill in predicting EASM rainfall, but also enable skillful prediction of the TS activities that current dynamical models fail to predict. Our findings reveal that positive WPSH-ocean interaction can provide a source of climate predictability and highlight the importance of subtropical dynamics in understanding monsoon and TS predictability.

  12. Subtropical High predictability establishes a promising way for monsoon and tropical storm predictions

    PubMed Central

    Wang, Bin; Xiang, Baoqiang; Lee, June-Yi

    2013-01-01

    Monsoon rainfall and tropical storms (TSs) impose great impacts on society, yet their seasonal predictions are far from successful. The western Pacific Subtropical High (WPSH) is a prime circulation system affecting East Asian summer monsoon (EASM) and western North Pacific TS activities, but the sources of its variability and predictability have not been established. Here we show that the WPSH variation faithfully represents fluctuations of EASM strength (r = –0.92), the total TS days over the subtropical western North Pacific (r = –0.81), and the total number of TSs impacting East Asian coasts (r = –0.76) during 1979–2009. Our numerical experiment results establish that the WPSH variation is primarily controlled by central Pacific cooling/warming and a positive atmosphere-ocean feedback between the WPSH and the Indo-Pacific warm pool oceans. With a physically based empirical model and the state-of-the-art dynamical models, we demonstrate that the WPSH is highly predictable; this predictability creates a promising way for prediction of monsoon and TS. Predictions using the WPSH predictability not only yield substantially improved skill in predicting EASM rainfall, but also enable skillful prediction of the TS activities that current dynamical models fail to predict. Our findings reveal that positive WPSH–ocean interaction can provide a source of climate predictability and highlight the importance of subtropical dynamics in understanding monsoon and TS predictability. PMID:23341624

  13. Limits to Cloud Susceptibility

    NASA Technical Reports Server (NTRS)

    Coakley, James A., Jr.

    2002-01-01

    1-kilometer AVHRR observations of ship tracks in low-level clouds off the west coast of the U.S. were used to determine limits for the degree to which clouds might be altered by increases in anthropogenic aerosols. Hundreds of tracks were analyzed to determine whether the changes in droplet radii, visible optical depths, and cloud top altitudes that result from the influx of particles from underlying ships were consistent with expectations based on simple models for the indirect effect of aerosols. The models predict substantial increases in sunlight reflected by polluted clouds due to the increases in droplet numbers and cloud liquid water that result from the elevated particle concentrations. Contrary to the model predictions, the analysis of ship tracks revealed a 15-20% reduction in liquid water for the polluted clouds. Studies performed with a large-eddy cloud simulation model suggested that the shortfall in cloud liquid water found in the satellite observations might be attributed to the restriction that the 1-kilometer pixels be completely covered by either polluted or unpolluted cloud. The simulation model revealed that a substantial fraction of the indirect effect is caused by a horizontal redistribution of cloud water in the polluted clouds. Cloud-free gaps in polluted clouds fill in with cloud water while the cloud-free gaps in the surrounding unpolluted clouds remain cloud-free. By limiting the analysis to only overcast pixels, the current study failed to account for the gap-filling predicted by the simulation model. This finding and an analysis of the spatial variability of marine stratus suggest new ways to analyze ship tracks to determine the limit to which particle pollution will alter the amount of sunlight reflected by clouds.

  14. Conformational dynamics of a crystalline protein from microsecond-scale molecular dynamics simulations and diffuse X-ray scattering.

    PubMed

    Wall, Michael E; Van Benschoten, Andrew H; Sauter, Nicholas K; Adams, Paul D; Fraser, James S; Terwilliger, Thomas C

    2014-12-16

    X-ray diffraction from protein crystals includes both sharply peaked Bragg reflections and diffuse intensity between the peaks. The information in Bragg scattering is limited to what is available in the mean electron density. The diffuse scattering arises from correlations in the electron density variations and therefore contains information about collective motions in proteins. Previous studies using molecular-dynamics (MD) simulations to model diffuse scattering have been hindered by insufficient sampling of the conformational ensemble. To overcome this issue, we have performed a 1.1-μs MD simulation of crystalline staphylococcal nuclease, providing 100-fold more sampling than previous studies. This simulation enables reproducible calculations of the diffuse intensity and predicts functionally important motions, including transitions among at least eight metastable states with different active-site geometries. The total diffuse intensity calculated using the MD model is highly correlated with the experimental data. In particular, there is excellent agreement for the isotropic component of the diffuse intensity, and substantial but weaker agreement for the anisotropic component. Decomposition of the MD model into protein and solvent components indicates that protein-solvent interactions contribute substantially to the overall diffuse intensity. We conclude that diffuse scattering can be used to validate predictions from MD simulations and can provide information to improve MD models of protein motions.

  15. A numerical study of wave-current interaction through surface and bottom stresses: Coastal ocean response to Hurricane Fran of 1996

    NASA Astrophysics Data System (ADS)

    Xie, L.; Pietrafesa, L. J.; Wu, K.

    2003-02-01

    A three-dimensional wave-current coupled modeling system is used to examine the influence of waves on coastal currents and sea level. This coupled modeling system consists of the wave model WAM (Cycle 4) and the Princeton Ocean Model (POM). The results from this study show that it is important to incorporate surface wave effects into coastal storm surge and circulation models. Specifically, we find that (1) storm surge models without coupled surface waves generally underestimate not only the peak surge but also the coastal water level drop, which can also cause substantial impact on the coastal environment, (2) introducing the wave-induced surface stress effect into storm surge models can significantly improve storm surge prediction, (3) incorporating wave-induced bottom stress into the coupled wave-current model further improves storm surge prediction, and (4) calibration of the wave module according to minimum error in significant wave height does not necessarily result in an optimum wave module in a wave-current coupled system for current and storm surge prediction.

  16. Molecular Modeling of Lipid Membrane Curvature Induction by a Peptide: More than Simply Shape

    PubMed Central

    Sodt, Alexander J.; Pastor, Richard W.

    2014-01-01

    Molecular dynamics simulations of an amphipathic helix embedded in a lipid bilayer indicate that it will induce substantial positive curvature (e.g., a tube of diameter 20 nm at 16% surface coverage). The induction is twice that of a continuum model prediction that only considers the shape of the inclusion. The discrepancy is explained in terms of the additional presence of specific interactions described only by the molecular model. The conclusion that molecular shape alone is insufficient to quantitatively model curvature is supported by contrasting molecular and continuum models of lipids with large and small headgroups (choline and ethanolamine, respectively), and of the removal of a lipid tail (modeling a lyso-lipid). For the molecular model, curvature propensity is analyzed by computing the derivative of the free energy with respect to bending. The continuum model predicts that the inclusion will soften the bilayer near the headgroup region, an effect that may weaken curvature induction. The all-atom predictions are consistent with experimental observations of the degree of tubulation by amphipathic helices and variation of the free energy of binding to liposomes. PMID:24806928

  17. Evaluating the performance of infectious disease forecasts: A comparison of climate-driven and seasonal dengue forecasts for Mexico.

    PubMed

    Johansson, Michael A; Reich, Nicholas G; Hota, Aditi; Brownstein, John S; Santillana, Mauricio

    2016-09-26

    Dengue viruses, which infect millions of people per year worldwide, cause large epidemics that strain healthcare systems. Despite diverse efforts to develop forecasting tools including autoregressive time series, climate-driven statistical, and mechanistic biological models, little work has been done to understand the contribution of different components to improved prediction. We developed a framework to assess and compare dengue forecasts produced from different types of models and evaluated the performance of seasonal autoregressive models with and without climate variables for forecasting dengue incidence in Mexico. Climate data did not significantly improve the predictive power of seasonal autoregressive models. Short-term and seasonal autocorrelation were key to improving short-term and long-term forecasts, respectively. Seasonal autoregressive models captured a substantial amount of dengue variability, but better models are needed to improve dengue forecasting. This framework contributes to the sparse literature of infectious disease prediction model evaluation, using state-of-the-art validation techniques such as out-of-sample testing and comparison to an appropriate reference model.
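
    A condensed sketch of such a comparison, assuming statsmodels' SARIMAX with and without an exogenous climate covariate and illustrative (S)ARIMA orders, is given below; the monthly series are synthetic stand-ins, not the Mexico data.

      # Hedged sketch: seasonal autoregressive forecasts of dengue incidence with
      # and without a climate covariate, scored out of sample. Synthetic data.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(2)
      idx = pd.date_range("2000-01-01", periods=180, freq="MS")
      climate = pd.Series(25 + 3 * np.sin(2 * np.pi * idx.month / 12)
                          + rng.normal(0, 0.5, 180), index=idx)
      dengue = np.exp(2 + 0.1 * climate + rng.normal(0, 0.3, 180))

      train, test = slice(None, 156), slice(156, None)

      for label, exog in [("seasonal AR only", None), ("with climate covariate", climate)]:
          exog_train = None if exog is None else exog.iloc[train].to_frame()
          exog_test = None if exog is None else exog.iloc[test].to_frame()
          fit = SARIMAX(dengue.iloc[train], exog=exog_train,
                        order=(1, 0, 0), seasonal_order=(1, 0, 0, 12)).fit(disp=False)
          forecast = fit.forecast(steps=24, exog=exog_test)
          mae = np.mean(np.abs(forecast.values - dengue.iloc[test].values))
          print(label, round(mae, 2))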

  18. Evaluating the performance of infectious disease forecasts: A comparison of climate-driven and seasonal dengue forecasts for Mexico

    PubMed Central

    Johansson, Michael A.; Reich, Nicholas G.; Hota, Aditi; Brownstein, John S.; Santillana, Mauricio

    2016-01-01

    Dengue viruses, which infect millions of people per year worldwide, cause large epidemics that strain healthcare systems. Despite diverse efforts to develop forecasting tools including autoregressive time series, climate-driven statistical, and mechanistic biological models, little work has been done to understand the contribution of different components to improved prediction. We developed a framework to assess and compare dengue forecasts produced from different types of models and evaluated the performance of seasonal autoregressive models with and without climate variables for forecasting dengue incidence in Mexico. Climate data did not significantly improve the predictive power of seasonal autoregressive models. Short-term and seasonal autocorrelation were key to improving short-term and long-term forecasts, respectively. Seasonal autoregressive models captured a substantial amount of dengue variability, but better models are needed to improve dengue forecasting. This framework contributes to the sparse literature of infectious disease prediction model evaluation, using state-of-the-art validation techniques such as out-of-sample testing and comparison to an appropriate reference model. PMID:27665707

  19. The nonlinear dynamics of a spacecraft coupled to the vibration of a contained fluid

    NASA Technical Reports Server (NTRS)

    Peterson, Lee D.; Crawley, Edward F.; Hansman, R. John

    1988-01-01

    The dynamics of a linear spacecraft mode coupled to a nonlinear low gravity slosh of a fluid in a cylindrical tank is investigated. Coupled, nonlinear equations of motion for the fluid-spacecraft dynamics are derived through an assumed mode Lagrangian method. Unlike linear fluid slosh models, this nonlinear slosh model retains two fundamental slosh modes and three secondary modes. An approximate perturbation solution of the equations of motion indicates that the nonlinear coupled system response involves fluid-spacecraft modal resonances not predicted by either a linear, or a nonlinear, uncoupled slosh analysis. Experimental results substantiate the analytical predictions.

  20. Prediction of indoor radon/thoron concentration in a model room from exhalation rates of building materials for different ventilation rates

    NASA Astrophysics Data System (ADS)

    Kumar, Manish; Sharma, Navjeet; Sarin, Amit

    2018-05-01

    Studies have confirmed that elevated levels of radon/thoron in human environments can substantially increase the risk of lung cancer in the general population. Building materials are the second largest contributor to indoor radon/thoron after the soil and bedrock beneath dwellings. In the present investigation, the exhalation rates of radon/thoron from different building material samples have been analysed using an active technique. Radon/thoron concentrations in a model room have been predicted based on the exhalation rates from the walls, floor and roof. The indoor concentrations show significant variations depending upon the ventilation rate and the type of building materials used.
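
    A common way to make such a prediction, assuming a single-zone steady-state mass balance (concentration = exhalation x surface area / (volume x (decay constant + ventilation rate))), is sketched below; the room geometry and exhalation rates are illustrative, not the measured values.

      # Hedged sketch: single-zone steady-state mass balance for a model room,
      # C = E * S / (V * (lambda_decay + lambda_vent)). Illustrative inputs only.
      import math

      def indoor_concentration(exhalation, area, volume, vent_rate, half_life_h):
          """Steady-state concentration (Bq/m3) from surface exhalation (Bq/m2/h)."""
          lam_decay = math.log(2) / half_life_h      # radioactive decay constant (1/h)
          return exhalation * area / (volume * (lam_decay + vent_rate))

      surface = 4 * (4.0 * 2.8) + 2 * (4.0 * 4.0)    # walls + floor + roof of a 4 x 4 x 2.8 m room
      volume = 4.0 * 4.0 * 2.8

      for vent in (0.1, 0.5, 1.0):                   # air changes per hour
          radon = indoor_concentration(0.5, surface, volume, vent, 91.8)          # Rn-222, ~3.8 d
          thoron = indoor_concentration(1.0, surface, volume, vent, 55.6 / 3600)  # Rn-220, ~55.6 s
          print(f"ventilation {vent}/h: radon {radon:.1f} Bq/m3, thoron {thoron:.2f} Bq/m3")

    Because thoron's decay constant dwarfs any realistic ventilation rate, the sketch also illustrates why ventilation changes affect predicted radon much more strongly than predicted thoron.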

  1. Extended charge banking model of dual path shocks for implantable cardioverter defibrillators

    PubMed Central

    Dosdall, Derek J; Sweeney, James D

    2008-01-01

    Background Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations without a practical model to aid in improving dual path defibrillation techniques. Methods The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical mass criteria to predict the relative efficacy of single and dual path defibrillation shocks. Results The new model reproduced the results from several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Discussion Through this approach the Extended Charge Banking Model predictions may be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool to help in further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters. PMID:18673561
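
    A compact sketch of the underlying resistor-capacitor picture, assuming each small region of myocardium charges toward the local shock field and a critical fraction of regions must be captured, is given below. The time constant, threshold, field values, and critical mass fraction are illustrative placeholders rather than the published model's parameters.

      # Hedged sketch of the RC-circuit picture behind charge-banking models:
      # each region charges toward the local shock field; a shock succeeds when
      # enough regions exceed a capture threshold. All constants are illustrative.
      import math

      TAU_MS = 3.0           # assumed membrane time constant (ms)
      THRESHOLD = 5.0        # assumed response needed to capture a region
      CRITICAL_MASS = 0.95   # assumed fraction of regions that must be captured

      def rc_response(field, duration_ms, tau=TAU_MS):
          """Peak response of an RC element to a rectangular pulse."""
          return field * (1.0 - math.exp(-duration_ms / tau))

      def shock_succeeds(field_per_region, duration_ms):
          captured = sum(rc_response(f, duration_ms) >= THRESHOLD for f in field_per_region)
          return captured / len(field_per_region) >= CRITICAL_MASS

      # Hypothetical field-gradient distributions for single- and dual-path shocks:
      single_path = [3.0, 4.0, 6.0, 9.0, 15.0, 25.0] * 20   # very uneven field
      dual_path = [6.0, 7.0, 8.0, 9.0, 10.0, 12.0] * 20     # more uniform field

      for name, fields in [("single path", single_path), ("dual path", dual_path)]:
          print(name, "succeeds with an 8 ms pulse:", shock_succeeds(fields, 8.0))

    The example reflects the intuition that a more uniform field, as produced by dual current paths, captures the critical mass of tissue at lower overall shock strength.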

  2. The extension of total gain (TG) statistic in survival models: properties and applications.

    PubMed

    Choodari-Oskooei, Babak; Royston, Patrick; Parmar, Mahesh K B

    2015-07-01

    The results of multivariable regression models are usually summarized in the form of parameter estimates for the covariates, goodness-of-fit statistics, and the relevant p-values. These statistics do not inform us about whether covariate information will lead to any substantial improvement in prediction. Predictive ability measures can be used for this purpose since they provide important information about the practical significance of prognostic factors. R²-type indices are the most familiar forms of such measures in survival models, but they all have limitations and none is widely used. In this paper, we extend the total gain (TG) measure, proposed for a logistic regression model, to survival models and explore its properties using simulations and real data. TG is based on the binary regression quantile plot, otherwise known as the predictiveness curve. Standardised TG ranges from 0 (no explanatory power) to 1 ('perfect' explanatory power). The results of our simulations show that unlike many of the other R²-type predictive ability measures, TG is independent of random censoring. It increases as the effect of a covariate increases and can be applied to different types of survival models, including models with time-dependent covariate effects. We also apply TG to quantify the predictive ability of multivariable prognostic models developed in several disease areas. Overall, TG performs well in our simulation studies and can be recommended as a measure to quantify the predictive ability in survival models.
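
    For binary outcomes, one common formulation of this measure is the area between the predictiveness curve (the predicted risks sorted in increasing order) and the horizontal line at the overall event probability, scaled by its maximum possible value 2p(1-p). The sketch below implements that formulation as an assumption; the censoring-aware extension for survival models described above requires additional machinery.

      # Hedged sketch: a standardized total-gain style measure for binary outcomes,
      # computed from predicted risks via the predictiveness curve. This is one
      # common formulation, not the paper's survival-model extension.
      import numpy as np

      def standardized_total_gain(predicted_risk):
          risks = np.sort(np.asarray(predicted_risk, dtype=float))
          p = risks.mean()                   # overall event probability implied by the model
          tg = np.mean(np.abs(risks - p))    # area between predictiveness curve and prevalence line
          return tg / (2.0 * p * (1.0 - p))  # 0 = no explanatory power, 1 = 'perfect'

      print(standardized_total_gain([0.1] * 100))               # ~0: everyone gets the same risk
      print(standardized_total_gain([0.05] * 80 + [0.9] * 20))  # closer to 1: risks well separated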

  3. Clustering of European winter storms: A multi-model perspective

    NASA Astrophysics Data System (ADS)

    Renggli, Dominik; Buettner, Annemarie; Scherb, Anke; Straub, Daniel; Zimmerli, Peter

    2016-04-01

    The storm series over Europe in 1990 (Daria, Vivian, Wiebke, Herta) and 1999 (Anatol, Lothar, Martin) are very well known. Such clusters of severe events strongly affect the seasonally accumulated damage statistics. The (re)insurance industry has quantified clustering by using distribution assumptions deduced from the historical storm activity of the last 30 to 40 years. The use of storm series simulated by climate models has only started recently. Climate model runs can potentially represent 100s to 1000s of years, allowing a more detailed quantification of clustering than the history of the last few decades. However, it is unknown how sensitive the representation of clustering is to systematic biases. Using a multi-model ensemble allows quantifying that uncertainty. This work uses CMIP5 decadal ensemble hindcasts to study clustering of European winter storms from a multi-model perspective. An objective identification algorithm extracts winter storms (September to April) in the gridded 6-hourly wind data. Since the skill of European storm predictions is very limited on the decadal scale, the different hindcast runs are interpreted as independent realizations. As a consequence, the available hindcast ensemble represents several 1000 simulated storm seasons. The seasonal clustering of winter storms is quantified using the dispersion coefficient. The benchmark for the decadal prediction models is the 20th Century Reanalysis. The decadal prediction models are able to reproduce typical features of the clustering characteristics observed in the reanalysis data. Clustering occurs in all analyzed models over the North Atlantic and European region, in particular over Great Britain and Scandinavia as well as over Iberia (i.e. the exit regions of the North Atlantic storm track). Clustering is generally weaker in the models compared to reanalysis, although the differences between different models are substantial. In contrast to existing studies, clustering is driven by weak and moderate events, and not by extreme storms. Thus, the decision which climate model to use to quantify clustering can have a substantial impact on the risk assessment in the (re)insurance business.
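
    The dispersion coefficient used in this context is typically the over-dispersion of seasonal storm counts relative to a Poisson process, psi = variance/mean - 1, with values above zero indicating clustering; a brief sketch, using made-up count series rather than model output, is given below.

      # Hedged sketch: dispersion coefficient of seasonal storm counts,
      # psi = var/mean - 1 (0 for a Poisson process, > 0 when storms cluster).
      import numpy as np

      def dispersion_coefficient(seasonal_counts):
          counts = np.asarray(seasonal_counts, dtype=float)
          return counts.var(ddof=1) / counts.mean() - 1.0

      poisson_like = np.random.default_rng(3).poisson(4, size=1000)   # no clustering
      clustered = [1, 0, 9, 2, 8, 1, 0, 7, 3, 10, 1, 0, 8, 2, 9]      # quiet and active seasons alternate

      print(round(dispersion_coefficient(poisson_like), 2))   # close to 0
      print(round(dispersion_coefficient(clustered), 2))      # clearly positive: clustering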

  4. The role of thermal and lubricant boundary layers in the transient thermal analysis of spur gears

    NASA Technical Reports Server (NTRS)

    El-Bayoumy, L. E.; Akin, L. S.; Townsend, D. P.; Choy, F. C.

    1989-01-01

    An improved convection heat-transfer model has been developed for the prediction of the transient tooth surface temperature of spur gears. The dissipative capacity of the lubricating fluid is shown to be limited by the extent of the thermal boundary layer. This phenomenon can be significant in determining the thermal limit of gears accelerating to the point where gear scoring occurs. Steady-state temperature prediction is improved considerably through the use of a variable integration time step that substantially reduces computer time. Computer-generated plots of temperature contours enable the user to animate the propagation of the thermal wave as the gears come into and out of contact, thus contributing to a better understanding of this complex problem. This model has a much better capability of predicting gear-tooth temperatures than previous models.

  5. A Model for Oil-Gas Pipelines Cost Prediction Based on a Data Mining Process

    NASA Astrophysics Data System (ADS)

    Batzias, Fragiskos A.; Spanidis, Phillip-Mark P.

    2009-08-01

    This paper addresses the problems associated with the cost estimation of oil/gas pipelines during the elaboration of feasibility assessments. Techno-economic parameters, i.e., cost, length and diameter, are critical for such studies at the preliminary design stage. A methodology for the development of a cost prediction model based on a Data Mining (DM) process is proposed. The design and implementation of a Knowledge Base (KB), maintaining data collected from various disciplines of the pipeline industry, are presented. The formulation of a cost prediction equation is demonstrated by applying multiple regression analysis using data sets extracted from the KB. Following the proposed methodology, a learning context is inductively developed as background pipeline data are acquired, grouped and stored in the KB and, through a linear regression model, provide statistically substantial results that are useful for project managers and decision makers.
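
    The regression step can be illustrated with a small sketch: an ordinary least squares fit of cost on length and diameter over records of the kind that would be extracted from the KB. The project table below is hypothetical.

      # Hedged sketch: multiple regression of pipeline cost on length and diameter,
      # as in the cost-prediction step described above. The records are hypothetical.
      import numpy as np
      import statsmodels.api as sm

      length = np.array([120, 300, 80, 450, 200, 650, 90, 380])     # km
      diameter = np.array([24, 36, 16, 42, 30, 48, 20, 36])         # inches
      cost = np.array([95, 310, 48, 520, 180, 780, 60, 400])        # million USD

      X = sm.add_constant(np.column_stack([length, diameter]))
      fit = sm.OLS(cost, X).fit()
      print(fit.params)      # intercept, marginal cost per km, marginal cost per inch
      print(fit.rsquared)

      # Preliminary estimate for a new 250 km, 32-inch pipeline:
      new_project = sm.add_constant(np.array([[250.0, 32.0]]), has_constant="add")
      print(fit.predict(new_project))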

  6. Dermal uptake of phthalates from clothing: Comparison of model to human participant results.

    PubMed

    Morrison, G C; Weschler, C J; Bekö, G

    2017-05-01

    In this research, we extend a model of transdermal uptake of phthalates to include a layer of clothing. When compared with experimental results, this model better estimates dermal uptake of diethylphthalate and di-n-butylphthalate (DnBP) than a previous model. The model predictions are consistent with the observation that previously exposed clothing can increase dermal uptake over that observed in bare-skin participants for the same exposure air concentrations. The model predicts that dermal uptake from clothing of DnBP is a substantial fraction of total uptake from all sources of exposure. For compounds that have high dermal permeability coefficients, dermal uptake is increased for (i) thinner clothing, (ii) a narrower gap between clothing and skin, and (iii) longer time intervals between laundering and wearing. Enhanced dermal uptake is most pronounced for compounds with clothing-air partition coefficients between 10⁴ and 10⁷. In the absence of direct measurements of cotton cloth-air partition coefficients, dermal exposure may be predicted using equilibrium data for compounds in equilibrium with cellulose and water, in combination with computational methods of predicting partition coefficients. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Ecosystem resilience despite large-scale altered hydroclimatic conditions

    Treesearch

    G. E. Ponce Campos; M. S. Moran; A. Huete; Y. Zhang; C. Bresloff; T.E. Huxman; D. Eamus; D. D. Bosch; A. R. Buda; S. A. Gunter; T. Heartsill Scalley; S. G. Kitchen; M. P. McClaran; W. H. McNab; D. S. Montoya; J. A. Morgan; D. P. C. Peters; E. J. Sadler; M. S. Seyfried; P. J. Starks

    2013-01-01

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological model for many regions. Large-scale, warm droughts have recently occurred in North America, Africa, Europe, Amazonia and Australia, resulting in major effects on terrestrial ecosystems, carbon balance and food...

  8. The Research of the Personality Qualities of Future Educational Psychologists

    ERIC Educational Resources Information Center

    Dolgova, V. I.; Salamatov, A. A.; Potapova, M. V.; Yakovleva, N. O.

    2016-01-01

    In this article, the authors substantiate the existence of the personality qualities of future educational psychologists (PQFEP) that are, in fact, a sum of knowledge, skills, abilities, socially required qualities of personality allowing the psychologist to solve problems in all the fields of professional activities. A model of PQFEP predicts the…

  9. The Factor Content of Bilateral Trade: An Empirical Test.

    ERIC Educational Resources Information Center

    Choi, Yong-Seok; Krishna, Pravin

    2004-01-01

    The factor proportions model of international trade is one of the most influential theories in international economics. Its central standing in this field has appropriately prompted, particularly recently, intense empirical scrutiny. A substantial and growing body of empirical work has tested the predictions of the theory on the net factor content…

  10. Combining Satellite Measurements and Numerical Flood Prediction Models to Save Lives and Property from Flooding

    NASA Astrophysics Data System (ADS)

    Saleh, F.; Garambois, P. A.; Biancamaria, S.

    2017-12-01

    Floods are considered the major natural threats to human societies across all continents. Consequences of floods in highly populated areas are more dramatic with losses of human lives and substantial property damage. This risk is projected to increase with the effects of climate change, particularly sea-level rise, increasing storm frequencies and intensities and increasing population and economic assets in such urban watersheds. Despite the advances in computational resources and modeling techniques, significant gaps exist in predicting complex processes and accurately representing the initial state of the system. Improving flood prediction models and data assimilation chains through satellite has become an absolute priority to produce accurate flood forecasts with sufficient lead times. The overarching goal of this work is to assess the benefits of the Surface Water Ocean Topography SWOT satellite data from a flood prediction perspective. The near real time methodology is based on combining satellite data from a simulator that mimics the future SWOT data, numerical models, high resolution elevation data and real-time local measurement in the New York/New Jersey area.

  11. On the predictability of land surface fluxes from meteorological variables

    NASA Astrophysics Data System (ADS)

    Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.

    2018-01-01

    Previous research has shown that land surface models (LSMs) are performing poorly when compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data, and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations on LSM performance. The process also identifies key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and provide a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs are broadly performing so much worse than simple empirical models.
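
    One such empirical benchmark can be sketched as a plain regression from meteorological forcing to a surface flux, evaluated out of sample; the variable names below mimic FLUXNET-style forcing but the data are synthetic placeholders.

      # Hedged sketch: an empirical benchmark predicting a surface flux from
      # meteorological forcing alone (no soil or vegetation information),
      # scored by cross-validated R^2. Synthetic data only.
      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      n = 2000
      met = pd.DataFrame({
          "SWdown": rng.uniform(0, 1000, n),   # shortwave radiation (W/m2)
          "Tair": rng.uniform(270, 310, n),    # air temperature (K)
          "RH": rng.uniform(20, 100, n),       # relative humidity (%)
          "Wind": rng.uniform(0, 10, n),       # wind speed (m/s)
      })
      latent_heat = (0.3 * met["SWdown"] + 2.0 * (met["Tair"] - 273.15)
                     - 0.5 * met["RH"] + rng.normal(0, 20, n))

      scores = cross_val_score(LinearRegression(), met, latent_heat, cv=5, scoring="r2")
      print(scores.mean())   # a lower bound that a physically based LSM should beat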

  12. Combination of multiple model population analysis and mid-infrared technology for the estimation of copper content in Tegillarca granosa

    NASA Astrophysics Data System (ADS)

    Hu, Meng-Han; Chen, Xiao-Jing; Ye, Peng-Chao; Chen, Xi; Shi, Yi-Jian; Zhai, Guang-Tao; Yang, Xiao-Kang

    2016-11-01

    The aim of this study was to use mid-infrared spectroscopy coupled with multiple model population analysis based on Monte Carlo-uninformative variable elimination (MC-UVE) for rapidly estimating the copper content of Tegillarca granosa. Copper-specific wavelengths were first extracted from the whole spectra, and subsequently, a least square-support vector machine was used to develop the prediction models. Compared with the prediction model based on full wavelengths, models that used 100 multiple MC-UVE selected wavelengths without and with bin operation showed comparable performances, with Rp of 0.97 (root mean square error of prediction, RMSEP = 14.60 mg/kg) and 0.94 (RMSEP = 20.85 mg/kg) versus 0.96 (RMSEP = 17.27 mg/kg), as well as ratios of percent deviation (number of wavelengths) of 2.77 (407) and 1.84 (45) versus 2.32 (1762). The obtained results demonstrated that the mid-infrared technique could be used for estimating copper content in T. granosa. In addition, the proposed multiple model population analysis can eliminate uninformative, weakly informative and interfering wavelengths effectively, which substantially reduces model complexity and computation time.
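
    The MC-UVE idea itself is compact: refit a regression on many random subsamples and rank wavelengths by the stability of their coefficients (mean divided by standard deviation). The sketch below uses plain linear regression in place of the least square-support vector machine and synthetic spectra, so it illustrates the selection principle rather than the study's pipeline.

      # Hedged sketch of Monte Carlo-uninformative variable elimination (MC-UVE):
      # rank wavelengths by coefficient stability across random subsamples.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(5)
      n_samples, n_wavelengths = 100, 40
      spectra = rng.normal(size=(n_samples, n_wavelengths))
      copper = spectra[:, 5] * 3 + spectra[:, 20] * 2 + rng.normal(0, 0.5, n_samples)

      n_runs, coefs = 500, []
      for _ in range(n_runs):
          idx = rng.choice(n_samples, size=int(0.8 * n_samples), replace=False)
          coefs.append(LinearRegression().fit(spectra[idx], copper[idx]).coef_)
      coefs = np.array(coefs)

      reliability = np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)   # stability index per wavelength
      print(np.argsort(reliability)[::-1][:5])   # informative wavelengths (5 and 20) should rank first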

  13. A Coarse-Grained Biophysical Model of E. coli and Its Application to Perturbation of the rRNA Operon Copy Number

    PubMed Central

    Tadmor, Arbel D.; Tlusty, Tsvi

    2008-01-01

    We propose a biophysical model of Escherichia coli that predicts growth rate and an effective cellular composition from an effective, coarse-grained representation of its genome. We assume that E. coli is in a state of balanced exponential steady-state growth, growing in a temporally and spatially constant environment, rich in resources. We apply this model to a series of past measurements, where the growth rate and rRNA-to-protein ratio have been measured for seven E. coli strains with an rRNA operon copy number ranging from one to seven (the wild-type copy number). These experiments show that growth rate markedly decreases for strains with fewer than six copies. Using the model, we were able to reproduce these measurements. We show that the model that best fits these data suggests that the volume fraction of macromolecules inside E. coli is not fixed when the rRNA operon copy number is varied. Moreover, the model predicts that increasing the copy number beyond seven results in a cytoplasm densely packed with ribosomes and proteins. Assuming that under such overcrowded conditions prolonged diffusion times tend to weaken binding affinities, the model predicts that growth rate will not increase substantially beyond the wild-type growth rate, as indicated by other experiments. Our model therefore suggests that changing the rRNA operon copy number of wild-type E. coli cells growing in a constant rich environment does not substantially increase their growth rate. Other observations regarding strains with an altered rRNA operon copy number, such as nucleoid compaction and the rRNA operon feedback response, appear to be qualitatively consistent with this model. In addition, we discuss possible design principles suggested by the model and propose further experiments to test its validity. PMID:18437222

  14. Temporal and geographical external validation study and extension of the Mayo Clinic prediction model to predict eGFR in the younger population of Swiss ADPKD patients.

    PubMed

    Girardat-Rotar, Laura; Braun, Julia; Puhan, Milo A; Abraham, Alison G; Serra, Andreas L

    2017-07-17

    Prediction models in autosomal dominant polycystic kidney disease (ADPKD) are useful in clinical settings to identify patients with a greater risk of rapid disease progression, in whom a treatment may have more benefits than harms. Mayo Clinic investigators developed a risk prediction tool for ADPKD patients using a single kidney volume value. Our aim was to perform an independent geographical and temporal external validation as well as evaluate the potential for improving the predictive performance by including additional information on total kidney volume. We used data from the on-going Swiss ADPKD study from 2006 to 2016. The main analysis included a sample size of 214 patients with Typical ADPKD (Class 1). We evaluated the Mayo Clinic model performance (calibration and discrimination) in our external sample and assessed whether predictive performance could be improved through the addition of subsequent kidney volume measurements beyond the baseline assessment. The calibration of both versions of the Mayo Clinic prediction model, using continuous height-adjusted total kidney volume (HtTKV) and using risk subclasses, was good, with R² of 78% and 70%, respectively. Accuracy was also good, with 91.5% and 88.7% of predicted values within 30% of the observed values, respectively. Additional information regarding kidney volume did not substantially improve the model performance. The Mayo Clinic prediction models are generalizable to other clinical settings and provide an accurate tool based on available predictors to identify patients at high risk for rapid disease progression.

  15. Risk Prediction Models for Acute Kidney Injury in Critically Ill Patients: Opus in Progressu.

    PubMed

    Neyra, Javier A; Leaf, David E

    2018-05-31

    Acute kidney injury (AKI) is a complex systemic syndrome associated with high morbidity and mortality. Among critically ill patients admitted to intensive care units (ICUs), the incidence of AKI is as high as 50% and is associated with dismal outcomes. Thus, the development and validation of clinical risk prediction tools that accurately identify patients at high risk for AKI in the ICU is of paramount importance. We provide a comprehensive review of 3 clinical risk prediction tools that have been developed for incident AKI occurring in the first few hours or days following admission to the ICU. We found substantial heterogeneity among the clinical variables that were examined and included as significant predictors of AKI in the final models. The area under the receiver operating characteristic curves was ∼0.8 for all 3 models, indicating satisfactory model performance, though positive predictive values ranged from only 23 to 38%. Hence, further research is needed to develop more accurate and reproducible clinical risk prediction tools. Strategies for improved assessment of AKI susceptibility in the ICU include the incorporation of dynamic (time-varying) clinical parameters, as well as biomarker, functional, imaging, and genomic data. © 2018 S. Karger AG, Basel.

  16. Predictive models of safety based on audit findings: Part 2: Measurement of model validity.

    PubMed

    Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor

    2013-07-01

    Part 1 of this study sequence developed a human factors/ergonomics (HF/E) based classification system (termed HFACS-MA) for safety audit findings and proved its measurement reliability. In Part 2, we used the human error categories of HFACS-MA as predictors of future safety performance. Audit records and monthly safety incident reports from two airlines submitted to their regulatory authority were available for analysis, covering over 6.5 years. Two participants derived consensus results of HF/E errors from the audit reports using HFACS-MA. We adopted Neural Network and Poisson regression methods to establish nonlinear and linear prediction models, respectively. These models were tested for the validity of their predictions of the safety data, and only the Neural Network method resulted in substantially significant predictive ability for each airline. Alternative predictions from counting of audit findings and from the time sequence of safety data produced some significant results, but of much smaller magnitude than HFACS-MA. The use of HF/E analysis of audit findings provided proactive predictors of future safety performance in the aviation maintenance field. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
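
    The linear (Poisson regression) half of such an approach can be sketched as a generalized linear model of monthly incident counts on counts of audit-derived HF/E error categories; the category names and data below are synthetic stand-ins, not the airlines' records.

      # Hedged sketch: Poisson regression of monthly safety incidents on counts of
      # HF/E error categories extracted from audit findings. Synthetic data only.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      months = 78                                                  # roughly 6.5 years of monthly data
      audit_errors = rng.poisson(lam=[3, 2, 1], size=(months, 3))  # three illustrative error categories
      true_rate = np.exp(0.1 + 0.15 * audit_errors[:, 0] + 0.05 * audit_errors[:, 1])
      incidents = rng.poisson(true_rate)

      X = sm.add_constant(audit_errors)
      fit = sm.GLM(incidents, X, family=sm.families.Poisson()).fit()
      print(fit.summary().tables[1])   # which error categories predict future incidents
      print(fit.predict(X)[:5])        # fitted monthly incident rates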

  17. Modeling Forest Biomass and Growth: Coupling Long-Term Inventory and Lidar Data

    NASA Technical Reports Server (NTRS)

    Babcock, Chad; Finley, Andrew O.; Cook, Bruce D.; Weiskittel, Andrew; Woodall, Christopher W.

    2016-01-01

    Combining spatially-explicit long-term forest inventory and remotely sensed information from Light Detection and Ranging (LiDAR) datasets through statistical models can be a powerful tool for predicting and mapping above-ground biomass (AGB) at a range of geographic scales. We present and examine a novel modeling approach to improve prediction of AGB and estimate AGB growth using LiDAR data. The proposed model accommodates temporal misalignment between field measurements and remotely sensed data (a problem pervasive in such settings) by including multiple time-indexed measurements at plot locations to estimate AGB growth. We pursue a Bayesian modeling framework that allows for appropriately complex parameter associations and uncertainty propagation through to prediction. Specifically, we identify a space-varying coefficients model to predict and map AGB and its associated growth simultaneously. The proposed model is assessed using LiDAR data acquired from NASA Goddard's LiDAR, Hyper-spectral & Thermal imager and field inventory data from the Penobscot Experimental Forest in Bradley, Maine. The proposed model outperformed the time-invariant counterpart models in predictive performance as indicated by a substantial reduction in root mean squared error. The proposed model adequately accounts for temporal misalignment through the estimation of forest AGB growth and accommodates residual spatial dependence. Results from this analysis suggest that future AGB models informed using remotely sensed data, such as LiDAR, may be improved by adapting traditional modeling frameworks to account for temporal misalignment and spatial dependence using random effects.

  18. Predicting Time to Hospital Discharge for Extremely Preterm Infants

    PubMed Central

    Hintz, Susan R.; Bann, Carla M.; Ambalavanan, Namasivayam; Cotten, C. Michael; Das, Abhik; Higgins, Rosemary D.

    2010-01-01

    As extremely preterm infant mortality rates have decreased, concerns regarding resource utilization have intensified. Accurate models to predict time to hospital discharge could aid in resource planning, family counseling, and perhaps stimulate quality improvement initiatives. Objectives For infants <27 weeks estimated gestational age (EGA), to develop, validate and compare several models to predict time to hospital discharge based on time-dependent covariates, and based on the presence of 5 key risk factors as predictors. Patients and Methods This was a retrospective analysis of infants <27 weeks EGA, born 7/2002-12/2005 and surviving to discharge from a NICHD Neonatal Research Network site. Time to discharge was modeled both as a continuous variable (postmenstrual age at discharge, PMAD) and as categorical variables (“Early” and “Late” discharge). Three linear and logistic regression models with time-dependent covariate inclusion were developed (perinatal factors only, perinatal+early neonatal factors, perinatal+early+later factors). Models for Early and Late discharge using the cumulative presence of 5 key risk factors as predictors were also evaluated. Predictive capabilities were compared using the coefficient of determination (R2) for linear models, and the AUC of the ROC curve for logistic models. Results Data from 2254 infants were included. Prediction of PMAD was poor, with only 38% of variation explained by linear models. However, models incorporating later clinical characteristics were more accurate in predicting “Early” or “Late” discharge (full models: AUC 0.76-0.83 vs. perinatal factor models: AUC 0.56-0.69). In simplified key risk factors models, predicted probabilities for Early and Late discharge compared favorably with observed rates. Furthermore, the AUCs (0.75-0.77) were similar to those of models including the full factor set. Conclusions Prediction of Early or Late discharge is poor if only perinatal factors are considered, but improves substantially with knowledge of later-occurring morbidities. Prediction using a few key risk factors is comparable to full models, and may offer a clinically applicable strategy. PMID:20008430

  19. Instrumental record of debris flow initiation during natural rainfall: Implications for modeling slope stability

    USGS Publications Warehouse

    Montgomery, D.R.; Schmidt, K.M.; Dietrich, W.E.; McKean, J.

    2009-01-01

    The middle of a hillslope hollow in the Oregon Coast Range failed and mobilized as a debris flow during heavy rainfall in November 1996. Automated pressure transducers recorded high spatial variability of pore water pressure within the area that mobilized as a debris flow, which initiated where local upward flow from bedrock developed into overlying colluvium. Postfailure observations of the bedrock surface exposed in the debris flow scar reveal a strong spatial correspondence between elevated piezometric response and water discharging from bedrock fractures. Measurements of apparent root cohesion on the basal (Cb) and lateral (Cl) scarp demonstrate substantial local variability, with areally weighted values of Cb = 0.1 and Cl = 4.6 kPa. Using measured soil properties and basal root strength, the widely used infinite slope model, employed assuming slope parallel groundwater flow, provides a poor prediction of hydrologic conditions at failure. In contrast, a model including lateral root strength (but neglecting lateral frictional strength) gave a predicted critical value of relative soil saturation that fell within the range defined by the arithmetic and geometric mean values at the time of failure. The 3-D slope stability model CLARA-W, used with locally observed pore water pressure, predicted small areas with lower factors of safety within the overall slide mass at sites consistent with field observations of where the failure initiated. The highly variable and localized nature of small areas of high pore pressure that can trigger slope failure means, however, that substantial uncertainty appears inevitable for estimating hydrologic conditions within incipient debris flows under natural conditions. Copyright 2009 by the American Geophysical Union.
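
    For reference, the infinite slope model mentioned above is usually written, with combined soil and root cohesion C, soil depth z, slope angle theta, friction angle phi, and relative saturation m under slope-parallel flow, as FS = [C + (gamma_s - m*gamma_w) z cos^2(theta) tan(phi)] / [gamma_s z sin(theta) cos(theta)]. The short sketch below evaluates this standard textbook form with illustrative parameter values, not the measured site values.

      # Hedged sketch: infinite-slope factor of safety with combined soil-plus-root
      # cohesion and relative saturation m = h/z. Illustrative parameters only.
      import math

      def infinite_slope_fs(C, z, theta_deg, phi_deg, m, gamma_s=17000.0, gamma_w=9810.0):
          """Factor of safety; C in Pa, z in m, angles in degrees, unit weights in N/m3."""
          theta, phi = math.radians(theta_deg), math.radians(phi_deg)
          resisting = C + (gamma_s - m * gamma_w) * z * math.cos(theta) ** 2 * math.tan(phi)
          driving = gamma_s * z * math.sin(theta) * math.cos(theta)
          return resisting / driving

      # Basal root cohesion alone (0.1 kPa) versus adding a lateral-root contribution (~4.7 kPa):
      for C in (100.0, 4700.0):
          for m in (0.0, 0.5, 1.0):       # relative saturation
              print(f"C={C/1000:.1f} kPa, m={m}: FS={infinite_slope_fs(C, 1.0, 40, 35, m):.2f}")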

  20. A Simple Model Predicting Individual Weight Change in Humans

    PubMed Central

    Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.

    2010-01-01

    Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated with numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants’ weight change by comparing model estimates of final weight with data from two recent underfeeding studies and one overfeeding study. The mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating the reliability of individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight change studies. PMID:24707319
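
    A minimal sketch of the general energy-balance idea (not the published model itself) treats weight change as the energy imbalance divided by the energy density of tissue, with expenditure growing with body weight; the constants below are rough illustrative values.

      # Hedged sketch of a one-dimensional energy-balance weight model:
      # dW/dt = (intake - K*W) / RHO. Constants are rough illustrative values,
      # not the parameters of the published model.
      RHO = 7700.0   # approximate energy density of tissue change (kcal per kg)
      K = 31.0       # assumed maintenance expenditure (kcal per kg body weight per day)

      def simulate_weight(w0_kg, intake_kcal_per_day, days, dt=1.0):
          """Forward-Euler integration of dW/dt = (intake - K*W) / RHO."""
          w = w0_kg
          for _ in range(int(days / dt)):
              w += dt * (intake_kcal_per_day - K * w) / RHO
          return w

      # A 100 kg person switching to 2500 kcal/day:
      print(round(simulate_weight(100.0, 2500.0, 180), 1))    # after 6 months
      print(round(simulate_weight(100.0, 2500.0, 3000), 1))   # near steady state, ~2500/31 = 80.6 kg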

  1. Comparative Longterm Mortality Trends in Cancer vs. Ischemic Heart Disease in Puerto Rico.

    PubMed

    Torres, David; Pericchi, Luis R; Mattei, Hernando; Zevallos, Juan C

    2017-06-01

    Although contemporary mortality data are important for health assessment and planning purposes, their availability lags by several years. Statistical projection techniques can be employed to obtain current estimates. This study aimed to assess annual trends of mortality in Puerto Rico due to cancer and Ischemic Heart Disease (IHD) and to predict short-term and long-term cancer and IHD mortality figures. Projections of age-adjusted mortality per 100,000 population, with 50% probability intervals, were calculated using a Bayesian Age-Period-Cohort (APC) dynamic model. Multiple cause-of-death annual files for Puerto Rico for the years 1994-2010 were used to calculate short-term (2011-2012) predictions. Long-term (2013-2022) predictions were based on quinquennial data. We also calculated gender differences in rates (men-women) for each study period. Mortality rates for women were similar for cancer and IHD in the 1994-1998 period but changed substantially in the projected 2018-2022 period. Cancer mortality rates declined gradually over time, and the gender difference remained constant throughout the historical and projected trends. A consistent declining trend in historical annual IHD mortality rates was observed for both genders, with a substantial changepoint around 2004-2005 for men. The initial gender difference of 33% (80/100,000 vs. 60/100,000) in mortality rates observed between cancer and IHD in the 1994-1998 period increased to 300% (60/100,000 vs. 20/100,000) for the 2018-2022 period. The APC projection model accurately projects short-term and long-term mortality trends for cancer and IHD in this population: the steady historical and projected cancer mortality rates contrast with the substantial decline in IHD mortality rates, especially in men.

  2. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    PubMed

    Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri

    2015-11-01

    There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability were demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that the thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.

  3. Does the Current Minimum Validate (or Invalidate) Cycle Prediction Methods?

    NASA Technical Reports Server (NTRS)

    Hathaway, David H.

    2010-01-01

    This deep, extended solar minimum and the slow start to Cycle 24 strongly suggest that Cycle 24 will be a small cycle. A wide array of solar cycle prediction techniques has been applied to predicting the amplitude of Cycle 24, with widely different results. Current conditions and new observations indicate that some highly regarded techniques now appear to have doubtful utility. Geomagnetic precursors have been reliable in the past and can be tested with 12 cycles of data. Of the three primary geomagnetic precursors, only one (the minimum level of geomagnetic activity) suggests a small cycle. The Sun's polar field strength has also been used to successfully predict the last three cycles. The current weak polar fields are indicative of a small cycle. For the first time, dynamo models have been used to predict the size of a solar cycle, but with opposite predictions depending on the model and the data assimilation. However, new measurements of the surface meridional flow indicate that the flow was substantially faster on the approach to the Cycle 24 minimum than at the Cycle 23 minimum. In both dynamo predictions, a faster meridional flow should have given a shorter Cycle 23 with stronger polar fields. This suggests that these dynamo models are not yet ready for solar cycle prediction.

  4. Predictions of heading date in bread wheat (Triticum aestivum L.) using QTL-based parameters of an ecophysiological model

    PubMed Central

    Bogard, Matthieu; Ravel, Catherine; Paux, Etienne; Bordes, Jacques; Balfourier, François; Chapman, Scott C.; Le Gouis, Jacques; Allard, Vincent

    2014-01-01

    Prediction of wheat phenology facilitates the selection of cultivars with specific adaptations to a particular environment. However, while QTL analysis for heading date can identify major genes controlling phenology, the results are limited to the environments and genotypes tested. Moreover, while ecophysiological models allow accurate predictions in new environments, they may require substantial phenotypic data to parameterize each genotype. Also, the model parameters are rarely related to all underlying genes, and all the possible allelic combinations that could be obtained by breeding cannot be tested with models. In this study, a QTL-based model is proposed to predict heading date in bread wheat (Triticum aestivum L.). Two parameters of an ecophysiological model (Vsat and Pbase, representing genotype vernalization requirements and photoperiod sensitivity, respectively) were optimized for 210 genotypes grown in 10 contrasting location × sowing date combinations. Multiple linear regression models predicting Vsat and Pbase with 11 and 12 associated genetic markers accounted for 71 and 68% of the variance of these parameters, respectively. QTL-based Vsat and Pbase estimates were able to predict heading date of an independent validation data set (88 genotypes in six location × sowing date combinations) with a root mean square error of prediction of 5 to 8.6 days, explaining 48 to 63% of the variation for heading date. The QTL-based model proposed in this study may be used for agronomic purposes and to assist breeders in suggesting locally adapted ideotypes for wheat phenology. PMID:25148833

  5. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    PubMed Central

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  6. Drivers and seasonal predictability of extreme wind speeds in the ECMWF System 4 and a statistical model

    NASA Astrophysics Data System (ADS)

    Walz, M. A.; Donat, M.; Leckebusch, G. C.

    2017-12-01

    As extreme wind speeds are responsible for large socio-economic losses in Europe, a skillful prediction would be of great benefit for disaster prevention as well as for the actuarial community. Here we evaluate patterns of large-scale atmospheric variability and the seasonal predictability of extreme wind speeds (e.g. >95th percentile) in the European domain in the dynamical seasonal forecast system ECMWF System 4, and compare to the predictability based on a statistical prediction model. The dominant patterns of atmospheric variability show distinct differences between reanalysis and ECMWF System 4, with most patterns in System 4 extended downstream in comparison to ERA-Interim. The dissimilar manifestations of the patterns within the two models lead to substantially different drivers associated with the occurrence of extreme winds in the respective model. While the ECMWF System 4 is shown to provide some predictive power over Scandinavia and the eastern Atlantic, only very few grid cells in the European domain have significant correlations for extreme wind speeds in System 4 compared to ERA-Interim. In contrast, a statistical model predicts extreme wind speeds during boreal winter in better agreement with the observations. Our results suggest that System 4 does not seem to capture the potential predictability of extreme winds that exists in the real world, and therefore fails to provide reliable seasonal predictions for lead months 2-4. This is likely related to the unrealistic representation of large-scale patterns of atmospheric variability. Hence our study points to potential improvements of dynamical prediction skill by improving the simulation of large-scale atmospheric dynamics.

  7. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
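
    The calibration step amounts to minimizing a weighted sum of squared residuals over model parameters. The sketch below shows the same idea with scipy standing in for PEST and a toy first-order decay model standing in for DayCent; all numbers are illustrative.

```python
# Illustrative stand-in for PEST-style inverse modeling: calibrate parameters of a
# toy model against observations by minimizing weighted residuals.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 25)
true_k, true_c0 = 0.35, 80.0
obs = true_c0 * np.exp(-true_k * t) + np.random.default_rng(1).normal(0, 2, t.size)
weights = np.full(t.size, 1.0 / 2.0)  # inverse of the assumed observation standard deviation

def residuals(p):
    k, c0 = p
    return weights * (c0 * np.exp(-k * t) - obs)

fit = least_squares(residuals, x0=[0.1, 50.0], bounds=([0, 0], [5, 500]))
print("estimated k, c0:", fit.x, " weighted SSR:", np.sum(fit.fun ** 2))
```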

  8. Determination of Ion Atmosphere Effects on the Nucleic Acid Electrostatic Potential and Ligand Association Using AH+·C Wobble Formation in Double-Stranded DNA

    PubMed Central

    2017-01-01

    The high charge density of nucleic acids and resulting ion atmosphere profoundly influence the conformational landscape of RNA and DNA and their association with small molecules and proteins. Electrostatic theories have been applied to quantitatively model the electrostatic potential surrounding nucleic acids and the effects of the surrounding ion atmosphere, but experimental measures of the potential and tests of these models have often been complicated by conformational changes and multisite binding equilibria, among other factors. We sought a simple system to further test the basic predictions from electrostatics theory and to measure the energetic consequences of the nucleic acid electrostatic field. We turned to a DNA system developed by Bevilacqua and co-workers that involves a proton as a ligand whose binding is accompanied by formation of an internal AH+·C wobble pair [Siegfried, N. A., et al. Biochemistry, 2010, 49, 3225]. Consistent with predictions from polyelectrolyte models, we observed logarithmic dependences of proton affinity versus salt concentration of −0.96 ± 0.03 and −0.52 ± 0.01 with monovalent and divalent cations, respectively, and these results help clarify prior results that appeared to conflict with these fundamental models. Strikingly, quantitation of the ion atmosphere content indicates that divalent cations are preferentially lost over monovalent cations upon A·C protonation, providing experimental indication of the preferential localization of more highly charged cations to the inner shell of the ion atmosphere. The internal AH+·C wobble system further allowed us to parse energetic contributions and extract estimates for the electrostatic potential at the position of protonation. The results give a potential near the DNA surface at 20 mM Mg2+ that is much less substantial than at 20 mM K+ (−120 mV vs −210 mV). These values and difference are similar to predictions from theory, and the potential is substantially reduced at higher salt, also as predicted; however, even at 1 M K+ the potential remains substantial, counter to common assumptions. The A·C protonation module allows extraction of new properties of the ion atmosphere and provides an electrostatic meter that will allow local electrostatic potential and energetics to be measured within nucleic acids and their complexes with proteins. PMID:28489947
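
    One way to see what a surface potential of −120 or −210 mV implies energetically is the standard relation between a local potential and the apparent pKa shift of a protonation site. The snippet below is a simplified orientation calculation only, not the paper's full parsing of contributions; the temperature and the use of the idealized relation are assumptions.

```python
# Simplified, illustrative mapping between a local electrostatic potential and the
# resulting shift in the apparent pKa of a protonation site (delta pKa = -F*psi / (2.303*R*T)).
F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol K)
T = 293.15       # assumed temperature, K

def pka_shift(psi_mV: float) -> float:
    """Apparent pKa upshift produced by a potential psi given in millivolts."""
    return -F * (psi_mV / 1000.0) / (2.303 * R * T)

for psi in (-120.0, -210.0):  # potentials reported at 20 mM Mg2+ and 20 mM K+
    print(f"psi = {psi:.0f} mV  ->  delta pKa ~ {pka_shift(psi):+.1f}")
```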

  9. Risk prediction models for graft failure in kidney transplantation: a systematic review.

    PubMed

    Kaboré, Rémi; Haller, Maria C; Harambat, Jérôme; Heinze, Georg; Leffondré, Karen

    2017-04-01

    Risk prediction models are useful for identifying kidney recipients at high risk of graft failure, thus optimizing clinical care. Our objective was to systematically review the models that have been recently developed and validated to predict graft failure in kidney transplantation recipients. We used PubMed and Scopus to search for English, German and French language articles published in 2005-15. We selected studies that developed and validated a new risk prediction model for graft failure after kidney transplantation, or validated an existing model with or without updating the model. Data on recipient characteristics and predictors, as well as modelling and validation methods were extracted. In total, 39 articles met the inclusion criteria. Of these, 34 developed and validated a new risk prediction model and 5 validated an existing one with or without updating the model. The most frequently predicted outcome was graft failure, defined as dialysis, re-transplantation or death with functioning graft. Most studies used the Cox model. There was substantial variability in predictors used. In total, 25 studies used predictors measured at transplantation only, and 14 studies used predictors also measured after transplantation. Discrimination performance was reported in 87% of studies, while calibration was reported in 56%. Performance indicators were estimated using both internal and external validation in 13 studies, and using external validation only in 6 studies. Several prediction models for kidney graft failure in adults have been published. Our study highlights the need to better account for competing risks when applicable in such studies, and to adequately account for post-transplant measures of predictors in studies aiming at improving monitoring of kidney transplant recipients. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  10. Summer Precipitation Predicts Spatial Distributions of Semiaquatic Mammals

    PubMed Central

    Ahlers, Adam A.; Cotner, Lisa A.; Wolff, Patrick J.; Mitchell, Mark A.; Heske, Edward J.; Schooley, Robert L.

    2015-01-01

    Climate change is predicted to increase the frequency of droughts and intensity of seasonal precipitation in many regions. Semiaquatic mammals should be vulnerable to this increased variability in precipitation, especially in human-modified landscapes where dispersal to suitable habitat or temporary refugia may be limited. Using six years of presence-absence data (2007–2012) spanning years of record-breaking drought and flood conditions, we evaluated regional occupancy dynamics of American mink (Neovison vison) and muskrats (Ondatra zibethicus) in a highly altered agroecosystem in Illinois, USA. We used noninvasive sign surveys and a multiseason occupancy modeling approach to estimate annual occupancy rates for both species and related these rates to summer precipitation. We also tracked radiomarked individuals to assess mortality risk for both species when moving in terrestrial areas. Annual model-averaged estimates of occupancy for mink and muskrat were correlated positively to summer precipitation. Mink and muskrats were widespread during a year (2008) with above-average precipitation. However, estimates of site occupancy declined substantially for mink (0.56) and especially muskrats (0.09) during the severe drought of 2012. Mink are generalist predators that probably use terrestrial habitat during droughts. However, mink had substantially greater risk of mortality away from streams. In comparison, muskrats are more restricted to aquatic habitats and likely suffered high mortality during the drought. Our patterns are striking, but a more mechanistic understanding is needed of how semiaquatic species in human-modified ecosystems will respond ecologically in situ to extreme weather events predicted by climate-change models. PMID:26284916

  11. Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City

    PubMed Central

    2016-01-01

    The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast. PMID:27855155

  12. Summer Precipitation Predicts Spatial Distributions of Semiaquatic Mammals.

    PubMed

    Ahlers, Adam A; Cotner, Lisa A; Wolff, Patrick J; Mitchell, Mark A; Heske, Edward J; Schooley, Robert L

    2015-01-01

    Climate change is predicted to increase the frequency of droughts and intensity of seasonal precipitation in many regions. Semiaquatic mammals should be vulnerable to this increased variability in precipitation, especially in human-modified landscapes where dispersal to suitable habitat or temporary refugia may be limited. Using six years of presence-absence data (2007-2012) spanning years of record-breaking drought and flood conditions, we evaluated regional occupancy dynamics of American mink (Neovison vison) and muskrats (Ondatra zibethicus) in a highly altered agroecosystem in Illinois, USA. We used noninvasive sign surveys and a multiseason occupancy modeling approach to estimate annual occupancy rates for both species and related these rates to summer precipitation. We also tracked radiomarked individuals to assess mortality risk for both species when moving in terrestrial areas. Annual model-averaged estimates of occupancy for mink and muskrat were correlated positively to summer precipitation. Mink and muskrats were widespread during a year (2008) with above-average precipitation. However, estimates of site occupancy declined substantially for mink (0.56) and especially muskrats (0.09) during the severe drought of 2012. Mink are generalist predators that probably use terrestrial habitat during droughts. However, mink had substantially greater risk of mortality away from streams. In comparison, muskrats are more restricted to aquatic habitats and likely suffered high mortality during the drought. Our patterns are striking, but a more mechanistic understanding is needed of how semiaquatic species in human-modified ecosystems will respond ecologically in situ to extreme weather events predicted by climate-change models.

  13. How ecology shapes exploitation: a framework to predict the behavioural response of human and animal foragers along exploration-exploitation trade-offs.

    PubMed

    Monk, Christopher T; Barbier, Matthieu; Romanczuk, Pawel; Watson, James R; Alós, Josep; Nakayama, Shinnosuke; Rubenstein, Daniel I; Levin, Simon A; Arlinghaus, Robert

    2018-06-01

    Understanding how humans and other animals behave in response to changes in their environments is vital for predicting population dynamics and the trajectory of coupled social-ecological systems. Here, we present a novel framework for identifying emergent social behaviours in foragers (including humans engaged in fishing or hunting) in predator-prey contexts based on the exploration difficulty and exploitation potential of a renewable natural resource. A qualitative framework is introduced that predicts when foragers should behave territorially, search collectively, act independently or switch among these states. To validate it, we derived quantitative predictions from two models of different structure: a generic mathematical model, and a lattice-based evolutionary model emphasising exploitation and exclusion costs. These models independently identified that the exploration difficulty and exploitation potential of the natural resource controls the social behaviour of resource exploiters. Our theoretical predictions were finally compared to a diverse set of empirical cases focusing on fisheries and aquatic organisms across a range of taxa, substantiating the framework's predictions. Understanding social behaviour for given social-ecological characteristics has important implications, particularly for the design of governance structures and regulations to move exploited systems, such as fisheries, towards sustainability. Our framework provides concrete steps in this direction. © 2018 John Wiley & Sons Ltd/CNRS.

  14. Residual Strength Prediction of Fuselage Structures with Multiple Site Damage

    NASA Technical Reports Server (NTRS)

    Chen, Chuin-Shan; Wawrzynek, Paul A.; Ingraffea, Anthony R.

    1999-01-01

    This paper summarizes recent results on simulating full-scale pressure tests of wide body, lap-jointed fuselage panels with multiple site damage (MSD). The crack tip opening angle (CTOA) fracture criterion and the FRANC3D/STAGS software program were used to analyze stable crack growth under conditions of general yielding. The link-up of multiple cracks and residual strength of damaged structures were predicted. Elastic-plastic finite element analysis based on the von Mises yield criterion and incremental flow theory with small strain assumption was used. A global-local modeling procedure was employed in the numerical analyses. Stress distributions from the numerical simulations are compared with strain gage measurements. Analysis results show that accurate representation of the load transfer through the rivets is crucial for the model to predict the stress distribution accurately. Predicted crack growth and residual strength are compared with test data. Observed and predicted results both indicate that the occurrence of small MSD cracks substantially reduces the residual strength. Modeling fatigue closure is essential to capture the fracture behavior during the early stable crack growth. Breakage of a tear strap can have a major influence on residual strength prediction.

  15. Learning complex temporal patterns with resource-dependent spike timing-dependent plasticity.

    PubMed

    Hunzinger, Jason F; Chan, Victor H; Froemke, Robert C

    2012-07-01

    Studies of spike timing-dependent plasticity (STDP) have revealed that long-term changes in the strength of a synapse may be modulated substantially by temporal relationships between multiple presynaptic and postsynaptic spikes. Whereas long-term potentiation (LTP) and long-term depression (LTD) of synaptic strength have been modeled as distinct or separate functional mechanisms, here, we propose a new shared resource model. A functional consequence of our model is fast, stable, and diverse unsupervised learning of temporal multispike patterns with a biologically consistent spiking neural network. Due to interdependencies between LTP and LTD, dendritic delays, and proactive homeostatic aspects of the model, neurons are equipped to learn to decode temporally coded information within spike bursts. Moreover, neurons learn spike timing with few exposures in substantial noise and jitter. Surprisingly, despite having only one parameter, the model also accurately predicts in vitro observations of STDP in more complex multispike trains, as well as rate-dependent effects. We discuss candidate commonalities in natural long-term plasticity mechanisms.
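
    For readers unfamiliar with STDP, a conventional pair-based update rule is sketched below as background; it does not implement the paper's shared-resource coupling of LTP and LTD, and the amplitudes and time constants are illustrative.

```python
# Minimal pair-based STDP weight update for orientation only; the paper's
# shared-resource model couples LTP and LTD through a common resource, which is
# not reproduced here.
import math

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (illustrative)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants, ms

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)  # post before pre -> depression

print(stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0))
```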

  16. Automated Performance Prediction of Message-Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)

    1995-01-01

    The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The MK toolkit described in this paper is the result of an on-going effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
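
    The kind of closed-form expression such tools derive typically splits per-step cost into a computation term and a latency/bandwidth communication term. The sketch below is a generic example of that model form; the constants and the halo-exchange pattern are assumptions, not values produced by the toolkit described above or measured on the iPSC/860 or Paragon.

```python
# Hedged sketch of an analytic execution-time expression: per-step computation plus
# a latency/bandwidth model of nearest-neighbour communication.
def predicted_time(n, p, t_flop=1e-7, latency=8e-5, inv_bandwidth=4e-8, flops_per_point=10):
    """Estimated seconds per step for an n x n grid decomposed over p processors."""
    compute = (n * n / p) * flops_per_point * t_flop
    # halo exchange: 4 messages of roughly n / sqrt(p) points, 8 bytes per point
    msg_points = n / (p ** 0.5)
    communicate = 4 * (latency + msg_points * 8 * inv_bandwidth)
    return compute + communicate

for p in (4, 16, 64):
    print(p, f"{predicted_time(n=1024, p=p):.4f} s")
```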

  17. Alterations in choice behavior by manipulations of world model.

    PubMed

    Green, C S; Benson, C; Kersten, D; Schrater, P

    2010-09-14

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
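
    To make the two decision rules concrete, the toy simulation below contrasts a posterior-max chooser with explicit probability matching on a two-option Bernoulli task. It is not the paper's learner, which derives matching from mis-specified beliefs about the generative process; the priors and reward rates are arbitrary.

```python
# Toy two-option Bernoulli task contrasting a posterior-max chooser with explicit
# probability matching (choice probabilities proportional to posterior means).
import numpy as np

rng = np.random.default_rng(0)
p_reward = np.array([0.7, 0.3])
alpha = np.ones((2, 2)); beta = np.ones((2, 2))  # Beta(1,1) priors per strategy/option

rewards = np.zeros(2)
for trial in range(1000):
    post_mean = alpha / (alpha + beta)
    choices = [np.argmax(post_mean[0]),                                   # max rule
               rng.choice(2, p=post_mean[1] / post_mean[1].sum())]        # matching
    for s, c in enumerate(choices):
        r = rng.random() < p_reward[c]
        alpha[s, c] += r; beta[s, c] += 1 - r
        rewards[s] += r

print("max rule reward:", rewards[0], " probability matching reward:", rewards[1])
```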

  18. Alterations in choice behavior by manipulations of world model

    PubMed Central

    Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.

    2010-01-01

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507

  19. Brain mechanisms in religion and spirituality: An integrative predictive processing framework.

    PubMed

    van Elk, Michiel; Aleman, André

    2017-02-01

    We present the theory of predictive processing as a unifying framework to account for the neurocognitive basis of religion and spirituality. Our model is substantiated by discussing four different brain mechanisms that play a key role in religion and spirituality: temporal brain areas are associated with religious visions and ecstatic experiences; multisensory brain areas and the default mode network are involved in self-transcendent experiences; the Theory of Mind-network is associated with prayer experiences and over attribution of intentionality; top-down mechanisms instantiated in the anterior cingulate cortex and the medial prefrontal cortex could be involved in acquiring and maintaining intuitive supernatural beliefs. We compare the predictive processing model with two-systems accounts of religion and spirituality, by highlighting the central role of prediction error monitoring. We conclude by presenting novel predictions for future research and by discussing the philosophical and theological implications of neuroscientific research on religion and spirituality. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Does the covariance structure matter in longitudinal modelling for the prediction of future CD4 counts?

    PubMed

    Taylor, J M; Law, N

    1998-10-30

    We investigate the importance of the assumed covariance structure for longitudinal modelling of CD4 counts. We examine how individual predictions of future CD4 counts are affected by the covariance structure. We consider four covariance structures: one based on an integrated Ornstein-Uhlenbeck stochastic process; one based on Brownian motion, and two derived from standard linear and quadratic random-effects models. Using data from the Multicenter AIDS Cohort Study and from a simulation study, we show that there is a noticeable deterioration in the coverage rate of confidence intervals if we assume the wrong covariance. There is also a loss in efficiency. The quadratic random-effects model is found to be the best in terms of correctly calibrated prediction intervals, but is substantially less efficient than the others. Incorrectly specifying the covariance structure as linear random effects gives too narrow prediction intervals with poor coverage rates. Fitting using the model based on the integrated Ornstein-Uhlenbeck stochastic process is the preferred one of the four considered because of its efficiency and robustness properties. We also use the difference between the future predicted and observed CD4 counts to assess an appropriate transformation of CD4 counts; a fourth root, cube root and square root all appear reasonable choices.
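
    Two of the four covariance structures can be written down directly as kernels over the visit times, as sketched below; the variance parameters are illustrative, and the integrated Ornstein-Uhlenbeck and quadratic random-effects forms follow the same pattern with different kernels.

```python
# Explicit covariance matrices for two of the structures discussed: Brownian motion
# and a linear random-effects model, evaluated on a grid of visit times.
import numpy as np

times = np.array([0.0, 0.5, 1.0, 2.0, 3.0])  # years since study entry (illustrative)

def cov_brownian(t, sigma2=1.0):
    # Cov(W(s), W(t)) = sigma2 * min(s, t)
    return sigma2 * np.minimum.outer(t, t)

def cov_linear_re(t, var_int=1.0, var_slope=0.3, cov_is=0.1):
    # Cov(a + b*s, a + b*t) = Var(a) + Cov(a, b)*(s + t) + Var(b)*s*t
    return var_int + cov_is * np.add.outer(t, t) + var_slope * np.outer(t, t)

print(np.round(cov_brownian(times), 2))
print(np.round(cov_linear_re(times), 2))
```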

  1. A neuromathematical model of human information processing and its application to science content acquisition

    NASA Astrophysics Data System (ADS)

    Anderson, O. Roger

    The rate of information processing during science learning and the efficiency of the learner in mobilizing relevant information in long-term memory as an aid in transmitting newly acquired information to stable storage in long-term memory are fundamental aspects of science content acquisition. These cognitive processes, moreover, may be substantially related in tempo and quality of organization to the efficiency of higher thought processes such as divergent thinking and problem-solving ability that characterize scientific thought. As a contribution to our quantitative understanding of these fundamental information processes, a mathematical model of information acquisition is presented and empirically evaluated in comparison to evidence obtained from experimental studies of science content acquisition. Computer-based models are used to simulate variations in learning parameters and to generate the theoretical predictions to be empirically tested. The initial tests of the predictive accuracy of the model show close agreement between predicted and actual mean recall scores in short-term learning tasks. Implications of the model for human information acquisition and possible future research are discussed in the context of the unique theoretical framework of the model.

  2. Predicting the natural flow regime: Models for assessing hydrological alteration in streams

    USGS Publications Warehouse

    Carlisle, D.M.; Falcone, J.; Wolock, D.M.; Meador, M.R.; Norris, R.H.

    2009-01-01

    Understanding the extent to which natural streamflow characteristics have been altered is an important consideration for ecological assessments of streams. Assessing hydrologic condition requires that we quantify the attributes of the flow regime that would be expected in the absence of anthropogenic modifications. The objective of this study was to evaluate whether selected streamflow characteristics could be predicted at regional and national scales using geospatial data. Long-term, gaged river basins distributed throughout the contiguous US that had streamflow characteristics representing least disturbed or near pristine conditions were identified. Thirteen metrics of the magnitude, frequency, duration, timing and rate of change of streamflow were calculated using a 20-50 year period of record for each site. We used random forests (RF), a robust statistical modelling approach, to develop models that predicted the value for each streamflow metric using natural watershed characteristics. We compared the performance (i.e. bias and precision) of national- and regional-scale predictive models to that of models based on landscape classifications, including major river basins, ecoregions and hydrologic landscape regions (HLR). For all hydrologic metrics, landscape stratification models produced estimates that were less biased and more precise than a null model that accounted for no natural variability. Predictive models at the national and regional scale performed equally well, and substantially improved predictions of all hydrologic metrics relative to landscape stratification models. Prediction error rates ranged from 15 to 40%, but were 25% for most metrics. We selected three gaged, non-reference sites to illustrate how predictive models could be used to assess hydrologic condition. These examples show how the models accurately estimate predisturbance conditions and are sensitive to changes in streamflow variability associated with long-term land-use change. We also demonstrate how the models can be applied to predict expected natural flow characteristics at ungaged sites. © 2009 John Wiley & Sons, Ltd.

  3. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    NASA Astrophysics Data System (ADS)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
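
    Spatial simulated annealing iterates between perturbing one gauge location and accepting or rejecting the move against a design criterion. The sketch below uses mean distance to the nearest gauge as a cheap stand-in for the space-time averaged KED variance minimized in the study; the domain, gauge count and cooling schedule are arbitrary.

```python
# Bare-bones spatial simulated annealing for gauge placement. The criterion here is
# a proxy (mean distance from prediction points to the nearest gauge), not the
# kriging-with-external-drift variance used in the study.
import numpy as np

rng = np.random.default_rng(42)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25)), -1).reshape(-1, 2)
gauges = rng.random((15, 2))

def criterion(g):
    d = np.linalg.norm(grid[:, None, :] - g[None, :, :], axis=2)
    return d.min(axis=1).mean()

current, temp = criterion(gauges), 0.05
for _ in range(2000):
    cand = gauges.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0, 0.05, 2), 0, 1)  # perturb one gauge
    c = criterion(cand)
    if c < current or rng.random() < np.exp((current - c) / temp):  # annealing acceptance
        gauges, current = cand, c
    temp *= 0.999  # cooling schedule

print("optimised mean nearest-gauge distance:", round(current, 4))
```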

  4. Modelling obesity trends in Australia: unravelling the past and predicting the future.

    PubMed

    Hayes, A J; Lung, T W C; Bauman, A; Howard, K

    2017-01-01

    Modelling is increasingly being used to predict the epidemiology of obesity progression and its consequences. The aims of this study were: (a) to present and validate a model for prediction of obesity among Australian adults and (b) to use the model to project the prevalence of obesity and severe obesity by 2025. Individual-level simulation was combined with survey estimation techniques to model the changing population body mass index (BMI) distribution over time. The model input population was derived from a nationally representative survey in 1995, representing over 12 million adults. Simulations were run for 30 years. The model was validated retrospectively and then used to predict obesity and severe obesity by 2025 among different aged cohorts and at a whole population level. The changing BMI distribution over time was well predicted by the model, and projected prevalence of weight status groups agreed with population level data in 2008, 2012 and 2014. The model predicts more growth in obesity among younger than older adult cohorts. Projections at a whole population level were that healthy weight will decline, overweight will remain steady, but obesity and severe obesity prevalence will continue to increase beyond 2016. Adult obesity prevalence was projected to increase from 19% in 1995 to 35% by 2025. Severe obesity (BMI>35), which was only around 5% in 1995, was projected to be 13% by 2025, two to three times the 1995 levels. The projected rise in obesity and severe obesity will have more substantial cost and healthcare system implications than in previous decades. Having a robust epidemiological model is key to predicting these long-term costs and health outcomes into the future.

  5. Role of socioeconomic status measures in long-term mortality risk prediction after myocardial infarction.

    PubMed

    Molshatzki, Noa; Drory, Yaacov; Myers, Vicki; Goldbourt, Uri; Benyamini, Yael; Steinberg, David M; Gerber, Yariv

    2011-07-01

    The relationship of risk factors to outcomes has traditionally been assessed by measures of association such as odds ratio or hazard ratio and their statistical significance from an adjusted model. However, a strong, highly significant association does not guarantee a gain in stratification capacity. Using recently developed model performance indices, we evaluated the incremental discriminatory power of individual and neighborhood socioeconomic status (SES) measures after myocardial infarction (MI). Consecutive patients aged ≤65 years (N=1178) discharged from 8 hospitals in central Israel after incident MI in 1992 to 1993 were followed-up through 2005. A basic model (demographic variables, traditional cardiovascular risk factors, and disease severity indicators) was compared with an extended model including SES measures (education, income, employment, living with a steady partner, and neighborhood SES) in terms of Harrell c statistic, integrated discrimination improvement (IDI), and net reclassification improvement (NRI). During the 13-year follow-up, 326 (28%) patients died. Cox proportional hazards models showed that all SES measures were significantly and independently associated with mortality. Furthermore, compared with the basic model, the extended model yielded substantial gains (all P<0.001) in c statistic (0.723 to 0.757), NRI (15.2%), IDI (5.9%), and relative IDI (32%). Improvement was observed both for sensitivity (classification of events) and specificity (classification of nonevents). This study illustrates the additional insights that can be gained from considering the IDI and NRI measures of model performance and suggests that, among community patients with incident MI, incorporating SES measures into a clinical-based model substantially improves long-term mortality risk prediction.
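
    The reported gains can be computed from two vectors of predicted risks: the c statistic from the ranking of predictions and the IDI as the change in discrimination slope between the basic and extended models. The snippet below illustrates both on simulated predictions, not the study's data.

```python
# Compute the c statistic and the integrated discrimination improvement (IDI) for a
# basic versus an extended risk model, using simulated predicted probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 1000)                                       # event (1) vs no event (0)
p_basic = np.clip(0.3 * y + rng.normal(0.3, 0.15, y.size), 0.01, 0.99)
p_extended = np.clip(0.4 * y + rng.normal(0.25, 0.15, y.size), 0.01, 0.99)

def discrimination_slope(p, y):
    return p[y == 1].mean() - p[y == 0].mean()

idi = discrimination_slope(p_extended, y) - discrimination_slope(p_basic, y)
print("c basic:", round(roc_auc_score(y, p_basic), 3),
      " c extended:", round(roc_auc_score(y, p_extended), 3),
      " IDI:", round(idi, 3))
```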

  6. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    USGS Publications Warehouse

    Balshi, M. S.; McGuire, A.D.; Duffy, P.; Flannigan, M.; Walsh, J.; Melillo, J.

    2009-01-01

    Fire is a common disturbance in the North American boreal forest that influences ecosystem structure and function. The temporal and spatial dynamics of fire are likely to be altered as climate continues to change. In this study, we ask the question: how will area burned in boreal North America by wildfire respond to future changes in climate? To evaluate this question, we developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was substantially more predictable in the western portion of boreal North America than in eastern Canada. Burned area was also not very predictable in areas of substantial topographic relief and in areas along the transition between boreal forest and tundra. At the scale of Alaska and western Canada, the empirical fire models explain on the order of 82% of the variation in annual area burned for the period 1960-2002. July temperature was the most frequently occurring predictor across all models, but the fuel moisture codes for the months June through August (as a group) entered the models as the most important predictors of annual area burned. To predict changes in the temporal and spatial dynamics of fire under future climate, the empirical fire models used output from the Canadian Climate Center CGCM2 global climate model to predict annual area burned through the year 2100 across Alaska and western Canada. Relative to 1991-2000, the results suggest that average area burned per decade will double by 2041-2050 and will increase on the order of 3.5-5.5 times by the last decade of the 21st century. To improve the ability to better predict wildfire across Alaska and Canada, future research should focus on incorporating additional effects of long-term and successional vegetation changes on area burned to account more fully for interactions among fire, climate, and vegetation dynamics. © 2009 The Authors. Journal compilation © 2009 Blackwell Publishing Ltd.

  7. Utilization of satellite data and regional scale numerical models in short range weather forecasting

    NASA Technical Reports Server (NTRS)

    Kreitzberg, C. W.

    1985-01-01

    A number of studies of satellite data impact on numerical weather prediction have produced overwhelming evidence that it is unrealistic to expect satellite temperature soundings to improve detailed regional numerical weather prediction. It is likely that satellite data over the United States would substantially impact mesoscale dynamical predictions if the effort were made to develop a composite moisture analysis system. The horizontal variability of moisture, most clearly depicted in images from satellite water vapor channels, would not be determined from conventional rawinsondes even if that network were expanded by doubling both the number of sites and the time frequency.

  8. Modeling groundwater nitrate concentrations in private wells in Iowa

    USGS Publications Warehouse

    Wheeler, David C.; Nolan, Bernard T.; Flory, Abigail R.; DellaValle, Curt T.; Ward, Mary H.

    2015-01-01

    Contamination of drinking water by nitrate is a growing problem in many agricultural areas of the country. Ingested nitrate can lead to the endogenous formation of N-nitroso compounds, potent carcinogens. We developed a predictive model for nitrate concentrations in private wells in Iowa. Using 34,084 measurements of nitrate in private wells, we trained and tested random forest models to predict log nitrate levels by systematically assessing the predictive performance of 179 variables in 36 thematic groups (well depth, distance to sinkholes, location, land use, soil characteristics, nitrogen inputs, meteorology, and other factors). The final model contained 66 variables in 17 groups. Some of the most important variables were well depth, slope length within 1 km of the well, year of sample, and distance to nearest animal feeding operation. The correlation between observed and estimated nitrate concentrations was excellent in the training set (r-square = 0.77) and was acceptable in the testing set (r-square = 0.38). The random forest model had substantially better predictive performance than a traditional linear regression model or a regression tree. Our model will be used to investigate the association between nitrate levels in drinking water and cancer risk in the Iowa participants of the Agricultural Health Study cohort.
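
    A schematic version of the modelling step described above: a random forest regression of log-transformed nitrate on a few well and landscape predictors, scored on held-out wells. The three predictors and the data-generating rule are simulated stand-ins for the study's 179 candidate variables.

```python
# Random forest regression of simulated log-nitrate on stand-in predictors, with a
# train/test split to mimic the training and testing sets reported in the study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
X = np.column_stack([
    rng.uniform(5, 120, n),      # well depth (m)
    rng.uniform(0, 5000, n),     # distance to nearest animal feeding operation (m)
    rng.uniform(0, 30, n),       # slope length within 1 km (arbitrary units)
])
log_no3 = 3.0 - 0.02 * X[:, 0] - 0.0003 * X[:, 1] + 0.03 * X[:, 2] + rng.normal(0, 0.8, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_no3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("train R^2:", round(rf.score(X_tr, y_tr), 2), " test R^2:", round(rf.score(X_te, y_te), 2))
```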

  9. Modeling groundwater nitrate concentrations in private wells in Iowa.

    PubMed

    Wheeler, David C; Nolan, Bernard T; Flory, Abigail R; DellaValle, Curt T; Ward, Mary H

    2015-12-01

    Contamination of drinking water by nitrate is a growing problem in many agricultural areas of the country. Ingested nitrate can lead to the endogenous formation of N-nitroso compounds, potent carcinogens. We developed a predictive model for nitrate concentrations in private wells in Iowa. Using 34,084 measurements of nitrate in private wells, we trained and tested random forest models to predict log nitrate levels by systematically assessing the predictive performance of 179 variables in 36 thematic groups (well depth, distance to sinkholes, location, land use, soil characteristics, nitrogen inputs, meteorology, and other factors). The final model contained 66 variables in 17 groups. Some of the most important variables were well depth, slope length within 1 km of the well, year of sample, and distance to nearest animal feeding operation. The correlation between observed and estimated nitrate concentrations was excellent in the training set (r-square=0.77) and was acceptable in the testing set (r-square=0.38). The random forest model had substantially better predictive performance than a traditional linear regression model or a regression tree. Our model will be used to investigate the association between nitrate levels in drinking water and cancer risk in the Iowa participants of the Agricultural Health Study cohort. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Root architecture simulation improves the inference from seedling root phenotyping towards mature root systems

    PubMed Central

    Zhao, Jiangsan; Rewald, Boris; Leitner, Daniel; Nagel, Kerstin A.; Nakhforoosh, Alireza

    2017-01-01

    Root phenotyping provides trait information for plant breeding. A shortcoming of high-throughput root phenotyping is the limitation to seedling plants and failure to make inferences on mature root systems. We suggest root system architecture (RSA) models to predict mature root traits and overcome the inference problem. Sixteen pea genotypes were phenotyped in (i) seedling (Petri dishes) and (ii) mature (sand-filled columns) root phenotyping platforms. The RSA model RootBox was parameterized with seedling traits to simulate the fully developed root systems. Measured and modelled root length, first-order lateral number, and root distribution were compared to determine key traits for model-based prediction. No direct relationship in root traits (tap, lateral length, interbranch distance) was evident between phenotyping systems. RootBox significantly improved the inference over phenotyping platforms. Seedling plant tap and lateral root elongation rates and interbranch distance were sufficient model parameters to predict genotype ranking in total root length with a Spearman rank correlation of 0.83. Parameterization including uneven lateral spacing via a scaling function substantially improved the prediction of architectures underlying the differently sized root systems. We conclude that RSA models can solve the inference problem of seedling root phenotyping. RSA models should be included in the phenotyping pipeline to provide reliable information on mature root systems to breeding research. PMID:28168270

  11. Risk terrain modeling predicts child maltreatment.

    PubMed

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
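
    At its core, risk terrain modeling overlays environmental risk layers on a common grid and combines them, with weights reflecting each factor's importance, into a composite risk surface. The sketch below shows only that overlay step with synthetic layers; the layer names and weights are hypothetical, not the factors used in the Fort Worth model.

```python
# Conceptual core of risk terrain modeling: combine binary risk-factor layers on a
# shared raster into a weighted composite risk surface and flag the riskiest cells.
import numpy as np

rng = np.random.default_rng(11)
shape = (50, 50)                         # raster covering the study area
layers = {                               # True where the (hypothetical) factor is present/near
    "near_liquor_outlet": rng.random(shape) < 0.10,
    "vacant_property": rng.random(shape) < 0.15,
    "prior_domestic_disturbance": rng.random(shape) < 0.20,
}
weights = {"near_liquor_outlet": 1.5, "vacant_property": 1.0, "prior_domestic_disturbance": 2.0}

risk = sum(weights[name] * layer.astype(float) for name, layer in layers.items())
top_cells = np.argwhere(risk >= np.quantile(risk, 0.95))  # highest-risk 5% of cells
print("cells flagged for targeted prevention:", len(top_cells))
```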

  12. Ranking site vulnerability to increasing temperatures in southern Appalachian brook trout streams in Virginia: An exposure-sensitivity approach

    Treesearch

    Bradly A. Trumbo; Keith H. Nislow; Jonathan Stallings; Mark Hudy; Eric P. Smith; Dong-Yun Kim; Bruce Wiggins; Charles A. Dolloff

    2014-01-01

    Models based on simple air temperature–water temperature relationships have been useful in highlighting potential threats to coldwater-dependent species such as Brook Trout Salvelinus fontinalis by predicting major losses of habitat and substantial reductions in geographic distribution. However, spatial variability in the relationship between changes...

  13. Voluntary Midlife Career Change: Integrating the Transtheoretical Model and the Life-Span, Life-Space Approach

    ERIC Educational Resources Information Center

    Barclay, Susan R.; Stoltz, Kevin B.; Chung, Y. Barry

    2011-01-01

    Frequent career change is the predicted experience of workers in the global economy. Self initiating career changers are a substantial subset of the total population of career changers. There is currently a dearth of theory and research to help career counselors conceptualize the career change process for the application of appropriate…

  14. Using kinetic models to predict thermal degradation of fire-retardant-treated plywood roof sheathing

    Treesearch

    Patricia Lebow; Jerrold E. Winandy; Patricia K. Lebow

    2003-01-01

    Between 1985 and 1995, a substantial number of multifamily housing units in the Eastern and Southern U.S. experienced problems with thermally degraded fire-retardant-treated (FRT) plywood roof sheathing. A series of studies conducted at the USDA Forest Service, Forest Products Laboratory (FPL), examined the materials, chemical mechanisms, and process implications and has...

  15. Factors That Affect South African Reading Literacy Achievement: Evidence from prePIRLS 2011

    ERIC Educational Resources Information Center

    van Staden, Surette; Bosker, Roel

    2014-01-01

    This study aims to identify factors that predict reading literacy achievement among Grade 4 learners in South Africa by utilising aspects of Carroll's model of school learning. The study draws on the preProgress in International Reading Literacy Study (prePIRLS) 2011 data, which places South African Grade 4 learners' results substantially below…

  16. Generalized Partial Least Squares Approach for Nominal Multinomial Logit Regression Models with a Functional Covariate

    ERIC Educational Resources Information Center

    Albaqshi, Amani Mohammed H.

    2017-01-01

    Functional Data Analysis (FDA) has attracted substantial attention for the last two decades. Within FDA, classifying curves into two or more categories is consistently of interest to scientists, but multi-class prediction within FDA is challenged in that most classification tools have been limited to binary response applications. The functional…

  17. Comparison of the predictions of two road dust emission models with the measurements of a mobile van

    NASA Astrophysics Data System (ADS)

    Kauhaniemi, M.; Stojiljkovic, A.; Pirjola, L.; Karppinen, A.; Härkönen, J.; Kupiainen, K.; Kangas, L.; Aarnio, M. A.; Omstedt, G.; Denby, B. R.; Kukkonen, J.

    2014-02-01

    The predictions of two road dust suspension emission models were compared with the on-site mobile measurements of suspension emission factors. Such a quantitative comparison has not previously been reported in the reviewed literature. The models used were the Nordic collaboration model NORTRIP (NOn-exhaust Road TRaffic Induced Particle emissions) and the Swedish-Finnish FORE model (Forecasting Of Road dust Emissions). These models describe particulate matter generated by the wear of road surface due to traction control methods and processes that control the suspension of road dust particles into the air. An experimental measurement campaign was conducted using a mobile laboratory called SNIFFER, along two selected road segments in central Helsinki in 2007 and 2008. The suspended PM10 concentration was measured behind the left rear tyre and the street background PM10 concentration in front of the van. Both models reproduced the measured seasonal variation of suspension emission factors fairly well during both years at both measurement sites. However, both models substantially under-predicted the measured emission values. The results indicate that road dust emission models can be directly compared with mobile measurements; however, more extensive and versatile measurement campaigns will be needed in the future.

  18. Comparison of Turbulence Models for Nozzle-Afterbody Flows with Propulsive Jets

    NASA Technical Reports Server (NTRS)

    Compton, William B., III

    1996-01-01

    A numerical investigation was conducted to assess the accuracy of two turbulence models when computing non-axisymmetric nozzle-afterbody flows with propulsive jets. Navier-Stokes solutions were obtained for a convergent-divergent non-axisymmetric nozzle-afterbody and its associated jet exhaust plume at free-stream Mach numbers of 0.600 and 0.938 at an angle of attack of 0 deg. The Reynolds number based on model length was approximately 20 × 10^6. Turbulent dissipation was modeled by the algebraic Baldwin-Lomax turbulence model with the Degani-Schiff modification and by the standard Jones-Launder k-ε turbulence model. At flow conditions without strong shocks and with little or no separation, both turbulence models predicted the pressures on the surfaces of the nozzle very well. When strong shocks and massive separation existed, both turbulence models were unable to predict the flow accurately. Mixing of the jet exhaust plume and the external flow was underpredicted. The differences in drag coefficients for the two turbulence models illustrate that substantial development is still required for computing very complex flows before nozzle performance can be predicted accurately for all external flow conditions.

  19. The regionalization of national-scale SPARROW models for stream nutrients

    USGS Publications Warehouse

    Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.

    2011-01-01

    This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes (SPARROW) models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.

  20. Validating models of target acquisition performance in the dismounted soldier context

    NASA Astrophysics Data System (ADS)

    Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.

    2018-04-01

    The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments is required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral paradigm.
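
    TTP-based range predictions ultimately pass the sensor's delivered TTP value through a logistic-style target transfer probability function calibrated by V50. A commonly cited form of that function is sketched below; treat the exponent expression as a reference form from the open literature rather than the exact NV-IPM internals.

```python
# Commonly cited target transfer probability function: V is the delivered TTP value
# at a given range, V50 the value that yields 50% task performance.
def ttpf(v: float, v50: float) -> float:
    e = 1.51 + 0.24 * (v / v50)      # empirical exponent (reference form, assumed here)
    ratio = (v / v50) ** e
    return ratio / (1.0 + ratio)

for v in (0.5, 1.0, 2.0, 4.0):
    print(f"V/V50 = {v:>3}:  P ~ {ttpf(v, 1.0):.2f}")
```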

  1. Regression trees for predicting mortality in patients with cardiovascular disease: What improvement is achieved by using ensemble-based methods?

    PubMed Central

    Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V

    2012-01-01

    In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
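
    A minimal sketch of this kind of model comparison on synthetic data (scikit-learn 1.0 or later assumed); the study's clinical cohorts and restricted cubic splines are replaced by a generated binary outcome and a generic spline basis expansion, so the numbers are illustrative only.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                                  RandomForestClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for a mortality cohort; the real study used AMI/CHF registries.
    X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                               weights=[0.9, 0.1], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "single tree": DecisionTreeClassifier(max_depth=5, random_state=0),
        "bagged trees": BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                                          n_estimators=200, random_state=0),
        "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "boosted trees": GradientBoostingClassifier(random_state=0),
        # Logistic regression with a spline basis expansion of the inputs
        "logistic + splines": make_pipeline(SplineTransformer(degree=3, n_knots=4),
                                            LogisticRegression(max_iter=2000)),
    }
    for name, model in models.items():
        auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        print(f"{name:20s} AUC = {auc:.3f}")
    ```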

  2. Analysis of significant factors for dengue fever incidence prediction.

    PubMed

    Siriyasatien, Padet; Phumee, Atchara; Ongruk, Phatsavee; Jampachaisri, Katechan; Kesorn, Kraisak

    2016-04-16

    Many popular dengue forecasting techniques have been used by several researchers to extrapolate dengue incidence rates, including the K-H model, support vector machines (SVM), and artificial neural networks (ANN). The time series analysis methodology, particularly ARIMA and SARIMA, has been increasingly applied to the field of epidemiological research for dengue fever, dengue hemorrhagic fever, and other infectious diseases. The main drawback of these methods is that they do not consider other variables that are associated with the dependent variable. Additionally, new factors correlated to the disease are needed to enhance the prediction accuracy of the model when it is applied to areas of similar climates, where weather factors such as temperature, total rainfall, and humidity are not substantially different. Such drawbacks may consequently lower the predictive power for the outbreak. The predictive power of the forecasting model, assessed by Akaike's information criterion (AIC), Bayesian information criterion (BIC), and the mean absolute percentage error (MAPE), is improved by including the new parameters for dengue outbreak prediction. This study's selected model outperforms all three other competing models with the lowest AIC, the lowest BIC, and a small MAPE value. The exclusive use of climate factors from similar locations decreases a model's prediction power. The multivariate Poisson regression, however, effectively forecasts even when climate variables are slightly different. Female mosquitoes and seasons were strongly correlated with dengue cases. Therefore, the dengue incidence trends provided by this model will assist the optimization of dengue prevention. The present work demonstrates the important roles of female mosquito infection rates from the previous season and climate factors (represented as seasons) in dengue outbreaks. Incorporating these two factors in the model significantly improves the predictive power of dengue hemorrhagic fever forecasting models, as confirmed by AIC, BIC, and MAPE.
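
    A small sketch of a multivariate Poisson regression of case counts on season and a previous-season mosquito infection rate, the kind of model the abstract describes; the data, variable names, and coefficients below are invented for illustration (statsmodels assumed).

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 120  # hypothetical monthly surveillance records

    df = pd.DataFrame({
        "season": rng.integers(0, 4, n),                    # encoded season
        "female_infection_rate": rng.uniform(0.0, 0.3, n),  # previous-season mosquito infection
        "rainfall": rng.normal(150.0, 40.0, n),
    })
    true_rate = np.exp(0.5 + 8.0 * df["female_infection_rate"]
                       + 0.002 * df["rainfall"] + 0.1 * df["season"])
    df["cases"] = rng.poisson(true_rate)

    fit = smf.glm("cases ~ C(season) + female_infection_rate + rainfall",
                  data=df, family=sm.families.Poisson()).fit()
    print(fit.summary())
    print("AIC:", round(fit.aic, 1))
    ```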

  3. Empathy and nonattachment independently predict peer nominations of prosocial behavior of adolescents

    PubMed Central

    Sahdra, Baljinder K.; Ciarrochi, Joseph; Parker, Philip D.; Marshall, Sarah; Heaven, Patrick

    2015-01-01

    There is a plethora of research showing that empathy promotes prosocial behavior among young people. We examined a relatively new construct in the mindfulness literature, nonattachment, defined as a flexible way of relating to one's experiences without clinging to or suppressing them. We tested whether nonattachment could predict prosociality above and beyond empathy. Nonattachment implies high cognitive flexibility and sufficient mental resources to step out of excessive self-cherishing to be there for others in need. Multilevel Poisson models using a sample of 15-year olds (N = 1831) showed that empathy and nonattachment independently predicted prosocial behaviors of helpfulness and kindness, as judged by same-sex and opposite-sex peers, except for when boys nominated girls. The effects of nonattachment remained substantial in more conservative models including self-esteem and peer nominations of liking. PMID:25852590

  4. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739

  5. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    PubMed

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.
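
    Not PredNet itself, but a toy numpy sketch of the predictive-coding loop the abstract describes: a linear internal model predicts the next input, and the prediction error both serves as the output signal and drives learning. The world model, dimensions, and learning rate are arbitrary placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    dim = 16
    W = rng.normal(scale=0.1, size=(dim, dim))    # internal model (learned)
    true_shift = np.roll(np.eye(dim), 1, axis=0)  # "world": frames rotate by one position

    x = rng.normal(size=dim)
    lr = 0.05
    for t in range(500):
        x_next = true_shift @ x       # actual sensory input at t+1
        prediction = W @ x            # model's expectation
        error = x_next - prediction   # prediction error signal
        W += lr * np.outer(error, x)  # refine the internal model from the error
        x = x_next

    print("final mean |prediction error|:", np.abs(error).mean())
    ```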

  6. An Assessment of ECMWF Analyses and Model Forecasts over the North Slope of Alaska Using Observations from the ARM Mixed-Phase Arctic Cloud Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Shaocheng; Klein, Stephen A.; Yio, J. John

    2006-03-11

    European Centre for Medium-Range Weather Forecasts (ECMWF) analysis and model forecast data are evaluated using observations collected during the Atmospheric Radiation Measurement (ARM) October 2004 Mixed-Phase Arctic Cloud Experiment (M-PACE) at its North Slope of Alaska (NSA) site. It is shown that the ECMWF analysis reasonably represents the dynamic and thermodynamic structures of the large-scale systems that affected the NSA during M-PACE. The model-analyzed near-surface horizontal winds, temperature, and relative humidity also agree well with the M-PACE surface measurements. Given the well-represented large-scale fields, the model shows overall good skill in predicting various cloud types observed during M-PACE; however, the physical properties of single-layer boundary layer clouds are in substantial error. At these times, the model substantially underestimates the liquid water path in these clouds, with the concomitant result that the model largely underpredicts the downwelling longwave radiation at the surface and overpredicts the outgoing longwave radiation at the top of the atmosphere. The model also overestimates the net surface shortwave radiation, mainly because of the underestimation of the surface albedo. The problem in the surface albedo is primarily associated with errors in the surface snow prediction. Principally because of the underestimation of the surface downwelling longwave radiation at the times of single-layer boundary layer clouds, the model shows a much larger energy loss (-20.9 W m-2) than the observation (-9.6 W m-2) at the surface during the M-PACE period.

  7. Examination of a sociocultural model of excessive exercise among male and female adolescents.

    PubMed

    White, James; Halliwell, Emma

    2010-06-01

    There is substantial evidence that sociocultural pressures and body image disturbances can lead to disordered eating, yet few studies have examined their impact on excessive exercise. The study adapted a sociocultural model for disordered eating to predict excessive exercise using data from boys and girls in early adolescence (N=421). Perceived sociocultural pressures to lose weight and build muscle, body image disturbance and appearance investment were associated with a compulsive need to exercise. Adolescents' investment in appearance and body image disturbance fully mediated the relationship between sociocultural pressures and a compulsive need for exercise. There was no support for the mediational model in predicting adolescents' frequency or duration of exercise. Results support the sociocultural model as an explanatory model for excessive exercise, but suggest appearance investment and body image disturbance are important mediators of sociocultural pressures. 2010 Elsevier Ltd. All rights reserved.

  8. The ACC/AHA 2013 pooled cohort equations compared to a Korean Risk Prediction Model for atherosclerotic cardiovascular disease.

    PubMed

    Jung, Keum Ji; Jang, Yangsoo; Oh, Dong Joo; Oh, Byung-Hee; Lee, Sang Hoon; Park, Seong-Wook; Seung, Ki-Bae; Kim, Hong-Kyu; Yun, Young Duk; Choi, Sung Hee; Sung, Jidong; Lee, Tae-Yong; Kim, Sung Hi; Koh, Sang Baek; Kim, Moon Chan; Chang Kim, Hyeon; Kimm, Heejin; Nam, Chungmo; Park, Sungha; Jee, Sun Ha

    2015-09-01

    To evaluate the performance of the American College of Cardiology/American Heart Association (ACC/AHA) 2013 Pooled Cohort Equations in the Korean Heart Study (KHS) population and to develop a Korean Risk Prediction Model (KRPM) for atherosclerotic cardiovascular disease (ASCVD) events. The KHS cohort included 200,010 Korean adults aged 40-79 years who were free from ASCVD at baseline. Discrimination, calibration, and recalibration of the ACC/AHA Equations in predicting 10-year ASCVD risk in the KHS cohort were evaluated. The KRPM was derived using Cox model coefficients, mean risk factor values, and mean incidences from the KHS cohort. In the discriminatory analysis, the ACC/AHA Equations' White and African-American (AA) models moderately distinguished cases from non-cases, and were similar to the KRPM: For men, the areas under the receiver operating characteristic curve (AUROC) were 0.727 (White model), 0.725 (AA model), and 0.741 (KRPM); for women, the corresponding AUROCs were 0.738, 0.739, and 0.745. Absolute 10-year ASCVD risk for men in the KHS cohort was overestimated by 56.5% (White model) and 74.1% (AA model), while the risk for women was underestimated by 27.9% (White model) and overestimated by 29.1% (AA model). Recalibration of the ACC/AHA Equations did not affect discriminatory ability but improved calibration substantially, especially in men in the White model. Of the three ASCVD risk prediction models, the KRPM showed best calibration. The ACC/AHA Equations should not be directly applied for ASCVD risk prediction in a Korean population. The KRPM showed best predictive ability for ASCVD risk. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. Improving satellite-driven PM2.5 models with Moderate Resolution Imaging Spectroradiometer fire counts in the southeastern U.S.

    PubMed

    Hu, Xuefei; Waller, Lance A; Lyapustin, Alexei; Wang, Yujie; Liu, Yang

    2014-10-16

    Multiple studies have developed surface PM2.5 (particle size less than 2.5 µm in aerodynamic diameter) prediction models using satellite-derived aerosol optical depth as the primary predictor and meteorological and land use variables as secondary variables. To our knowledge, satellite-retrieved fire information has not been used for PM2.5 concentration prediction in statistical models. Fire data could be a useful predictor since fires are significant contributors of PM2.5. In this paper, we examined whether remotely sensed fire count data could improve PM2.5 prediction accuracy in the southeastern U.S. in a spatial statistical model setting. A sensitivity analysis showed that when the radius of the buffer zone centered at each PM2.5 monitoring site reached 75 km, fire count data generally have the greatest predictive power of PM2.5 across the models considered. Cross validation (CV) generated an R2 of 0.69, a mean prediction error of 2.75 µg/m3, and root-mean-square prediction errors (RMSPEs) of 4.29 µg/m3, indicating a good fit between the dependent and predictor variables. A comparison showed that the prediction accuracy was improved more substantially from the nonfire model to the fire model at sites with higher fire counts. With increasing fire counts, CV RMSPE decreased by values up to 1.5 µg/m3, exhibiting a maximum improvement of 13.4% in prediction accuracy. Fire count data were shown to have better performance in southern Georgia and in the spring season due to higher fire occurrence. Our findings indicate that fire count data provide a measurable improvement in PM2.5 concentration estimation, especially in areas and seasons prone to fire events.
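
    For illustration, a small helper showing how a fire-count predictor within a 75 km buffer around a monitoring site could be assembled from detection coordinates; the haversine distance and the example coordinates are stand-ins, not the study's MODIS processing chain.

    ```python
    import numpy as np

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in km between points given in decimal degrees."""
        lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = np.sin(dlat / 2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2)**2
        return 2 * 6371.0 * np.arcsin(np.sqrt(a))

    def fire_counts_in_buffer(site_lat, site_lon, fire_lats, fire_lons, radius_km=75.0):
        """Number of satellite fire detections within a circular buffer around a monitor."""
        d = haversine_km(site_lat, site_lon, np.asarray(fire_lats), np.asarray(fire_lons))
        return int(np.sum(d <= radius_km))

    # Hypothetical monitor in southern Georgia and a handful of MODIS-like fire detections.
    print(fire_counts_in_buffer(31.2, -83.5,
                                fire_lats=[31.0, 31.8, 32.4, 30.9],
                                fire_lons=[-83.2, -83.9, -82.1, -84.6]))
    ```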

  10. Improving satellite-driven PM2.5 models with Moderate Resolution Imaging Spectroradiometer fire counts in the southeastern U.S

    PubMed Central

    Hu, Xuefei; Waller, Lance A.; Lyapustin, Alexei; Wang, Yujie; Liu, Yang

    2017-01-01

    Multiple studies have developed surface PM2.5 (particle size less than 2.5 µm in aerodynamic diameter) prediction models using satellite-derived aerosol optical depth as the primary predictor and meteorological and land use variables as secondary variables. To our knowledge, satellite-retrieved fire information has not been used for PM2.5 concentration prediction in statistical models. Fire data could be a useful predictor since fires are significant contributors of PM2.5. In this paper, we examined whether remotely sensed fire count data could improve PM2.5 prediction accuracy in the southeastern U.S. in a spatial statistical model setting. A sensitivity analysis showed that when the radius of the buffer zone centered at each PM2.5 monitoring site reached 75 km, fire count data generally have the greatest predictive power of PM2.5 across the models considered. Cross validation (CV) generated an R2 of 0.69, a mean prediction error of 2.75 µg/m3, and root-mean-square prediction errors (RMSPEs) of 4.29 µg/m3, indicating a good fit between the dependent and predictor variables. A comparison showed that the prediction accuracy was improved more substantially from the nonfire model to the fire model at sites with higher fire counts. With increasing fire counts, CV RMSPE decreased by values up to 1.5 µg/m3, exhibiting a maximum improvement of 13.4% in prediction accuracy. Fire count data were shown to have better performance in southern Georgia and in the spring season due to higher fire occurrence. Our findings indicate that fire count data provide a measurable improvement in PM2.5 concentration estimation, especially in areas and seasons prone to fire events. PMID:28967648

  11. Numerical Analysis of an Impinging Jet Reactor for the CVD and Gas-Phase Nucleation of Titania

    NASA Technical Reports Server (NTRS)

    Gokoglu, Suleyman A.; Stewart, Gregory D.; Collins, Joshua; Rosner, Daniel E.

    1994-01-01

    We model a cold-wall atmospheric pressure impinging jet reactor to study the CVD and gas-phase nucleation of TiO2 from a titanium tetra-iso-propoxide (TTIP)/oxygen dilute source gas mixture in nitrogen. The mathematical model uses the computational code FIDAP and complements our recent asymptotic theory for high activation energy gas-phase reactions in thin chemically reacting sublayers. The numerical predictions highlight deviations from ideality in various regions inside the experimental reactor. Model predictions of deposition rates and the onset of gas-phase nucleation compare favorably with experiments. Although variable property effects on deposition rates are not significant (approximately 11 percent at 1000 K), the reduction in deposition rates due to Soret transport is substantial (approximately 75 percent at 1000 K).

  12. Lung function parameters improve prediction of VO2peak in an elderly population: The Generation 100 study.

    PubMed

    Hassel, Erlend; Stensvold, Dorthe; Halvorsen, Thomas; Wisløff, Ulrik; Langhammer, Arnulf; Steinshamn, Sigurd

    2017-01-01

    Peak oxygen uptake (VO2peak) is an indicator of cardiovascular health and a useful tool for risk stratification. Direct measurement of VO2peak is resource-demanding and may be contraindicated. There exist several non-exercise models to estimate VO2peak that utilize easily obtainable health parameters, but none of them includes lung function measures or hemoglobin concentrations. We aimed to test whether addition of these parameters could improve prediction of VO2peak compared to an established model that includes age, waist circumference, self-reported physical activity and resting heart rate. We included 1431 subjects aged 69-77 years that completed a laboratory test of VO2peak, spirometry, and a gas diffusion test. Prediction models for VO2peak were developed with multiple linear regression, and goodness of fit was evaluated. Forced expiratory volume in one second (FEV1), diffusing capacity of the lung for carbon monoxide and blood hemoglobin concentration significantly improved the ability of the established model to predict VO2peak. The explained variance of the model increased from 31% to 48% for men and from 32% to 38% for women (p<0.001). FEV1, diffusing capacity of the lungs for carbon monoxide and hemoglobin concentration substantially improved the accuracy of VO2peak prediction when added to an established model in an elderly population.
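
    A sketch of the reported comparison using synthetic data and invented coefficients: an established non-exercise model versus the same model extended with FEV1, diffusing capacity, and hemoglobin, judged by adjusted R-squared (statsmodels assumed).

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 1000  # synthetic stand-in for the Generation 100 sample

    df = pd.DataFrame({
        "age": rng.uniform(69, 77, n),
        "waist": rng.normal(95, 10, n),
        "activity": rng.integers(0, 4, n),   # self-reported physical activity score
        "rest_hr": rng.normal(65, 8, n),
        "fev1": rng.normal(2.8, 0.6, n),     # litres
        "dlco": rng.normal(7.5, 1.5, n),     # mmol/min/kPa
        "hb": rng.normal(14.0, 1.2, n),      # g/dL
    })
    df["vo2peak"] = (60 - 0.3 * df["age"] - 0.1 * df["waist"] + 1.5 * df["activity"]
                     - 0.05 * df["rest_hr"] + 2.0 * df["fev1"] + 0.8 * df["dlco"]
                     + 0.6 * df["hb"] + rng.normal(0, 3.5, n))

    base = smf.ols("vo2peak ~ age + waist + activity + rest_hr", data=df).fit()
    extended = smf.ols("vo2peak ~ age + waist + activity + rest_hr + fev1 + dlco + hb",
                       data=df).fit()
    print("base adj. R2:    ", round(base.rsquared_adj, 3))
    print("extended adj. R2:", round(extended.rsquared_adj, 3))
    ```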

  13. Using quadratic mean diameter and relative spacing index to enhance height-diameter and crown ratio models fitted to longitudinal data

    Treesearch

    Pradip Saud; Thomas B. Lynch; Anup K. C.; James M. Guldin

    2016-01-01

    The inclusion of quadratic mean diameter (QMD) and relative spacing index (RSI) substantially improved the predictive capacity of height–diameter at breast height (d.b.h.) and crown ratio (CR) models, respectively. Data were obtained from 208 permanent plots established in western Arkansas and eastern Oklahoma during 1985–1987 and remeasured for the sixth time (2012–...

  14. Conformational dynamics of a crystalline protein from microsecond-scale molecular dynamics simulations and diffuse X-ray scattering

    DOE PAGES

    Wall, Michael E.; Van Benschoten, Andrew H.; Sauter, Nicholas K.; ...

    2014-12-01

    X-ray diffraction from protein crystals includes both sharply peaked Bragg reflections and diffuse intensity between the peaks. The information in Bragg scattering is limited to what is available in the mean electron density. The diffuse scattering arises from correlations in the electron density variations and therefore contains information about collective motions in proteins. Previous studies using molecular-dynamics (MD) simulations to model diffuse scattering have been hindered by insufficient sampling of the conformational ensemble. To overcome this issue, we have performed a 1.1-μs MD simulation of crystalline staphylococcal nuclease, providing 100-fold more sampling than previous studies. This simulation enables reproducible calculations of the diffuse intensity and predicts functionally important motions, including transitions among at least eight metastable states with different active-site geometries. The total diffuse intensity calculated using the MD model is highly correlated with the experimental data. In particular, there is excellent agreement for the isotropic component of the diffuse intensity, and substantial but weaker agreement for the anisotropic component. The decomposition of the MD model into protein and solvent components indicates that protein–solvent interactions contribute substantially to the overall diffuse intensity. In conclusion, diffuse scattering can be used to validate predictions from MD simulations and can provide information to improve MD models of protein motions.

  15. Conformational dynamics of a crystalline protein from microsecond-scale molecular dynamics simulations and diffuse X-ray scattering

    PubMed Central

    Wall, Michael E.; Van Benschoten, Andrew H.; Sauter, Nicholas K.; Adams, Paul D.; Fraser, James S.; Terwilliger, Thomas C.

    2014-01-01

    X-ray diffraction from protein crystals includes both sharply peaked Bragg reflections and diffuse intensity between the peaks. The information in Bragg scattering is limited to what is available in the mean electron density. The diffuse scattering arises from correlations in the electron density variations and therefore contains information about collective motions in proteins. Previous studies using molecular-dynamics (MD) simulations to model diffuse scattering have been hindered by insufficient sampling of the conformational ensemble. To overcome this issue, we have performed a 1.1-μs MD simulation of crystalline staphylococcal nuclease, providing 100-fold more sampling than previous studies. This simulation enables reproducible calculations of the diffuse intensity and predicts functionally important motions, including transitions among at least eight metastable states with different active-site geometries. The total diffuse intensity calculated using the MD model is highly correlated with the experimental data. In particular, there is excellent agreement for the isotropic component of the diffuse intensity, and substantial but weaker agreement for the anisotropic component. Decomposition of the MD model into protein and solvent components indicates that protein–solvent interactions contribute substantially to the overall diffuse intensity. We conclude that diffuse scattering can be used to validate predictions from MD simulations and can provide information to improve MD models of protein motions. PMID:25453071

  16. Turbulence Model Effects on RANS Simulations of the HIFiRE Flight 2 Ground Test Configurations

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Mankbadi, Mina R.; Vyas, Manan A.

    2014-01-01

    The Wind-US Reynolds-averaged Navier-Stokes solver was applied to the Hypersonic International Flight Research Experimentation (HIFiRE) Flight 2 scramjet ground test configuration. Two test points corresponding to flight Mach numbers of 5.9 and 8.9 were examined. The emphasis was examining turbulence model effects on the prediction of flow path pressures. Three variants of the Menter k-omega turbulence model family were investigated. These include the baseline (BSL) and shear stress transport (SST) as well as a modified SST model where the shear stress limiter was altered. Variations in the turbulent Schmidt number were also considered. Choice of turbulence model had a substantial effect on prediction of the flow path pressures. The BSL model produced the highest pressures and the SST model produced the lowest pressures. As expected, the settings for the turbulent Schmidt number also had significant effects on predicted pressures. Small values for the turbulent Schmidt number enabled more rapid mass transfer, faster combustion, and in turn higher flowpath pressures. Optimal settings for turbulence model and turbulent Schmidt number were found to be rather case dependent, as has been concluded in other scramjet investigations.

  17. Testing and extension of a sea lamprey feeding model

    USGS Publications Warehouse

    Cochran, Philip A.; Swink, William D.; Kinziger, Andrew P.

    1999-01-01

    A previous model of feeding by sea lamprey Petromyzon marinus predicted energy intake and growth by lampreys as a function of lamprey size, host size, and duration of feeding attachments, but it was applicable only to lampreys feeding at 10°C and it was tested against only a single small data set of limited scope. We extended the model to other temperatures and tested it against an extensive data set (more than 700 feeding bouts) accumulated during experiments with captive sea lampreys. Model predictions of instantaneous growth were highly correlated with observed growth, and a partitioning of mean squared error between model predictions and observed results showed that 88.5% of the variance was due to random variation rather than to systematic errors. However, deviations between observed and predicted values varied substantially, especially for short feeding bouts. Predicted and observed growth trajectories of individual lampreys during multiple feeding bouts during the summer tended to correspond closely, but predicted growth was generally much higher than observed growth late in the year. This suggests the possibility that large overwintering lampreys reduce their feeding rates while attached to hosts. Seasonal or size-related shifts in the fate of consumed energy may provide an alternative explanation. The lamprey feeding model offers great flexibility in assessing growth of captive lampreys within various experimental protocols (e.g., different host species or thermal regimes) because it controls for individual differences in feeding history.

  18. Intelligent Decisions Need Intelligent Choice of Models and Data - a Bayesian Justifiability Analysis for Models with Vastly Different Complexity

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.

    2016-12-01

    Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
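
    A toy analogue of the justifiability analysis using polynomial models of increasing complexity rather than groundwater models: synthetic data generated by each candidate model are scored with Bayesian-model-averaging weights (here a crude BIC approximation to the log evidence), and the resulting matrix shows which level of complexity the available data can actually discriminate. Everything below is a placeholder for the idea, not the hydraulic tomography setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(0, 1, 40)
    degrees = [0, 1, 2, 3]   # candidate models of increasing complexity
    sigma = 0.1              # assumed measurement noise

    def log_evidence_bic(y, degree):
        """Crude BIC approximation to the log model evidence for a polynomial fit."""
        k = degree + 1
        coef = np.polyfit(x, y, degree)
        resid = y - np.polyval(coef, x)
        loglik = -0.5 * np.sum((resid / sigma)**2)
        return loglik - 0.5 * k * np.log(len(y))

    def bma_weights(y):
        le = np.array([log_evidence_bic(y, d) for d in degrees])
        w = np.exp(le - le.max())
        return w / w.sum()

    # "Model confusion matrix": rows = data-generating model, columns = mean BMA weight.
    confusion = np.zeros((len(degrees), len(degrees)))
    for i, d_true in enumerate(degrees):
        for _ in range(200):  # synthetic data realizations from the generating model
            coef_true = rng.normal(size=d_true + 1)
            y = np.polyval(coef_true, x) + rng.normal(0, sigma, x.size)
            confusion[i] += bma_weights(y)
        confusion[i] /= 200
    print(np.round(confusion, 2))
    ```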

  19. Robustness of intra urban land-use regression models for ultrafine particles and black carbon based on mobile monitoring.

    PubMed

    Kerckhoffs, Jules; Hoek, Gerard; Vlaanderen, Jelle; van Nunen, Erik; Messier, Kyle; Brunekreef, Bert; Gulliver, John; Vermeulen, Roel

    2017-11-01

    Land-use regression (LUR) models for ultrafine particles (UFP) and Black Carbon (BC) in urban areas have been developed using short-term stationary monitoring or mobile platforms in order to capture the high variability of these pollutants. However, little is known about the comparability of predictions of mobile and short-term stationary models and especially the validity of these models for assessing residential exposures and the robustness of model predictions developed in different campaigns. We used an electric car to collect mobile measurements (n = 5236 unique road segments) and short-term stationary measurements (3 × 30min, n = 240) of UFP and BC in three Dutch cities (Amsterdam, Utrecht, Maastricht) in 2014-2015. Predictions of LUR models based on mobile measurements were compared to (i) measured concentrations at the short-term stationary sites, (ii) LUR model predictions based on short-term stationary measurements at 1500 random addresses in the three cities, (iii) externally obtained home outdoor measurements (3 × 24h samples; n = 42) and (iv) predictions of a LUR model developed based upon a 2013 mobile campaign in two cities (Amsterdam, Rotterdam). Despite the poor model R2 of 15%, the ability of mobile UFP models to predict measurements with longer averaging time increased substantially from 36% for short-term stationary measurements to 57% for home outdoor measurements. In contrast, the mobile BC model only predicted 14% of the variation in the short-term stationary sites and also 14% of the home outdoor sites. Models based upon mobile and short-term stationary monitoring provided fairly highly correlated predictions of UFP concentrations at 1500 randomly selected addresses in the three Dutch cities (R2 = 0.64). We found higher UFP predictions (of about 30%) based on mobile models as opposed to short-term model predictions and home outdoor measurements with no clear geospatial patterns. The mobile model for UFP was stable over different settings as the model predicted concentration levels highly correlated to predictions made by a previously developed LUR model with another spatial extent and in a different year at the 1500 random addresses (R2 = 0.80). In conclusion, mobile monitoring provided robust LUR models for UFP, valid to use in epidemiological studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Modeling the 21 August 2017 Total Solar Eclipse: Prediction Results and New Techniques

    NASA Astrophysics Data System (ADS)

    Downs, C.; Mikic, Z.; Caplan, R. M.; Linker, J.; Lionello, R.; Torok, T.; Titov, V. S.; Riley, P.; MacKay, D.; Upton, L.

    2017-12-01

    As has been our tradition for past solar eclipses, we conducted a high resolution magnetohydrodynamic (MHD) simulation of the corona to predict the appearance of the 21 August 2017 solar eclipse. In this presentation, we discuss our model setup and our forward modeled predictions for the corona's appearance, including images of polarized brightness and EUV/soft X-Ray emission. We show how the combination of forward modeled observables and knowledge of the underlying magnetic field from the model can be used to interpret the structures seen during the eclipse. We also discuss two new features added to this year's prediction. First, in an attempt to improve the morphological shape of streamers in the low corona, we energize the large-scale magnetic field by emerging shear and canceling flux within filament channels. The handedness of the shear is deduced from a magnetofrictional model, which is driven by the evolving photospheric field produced by the Advective Flux Transport model. Second, we apply our new wave-turbulence-driven (WTD) model for coronal heating. This model has substantially fewer free parameters than previous empirical heating models, but is inherently sensitive to the 3D geometry and connectivity of the magnetic field--a key property for modeling the thermal-magnetic structure of the corona. We examine the effect of these considerations on forward modeled observables, and present them in the context of our final 2017 eclipse prediction (www.predsci.com/corona/aug2017eclipse). Research supported by NASA's Heliophysics Supporting Research and Living With a Star Programs.

  1. Glomerular structural-functional relationship models of diabetic nephropathy are robust in type 1 diabetic patients.

    PubMed

    Mauer, Michael; Caramori, Maria Luiza; Fioretto, Paola; Najafian, Behzad

    2015-06-01

    Studies of structural-functional relationships have improved understanding of the natural history of diabetic nephropathy (DN). However, in order to consider structural end points for clinical trials, the robustness of the resultant models needs to be verified. This study examined whether structural-functional relationship models derived from a large cohort of type 1 diabetic (T1D) patients with a wide range of renal function are robust. The predictability of models derived from multiple regression analysis and piecewise linear regression analysis was also compared. T1D patients (n = 161) with research renal biopsies were divided into two equal groups matched for albumin excretion rate (AER). Models to explain AER and glomerular filtration rate (GFR) by classical DN lesions in one group (T1D-model, or T1D-M) were applied to the other group (T1D-test, or T1D-T) and regression analyses were performed. T1D-M-derived models explained 70 and 63% of AER variance and 32 and 21% of GFR variance in T1D-M and T1D-T, respectively, supporting the substantial robustness of the models. Piecewise linear regression analyses substantially improved predictability of the models with 83% of AER variance and 66% of GFR variance explained by classical DN glomerular lesions alone. These studies demonstrate that DN structural-functional relationship models are robust, and if appropriate models are used, glomerular lesions alone explain a major proportion of AER and GFR variance in T1D patients. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
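
    A sketch of the piecewise (two-segment) linear regression idea on synthetic data: a hinge basis with a grid-searched breakpoint is compared with a single straight line via explained variance. The lesion score, breakpoint, and noise level are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical structural lesion score (x) vs a functional outcome (y)
    # with a change in slope partway through the range.
    x = rng.uniform(0, 10, 150)
    y = np.where(x < 4, 0.2 * x, 0.8 + 1.1 * (x - 4)) + rng.normal(0, 0.4, x.size)

    def piecewise_fit(x, y, breakpoint):
        """Continuous two-segment linear fit with a fixed breakpoint (hinge basis)."""
        X = np.column_stack([np.ones_like(x), x, np.maximum(0, x - breakpoint)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta, np.sum((y - X @ beta)**2)

    # Grid-search the breakpoint and compare against a single straight line.
    candidates = np.linspace(1, 9, 81)
    best_bp = min(candidates, key=lambda b: piecewise_fit(x, y, b)[1])
    _, sse_pw = piecewise_fit(x, y, best_bp)
    X_lin = np.column_stack([np.ones_like(x), x])
    beta_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)
    sse_lin = np.sum((y - X_lin @ beta_lin)**2)
    sst = np.sum((y - y.mean())**2)
    print(f"breakpoint ≈ {best_bp:.2f}, "
          f"R2 linear = {1 - sse_lin / sst:.2f}, R2 piecewise = {1 - sse_pw / sst:.2f}")
    ```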

  2. Using prediction uncertainty analysis to design hydrologic monitoring networks: Example applications from the Great Lakes water availability pilot project

    USGS Publications Warehouse

    Fienen, Michael N.; Doherty, John E.; Hunt, Randall J.; Reeves, Howard W.

    2010-01-01

    The importance of monitoring networks for resource-management decisions is becoming more recognized, in both theory and application. Quantitative computer models provide a science-based framework to evaluate the efficacy and efficiency of existing and possible future monitoring networks. In the study described herein, two suites of tools were used to evaluate the worth of new data for specific predictions, which in turn can support efficient use of resources needed to construct a monitoring network. The approach evaluates the uncertainty of a model prediction and, by using linear propagation of uncertainty, estimates how much uncertainty could be reduced if the model were calibrated with additional information (increased a priori knowledge of parameter values or new observations). The theoretical underpinnings of the two suites of tools addressing this technique are compared, and their application to a hypothetical model based on a local model inset into the Great Lakes Water Availability Pilot model is described. Results show that meaningful guidance for monitoring network design can be obtained by using the methods explored. The validity of this guidance depends substantially on the parameterization as well; hence, parameterization must be considered not only when designing the parameter-estimation paradigm but also, importantly, when designing the prediction-uncertainty paradigm.
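
    A compact sketch of the linear (first-order) uncertainty-propagation idea behind such data-worth tools: the posterior parameter covariance implied by a set of observation sensitivities is propagated to a prediction, and candidate new observations are ranked by how much they would shrink the prediction variance. Sensitivities, noise level, and the prior are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    n_par = 6
    J_base = rng.normal(size=(10, n_par))  # sensitivities of existing observations to parameters
    j_pred = rng.normal(size=n_par)        # sensitivity of the prediction to parameters
    C_prior = np.eye(n_par)                # prior parameter covariance
    sigma_obs = 0.1                        # observation noise standard deviation

    def prediction_variance(J):
        """First-order (linear) posterior prediction variance given sensitivities J."""
        C_post = np.linalg.inv(J.T @ J / sigma_obs**2 + np.linalg.inv(C_prior))
        return j_pred @ C_post @ j_pred

    base_var = prediction_variance(J_base)

    # Data worth of each candidate new observation: how much it would shrink the
    # prediction variance if added to the calibration data set.
    candidates = rng.normal(size=(5, n_par))
    for i, row in enumerate(candidates):
        new_var = prediction_variance(np.vstack([J_base, row]))
        print(f"candidate {i}: variance reduction = {100 * (1 - new_var / base_var):.1f}%")
    ```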

  3. Ability of commercially available dairy ration programs to predict duodenal flows of protein and essential amino acids in dairy cows.

    PubMed

    Pacheco, D; Patton, R A; Parys, C; Lapierre, H

    2012-02-01

    The objective of this analysis was to compare the rumen submodel predictions of 4 commonly used dairy ration programs to observed values of duodenal flows of crude protein (CP), protein fractions, and essential AA (EAA). The literature was searched and 40 studies, including 154 diets, were used to compare observed values with those predicted by AminoCow (AC), Agricultural Modeling and Training Systems (AMTS), Cornell-Penn-Miner (CPM), and National Research Council 2001 (NRC) models. The models were evaluated based on their ability to predict the mean, their root mean square prediction error (RMSPE), error bias, and adequacy of regression equations for each protein fraction. The models predicted the mean duodenal CP flow within 5%, with more than 90% of the variation due to random disturbance. The models also predicted within 5% the mean microbial CP flow except CPM, which overestimated it by 27%. Only NRC, however, predicted mean rumen-undegraded protein (RUP) flows within 5%, whereas AC and AMTS underpredicted it by 8 to 9% and CPM by 24%. Regarding duodenal flows of individual AA, across all diets, CPM predicted substantially greater (>10%) mean flows of Arg, His, Ile, Met, and Lys; AMTS predicted greater flow for Arg and Met, whereas AC and NRC estimations were, on average, within 10% of observed values. Overpredictions by the CPM model were mainly related to mean bias, whereas the NRC model had the highest proportion of bias in random disturbance for flows of EAA. Models tended to predict mean flows of EAA more accurately on corn silage and alfalfa diets than on grass-based diets, more accurately on corn grain-based diets than on non-corn-based diets, and finally more accurately in the mid range of diet types. The 4 models were accurate at predicting mean dry matter intake. The AC, AMTS, and NRC models were all sufficiently accurate to be used for balancing EAA in dairy rations under field conditions. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. Development of braided rope seals for hypersonic engine applications. Part 2: Flow modeling

    NASA Technical Reports Server (NTRS)

    Mutharasan, Rajakkannu; Steinetz, Bruce M.; Tao, Xiaoming; Ko, Frank

    1991-01-01

    Two models based on the Kozeny-Carman equation were developed to analyze the fluid flow through a new class of braided rope seals under development for advanced hypersonic engines. A hybrid seal geometry consisting of a braided sleeve and a substantial amount of longitudinal fibers with high packing density was selected for development based on its low leakage rates. The models developed allow prediction of the gas leakage rate as a function of fiber diameter, fiber packing density, gas properties, and pressure drop across the seal.
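
    A rough sketch of a Kozeny-Carman/Darcy leakage estimate for a fibrous seal cross-section. The packed-bed constant of 180, the incompressible-flow assumption, and every number below are placeholders; the braided-seal models described here additionally account for fiber architecture and compressible gas flow.

    ```python
    def kozeny_carman_permeability(d_fiber_m, porosity, kc_const=180.0):
        """Kozeny-Carman permeability (m^2). The constant 180 is the classic packed-bed
        value; fibrous seal structures typically use a geometry-specific constant."""
        return d_fiber_m**2 * porosity**3 / (kc_const * (1.0 - porosity)**2)

    def darcy_leakage(d_fiber_m, porosity, area_m2, dp_pa, mu_pa_s, length_m):
        """Volumetric leakage rate (m^3/s) through the seal via Darcy's law.
        Treats the gas as incompressible, which is a simplification."""
        K = kozeny_carman_permeability(d_fiber_m, porosity)
        return K * area_m2 * dp_pa / (mu_pa_s * length_m)

    # Hypothetical numbers: 10-micron fibers, 20% porosity (high packing density),
    # 0.5 MPa pressure drop across a 6 mm wide seal, hot-gas viscosity ~4e-5 Pa*s.
    q = darcy_leakage(d_fiber_m=10e-6, porosity=0.20, area_m2=1e-4,
                      dp_pa=5e5, mu_pa_s=4e-5, length_m=6e-3)
    print(f"predicted leakage ≈ {q * 1e6:.2f} cm^3/s")
    ```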

  5. Sociodemographic, perceived and objective need indicators of mental health treatment use and treatment-seeking intentions among primary care medical patients.

    PubMed

    Elhai, Jon D; Voorhees, Summer; Ford, Julian D; Min, Kyeong Sam; Frueh, B Christopher

    2009-01-30

    We explored sociodemographic and illness/need associations with both recent mental healthcare utilization intensity and self-reported behavioral intentions to seek treatment. Data were examined from a community sample of 201 participants presenting for medical appointments at a Midwestern U.S. primary care clinic, in a cross-sectional survey study. Using non-linear regression analyses accounting for the excess of zero values in treatment visit counts, we found that both sociodemographic and illness/need models were significantly predictive of both recent treatment utilization intensity and intentions to seek treatment. Need models added substantial variance in prediction, above and beyond sociodemographic models. Variables with the greatest predictive role in explaining past treatment utilization intensity were greater depression severity, perceived need for treatment, older age, and lower income. Robust variables in predicting intentions to seek treatment were greater depression severity, perceived need for treatment, and more positive treatment attitudes. This study extends research findings on mental health treatment utilization, specifically addressing medical patients and using statistical methods appropriate to examining treatment visit counts, and demonstrates the importance of both objective and subjective illness/need variables in predicting recent service use intensity and intended future utilization.
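
    A sketch of one common way to handle the excess zeros the abstract mentions: a zero-inflated Poisson regression of visit counts on need and sociodemographic variables (statsmodels 0.9 or later assumed; the data and predictors below are simulated).

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(6)
    n = 500

    # Hypothetical need/sociodemographic predictors of mental health visit counts.
    df = pd.DataFrame({
        "depression": rng.normal(0, 1, n),
        "perceived_need": rng.integers(0, 2, n),
        "age": rng.normal(45, 12, n),
    })
    p_user = 1 / (1 + np.exp(-(-1.0 + 1.2 * df["perceived_need"] + 0.8 * df["depression"])))
    lam = np.exp(0.3 + 0.5 * df["depression"])
    df["visits"] = np.where(rng.random(n) < p_user, rng.poisson(lam), 0)

    X = sm.add_constant(df[["depression", "perceived_need", "age"]])
    zip_fit = ZeroInflatedPoisson(df["visits"], X, exog_infl=X, inflation="logit").fit(maxiter=200)
    print(zip_fit.summary())
    ```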

  6. Tropical and Extratropical Cyclone Damages under Climate Change

    NASA Astrophysics Data System (ADS)

    Ranson, M.; Kousky, C.; Ruth, M.; Jantarasami, L.; Crimmins, A.; Tarquinio, L.

    2014-12-01

    This paper provides the first quantitative synthesis of the rapidly growing literature on future tropical and extratropical cyclone losses under climate change. We estimate a probability distribution for the predicted impact of changes in global surface air temperatures on future storm damages, using an ensemble of 296 estimates of the temperature-damage relationship from twenty studies. Our analysis produces three main empirical results. First, we find strong but not conclusive support for the hypothesis that climate change will cause damages from tropical cyclones and wind storms to increase, with most models (84 and 92 percent, respectively) predicting higher future storm damages due to climate change. Second, there is substantial variation in projected changes in losses across regions. Potential changes in damages are greatest in the North Atlantic basin, where the multi-model average predicts that a 2.5°C increase in global surface air temperature would cause hurricane damages to increase by 62 percent. The ensemble predictions for Western North Pacific tropical cyclones and European wind storms (extratropical cyclones) are approximately one third of that magnitude. Finally, our analysis shows that existing models of storm damages under climate change generate a wide range of predictions, ranging from moderate decreases to very large increases in losses.

  7. Potential Predictability of U.S. Summer Climate with "Perfect" Soil Moisture

    NASA Technical Reports Server (NTRS)

    Yang, Fanglin; Kumar, Arun; Lau, K.-M.

    2004-01-01

    The potential predictability of surface-air temperature and precipitation over the United States continent was assessed for a GCM forced by observed sea surface temperatures and an estimate of observed ground soil moisture contents. The latter was obtained by substituting the GCM simulated precipitation, which is used to drive the GCM's land-surface component, with observed pentad-mean precipitation at each time step of the model's integration. With this substitution, the simulated soil moisture correlates well with an independent estimate of observed soil moisture in all seasons over the entire US continent. Significant enhancements in the predictability of surface-air temperature and precipitation were found in boreal late spring and summer over the US continent. Anomalous pattern correlations of precipitation and surface-air temperature over the US continent in the June-July-August season averaged for the 1979-2000 period increased from 0.01 and 0.06 for the GCM simulations without precipitation substitution to 0.23 and 0.31, respectively, for the simulations with precipitation substitution. Results provide an estimate for the limits of potential predictability if soil moisture variability is to be perfectly predicted. However, this estimate may be model dependent, and needs to be substantiated by other modeling groups.

  8. Predicting cognitive function of the Malaysian elderly: a structural equation modelling approach.

    PubMed

    Foong, Hui Foh; Hamid, Tengku Aizan; Ibrahim, Rahimah; Haron, Sharifah Azizah; Shahar, Suzana

    2018-01-01

    The aim of this study was to identify the predictors of the elderly's cognitive function based on biopsychosocial and cognitive reserve perspectives. The study included 2322 community-dwelling elderly in Malaysia, randomly selected through a multi-stage proportional cluster random sampling from Peninsular Malaysia. The elderly were surveyed on socio-demographic information, biomarkers, psychosocial status, disability, and cognitive function. A biopsychosocial model of cognitive function was developed to test variables' predictive power on cognitive function. Statistical analyses were performed using SPSS (version 15.0) in conjunction with Analysis of Moment Structures Graphics (AMOS 7.0). The estimated theoretical model fitted the data well. Psychosocial stress and metabolic syndrome (MetS) negatively predicted cognitive function and psychosocial stress appeared as a main predictor. Socio-demographic characteristics, except gender, also had significant effects on cognitive function. However, disability failed to predict cognitive function. Several factors together may predict cognitive function in the Malaysian elderly population, and the variance accounted for is large enough to be considered substantial. The key factor associated with the elderly's cognitive function seems to be psychosocial well-being. Thus, psychosocial well-being should be included in the elderly assessment, apart from medical conditions, in both clinical and community settings.

  9. Predicting locations of rare aquatic species’ habitat with a combination of species-specific and assemblage-based models

    USGS Publications Warehouse

    McKenna, James E.; Carlson, Douglas M.; Payne-Wynne, Molly L.

    2013-01-01

    Aim: Rare aquatic species are a substantial component of biodiversity, and their conservation is a major objective of many management plans. However, they are difficult to assess, and their optimal habitats are often poorly known. Methods to effectively predict the likely locations of suitable rare aquatic species habitats are needed. We combine two modelling approaches to predict occurrence and general abundance of several rare fish species. Location: Allegheny watershed of western New York State (USA). Methods: Our method used two empirical neural network modelling approaches (species specific and assemblage based) to predict stream-by-stream occurrence and general abundance of rare darters, based on broad-scale habitat conditions. Species-specific models were developed for longhead darter (Percina macrocephala), spotted darter (Etheostoma maculatum) and variegate darter (Etheostoma variatum) in the Allegheny drainage. An additional model predicted the type of rare darter-containing assemblage expected in each stream reach. Predictions from both models were then combined inclusively and exclusively and compared with additional independent data. Results: Example rare darter predictions demonstrate the method's effectiveness. Models performed well (R2 ≥ 0.79), identified where suitable darter habitat was most likely to occur, and predictions matched well to those of collection sites. Additional independent data showed that the most conservative (exclusive) model slightly underestimated the distributions of these rare darters or predictions were displaced by one stream reach, suggesting that new darter habitat types were detected in the later collections. Main conclusions: Broad-scale habitat variables can be used to effectively identify rare species' habitats. Combining species-specific and assemblage-based models enhances our ability to make use of the sparse data on rare species and to identify habitat units most likely and least likely to support those species. This hybrid approach may assist managers with the prioritization of habitats to be examined or conserved for rare species.

  10. Ensemble Canonical Correlation Prediction of Seasonal Precipitation Over the US

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.; Kim, Kyu-Myong; Shen, Samuel; Einaudi, Franco (Technical Monitor)

    2001-01-01

    This paper presents preliminary results of an ensemble canonical correlation (ECC) prediction scheme developed at the Climate and Radiation Branch, NASA/Goddard Space Flight Center for determining the potential predictability of regional precipitation, and for climate downscaling studies. The scheme is tested on seasonal hindcasts of anomalous precipitation over the continental United States using global sea surface temperature (SST) for 1951-2000. To maximize the forecast skill derived from SST, the world ocean is divided into nonoverlapping sectors. The canonical SST modes for each sector are used as the predictor for the ensemble hindcasts. Results show that the ECC yields a substantial (10-25%) increase in prediction skills for all regions of the US and for all seasons compared to traditional CCA prediction schemes. For the boreal winter, the tropical Pacific contributes the largest potential predictability to precipitation in the southwestern and southeastern regions, while the North Pacific and the North Atlantic are responsible for enhanced forecast skills in the Pacific Northwest, the northern Great Plains and Ohio Valley. Most importantly, the ECC increases skill for summertime precipitation prediction and substantially reduces the spring predictability barrier over all regions of the US continent. Besides SST, the ECC is designed with the flexibility to include any number of predictor fields, such as soil moisture, snow cover and regional data. Moreover, the ECC forecasts can be applied to other climate subsystems and, in conjunction with further diagnostic or model studies, will enable a better understanding of the dynamic links between climate variations and precipitation, not only for the US, but also for other regions of the world.
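
    A minimal sketch of a canonical-correlation hindcast of precipitation anomalies from SST predictors for a single ocean sector, using scikit-learn's CCA; an ensemble scheme along these lines would repeat the fit per sector and combine the sector forecasts. All fields are synthetic.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(7)
    n_years = 50

    # Hypothetical predictor/predictand fields: SST modes for one ocean sector and
    # seasonal precipitation anomalies at a handful of US grid points.
    sst = rng.normal(size=(n_years, 8))
    true_map = rng.normal(size=(8, 5))
    precip = sst @ true_map + rng.normal(scale=2.0, size=(n_years, 5))

    train, test = slice(0, 40), slice(40, 50)
    cca = CCA(n_components=3).fit(sst[train], precip[train])
    precip_hat = cca.predict(sst[test])

    skill = [np.corrcoef(precip[test][:, j], precip_hat[:, j])[0, 1] for j in range(5)]
    print("per-point hindcast correlation:", np.round(skill, 2))
    # An ensemble CCA scheme would repeat this per ocean sector and average or
    # weight the individual sector forecasts.
    ```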

  11. Dynamic wake prediction and visualization with uncertainty analysis

    NASA Technical Reports Server (NTRS)

    Holforty, Wendy L. (Inventor); Powell, J. David (Inventor)

    2005-01-01

    A dynamic wake avoidance system utilizes aircraft and atmospheric parameters readily available in flight to model and predict airborne wake vortices in real time. A novel combination of algorithms allows for a relatively simple yet robust wake model to be constructed based on information extracted from a broadcast. The system predicts the location and movement of the wake based on the nominal wake model and correspondingly performs an uncertainty analysis on the wake model to determine a wake hazard zone (no fly zone), which comprises a plurality of wake planes, each moving independently from another. The system selectively adjusts dimensions of each wake plane to minimize spatial and temporal uncertainty, thereby ensuring that the actual wake is within the wake hazard zone. The predicted wake hazard zone is communicated in real time directly to a user via a realistic visual representation. In an example, the wake hazard zone is visualized on a 3-D flight deck display to enable a pilot to visualize or see a neighboring aircraft as well as its wake. The system substantially enhances the pilot's situational awareness and allows for a further safe decrease in spacing, which could alleviate airport and airspace congestion.

  12. Development of a Melanoma Risk Prediction Model Incorporating MC1R Genotype and Indoor Tanning Exposure: Impact of Mole Phenotype on Model Performance

    PubMed Central

    Penn, Lauren A.; Qian, Meng; Zhang, Enhan; Ng, Elise; Shao, Yongzhao; Berwick, Marianne; Lazovich, DeAnn; Polsky, David

    2014-01-01

    Background Identifying individuals at increased risk for melanoma could potentially improve public health through targeted surveillance and early detection. Studies have separately demonstrated significant associations between melanoma risk, melanocortin receptor (MC1R) polymorphisms, and indoor ultraviolet light (UV) exposure. Existing melanoma risk prediction models do not include these factors; therefore, we investigated their potential to improve the performance of a risk model. Methods Using 875 melanoma cases and 765 controls from the population-based Minnesota Skin Health Study we compared the predictive ability of a clinical melanoma risk model (Model A) to an enhanced model (Model F) using receiver operating characteristic (ROC) curves. Model A used self-reported conventional risk factors including mole phenotype categorized as “none”, “few”, “some” or “many” moles. Model F added MC1R genotype and measures of indoor and outdoor UV exposure to Model A. We also assessed the predictive ability of these models in subgroups stratified by mole phenotype (e.g. nevus-resistant (“none” and “few” moles) and nevus-prone (“some” and “many” moles)). Results Model A (the reference model) yielded an area under the ROC curve (AUC) of 0.72 (95% CI = 0.69, 0.74). Model F was improved with an AUC = 0.74 (95% CI = 0.71–0.76, p<0.01). We also observed substantial variations in the AUCs of Models A & F when examined in the nevus-prone and nevus-resistant subgroups. Conclusions These results demonstrate that adding genotypic information and environmental exposure data can increase the predictive ability of a clinical melanoma risk model, especially among nevus-prone individuals. PMID:25003831

  13. Modeling the dissipation rate in rotating turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Raj, Rishi; Gatski, Thomas B.

    1990-01-01

    A variety of modifications to the modeled dissipation rate transport equation that have been proposed during the past two decades to account for rotational strains are examined. The models are subjected to two crucial test cases: the decay of isotropic turbulence in a rotating frame and homogeneous shear flow in a rotating frame. It is demonstrated that these modifications do not yield substantially improved predictions for these two test cases and in many instances give rise to unphysical behavior. An alternative proposal, based on the use of the tensor dissipation rate, is made for the development of improved models.

  14. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    NASA Technical Reports Server (NTRS)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large- and small-scale hydrologic developments. Combining the available remote sensing imagery with available map information in the landscape model provides a basis for substantial improvements in these applications.

  15. Blind predictions of protein interfaces by docking calculations in CAPRI.

    PubMed

    Lensink, Marc F; Wodak, Shoshana J

    2010-11-15

    Reliable prediction of the amino acid residues involved in protein-protein interfaces can provide valuable insight into protein function and inform mutagenesis studies and drug design applications. A fast-growing number of methods are being proposed for predicting protein interfaces, using structural information, energetic criteria, or sequence conservation, or by integrating multiple criteria and approaches. Overall, however, their performance remains limited, especially when applied to nonobligate protein complexes, where the individual components are also stable on their own. Here, we evaluate interface predictions derived from protein-protein docking calculations. To this end we measure the overlap between the interfaces in models of protein complexes submitted by 76 participants in CAPRI (Critical Assessment of Predicted Interactions) and those of 46 observed interfaces in 20 CAPRI targets corresponding to nonobligate complexes. Our evaluation considers multiple models for each target interface, submitted by different participants using a variety of docking methods. Although this results in substantial variability in prediction performance across participants and targets, clear trends emerge. Docking methods that perform best in our evaluation predict interfaces with average recall and precision levels of about 60%, for a small majority (60%) of the analyzed interfaces. These levels are significantly higher than those obtained for nonobligate complexes by most extant interface prediction methods. We find furthermore that a sizable fraction (24%) of the interfaces in models ranked as incorrect in the CAPRI assessment are actually correctly predicted (recall and precision ≥50%), and that these models contribute 70% of the correct docking-based interface predictions overall. Our analysis shows that docking methods are much more successful in identifying interfaces than in predicting complexes, and suggests that these methods have excellent potential for addressing the interface prediction challenge. © 2010 Wiley-Liss, Inc.
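
    Illustrative sketch (not the CAPRI evaluation software): the recall/precision criterion used above reduces to set overlap between predicted and observed interface residues. The residue identifiers below are hypothetical.

      # Hedged sketch: recall and precision of a predicted protein-protein interface,
      # treating the interface as a set of residue identifiers (e.g., "A:R45").
      def interface_recall_precision(predicted, observed):
          predicted, observed = set(predicted), set(observed)
          true_pos = predicted & observed
          recall = len(true_pos) / len(observed) if observed else 0.0
          precision = len(true_pos) / len(predicted) if predicted else 0.0
          return recall, precision

      # Toy example: a model recovers 3 of 5 observed interface residues and adds 1 extra.
      observed = ["A:R45", "A:K48", "A:Y52", "B:D10", "B:F14"]
      predicted = ["A:R45", "A:K48", "B:D10", "B:E11"]
      r, p = interface_recall_precision(predicted, observed)
      print(f"recall={r:.2f} precision={p:.2f}")   # a prediction counts as correct if both >= 0.5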

  16. Afterbody External Aerodynamic and Performance Prediction at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Carlson, John R.

    1999-01-01

    This CFD study concludes that the differences between the flow at flight Reynolds number and the flow in a sub-scale wind tunnel test can be substantial for this particular nozzle boattail geometry. The earlier study was performed using a linear k-epsilon turbulence model; the present study uses the Girimaji formulation of an algebraic Reynolds stress model.

  17. Height-growth response to climatic changes differs among populations of Douglas-fir: A novel analysis of historic data

    Treesearch

    Laura P. Leites; Andrew P. Robinson; Gerald E. Rehfeldt; John D. Marshall; Nicholas L. Crookston

    2012-01-01

    Projected climate change will affect existing forests, as substantial changes are predicted to occur during their life spans. Species that have ample intraspecific genetic differentiation, such as Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco), are expected to display population-specific growth responses to climate change. Using a mixed-effects modeling approach,...

  18. Nonlinear Recurrent Neural Network Predictive Control for Energy Distribution of a Fuel Cell Powered Robot

    PubMed Central

    Chen, Qihong; Long, Rong; Quan, Shuhai

    2014-01-01

    This paper presents a neural network predictive control strategy to optimize power distribution for a fuel cell/ultracapacitor hybrid power system of a robot. We model the nonlinear power system with a time-variant autoregressive moving average model with exogenous input (ARMAX), using a recurrent neural network to represent the complicated coefficients of the ARMAX model. Because the dynamics of the system are viewed in this framework as operating-state-dependent, time-varying local linear behavior, a linear constrained model predictive control algorithm is developed to optimize the power splitting between the fuel cell and ultracapacitor. The proposed algorithm significantly simplifies implementation of the controller and can handle multiple constraints, such as limiting substantial fluctuation of the fuel cell current. Experiment and simulation results demonstrate that the control strategy can optimally split power between the fuel cell and ultracapacitor and limit the rate of change of the fuel cell current, thereby extending the lifetime of the fuel cell. PMID:24707206
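
    Illustrative sketch (a generic stand-in, not the authors' controller): a linear, constrained receding-horizon power split between fuel cell and ultracapacitor can be posed as a small optimization. The demand profile, limits, and weights below are invented, and the ARMAX/recurrent-network plant model is not reproduced.

      # Hedged sketch of a constrained receding-horizon power split between a fuel cell (FC)
      # and an ultracapacitor (UC). All numbers are invented for illustration.
      import numpy as np
      from scipy.optimize import minimize

      N, dt = 5, 1.0                    # horizon length [steps], step size [s]
      p_demand = np.array([30., 45., 60., 40., 35.])   # predicted load power [W]
      p_fc_max, dp_fc_max = 50.0, 8.0   # FC power limit and per-step slew limit
      e_uc0, e_uc_min, e_uc_max = 150.0, 50.0, 300.0   # UC energy state and bounds [J]
      p_fc_prev = 30.0                  # FC power applied at the previous step

      def slew_of(p_fc):
          return np.diff(np.concatenate(([p_fc_prev], p_fc)))

      def soc_of(p_fc):
          return e_uc0 - dt * np.cumsum(p_demand - p_fc)   # UC covers the residual demand

      def cost(p_fc):
          p_uc = p_demand - p_fc
          return np.sum(p_uc**2) + 10.0 * np.sum(slew_of(p_fc)**2)

      cons = [{"type": "ineq", "fun": lambda p: dp_fc_max - slew_of(p)},   # |slew| <= dp_fc_max
              {"type": "ineq", "fun": lambda p: dp_fc_max + slew_of(p)},
              {"type": "ineq", "fun": lambda p: soc_of(p) - e_uc_min},     # UC energy bounds
              {"type": "ineq", "fun": lambda p: e_uc_max - soc_of(p)}]

      res = minimize(cost, x0=np.full(N, p_fc_prev), bounds=[(0.0, p_fc_max)] * N,
                     constraints=cons, method="SLSQP")
      print("FC power plan:", np.round(res.x, 1))   # apply only the first step, then re-solve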

  19. Forecasting state-level premature deaths from alcohol, drugs, and suicides using Google Trends data.

    PubMed

    Parker, Jason; Cuthbertson, Courtney; Loveridge, Scott; Skidmore, Mark; Dyar, Will

    2017-04-15

    Vital statistics on the numbers of alcohol-induced deaths (AICD), drug-induced deaths (DICD), and suicides at the local level are only available after a substantial lag of up to two years after the events occur. We (1) investigate how well Google Trends search data explain variation in state-level rates in the US, and (2) use this method to forecast these rates of death for 2015, as official data are not yet available. We tested the degree to which Google Trends data on 27 terms can be fit to CDC data using L1-regularization on AICD, DICD, and suicide. Using Google Trends data, we forecast 2015 AICD, DICD, and suicide rates. L1-regularization fit the pre-2015 data much better than the alternative model using state-level unemployment and income variables. Google Trends data account for substantial variation in the growth of state-level rates of death: 30.9% for AICD, 23.9% for DICD, and 21.8% for suicide rates. Every state except Hawaii is forecasted to increase in all three of these rates in 2015. The model predicts state-level, not local or individual, behavior and is dependent on continued availability of Google Trends data. The method predicts state-level AICD, DICD, and suicide rates better than the alternative model. The study findings suggest that this methodology can be developed into a public health surveillance system for behavioral health-related causes of death. State-level predictions could be used to inform state interventions aimed at reducing AICD, DICD, and suicide. Copyright © 2017. Published by Elsevier B.V.
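
    Illustrative sketch (not the study's code): an L1-regularized (lasso) fit of death-rate growth on search-volume features, in the spirit described above. The 27 terms and all data below are simulated placeholders.

      # Hedged sketch: lasso regression of state-level mortality-rate growth on search features.
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(1)
      n_state_years, n_terms = 200, 27
      X = rng.normal(size=(n_state_years, n_terms))          # standardized Google Trends features
      beta = np.zeros(n_terms); beta[:5] = [0.4, -0.3, 0.2, 0.2, -0.1]   # only a few terms matter
      y = X @ beta + rng.normal(scale=0.5, size=n_state_years)           # growth in death rate

      model = LassoCV(cv=5).fit(X, y)                        # penalty chosen by cross-validation
      print("selected terms:", np.flatnonzero(model.coef_))
      print("in-sample R^2:", round(model.score(X, y), 2))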

  20. Premixed flame propagation in combustible particle cloud mixtures

    NASA Technical Reports Server (NTRS)

    Seshadri, K.; Yang, B.

    1993-01-01

    The structures of premixed flames propagating in combustible systems containing uniformly distributed volatile fuel particles in an oxidizing gas mixture are analyzed. The experimental results show that steady flame propagation occurs even if the initial equivalence ratio of the combustible mixture based on the gaseous fuel available in the particles, phi(u), is substantially larger than unity. A model is developed to explain these experimental observations. In the model it is presumed that the fuel particles vaporize first to yield a gaseous fuel of known chemical composition, which then reacts with oxygen in a one-step overall process. It is shown that the interplay of vaporization kinetics and the oxidation process can result in steady flame propagation in combustible mixtures where the value of phi(u) is substantially larger than unity. This prediction is in agreement with experimental observations.

  1. Reduction of initial shock in decadal predictions using a new initialization strategy

    NASA Astrophysics Data System (ADS)

    He, Yujun; Wang, Bin

    2017-04-01

    Initial shock is a well-known problem occurring in the early years of a decadal prediction when assimilating full-field observations into a coupled model, and it directly affects prediction skill. To alleviate this problem, we propose a novel full-field initialization method based on dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar). Different from the available solution strategies, including anomaly assimilation and bias correction, it substantially reduces the initial shock by generating more consistent initial conditions for the coupled model, which, along with the model trajectory in one-month windows, best fit the monthly mean analysis data of oceanic temperature and salinity. We evaluate the performance of initialized hindcast experiments according to three proposed indices that measure the intensity of the initial shock. The results indicate that this strategy can clearly reduce the initial shock in decadal predictions by FGOALS-g2 (the Flexible Global Ocean-Atmosphere-Land System model, Grid-point Version 2) compared with the commonly used nudging full-field initialization for the same model, as well as with the different full-field initialization strategies for other CMIP5 (the fifth phase of the Coupled Model Intercomparison Project) models whose decadal prediction results are available. It is also comparable to or even better than the anomaly initialization methods. Better hindcasts of the global mean surface air temperature anomaly are obtained due to the reduction of initial shock by the new initialization scheme.
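
    For reference (standard data-assimilation background rather than the specific DRP-4DVar formulation), full-field initialization of this kind seeks initial conditions x_0 that minimize a 4DVar-type cost function over the assimilation window:

      J(\mathbf{x}_0) = \tfrac{1}{2}(\mathbf{x}_0-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b)
        + \tfrac{1}{2}\sum_{i=1}^{N}\bigl(H_i(\mathbf{x}_i)-\mathbf{y}_i\bigr)^{\mathsf T}\mathbf{R}_i^{-1}\bigl(H_i(\mathbf{x}_i)-\mathbf{y}_i\bigr),
        \qquad \mathbf{x}_i = M_{0\rightarrow i}(\mathbf{x}_0),

    where x_b is the background state, B and R_i are background and observation error covariances, H_i the observation operators (here acting on monthly mean oceanic temperature and salinity analyses), and M the coupled-model propagator; as the name "dimension-reduced projection" suggests, DRP-4DVar carries out this minimization in a reduced space spanned by model samples.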

  2. A new computational strategy for predicting essential genes.

    PubMed

    Cheng, Jian; Wu, Wenwu; Zhang, Yinwen; Li, Xiangchen; Jiang, Xiaoqian; Wei, Gehong; Tao, Shiheng

    2013-12-21

    Determination of the minimum gene set for cellular life is one of the central goals in biology. Genome-wide essential gene identification has progressed rapidly in certain bacterial species; however, it remains difficult to achieve in most eukaryotic species. Several computational models have recently been developed to integrate gene features and used as alternatives to transfer gene essentiality annotations between organisms. We first collected features that were widely used by previous predictive models and assessed the relationships between gene features and gene essentiality using a stepwise regression model. We found two issues that could significantly reduce model accuracy: (i) the effect of multicollinearity among gene features and (ii) the diverse and even contrasting correlations between gene features and gene essentiality existing within and among different species. To address these issues, we developed a novel model called the feature-based weighted Naïve Bayes model (FWM), which is based on Naïve Bayes classifiers, logistic regression, and a genetic algorithm. The proposed model assesses features and filters out the effects of multicollinearity and diversity. The performance of FWM was compared with that of other popular models, such as the support vector machine, Naïve Bayes model, and logistic regression model, by applying FWM to reciprocally predict essential genes among and within 21 species. Our results showed that FWM significantly improves the accuracy and robustness of essential gene prediction. FWM can remarkably improve the accuracy of essential gene prediction and may be used as an alternative method for other classification work. This method can contribute substantially to the knowledge of the minimum gene sets required for living organisms and the discovery of new drug targets.

  3. Experimental validation of finite element modelling of a modular metal-on-polyethylene total hip replacement.

    PubMed

    Hua, Xijin; Wang, Ling; Al-Hajjar, Mazen; Jin, Zhongmin; Wilcox, Ruth K; Fisher, John

    2014-07-01

    Finite element models are becoming increasingly useful tools to conduct parametric analysis, design optimisation and pre-clinical testing for hip joint replacements. However, the verification of the finite element model is critically important. The purposes of this study were to develop a three-dimensional anatomic finite element model for a modular metal-on-polyethylene total hip replacement for predicting its contact mechanics and to conduct experimental validation for a simple finite element model which was simplified from the anatomic finite element model. An anatomic modular metal-on-polyethylene total hip replacement model (anatomic model) was first developed and then simplified with reasonable accuracy to a simple modular total hip replacement model (simplified model) for validation. The contact areas on the articulating surface of three polyethylene liners of modular metal-on-polyethylene total hip replacement bearings with different clearances were measured experimentally in the Leeds ProSim hip joint simulator under a series of loading conditions and different cup inclination angles. The contact areas predicted from the simplified model were then compared with that measured experimentally under the same conditions. The results showed that the simplification made for the anatomic model did not change the predictions of contact mechanics of the modular metal-on-polyethylene total hip replacement substantially (less than 12% for contact stresses and contact areas). Good agreements of contact areas between the finite element predictions from the simplified model and experimental measurements were obtained, with maximum difference of 14% across all conditions considered. This indicated that the simplification and assumptions made in the anatomic model were reasonable and the finite element predictions from the simplified model were valid. © IMechE 2014.

  4. Evaluation of the phototoxicity of unsubstituted and alkylated polycyclic aromatic hydrocarbons to mysid shrimp (Americamysis bahia): Validation of predictive models.

    PubMed

    Finch, Bryson E; Marzooghi, Solmaz; Di Toro, Dominic M; Stubblefield, William A

    2017-08-01

    Crude oils are composed of an assortment of hydrocarbons, some of which are polycyclic aromatic hydrocarbons (PAHs). Polycyclic aromatic hydrocarbons are of particular interest due to their narcotic and potential phototoxic effects. Several studies have examined the phototoxicity of individual PAHs and fresh and weathered crude oils, and several models have been developed to predict PAH toxicity. Fingerprint analyses of oils have shown that PAHs in crude oils are predominantly alkylated. However, current models for estimating PAH phototoxicity assume toxic equivalence between unsubstituted (i.e., parent) and alkyl-substituted compounds. This approach may be incorrect if substantial differences in toxic potency exist between unsubstituted and substituted PAHs. The objective of the present study was to examine the narcotic and photo-enhanced toxicity of commercially available unsubstituted and alkylated PAHs to mysid shrimp (Americamysis bahia). Data were used to validate predictive models of phototoxicity based on the highest occupied molecular orbital-lowest unoccupied molecular orbital (HOMO-LUMO) gap approach and to develop relative effect potencies. Results demonstrated that photo-enhanced toxicity increased with increasing methylation and that phototoxic PAH potencies vary significantly among unsubstituted compounds. Overall, predictive models based on the HOMO-LUMO gap were relatively accurate in predicting phototoxicity for unsubstituted PAHs but are limited to qualitative assessments. Environ Toxicol Chem 2017;36:2043-2049. © 2017 SETAC.

  5. Design of optimal hyperthermia protocols for prostate cancer by controlling HSP expression through computer modeling (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Rylander, Marissa N.; Feng, Yusheng; Diller, Kenneth; Bass, J.

    2005-04-01

    Heat shock proteins (HSP) are critical components of a complex defense mechanism essential for preserving cell survival under adverse environmental conditions. It is inevitable that hyperthermia will enhance tumor tissue viability, due to HSP expression in regions where temperatures are insufficient to coagulate proteins, which would likely increase the probability of cancer recurrence. Although hyperthermia therapy is commonly used in conjunction with radiotherapy, chemotherapy, and gene therapy to increase therapeutic effectiveness, the efficacy of these therapies can be substantially hindered due to HSP expression when hyperthermia is applied prior to these procedures. Therefore, in planning hyperthermia protocols, prediction of the HSP response of the tumor must be incorporated into the treatment plan to optimize the thermal dose delivery and permit prediction of overall tissue response. In this paper, we present a highly accurate, adaptive, finite element tumor model capable of predicting the HSP expression distribution and tissue damage region based on measured cellular data when hyperthermia protocols are specified. Cubic spline representations of HSP27 and HSP70, and Arrhenius damage models were integrated into the finite element model to enable prediction of the HSP expression and damage distribution in the tissue following laser heating. Application of the model can enable optimized treatment planning by controlling the tissue response to therapy based on accurate prediction of the HSP expression and cell damage distribution.
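
    For reference (a standard formulation rather than the authors' calibrated model), the Arrhenius thermal damage measure used in such treatment-planning models is typically the integral

      \Omega(\tau) \;=\; \int_{0}^{\tau} A \, \exp\!\left(-\frac{E_a}{R\,T(t)}\right) dt ,

    where A is a frequency factor, E_a an activation energy, R the universal gas constant, and T(t) the predicted tissue temperature history; \Omega = 1 is commonly taken as the threshold for irreversible damage, while the sub-threshold temperature field drives the HSP expression response.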

  6. Measuring the value of accurate link prediction for network seeding.

    PubMed

    Wei, Yijin; Spencer, Gwen

    2017-01-01

    The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample ([Formula: see text]) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates [Formula: see text] under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on spread model: in some parameter ranges investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.

  7. Dielectric Properties of Piezoelectric Polyimides

    NASA Technical Reports Server (NTRS)

    Ounaies, Z.; Young, J. A.; Simpson, J. O.; Farmer, B. L.

    1997-01-01

    Molecular modeling and dielectric measurements are being used to identify mechanisms governing piezoelectric behavior in polyimides, such as dipole orientation during poling, as well as the achievable degree of piezoelectricity. Molecular modeling on polyimides containing pendant, polar nitrile (CN) groups has been completed to determine their remanent polarization. Experimental investigation of their dielectric properties, evaluated as a function of temperature and frequency, has substantiated the numerical predictions. With this information in hand, we can suggest changes in the molecular structures to improve the piezoelectric response.

  8. Computational analysis of the Phanerochaete chrysosporium v2.0 genome database and mass spectrometry identification of peptides in ligninolytic cultures reveal complex mixtures of secreted proteins

    Treesearch

    Amber Vanden Wymelenberg; Patrick Minges; Grzegorz Sabat; Diego Martinez; Andrea Aerts; Asaf Salamov; Igor Grigoriev; Harris Shapiro; Nik Putnam; Paula Belinky; Carlos Dosoretz; Jill Gaskell; Phil Kersten; Dan Cullen

    2006-01-01

    The white-rot basidiomycete Phanerochaete chrysosporium employs extracellular enzymes to completely degrade the major polymers of wood: cellulose, hemicellulose, and lignin. Analysis of a total of 10,048 v2.1 gene models predicts 769 secreted proteins, a substantial increase over the 268 models identified in the earlier database (v1.0). Within the v2.1 ‘computational...

  9. Image analysis and machine learning in digital pathology: Challenges and opportunities.

    PubMed

    Madabhushi, Anant; Lee, George

    2016-10-01

    With the rise in whole slide scanner technology, large numbers of tissue slides are being scanned and represented and archived digitally. While digital pathology has substantial implications for telepathology, second opinions, and education, there are also huge research opportunities in image computing with this new source of "big data". It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome. However, the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer-assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high-resolution digitized whole slide images. Additionally, there has been substantial recent interest in combining and fusing radiologic imaging and proteomics- and genomics-based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again, there is a paucity of powerful tools for combining disease-specific features that manifest across multiple different length scales. The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and also review the emergence of deep learning schemes for both object detection and tissue classification. We also briefly review some of the state of the art in fusion of radiology and pathology images and in combining digital pathology derived image measurements with molecular "omics" features for better predictive modeling. The review ends with a brief discussion of some of the technical and computational challenges to be overcome and reflects on future opportunities for the quantitation of histopathology. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Predicting the likelihood of altered streamflows at ungauged rivers across the conterminous United States

    USGS Publications Warehouse

    Eng, Kenny; Carlisle, Daren M.; Wolock, David M.; Falcone, James A.

    2013-01-01

    An approach is presented in this study to aid water-resource managers in characterizing streamflow alteration at ungauged rivers. Such approaches can be used to take advantage of the substantial amounts of biological data collected at ungauged rivers to evaluate the potential ecological consequences of altered streamflows. National-scale random forest statistical models are developed to predict the likelihood that ungauged rivers have altered streamflows (relative to expected natural condition) for five hydrologic metrics (HMs) representing different aspects of the streamflow regime. The models use human disturbance variables, such as number of dams and road density, to predict the likelihood of streamflow alteration. For each HM, separate models are derived to predict the likelihood that the observed metric is greater than (‘inflated’) or less than (‘diminished’) natural conditions. The utility of these models is demonstrated by applying them to all river segments in the South Platte River in Colorado, USA, and for all 10-digit hydrologic units in the conterminous United States. In general, the models successfully predicted the likelihood of alteration to the five HMs at the national scale as well as in the South Platte River basin. However, the models predicting the likelihood of diminished HMs consistently outperformed models predicting inflated HMs, possibly because of fewer sites across the conterminous United States where HMs are inflated. The results of these analyses suggest that the primary predictors of altered streamflow regimes across the Nation are (i) the residence time of annual runoff held in storage in reservoirs, (ii) the degree of urbanization measured by road density and (iii) the extent of agricultural land cover in the river basin.
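
    Illustrative sketch (not the USGS models or data): a random-forest classifier returning the likelihood that a hydrologic metric is altered, driven by human-disturbance predictors named in the abstract. The simulated data and coefficients are placeholders.

      # Hedged sketch: random forest predicting the likelihood that a hydrologic metric at an
      # ungauged site is "diminished" relative to natural conditions.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(7)
      n = 1000
      storage_residence = rng.exponential(0.5, n)     # years of annual runoff held in reservoirs
      road_density = rng.gamma(2.0, 1.0, n)           # km of road per km^2
      ag_cover = rng.uniform(0, 1, n)                 # fraction agricultural land cover
      p = 1 / (1 + np.exp(-(-2 + 2.5*storage_residence + 0.4*road_density + 1.5*ag_cover)))
      diminished = rng.binomial(1, p)                 # 1 = metric diminished vs. natural

      X = np.column_stack([storage_residence, road_density, ag_cover])
      rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0).fit(X, diminished)
      print("out-of-bag accuracy:", round(rf.oob_score_, 2))
      likelihood = rf.predict_proba(X[:5])[:, 1]      # likelihood of alteration for example sites
      print("predicted likelihoods:", np.round(likelihood, 2))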

  11. Preferential attachment and growth dynamics in complex systems

    NASA Astrophysics Data System (ADS)

    Yamasaki, Kazuko; Matia, Kaushik; Buldyrev, Sergey V.; Fu, Dongfeng; Pammolli, Fabio; Riccaboni, Massimo; Stanley, H. Eugene

    2006-09-01

    Complex systems can be characterized by classes of equivalency of their elements defined according to system specific rules. We propose a generalized preferential attachment model to describe the class size distribution. The model postulates preferential growth of the existing classes and the steady influx of new classes. According to the model, the distribution changes from a pure exponential form for zero influx of new classes to a power law with an exponential cut-off form when the influx of new classes is substantial. Predictions of the model are tested through the analysis of a unique industrial database, which covers both elementary units (products) and classes (markets, firms) in a given industry (pharmaceuticals), covering the entire size distribution. The model’s predictions are in good agreement with the data. The paper sheds light on the emergence of the exponent τ≈2 observed as a universal feature of many biological, social and economic problems.

  12. Theories of Giant Planet Formation

    NASA Technical Reports Server (NTRS)

    Lissauer, Jack J.; Young, Richard E. (Technical Monitor)

    1998-01-01

    An overview of current theories of planetary formation, with emphasis on giant planets, is presented. The most detailed models are based upon observations of our own Solar System and of young stars and their environments. While these models predict that rocky planets should form around most single stars, the frequency of formation of gas giant planets is more difficult to predict theoretically. Terrestrial planets are believed to grow via pairwise accretion until the spacing of planetary orbits becomes large enough that the configuration is stable for the age of the system. Giant planets begin their growth as do terrestrial planets, but they become massive enough that they are able to accumulate substantial amounts of gas before the protoplanetary disk dissipates. Most models for extrasolar giant planets suggest that they formed as did Jupiter and Saturn (in nearly circular orbits, far enough from the star that ice could condense), and subsequently migrated to their current positions, although some models suggest in situ formation.

  13. Ontology-based tools to expedite predictive model construction.

    PubMed

    Haug, Peter; Holmen, John; Wu, Xinzi; Mynam, Kumar; Ebert, Matthew; Ferraro, Jeffrey

    2014-01-01

    Large amounts of medical data are collected electronically during the course of caring for patients using modern medical information systems. This data presents an opportunity to develop clinically useful tools through data mining and observational research studies. However, the work necessary to make sense of this data and to integrate it into a research initiative can require substantial effort from medical experts as well as from experts in medical terminology, data extraction, and data analysis. This slows the process of medical research. To reduce the effort required for the construction of computable, diagnostic predictive models, we have developed a system that hybridizes a medical ontology with a large clinical data warehouse. Here we describe components of this system designed to automate the development of preliminary diagnostic models and to provide visual clues that can assist the researcher in planning for further analysis of the data behind these models.

  14. Genomic selection in a commercial winter wheat population.

    PubMed

    He, Sang; Schulthess, Albert Wilhelm; Mirdita, Vilson; Zhao, Yusheng; Korzun, Viktor; Bothe, Reiner; Ebmeyer, Erhard; Reif, Jochen C; Jiang, Yong

    2016-03-01

    Genomic selection models can be trained using historical data, and filtering genotypes based on phenotyping intensity and a reliability criterion can increase prediction ability. We implemented genomic selection based on a large commercial population incorporating 2325 European winter wheat lines. Our objectives were (1) to study whether modeling epistasis in addition to additive genetic effects enhances the prediction ability of genomic selection, (2) to assess prediction ability when the training population comprised historical or less intensively phenotyped lines, and (3) to explore the prediction ability in subpopulations selected based on the reliability criterion. We found a 5% increase in prediction ability when shifting from additive to additive plus epistatic effects models. In addition, only a marginal loss in accuracy, from 0.65 to 0.50, was observed when using data collected in one year to predict genotypes of the following year, revealing that stable genomic selection models can be accurately calibrated to predict subsequent breeding stages. Moreover, prediction ability was maximized when the genotypes evaluated in a single location were excluded from the training set but subsequently decreased again when the phenotyping intensity was increased above two locations, suggesting that the update of the training population should consider all the selected genotypes except those evaluated in a single location. The genomic prediction ability was substantially higher in subpopulations selected based on the reliability criterion, indicating that phenotypic selection for highly reliable individuals could be directly replaced by applying genomic selection to them. We conclude empirically that there is high potential for commercial wheat breeding programs to benefit from genomic selection approaches.
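
    Illustrative sketch (a generic stand-in, not the study's pipeline): genomic prediction with an additive relationship kernel versus an additive-plus-epistatic kernel built as a Hadamard product. Marker data, effect sizes, and kernel weights below are invented.

      # Hedged sketch: kernel ridge regression with an additive genomic kernel (GBLUP-style)
      # and an additive x additive epistatic kernel (Hadamard product of G with itself).
      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n_lines, n_markers = 400, 1000
      M = rng.binomial(2, 0.3, size=(n_lines, n_markers)).astype(float)   # 0/1/2 marker codes
      Z = (M - M.mean(axis=0)) / (M.std(axis=0) + 1e-9)                   # centered, scaled markers
      y = Z[:, :50] @ rng.normal(size=50) + rng.normal(scale=2.0, size=n_lines)   # phenotype

      G = Z @ Z.T / n_markers                 # additive genomic relationship matrix
      K_epi = G * G                           # additive x additive epistatic kernel

      idx_tr, idx_te = train_test_split(np.arange(n_lines), test_size=0.2, random_state=0)
      def kernel_accuracy(K):
          model = KernelRidge(alpha=1.0, kernel="precomputed")
          model.fit(K[np.ix_(idx_tr, idx_tr)], y[idx_tr])
          pred = model.predict(K[np.ix_(idx_te, idx_tr)])
          return np.corrcoef(pred, y[idx_te])[0, 1]

      print("additive-only accuracy:     ", round(kernel_accuracy(G), 2))
      print("additive + epistatic kernel:", round(kernel_accuracy(0.5*G + 0.5*K_epi), 2))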

  15. An Online Prediction Platform to Support the Environmental ...

    EPA Pesticide Factsheets

    Historical QSAR models are currently utilized across a broad range of applications within the U.S. Environmental Protection Agency (EPA). These models predict basic physicochemical properties (e.g., logP, aqueous solubility, vapor pressure), which are then incorporated into exposure, fate and transport models. Whereas the classical manner of publishing results in peer-reviewed journals remains appropriate, there are substantial benefits to be gained by providing enhanced, open access to the training data sets and resulting models. Benefits include improved transparency, more flexibility to expand training sets and improve model algorithms, and greater ability to independently characterize model performance both globally and in local areas of chemistry. We have developed a web-based prediction platform that uses open-source descriptors and modeling algorithms, employs modern cheminformatics technologies, and is tailored for ease of use by the toxicology and environmental regulatory community. This tool also provides web-services to meet both EPA’s projects and the modeling community at-large. The platform hosts models developed within EPA’s National Center for Computational Toxicology, as well as those developed by other EPA scientists and the outside scientific community. Recognizing that there are other on-line QSAR model platforms currently available which have additional capabilities, we connect to such services, where possible, to produce an integrated

  16. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    NASA Astrophysics Data System (ADS)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable-resolution approaches would be RANS models in two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Then, using benchmark flows along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
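
    In outline (a generic statement of the framework, not the authors' exact implementation), the approach perturbs the eigendecomposition of the Reynolds-stress anisotropy tensor,

      R_{ij} \;=\; 2k\left(\tfrac{1}{3}\delta_{ij} + b_{ij}\right),
      \qquad b_{ij} = v_{in}\,\hat{b}_{nl}\,v_{jl},

    and uncertainty bounds are obtained by re-running the RANS computation with the eigenvalues \hat{b}_{nl} shifted toward the one-, two-, and three-component limiting states of the barycentric (realizability) triangle and, in some variants, with the eigenvector alignment permuted to maximize or minimize turbulence production; the envelope of the resulting predictions serves as the structural-uncertainty interval.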

  17. Fast, scalable prediction of deleterious noncoding variants from functional and population genomic data.

    PubMed

    Huang, Yi-Fei; Gulko, Brad; Siepel, Adam

    2017-04-01

    Many genetic variants that influence phenotypes of interest are located outside of protein-coding genes, yet existing methods for identifying such variants have poor predictive power. Here we introduce a new computational method, called LINSIGHT, that substantially improves the prediction of noncoding nucleotide sites at which mutations are likely to have deleterious fitness consequences, and which, therefore, are likely to be phenotypically important. LINSIGHT combines a generalized linear model for functional genomic data with a probabilistic model of molecular evolution. The method is fast and highly scalable, enabling it to exploit the 'big data' available in modern genomics. We show that LINSIGHT outperforms the best available methods in identifying human noncoding variants associated with inherited diseases. In addition, we apply LINSIGHT to an atlas of human enhancers and show that the fitness consequences at enhancers depend on cell type, tissue specificity, and constraints at associated promoters.

  18. [Application of quantum-chemical methods to prediction of the carcinogenicity of chemical substances].

    PubMed

    Zholdikova, Z I; Kharchevnikova, N V

    2006-01-01

    A version of a logical-combinatorial JSM-type intelligent system was used to predict the presence and the degree of a carcinogenic effect. This version was based on a combined description of chemical substances including both structural and numeric parameters. The new version allows for the fact that the toxicity and danger caused by chemical substances often depend on their biological activation in the organism. The authors substantiate classifying chemicals according to their carcinogenic activity, and illustrate the use of the system to predict the carcinogenicity of polycyclic aromatic hydrocarbons using a model of bioactivation via the formation of diol epoxides, and the carcinogenicity of halogenated alkanes using a model of bioactivation via oxidative dehalogenation. The paper defines the boundary value of an energy parameter whose exceedance correlated with the inhibition of halogenated alkane metabolism and the absence of carcinogenic activity.

  19. Muscle dysmorphia, gender role stress, and sociocultural influences: an exploratory study.

    PubMed

    Tucker, Readdy; Watkins, Patti Lou; Cardinal, Bradley J

    2011-06-01

    Our study explored the contribution of gender role stress (GRS) and sociocultural appearance demands to symptoms of muscle dysmorphia (MD) in a college sample of 219 women and 154 men. For women, five GRS subscales, sociocultural appearance demands, age, and frequency of aerobic exercise predicted MD symptoms (model R2 = .33; F(8,210) = 12.81, p < . 001); for men, only one GRS subscale, age, and sociocultural appearance demands predicted MD symptoms (model R2 = .40; F(3,150) = 9.52, p < .001). Post hoc analyses revealed that a small number of items explained a substantial portion of the variation, suggesting that MD may be more related to specific perceptions of pressure to attain an attractive body than to global gender role stress.

  20. Will Deep Impact Make a Splash?

    NASA Technical Reports Server (NTRS)

    Sheldon, Robert B.; Hoover, Richard B.

    2005-01-01

    Recent cometary observations from spacecraft flybys support the hypothesis that short-period comets have been substantially modified by the presence of liquid water. Such a model can resolve many outstanding questions of cometary dynamics, as well as the differences between the flyby observations and the dirty snowball paradigm. The model also predicts that the Deep Impact mission, slated for a July 4, 2005 collision with Comet Tempel 1, will encounter a layered, heterogeneous nucleus with subsurface liquid water capped by a dense crust. Collision ejecta will include not only vaporized material, but liquid water and large pieces of crust. Since the water will immediately boil, we predict that the water vapor signature of Deep Impact may be an order of magnitude larger than that expected from collisional vaporization alone.

  1. An analytics approach to designing patient centered medical homes.

    PubMed

    Ajorlou, Saeede; Shams, Issac; Yang, Kai

    2015-03-01

    Recently the patient centered medical home (PCMH) model has become a popular team-based approach focused on delivering more streamlined care to patients. In current practices of medical homes, a clinically based prediction framework is recommended because it can help match the portfolio capacity of PCMH teams with the actual load generated by a set of patients. Without such a balance of clinical supply and demand, issues such as excessive under- and over-utilization of physicians, long waiting times for receiving appropriate treatment, and non-continuity of care will eliminate many advantages of the medical home strategy. In this paper, using a hierarchical generalized linear model with multivariate responses, we develop a clinical workload prediction model for care portfolio demands in a Bayesian framework. The model allows for heterogeneous variances and unstructured covariance matrices for nested random effects that arise through complex hierarchical care systems. We show that using a multivariate approach substantially enhances the precision of workload predictions at both primary and non-primary care levels. We also demonstrate that care demands depend not only on patient demographics but also on other utilization factors, such as length of stay. Our analyses of recent data from the Veterans Health Administration further indicate that risk adjustment for patient health conditions can considerably improve the prediction power of the model.

  2. Adsorption of selected pharmaceuticals and an endocrine disrupting compound by granular activated carbon. 2. Model prediction.

    PubMed

    Yu, Zirui; Peldszus, Sigrid; Huck, Peter M

    2009-03-01

    The adsorption of two representative pharmaceutically active compounds (PhACs), naproxen and carbamazepine, and one endocrine-disrupting compound (EDC), nonylphenol, was studied in pilot-scale granular activated carbon (GAC) adsorbers using post-sedimentation (PS) water from a full-scale drinking water treatment plant. Acidic naproxen broke through fastest while nonylphenol was removed best, which was consistent with the degree to which fouling affected compound removals. Model predictions and experimental data were generally in good agreement for all three compounds, which demonstrated the effectiveness and robustness of the pore and surface diffusion model (PSDM), used in combination with the time-variable parameter approach, for predicting removals at environmentally relevant concentrations (i.e., ng/L range). Sensitivity analyses suggested that accurate determination of film diffusion coefficients is critical for predicting breakthrough for naproxen and carbamazepine, in particular when high removals are targeted. Model simulations demonstrated that GAC carbon usage rates (CURs) for naproxen were substantially influenced by the empty bed contact time (EBCT) at the investigated conditions. Model-based comparisons between GAC CURs and minimum CURs for powdered activated carbon (PAC) applications suggested that PAC would be most appropriate for achieving 90% removal of naproxen, whereas GAC would be more suitable for nonylphenol.

  3. Validation of Shoulder Response of Human Body Finite-Element Model (GHBMC) Under Whole Body Lateral Impact Condition.

    PubMed

    Park, Gwansik; Kim, Taewung; Panzer, Matthew B; Crandall, Jeff R

    2016-08-01

    In previous shoulder impact studies, the 50th-percentile male GHBMC human body finite-element model was shown to have good biofidelity regarding impact force, but under-predicted shoulder deflection by 80% compared to those observed in the experiment. The goal of this study was to validate the response of the GHBMC M50 model by focusing on three-dimensional shoulder kinematics under a whole-body lateral impact condition. Five modifications, focused on material properties and modeling techniques, were introduced into the model and a supplementary sensitivity analysis was done to determine the influence of each modification to the biomechanical response of the body. The modified model predicted substantially improved shoulder response and peak shoulder deflection within 10% of the observed experimental data, and showed good correlation in the scapula kinematics on sagittal and transverse planes. The improvement in the biofidelity of the shoulder region was mainly due to the modifications of material properties of muscle, the acromioclavicular joint, and the attachment region between the pectoralis major and ribs. Predictions of rib fracture and chest deflection were also improved because of these modifications.

  4. Aeroacoustic Codes For Rotor Harmonic and BVI Noise--CAMRAD.Mod1/HIRES

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Boyd, D. Douglas, Jr.; Burley, Casey L.; Jolly, J. Ralph, Jr.

    1996-01-01

    This paper presents the status of non-CFD aeroacoustic codes at NASA Langley Research Center for the prediction of helicopter harmonic and Blade-Vortex Interaction (BVI) noise. The prediction approach incorporates three primary components: CAMRAD.Mod1, a substantially modified version of the performance/trim/wake code CAMRAD; HIRES, a high-resolution blade loads post-processor; and WOPWOP, an acoustic code. The functional capabilities and physical modeling in CAMRAD.Mod1/HIRES are summarized and illustrated. A new multi-core roll-up wake modeling approach is introduced and validated. Predictions of rotor wake and radiated noise are compared with the results of the HART program, a model BO-105 wind tunnel test at the DNW in Europe. Additional comparisons are made to results from a DNW test of a contemporary-design four-bladed rotor, as well as from a Langley test of a single proprotor (tiltrotor) three-bladed model configuration. Because the method is shown to help eliminate the necessity of guesswork in setting code parameters between different rotor configurations, it should prove useful as a rotor noise design tool.

  5. Polarization modeling and predictions for Daniel K. Inouye Solar Telescope part 1: telescope and example instrument configurations

    NASA Astrophysics Data System (ADS)

    Harrington, David M.; Sueoka, Stacey R.

    2017-01-01

    We outline polarization performance calculations and predictions for the Daniel K. Inouye Solar Telescope (DKIST) optics and show Mueller matrices for two of the first light instruments. Telescope polarization is due to polarization-dependent mirror reflectivity and rotations between groups of mirrors as the telescope moves in altitude and azimuth. The Zemax optical modeling software has polarization ray-trace capabilities and predicts system performance given a coating prescription. We develop a model coating formula that approximates measured witness sample polarization properties. Estimates show the DKIST telescope Mueller matrix as functions of wavelength, azimuth, elevation, and field angle for the cryogenic near infra-red spectro-polarimeter (CryoNIRSP) and visible spectro-polarimeter. Footprint variation is substantial and shows vignetted field points will have strong polarization effects. We estimate 2% variation of some Mueller matrix elements over the 5-arc min CryoNIRSP field. We validate the Zemax model by showing limiting cases for flat mirrors in collimated and powered designs that compare well with theoretical approximations and are testable with lab ellipsometers.

  6. Root architecture simulation improves the inference from seedling root phenotyping towards mature root systems.

    PubMed

    Zhao, Jiangsan; Bodner, Gernot; Rewald, Boris; Leitner, Daniel; Nagel, Kerstin A; Nakhforoosh, Alireza

    2017-02-01

    Root phenotyping provides trait information for plant breeding. A shortcoming of high-throughput root phenotyping is the limitation to seedling plants and failure to make inferences on mature root systems. We suggest root system architecture (RSA) models to predict mature root traits and overcome the inference problem. Sixteen pea genotypes were phenotyped in (i) seedling (Petri dishes) and (ii) mature (sand-filled columns) root phenotyping platforms. The RSA model RootBox was parameterized with seedling traits to simulate the fully developed root systems. Measured and modelled root length, first-order lateral number, and root distribution were compared to determine key traits for model-based prediction. No direct relationship in root traits (tap, lateral length, interbranch distance) was evident between phenotyping systems. RootBox significantly improved the inference over phenotyping platforms. Seedling plant tap and lateral root elongation rates and interbranch distance were sufficient model parameters to predict genotype ranking in total root length with a Spearman rank correlation of 0.83. Parameterization including uneven lateral spacing via a scaling function substantially improved the prediction of architectures underlying the differently sized root systems. We conclude that RSA models can solve the inference problem of seedling root phenotyping. RSA models should be included in the phenotyping pipeline to provide reliable information on mature root systems to breeding research. © The Author 2017. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  7. Consequences of kriging and land use regression for PM2.5 predictions in epidemiologic analyses: Insights into spatial variability using high-resolution satellite data

    PubMed Central

    Alexeeff, Stacey E.; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A.

    2016-01-01

    Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R2 yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with greater than 0.9 out-of-sample R2 yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the standard errors. Land use regression models performed better in chronic effects simulations. These results can help researchers when interpreting health effect estimates in these types of studies. PMID:24896768
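
    Illustrative sketch (far simpler than the study's satellite-derived exposure surface): simulating classical-type exposure measurement error and the attenuation it induces in a linear health-effect estimate. All parameters are invented, and the spatially structured errors examined in the paper can also bias estimates upward, which this toy example does not capture.

      # Hedged sketch: bias in a linear health-effect estimate when an error-prone predicted
      # exposure stands in for the true exposure.
      import numpy as np

      rng = np.random.default_rng(11)
      n, beta_true = 5000, 0.8                      # subjects and true effect per unit exposure
      x_true = rng.gamma(shape=4.0, scale=2.5, size=n)          # "true" long-term exposure

      def predicted_exposure(x, r2):
          """Surrogate exposure whose R^2 against the truth is roughly r2 (classical error)."""
          noise_sd = np.std(x) * np.sqrt(1 / r2 - 1)
          return x + rng.normal(scale=noise_sd, size=x.size)

      y = 10 + beta_true * x_true + rng.normal(scale=5.0, size=n)   # health outcome

      for r2 in (0.9, 0.6, 0.3):
          x_hat = predicted_exposure(x_true, r2)
          beta_hat = np.polyfit(x_hat, y, 1)[0]
          print(f"exposure-model R^2~{r2}:  estimated effect {beta_hat:.2f}  (truth {beta_true})")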

  8. Empirical estimates to reduce modeling uncertainties of soil organic carbon in permafrost regions: a review of recent progress and remaining challenges

    USGS Publications Warehouse

    Mishra, U.; Jastrow, J.D.; Matamala, R.; Hugelius, G.; Koven, C.D.; Harden, Jennifer W.; Ping, S.L.; Michaelson, G.J.; Fan, Z.; Miller, R.M.; McGuire, A.D.; Tarnocai, C.; Kuhry, P.; Riley, W.J.; Schaefer, K.; Schuur, E.A.G.; Jorgenson, M.T.; Hinzman, L.D.

    2013-01-01

    The vast amount of organic carbon (OC) stored in soils of the northern circumpolar permafrost region is a potentially vulnerable component of the global carbon cycle. However, estimates of the quantity, decomposability, and combustibility of OC contained in permafrost-region soils remain highly uncertain, thereby limiting our ability to predict the release of greenhouse gases due to permafrost thawing. Substantial differences exist between empirical and modeling estimates of the quantity and distribution of permafrost-region soil OC, which contribute to large uncertainties in predictions of carbon–climate feedbacks under future warming. Here, we identify research challenges that constrain current assessments of the distribution and potential decomposability of soil OC stocks in the northern permafrost region and suggest priorities for future empirical and modeling studies to address these challenges.

  9. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    PubMed

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
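
    Illustrative sketch (not the simulation study itself): fitting factored (indicator-coded) and unfactored (linear-trend) Poisson regressions to the same simulated follow-up data. The dose-response shape and counts are invented.

      # Hedged sketch: factored vs. unfactored Poisson regression on simulated follow-up data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      exposure = rng.integers(0, 4, size=2000)                 # ordered exposure levels 0..3
      person_time = rng.uniform(0.5, 2.0, size=exposure.size)
      true_log_rate = np.log(0.05) + np.array([0.0, 0.2, 0.9, 1.0])[exposure]   # non-linear truth
      cases = rng.poisson(np.exp(true_log_rate) * person_time)

      # Unfactored: exposure entered as a single linear term
      X_lin = sm.add_constant(exposure.astype(float))
      fit_lin = sm.GLM(cases, X_lin, family=sm.families.Poisson(),
                       offset=np.log(person_time)).fit()

      # Factored: indicator variables for levels 1-3 (level 0 is the reference)
      X_fac = sm.add_constant(np.column_stack([(exposure == k).astype(float) for k in (1, 2, 3)]))
      fit_fac = sm.GLM(cases, X_fac, family=sm.families.Poisson(),
                       offset=np.log(person_time)).fit()

      print("linear model rate ratio per level:", np.round(np.exp(fit_lin.params[1]), 2))
      print("factored model rate ratios:       ", np.round(np.exp(fit_fac.params[1:]), 2))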

  10. Rational selection of experimental readout and intervention sites for reducing uncertainties in computational model predictions.

    PubMed

    Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2015-01-16

    Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model for chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way of determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to render initially highly uncertain model predictions into certainty. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.

  11. Tuning stochastic matrix models with hydrologic data to predict the population dynamics of a riverine fish.

    PubMed

    Sakaris, Peter C; Irwin, Elise R

    2010-03-01

    We developed stochastic matrix models to evaluate the effects of hydrologic alteration and variable mortality on the population dynamics of a lotic fish in a regulated river system. Models were applied to a representative lotic fish species, the flathead catfish (Pylodictis olivaris), for which two populations were examined: a native population from a regulated reach of the Coosa River (Alabama, USA) and an introduced population from an unregulated section of the Ocmulgee River (Georgia, USA). Size-classified matrix models were constructed for both populations, and residuals from catch-curve regressions were used as indices of year class strength (i.e., recruitment). A multiple regression model indicated that recruitment of flathead catfish in the Coosa River was positively related to the frequency of spring pulses between 283 and 566 m3/s. For the Ocmulgee River population, multiple regression models indicated that year class strength was negatively related to mean March discharge and positively related to June low flow. When the Coosa population was modeled to experience five consecutive years of favorable hydrologic conditions during a 50-year projection period, it exhibited a substantial spike in size and increased at an overall 0.2% annual rate. When modeled to experience five years of unfavorable hydrologic conditions, the Coosa population initially exhibited a decrease in size but later stabilized and increased at a 0.4% annual rate following the decline. When the Ocmulgee River population was modeled to experience five years of favorable conditions, it exhibited a substantial spike in size and increased at an overall 0.4% annual rate. After the Ocmulgee population experienced five years of unfavorable conditions, a sharp decline in population size was predicted. However, the population quickly recovered, with population size increasing at a 0.3% annual rate following the decline. In general, stochastic population growth in the Ocmulgee River was more erratic and variable than population growth in the Coosa River. We encourage ecologists to develop similar models for other lotic species, particularly in regulated river systems. Successful management of fish populations in regulated systems requires that we are able to predict how hydrology affects recruitment and will ultimately influence the population dynamics of fishes.
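
    Illustrative sketch (not the fitted flathead catfish model): a size-classified stochastic matrix projection in which recruitment varies with a randomly drawn hydrologic condition. All matrix entries and the hydrology link are invented placeholders.

      # Hedged sketch: stochastic size-classified matrix projection with hydrology-driven recruitment.
      import numpy as np

      rng = np.random.default_rng(2)
      survival = np.array([0.3, 0.5, 0.65])            # growth/survival transitions for 3 size classes
      base_fertility = np.array([0.0, 0.5, 2.0])       # recruits per individual in an average year

      def projection_matrix(flow_favorable):
          f = base_fertility * (1.4 if flow_favorable else 0.7)   # year-class strength scaling
          A = np.zeros((3, 3))
          A[0, :] = f                                   # recruitment into the smallest class
          A[1, 0], A[2, 1], A[2, 2] = survival          # class 1->2, class 2->3, class 3 persists
          return A

      n = np.array([100.0, 40.0, 15.0])                # initial abundances by size class
      for year in range(50):
          favorable = rng.random() < 0.3               # frequency of favorable spring pulses
          n = projection_matrix(favorable) @ n
      print("projected size-class abundances after 50 yr:", np.round(n, 1))
      print("realized annual growth rate:", round((n.sum() / 155.0) ** (1 / 50), 3))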

  12. Development and Evaluation of Season-ahead Precipitation and Streamflow Predictions for Sectoral Management in Western Ethiopia

    NASA Astrophysics Data System (ADS)

    Block, P. J.; Alexander, S.; Wu, S.

    2017-12-01

    Skillful season-ahead predictions conditioned on local and large-scale hydro-climate variables can provide valuable knowledge to farmers and reservoir operators, enabling informed water resource allocation and management decisions. In Ethiopia, the potential for advancing agriculture and hydropower management, and subsequently economic growth, is substantial, yet evidence suggests a weak adoption of prediction information by sectoral audiences. To address common critiques, including skill, scale, and uncertainty, probabilistic forecasts are developed at various scales - temporally and spatially - for the Finchaa hydropower dam and the Koga agricultural scheme in an attempt to promote uptake and application. Significant prediction skill is evident across scales, particularly for statistical models. This raises questions regarding other potential barriers to forecast utilization at community scales, which are also addressed.

  13. Comparison of the predictions of two road dust emission models with the measurements of a mobile van

    NASA Astrophysics Data System (ADS)

    Kauhaniemi, M.; Stojiljkovic, A.; Pirjola, L.; Karppinen, A.; Härkönen, J.; Kupiainen, K.; Kangas, L.; Aarnio, M. A.; Omstedt, G.; Denby, B. R.; Kukkonen, J.

    2014-09-01

    The predictions of two road dust suspension emission models were compared with the on-site mobile measurements of suspension emission factors. Such a quantitative comparison has not previously been reported in the reviewed literature. The models used were the Nordic collaboration model NORTRIP (NOn-exhaust Road TRaffic Induced Particle emissions) and the Swedish-Finnish FORE model (Forecasting Of Road dust Emissions). These models describe particulate matter generated by the wear of road surface due to traction control methods and processes that control the suspension of road dust particles into the air. An experimental measurement campaign was conducted using a mobile laboratory called SNIFFER, along two selected road segments in central Helsinki in 2007 and 2008. The suspended PM10 concentration was measured behind the left rear tyre and the street background PM10 concentration in front of the van. Both models reproduced the measured seasonal variation of suspension emission factors fairly well during both years at both measurement sites. However, both models substantially under-predicted the measured emission values. The article illustrates the challenges in conducting road suspension measurements in densely trafficked urban conditions, and the numerous requirements for input data that are needed for accurately applying road suspension emission models.

  14. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks.

    PubMed

    Yan, Zheng; Wang, Jun

    2014-03-01

    This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue combined with unmodeled dynamics is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves the computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.
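
    The extreme learning machine step described above amounts to a random hidden layer whose output weights are fit by least squares to the linearization residue. The sketch below shows that step on synthetic data; the residue function, layer size, and training set are assumptions for illustration, and the learned term would supplement the nominal affine model inside the minimax MPC rather than replace it.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical training pairs: state/input samples x and the corresponding
        # linearization residue r(x) that the nominal affine model misses.
        X = rng.uniform(-1, 1, size=(200, 2))
        r = np.sin(3 * X[:, 0]) * X[:, 1]          # stand-in for the unknown residue

        # Extreme learning machine: random hidden layer, output weights by least squares.
        n_hidden = 50
        W = rng.normal(size=(2, n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                     # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, r, rcond=None)

        def residue_hat(x):
            # Learned residue term to be added to the nominal affine prediction.
            return np.tanh(x @ W + b) @ beta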

  15. Consequences of land-cover misclassification in models of impervious surface

    USGS Publications Warehouse

    McMahon, G.

    2007-01-01

    Model estimates of impervious area as a function of land-cover area may be biased and imprecise because of errors in the land-cover classification. This investigation of the effects of land-cover misclassification on impervious surface models that use National Land Cover Data (NLCD) evaluates the consequences of adjusting land-cover within a watershed to reflect uncertainty assessment information. Model validation results indicate that using error-matrix information to adjust land-cover values used in impervious surface models does not substantially improve impervious surface predictions. Validation results indicate that the resolution of the land-cover data (Level I and Level II) is more important in predicting impervious surface accurately than whether the land-cover data have been adjusted using information in the error matrix. Level I NLCD, adjusted for land-cover misclassification, is preferable to the other land-cover options for use in models of impervious surface. This result is tied to the lower classification error rates for the Level I NLCD. © 2007 American Society for Photogrammetry and Remote Sensing.

  16. Identifying and modeling the structural discontinuities of human interactions

    NASA Astrophysics Data System (ADS)

    Grauwin, Sebastian; Szell, Michael; Sobolevsky, Stanislav; Hövel, Philipp; Simini, Filippo; Vanhoof, Maarten; Smoreda, Zbigniew; Barabási, Albert-László; Ratti, Carlo

    2017-04-01

    The idea of a hierarchical spatial organization of society lies at the core of seminal theories in human geography that have strongly influenced our understanding of social organization. Along the same line, the recent availability of large-scale human mobility and communication data has offered novel quantitative insights hinting at a strong geographical confinement of human interactions within neighboring regions, extending to local levels within countries. However, models of human interaction largely ignore this effect. Here, we analyze several country-wide networks of telephone calls - both, mobile and landline - and in either case uncover a systematic decrease of communication induced by borders which we identify as the missing variable in state-of-the-art models. Using this empirical evidence, we propose an alternative modeling framework that naturally stylizes the damping effect of borders. We show that this new notion substantially improves the predictive power of widely used interaction models. This increases our ability to understand, model and predict social activities and to plan the development of infrastructures across multiple scales.
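
    A minimal way to picture the proposed border correction is a gravity-type interaction with a multiplicative damping applied whenever the two locations fall on different sides of an administrative border. The functional form, distance exponent, and damping value below are illustrative assumptions, not the fitted model from the paper.

        import numpy as np

        # Toy gravity-type interaction with a multiplicative border damping factor
        # (functional form and parameters are illustrative, not the paper's fit).
        def interaction(pop_i, pop_j, dist_ij, same_region, beta=2.0, damping=0.4):
            g = pop_i * pop_j / dist_ij**beta
            return g if same_region else damping * g

        print(interaction(1e5, 2e5, 30.0, same_region=True))
        print(interaction(1e5, 2e5, 30.0, same_region=False))  # cross-border flow is damped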

  17. Identifying and modeling the structural discontinuities of human interactions

    PubMed Central

    Grauwin, Sebastian; Szell, Michael; Sobolevsky, Stanislav; Hövel, Philipp; Simini, Filippo; Vanhoof, Maarten; Smoreda, Zbigniew; Barabási, Albert-László; Ratti, Carlo

    2017-01-01

    The idea of a hierarchical spatial organization of society lies at the core of seminal theories in human geography that have strongly influenced our understanding of social organization. Along the same line, the recent availability of large-scale human mobility and communication data has offered novel quantitative insights hinting at a strong geographical confinement of human interactions within neighboring regions, extending to local levels within countries. However, models of human interaction largely ignore this effect. Here, we analyze several country-wide networks of telephone calls - both, mobile and landline - and in either case uncover a systematic decrease of communication induced by borders which we identify as the missing variable in state-of-the-art models. Using this empirical evidence, we propose an alternative modeling framework that naturally stylizes the damping effect of borders. We show that this new notion substantially improves the predictive power of widely used interaction models. This increases our ability to understand, model and predict social activities and to plan the development of infrastructures across multiple scales. PMID:28443647

  18. Computing Thermal Effects of Cavitation in Cryogenic Liquids

    NASA Technical Reports Server (NTRS)

    Hosangadi, Ashvin; Ahuja, Vineet; Dash, Sanford M.

    2005-01-01

    A computer program implements a numerical model of thermal effects of cavitation in cryogenic fluids. The model and program were developed for use in designing and predicting the performances of turbopumps for cryogenic fluids. Prior numerical models used for this purpose do not account for either the variability of properties of cryogenic fluids or the thermal effects (especially, evaporative cooling) involved in cavitation. It is important to account for both because in a cryogenic fluid, the thermal effects of cavitation are substantial, and the cavitation characteristics are altered by coupling between the variable fluid properties and the phase changes involved in cavitation. The present model accounts for both thermal effects and variability of properties by incorporating a generalized representation of the properties of cryogenic fluids into a generalized compressible-fluid formulation for a cavitating pump. The model has been extensively validated for liquid nitrogen and liquid hydrogen. Using the available data on the properties of these fluids, the model has been shown to predict accurate temperature-depression values.

  19. Identifying and modeling the structural discontinuities of human interactions.

    PubMed

    Grauwin, Sebastian; Szell, Michael; Sobolevsky, Stanislav; Hövel, Philipp; Simini, Filippo; Vanhoof, Maarten; Smoreda, Zbigniew; Barabási, Albert-László; Ratti, Carlo

    2017-04-26

    The idea of a hierarchical spatial organization of society lies at the core of seminal theories in human geography that have strongly influenced our understanding of social organization. Along the same line, the recent availability of large-scale human mobility and communication data has offered novel quantitative insights hinting at a strong geographical confinement of human interactions within neighboring regions, extending to local levels within countries. However, models of human interaction largely ignore this effect. Here, we analyze several country-wide networks of telephone calls - both, mobile and landline - and in either case uncover a systematic decrease of communication induced by borders which we identify as the missing variable in state-of-the-art models. Using this empirical evidence, we propose an alternative modeling framework that naturally stylizes the damping effect of borders. We show that this new notion substantially improves the predictive power of widely used interaction models. This increases our ability to understand, model and predict social activities and to plan the development of infrastructures across multiple scales.

  20. Genome-Wide Prediction of the Performance of Three-Way Hybrids in Barley.

    PubMed

    Li, Zuo; Philipp, Norman; Spiller, Monika; Stiewe, Gunther; Reif, Jochen C; Zhao, Yusheng

    2017-03-01

    Predicting the grain yield performance of three-way hybrids is challenging. Three-way crosses are relevant for hybrid breeding in barley (Hordeum vulgare L.) and maize (Zea mays L.) adapted to East Africa. The main goal of our study was to implement and evaluate genome-wide prediction approaches for the performance of three-way hybrids using data from single-cross hybrids for a scenario in which parental lines of the three-way hybrids originate from three genetically distinct subpopulations. We extended the ridge regression best linear unbiased prediction (RRBLUP) and devised a genomic selection model allowing for subpopulation-specific marker effects (GSA-RRBLUP: general and subpopulation-specific additive RRBLUP). Using an empirical barley data set, we showed that applying GSA-RRBLUP tripled the prediction ability of three-way hybrids from 0.095 to 0.308 compared with RRBLUP, modeling one additive effect for all three subpopulations. The experimental findings were further substantiated with computer simulations. Our results emphasize the potential of GSA-RRBLUP to improve genome-wide hybrid prediction of three-way hybrids for scenarios of genetically diverse parental populations. Because of the advantages of the GSA-RRBLUP model in dealing with hybrids from different parental populations, it may also be a promising approach to boost the prediction ability for hybrid breeding programs based on genetically diverse heterotic groups. Copyright © 2017 Crop Science Society of America.
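
    The GSA-RRBLUP idea of general plus subpopulation-specific additive effects can be sketched as a ridge regression in which the marker matrix is augmented with one copy per subpopulation, zeroed outside that subpopulation. The toy data, the single shrinkage parameter shared across blocks, and the in-sample correlation reported below are simplifying assumptions; the published model is a mixed model with separately estimated variance components.

        import numpy as np

        rng = np.random.default_rng(2)
        n_hybrids, n_markers, n_subpops = 60, 100, 3

        M = rng.integers(0, 3, size=(n_hybrids, n_markers)).astype(float)  # marker dosages
        subpop = rng.integers(0, n_subpops, size=n_hybrids)                # parental subpopulation
        y = M @ rng.normal(scale=0.1, size=n_markers) + rng.normal(size=n_hybrids)

        # Design: general additive effects plus one block of subpopulation-specific
        # effects per subpopulation (marker columns zeroed outside that subpopulation).
        blocks = [M] + [M * (subpop == s)[:, None] for s in range(n_subpops)]
        Z = np.hstack(blocks)

        lam = 10.0  # ridge (shrinkage) parameter, playing the role of the RRBLUP variance ratio
        beta = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
        y_hat = Z @ beta
        print(np.corrcoef(y, y_hat)[0, 1])  # in-sample prediction ability (toy data only)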

  1. Modeling Menstrual Cycle Length and Variability at the Approach of Menopause Using Hierarchical Change Point Models

    PubMed Central

    Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.

    2013-01-01

    As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman and a hierarchical model to link them together, along with regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with different forms of missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women’s menstrual cycle trajectories throughout their late reproductive life and identifies change points for mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638

  2. A risk score for in-hospital death in patients admitted with ischemic or hemorrhagic stroke.

    PubMed

    Smith, Eric E; Shobha, Nandavar; Dai, David; Olson, DaiWai M; Reeves, Mathew J; Saver, Jeffrey L; Hernandez, Adrian F; Peterson, Eric D; Fonarow, Gregg C; Schwamm, Lee H

    2013-01-28

    We aimed to derive and validate a single risk score for predicting death from ischemic stroke (IS), intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH). Data from 333 865 stroke patients (IS, 82.4%; ICH, 11.2%; SAH, 2.6%; uncertain type, 3.8%) in the Get With The Guidelines-Stroke database were used. In-hospital mortality varied greatly according to stroke type (IS, 5.5%; ICH, 27.2%; SAH, 25.1%; unknown type, 6.0%; P<0.001). The patients were randomly divided into derivation (60%) and validation (40%) samples. Logistic regression was used to determine the independent predictors of mortality and to assign point scores for a prediction model in the overall population and in the subset with the National Institutes of Health Stroke Scale (NIHSS) recorded (37.1%). The c statistic, a measure of how well the models discriminate the risk of death, was 0.78 in the overall validation sample and 0.86 in the model including NIHSS. The model with NIHSS performed nearly as well in each stroke type as in the overall model including all types (c statistics for IS alone, 0.85; for ICH alone, 0.83; for SAH alone, 0.83; uncertain type alone, 0.86). The calibration of the model was excellent, as demonstrated by plots of observed versus predicted mortality. A single prediction score for all stroke types can be used to predict risk of in-hospital death following stroke admission. Incorporation of NIHSS information substantially improves this predictive accuracy.
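
    As a schematic of the derivation/validation workflow described above, the sketch below fits a logistic model for in-hospital death on a synthetic derivation sample and reports the c-statistic (area under the ROC curve) on a held-out validation sample. The covariates, coefficients, and split proportions are invented for illustration and are not the published risk score.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n = 5000
        X = np.column_stack([
            rng.normal(70, 12, n),          # age (years)
            rng.integers(0, 43, n),         # NIHSS-like severity score
            rng.integers(0, 2, n),          # e.g., a comorbidity flag
        ])
        logit = -7 + 0.03 * X[:, 0] + 0.15 * X[:, 1] + 0.4 * X[:, 2]
        y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic in-hospital death indicator

        X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.4, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
        c_stat = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"c-statistic on validation split: {c_stat:.2f}")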

  3. External Validation and Recalibration of Risk Prediction Models for Acute Traumatic Brain Injury among Critically Ill Adult Patients in the United Kingdom

    PubMed Central

    Griggs, Kathryn A.; Prabhu, Gita; Gomes, Manuel; Lecky, Fiona E.; Hutchinson, Peter J. A.; Menon, David K.; Rowan, Kathryn M.

    2015-01-01

    This study validates risk prediction models for acute traumatic brain injury (TBI) in critical care units in the United Kingdom and recalibrates the models to this population. The Risk Adjustment In Neurocritical care (RAIN) Study was a prospective, observational cohort study in 67 adult critical care units. Adult patients admitted to critical care following acute TBI with a last pre-sedation Glasgow Coma Scale score of less than 15 were recruited. The primary outcomes were mortality and unfavorable outcome (death or severe disability, assessed using the Extended Glasgow Outcome Scale) at six months following TBI. Of 3626 critical care unit admissions, 2975 were analyzed. Following imputation of missing outcomes, mortality at six months was 25.7% and unfavorable outcome 57.4%. Ten risk prediction models were validated from Hukkelhoven and colleagues, the Medical Research Council (MRC) Corticosteroid Randomisation After Significant Head Injury (CRASH) Trial Collaborators, and the International Mission for Prognosis and Analysis of Clinical Trials in TBI (IMPACT) group. The model with the best discrimination was the IMPACT “Lab” model (C index, 0.779 for mortality and 0.713 for unfavorable outcome). This model was well calibrated for mortality at six months but substantially under-predicted the risk of unfavorable outcome. Recalibration of the models resulted in small improvements in discrimination and excellent calibration for all models. The risk prediction models demonstrated sufficient statistical performance to support their use in research and audit but fell below the level required to guide individual patient decision-making. The published models for unfavorable outcome at six months had poor calibration in the UK critical care setting and the models recalibrated to this setting should be used in future research. PMID:25898072

  4. External Validation and Recalibration of Risk Prediction Models for Acute Traumatic Brain Injury among Critically Ill Adult Patients in the United Kingdom.

    PubMed

    Harrison, David A; Griggs, Kathryn A; Prabhu, Gita; Gomes, Manuel; Lecky, Fiona E; Hutchinson, Peter J A; Menon, David K; Rowan, Kathryn M

    2015-10-01

    This study validates risk prediction models for acute traumatic brain injury (TBI) in critical care units in the United Kingdom and recalibrates the models to this population. The Risk Adjustment In Neurocritical care (RAIN) Study was a prospective, observational cohort study in 67 adult critical care units. Adult patients admitted to critical care following acute TBI with a last pre-sedation Glasgow Coma Scale score of less than 15 were recruited. The primary outcomes were mortality and unfavorable outcome (death or severe disability, assessed using the Extended Glasgow Outcome Scale) at six months following TBI. Of 3626 critical care unit admissions, 2975 were analyzed. Following imputation of missing outcomes, mortality at six months was 25.7% and unfavorable outcome 57.4%. Ten risk prediction models were validated from Hukkelhoven and colleagues, the Medical Research Council (MRC) Corticosteroid Randomisation After Significant Head Injury (CRASH) Trial Collaborators, and the International Mission for Prognosis and Analysis of Clinical Trials in TBI (IMPACT) group. The model with the best discrimination was the IMPACT "Lab" model (C index, 0.779 for mortality and 0.713 for unfavorable outcome). This model was well calibrated for mortality at six months but substantially under-predicted the risk of unfavorable outcome. Recalibration of the models resulted in small improvements in discrimination and excellent calibration for all models. The risk prediction models demonstrated sufficient statistical performance to support their use in research and audit but fell below the level required to guide individual patient decision-making. The published models for unfavorable outcome at six months had poor calibration in the UK critical care setting and the models recalibrated to this setting should be used in future research.

  5. A study of modelling simplifications in ground vibration predictions for railway traffic at grade

    NASA Astrophysics Data System (ADS)

    Germonpré, M.; Degrande, G.; Lombaert, G.

    2017-10-01

    Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.

  6. WRF-TMH: predicting transmembrane helix by fusing composition index and physicochemical properties of amino acids.

    PubMed

    Hayat, Maqsood; Khan, Asifullah

    2013-05-01

    Membrane proteins are prime constituents of a cell, acting as mediators between intra- and extracellular processes. The prediction of transmembrane (TM) helices and their topology provides essential information regarding the function and structure of membrane proteins. However, prediction of TM helices and their topology is a challenging issue in bioinformatics and computational biology due to experimental complexities and the lack of established structures. Therefore, the location and orientation of TM helix segments are predicted from topogenic sequences. In this regard, we propose the WRF-TMH model for effectively predicting TM helix segments. In this model, information is extracted from membrane protein sequences using a compositional index and physicochemical properties. The redundant and irrelevant features are eliminated through singular value decomposition. The selected features provided by these feature extraction strategies are then fused to develop a hybrid model. Weighted random forest is adopted as a classification approach. We used two benchmark datasets, including low- and high-resolution datasets. Tenfold cross-validation is employed to assess the performance of the WRF-TMH model at different levels, including per protein, per segment, and per residue. The success rates of the WRF-TMH model are quite promising and are the best reported so far on the same datasets. It is observed that the WRF-TMH model might play a substantial role and provide essential information for further structural and functional studies on membrane proteins. The accompanying web predictor is accessible at http://111.68.99.218/WRF-TMH/.

  7. Prediction of early summer rainfall over South China by a physical-empirical model

    NASA Astrophysics Data System (ADS)

    Yim, So-Young; Wang, Bin; Xing, Wen

    2014-10-01

    In early summer (May-June, MJ) the strongest rainfall belt of the northern hemisphere occurs over the East Asian (EA) subtropical front. During this period the South China (SC) rainfall reaches its annual peak and represents the maximum rainfall variability over EA. Hence we establish an SC rainfall index, which is the MJ mean precipitation averaged over 72 stations over SC (south of 28°N and east of 110°E) and superbly represents the leading empirical orthogonal function mode of MJ precipitation variability over EA. In order to predict SC rainfall, we established a physical-empirical model. Analysis of 34-year observations (1979-2012) reveals three physically consequential predictors. Plentiful SC rainfall is preceded in the previous winter by (a) a dipole sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (b) a tripolar SST tendency in the North Atlantic Ocean, and (c) a warming tendency in northern Asia. These precursors foreshadow an enhanced Philippine Sea subtropical High and Okhotsk High in early summer, which are controlling factors for enhanced subtropical frontal rainfall. The physical-empirical model built on these predictors achieves a cross-validated forecast correlation skill of 0.75 for 1979-2012. Surprisingly, this skill is substantially higher than the four dynamical models' ensemble prediction for the 1979-2010 period (0.15). The results here suggest that the low prediction skill of current dynamical models is largely due to model deficiencies, and that the dynamical prediction has large room for improvement.

  8. Suppression of Shear Banding and Transition to Necking and Homogeneous Flow in Nanoglass Nanopillars

    NASA Astrophysics Data System (ADS)

    Adibi, Sara; Branicio, Paulo S.; Joshi, Shailendra P.

    2015-10-01

    In order to improve the properties of metallic glasses (MG), a new type of MG structure, composed of nanoscale grains and referred to as nanoglass (NG), has recently been proposed. Here, we use large-scale molecular dynamics (MD) simulations of tensile loading to investigate the deformation and failure mechanisms of Cu64Zr36 NG nanopillars with a large, experimentally accessible diameter of 50 nm. Our results reveal NG ductility and failure by necking below the average glassy grain size of 20 nm, in contrast to brittle failure by shear band propagation in MG nanopillars. Moreover, the results predict substantially larger ductility in NG nanopillars compared with previous predictions of MD simulations of bulk NG models with columnar grains. The results, in excellent agreement with experimental data, highlight the substantial enhancement of plasticity induced in experimentally relevant MG samples by the use of nanoglass architectures and point to exciting novel applications of these materials.

  9. Development of a predictive limited sampling strategy for estimation of mycophenolic acid area under the concentration time curve in patients receiving concomitant sirolimus or cyclosporine.

    PubMed

    Figurski, Michal J; Nawrocki, Artur; Pescovitz, Mark D; Bouw, Rene; Shaw, Leslie M

    2008-08-01

    Limited sampling strategies for estimation of the area under the concentration-time curve (AUC) for mycophenolic acid (MPA) co-administered with sirolimus (SRL) have not been previously evaluated. The authors developed and validated 68 regression models for estimation of MPA AUC for two groups of patients, one with concomitant SRL (n = 24) and the second with concomitant cyclosporine (n = 14), using various combinations of time points between 0 and 4 hours after drug administration. To provide as robust a model as possible, a dataset-splitting method similar to a bootstrap was used. In this method, the dataset was randomly split in half 100 times. Each time, one half of the data was used to estimate the equation coefficients, and the other half was used to test and validate the models. Final models were obtained by calculating the median values of the coefficients. Substantial differences were found in the pharmacokinetics of MPA between these groups. The mean MPA AUC as well as the standard deviation was much greater in the SRL group, 56.4 +/- 23.5 mg.h/L, compared with 30.4 +/- 11.0 mg.h/L in the cyclosporine group (P < 0.001). Mean maximum concentration was also greater in the SRL group: 16.4 +/- 7.7 mg/L versus 11.7 +/- 7.1 mg/L (P < 0.005). The second absorption peak in the pharmacokinetic profile, presumed to result from enterohepatic recycling of glucuronide MPA, was observed in 70% of the profiles in the SRL group and in 35% of profiles from the cyclosporine group. Substantial differences in the predictive performance of the regression models, based on the same time points, were observed between the two groups. The best model for the SRL group was based on the 0 (trough), 40-minute, and 4-hour time points, with R2, root mean squared error (RMSE), and predictive performance values of 0.82, 10.0, and 78%, respectively. In the cyclosporine group, the best model was based on the 0, 40-minute, and 2-hour time points, with R2, RMSE, and predictive performance values of 0.86, 4.1, and 83%, respectively. The model with 2 hours as the last time point is also recommended for the SRL group for practical reasons, with the above parameters of 0.77, 11.3, and 69%, respectively.
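
    The limited sampling models described above are multiple linear regressions of the full AUC on a few timed concentrations, with coefficients taken as medians over repeated random half-splits. The sketch below reproduces that workflow on synthetic concentration data; the time points, sample size, and generated pharmacokinetics are assumptions, not the study's measurements.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 38  # patients
        # Synthetic concentrations (mg/L) at 0, 40 min, and 2 h, and a "true" AUC (mg.h/L)
        C0, C40, C120 = rng.gamma(2, 1, n), rng.gamma(4, 2, n), rng.gamma(3, 1.5, n)
        auc = 2.0 * C0 + 1.5 * C40 + 3.0 * C120 + rng.normal(0, 3, n)

        X = np.column_stack([np.ones(n), C0, C40, C120])
        coefs = []
        for _ in range(100):                      # repeated half-splitting, as in the study
            idx = rng.permutation(n)
            fit_idx = idx[: n // 2]
            b, *_ = np.linalg.lstsq(X[fit_idx], auc[fit_idx], rcond=None)
            coefs.append(b)
        b_final = np.median(coefs, axis=0)        # median coefficients give the final LSS equation
        print("AUC = %.2f + %.2f*C0 + %.2f*C40min + %.2f*C2h" % tuple(b_final))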

  10. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.

  11. Dynamic fMRI networks predict success in a behavioral weight loss program among older adults.

    PubMed

    Mokhtari, Fatemeh; Rejeski, W Jack; Zhu, Yingying; Wu, Guorong; Simpson, Sean L; Burdette, Jonathan H; Laurienti, Paul J

    2018-06-01

    More than one-third of adults in the United States are obese, with a higher prevalence among older adults. Obesity among older adults is a major cause of physical dysfunction, hypertension, diabetes, and coronary heart diseases. Many people who engage in lifestyle weight loss interventions fail to reach targeted goals for weight loss, and most will regain what was lost within 1-2 years following cessation of treatment. This variability in treatment efficacy suggests that there are important phenotypes predictive of success with intentional weight loss that could lead to tailored treatment regimen, an idea that is consistent with the concept of precision-based medicine. Although the identification of biochemical and metabolic phenotypes are one potential direction of research, neurobiological measures may prove useful as substantial behavioral change is necessary to achieve success in a lifestyle intervention. In the present study, we use dynamic brain networks from functional magnetic resonance imaging (fMRI) data to prospectively identify individuals most likely to succeed in a behavioral weight loss intervention. Brain imaging was performed in overweight or obese older adults (age: 65-79 years) who participated in an 18-month lifestyle weight loss intervention. Machine learning and functional brain networks were combined to produce multivariate prediction models. The prediction accuracy exceeded 95%, suggesting that there exists a consistent pattern of connectivity which correctly predicts success with weight loss at the individual level. Connectivity patterns that contributed to the prediction consisted of complex multivariate network components that substantially overlapped with known brain networks that are associated with behavior emergence, self-regulation, body awareness, and the sensory features of food. Future work on independent datasets and diverse populations is needed to corroborate our findings. Additionally, we believe that efforts can begin to examine whether these models have clinical utility in tailoring treatment. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Cheminformatics-aided pharmacovigilance: application to Stevens-Johnson Syndrome

    PubMed Central

    Low, Yen S; Caster, Ola; Bergvall, Tomas; Fourches, Denis; Zang, Xiaoling; Norén, G Niklas; Rusyn, Ivan; Edwards, Ralph

    2016-01-01

    Objective: Quantitative Structure-Activity Relationship (QSAR) models can predict adverse drug reactions (ADRs), and thus provide early warnings of potential hazards. Timely identification of potential safety concerns could protect patients and aid early diagnosis of ADRs among the exposed. Our objective was to determine whether global spontaneous reporting patterns might allow chemical substructures associated with Stevens-Johnson Syndrome (SJS) to be identified and utilized for ADR prediction by QSAR models. Materials and Methods: Using a reference set of 364 drugs having positive or negative reporting correlations with SJS in the VigiBase global repository of individual case safety reports (Uppsala Monitoring Center, Uppsala, Sweden), chemical descriptors were computed from drug molecular structures. Random Forest and Support Vector Machines methods were used to develop QSAR models, which were validated by external 5-fold cross validation. Models were employed for virtual screening of DrugBank to predict SJS actives and inactives, which were corroborated using knowledge bases like VigiBase, ChemoText, and MicroMedex (Truven Health Analytics Inc, Ann Arbor, Michigan). Results: We developed QSAR models that could accurately predict if drugs were associated with SJS (area under the curve of 75%–81%). Our 10 most active and inactive predictions were substantiated by SJS reports (or lack thereof) in the literature. Discussion: Interpretation of QSAR models in terms of significant chemical descriptors suggested novel SJS structural alerts. Conclusions: We have demonstrated that QSAR models can accurately identify SJS active and inactive drugs. Requiring chemical structures only, QSAR models provide effective computational means to flag potentially harmful drugs for subsequent targeted surveillance and pharmacoepidemiologic investigations. PMID:26499102
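
    The modeling step above is, at its core, a descriptor-matrix classifier evaluated by external cross-validation. The sketch below mirrors that with a random forest and 5-fold cross-validated AUC on synthetic descriptors; the descriptor values and the label-generating rule are placeholders, not VigiBase-derived data.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        n_drugs, n_descriptors = 364, 200
        X = rng.normal(size=(n_drugs, n_descriptors))      # stand-in chemical descriptors
        y = (X[:, :5].sum(axis=1) + rng.normal(size=n_drugs)) > 0  # synthetic SJS label

        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")   # external 5-fold CV
        print(f"5-fold CV AUC: {auc.mean():.2f} ± {auc.std():.2f}")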

  13. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    PubMed

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  14. Land-atmosphere coupling and climate prediction over the U.S. Southern Great Plains

    NASA Astrophysics Data System (ADS)

    Williams, Ian N.; Lu, Yaqiong; Kueppers, Lara M.; Riley, William J.; Biraud, Sebastien C.; Bagley, Justin E.; Torn, Margaret S.

    2016-10-01

    Biases in land-atmosphere coupling in climate models can contribute to climate prediction biases, but land models are rarely evaluated in the context of this coupling. We tested land-atmosphere coupling and explored effects of land surface parameterizations on climate prediction in a single-column version of the National Center for Atmospheric Research Community Earth System Model (CESM1.2.2) and an off-line Community Land Model (CLM4.5). The correlation between leaf area index (LAI) and surface evaporative fraction (ratio of latent to total turbulent heat flux) was substantially underpredicted compared to observations in the U.S. Southern Great Plains, while the correlation between soil moisture and evaporative fraction was overpredicted by CLM4.5. To estimate the impacts of these errors on climate prediction, we modified CLM4.5 by prescribing observed LAI, increasing soil resistance to evaporation, increasing minimum stomatal conductance, and increasing leaf reflectance. The modifications improved the predicted soil moisture-evaporative fraction (EF) and LAI-EF correlations in off-line CLM4.5 and reduced the root-mean-square error in summer 2 m air temperature and precipitation in the coupled model. The modifications had the largest effect on prediction during a drought in summer 2006, when a warm bias in daytime 2 m air temperature was reduced from +6°C to a smaller cold bias of -1.3°C, and a corresponding dry bias in precipitation was reduced from -111 mm to -23 mm. The role of vegetation in droughts and heat waves is underpredicted in CESM1.2.2, and improvements in land surface models can improve prediction of climate extremes.

  15. Predicting Progression from Mild Cognitive Impairment to Alzheimer's Dementia Using Clinical, MRI, and Plasma Biomarkers via Probabilistic Pattern Classification

    PubMed Central

    Korolev, Igor O.; Symonds, Laura L.; Bozoki, Andrea C.

    2016-01-01

    Background: Individuals with mild cognitive impairment (MCI) have a substantially increased risk of developing dementia due to Alzheimer's disease (AD). In this study, we developed a multivariate prognostic model for predicting MCI-to-dementia progression at the individual patient level. Methods: Using baseline data from 259 MCI patients and a probabilistic, kernel-based pattern classification approach, we trained a classifier to distinguish between patients who progressed to AD-type dementia (n = 139) and those who did not (n = 120) during a three-year follow-up period. More than 750 variables across four data sources were considered as potential predictors of progression. These data sources included risk factors, cognitive and functional assessments, structural magnetic resonance imaging (MRI) data, and plasma proteomic data. Predictive utility was assessed using a rigorous cross-validation framework. Results: Cognitive and functional markers were most predictive of progression, while plasma proteomic markers had limited predictive utility. The best performing model incorporated a combination of cognitive/functional markers and morphometric MRI measures and predicted progression with 80% accuracy (83% sensitivity, 76% specificity, AUC = 0.87). Predictors of progression included scores on the Alzheimer's Disease Assessment Scale, Rey Auditory Verbal Learning Test, and Functional Activities Questionnaire, as well as volume/cortical thickness of three brain regions (left hippocampus, middle temporal gyrus, and inferior parietal cortex). Calibration analysis revealed that the model is capable of generating probabilistic predictions that reliably reflect the actual risk of progression. Finally, we found that the predictive accuracy of the model varied with patient demographic, genetic, and clinical characteristics and could be further improved by taking into account the confidence of the predictions. Conclusions: We developed an accurate prognostic model for predicting MCI-to-dementia progression over a three-year period. The model utilizes widely available, cost-effective, non-invasive markers and can be used to improve patient selection in clinical trials and identify high-risk MCI patients for early treatment. PMID:26901338

  16. Critical Features of Fragment Libraries for Protein Structure Prediction

    PubMed Central

    dos Santos, Karina Baptista

    2017-01-01

    The use of fragment libraries is a popular approach among protein structure prediction methods and has proven to substantially improve the quality of predicted structures. However, some vital aspects of a fragment library that influence the accuracy of modeling a native structure remain to be determined. This study investigates some of these features. In particular, we analyze the effects of using secondary structure prediction to guide fragment selection, of different fragment sizes, and of structural clustering of fragments within libraries. To have a clearer view of how these factors affect protein structure prediction, we isolated the process of model building by fragment assembly from some common limitations associated with prediction methods, e.g., imprecise energy functions and optimization algorithms, by employing an exact structure-based objective function under a greedy algorithm. Our results indicate that shorter fragments reproduce the native structure more accurately than longer ones. Libraries composed of multiple fragment lengths generate even better structures, with longer fragments proving more useful at the beginning of the simulations. The use of many different fragment sizes shows little improvement when compared to predictions carried out with libraries that comprise only three different fragment sizes. Models obtained from libraries built using only sequence similarity are, on average, better than those built with a secondary structure prediction bias. However, we found that the use of secondary structure prediction allows greater reduction of the search space, which is invaluable for prediction methods. The results of this study can serve as critical guidelines for the use of fragment libraries in protein structure prediction. PMID:28085928

  17. Critical Features of Fragment Libraries for Protein Structure Prediction.

    PubMed

    Trevizani, Raphael; Custódio, Fábio Lima; Dos Santos, Karina Baptista; Dardenne, Laurent Emmanuel

    2017-01-01

    The use of fragment libraries is a popular approach among protein structure prediction methods and has proven to substantially improve the quality of predicted structures. However, some vital aspects of a fragment library that influence the accuracy of modeling a native structure remain to be determined. This study investigates some of these features. In particular, we analyze the effects of using secondary structure prediction to guide fragment selection, of different fragment sizes, and of structural clustering of fragments within libraries. To have a clearer view of how these factors affect protein structure prediction, we isolated the process of model building by fragment assembly from some common limitations associated with prediction methods, e.g., imprecise energy functions and optimization algorithms, by employing an exact structure-based objective function under a greedy algorithm. Our results indicate that shorter fragments reproduce the native structure more accurately than longer ones. Libraries composed of multiple fragment lengths generate even better structures, with longer fragments proving more useful at the beginning of the simulations. The use of many different fragment sizes shows little improvement when compared to predictions carried out with libraries that comprise only three different fragment sizes. Models obtained from libraries built using only sequence similarity are, on average, better than those built with a secondary structure prediction bias. However, we found that the use of secondary structure prediction allows greater reduction of the search space, which is invaluable for prediction methods. The results of this study can serve as critical guidelines for the use of fragment libraries in protein structure prediction.

  18. Predicting Urban Elementary Student Success and Passage on Ohio's High-Stakes Achievement Measures Using DIBELS Oral Reading Fluency and Informal Math Concepts and Applications: An Exploratory Study Employing Hierarchical Linear Modeling

    ERIC Educational Resources Information Center

    Merkle, Erich Robert

    2011-01-01

    Contemporary education is experiencing substantial reform across legislative, pedagogical, and assessment dimensions. The increase in school-based accountability systems has brought forth a culture where states, school districts, teachers, and individual students are required to demonstrate their efficacy towards improvement of the educational…

  19. Using field data to assess model predictions of surface and ground fuel consumption by wildfire in coniferous forests of California

    Treesearch

    Jamie Lydersen; Brandon M. Collins; Carol Ewell; Alicia Reiner; Jo Ann Fites; Christopher Dow; Patrick Gonzalez; David Saah; John Battles

    2014-01-01

    Inventories of greenhouse gas (GHG) emissions from wildfire provide essential information to the state of California, USA, and other governments that have enacted emission reductions. Wildfires can release a substantial amount of GHGs and other compounds to the atmosphere, so recent increases in fire activity may be increasing GHG emissions. Quantifying wildfire...

  20. Ecosystem Predictions with Approximate vs. Exact Light Fields

    DTIC Science & Technology

    2009-03-27

    and optically shallow waters for which bottom reflectance can substantially increase the irradiance available for photosynthesis and water heating...primary productivity, heating of water, and photochemical reactions. When modeling photosynthesis, which depends on the number of photons absorbed, it...irradiance, W m⁻² nm⁻¹, to quantum units, photons s⁻¹ m⁻² nm⁻¹. A wavelength-integrated measure of the total light available for photosynthesis

  1. Development of Multivariable Models to Predict and Benchmark Transfusion in Elective Surgery Supporting Patient Blood Management.

    PubMed

    Hayn, Dieter; Kreiner, Karl; Ebner, Hubert; Kastner, Peter; Breznik, Nada; Rzepka, Angelika; Hofmann, Axel; Gombotz, Hans; Schreier, Günter

    2017-06-14

    Blood transfusion is a highly prevalent procedure in hospitalized patients and in some clinical scenarios it has lifesaving potential. However, in most cases transfusion is administered to hemodynamically stable patients with no benefit, but increased odds of adverse patient outcomes and substantial direct and indirect cost. Therefore, the concept of Patient Blood Management has increasingly gained importance to pre-empt and reduce transfusion and to identify the optimal transfusion volume for an individual patient when transfusion is indicated. It was our aim to describe how predictive modeling and machine learning tools applied to pre-operative data can be used to predict the amount of red blood cells to be transfused during surgery and to prospectively optimize blood ordering schedules. In addition, the data derived from the predictive models should be used to benchmark different hospitals concerning their blood transfusion patterns. 6,530 case records obtained for elective surgeries from 16 centers taking part in two studies conducted in 2004-2005 and 2009-2010 were analyzed. Transfused red blood cell volume was predicted using random forests. Separate models were trained for the overall data, for each center, and for each of the two studies. Important characteristics of different models were compared with one another. Our results indicate that predictive modeling applied prior to surgery can predict the transfused volume of red blood cells more accurately (correlation coefficient cc = 0.61) than state-of-the-art algorithms (cc = 0.39). We found significantly different patterns of feature importance a) in different hospitals and b) between study 1 and study 2. We conclude that predictive modeling can be used to benchmark the importance of different features in models derived with data from different hospitals. This might help to optimize crucial processes in a specific hospital, even in other scenarios beyond Patient Blood Management.

  2. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    PubMed

    Ranganayaki, V; Deepa, S N

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems. This paper aims to avoid the occurrence of overfitting and underfitting problems. The selection of the number of hidden neurons is done in this paper employing 102 criteria; these evolved criteria are verified using the various computed error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied for wind speed prediction considering real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with the earlier models available in the literature.
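
    The core of the ensemble scheme above is simply averaging the forecasts of several independently trained network models. The sketch below does this with three differently sized feed-forward regressors standing in for the MLP/Madaline/BPN/PNN members, on a synthetic wind-speed series built from lagged samples; the series, lag length, and hidden-layer sizes are illustrative assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(6)
        # Synthetic wind-speed series; predict the next value from the previous 6 samples.
        t = np.arange(2000)
        wind = 8 + 2 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.5, t.size)
        lags = 6
        X = np.column_stack([wind[i : i + t.size - lags] for i in range(lags)])
        y = wind[lags:]

        split = 1500
        members = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
                   for h in (10, 20, 40)]           # stand-ins for the different ANN members
        preds = [m.fit(X[:split], y[:split]).predict(X[split:]) for m in members]
        ensemble = np.mean(preds, axis=0)           # ensemble forecast = average of member forecasts
        mae = np.mean(np.abs(ensemble - y[split:]))
        print(f"ensemble MAE: {mae:.3f} m/s")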

  3. Effect of tumor amplitude and frequency on 4D modeling of Vero4DRT system.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Hayata, Masahiro; Tsuda, Shintaro; Yamada, Kiyoshi; Nagata, Yasushi

    2017-01-01

    An important issue in indirect dynamic tumor tracking with the Vero4DRT system is the accuracy of the model predictions of the internal target position based on surrogate infrared (IR) marker measurement. We investigated the predictive uncertainty of 4D modeling using an external IR marker, focusing on the effect of the target and surrogate amplitudes and periods. A programmable respiratory motion table was used to simulate breathing-induced organ motion. Sinusoidal motion sequences were produced by a dynamic phantom with different amplitudes and periods. To investigate the 4D modeling error, the following amplitudes (peak-to-peak: 10-40 mm) and periods (2-8 s) were considered. The 95th percentile 4D modeling error (4D-E95%) between the detected and predicted target position (μ + 2SD) was calculated. 4D-E95% was linearly related to the target motion amplitude with a coefficient of determination R2 = 0.99 and ranged from 0.21 to 0.88 mm. The 4D modeling error ranged from 1.49 to 0.14 mm and gradually decreased with increasing target motion period. We analyzed the predictive error in 4D modeling and the error due to the amplitude and period of the target. The 4D modeling error substantially increased with increasing amplitude and decreasing period of the target motion.
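
    The error metric reported above, the 95th percentile difference between detected and predicted target positions, is straightforward to compute for a sinusoidal trace. The sketch below does so with a simple lag between prediction and detection as a stand-in error source; the amplitude, period, and latency values are illustrative assumptions, not properties of the Vero4DRT system.

        import numpy as np

        # Sinusoidal target motion: peak-to-peak amplitude A (mm), period T (s).
        A, T, dt = 20.0, 4.0, 0.02
        t = np.arange(0, 60, dt)
        target = (A / 2) * np.sin(2 * np.pi * t / T)

        # Assume the 4D model's prediction lags the detected position by a small latency
        # (a purely illustrative error source).
        latency = 0.05  # s
        predicted = (A / 2) * np.sin(2 * np.pi * (t - latency) / T)

        err = np.abs(predicted - target)
        e95 = np.percentile(err, 95)
        print(f"95th percentile 4D modelling error: {e95:.2f} mm")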

  4. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    PubMed Central

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems. This paper aims to avoid the occurrence of overfitting and underfitting problems. The selection of the number of hidden neurons is done in this paper employing 102 criteria; these evolved criteria are verified using the various computed error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied for wind speed prediction considering real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with the earlier models available in the literature. PMID:27034973

  5. A multi-step reaction model for ignition of fully-dense Al-CuO nanocomposite powders

    NASA Astrophysics Data System (ADS)

    Stamatis, D.; Ermoline, A.; Dreizin, E. L.

    2012-12-01

    A multi-step reaction model is developed to describe heterogeneous processes occurring upon heating of an Al-CuO nanocomposite material prepared by arrested reactive milling. The reaction model couples a previously derived Cabrera-Mott oxidation mechanism describing initial, low temperature processes and an aluminium oxidation model including formation of different alumina polymorphs at increased film thicknesses and higher temperatures. The reaction model is tuned using traces measured by differential scanning calorimetry. Ignition is studied for thin powder layers and individual particles using respectively the heated filament (heating rates of 10³-10⁴ K s⁻¹) and laser ignition (heating rate ∼10⁶ K s⁻¹) experiments. The developed heterogeneous reaction model predicts a sharp temperature increase, which can be associated with ignition when the laser power approaches the experimental ignition threshold. In experiments, particles ignited by the laser beam are observed to explode, indicating a substantial gas release accompanying ignition. For the heated filament experiments, the model predicts exothermic reactions at the temperatures at which ignition is observed experimentally; however, strong thermal contact between the metal filament and powder prevents the model from predicting the thermal runaway. It is suggested that oxygen gas release from decomposing CuO, as observed from particles exploding upon ignition in the laser beam, disrupts the thermal contact of the powder and filament; this phenomenon must be included in the filament ignition model to enable prediction of the temperature runaway.

  6. Changing rates of adenocarcinoma and adenosquamous carcinoma of the cervix in England.

    PubMed

    Sasieni, P; Adams, J

    2001-05-12

    A recent analysis showed little or no effect of screening on the incidence of adenocarcinoma of the cervix between 1971 and 1992. We have used additional data on cancers diagnosed in 1993-94 in England and up to 1997 in five English cancer registries to investigate more recent trends. After imputing the number of adenocarcinomas in women with unknown histology, we fitted an age-cohort model to 8062 adenocarcinomas of the cervix diagnosed in England between 1971 and 1987. Predictions from this model were applied to the more recent data on 5854 cases. Residual effects were plotted against year of diagnosis in each of four age groups. We estimated the underlying risk of cervical adenocarcinoma to be 14 times (95% CI 11-19) greater in women born in the early 1960s than in cohorts born before 1935. An age-cohort model fitted the data for England well up to 1987, but substantially overestimated the numbers of adenocarcinomas in young women from 1990 onwards. In 1996-97 the incidence rate in women aged 25-54 years was less than 40% of that predicted from the age-cohort model. The substantial increase in cervical adenocarcinoma in recent years is largely a birth-cohort effect, presumably associated with greater exposure to human papillomavirus after the sexual revolution in the 1960s. The relative decline in younger women observed in more recent years suggests an effect of cervical screening.
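
    A hedged sketch of the age-cohort fitting and projection step using a Poisson GLM in statsmodels; the file name, column names, and the person-years exposure are assumptions, not the authors' data:

        # Age-cohort Poisson model fitted to early years, then used to compute
        # observed/expected ratios for later years (a sketch under assumed data).
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("adenocarcinoma_cells.csv")   # hypothetical: one row per age x cohort x year
        fit = df[df["year"] <= 1987]
        later = df[df["year"] > 1987].copy()           # cohorts must also appear in the fitting data

        model = smf.glm("cases ~ C(age_group) + C(cohort)", data=fit,
                        family=sm.families.Poisson(),
                        exposure=fit["person_years"]).fit()

        later["expected"] = model.predict(later, exposure=later["person_years"])
        print("observed/expected by age group:")
        print(later.groupby("age_group").apply(lambda g: g["cases"].sum() / g["expected"].sum()))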

  7. Universally Sloppy Parameter Sensitivities in Systems Biology Models

    PubMed Central

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-01-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568

  8. Universally sloppy parameter sensitivities in systems biology models.

    PubMed

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
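
    A toy illustration of a sloppy sensitivity spectrum, using a sum of two exponentials (a classic sloppy model, not one of the systems biology models tested above): the eigenvalues of J^T J, a Gauss-Newton approximation to the fit Hessian, typically span several decades:

        # Eigenvalue spectrum of J^T J for a two-exponential model; all settings
        # here are assumptions chosen for illustration.
        import numpy as np

        t = np.linspace(0, 5, 200)
        theta0 = np.log([1.0, 0.7, 2.0, 1.3])     # log rates and log amplitudes

        def model(theta):
            k1, k2, a1, a2 = np.exp(theta)
            return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

        # central finite-difference Jacobian of the model output w.r.t. log-parameters
        eps = 1e-6
        J = np.column_stack([
            (model(theta0 + eps * np.eye(4)[i]) - model(theta0 - eps * np.eye(4)[i])) / (2 * eps)
            for i in range(4)
        ])

        eigvals = np.linalg.eigvalsh(J.T @ J)[::-1]
        print("eigenvalues:", eigvals)
        print("decades spanned:", np.log10(eigvals[0] / eigvals[-1]))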

  9. Developing global regression models for metabolite concentration prediction regardless of cell line.

    PubMed

    André, Silvère; Lagresle, Sylvain; Da Sliva, Anthony; Heimendinger, Pierre; Hannas, Zahia; Calvosa, Éric; Duponchel, Ludovic

    2017-11-01

    Following the Process Analytical Technology (PAT) initiative of the Food and Drug Administration (FDA), drug manufacturers are encouraged to develop innovative techniques in order to monitor and understand their processes better. Within this framework, it has been demonstrated that Raman spectroscopy coupled with chemometric tools allows critical parameters of mammalian cell cultures to be predicted in-line and in real time. However, the development of robust and predictive regression models clearly requires many batches in order to take into account inter-batch variability and enhance model accuracy. Nevertheless, this heavy procedure has to be repeated for every new cell line, involving many resources. This is why we propose in this paper to develop global regression models taking into account different cell lines. Such models can finally be transferred to any culture of the cells involved. This article first demonstrates the feasibility of developing regression models, not only for mammalian cell lines (CHO and HeLa cell cultures), but also for insect cell lines (Sf9 cell cultures). Then global regression models are generated, based on CHO cells, HeLa cells, and Sf9 cells. Finally, these models are evaluated considering a fourth cell line (HEK cells). In addition to suitable predictions of glucose and lactate concentration in HEK cell cultures, we show that by adding a single HEK-cell culture to the calibration set, the predictive ability of the regression models is substantially increased. In this way, we demonstrate that, using global models, it is not necessary to consider many cultures of a new cell line in order to obtain accurate models. Biotechnol. Bioeng. 2017;114: 2550-2559. © 2017 Wiley Periodicals, Inc.
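
    A hedged sketch of the global-calibration idea using partial least squares in scikit-learn; the arrays are synthetic stand-ins for Raman spectra and reference glucose values, and the pooling of three cell lines plus one culture of a new line mirrors the design described above only schematically:

        # Global PLS calibration across cell lines plus one culture of a new line
        # (synthetic data; not the authors' spectra or preprocessing).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(1)

        def fake_culture(n, offset):
            X = rng.normal(size=(n, 300)) + offset          # 300 "wavenumbers"
            y = 2.0 * X[:, 42] + X[:, 120] + rng.normal(scale=0.1, size=n)  # glucose, g/L
            return X, y

        X_cho, y_cho = fake_culture(80, 0.0)
        X_hela, y_hela = fake_culture(80, 0.3)
        X_sf9, y_sf9 = fake_culture(80, -0.2)
        X_hek_cal, y_hek_cal = fake_culture(20, 0.5)        # single HEK culture added to calibration
        X_hek_test, y_hek_test = fake_culture(40, 0.5)

        X_cal = np.vstack([X_cho, X_hela, X_sf9, X_hek_cal])
        y_cal = np.concatenate([y_cho, y_hela, y_sf9, y_hek_cal])

        pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
        rmsep = mean_squared_error(y_hek_test, pls.predict(X_hek_test).ravel()) ** 0.5
        print("RMSEP on the new cell line:", round(rmsep, 3))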

  10. Some aspects of core formation in Mercury

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.

    1976-01-01

    Some questions dealing with the nature and history of a large metallic core within Mercury are considered. These include the existence of a core, its size, whether it is fluid or solid, the timescale for core formation, the geological consequences of core formation, and whether such consequences are consistent with the surface geology. Several indirect lines of evidence are discussed which suggest the presence of a large iron-rich core. A core-formation model is examined in which core infall is accompanied by an increase of 17 km in planetary radius, an increase of 700 K in mean internal temperature, and substantial melting of the mantle. It is argued that if the core differentiated from an originally homogeneous planet, that event must have predated the oldest geological units comprising most of the planetary surface. A convective dynamo model for the source of Mercury's magnetic field is shown to conflict with cosmochemical models that do not predict a substantial radiogenic heat source in the core.

  11. Can routine chest radiography be used to diagnose mild COPD? A nested case-control study.

    PubMed

    den Harder, A M; Snoek, A M; Leiner, T; Suyker, W J; de Heer, L M; Budde, R P J; Lammers, J W J; de Jong, P A; Gondrie, M J A

    2017-07-01

    To determine whether mild-stage chronic obstructive pulmonary disease (COPD) can be detected on chest radiography without substantial overdiagnosis. A retrospective nested case-control study (case:control, 1:1) was performed in 783 patients scheduled for cardiothoracic surgery who underwent both spirometry and a chest radiograph preoperatively. The diagnostic accuracy of chest radiography for diagnosing mild COPD was investigated using objective measurements and overall appearance specific for COPD on chest radiography. Inter-observer variability was investigated, and variables with a kappa >0.40 as well as baseline characteristics were used to build a diagnostic model aimed at achieving a high positive predictive value (PPV). Twenty percent (155/783) of patients had COPD. The PPV of overall appearance specific for COPD alone was low (37-55%). Factors in the diagnostic model were age, type of surgery, gender, distance of the right diaphragm apex to the first rib, retrosternal space, sternodiaphragmatic angle, maximum height of the right diaphragm (lateral view), and subjective impression of COPD (using both views). The model resulted in a PPV of 100%, a negative predictive value (NPV) of 82%, a sensitivity of 10%, and a specificity of 100%, with an area under the curve of 0.811. Detection of mild COPD without substantial overdiagnosis was not feasible on chest radiographs in our cohort. Copyright © 2017 Elsevier B.V. All rights reserved.
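
    A sketch of a high-PPV diagnostic model of this kind, assuming a hypothetical file and column names for the radiographic measurements listed above; raising the classification threshold trades sensitivity for positive predictive value:

        # Logistic diagnostic model tuned for high PPV (column names are hypothetical).
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        df = pd.read_csv("radiograph_features.csv")     # hypothetical file
        features = ["age", "male", "cardiac_surgery", "diaphragm_to_first_rib",
                    "retrosternal_space", "sternodiaphragmatic_angle",
                    "max_right_diaphragm_height", "subjective_copd"]
        X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["copd"],
                                                  stratify=df["copd"], random_state=0)

        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        p = clf.predict_proba(X_te)[:, 1]

        threshold = 0.9                                  # high cut-off favours PPV over sensitivity
        pred = p >= threshold
        tp = np.sum(pred & (y_te == 1)); fp = np.sum(pred & (y_te == 0))
        fn = np.sum(~pred & (y_te == 1)); tn = np.sum(~pred & (y_te == 0))
        print("PPV:", tp / max(tp + fp, 1), "NPV:", tn / max(tn + fn, 1),
              "sensitivity:", tp / max(tp + fn, 1))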

  12. Advancing coastal ocean modelling, analysis, and prediction for the US Integrated Ocean Observing System

    USGS Publications Warehouse

    Wilkin, John L.; Rosenfeld, Leslie; Allen, Arthur; Baltes, Rebecca; Baptista, Antonio; He, Ruoying; Hogan, Patrick; Kurapov, Alexander; Mehra, Avichal; Quintrell, Josie; Schwab, David; Signell, Richard; Smith, Jane

    2017-01-01

    This paper outlines strategies that would advance coastal ocean modelling, analysis and prediction as a complement to the observing and data management activities of the coastal components of the US Integrated Ocean Observing System (IOOS®) and the Global Ocean Observing System (GOOS). The views presented are the consensus of a group of US-based researchers with a cross-section of coastal oceanography and ocean modelling expertise and community representation drawn from Regional and US Federal partners in IOOS. Priorities for research and development are suggested that would enhance the value of IOOS observations through model-based synthesis, deliver better model-based information products, and assist the design, evaluation, and operation of the observing system itself. The proposed priorities are: model coupling, data assimilation, nearshore processes, cyberinfrastructure and model skill assessment, modelling for observing system design, evaluation and operation, ensemble prediction, and fast predictors. Approaches are suggested to accomplish substantial progress in a 3–8-year timeframe. In addition, the group proposes steps to promote collaboration between research and operations groups in Regional Associations, US Federal Agencies, and the international ocean research community in general that would foster coordination on scientific and technical issues, and strengthen federal–academic partnerships benefiting IOOS stakeholders and end users.

  13. Incorporating spatial autocorrelation into species distribution models alters forecasts of climate-mediated range shifts.

    PubMed

    Crase, Beth; Liedloff, Adam; Vesk, Peter A; Fukuda, Yusuke; Wintle, Brendan A

    2014-08-01

    Species distribution models (SDMs) are widely used to forecast changes in the spatial distributions of species and communities in response to climate change. However, spatial autocorrelation (SA) is rarely accounted for in these models, despite its ubiquity in broad-scale ecological data. While spatial autocorrelation in model residuals is known to result in biased parameter estimates and the inflation of type I errors, the influence of unmodeled SA on species' range forecasts is poorly understood. Here we quantify how accounting for SA in SDMs influences the magnitude of range shift forecasts produced by SDMs for multiple climate change scenarios. SDMs were fitted to simulated data with a known autocorrelation structure, and to field observations of three mangrove communities from northern Australia displaying strong spatial autocorrelation. Three modeling approaches were implemented: environment-only models (most frequently applied in species' range forecasts), and two approaches that incorporate SA; autologistic models and residuals autocovariate (RAC) models. Differences in forecasts among modeling approaches and climate scenarios were quantified. While all model predictions at the current time closely matched that of the actual current distribution of the mangrove communities, under the climate change scenarios environment-only models forecast substantially greater range shifts than models incorporating SA. Furthermore, the magnitude of these differences intensified with increasing increments of climate change across the scenarios. When models do not account for SA, forecasts of species' range shifts indicate more extreme impacts of climate change, compared to models that explicitly account for SA. Therefore, where biological or population processes induce substantial autocorrelation in the distribution of organisms, and this is not modeled, model predictions will be inaccurate. These results have global importance for conservation efforts as inaccurate forecasts lead to ineffective prioritization of conservation activities and potentially to avoidable species extinctions. © 2014 John Wiley & Sons Ltd.
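
    A minimal sketch of the residuals-autocovariate (RAC) idea on synthetic data: fit an environment-only logistic model, average its residuals over spatial neighbours, and refit with that autocovariate as an extra predictor (the neighbourhood radius and data are assumptions):

        # Environment-only model vs. residuals-autocovariate (RAC) model on synthetic data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 500
        coords = rng.uniform(0, 100, size=(n, 2))          # site locations
        env = rng.normal(size=(n, 3))                      # environmental predictors
        presence = (env[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)

        env_only = LogisticRegression().fit(env, presence)
        resid = presence - env_only.predict_proba(env)[:, 1]

        # autocovariate: mean residual of neighbours within a 10-unit radius (assumed)
        dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
        neigh = (dist < 10) & (dist > 0)
        autocov = np.array([resid[row].mean() if row.any() else 0.0 for row in neigh])

        rac = LogisticRegression().fit(np.column_stack([env, autocov]), presence)
        print("environment-only coefficients:", env_only.coef_.round(2))
        print("RAC coefficients (last = autocovariate):", rac.coef_.round(2))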

  14. Multivariate prediction of motor diagnosis in Huntington's disease: 12 years of PREDICT-HD.

    PubMed

    Long, Jeffrey D; Paulsen, Jane S

    2015-10-01

    It is well known in Huntington's disease that cytosine-adenine-guanine expansion and age at study entry are predictive of the timing of motor diagnosis. The goal of this study was to assess whether additional motor, imaging, cognitive, functional, psychiatric, and demographic variables measured at study entry increased the ability to predict the risk of motor diagnosis over 12 years. One thousand seventy-eight Huntington's disease gene-expanded carriers (64% female) from the Neurobiological Predictors of Huntington's Disease study were followed up for up to 12 y (mean = 5, standard deviation = 3.3) covering 2002 to 2014. No one had a motor diagnosis at study entry, but 225 (21%) carriers prospectively received a motor diagnosis. Analysis was performed with random survival forests, which is a machine learning method for right-censored data. Adding 34 variables along with cytosine-adenine-guanine and age substantially increased predictive accuracy relative to cytosine-adenine-guanine and age alone. Adding six of the common motor and cognitive variables (total motor score, diagnostic confidence level, Symbol Digit Modalities Test, three Stroop tests) resulted in lower predictive accuracy than the full set, but still had twice the 5-y predictive accuracy than when using cytosine-adenine-guanine and age alone. Additional analysis suggested interactions and nonlinear effects that were characterized in a post hoc Cox regression model. Measurement of clinical variables can substantially increase the accuracy of predicting motor diagnosis over and above cytosine-adenine-guanine and age (and their interaction). Estimated probabilities can be used to characterize progression level and aid in future studies' sample selection. © 2015 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society.
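
    A sketch of a post hoc Cox regression for time to motor diagnosis with right-censoring, using lifelines rather than the random-survival-forest software of the study; the file and column names are hypothetical:

        # Post hoc Cox model for time to motor diagnosis (hypothetical columns).
        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.read_csv("predict_hd_baseline.csv")   # hypothetical file
        cols = ["years_followed", "motor_diagnosis",  # duration and 0/1 event indicator
                "cag", "age", "total_motor_score", "diagnostic_confidence",
                "sdmt", "stroop_color", "stroop_word", "stroop_interference"]

        cph = CoxPHFitter()
        cph.fit(df[cols], duration_col="years_followed", event_col="motor_diagnosis")
        cph.print_summary()

        # predicted 5-year probability of remaining diagnosis-free for a few carriers
        new = df[cols].drop(columns=["years_followed", "motor_diagnosis"]).head()
        print(cph.predict_survival_function(new, times=[5.0]))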

  15. BOADICEA breast cancer risk prediction model: updates to cancer incidences, tumour pathology and web interface

    PubMed Central

    Lee, A J; Cunningham, A P; Kuchenbaecker, K B; Mavaddat, N; Easton, D F; Antoniou, A C

    2014-01-01

    Background: The Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) is a risk prediction model that is used to compute probabilities of carrying mutations in the high-risk breast and ovarian cancer susceptibility genes BRCA1 and BRCA2, and to estimate the future risks of developing breast or ovarian cancer. In this paper, we describe updates to the BOADICEA model that extend its capabilities, make it easier to use in a clinical setting and yield more accurate predictions. Methods: We describe: (1) updates to the statistical model to include cancer incidences from multiple populations; (2) updates to the distributions of tumour pathology characteristics using new data on BRCA1 and BRCA2 mutation carriers and women with breast cancer from the general population; (3) improvements to the computational efficiency of the algorithm so that risk calculations now run substantially faster; and (4) updates to the model's web interface to accommodate these new features and to make it easier to use in a clinical setting. Results: We present results derived using the updated model, and demonstrate that the changes have a significant impact on risk predictions. Conclusion: All updates have been implemented in a new version of the BOADICEA web interface that is now available for general use: http://ccge.medschl.cam.ac.uk/boadicea/. PMID:24346285

  16. ELECTROCHEMISTRY AND ON-CELL REFORMATION MODELING FOR SOLID OXIDE FUEL CELL STACKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Recknagle, Kurtis P.; Jarboe, Daniel T.; Johnson, Kenneth I.

    2007-01-16

    Providing adequate and efficient cooling schemes for solid-oxide-fuel-cell (SOFC) stacks continues to be a challenge coincident with the development of larger, more powerful stacks. The endothermic steam-methane reformation reaction can provide cooling and improved system efficiency when performed directly on the electrochemically active anode. Rapid kinetics of the endothermic reaction typically causes a localized temperature depression on the anode near the fuel inlet. It is desirable to extend the endothermic effect over more of the cell area and mitigate the associated differences in temperature on the cell to alleviate subsequent thermal stresses. In this study, modeling tools validated for the prediction of fuel use, on-cell methane reforming, and the distribution of temperature within SOFC stacks are employed to provide direction for modifying the catalytic activity of anode materials to control the methane conversion rate. Improvements in thermal management that can be achieved through on-cell reforming are predicted and discussed. Two operating scenarios are considered: one in which the methane fuel is fully pre-reformed, and another in which a substantial percentage of the methane is reformed on-cell. For the latter, a range of catalytic activity is considered and the predicted thermal effects on the cell are presented. Simulations of the cell electrochemical and thermal performance with and without on-cell reforming, including structural analyses, show a substantial decrease in thermal stresses for an on-cell reforming case with slowed methane conversion.

  17. Acquisition and extinction in autoshaping.

    PubMed

    Kakade, Sham; Dayan, Peter

    2002-07-01

    C. R. Gallistel and J. Gibbon (2000) presented quantitative data on the speed with which animals acquire behavioral responses during autoshaping, together with a statistical model of learning intended to account for them. Although this model captures the form of the dependencies among critical variables, its detailed predictions are substantially at variance with the data. In the present article, further key data on the speed of acquisition are used to motivate an alternative model of learning, in which animals can be interpreted as paying different amounts of attention to stimuli according to estimates of their differential reliabilities as predictors.
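
    A minimal sketch, in the spirit of (but not identical to) the attention-based account described above, of a delta rule whose learning rate is scaled by an associability term that grows when a stimulus has recently been a reliable predictor; all constants are assumptions:

        # Attention-weighted delta rule during acquisition (illustrative constants only).
        w, alpha = 0.0, 0.5            # associative weight and attention/associability
        lr, decay = 0.3, 0.9           # learning rate and associability decay (assumed)

        for trial in range(200):
            reward = 1.0               # CS is always followed by reward during acquisition
            delta = reward - w
            w += lr * alpha * delta                                 # attention scales the update
            alpha = decay * alpha + (1 - decay) * (1 - abs(delta))  # more attention when reliable
            if trial % 50 == 0:
                print(f"trial {trial:3d}  w = {w:.2f}  alpha = {alpha:.2f}")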

  18. Predicting self-reported research misconduct and questionable research practices in university students using an augmented Theory of Planned Behavior

    PubMed Central

    Rajah-Kanagasabai, Camilla J.; Roberts, Lynne D.

    2015-01-01

    This study examined the utility of the Theory of Planned Behavior model, augmented by descriptive norms and justifications, for predicting self-reported research misconduct and questionable research practices in university students. A convenience sample of 205 research active Western Australian university students (47 male, 158 female, ages 18–53 years, M = 22, SD = 4.78) completed an online survey. There was a low level of engagement in research misconduct, with approximately one in seven students reporting data fabrication and one in eight data falsification. Path analysis and model testing in LISREL supported a parsimonious two step mediation model, providing good fit to the data. After controlling for social desirability, the effect of attitudes, subjective norms, descriptive norms and perceived behavioral control on student engagement in research misconduct and questionable research practices was mediated by justifications and then intention. This revised augmented model accounted for a substantial 40.8% of the variance in student engagement in research misconduct and questionable research practices, demonstrating its predictive utility. The model can be used to target interventions aimed at reducing student engagement in research misconduct and questionable research practices. PMID:25983709

  19. Predicting self-reported research misconduct and questionable research practices in university students using an augmented Theory of Planned Behavior.

    PubMed

    Rajah-Kanagasabai, Camilla J; Roberts, Lynne D

    2015-01-01

    This study examined the utility of the Theory of Planned Behavior model, augmented by descriptive norms and justifications, for predicting self-reported research misconduct and questionable research practices in university students. A convenience sample of 205 research active Western Australian university students (47 male, 158 female, ages 18-53 years, M = 22, SD = 4.78) completed an online survey. There was a low level of engagement in research misconduct, with approximately one in seven students reporting data fabrication and one in eight data falsification. Path analysis and model testing in LISREL supported a parsimonious two step mediation model, providing good fit to the data. After controlling for social desirability, the effect of attitudes, subjective norms, descriptive norms and perceived behavioral control on student engagement in research misconduct and questionable research practices was mediated by justifications and then intention. This revised augmented model accounted for a substantial 40.8% of the variance in student engagement in research misconduct and questionable research practices, demonstrating its predictive utility. The model can be used to target interventions aimed at reducing student engagement in research misconduct and questionable research practices.

  20. Green roof hydrologic performance and modeling: a review.

    PubMed

    Li, Yanling; Babcock, Roger W

    2014-01-01

    Green roofs reduce runoff from impervious surfaces in urban development. This paper reviews the technical literature on green roof hydrology. Laboratory experiments and field measurements have shown that green roofs can reduce stormwater runoff volume by 30 to 86%, reduce peak flow rate by 22 to 93% and delay the peak flow by 0 to 30 min and thereby decrease pollution, flooding and erosion during precipitation events. However, the effectiveness can vary substantially due to design characteristics making performance predictions difficult. Evaluation of the most recently published study findings indicates that the major factors affecting green roof hydrology are precipitation volume, precipitation dynamics, antecedent conditions, growth medium, plant species, and roof slope. This paper also evaluates the computer models commonly used to simulate hydrologic processes for green roofs, including stormwater management model, soil water atmosphere and plant, SWMS-2D, HYDRUS, and other models that are shown to be effective for predicting precipitation response and economic benefits. The review findings indicate that green roofs are effective for reduction of runoff volume and peak flow, and delay of peak flow, however, no tool or model is available to predict expected performance for any given anticipated system based on design parameters that directly affect green roof hydrology.

  1. Ross River virus and Barmah Forest virus infections: a review of history, ecology, and predictive models, with implications for tropical northern Australia.

    PubMed

    Jacups, Susan P; Whelan, Peter I; Currie, Bart J

    2008-04-01

    The purpose of the present article is to present a review of the Ross River virus (RRV) and Barmah Forest virus (BFV) literature in relation to potential implications for future disease in tropical northern Australia. Ross River virus infection is the most common and most widespread arboviral disease in Australia, with an average of 4,800 national notifications annually. Of recent concern is the sudden rise in BFV infections; the 2005-2006 summer marked the largest BFV epidemic on record in Australia, with 1,895 notifications. Although not life-threatening, infection with either virus can cause arthritis, myalgia, and fatigue for 6 months or longer, resulting in substantial morbidity and economic impact. The geographic distribution of mosquito species and their seasonal activity is determined in large part by temperature and rainfall. Predictive models can be useful tools in providing early warning systems for epidemics of RRV and BFV infection. Various models have been developed to predict RRV outbreaks, but these appear to be mostly only regionally valid, being dependent on local ecological factors. Difficulties have arisen in developing useful models for the tropical northern parts of Australia, and to date no models have been developed for the Northern Territory. Only one model has been developed for predicting BFV infections using climate and tide variables. It is predicted that the exacerbation of current greenhouse conditions will result in longer periods of high mosquito activity in the tropical regions where RRV and BFV are already common. In addition, the endemic locations may expand further within temperate regions, and epidemics may become more frequent in those areas. Further development of predictive models should benefit public health planning by providing early warning systems of RRV and BFV infection outbreaks in different geographical locations.

  2. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals

    PubMed Central

    Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A.; Bonomi, Alberto G.; Moore, Jonathan P.; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45-second self-test, which can be conducted anywhere. A criterion-validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated this test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase normalized for height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI and sex, age, and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min⁻¹ and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery normalized for height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min⁻¹, and substantial repeatability (ICC = 0.93). It also had a higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45 s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included. PMID:27959935

  3. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals.

    PubMed

    Sartor, Francesco; Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A; Bonomi, Alberto G; Moore, Jonathan P; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45-second self-test, which can be conducted anywhere. A criterion-validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated this test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase normalized for height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI and sex, age, and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min⁻¹ and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery normalized for height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min⁻¹, and substantial repeatability (ICC = 0.93). It also had a higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45 s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included.
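
    A hedged sketch of the best-performing type of model: ordinary least squares on the HR-recovery intercept normalised by height² and age², scored by leave-one-out cross-validation (file and column names are hypothetical):

        # Leave-one-out evaluation of a CRF regression (one possible reading of the
        # normalisation; data and columns are assumptions).
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        df = pd.read_csv("self_test_hr.csv")           # hypothetical file
        X = pd.DataFrame({
            "hr_recovery_intercept_norm":
                df["hr_recovery_intercept"] / (df["height_m"] ** 2 * df["age"] ** 2),
        })
        y = df["vo2max_l_min"]

        pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
        cv_error = np.sqrt(np.mean((y - pred) ** 2))
        print("LOO RMSE (L/min):", round(cv_error, 3))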

  4. Predictive aging results for cable materials in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, K.T.; Clough, R.L.

    1990-11-01

    In this report, we provide a detailed discussion of the methodology for predicting cable degradation versus dose rate, temperature, and exposure time, and its application to data obtained on a number of additional nuclear power plant cable insulation (a hypalon, a silicone rubber, and two ethylenetetrafluoroethylenes) and jacket (a hypalon) materials. We then show that the predicted, low-dose-rate results for our materials are in excellent agreement with long-term (7 to 9 years), low-dose-rate results recently obtained for the same material types actually aged under nuclear power plant conditions. Based on a combination of the modelling and long-term results, we find indications of reasonably similar degradation responses among several different commercial formulations for each of the following "generic" materials: hypalon, ethylenetetrafluoroethylene, silicone rubber, and PVC. If such "generic" behavior can be further substantiated through modelling and long-term results on additional formulations, predictions of cable life for other commercial materials of the same generic types would be greatly facilitated. Finally, to aid utilities in their cable life extension decisions, we utilize our modelling results to generate lifetime prediction curves for the materials modelled to date. These curves plot expected material lifetime versus dose rate and temperature down to the levels of interest to nuclear power plant aging. 18 refs., 30 figs., 3 tabs.

  5. Comparison of output-based approaches used to substantiate bovine tuberculosis free status in Danish cattle herds.

    PubMed

    Foddai, Alessandro; Nielsen, Liza Rosenbaum; Willeberg, Preben; Alban, Lis

    2015-09-01

    We compared two published studies based on different output-based surveillance models, which were used for evaluating the performance of two meat inspection systems in cattle and to substantiate freedom from bovine tuberculosis (bTB) in Denmark. The systems were the current meat inspection methods (CMI) vs. the visual-only inspection (VOI). In one study, the surveillance system sensitivity (SSe) was estimated to substantiate the bTB free status. The other study used SSe in the estimation of the probability of freedom (PFree), based on the epidemiological concept of negative predictive value to substantiate the bTB free status. Both studies found that changing from CMI to VOI would markedly decrease the SSe. However, the two studies reported diverging conclusions regarding the effect on the substantiation of Denmark as a bTB free country, if VOI were to be introduced. The objectives of this work were: (a) to investigate the reasons why conclusions based on the two models differed, and (b) to create a hybrid model based on elements from both studies to evaluate the impact of a change from CMI to VOI. The hybrid model was based on the PFree approach to substantiate freedom from bTB and was parametrized with inputs according to the newest available information. The PFree was updated on an annual basis for each of 42 years of test-negative surveillance data (1995-2037), while assuming a low (<1%) annual probability of introduction of bTB into Danish cattle herds. The most important reasons for the difference between the study conclusions were: the approach chosen to substantiate the bTB free status (SSe vs. PFree) and the number of years of surveillance data considered. With the hybrid model, the PFree reached a level >95% after the first year of surveillance and remained ≥96% with both the CMI and VOI systems until the end of the analyzed period. It is appropriate to use the PFree of the surveillance system to substantiate confidence in bTB free status, when test-negative surveillance results can be documented over an extended period of time, while maintaining a low probability of introduction of bTB into the cattle population. For Denmark, the probability of introduction of bTB should be kept <1% on an annual basis to sustain the high confidence in freedom over time. The results could be considered when deciding if the CMI can be replaced by VOI in cattle abattoirs of countries for which bTB freedom can be demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.
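
    One common formulation of the annual probability-of-freedom update in output-based surveillance, shown as a sketch rather than the authors' exact hybrid model: a negative surveillance year raises confidence according to the surveillance system sensitivity (SSe), and the prior for the next year is discounted by the annual probability of introduction:

        # Annual PFree update under negative surveillance (illustrative inputs only).
        def update_pfree(prior_free, sse, p_intro):
            # posterior confidence of freedom after a year of test-negative surveillance
            post = prior_free / (prior_free + (1.0 - prior_free) * (1.0 - sse))
            # carry forward, allowing for possible introduction before the next year
            return post * (1.0 - p_intro)

        pfree = 0.50                     # neutral starting prior (assumed)
        for year in range(1995, 2038):
            pfree = update_pfree(pfree, sse=0.95, p_intro=0.01)   # illustrative SSe and p_intro
            if year in (1995, 1996, 2000, 2037):
                print(year, round(pfree, 3))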

  6. New Techniques Used in Modeling the 2017 Total Solar Eclipse: Energizing and Heating the Large-Scale Corona

    NASA Astrophysics Data System (ADS)

    Downs, Cooper; Mikic, Zoran; Linker, Jon A.; Caplan, Ronald M.; Lionello, Roberto; Torok, Tibor; Titov, Viacheslav; Riley, Pete; Mackay, Duncan; Upton, Lisa

    2017-08-01

    Over the past two decades, our group has used a magnetohydrodynamic (MHD) model of the corona to predict the appearance of total solar eclipses. In this presentation we detail recent innovations and new techniques applied to our prediction model for the August 21, 2017 total solar eclipse. First, we have developed a method for capturing the large-scale energized fields typical of the corona, namely the sheared/twisted fields built up through long-term processes of differential rotation and flux-emergence/cancellation. Using inferences of the location and chirality of filament channels (deduced from a magnetofrictional model driven by the evolving photospheric field produced by the Advective Flux Transport model), we tailor a customized boundary electric field profile that will emerge shear along the desired portions of polarity inversion lines (PILs) and cancel flux to create long twisted flux systems low in the corona. This method has the potential to improve the morphological shape of streamers in the low solar corona. Second, we apply, for the first time in our eclipse prediction simulations, a new wave-turbulence-dissipation (WTD) based model for coronal heating. This model has substantially fewer free parameters than previous empirical heating models, but is inherently sensitive to the 3D geometry and connectivity of the coronal field---a key property for modeling/predicting the thermal-magnetic structure of the solar corona. Overall, we will examine the effect of these considerations on white-light and EUV observables from the simulations, and present them in the context of our final 2017 eclipse prediction model.Research supported by NASA's Heliophysics Supporting Research and Living With a Star Programs.

  7. SAE for the prediction of road traffic status from taxicab operating data and bus smart card data

    NASA Astrophysics Data System (ADS)

    Zhengfeng, Huang; Pengjun, Zheng; Wenjun, Xu; Gang, Ren

    Road traffic status is significant for trip decisions and traffic management, and thus should be predicted accurately. A contribution of this work is that we consider multi-modal data for traffic status prediction rather than only single-source data. With substantial data from the Ningbo Passenger Transport Management Sector (NPTMS), we wished to determine whether it was possible to develop Stacked Autoencoders (SAEs) for accurately predicting road traffic status from taxicab operating data and bus smart card data. We show that the SAE performed better than a linear regression model and a Back Propagation (BP) neural network for determining the relationship between road traffic status and those factors. In a 26-month data experiment using the SAE, we show that it is possible to develop highly accurate predictions (91% test accuracy) of road traffic status from daily taxicab operating data and bus smart card data.
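
    A minimal stacked-autoencoder sketch in PyTorch, with greedy layer-wise pretraining followed by supervised fine-tuning on discrete traffic-status labels; the data, layer sizes, and three-class labelling are assumptions, not the architecture used in the study:

        # Stacked autoencoder: layer-wise pretraining, then fine-tuning with a classifier.
        import torch
        import torch.nn as nn

        X = torch.randn(1000, 20)               # stand-in for taxi + smart-card features
        y = torch.randint(0, 3, (1000,))        # e.g. free-flow / slow / congested (assumed)

        class AutoEncoder(nn.Module):
            def __init__(self, d_in, d_hidden):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Sigmoid())
                self.dec = nn.Linear(d_hidden, d_in)
            def forward(self, x):
                return self.dec(self.enc(x))

        def pretrain(ae, data, epochs=50):
            opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
            for _ in range(epochs):
                opt.zero_grad()
                loss = nn.functional.mse_loss(ae(data), data)
                loss.backward()
                opt.step()
            return ae.enc(data).detach()        # encoded features feed the next layer

        ae1 = AutoEncoder(20, 12); h1 = pretrain(ae1, X)
        ae2 = AutoEncoder(12, 6);  pretrain(ae2, h1)

        # fine-tune the encoder stack with a softmax classifier on traffic-status labels
        model = nn.Sequential(ae1.enc, ae2.enc, nn.Linear(6, 3))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(100):
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(X), y)
            loss.backward()
            opt.step()
        print("training accuracy:", (model(X).argmax(dim=1) == y).float().mean().item())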

  8. Motion compensation via redundant-wavelet multihypothesis.

    PubMed

    Fowler, James E; Cui, Suxia; Wang, Yonghui

    2006-10-01

    Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.

  9. Risk-adjusted payment and performance assessment for primary care.

    PubMed

    Ash, Arlene S; Ellis, Randall P

    2012-08-01

    Many wish to change incentives for primary care practices through bundled population-based payments and substantial performance feedback and bonus payments. Recognizing patient differences in costs and outcomes is crucial, but customized risk adjustment for such purposes is underdeveloped. Using MarketScan's claims-based data on 17.4 million commercially insured lives, we modeled bundled payment to support expected primary care activity levels (PCAL) and 9 patient outcomes for performance assessment. We evaluated models using 457,000 people assigned to 436 primary care physician panels, and among 13,000 people in a distinct multipayer medical home implementation with commercially insured, Medicare, and Medicaid patients. Each outcome is separately predicted from age, sex, and diagnoses. We define the PCAL outcome as a subset of all costs that proxies the bundled payment needed for comprehensive primary care. Other expected outcomes are used to establish targets against which actual performance can be fairly judged. We evaluate model performance using R²s at patient and practice levels, and within policy-relevant subgroups. The PCAL model explains 67% of variation in its outcome, performing well across diverse patient ages, payers, plan types, and provider specialties; it explains 72% of practice-level variation. In 9 performance measures, the outcome-specific models explain 17%-86% of variation at the practice level, often substantially outperforming a generic score like the one used for full capitation payments in Medicare: for example, with grouped R²s of 47% versus 5% for predicting "prescriptions for antibiotics of concern." Existing data can support the risk-adjusted bundled payment calculations and performance assessments needed to encourage desired transformations in primary care.
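
    A sketch of the patient-level versus practice-level (grouped) R² comparison, assuming a hypothetical claims DataFrame with age band, sex, diagnosis flags, a PCAL cost outcome, and a practice identifier:

        # Patient-level and practice-level (grouped) R^2 for a risk-adjustment model.
        import pandas as pd
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        df = pd.read_csv("claims_panel.csv")            # hypothetical file
        X = pd.get_dummies(df[["age_band", "sex"] + [c for c in df if c.startswith("dx_")]])
        y = df["pcal_cost"]

        model = LinearRegression().fit(X, y)
        df["pred"] = model.predict(X)

        print("patient-level R^2:", r2_score(y, df["pred"]))

        # grouped R^2: compare actual vs. predicted means across practice panels
        by_panel = df.groupby("practice_id")[["pcal_cost", "pred"]].mean()
        print("practice-level (grouped) R^2:", r2_score(by_panel["pcal_cost"], by_panel["pred"]))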

  10. Neutral Evolution and Dispersal Limitation Produce Biogeographic Patterns in Microcystis aeruginosa Populations of Lake Systems.

    PubMed

    Shirani, Sahar; Hellweger, Ferdi L

    2017-08-01

    Molecular observations reveal substantial biogeographic patterns of cyanobacteria within systems of connected lakes. An important question is the relative role of environmental selection and neutral processes in the biogeography of these systems. Here, we quantify the effect of genetic drift and dispersal limitation by simulating individual cyanobacteria cells using an agent-based model (ABM). In the model, cells grow (divide), die, and migrate between lakes. Each cell has a full genome that is subject to neutral mutation (i.e., the growth rate is independent of the genome). The model is verified by simulating simplified lake systems, for which theoretical solutions are available. Then, it is used to simulate the biogeography of the cyanobacterium Microcystis aeruginosa in a number of real systems, including the Great Lakes, Klamath River, Yahara River, and Chattahoochee River. Model output is analyzed using standard bioinformatics tools (BLAST, MAFFT). The emergent patterns of nucleotide divergence between lakes are dynamic, including gradual increases due to accumulation of mutations and abrupt changes due to population takeovers by migrant cells (coalescence events). The model predicted nucleotide divergence is heterogeneous within systems, and for weakly connected lakes, it can be substantial. For example, Lakes Superior and Michigan are predicted to have an average genomic nucleotide divergence of 8200 bp or 0.14%. The divergence between more strongly connected lakes is much lower. Our results provide a quantitative baseline for future biogeography studies. They show that dispersal limitation can be an important factor in microbe biogeography, which is contrary to the common belief, and could affect how a system responds to environmental change.
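
    A highly simplified agent-based sketch of neutral divergence between two weakly connected lakes: each cell carries a set of neutral mutation identifiers, divides with occasional mutation, dies at random so population size stays constant, and rarely migrates; all rates are assumptions:

        # Neutral agent-based sketch of divergence between two connected lakes.
        import random

        random.seed(4)
        N = 200                     # cells per lake (constant population, Moran-style)
        mu = 0.3                    # expected neutral mutations per division (assumed)
        m = 1e-4                    # per-division migration probability (assumed)
        next_id = 0
        lakes = [[set() for _ in range(N)] for _ in range(2)]

        def divergence(a, b, pairs=500):
            # mean pairwise count of mutations private to one of two randomly drawn cells
            return sum(len(random.choice(a) ^ random.choice(b)) for _ in range(pairs)) / pairs

        for step in range(200 * N):                    # roughly 200 generations
            for lake in (0, 1):
                parent = random.choice(lakes[lake])
                child = set(parent)
                if random.random() < mu:               # neutral mutation
                    child.add(next_id); next_id += 1
                dest = lake
                if random.random() < m:                # rare dispersal to the other lake
                    dest = 1 - lake
                lakes[dest][random.randrange(N)] = child   # random death keeps N constant

        print("mean pairwise divergence between lakes:", divergence(lakes[0], lakes[1]))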

  11. Hot Dust in Panchromatic SED Fitting: Identification of Active Galactic Nuclei and Improved Galaxy Properties

    NASA Astrophysics Data System (ADS)

    Leja, Joel; Johnson, Benjamin D.; Conroy, Charlie; van Dokkum, Pieter

    2018-02-01

    Forward modeling of the full galaxy SED is a powerful technique, providing self-consistent constraints on stellar ages, dust properties, and metallicities. However, the accuracy of these results is contingent on the accuracy of the model. One significant source of uncertainty is the contribution of obscured AGN, as they are relatively common and can produce substantial mid-IR (MIR) emission. Here we include emission from dusty AGN tori in the Prospector SED-fitting framework, and fit the UV–IR broadband photometry of 129 nearby galaxies. We find that 10% of the fitted galaxies host an AGN contributing >10% of the observed galaxy MIR luminosity. We demonstrate the necessity of this AGN component in the following ways. First, we compare observed spectral features to spectral features predicted from our model fit to the photometry. We find that the AGN component greatly improves predictions for observed Hα and Hβ luminosities, as well as mid-infrared Akari and Spitzer/IRS spectra. Second, we show that inclusion of the AGN component changes stellar ages and SFRs by up to a factor of 10, and dust attenuations by up to a factor of 2.5. Finally, we show that the strength of our model AGN component correlates with independent AGN indicators, suggesting that these galaxies truly host AGN. Notably, only 46% of the SED-detected AGN would be detected with a simple MIR color selection. Based on these results, we conclude that SED models which fit MIR data without AGN components are vulnerable to substantial bias in their derived parameters.

  12. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits.

    PubMed

    Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G

    2017-12-05

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs), from 50K SNP arrays, were grouped into non-overlapping genome segments. A segment was defined as one SNP, or a group of 50, 100, or 200 adjacent SNPs, or one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models and this gain depended on segment size and genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.

  13. The Neural Correlates of Hierarchical Predictions for Perceptual Decisions.

    PubMed

    Weilnhammer, Veith A; Stuke, Heiner; Sterzer, Philipp; Schmack, Katharina

    2018-05-23

    Sensory information is inherently noisy, sparse, and ambiguous. In contrast, visual experience is usually clear, detailed, and stable. Bayesian theories of perception resolve this discrepancy by assuming that prior knowledge about the causes underlying sensory stimulation actively shapes perceptual decisions. The CNS is believed to entertain a generative model aligned to dynamic changes in the hierarchical states of our volatile sensory environment. Here, we used model-based fMRI to study the neural correlates of the dynamic updating of hierarchically structured predictions in male and female human observers. We devised a crossmodal associative learning task with covertly interspersed ambiguous trials in which participants engaged in hierarchical learning based on changing contingencies between auditory cues and visual targets. By inverting a Bayesian model of perceptual inference, we estimated individual hierarchical predictions, which significantly biased perceptual decisions under ambiguity. Although "high-level" predictions about the cue-target contingency correlated with activity in supramodal regions such as orbitofrontal cortex and hippocampus, dynamic "low-level" predictions about the conditional target probabilities were associated with activity in retinotopic visual cortex. Our results suggest that our CNS updates distinct representations of hierarchical predictions that continuously affect perceptual decisions in a dynamically changing environment. SIGNIFICANCE STATEMENT Bayesian theories posit that our brain entertains a generative model to provide hierarchical predictions regarding the causes of sensory information. Here, we use behavioral modeling and fMRI to study the neural underpinnings of such hierarchical predictions. We show that "high-level" predictions about the strength of dynamic cue-target contingencies during crossmodal associative learning correlate with activity in orbitofrontal cortex and the hippocampus, whereas "low-level" conditional target probabilities were reflected in retinotopic visual cortex. Our findings empirically corroborate theorizations on the role of hierarchical predictions in visual perception and contribute substantially to a longstanding debate on the link between sensory predictions and orbitofrontal or hippocampal activity. Our work fundamentally advances the mechanistic understanding of perceptual inference in the human brain. Copyright © 2018 the authors 0270-6474/18/385008-14$15.00/0.

  14. A poroelastic model describing nutrient transport and cell stresses within a cyclically strained collagen hydrogel.

    PubMed

    Vaughan, Benjamin L; Galie, Peter A; Stegemann, Jan P; Grotberg, James B

    2013-11-05

    In the creation of engineered tissue constructs, the successful transport of nutrients and oxygen to the contained cells is a significant challenge. In highly porous scaffolds subject to cyclic strain, the mechanical deformations can induce substantial fluid pressure gradients, which affect the transport of solutes. In this article, we describe a poroelastic model to predict the solid and fluid mechanics of a highly porous hydrogel subject to cyclic strain. The model was validated by matching the predicted penetration of a bead into the hydrogel from the model with experimental observations and provides insight into nutrient transport. Additionally, the model provides estimates of the wall-shear stresses experienced by the cells embedded within the scaffold. These results provide insight into the mechanics of and convective nutrient transport within a cyclically strained hydrogel, which could lead to the improved design of engineered tissues. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  15. The technology acceptance model: its past and its future in health care.

    PubMed

    Holden, Richard J; Karsh, Ben-Tzion

    2010-02-01

    Increasing interest in end users' reactions to health information technology (IT) has elevated the importance of theories that predict and explain health IT acceptance and use. This paper reviews the application of one such theory, the Technology Acceptance Model (TAM), to health care. We reviewed 16 data sets analyzed in over 20 studies of clinicians using health IT for patient care. Studies differed greatly in samples and settings, health ITs studied, research models, relationships tested, and construct operationalization. Certain TAM relationships were consistently found to be significant, whereas others were inconsistent. Several key relationships were infrequently assessed. Findings show that TAM predicts a substantial portion of the use or acceptance of health IT, but that the theory may benefit from several additions and modifications. Aside from improved study quality, standardization, and theoretically motivated additions to the model, an important future direction for TAM is to adapt the model specifically to the health care context, using beliefs elicitation methods.

  16. THE TECHNOLOGY ACCEPTANCE MODEL: ITS PAST AND ITS FUTURE IN HEALTH CARE

    PubMed Central

    HOLDEN, RICHARD J.; KARSH, BEN-TZION

    2009-01-01

    Increasing interest in end users’ reactions to health information technology (IT) has elevated the importance of theories that predict and explain health IT acceptance and use. This paper reviews the application of one such theory, the Technology Acceptance Model (TAM), to health care. We reviewed 16 data sets analyzed in over 20 studies of clinicians using health IT for patient care. Studies differed greatly in samples and settings, health ITs studied, research models, relationships tested, and construct operationalization. Certain TAM relationships were consistently found to be significant, whereas others were inconsistent. Several key relationships were infrequently assessed. Findings show that TAM predicts a substantial portion of the use or acceptance of health IT, but that the theory may benefit from several additions and modifications. Aside from improved study quality, standardization, and theoretically motivated additions to the model, an important future direction for TAM is to adapt the model specifically to the health care context, using beliefs elicitation methods. PMID:19615467

  17. Skillful prediction of hot temperature extremes over the source region of ancient Silk Road.

    PubMed

    Zhang, Jingyong; Yang, Zhanmei; Wu, Lingyun

    2018-04-27

    The source region of ancient Silk Road (SRASR) in China, a region of around 150 million people, faces a rapidly increased risk of extreme heat in summer. In this study, we develop statistical models to predict summer hot temperature extremes over the SRASR based on a timescale decomposition approach. Results show that after removing the linear trends, the inter-annual components of summer hot days and heatwaves over the SRASR are significantly related with those of spring soil temperature over Central Asia and sea surface temperature over Northwest Atlantic while their inter-decadal components are closely linked to those of spring East Pacific/North Pacific pattern and Atlantic Multidecadal Oscillation for 1979-2016. The physical processes involved are also discussed. Leave-one-out cross-validation for detrended 1979-2016 time series indicates that the statistical models based on identified spring predictors can predict 47% and 57% of the total variances of summer hot days and heatwaves averaged over the SRASR, respectively. When the linear trends are put back, the prediction skills increase substantially to 64% and 70%. Hindcast experiments for 2012-2016 show high skills in predicting spatial patterns of hot temperature extremes over the SRASR. The statistical models proposed herein can be easily applied to operational seasonal forecasting.
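
    A sketch of the cross-validated statistical prediction step: regress detrended summer hot-day counts on detrended spring predictors and score leave-one-out explained variance (file and column names are hypothetical, and only the inter-annual predictors are illustrated):

        # Leave-one-out evaluation of a seasonal statistical prediction model.
        import numpy as np
        import pandas as pd
        from scipy.signal import detrend
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        df = pd.read_csv("srasr_indices_1979_2016.csv")          # hypothetical file
        y = detrend(df["summer_hot_days"].to_numpy())
        X = detrend(df[["spring_soil_temp_ca", "spring_sst_nw_atlantic"]].to_numpy(), axis=0)

        pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
        explained = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
        print("LOO cross-validated explained variance:", round(explained, 2))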

  18. A Bayesian network based framework for real-time crash prediction on the basic freeway segments of urban expressways.

    PubMed

    Hossain, Moinul; Muromachi, Yasunori

    2012-03-01

    The concept of measuring the crash risk for a very short time window in the near future is gaining more practicality due to the recent advancements in the fields of information systems and traffic sensor technology. Although some real-time crash prediction models have already been proposed, they are still primitive in nature and require substantial improvements to be implemented in real life. This manuscript investigates the major shortcomings of the existing models and offers solutions to overcome them with an improved framework and modeling method. It employs a random multinomial logit model to identify the most important predictors as well as the most suitable detector locations to acquire data to build such a model. Afterwards, it applies a Bayesian belief net (BBN) to build the real-time crash prediction model. The model has been constructed using high resolution detector data collected from the Shibuya 3 and Shinjuku 4 expressways under the jurisdiction of Tokyo Metropolitan Expressway Company Limited, Japan. It has been specifically built for basic freeway segments, and it predicts the chance of formation of a hazardous traffic condition within the next 4-9 min for a particular 250-meter-long road section. The performance evaluation results reflect that at an average threshold value the model is able to successfully classify 66% of the future crashes with a false alarm rate of less than 20%. Copyright © 2011 Elsevier Ltd. All rights reserved.
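
    The reported performance figures (66% of crashes detected at a false alarm rate under 20%) are threshold-based classification metrics. The sketch below shows how such metrics are computed from predicted crash probabilities; the scores and threshold are synthetic stand-ins, not output of the Bayesian belief net itself.

      # Minimal sketch: evaluating a real-time crash-risk classifier at a
      # probability threshold, reporting detection and false-alarm rates.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      is_crash = rng.random(n) < 0.02                        # rare hazardous events
      # hypothetical predicted crash probabilities: higher for true crash cases
      score = np.where(is_crash, rng.beta(4, 2, n), rng.beta(2, 5, n))

      threshold = 0.5
      alarm = score >= threshold
      detection_rate   = (alarm &  is_crash).sum() / is_crash.sum()
      false_alarm_rate = (alarm & ~is_crash).sum() / (~is_crash).sum()
      print(f"detection: {detection_rate:.1%}, false alarms: {false_alarm_rate:.1%}")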

  19. Predicting the Impact of Vaccination on the Transmission Dynamics of Typhoid in South Asia: A Mathematical Modeling Study

    PubMed Central

    Pitzer, Virginia E.; Bowles, Cayley C.; Baker, Stephen; Kang, Gagandeep; Balaji, Veeraraghavan; Farrar, Jeremy J.; Grenfell, Bryan T.

    2014-01-01

    Background Modeling of the transmission dynamics of typhoid allows for an evaluation of the potential direct and indirect effects of vaccination; however, relevant typhoid models rooted in data have rarely been deployed. Methodology/Principal Findings We developed a parsimonious age-structured model describing the natural history and immunity to typhoid infection. The model was fit to data on culture-confirmed cases of typhoid fever presenting to Christian Medical College hospital in Vellore, India from 2000–2012. The model was then used to evaluate the potential impact of school-based vaccination strategies using live oral, Vi-polysaccharide, and Vi-conjugate vaccines. The model was able to reproduce the incidence and age distribution of typhoid cases in Vellore. The basic reproductive number (R0) of typhoid was estimated to be 2.8 in this setting. Vaccination was predicted to confer substantial indirect protection leading to a decrease in the incidence of typhoid in the short term, but (counterintuitively) typhoid incidence was predicted to rebound 5–15 years following a one-time campaign. Conclusions/Significance We found that model predictions for the overall and indirect effects of vaccination depend strongly on the role of chronic carriers in transmission. Carrier transmissibility was tentatively estimated to be low, consistent with recent studies, but was identified as a pivotal area for future research. It is unlikely that typhoid can be eliminated from endemic settings through vaccination alone. PMID:24416466
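
    The record describes an age-structured model with chronic carriers, which is not reproduced here. As a much-simplified illustration of how a one-time vaccination campaign enters a transmission model with the quoted R0 of 2.8, the following sketch integrates a plain SIR system; the recovery rate and vaccinated fractions are assumptions.

      # Much-simplified sketch: SIR dynamics with a one-time vaccinated fraction.
      # The paper's model is age-structured and includes chronic carriers,
      # neither of which is represented here.
      import numpy as np
      from scipy.integrate import odeint

      R0, gamma = 2.8, 1.0 / 4.0        # R0 from the abstract; recovery rate assumed (per week)
      beta = R0 * gamma

      def sir(y, t):
          s, i, r = y
          return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

      t = np.linspace(0, 52, 500)        # weeks
      for vaccinated in (0.0, 0.4):      # fraction immunised at t = 0
          y0 = [1.0 - vaccinated - 1e-4, 1e-4, vaccinated]
          s, i, r = odeint(sir, y0, t).T
          attack_rate = r[-1] - vaccinated
          print(f"vaccinated {vaccinated:.0%}: cumulative infections = {attack_rate:.1%}")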

  20. Application of the migration models implemented in the decision system MOIRA-PLUS to assess the long term behaviour of (137)Cs in water and fish of the Baltic Sea.

    PubMed

    Monte, Luigi

    2014-08-01

    This work presents and discusses the results of an application of the contaminant migration models implemented in the decision support system MOIRA-PLUS to simulate the time behaviour of the concentrations of (137)Cs of Chernobyl origin in water and fish of the Baltic Sea. The results of the models were compared with the extensive sets of highly reliable empirical data of radionuclide contamination available from international databases and covering a period of, approximately, twenty years. The model application involved three main phases: a) the customisation performed by using hydrological, morphometric and water circulation data obtained from the literature; b) a blind test of the model results, in the sense that the models made use of default values of the migration parameters to predict the dynamics of the contaminant in the environmental components; and c) the adjustment of the model parameter values to improve the agreement of the predictions with the empirical data. The results of the blind test showed that the models successfully predicted the empirical contamination values within the expected range of uncertainty of the predictions (confidence level at 68% of approximately a factor 2). The parameter adjustment can be helpful for the assessment of the fluxes of water circulating among the main sub-basins of the Baltic Sea, substantiating the usefulness of radionuclides to trace the movement of masses of water in seas. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. AUTO-MUTE 2.0: A Portable Framework with Enhanced Capabilities for Predicting Protein Functional Consequences upon Mutation.

    PubMed

    Masso, Majid; Vaisman, Iosif I

    2014-01-01

    The AUTO-MUTE 2.0 stand-alone software package includes a collection of programs for predicting functional changes to proteins upon single residue substitutions, developed by combining structure-based features with trained statistical learning models. Three of the predictors evaluate changes to protein stability upon mutation, each complementing a distinct experimental approach. Two additional classifiers are available, one for predicting activity changes due to residue replacements and the other for determining the disease potential of mutations associated with nonsynonymous single nucleotide polymorphisms (nsSNPs) in human proteins. These five command-line-driven tools, as well as all the supporting programs, complement those that run our AUTO-MUTE web-based server. Nevertheless, all the codes have been rewritten and substantially altered for the new portable software, and they incorporate several new features based on user feedback. Included among these upgrades is the ability to perform three highly requested tasks: to run "big data" batch jobs; to generate predictions using modified Protein Data Bank (PDB) structures and unpublished personal models prepared using standard PDB file formatting; and to utilize NMR structure files that contain multiple models.

  2. A hierarchical spatial model for well yield in complex aquifers

    NASA Astrophysics Data System (ADS)

    Montgomery, J.; O'Sullivan, F.

    2017-12-01

    Efficiently siting and managing groundwater wells requires reliable estimates of the amount of water that can be produced, or the well yield. This can be challenging to predict in highly complex, heterogeneous fractured aquifers due to the uncertainty around local hydraulic properties. Promising statistical approaches have been advanced in recent years. For instance, kriging and multivariate regression analysis have been applied to well test data with limited but encouraging levels of prediction accuracy. Additionally, some analytical solutions to diffusion in homogeneous porous media have been used to infer "effective" properties consistent with observed flow rates or drawdown. However, this is an under-specified inverse problem with substantial and irreducible uncertainty. We describe a flexible machine learning approach capable of combining diverse datasets with constraining physical and geostatistical models for improved well yield prediction accuracy and uncertainty quantification. Our approach can be implemented within a hierarchical Bayesian framework using Markov Chain Monte Carlo, which allows for additional sources of information to be incorporated in priors to further constrain and improve predictions and reduce the model order. We demonstrate the usefulness of this approach using data from over 7,000 wells in a fractured bedrock aquifer.
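
    A minimal sketch of the Bayesian machinery mentioned in this record: a random-walk Metropolis sampler fitting a log-normal yield model to synthetic data. The hierarchical, spatial, and physics-based components of the actual approach are omitted, and the priors and proposal scale are assumptions.

      # Minimal sketch of Bayesian inference for well yield with a random-walk
      # Metropolis sampler: log-yields modelled as Normal(mu, sigma), with weak
      # priors on mu and log(sigma).  The real model is hierarchical and spatial.
      import numpy as np

      rng = np.random.default_rng(2)
      log_yield = np.log(rng.lognormal(mean=1.5, sigma=0.8, size=200))   # synthetic data

      def log_posterior(mu, log_sigma):
          sigma = np.exp(log_sigma)
          loglik = -0.5 * np.sum(((log_yield - mu) / sigma) ** 2) - len(log_yield) * log_sigma
          logprior = -0.5 * (mu / 10.0) ** 2 - 0.5 * (log_sigma / 10.0) ** 2
          return loglik + logprior

      samples, state = [], np.array([0.0, 0.0])     # (mu, log_sigma)
      lp = log_posterior(*state)
      for _ in range(20000):
          proposal = state + rng.normal(scale=0.05, size=2)
          lp_new = log_posterior(*proposal)
          if np.log(rng.random()) < lp_new - lp:    # Metropolis accept/reject
              state, lp = proposal, lp_new
          samples.append(state.copy())

      mu_draws = np.array(samples[5000:])[:, 0]     # discard burn-in
      print("posterior mean of mu:", mu_draws.mean())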

  3. An effective drift correction for dynamical downscaling of decadal global climate predictions

    NASA Astrophysics Data System (ADS)

    Paeth, Heiko; Li, Jingmin; Pollinger, Felix; Müller, Wolfgang A.; Pohlmann, Holger; Feldmann, Hendrik; Panitz, Hans-Jürgen

    2018-04-01

    Initialized decadal climate predictions with coupled climate models are often marked by substantial climate drifts that emanate from a mismatch between the climatology of the coupled model system and the data set used for initialization. While such drifts may be easily removed from the prediction system when analyzing individual variables, a major problem prevails for multivariate issues and, especially, when the output of the global prediction system shall be used for dynamical downscaling. In this study, we present a statistical approach to remove climate drifts in a multivariate context and demonstrate the effect of this drift correction on regional climate model simulations over the Euro-Atlantic sector. The statistical approach is based on an empirical orthogonal function (EOF) analysis adapted to a very large data matrix. The climate drift emerges as a dramatic cooling trend in North Atlantic sea surface temperatures (SSTs) and is captured by the leading EOF of the multivariate output from the global prediction system, accounting for 7.7% of total variability. The SST cooling pattern also imposes drifts in various atmospheric variables and levels. The removal of the first EOF effectuates the drift correction while retaining other components of intra-annual, inter-annual and decadal variability. In the regional climate model, the multivariate drift correction of the input data removes the cooling trends in most western European land regions and systematically reduces the discrepancy between the output of the regional climate model and observational data. In contrast, removing the drift only in the SST field from the global model has hardly any positive effect on the regional climate model.
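
    The drift-correction idea, removing the leading EOF of the multivariate output and keeping the remaining variability, can be sketched with a plain SVD on synthetic anomalies; the real application works on a very large multivariate data matrix from the coupled prediction system.

      # Minimal sketch of the EOF-based drift correction: remove the leading EOF
      # (first singular mode) from a centred space-time anomaly matrix.
      import numpy as np

      rng = np.random.default_rng(3)
      n_time, n_space = 120, 500
      anomalies = rng.normal(size=(n_time, n_space))
      drift = np.linspace(0, 3, n_time)[:, None] * rng.normal(size=(1, n_space))  # fake drift pattern
      field = anomalies + drift

      X = field - field.mean(axis=0)                  # centre in time
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      explained = s[0] ** 2 / np.sum(s ** 2)
      print(f"leading EOF explains {explained:.1%} of variance")

      drift_mode = s[0] * np.outer(U[:, 0], Vt[0])    # reconstruction of mode 1
      corrected = X - drift_mode                      # drift-corrected anomalies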

  4. What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models

    NASA Astrophysics Data System (ADS)

    Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman

    2018-01-01

    Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken from Mount Wilson Observatory in California, the University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data using their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.

  5. Within-and among-year germination in Sonoran Desert winter annuals: bet hedging and predictive germination in a variable environment.

    PubMed

    Gremer, Jennifer R; Kimball, Sarah; Venable, D Lawrence

    2016-10-01

    In variable environments, organisms must have strategies to ensure fitness as conditions change. For plants, germination can time emergence with favourable conditions for later growth and reproduction (predictive germination), spread the risk of unfavourable conditions (bet hedging) or both (integrated strategies). Here we explored the adaptive value of within- and among-year germination timing for 12 species of Sonoran Desert winter annual plants. We parameterised models with long-term demographic data to predict optimal germination fractions and compared them to observed germination. At both temporal scales we found that bet hedging is beneficial and that predicted optimal strategies corresponded well with observed germination. We also found substantial fitness benefits to varying germination timing, suggesting some degree of predictive germination in nature. However, predictive germination was imperfect, calling for some degree of bet hedging. Together, our results suggest that desert winter annuals have integrated strategies combining both predictive plasticity and bet hedging. © 2016 John Wiley & Sons Ltd/CNRS.
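
    The optimal germination fractions referred to above come from maximizing long-run (geometric-mean) fitness in a variable environment. A minimal Cohen-type sketch of that calculation, with illustrative yields and seed survival rather than the study's demographic estimates, is:

      # Minimal sketch of a Cohen-type bet-hedging calculation: choose the
      # germination fraction g that maximises long-run (geometric-mean) fitness.
      # Yields, year-type frequencies, and seed survival are illustrative only.
      import numpy as np

      year_yield    = np.array([0.0, 0.2, 1.0, 5.0, 20.0])   # per-seed yield by year type
      year_prob     = np.array([0.3, 0.2, 0.2, 0.2, 0.1])    # frequency of each year type
      seed_survival = 0.8                                     # survival of dormant seeds

      g_grid = np.linspace(0.01, 0.999, 200)
      log_growth = [
          np.sum(year_prob * np.log(g * year_yield + (1 - g) * seed_survival))
          for g in g_grid
      ]
      g_opt = g_grid[int(np.argmax(log_growth))]
      print(f"optimal germination fraction: {g_opt:.2f}")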

  6. When will the TBT go away? Integrating monitoring and modelling to address TBT's delayed disappearance in the Drammensfjord, Norway.

    PubMed

    Arp, Hans Peter H; Eek, Espen; Nybakk, Anita Whitlock; Glette, Tormod; Møskeland, Thomas; Pettersen, Arne

    2014-11-15

    Despite a substantial decrease in the use and production of the marine antifouling agent tributyltin (TBT), its continuing presence in harbors remains a serious environmental concern. Herein a case study of TBT's persistence in the Drammensfjord, Norway, is presented. In 2005, severe TBT pollution was measured in the harbor of the Drammensfjord, with an average sediment concentration of 3387 μg kg(-1). To chart natural recovery in the Drammensfjord, an extensive sampling campaign was carried out over six years (2008-2013), quantifying TBT in water, settling particles and sediments. The monitoring campaign found a rapid decrease in sediment TBT concentration in the most contaminated areas, as well as a decrease in TBT entering the harbor via rivers and urban runoff. Changes observed in the more remote areas of the Drammensfjord, however, were less substantial. These data, along with measured and estimated geophysical properties, were used to parameterize and calibrate a coupled linear water-sediment model, referred to as the Drammensfjord model, to make prognosis on future TBT levels due to natural recovery. Unique to this type of model, the calibration was done using a Bayesian Monte Carlo (BMC) updating approach, which used monitoring data to calibrate predictions, as well as reduce the uncertainty of input parameters. To our knowledge, this is the first use of BMC updating to calibrate a model describing natural recovery in a lake/harbor type system. Prior to BMC updating, the non-calibrated model data agreed with monitoring data by a factor of 4.3. After BMC updating, the agreement was within a factor 3.2. The non-calibrated model predicted an average sediment concentration in the year 2025 of 2.5 μg kg(-1). The BMC calibrated model, however, predicted a higher concentration in the year 2025 of 16 μg kg(-1). This discrepancy was mainly due to the BMC calibration increasing the estimated riverine and runoff TBT emission levels relative to the initial input levels. Future monitoring campaigns can be used for further calibration of emission levels, and a clearer prognosis of when natural recovery will remove TBT pollution. Copyright © 2014 Elsevier Ltd. All rights reserved.
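
    The Bayesian Monte Carlo (BMC) updating step can be illustrated with a toy example: draw parameter sets from their priors, weight each draw by the likelihood of the monitoring data, and summarize the weighted ensemble. The first-order decay model, priors, and error level below are assumptions, not the Drammensfjord model.

      # Minimal sketch of Bayesian Monte Carlo (BMC) updating: prior draws are
      # weighted by the likelihood of the monitoring data, giving calibrated
      # parameter estimates and predictions.  All numbers are hypothetical.
      import numpy as np

      rng = np.random.default_rng(4)
      observed = np.array([120.0, 95.0, 80.0, 64.0, 55.0])   # hypothetical monitoring series
      years = np.arange(len(observed))

      n_draws = 20000
      k    = rng.uniform(0.01, 0.5, n_draws)                 # prior: decay rate (1/yr)
      c0   = rng.uniform(50.0, 300.0, n_draws)               # prior: initial concentration
      pred = c0[:, None] * np.exp(-k[:, None] * years)       # toy first-order decay model

      sigma = 10.0                                            # assumed measurement error
      loglik = -0.5 * np.sum(((pred - observed) / sigma) ** 2, axis=1)
      weights = np.exp(loglik - loglik.max())
      weights /= weights.sum()

      k_post = np.sum(weights * k)
      print(f"prior mean decay rate {k.mean():.3f} 1/yr -> posterior mean {k_post:.3f} 1/yr")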

  7. Feature Selection Methods for Zero-Shot Learning of Neural Activity.

    PubMed

    Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R

    2017-01-01

    Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
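
    The correlation-based stability criterion mentioned in this record can be sketched as follows: for each candidate feature, score how consistently it responds to the same stimuli across repeated presentations and keep the most stable ones. The data, noise level, and number of retained features are illustrative.

      # Minimal sketch of correlation-based feature-stability selection on
      # synthetic 'neural' responses (stimuli x repetitions x features).
      import numpy as np

      rng = np.random.default_rng(5)
      n_stimuli, n_reps, n_features = 60, 6, 1000
      signal = rng.normal(size=(n_stimuli, 1, n_features))
      responses = signal + 0.8 * rng.normal(size=(n_stimuli, n_reps, n_features))

      def stability(feature_resp):
          """Mean pairwise correlation across repetitions for one feature."""
          corr = np.corrcoef(feature_resp.T)            # (n_reps, n_reps)
          iu = np.triu_indices_from(corr, k=1)
          return corr[iu].mean()

      scores = np.array([stability(responses[:, :, j]) for j in range(n_features)])
      selected = np.argsort(scores)[::-1][:100]         # keep the 100 most stable features
      print("mean stability of selected features:", scores[selected].mean().round(3))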

  8. Surface tension prevails over solute effect in organic-influenced cloud droplet activation.

    PubMed

    Ovadnevaite, Jurgita; Zuend, Andreas; Laaksonen, Ari; Sanchez, Kevin J; Roberts, Greg; Ceburnis, Darius; Decesari, Stefano; Rinaldi, Matteo; Hodas, Natasha; Facchini, Maria Cristina; Seinfeld, John H; O'Dowd, Colin

    2017-06-29

    The spontaneous growth of cloud condensation nuclei (CCN) into cloud droplets under supersaturated water vapour conditions is described by classic Köhler theory. This spontaneous activation of CCN depends on the interplay between the Raoult effect, whereby activation potential increases with decreasing water activity or increasing solute concentration, and the Kelvin effect, whereby activation potential decreases with decreasing droplet size or increases with decreasing surface tension, which is sensitive to surfactants. Surface tension lowering caused by organic surfactants, which diminishes the Kelvin effect, is expected to be negated by a concomitant reduction in the Raoult effect, driven by the displacement of surfactant molecules from the droplet bulk to the droplet-vapour interface. Here we present observational and theoretical evidence illustrating that, in ambient air, surface tension lowering can prevail over the reduction in the Raoult effect, leading to substantial increases in cloud droplet concentrations. We suggest that consideration of liquid-liquid phase separation, leading to complete or partial engulfing of a hygroscopic particle core by a hydrophobic organic-rich phase, can explain the lack of concomitant reduction of the Raoult effect, while maintaining substantial lowering of surface tension, even for partial surface coverage. Apart from the importance of particle size and composition in droplet activation, we show by observation and modelling that incorporation of phase-separation effects into activation thermodynamics can lead to a CCN number concentration that is up to ten times what is predicted by climate models, changing the properties of clouds. An adequate representation of the CCN activation process is essential to the prediction of clouds in climate models, and given the effect of clouds on the Earth's energy balance, improved prediction of aerosol-cloud-climate interactions is likely to result in improved assessments of future climate change.

  9. Prediction of seasonal climate-induced variations in global food production

    NASA Astrophysics Data System (ADS)

    Iizumi, Toshichika; Sakuma, Hirofumi; Yokozawa, Masayuki; Luo, Jing-Jia; Challinor, Andrew J.; Brown, Molly E.; Sakurai, Gen; Yamagata, Toshio

    2013-10-01

    Consumers, including the poor in many countries, are increasingly dependent on food imports and are thus exposed to variations in yields, production and export prices in the major food-producing regions of the world. National governments and commercial entities are therefore paying increased attention to the cropping forecasts of important food-exporting countries as well as to their own domestic food production. Given the increased volatility of food markets and the rising incidence of climatic extremes affecting food production, food price spikes may increase in prevalence in future years. Here we present a global assessment of the reliability of crop failure hindcasts for major crops at two lead times derived by linking ensemble seasonal climatic forecasts with statistical crop models. We found that moderate-to-marked yield loss over a substantial percentage (26-33%) of the harvested area of these crops is reliably predictable if climatic forecasts are near perfect. However, only rice and wheat production are reliably predictable at three months before the harvest using within-season hindcasts. The reliability of the estimates varied substantially by crop: rice and wheat yields were the most predictable, followed by soybean and maize. The reasons for variation in the reliability of the estimates included the differences in crop sensitivity to the climate and the technology used by the crop-producing regions. Our findings reveal that the use of seasonal climatic forecasts to predict crop failures will be useful for monitoring global food production and will encourage the adaptation of food systems to climatic extremes.

  10. Climate change and maize yield in Iowa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Hong; Twine, Tracy E.; Girvetz, Evan

    Climate is changing across the world, including the major maize-growing state of Iowa in the USA. To maintain crop yields, farmers will need a suite of adaptation strategies, and choice of strategy will depend on how the local to regional climate is expected to change. Here we predict how maize yield might change through the 21st century as compared with late 20th century yields across Iowa, USA, a region representing ideal climate and soils for maize production that contributes substantially to the global maize economy. To account for climate model uncertainty, we drive a dynamic ecosystem model with output from six climate models and two future climate forcing scenarios. Despite a wide range in the predicted amount of warming and change to summer precipitation, all simulations predict a decrease in maize yields from late 20th century to middle and late 21st century ranging from 15% to 50%. Linear regression of all models predicts a 6% state-averaged yield decrease for every 1°C increase in warm season average air temperature. When the influence of moisture stress on crop growth is removed from the model, yield decreases either remain the same or are reduced, depending on predicted changes in warm season precipitation. Lastly, our results suggest that even if maize were to receive all the water it needed, under the strongest climate forcing scenario yields will decline by 10-20% by the end of the 21st century.

  11. Climate change and maize yield in Iowa

    DOE PAGES

    Xu, Hong; Twine, Tracy E.; Girvetz, Evan

    2016-05-24

    Climate is changing across the world, including the major maize-growing state of Iowa in the USA. To maintain crop yields, farmers will need a suite of adaptation strategies, and choice of strategy will depend on how the local to regional climate is expected to change. Here we predict how maize yield might change through the 21st century as compared with late 20th century yields across Iowa, USA, a region representing ideal climate and soils for maize production that contributes substantially to the global maize economy. To account for climate model uncertainty, we drive a dynamic ecosystem model with output from six climate models and two future climate forcing scenarios. Despite a wide range in the predicted amount of warming and change to summer precipitation, all simulations predict a decrease in maize yields from late 20th century to middle and late 21st century ranging from 15% to 50%. Linear regression of all models predicts a 6% state-averaged yield decrease for every 1°C increase in warm season average air temperature. When the influence of moisture stress on crop growth is removed from the model, yield decreases either remain the same or are reduced, depending on predicted changes in warm season precipitation. Lastly, our results suggest that even if maize were to receive all the water it needed, under the strongest climate forcing scenario yields will decline by 10-20% by the end of the 21st century.

  12. Prediction of Losartan-Active Carboxylic Acid Metabolite Exposure Following Losartan Administration Using Static and Physiologically Based Pharmacokinetic Models.

    PubMed

    Nguyen, Hoa Q; Lin, Jian; Kimoto, Emi; Callegari, Ernesto; Tse, Susanna; Obach, R Scott

    2017-09-01

    The aim of this study was to evaluate a strategy based on static and dynamic physiologically based pharmacokinetic (PBPK) modeling for the prediction of the metabolite and parent drug area under the time-concentration curve ratio (AUCm/AUCp) and their PK profiles in humans using in vitro data when active transport processes are involved in disposition. The strategy was applied to losartan and its pharmacologically active metabolite carboxylosartan as test compounds. Hepatobiliary transport including transport-mediated uptake, canalicular and basolateral efflux, and metabolic clearance estimates were obtained from in vitro studies using human liver microsomes and sandwich-cultured hepatocytes. Human renal clearance of carboxylosartan was estimated from dog renal clearance using an allometric scaling approach. All clearance mechanisms were mechanistically incorporated in a static model to predict the relative exposure of carboxylosartan versus losartan (AUCm/AUCp). The predicted AUCm/AUCp ratios were consistent with the observed data following intravenous and oral administration of losartan. Moreover, the in vitro parameters were used as initial parameters in PBPK permeability-limited disposition models to predict the concentration-time profiles for both the parent and its active metabolite after oral administration of losartan. The PBPK model was able to recover the plasma profiles of both losartan and carboxylosartan, further substantiating the validity of this approach. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  13. Computational Fluid Dynamics of Whole-Body Aircraft

    NASA Astrophysics Data System (ADS)

    Agarwal, Ramesh

    1999-01-01

    The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.

  14. OCEAN CIRCULATION. Observing the Atlantic Meridional Overturning Circulation yields a decade of inevitable surprises.

    PubMed

    Srokosz, M A; Bryden, H L

    2015-06-19

    The importance of the Atlantic Meridional Overturning Circulation (AMOC) heat transport for climate is well acknowledged. Climate models predict that the AMOC will slow down under global warming, with substantial impacts, but measurements of ocean circulation have been inadequate to evaluate these predictions. Observations over the past decade have changed that situation, providing a detailed picture of variations in the AMOC. These observations reveal a surprising degree of AMOC variability in terms of the intraannual range, the amplitude and phase of the seasonal cycle, the interannual changes in strength affecting the ocean heat content, and the decline of the AMOC over the decade, both of the latter two exceeding the variations seen in climate models. Copyright © 2015, American Association for the Advancement of Science.

  15. The functional consequences of non-genetic diversity in cellular navigation

    NASA Astrophysics Data System (ADS)

    Emonet, Thierry; Waite, Adam J.; Frankel, Nicholas W.; Dufour, Yann; Johnston, Jessica F.

    Substantial non-genetic diversity in complex behaviors, such as chemotaxis in E. coli, has been observed for decades, but the relevance of this diversity for the population is not well understood. Here, we use microfluidics to show that non-genetic diversity leads to significant structuring of the population in space and time, which confirms predictions made by our detailed mathematical model of chemotaxis. We then use genetic tools to show that altering the expression level of a single chemotaxis protein is sufficient to alter the distribution of swimming behaviors, which directly determines the performance of a population in a gradient of attractant, a result also predicted by our model. Supported by NIH 1R01GM106189, the James S McDonnell Foundation, and the Paul Allen foundation.

  16. Ejecta from large craters on the moon - Comments on the geometric model of McGetchin et al

    NASA Technical Reports Server (NTRS)

    Pike, R. J.

    1974-01-01

    Amendments to a quantitative scheme developed by T. R. McGetchin et al. (1973) for predicting the distribution of ejecta from lunar basins yield substantially thicker estimates of ejecta, deposited at the basin rim-crest and at varying ranges beyond, than does the original model. Estimates of the total volume of material ejected from a basin, illustrated by Imbrium, also are much greater. Because many uncertainties affect any geometric model developed primarily from terrestrial analogs of lunar craters, predictions of ejecta thickness and volume on the moon may range within at least an order of magnitude. These problems are exemplified by the variability of T, thickness of ejecta at the rim-crest of terrestrial experimental craters. The proportion of T to crater rim-height depends critically upon scaled depth-of-burst and whether the explosive is nuclear or chemical.

  17. Gene Expression-Based Survival Prediction in Lung Adenocarcinoma: A Multi-Site, Blinded Validation Study

    PubMed Central

    Shedden, Kerby; Taylor, Jeremy M.G.; Enkemann, Steve A.; Tsao, Ming S.; Yeatman, Timothy J.; Gerald, William L.; Eschrich, Steve; Jurisica, Igor; Venkatraman, Seshan E.; Meyerson, Matthew; Kuick, Rork; Dobbin, Kevin K.; Lively, Tracy; Jacobson, James W.; Beer, David G.; Giordano, Thomas J.; Misek, David E.; Chang, Andrew C.; Zhu, Chang Qi; Strumpf, Dan; Hanash, Samir; Shepherd, Francis A.; Ding, Kuyue; Seymour, Lesley; Naoki, Katsuhiko; Pennell, Nathan; Weir, Barbara; Verhaak, Roel; Ladd-Acosta, Christine; Golub, Todd; Gruidl, Mike; Szoke, Janos; Zakowski, Maureen; Rusch, Valerie; Kris, Mark; Viale, Agnes; Motoi, Noriko; Travis, William; Sharma, Anupama

    2009-01-01

    Although prognostic gene expression signatures for survival in early stage lung cancer have been proposed, for clinical application it is critical to establish their performance across different subject populations and in different laboratories. Here we report a large, training-testing, multi-site blinded validation study to characterize the performance of several prognostic models based on gene expression for 442 lung adenocarcinomas. The hypotheses proposed examined whether microarray measurements of gene expression either alone or combined with basic clinical covariates (stage, age, sex) can be used to predict overall survival in lung cancer subjects. Several models examined produced risk scores that substantially correlated with actual subject outcome. Most methods performed better with clinical data, supporting the combined use of clinical and molecular information when building prognostic models for early stage lung cancer. This study also provides the largest available set of microarray data with extensive pathological and clinical annotation for lung adenocarcinomas. PMID:18641660

  18. Is pigment patterning in fish skin determined by the Turing mechanism?

    PubMed

    Watanabe, Masakatsu; Kondo, Shigeru

    2015-02-01

    More than half a century ago, Alan Turing postulated that pigment patterns may arise from a mechanism that could be mathematically modeled based on the diffusion of two substances that interact with each other. Over the past 15 years, the molecular and genetic tools to verify this prediction have become available. Here, we review experimental studies aimed at identifying the mechanism underlying pigment pattern formation in zebrafish. Extensive molecular genetic studies in this model organism have revealed the interactions between the pigment cells that are responsible for the patterns. The mechanism discovered is substantially different from that predicted by the mathematical model, but it retains the property of 'local activation and long-range inhibition', a necessary condition for Turing pattern formation. Although some of the molecular details of pattern formation remain to be elucidated, current evidence confirms that the underlying mechanism is mathematically equivalent to the Turing mechanism. Copyright © 2014 Elsevier Ltd. All rights reserved.
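
    As a generic illustration of "local activation and long-range inhibition", the sketch below integrates a 1-D Gray-Scott reaction-diffusion system, a standard Turing-type example; it is not a model of the zebrafish pigment-cell interactions discussed in the record, and the parameter values are the commonly used textbook ones.

      # Minimal sketch of a Turing-type pattern simulation: 1-D Gray-Scott
      # reaction-diffusion system with periodic boundaries.
      import numpy as np

      n, steps = 256, 20000
      Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065     # classic Gray-Scott parameters
      u = np.ones(n)
      v = np.zeros(n)
      u[n // 2 - 10: n // 2 + 10] = 0.5           # local perturbation to seed patterns
      v[n // 2 - 10: n // 2 + 10] = 0.5

      def laplacian(a):
          return np.roll(a, 1) + np.roll(a, -1) - 2 * a   # periodic boundary

      for _ in range(steps):
          uvv = u * v * v
          u += Du * laplacian(u) - uvv + F * (1 - u)
          v += Dv * laplacian(v) + uvv - (F + k) * v

      peaks = (v > np.roll(v, 1)) & (v > np.roll(v, -1)) & (v > 0.1)
      print("number of pattern peaks (v maxima):", int(peaks.sum()))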

  19. Prediction of genomic breeding values for dairy traits in Italian Brown and Simmental bulls using a principal component approach.

    PubMed

    Pintus, M A; Gaspa, G; Nicolazzi, E L; Vicario, D; Rossoni, A; Ajmone-Marsan, P; Nardone, A; Dimauro, C; Macciotta, N P P

    2012-06-01

    The large number of markers available compared with phenotypes represents one of the main issues in genomic selection. In this work, principal component analysis was used to reduce the number of predictors for calculating genomic breeding values (GEBV). Bulls of 2 cattle breeds farmed in Italy (634 Brown and 469 Simmental) were genotyped with the 54K Illumina beadchip (Illumina Inc., San Diego, CA). After data editing, 37,254 and 40,179 single nucleotide polymorphisms (SNP) were retained for Brown and Simmental, respectively. Principal component analysis carried out on the SNP genotype matrix extracted 2,257 and 3,596 new variables in the 2 breeds, respectively. Bulls were sorted by birth year to create reference and prediction populations. The effect of principal components on deregressed proofs in reference animals was estimated with a BLUP model. Results were compared with those obtained by using SNP genotypes as predictors with either the BLUP or Bayes_A method. Traits considered were milk, fat, and protein yields, fat and protein percentages, and somatic cell score. The GEBV were obtained for prediction population by blending direct genomic prediction and pedigree indexes. No substantial differences were observed in squared correlations between GEBV and EBV in prediction animals between the 3 methods in the 2 breeds. The principal component analysis method allowed for a reduction of about 90% in the number of independent variables when predicting direct genomic values, with a substantial decrease in calculation time and without loss of accuracy. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
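
    The principal-component route to genomic prediction can be sketched as PCA compression of the SNP matrix followed by a ridge (BLUP-like) regression on the retained components. The genotype data, ridge penalty, and number of components below are synthetic or assumed; the deregressed proofs and pedigree blending are not shown.

      # Minimal sketch: PCA-compressed genomic prediction with a ridge penalty.
      import numpy as np

      rng = np.random.default_rng(6)
      n_bulls, n_snps = 600, 5000
      genotypes = rng.integers(0, 3, size=(n_bulls, n_snps)).astype(float)   # 0/1/2 allele counts
      true_effects = rng.normal(scale=0.05, size=n_snps)
      phenotype = genotypes @ true_effects + rng.normal(scale=1.0, size=n_bulls)

      X = genotypes - genotypes.mean(axis=0)
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      n_pc = 300                                       # retain leading components
      scores = U[:, :n_pc] * s[:n_pc]                  # principal-component scores

      ref, pred = np.arange(0, 500), np.arange(500, 600)   # reference vs prediction bulls
      lam = 10.0                                           # ridge penalty (assumed)
      A = scores[ref].T @ scores[ref] + lam * np.eye(n_pc)
      beta = np.linalg.solve(A, scores[ref].T @ phenotype[ref])
      gebv = scores[pred] @ beta
      r2 = np.corrcoef(gebv, phenotype[pred])[0, 1] ** 2
      print(f"squared correlation in prediction set: {r2:.2f}")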

  20. Worldwide multi-model intercomparison of clear-sky solar irradiance predictions

    NASA Astrophysics Data System (ADS)

    Ruiz-Arias, Jose A.; Gueymard, Christian A.; Cebecauer, Tomas

    2017-06-01

    Accurate modeling of solar radiation in the absence of clouds is highly important because solar power production peaks during cloud-free situations. The conventional validation approach of clear-sky solar radiation models relies on the comparison between model predictions and ground observations. Therefore, this approach is limited to locations with availability of high-quality ground observations, which are scarce worldwide. As a consequence, many areas of interest for, e.g., solar energy development, still remain sub-validated. Here, a worldwide intercomparison of the global horizontal irradiance (GHI) and direct normal irradiance (DNI) calculated by a number of appropriate clear-sky solar radiation models is proposed, without direct intervention of any weather or solar radiation ground-based observations. The model inputs are all gathered from atmospheric reanalyses covering the globe. The model predictions are compared to each other and only their relative disagreements are quantified. The largest differences between model predictions are found over central and northern Africa, the Middle East, and all over Asia. This coincides with areas of high aerosol optical depth and highly varying aerosol distribution size. Overall, the differences in modeled DNI are found to be about twice as large as those for GHI. It is argued that the prevailing weather regimes (most importantly, aerosol conditions) over regions exhibiting substantial divergences are not adequately parameterized by all models. Further validation and scrutiny using conventional methods based on ground observations should be pursued as a priority over those specific regions to correctly evaluate the performance of clear-sky models, and select those that can be recommended for solar concentrating applications in particular.

  1. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts, and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
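
    The core BMA construction, a weighted mixture of member-specific conditional pdfs, can be sketched with plain Gaussians as below; the paper's contribution is to replace these fixed-form pdfs with particle-filter/Gaussian-mixture estimates, which is not reproduced here. The forecast values, weights, and spreads are hypothetical.

      # Minimal sketch of the BMA predictive density: a weighted mixture of
      # Gaussian pdfs centred on the (bias-corrected) member forecasts.
      import numpy as np
      from scipy.stats import norm

      forecasts = np.array([12.0, 15.5, 10.8])      # ensemble member forecasts (hypothetical)
      weights   = np.array([0.5, 0.3, 0.2])         # BMA weights from the training period
      sigmas    = np.array([2.0, 3.0, 2.5])         # member predictive spreads

      x = np.linspace(0, 30, 301)
      bma_pdf = sum(w * norm.pdf(x, loc=f, scale=s)
                    for w, f, s in zip(weights, forecasts, sigmas))

      mean = np.trapz(x * bma_pdf, x)               # mean of the mixture
      print(f"BMA predictive mean: {mean:.2f}")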

  2. HIV Treatment as Prevention: Systematic Comparison of Mathematical Models of the Potential Impact of Antiretroviral Therapy on HIV Incidence in South Africa

    PubMed Central

    Eaton, Jeffrey W.; Johnson, Leigh F.; Salomon, Joshua A.; Bärnighausen, Till; Bendavid, Eran; Bershteyn, Anna; Bloom, David E.; Cambiano, Valentina; Fraser, Christophe; Hontelez, Jan A. C.; Humair, Salal; Klein, Daniel J.; Long, Elisa F.; Phillips, Andrew N.; Pretorius, Carel; Stover, John; Wenger, Edward A.; Williams, Brian G.; Hallett, Timothy B.

    2012-01-01

    Background Many mathematical models have investigated the impact of expanding access to antiretroviral therapy (ART) on new HIV infections. Comparing results and conclusions across models is challenging because models have addressed slightly different questions and have reported different outcome metrics. This study compares the predictions of several mathematical models simulating the same ART intervention programmes to determine the extent to which models agree about the epidemiological impact of expanded ART. Methods and Findings Twelve independent mathematical models evaluated a set of standardised ART intervention scenarios in South Africa and reported a common set of outputs. Intervention scenarios systematically varied the CD4 count threshold for treatment eligibility, access to treatment, and programme retention. For a scenario in which 80% of HIV-infected individuals start treatment on average 1 y after their CD4 count drops below 350 cells/µl and 85% remain on treatment after 3 y, the models projected that HIV incidence would be 35% to 54% lower 8 y after the introduction of ART, compared to a counterfactual scenario in which there is no ART. More variation existed in the estimated long-term (38 y) reductions in incidence. The impact of optimistic interventions including immediate ART initiation varied widely across models, maintaining substantial uncertainty about the theoretical prospect for elimination of HIV from the population using ART alone over the next four decades. The number of person-years of ART per infection averted over 8 y ranged between 5.8 and 18.7. Considering the actual scale-up of ART in South Africa, seven models estimated that current HIV incidence is 17% to 32% lower than it would have been in the absence of ART. Differences between model assumptions about CD4 decline and HIV transmissibility over the course of infection explained only a modest amount of the variation in model results. Conclusions Mathematical models evaluating the impact of ART vary substantially in structure, complexity, and parameter choices, but all suggest that ART, at high levels of access and with high adherence, has the potential to substantially reduce new HIV infections. There was broad agreement regarding the short-term epidemiologic impact of ambitious treatment scale-up, but more variation in longer term projections and in the efficiency with which treatment can reduce new infections. Differences between model predictions could not be explained by differences in model structure or parameterization that were hypothesized to affect intervention impact. Please see later in the article for the Editors' Summary PMID:22802730

  3. On the prediction of the Free Core Nutation

    NASA Astrophysics Data System (ADS)

    Belda Palazón, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald; Modiri, Sadegh

    2017-04-01

    Consideration of the Free Core Nutation (FCN) model is required for improved modelling of the Celestial Pole Offsets (CPO), since it is the major source of inaccuracy or unexplained time variability with respect to the current IAU2000 nutation theory. FCN is excited by various geophysical sources and thus it cannot be known until it is inferred from observations. However, given that the variations of the FCN signal are slow and seldom abrupt, we examine whether the availability of new FCN empirical models (e.g., Malkin 2007; Krásná et al. 2013; Belda et al. 2016) can be exploited to make reasonably accurate predictions of the FCN signal before observing it. In this work we study CPO predictions for the FCN model provided by Belda et al. 2016, in which the amplitude coefficients were estimated by using a sliding window with a width of 400 days and with a minimal displacement between the subsequent fits (one-day step). Our results exhibit two significant features: (1) the prediction of the FCN signal can be done on the basis of its prior amplitudes with a mean error of about 30 microarcseconds per year, with an apparent linear trend; and (2) the Weighted Root Mean Square (wrms) of the differences between the CPO produced by the IERS (International Earth Rotation and Reference Systems Service) and our predicted FCN exhibits a slowly growing exponential pattern, with a wrms close to 120 microarcseconds over several months. Therefore, a substantial improvement with respect to the CPO operational predictions of the IERS Rapid Service/Prediction Centre can be achieved.

  4. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high-performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.

  5. Spatial distribution of CH3 and CH2 radicals in a methane rf discharge

    NASA Astrophysics Data System (ADS)

    Sugai, H.; Kojima, H.; Ishida, A.; Toyoda, H.

    1990-06-01

    Spatial distributions of neutral radicals CH3 and CH2 in a capacitively coupled rf glow discharge of methane were measured by threshold ionization mass spectrometry. A strong asymmetry of the density profile was found for the CH2 radical in the high-pressure (˜100 mTorr) discharge. In addition, comprehensive measurements of electron energy distribution, ionic composition, and radical sticking coefficient were made to use as inputs to theoretical modeling of radicals in the methane plasma. The model predictions agree substantially with the measured radical distributions.

  6. Intraplate deformation, stress in the lithosphere and the driving mechanism for plate motions

    NASA Technical Reports Server (NTRS)

    Albee, Arden L.

    1993-01-01

    The initial research proposed was to use the predictions of geodynamical models of mantle flow, combined with geodetic observations of intraplate strain and stress, to better constrain mantle convection and the driving mechanism for plate motions and deformation. It is only now that geodetic observations of intraplate strain are becoming sufficiently well resolved to make them useful for substantial geodynamical inference. A model of flow in the mantle that explains almost 90 percent of the variance in the observed long-wavelength nonhydrostatic geoid was developed.

  7. Assessing the accuracy and stability of variable selection ...

    EPA Pesticide Factsheets

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substanti
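
    A minimal sketch of the backwards-elimination comparison described in this factsheet, using scikit-learn's random forest and its out-of-bag score on synthetic data (the StreamCat predictors and survey responses are not used here):

      # Minimal sketch: backwards elimination for a random-forest classifier,
      # scored with the out-of-bag (OOB) estimate on synthetic data.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier

      X, y = make_classification(n_samples=1000, n_features=50, n_informative=8,
                                 random_state=0)
      features = list(range(X.shape[1]))
      history = []

      while len(features) > 5:
          rf = RandomForestClassifier(n_estimators=300, oob_score=True,
                                      random_state=0, n_jobs=-1)
          rf.fit(X[:, features], y)
          history.append((len(features), rf.oob_score_))
          drop = int(np.argmin(rf.feature_importances_))   # least important feature
          del features[drop]

      for n_feat, oob in history:
          print(f"{n_feat:3d} predictors -> OOB accuracy {oob:.3f}")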

  8. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani; Le, Choung

    1993-01-01

    The NASA Propagation Program supports academic research that models various propagation phenomena in the space research frequency bands. NASA supports such research via schools and institutions prominent in the field. The products of such efforts are particularly useful for researchers in the field of propagation phenomena and telecommunications systems engineers. The systems engineer usually needs a few propagation parameter values for a system design. Published literature on the subject, such as the Consultative Committee for International Radio (CCIR) publications, may help somewhat, but oftentimes the parameter values given in such publications use a particular set of conditions which may not quite include the requirements of the system design. The systems engineer must resort to programming the propagation phenomenon model of interest to obtain the parameter values to be used in the project. Furthermore, the researcher in the propagation field must then program the propagation models either to substantiate the model or to generate a new model. The researcher or the systems engineer must either be a skillful computer programmer or hire a programmer, which of course increases the cost of the effort. An increase in cost due to the inevitable programming effort may seem particularly inappropriate if the data generated by the experiment is to be used to substantiate the already well-established models, or a slight variation thereof. To help researchers and the systems engineers, it was recommended by the participants of NASA Propagation Experimenters (NAPEX) 15, held in London, Ontario, Canada on 28-29 June 1991, that propagation software should be constructed which will contain models and prediction methods for most propagation phenomena. Moreover, the software should be flexible enough for the user to make slight changes to the models without expending a substantial effort in programming.

  9. Benchmarking novel approaches for modelling species range dynamics

    PubMed Central

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305

  10. Benchmarking novel approaches for modelling species range dynamics.

    PubMed

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. © 2016 John Wiley & Sons Ltd.

  11. The Regionalization of National-Scale SPARROW Models for Stream Nutrients

    USGS Publications Warehouse

    Schwarz, G.E.; Alexander, R.B.; Smith, R.A.; Preston, S.D.

    2011-01-01

    This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models. © 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.

  12. A structural model for the osmosensor, transporter, and osmoregulator ProP of Escherichia coli.

    PubMed

    Wood, Janet M; Culham, Doreen E; Hillar, Alexander; Vernikovska, Yaroslava I; Liu, Feng; Boggs, Joan M; Keates, Robert A B

    2005-04-19

    Transporter ProP of Escherichia coli, a member of the major facilitator superfamily (MFS), acts as an osmosensor and an osmoregulator in cells and after purification and reconstitution in proteoliposomes. H(+)-osmoprotectant symport via ProP is activated when medium osmolality is elevated with membrane impermeant osmolytes. The three-dimensional structure of ProP was modeled with the crystal structure of MFS member GlpT as a template. This GlpT structure represents the inward (or cytoplasm)-facing conformation predicted by the alternating access model for transport. LacZ-PhoA fusion analysis and site-directed fluorescence labeling substantiated the membrane topology and orientation predicted by this model and most hydropathy analyses. The model predicts the presence of a proton pathway within the N-terminal six-helix bundle of ProP (as opposed to the corresponding pathway found within the C-terminal helix bundle of its paralogue, LacY). Replacement of residues within the N-terminal helix bundle impaired the osmotic activation of ProP, providing the first indication that residues outside the C-terminal domain are involved in osmosensing. Some residues that were accessible from the periplasmic side, as predicted by the structural model, were more susceptible to covalent labeling in permeabilized membrane fractions than in intact bacteria. These residues may be accessible from the cytoplasmic side in structures not represented by our current model, or their limited exposure in vivo may reflect constraints on transporter structure that are related to its osmosensory mechanism.

  13. Fraction of organic carbon predicts labile desorption rates of chlorinated organic pollutants in laboratory-spiked geosorbents.

    PubMed

    Ginsbach, Jake W; Killops, Kato L; Olsen, Robert M; Peterson, Brittney; Dunnivant, Frank M

    2010-05-01

    The resuspension of large volumes of sediments that are contaminated with chlorinated pollutants continues to threaten environmental quality and human health. Whereas kinetic models are more accurate for estimating the environmental impact of these events, their widespread use is substantially hampered by the need for costly, time-consuming, site-specific kinetics experiments. The present study investigated the development of a predictive model for desorption rates from easily measurable sorbent and pollutant properties by examining the relationship between the fraction of organic carbon (fOC) and labile release rates. Duplicate desorption measurements were performed on 46 unique combinations of pollutants and sorbents with fOC values ranging from 0.001 to 0.150. Labile desorption rate constants indicate that release rates predominantly depend upon the fOC in the geosorbent. Previous theoretical models, such as the macro-mesopore and organic matter (MOM) diffusion model, have predicted such a relationship but could not accurately predict the experimental rate constants collected in the present study. An empirical model was successfully developed to correlate the labile desorption rate constant (krap) to the fraction of organic material where log(krap) = 0.291 − 0.785 · log(fOC). These results provide the first experimental evidence that kinetic pollution releases during resuspension events are governed by the fOC content in natural geosorbents. Copyright (c) 2010 SETAC.
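
    To make the reported relationship concrete, the short sketch below evaluates the empirical fit log(krap) = 0.291 − 0.785 · log(fOC) over the fOC range studied, assuming base-10 logarithms; it is an illustration of the published equation, not the authors' code.

```python
import math

def labile_desorption_rate(f_oc):
    """Empirical labile desorption rate constant k_rap as a function of the
    fraction of organic carbon, assuming base-10 logarithms:
    log(k_rap) = 0.291 - 0.785 * log(f_oc)."""
    return 10.0 ** (0.291 - 0.785 * math.log10(f_oc))

# Evaluate across the fOC range covered by the study (0.001 to 0.150).
for f_oc in (0.001, 0.010, 0.050, 0.150):
    print(f"fOC = {f_oc:.3f} -> k_rap ≈ {labile_desorption_rate(f_oc):.1f}")
```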

  14. Adsorption of selected pharmaceuticals and an endocrine disrupting compound by granular activated carbon. 2. Model prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Z.; Peldszus, S.; Huck, P.M.

    The adsorption of two representative pharmaceutically active compounds (PhACs) naproxen and carbamazepine and one endocrine disrupting compound (EDC) nonylphenol was studied in pilot-scale granular activated carbon (GAC) adsorbers using post-sedimentation (PS) water from a full-scale drinking water treatment plant. The GAC adsorbents were coal-based Calgon Filtrasorb 400 and coconut shell-based PICA CTIF TE. Acidic naproxen broke through fastest while nonylphenol was removed best, which was consistent with the degree to which fouling affected compound removals. Model predictions and experimental data were generally in good agreement for all three compounds, which demonstrated the effectiveness and robustness of the pore and surface diffusion model (PSDM) used in combination with the time-variable parameter approach for predicting removals at environmentally relevant concentrations (i.e., ng/L range). Sensitivity analyses suggested that accurate determination of film diffusion coefficients was critical for predicting breakthrough for naproxen and carbamazepine, in particular when high removals are targeted. Model simulations demonstrated that GAC carbon usage rates (CURs) for naproxen were substantially influenced by the empty bed contact time (EBCT) at the investigated conditions. Model-based comparisons between GAC CURs and minimum CURs for powdered activated carbon (PAC) applications suggested that PAC would be most appropriate for achieving 90% removal of naproxen, whereas GAC would be more suitable for nonylphenol. 25 refs., 4 figs., 1 tab.

  15. Effect of smoking parameters on the particle size distribution and predicted airway deposition of mainstream cigarette smoke.

    PubMed

    Kane, David B; Asgharian, Bahman; Price, Owen T; Rostami, Ali; Oldham, Michael J

    2010-02-01

    It is known that puffing conditions such as puff volume, duration, and frequency vary substantially among individual smokers. This study investigates how these parameters affect the particle size distribution and concentration of fresh mainstream cigarette smoke (MCS) and how these changes affect the predicted deposition of MCS particles in a model human respiratory tract. Measurements of the particle size distribution made with an electrical low pressure impactor for a variety of puffing conditions are presented. The average flow rate of the puff is found to be the major factor affecting the measured particle size distribution of the MCS. The results of these measurements were then used as input to a deterministic dosimetry model (MPPD) to estimate the changes in the respiratory tract deposition fraction of smoke particles. The MPPD dosimetry model was modified by incorporating mechanisms involved in respiratory tract deposition of MCS: hygroscopic growth, coagulation, evaporation of semivolatiles, and mixing of the smoke with inhaled dilution air. The addition of these mechanisms to MPPD resulted in reasonable agreement between predicted airway deposition and human smoke retention measurements. The modified MPPD model predicts a modest 10% drop in the total deposition efficiency in a model human respiratory tract as the puff flow rate is increased from 1050 to 3100 ml/min, for a 2-s puff.

  16. Multivariate prediction of motor diagnosis in Huntington's disease: 12 years of PREDICT‐HD

    PubMed Central

    Long, Jeffrey D.

    2015-01-01

    Background: It is well known in Huntington's disease that cytosine‐adenine‐guanine expansion and age at study entry are predictive of the timing of motor diagnosis. The goal of this study was to assess whether additional motor, imaging, cognitive, functional, psychiatric, and demographic variables measured at study entry increased the ability to predict the risk of motor diagnosis over 12 years. Methods: One thousand seventy‐eight Huntington's disease gene–expanded carriers (64% female) from the Neurobiological Predictors of Huntington's Disease study were followed up for up to 12 y (mean = 5, standard deviation = 3.3) covering 2002 to 2014. No one had a motor diagnosis at study entry, but 225 (21%) carriers prospectively received a motor diagnosis. Analysis was performed with random survival forests, which is a machine learning method for right‐censored data. Results: Adding 34 variables along with cytosine‐adenine‐guanine and age substantially increased predictive accuracy relative to cytosine‐adenine‐guanine and age alone. Adding six of the common motor and cognitive variables (total motor score, diagnostic confidence level, Symbol Digit Modalities Test, three Stroop tests) resulted in lower predictive accuracy than the full set, but still had twice the 5‐y predictive accuracy of cytosine‐adenine‐guanine and age alone. Additional analysis suggested interactions and nonlinear effects that were characterized in a post hoc Cox regression model. Conclusions: Measurement of clinical variables can substantially increase the accuracy of predicting motor diagnosis over and above cytosine‐adenine‐guanine and age (and their interaction). Estimated probabilities can be used to characterize progression level and aid in future studies' sample selection. © 2015 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:26340420
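
    For readers unfamiliar with random survival forests, the sketch below shows the general approach on synthetic right-censored data using the scikit-survival package; the predictors, sample size, and outcome are placeholders, and this is not the PREDICT-HD analysis code.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(0)
n = 200

# Placeholder predictors standing in for CAG repeat length, age at entry,
# and additional motor/cognitive measures.
X = rng.normal(size=(n, 5))

# Right-censored outcome: time to motor diagnosis plus an event indicator.
time = rng.exponential(scale=8.0, size=n)
event = rng.random(n) < 0.4
y = Surv.from_arrays(event=event, time=time)

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10, random_state=0)
rsf.fit(X, y)

# Concordance index on the training data (for illustration only).
print("c-index:", rsf.score(X, y))
```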

  17. Predicting on-road assessment pass and fail outcomes in older drivers with cognitive impairment using a battery of computerized sensory-motor and cognitive tests.

    PubMed

    Hoggarth, Petra A; Innes, Carrie R H; Dalrymple-Alford, John C; Jones, Richard D

    2013-12-01

    To generate a robust model of computerized sensory-motor and cognitive test performance to predict on-road driving assessment outcomes in older persons with diagnosed or suspected cognitive impairment. A logistic regression model classified pass–fail outcomes of a blinded on-road driving assessment. Generalizability of the model was tested using leave-one-out cross-validation. Three specialist clinics in New Zealand. Drivers (n=279; mean age 78.4, 65% male) with diagnosed or suspected dementia, mild cognitive impairment, unspecified cognitive impairment, or memory problems referred for a medical driving assessment. A computerized battery of sensory-motor and cognitive tests and an on-road medical driving assessment. One hundred fifty-five participants (55.5%) received an on-road fail score. Binary logistic regression correctly classified 75.6% of the sample into on-road pass and fail groups. The cross-validation indicated accuracy of the model of 72.0% with sensitivity for detecting on-road fails of 73.5%, specificity of 70.2%, positive predictive value of 75.5%, and negative predictive value of 68%. The off-road assessment prediction model resulted in a substantial number of people who were assessed as likely to fail despite passing an on-road assessment and vice versa. Thus, despite a large multicenter sample, the use of off-road tests previously found to be useful in other older populations, and a carefully constructed and tested prediction model, off-road measures have yet to be found that are sufficiently accurate to allow acceptable determination of on-road driving safety of cognitively impaired older drivers. © 2013, Copyright the Authors Journal compilation © 2013, The American Geriatrics Society.
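
    The modelling strategy described (binary logistic regression with leave-one-out cross-validation, summarised by sensitivity, specificity, PPV, and NPV) can be sketched generically with scikit-learn and synthetic data as below; all variables are placeholders rather than the study's actual test battery.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 279  # same order of magnitude as the study sample

# Placeholder sensory-motor and cognitive test scores and on-road fail labels.
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)  # 1 = fail

model = LogisticRegression(max_iter=1000)
y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print("accuracy   :", (tp + tn) / n)
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("PPV        :", tp / (tp + fp))
print("NPV        :", tn / (tn + fn))
```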

  18. Development and Validation of a Practical Two-Step Prediction Model and Clinical Risk Score for Post-Thrombotic Syndrome.

    PubMed

    Amin, Elham E; van Kuijk, Sander M J; Joore, Manuela A; Prandoni, Paolo; Cate, Hugo Ten; Cate-Hoek, Arina J Ten

    2018-06-04

    Post-thrombotic syndrome (PTS) is a common chronic consequence of deep vein thrombosis that affects quality of life and is associated with substantial costs. In clinical practice, it is not possible to predict the individual patient risk. We developed and validated a practical two-step prediction tool for PTS in the acute and sub-acute phases of deep vein thrombosis. Multivariable regression modelling was performed with data from two prospective cohorts in which 479 (derivation) and 1,107 (validation) consecutive patients with objectively confirmed deep vein thrombosis of the leg were included, recruited from the thrombosis outpatient clinic of Maastricht University Medical Centre, the Netherlands (derivation), and Padua University Hospital, Italy (validation). PTS was defined as a Villalta score of ≥ 5 at least 6 months after acute thrombosis. Variables in the baseline model in the acute phase were age, body mass index, sex, varicose veins, history of venous thrombosis, smoking status, provoked thrombosis, and thrombus location. For the secondary model, the additional variable was residual vein obstruction. Optimism-corrected areas under the receiver operating characteristic curve (AUCs) were 0.71 for the baseline model and 0.60 for the secondary model. Calibration plots showed well-calibrated predictions. External validation of the derived clinical risk scores was successful: AUC, 0.66 (95% confidence interval [CI], 0.63-0.70) and 0.64 (95% CI, 0.60-0.69). Individual risk for PTS in the acute phase of deep vein thrombosis can be predicted from readily accessible baseline clinical and demographic characteristics. The individual risk in the sub-acute phase can be predicted with limited additional clinical characteristics. Schattauer GmbH Stuttgart.
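
    The optimism-corrected AUCs reported above are typically obtained by bootstrap resampling of the derivation cohort. The sketch below illustrates a standard Harrell-style optimism correction for a logistic model on synthetic data; it approximates the general technique, not the exact procedure used in this study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """Bootstrap optimism correction for the AUC of a logistic model;
    a generic sketch, not the study's exact procedure."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        Xb, yb = X[idx], y[idx]
        if yb.min() == yb.max():            # skip degenerate resamples
            continue
        mb = LogisticRegression(max_iter=1000).fit(Xb, yb)
        auc_boot = roc_auc_score(yb, mb.predict_proba(Xb)[:, 1])
        auc_orig = roc_auc_score(y, mb.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)
    return apparent - np.mean(optimism)

# Synthetic stand-ins for baseline clinical predictors and the PTS outcome.
rng = np.random.default_rng(2)
X = rng.normal(size=(479, 8))
y = (X[:, 0] - X[:, 1] + rng.normal(size=479) > 0).astype(int)
print("optimism-corrected AUC:", round(optimism_corrected_auc(X, y), 3))
```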

  19. Elastic network model of learned maintained contacts to predict protein motion

    PubMed Central

    Putz, Ines

    2017-01-01

    We present a novel elastic network model, lmcENM, to determine protein motion even for localized functional motions that involve substantial changes in the protein’s contact topology. Existing elastic network models assume that the contact topology remains unchanged throughout the motion and are thus most appropriate to simulate highly collective function-related movements. lmcENM uses machine learning to differentiate breaking from maintained contacts. We show that lmcENM accurately captures functional transitions unexplained by the classical ENM and three reference ENM variants, while preserving the simplicity of classical ENM. We demonstrate the effectiveness of our approach on a large set of proteins covering different motion types. Our results suggest that accurately predicting a “deformation-invariant” contact topology offers a promising route to increase the general applicability of ENMs. We also find that to correctly predict this contact topology a combination of several features seems to be relevant which may vary slightly depending on the protein. Additionally, we present case studies of two biologically interesting systems, Ferric Citrate membrane transporter FecA and Arachidonate 15-Lipoxygenase. PMID:28854238

  20. Thick Galactic Cosmic Radiation Shielding Using Atmospheric Data

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Nurge, Mark A.; Starr, Stanley O.; Koontz, Steven L.

    2013-01-01

    NASA is concerned with protecting astronauts from the effects of galactic cosmic radiation and has expended substantial effort in the development of computer models to predict the shielding obtained from various materials. However, these models were only developed for shields up to about 120 g/cm2 in thickness and have predicted that shields of this thickness are insufficient to provide adequate protection for extended deep space flights. Consequently, effort is underway to extend the range of these models to thicker shields and experimental data is required to help confirm the resulting code. In this paper empirically obtained effective dose measurements from aircraft flights in the atmosphere are used to obtain the radiation shielding function of the earth's atmosphere, a very thick shield. Obtaining this result required solving an inverse problem and the method for solving it is presented. The results are shown to be in agreement with current code in the ranges where they overlap. These results are then checked and used to predict the radiation dosage under thick shields such as planetary regolith and the atmosphere of Venus.

  1. Measured and predicted impingement noise for a model-scale under the wing externally blown flap configuration with a QCSEE type nozzle

    NASA Technical Reports Server (NTRS)

    Mckinzie, D. J., Jr.

    1980-01-01

    Jet/flap interaction noise was measured and predicted for a small-scale model two-flap, under-the-wing, externally blown flap configuration equipped with and without noise suppression devices. The devices consisted of short spanwise fairings centered in relationship to the jet axis and positioned in the slots between the wing and flaps. The nozzle approximated that of the Quiet Clean Short-haul Experimental Engine (QCSEE). Takeoff noise reductions of 6 dB in the flyover and 5 dB in the sideline plane were obtained over a wide range of radiation angles. Approach noise reductions of about 5 dB were obtained only in the forward quadrant of the flyover plane; no reductions were obtained in the sideline plane. Models of several noise sources were combined analytically to form an overall noise prediction, the results from which compared favorably with the measured data. The aerodynamic performance characteristics for these configurations were substantially the same in the takeoff attitude. However, in the approach attitude, the suppressed configuration produced a 6 percent reduction in the flow turning efficiency.

  2. Modelling a model?!! Prediction of observed and calculated daily pan evaporation in New Mexico, U.S.A.

    NASA Astrophysics Data System (ADS)

    Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.

    2012-04-01

    Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purposes of determining whether any substantial differences exist between either option. This analysis will address recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. These differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines-of-evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.

  3. How much detail is needed in modeling a transcranial magnetic stimulation figure-8 coil: Measurements and brain simulations

    PubMed Central

    Mandija, Stefano; Sommer, Iris E. C.; van den Berg, Cornelis A. T.; Neggers, Sebastiaan F. W.

    2017-01-01

    Background: Despite the wide adoption of TMS, its spatial and temporal patterns of neuronal effects are not well understood. Although progress has been made in predicting induced currents in the brain using realistic finite element models (FEM), there is little consensus on how a magnetic field of a typical TMS coil should be modeled. Empirical validation of such models is limited and subject to several limitations. Methods: We evaluate and empirically validate models of a figure-of-eight TMS coil that are commonly used in published modeling studies, of increasing complexity: a simple circular coil model; a coil with in-plane spiral winding turns; and finally one with stacked spiral winding turns. We assess the electric fields induced by all 3 coil models in the motor cortex using a computer FEM model. Biot-Savart models of discretized wires were used to approximate the 3 coil models of increasing complexity. We use a tailored MR-based phase mapping technique to get a full 3D validation of the incident magnetic field induced in a cylindrical phantom by our TMS coil. FEM-based simulations on a meshed 3D brain model consisting of five tissue types were performed, using two orthogonal coil orientations. Results: Substantial differences in the induced currents are observed, both theoretically and empirically, between highly idealized coils and coils with correctly modeled spiral winding turns. The thickness of the coil winding turns affects the induced electric field only minimally and does not influence the predicted activation. Conclusion: TMS coil models used in FEM simulations should include the in-plane coil geometry in order to make reliable predictions of the incident field and, in turn, of the induced electric field and the resulting neuronal activation. PMID:28640923
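
    The coil models above are built from Biot-Savart summation over discretized wire segments. A minimal sketch of that computation for a single circular loop (an idealisation, not the figure-of-eight geometry or the coil parameters used in the study) is shown below.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def biot_savart(points, wire, current):
    """Magnetic field at `points` from a closed wire given as an ordered
    array of vertices, using straight-segment Biot-Savart summation."""
    B = np.zeros_like(points, dtype=float)
    for a, b in zip(wire, np.roll(wire, -1, axis=0)):
        dl = b - a                      # segment vector
        mid = 0.5 * (a + b)             # segment midpoint
        r = points - mid                # vectors from segment to field points
        r_norm = np.linalg.norm(r, axis=1, keepdims=True)
        B += MU0 * current / (4 * np.pi) * np.cross(dl, r) / r_norm**3
    return B

# One 40 mm-radius loop in the xy-plane, discretized into 100 segments.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
loop = np.c_[0.04 * np.cos(theta), 0.04 * np.sin(theta), np.zeros_like(theta)]

# Field 20 mm below the coil plane (roughly where cortex would sit);
# the 5 kA pulse amplitude is illustrative only.
pts = np.array([[0.0, 0.0, -0.02]])
print(biot_savart(pts, loop, current=5000.0))
```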

  4. Topographic Metric Predictions of Soil redistribution and Organic Carbon Distribution in Croplands

    NASA Astrophysics Data System (ADS)

    Mccarty, G.; Li, X.

    2017-12-01

    Landscape topography is a key factor controlling soil redistribution and soil organic carbon (SOC) distribution in Iowa croplands (USA). In this study, we adopted a combined approach based on carbon (13C) and cesium (137Cs) isotope tracers and digital terrain analysis to understand patterns of SOC redistribution and carbon sequestration dynamics as influenced by landscape topography in tilled cropland under long-term corn/soybean management. The fallout radionuclide 137Cs was used to estimate soil redistribution rates, and a Lidar-derived DEM was used to obtain a set of topographic metrics for digital terrain analysis. Soil redistribution rates and patterns of SOC distribution were examined across 560 sampling locations at two field sites as well as at larger scale within the watershed. We used δ13C content in SOC to partition C3- and C4-plant-derived C density at 127 locations in one of the two field sites, with corn being the primary source of C4 carbon. Topography-based models were developed to simulate SOC distribution and soil redistribution using stepwise ordinary least squares regression (SOLSR) and stepwise principal component regression (SPCR). All topography-based models developed through SPCR and SOLSR demonstrated good simulation performance, explaining more than 62% of the variability in SOC density and soil redistribution rates across the two intensively sampled field sites. However, the SOLSR models showed lower reliability than the SPCR models in predicting SOC density at the watershed scale. Spatial patterns of C3-derived SOC density were highly related to those of SOC density. Topographic metrics exerted substantial influence on C3-derived SOC density, with the SPCR model accounting for 76.5% of the spatial variance. In contrast, C4-derived SOC density had poor spatial structure, likely reflecting the substantial contribution of corn vegetation to recently sequestered SOC. Results of this study highlight the utility of topographic SPCR models for scaling field measurements of SOC density and soil redistribution rates to the watershed scale, which will allow watershed models to better predict the fate of ecosystem carbon on agricultural landscapes.
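
    As a generic illustration of the principal-component-regression idea underlying the SPCR models (without the stepwise selection step, and with entirely synthetic topographic metrics), a minimal scikit-learn pipeline might look like this.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 560  # number of field sampling locations in the study

# Placeholder topographic metrics (slope, curvature, wetness index, ...).
topo = rng.normal(size=(n, 10))
soc_density = topo[:, 0] - 0.5 * topo[:, 2] + rng.normal(scale=0.5, size=n)

# Principal component regression: standardize, project onto leading
# components, then regress SOC density on the component scores.
pcr = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())
pcr.fit(topo, soc_density)
print("R^2 on training data:", round(pcr.score(topo, soc_density), 3))
```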

  5. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    PubMed

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

    PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patients against time using fractional polynomials. The fit of the prognostic models were then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age of 40. The PREDICT v2 is an improved prognostication and treatment benefit model compared with v1. The online version should continue to aid clinical decision making in women with early breast cancer.
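
    As a generic illustration of the modelling framework (a multivariable Cox proportional hazards fit), the sketch below uses the lifelines package on synthetic data; the fractional-polynomial transformations, baseline-hazard smoothing, and the actual PREDICT covariates are not reproduced here.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 1000

# Synthetic stand-ins for prognostic variables in an ER-positive cohort.
df = pd.DataFrame({
    "age": rng.normal(58, 10, n),
    "tumour_size_mm": rng.gamma(shape=2.0, scale=10.0, size=n),
    "positive_nodes": rng.poisson(1.5, n),
})

# Simulate survival times whose hazard increases with the covariates.
risk = 0.02 * (df["age"] - 58) + 0.03 * df["tumour_size_mm"] + 0.2 * df["positive_nodes"]
df["time"] = rng.exponential(scale=np.exp(-risk) * 10.0)
df["event"] = (rng.random(n) < 0.6).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios for each covariate
```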

  6. Plant microRNA-Target Interaction Identification Model Based on the Integration of Prediction Tools and Support Vector Machine

    PubMed Central

    Meng, Jun; Shi, Lin; Luan, Yushi

    2014-01-01

    Background: Confident identification of microRNA-target interactions is significant for studying the function of microRNA (miRNA). Although some computational miRNA target prediction methods have been proposed for plants, results of various methods tend to be inconsistent and usually lead to more false positives. To address these issues, we developed an integrated model for identifying plant miRNA–target interactions. Results: Three online miRNA target prediction toolkits and machine learning algorithms were integrated to identify and analyze Arabidopsis thaliana miRNA-target interactions. Principal component analysis (PCA) feature extraction and self-training technology were introduced to improve the performance. Results showed that the proposed model outperformed the previously existing methods. The results were validated by using degradome sequencing supported Arabidopsis thaliana miRNA-target interactions. The proposed model constructed on Arabidopsis thaliana was run over Oryza sativa and Vitis vinifera to demonstrate that our model is effective for other plant species. Conclusions: The integrated model of online predictors and local PCA-SVM classifier gained credible and high quality miRNA-target interactions. The supervised learning algorithm of PCA-SVM classifier was employed in plant miRNA target identification for the first time. Its performance can be substantially improved if more experimentally proved training samples are provided. PMID:25051153
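
    The core of the classifier, PCA feature extraction followed by an SVM, can be sketched as a scikit-learn pipeline on placeholder features; the self-training step and the real prediction-tool scores are omitted.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 400

# Placeholder features, e.g. scores returned by several online miRNA target
# prediction tools for candidate miRNA-target pairs (synthetic here).
X = rng.normal(size=(n, 9))
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.8, size=n) > 0).astype(int)

pca_svm = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(pca_svm, X, y, cv=5)
print("5-fold CV accuracy:", scores.mean().round(3))
```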

  7. Local air gap thickness and contact area models for realistic simulation of human thermo-physiological response

    NASA Astrophysics Data System (ADS)

    Psikuta, Agnes; Mert, Emel; Annaheim, Simon; Rossi, René M.

    2018-02-01

    To evaluate the quality of new energy-saving and performance-supporting building and urban settings, thermal sensation and comfort models are often used. The accuracy of these models is related to accurate prediction of the human thermo-physiological response that, in turn, is highly sensitive to the local effect of clothing. This study aimed at the development of an empirical regression model of the air gap thickness and the contact area in clothing to accurately simulate human thermal and perceptual response. The statistical model reliably predicted both parameters for 14 body regions based on the clothing ease allowances. The effect of the standard error in air gap prediction on the thermo-physiological response was lower than the differences between healthy humans. It was demonstrated that currently used assumptions and methods for determination of the air gap thickness can produce a substantial error for all global, mean, and local physiological parameters and, hence, lead to false estimation of the resultant physiological state of the human body, thermal sensation, and comfort. Thus, this model may help researchers to strive for improvement of human thermal comfort, health, productivity, safety, and overall sense of well-being with simultaneous reduction of energy consumption and costs in the built environment.

  8. Estimating the Need for Medical Intervention due to Sleep Disruption on the International Space Station

    NASA Technical Reports Server (NTRS)

    Myers, Jerry G.; Lewandowski, Beth E.; Brooker, John E.; Hurst, S. R.; Mallis, Melissa M.; Caldwell, J. Lynn

    2008-01-01

    During ISS and shuttle missions, difficulties with sleep affect more than half of all US crews. Mitigation strategies to help astronauts cope with the challenges of disrupted sleep patterns can negatively impact both mission planning and vehicle design. The methods for addressing known detrimental impacts for some mission scenarios may have a substantial impact on vehicle-specific consumable mass or volume or on the mission timeline. As part of the Integrated Medical Model (IMM) task, NASA Glenn Research Center is leading the development of a Monte Carlo based forecasting tool designed to determine the consumables required to address risks related to sleep disruption. The model currently focuses on the International Space Station and uses an algorithm that assembles representative mission schedules and feeds this into a well-validated model that predicts relative levels of performance and need for sleep (SAFTE model, IBR Inc.). Correlation of the resulting output to self-diagnosed needs for hypnotics, stimulants, and other pharmaceutical countermeasures allows prediction of pharmaceutical use and the uncertainty of the specified prediction. This paper outlines a conceptual model for determining a rate of pharmaceutical utilization that can be used in the IMM model for comparison and optimization of mitigation methods with respect to all other significant medical needs and interventions.

  9. Attachment theory and theory of planned behavior: an integrative model predicting underage drinking.

    PubMed

    Lac, Andrew; Crano, William D; Berger, Dale E; Alvaro, Eusebio M

    2013-08-01

    Research indicates that peer and maternal bonds play important but sometimes contrasting roles in the outcomes of children. Less is known about attachment bonds to these 2 reference groups in young adults. Using a sample of 351 participants (18 to 20 years of age), the research integrated two theoretical traditions: attachment theory and theory of planned behavior (TPB). The predictive contribution of both theories was examined in the context of underage adult alcohol use. Using full structural equation modeling, results substantiated the hypotheses that secure peer attachment positively predicted norms and behavioral control toward alcohol, but secure maternal attachment inversely predicted attitudes and behavioral control toward alcohol. Alcohol attitudes, norms, and behavioral control each uniquely explained alcohol intentions, which anticipated an increase in alcohol behavior 1 month later. The hypothesized processes were statistically corroborated by tests of indirect and total effects. These findings support recommendations for programs designed to curtail risky levels of underage drinking using the tenets of attachment theory and TPB. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  10. Adaptive on-line prediction of the available power of lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Waag, Wladislaw; Fleischer, Christian; Sauer, Dirk Uwe

    2013-11-01

    In this paper a new approach for prediction of the available power of a lithium-ion battery pack is presented. It is based on a nonlinear battery model that includes current dependency of the battery resistance. It results in an accurate power prediction not only at room temperature, but also at lower temperatures at which the current dependency is substantial. The used model parameters are fully adaptable on-line to the given state of the battery (state of charge, state of health, temperature). This on-line adaption in combination with an explicit consideration of differences between characteristics of individual cells in a battery pack ensures an accurate power prediction under all possible conditions. The proposed trade-off between the number of used cell parameters and the total accuracy as well as the optimized algorithm results in a real-time capability of the method, which is demonstrated on a low-cost 16 bit microcontroller. The verification tests performed on a software-in-the-loop test bench system with four 40 Ah lithium-ion cells show promising results.
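
    Conceptually, available-power prediction amounts to finding the largest discharge current whose resistive voltage drop still keeps the terminal voltage above a limit, with the resistance itself depending on current. The toy sketch below illustrates that idea with invented parameters; it is not the published adaptive algorithm.

```python
import numpy as np

def available_discharge_power(ocv, v_min, r_of_i, i_grid):
    """Largest discharge power such that the terminal voltage stays above
    v_min, with a current-dependent resistance r_of_i(i). Toy model only."""
    best = 0.0
    for i in i_grid:
        v_term = ocv - r_of_i(i) * i        # simple resistive voltage drop
        if v_term >= v_min:
            best = max(best, v_term * i)    # power delivered at the terminals
    return best

# Illustrative cell parameters; the resistance grows at high currents
# (the current dependency becomes substantial at low temperature).
ocv, v_min = 3.7, 3.0
r_of_i = lambda i: 0.002 + 0.00001 * i
power = available_discharge_power(ocv, v_min, r_of_i, np.linspace(1, 400, 400))
print("available power [W]:", round(power, 1))
```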

  11. Quantifying the Dynamics of Field Cancerization in Tobacco-Related Head and Neck Cancer: A Multiscale Modeling Approach.

    PubMed

    Ryser, Marc D; Lee, Walter T; Ready, Neal E; Leder, Kevin Z; Foo, Jasmine

    2016-12-15

    High rates of local recurrence in tobacco-related head and neck squamous cell carcinoma (HNSCC) are commonly attributed to unresected fields of precancerous tissue. Because they are not easily detectable at the time of surgery without additional biopsies, there is a need for noninvasive methods to predict the extent and dynamics of these fields. Here, we developed a spatial stochastic model of tobacco-related HNSCC at the tissue level and calibrated the model using a Bayesian framework and population-level incidence data from the Surveillance, Epidemiology, and End Results (SEER) registry. Probabilistic model analyses were performed to predict the field geometry at time of diagnosis, and model predictions of age-specific recurrence risks were tested against outcome data from SEER. The calibrated models predicted a strong dependence of the local field size on age at diagnosis, with a doubling of the expected field diameter between ages at diagnosis of 50 and 90 years, respectively. Similarly, the probability of harboring multiple, clonally unrelated fields at the time of diagnosis was found to increase substantially with patient age. On the basis of these findings, we hypothesized a higher recurrence risk in older than in younger patients when treated by surgery alone; we successfully tested this hypothesis using age-stratified outcome data. Further clinical studies are needed to validate the model predictions in a patient-specific setting. This work highlights the importance of spatial structure in models of epithelial carcinogenesis and suggests that patient age at diagnosis may be a critical predictor of the size and multiplicity of precancerous lesions. Cancer Res; 76(24); 7078-88. ©2016 AACR. ©2016 American Association for Cancer Research.

  12. Quantifying the dynamics of field cancerization in tobacco-related head and neck cancer: a multi-scale modeling approach

    PubMed Central

    Ryser, Marc D.; Lee, Walter T.; Ready, Neal E.; Leder, Kevin Z.; Foo, Jasmine

    2017-01-01

    High rates of local recurrence in tobacco-related head and neck squamous cell carcinoma (HNSCC) are commonly attributed to unresected fields of precancerous tissue. Since they are not easily detectable at the time of surgery without additional biopsies, there is a need for non-invasive methods to predict the extent and dynamics of these fields. Here we developed a spatial stochastic model of tobacco-related HNSCC at the tissue level and calibrated the model using a Bayesian framework and population-level incidence data from the Surveillance, Epidemiology, and End Results (SEER) registry. Probabilistic model analyses were performed to predict the field geometry at time of diagnosis, and model predictions of age-specific recurrence risks were tested against outcome data from SEER. The calibrated models predicted a strong dependence of the local field size on age at diagnosis, with a doubling of the expected field diameter between ages at diagnosis of 50 and 90 years, respectively. Similarly, the probability of harboring multiple, clonally unrelated fields at the time of diagnosis was found to increase substantially with patient age. Based on these findings, we hypothesized a higher recurrence risk in older compared to younger patients when treated by surgery alone; we successfully tested this hypothesis using age-stratified outcome data. Further clinical studies are needed to validate the model predictions in a patient-specific setting. This work highlights the importance of spatial structure in models of epithelial carcinogenesis, and suggests that patient age at diagnosis may be a critical predictor of the size and multiplicity of precancerous lesions. Major Findings: Patient age at diagnosis was found to be a critical predictor of the size and multiplicity of precancerous lesions. This finding challenges the current one-size-fits-all approach to surgical excision margins. PMID:27913438

  13. Clinical prediction model to identify vulnerable patients in ambulatory surgery: towards optimal medical decision-making.

    PubMed

    Mijderwijk, Herjan; Stolker, Robert Jan; Duivenvoorden, Hugo J; Klimek, Markus; Steyerberg, Ewout W

    2016-09-01

    Ambulatory surgery patients are at risk of adverse psychological outcomes such as anxiety, aggression, fatigue, and depression. We developed and validated a clinical prediction model to identify patients who were vulnerable to these psychological outcome parameters. We prospectively assessed 383 mixed ambulatory surgery patients for psychological vulnerability, defined as the presence of anxiety (state/trait), aggression (state/trait), fatigue, and depression seven days after surgery. Three psychological vulnerability categories were considered, i.e., none, one, or multiple poor scores, defined as a score exceeding one standard deviation above the mean for each single outcome according to normative data. The following determinants were assessed preoperatively: sociodemographic (age, sex, level of education, employment status, marital status, having children, religion, nationality), medical (heart rate and body mass index), and psychological variables (self-esteem and self-efficacy), in addition to anxiety, aggression, fatigue, and depression. A prediction model was constructed using ordinal polytomous logistic regression analysis, and bootstrapping was applied for internal validation. The ordinal c-index (ORC) quantified the discriminative ability of the model, in addition to measures for overall model performance (Nagelkerke's R²). In this population, 137 (36%) patients were identified as being psychologically vulnerable after surgery for at least one of the psychological outcomes. The most parsimonious and optimal prediction model combined sociodemographic variables (level of education, having children, and nationality) with psychological variables (trait anxiety, state/trait aggression, fatigue, and depression). Model performance was promising: R² = 30% and ORC = 0.76 after correction for optimism. This study identified a substantial group of vulnerable patients in ambulatory surgery. The proposed clinical prediction model could allow healthcare professionals the opportunity to identify vulnerable patients in ambulatory surgery, although additional modification and validation are needed. (ClinicalTrials.gov number, NCT01441843).

  14. Habitat features and predictive habitat modeling for the Colorado chipmunk in southern New Mexico

    USGS Publications Warehouse

    Rivieccio, M.; Thompson, B.C.; Gould, W.R.; Boykin, K.G.

    2003-01-01

    Two subspecies of Colorado chipmunk (state threatened and federal species of concern) occur in southern New Mexico: Tamias quadrivittatus australis in the Organ Mountains and T. q. oscuraensis in the Oscura Mountains. We developed a GIS model of potentially suitable habitat based on vegetation and elevation features, evaluated site classifications of the GIS model, and determined vegetation and terrain features associated with chipmunk occurrence. We compared GIS model classifications with actual vegetation and elevation features measured at 37 sites. At 60 sites we measured 18 habitat variables regarding slope, aspect, tree species, shrub species, and ground cover. We used logistic regression to analyze habitat variables associated with chipmunk presence/absence. All (100%) 37 sample sites (28 predicted suitable, 9 predicted unsuitable) were classified correctly by the GIS model regarding elevation and vegetation. For 28 sites predicted suitable by the GIS model, 18 sites (64%) appeared visually suitable based on habitat variables selected from logistic regression analyses, of which 10 sites (36%) were specifically predicted as suitable habitat via logistic regression. We detected chipmunks at 70% of sites deemed suitable via the logistic regression models. Shrub cover, tree density, plant proximity, presence of logs, and presence of rock outcrop were retained in the logistic model for the Oscura Mountains; litter, shrub cover, and grass cover were retained in the logistic model for the Organ Mountains. Evaluation of predictive models illustrates the need for multi-stage analyses to best judge performance. Microhabitat analyses indicate prospective needs for different management strategies between the subspecies. Sensitivities of each population of the Colorado chipmunk to natural and prescribed fire suggest that partial burnings of areas inhabited by Colorado chipmunks in southern New Mexico may be beneficial. These partial burnings may later help avoid a fire that could substantially reduce habitat of chipmunks over a mountain range.

  15. Improving predictions of tropical forest response to climate change through integration of field studies and ecosystem modeling.

    PubMed

    Feng, Xiaohui; Uriarte, María; González, Grizelle; Reed, Sasha; Thompson, Jill; Zimmerman, Jess K; Murphy, Lora

    2018-01-01

    Tropical forests play a critical role in carbon and water cycles at a global scale. Rapid climate change is anticipated in tropical regions over the coming decades and, under a warmer and drier climate, tropical forests are likely to be net sources of carbon rather than sinks. However, our understanding of tropical forest response and feedback to climate change is very limited. Efforts to model climate change impacts on carbon fluxes in tropical forests have not reached a consensus. Here, we use the Ecosystem Demography model (ED2) to predict carbon fluxes of a Puerto Rican tropical forest under realistic climate change scenarios. We parameterized ED2 with species-specific tree physiological data using the Predictive Ecosystem Analyzer workflow and projected the fate of this ecosystem under five future climate scenarios. The model successfully captured interannual variability in the dynamics of this tropical forest. Model predictions closely followed observed values across a wide range of metrics including aboveground biomass, tree diameter growth, tree size class distributions, and leaf area index. Under a future warming and drying climate scenario, the model predicted reductions in carbon storage and tree growth, together with large shifts in forest community composition and structure. Such rapid changes in climate led the forest to transition from a sink to a source of carbon. Growth respiration and root allocation parameters were responsible for the highest fraction of predictive uncertainty in modeled biomass, highlighting the need to target these processes in future data collection. Our study is the first effort to rely on Bayesian model calibration and synthesis to elucidate the key physiological parameters that drive uncertainty in tropical forests responses to climatic change. We propose a new path forward for model-data synthesis that can substantially reduce uncertainty in our ability to model tropical forest responses to future climate. © 2017 John Wiley & Sons Ltd.

  16. Improving predictions of tropical forest response to climate change through integration of field studies and ecosystem modeling

    USGS Publications Warehouse

    Feng, Xiaohui; Uriarte, María; González, Grizelle; Reed, Sasha C.; Thompson, Jill; Zimmerman, Jess K.; Murphy, Lora

    2018-01-01

    Tropical forests play a critical role in carbon and water cycles at a global scale. Rapid climate change is anticipated in tropical regions over the coming decades and, under a warmer and drier climate, tropical forests are likely to be net sources of carbon rather than sinks. However, our understanding of tropical forest response and feedback to climate change is very limited. Efforts to model climate change impacts on carbon fluxes in tropical forests have not reached a consensus. Here we use the Ecosystem Demography model (ED2) to predict carbon fluxes of a Puerto Rican tropical forest under realistic climate change scenarios. We parameterized ED2 with species-specific tree physiological data using the Predictive Ecosystem Analyzer workflow and projected the fate of this ecosystem under five future climate scenarios. The model successfully captured inter-annual variability in the dynamics of this tropical forest. Model predictions closely followed observed values across a wide range of metrics including above-ground biomass, tree diameter growth, tree size class distributions, and leaf area index. Under a future warming and drying climate scenario, the model predicted reductions in carbon storage and tree growth, together with large shifts in forest community composition and structure. Such rapid changes in climate led the forest to transition from a sink to a source of carbon. Growth respiration and root allocation parameters were responsible for the highest fraction of predictive uncertainty in modeled biomass, highlighting the need to target these processes in future data collection. Our study is the first effort to rely on Bayesian model calibration and synthesis to elucidate the key physiological parameters that drive uncertainty in tropical forests responses to climatic change. We propose a new path forward for model-data synthesis that can substantially reduce uncertainty in our ability to model tropical forest responses to future climate.

  17. Modeling the Etiology of Adolescent Substance Use: A Test of the Social Development Model

    PubMed Central

    Catalano, Richard F.; Kosterman, Rick; Hawkins, J. David; Newcomb, Michael D.; Abbott, Robert D.

    2007-01-01

    The social development model is a general theory of human behavior that seeks to explain antisocial behaviors through specification of predictive developmental relationships. It incorporates the effects of empirical predictors (“risk factors” and “protective factors”) for antisocial behavior and attempts to synthesize the most strongly supported propositions of control theory, social learning theory, and differential association theory. This article examines the power of social development model constructs measured at ages 9 to 10 and 13 to 14 to predict drug use at ages 17 to 18. The sample of 590 is from the longitudinal panel of the Seattle Social Development Project, which in 1985 sampled fifth grade students from high crime neighborhoods in Seattle, Washington. Structural equation modeling techniques were used to examine the fit of the model to the data. Although all but one path coefficient were significant and in the expected direction, the model did not fit the data as well as expected (CFI=.87). We next specified second-order factors for each path to capture the substantial common variance in the constructs' opportunities, involvement, and rewards. This model fit the data well (CFI=.90). We conclude that the social development model provides an acceptable fit to predict drug use at ages 17 to 18. Implications for the temporal nature of key constructs and for prevention are discussed. PMID:17848978

  18. Landscape modeling for Everglades ecosystem restoration

    USGS Publications Warehouse

    DeAngelis, D.L.; Gross, L.J.; Huston, M.A.; Wolff, W.F.; Fleming, D.M.; Comiskey, E.J.; Sylvester, S.M.

    1998-01-01

    A major environmental restoration effort is under way that will affect the Everglades and its neighboring ecosystems in southern Florida. Ecosystem and population-level modeling is being used to help in the planning and evaluation of this restoration. The specific objective of one of these modeling approaches, the Across Trophic Level System Simulation (ATLSS), is to predict the responses of a suite of higher trophic level species to several proposed alterations in Everglades hydrology. These include several species of wading birds, the snail kite, Cape Sable seaside sparrow, Florida panther, white-tailed deer, American alligator, and American crocodile. ATLSS is an ecosystem landscape-modeling approach and uses Geographic Information System (GIS) vegetation data and existing hydrology models for South Florida to provide the basic landscape for these species. A method of pseudotopography provides estimates of water depths through time at 28 × 28-m resolution across the landscape of southern Florida. Hydrologic model output drives models of habitat and prey availability for the higher trophic level species. Spatially explicit, individual-based computer models simulate these species. ATLSS simulations can compare the landscape dynamic spatial pattern of the species resulting from different proposed water management strategies. Here we compare the predicted effects of one possible change in water management in South Florida with the base case of no change. Preliminary model results predict substantial differences between these alternatives in some biotic spatial patterns. © 1998 Springer-Verlag.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Michael D.; Olsen, Brett N.; Schlesinger, Paul H.

    In mammalian cells, cholesterol is essential for membrane function but in excess can be cytotoxic. The cellular response to acute cholesterol loading involves biophysical-based mechanisms that regulate cholesterol levels through modulation of the “activity” or accessibility of cholesterol to extra-membrane acceptors. Experiments and united atom (UA) simulations show that at high concentrations of cholesterol, lipid bilayers thin significantly and cholesterol availability to external acceptors increases substantially. Such cholesterol activation is critical to its trafficking within cells. Here we aim to reduce the computational cost to enable simulation of large and complex systems involved in cholesterol regulation, such as those including oxysterols and cholesterol-sensing proteins. To accomplish this, we have modified the published MARTINI coarse-grained force field to improve its predictions of cholesterol-induced changes in both macroscopic and microscopic properties of membranes. Most notably, MARTINI fails to capture both the (macroscopic) area condensation and membrane thickening seen at less than 30% cholesterol and the thinning seen above 40% cholesterol. The thinning at high concentration is critical to cholesterol activation. Microscopic properties of interest include cholesterol-cholesterol radial distribution functions (RDFs), tilt angle, and accessible surface area. First, we develop an “angle-corrected” model wherein we modify the coarse-grained bond angle potentials based on atomistic simulations. This modification significantly improves prediction of macroscopic properties, most notably the thickening/thinning behavior, and also slightly improves microscopic property prediction relative to MARTINI. Second, we add to the angle correction a “volume correction” by also adjusting phospholipid bond lengths to achieve a more accurate volume per molecule. The angle + volume correction substantially further improves the quantitative agreement of the macroscopic properties (area per molecule and thickness) with united atom simulations. However, this improvement also reduces the accuracy of microscopic predictions like radial distribution functions and cholesterol tilt below that of either MARTINI or the angle-corrected model. Thus, while both of our force-field corrections improve MARTINI, the combined angle and volume correction should be used for problems involving sterol effects on the overall structure of the membrane, while our angle-corrected model should be used in cases where the properties of individual lipid and sterol models are critically important.
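
    For orientation, MARTINI-style coarse-grained angle terms are commonly expressed as a cosine-harmonic potential; the sketch below simply evaluates that assumed functional form with illustrative parameters (the corrected parameters derived in this work are not reproduced).

```python
import numpy as np

def cosine_harmonic_angle(theta_deg, k, theta0_deg):
    """Cosine-harmonic angle potential of the kind used in MARTINI-style
    coarse-grained force fields: V = 0.5 * k * (cos(theta) - cos(theta0))^2."""
    theta = np.radians(theta_deg)
    theta0 = np.radians(theta0_deg)
    return 0.5 * k * (np.cos(theta) - np.cos(theta0)) ** 2

# Illustrative parameters only (kJ/mol and degrees), not values from the study.
angles = np.arange(120, 181, 10)
energies = np.round(cosine_harmonic_angle(angles, k=25.0, theta0_deg=180.0), 2)
print(dict(zip(angles.tolist(), energies.tolist())))
```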

  20. Probabilistic population projections with migration uncertainty

    PubMed Central

    Azose, Jonathan J.; Ševčíková, Hana; Raftery, Adrian E.

    2016-01-01

    We produce probabilistic projections of population for all countries based on probabilistic projections of fertility, mortality, and migration. We compare our projections to those from the United Nations’ Probabilistic Population Projections, which uses similar methods for fertility and mortality but deterministic migration projections. We find that uncertainty in migration projection is a substantial contributor to uncertainty in population projections for many countries. Prediction intervals for the populations of Northern America and Europe are over 70% wider, whereas prediction intervals for the populations of Africa, Asia, and the world as a whole are nearly unchanged. Out-of-sample validation shows that the model is reasonably well calibrated. PMID:27217571
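
    Conceptually, adding migration uncertainty means sampling a net migration rate in every projection trajectory rather than fixing it. The drastically simplified Monte Carlo sketch below (no age structure, invented rates) illustrates how such sampling widens the prediction interval; it is not the authors' Bayesian hierarchical model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_traj, horizon = 1000, 35           # trajectories, projection years
pop0 = 10.0                          # initial population (millions), illustrative

# Invented rate assumptions: natural growth (births minus deaths) plus a
# stochastic net migration rate drawn independently each year and trajectory.
natural_growth = rng.normal(0.005, 0.002, size=(n_traj, horizon))
net_migration = rng.normal(0.002, 0.004, size=(n_traj, horizon))

pop = pop0 * np.cumprod(1.0 + natural_growth + net_migration, axis=1)

final = pop[:, -1]
lo, med, hi = np.percentile(final, [5, 50, 95])
print(f"population after {horizon} y: median {med:.1f}M, 90% PI [{lo:.1f}, {hi:.1f}]M")
```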

  1. Memory efficient solution of the primitive equations for numerical weather prediction on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Tuccillo, J. J.

    1984-01-01

    Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, which is fully vectorized and requires substantially less memory than other techniques such as the Leapfrog or Adams-Bashforth schemes, is discussed. The technique presented uses the Euler-backward time marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms that use only hardware vector instructions.
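
    To make the storage trade-off concrete, the sketch below (not from the original paper) contrasts an Euler-backward (Matsuno) predictor-corrector step, which needs only the current state in memory, with a leapfrog step, which must retain two time levels. A linear oscillation stands in for the model tendencies.

```python
import numpy as np

def euler_backward_step(u, rhs, dt):
    """One Euler-backward (Matsuno) step: a forward-Euler predictor, then a
    corrector that re-evaluates the tendency at the predicted state.
    Only the current state array needs to be kept in memory."""
    u_star = u + dt * rhs(u)        # predictor
    return u + dt * rhs(u_star)     # corrector

def leapfrog_step(u_prev, u_curr, rhs, dt):
    """Leapfrog needs both u(t) and u(t - dt), doubling the state storage.
    Shown here only for the storage comparison."""
    return u_prev + 2.0 * dt * rhs(u_curr)

# Stand-in for the model tendencies: a linear oscillation du/dt = i*omega*u.
omega = 1.0
rhs = lambda u: 1j * omega * u

u = np.ones(4, dtype=complex)
for _ in range(100):
    u = euler_backward_step(u, rhs, dt=0.01)
print(f"amplitude after 100 Matsuno steps: {abs(u[0]):.4f}")
```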

  2. Improving a prediction system for oil spills in the Yellow Sea: effect of tides on subtidal flow.

    PubMed

    Kim, Chang-Sin; Cho, Yang-Ki; Choi, Byoung-Ju; Jung, Kyung Tae; You, Sung Hyup

    2013-03-15

    A multi-nested prediction system for the Yellow Sea using drifter trajectory simulations was developed to predict the movements of an oil spill after the MV Hebei Spirit accident. The speeds of the oil spill trajectories predicted by the model without tidal forcing were substantially faster than the observations; however, predictions taking into account the tides, including both tidal cycle and subtidal periods, were satisfactorily improved. Subtidal flow in the simulation without tides was stronger than in that with tides because of reduced frictional effects. Friction induced by tidal stress decelerated the southward subtidal flows driven by northwesterly winter winds along the Korean coast of the Yellow Sea. These results strongly suggest that in order to produce accurate predictions of oil spill trajectories, simulations must include tidal effects, such as variations within a tidal cycle and advections over longer time scales in tide-dominated areas. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Development of a flood-induced health risk prediction model for Africa

    NASA Astrophysics Data System (ADS)

    Lee, D.; Block, P. J.

    2017-12-01

    Globally, many floods occur in developing or tropical regions where the impact on public health is substantial, including death and injury, drinking water, endemic disease, and so on. Although these flood impacts on public health have been investigated, integrated management of floods and flood-induced health risks is technically and institutionally limited. Specifically, while the use of climatic and hydrologic forecasts for disaster management has been highlighted, analogous predictions for forecasting the magnitude and impact of health risks are lacking, as is the infrastructure for health early warning systems, particularly in developing countries. In this study, we develop a flood-induced health risk prediction model for African regions using season-ahead flood predictions with climate drivers and a variety of physical and socio-economic information, such as local hazard, exposure, resilience, and health vulnerability indicators. Skillful prediction of floods and flood-induced health risks can contribute to practical pre- and post-disaster responses at both local and global scales, and may eventually be integrated into multi-hazard early warning systems for informed advanced planning and management. This is especially attractive for areas with limited observations and/or little capacity to develop flood-induced health risk warning systems.

  4. Assessing Principal Component Regression Prediction of Neurochemicals Detected with Fast-Scan Cyclic Voltammetry

    PubMed Central

    2011-01-01

    Principal component regression is a multivariate data analysis approach routinely used to predict neurochemical concentrations from in vivo fast-scan cyclic voltammetry measurements. This mathematical procedure can rapidly be employed with present day computer programming languages. Here, we evaluate several methods that can be used to evaluate and improve multivariate concentration determination. The cyclic voltammetric representation of the calculated regression vector is shown to be a valuable tool in determining whether the calculated multivariate model is chemically appropriate. The use of Cook’s distance successfully identified outliers contained within in vivo fast-scan cyclic voltammetry training sets. This work also presents the first direct interpretation of a residual color plot and demonstrates the effect of peak shifts on predicted dopamine concentrations. Finally, separate analyses of smaller increments of a single continuous measurement could not be concatenated without substantial error in the predicted neurochemical concentrations due to electrode drift. Taken together, these tools allow for the construction of more robust multivariate calibration models and provide the first approach to assess the predictive ability of a procedure that is inherently impossible to validate because of the lack of in vivo standards. PMID:21966586

  5. Cosmological implications of a large complete quasar sample.

    PubMed

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedmann-Lemaître cosmology with parameters q0 = 0, Λ = 0 denoted as C1 and chronometric cosmology (no relevant adjustable parameters) denoted as C2 are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  6. Assessing principal component regression prediction of neurochemicals detected with fast-scan cyclic voltammetry.

    PubMed

    Keithley, Richard B; Wightman, R Mark

    2011-06-07

    Principal component regression is a multivariate data analysis approach routinely used to predict neurochemical concentrations from in vivo fast-scan cyclic voltammetry measurements. This mathematical procedure can rapidly be employed with present day computer programming languages. Here, we evaluate several methods that can be used to evaluate and improve multivariate concentration determination. The cyclic voltammetric representation of the calculated regression vector is shown to be a valuable tool in determining whether the calculated multivariate model is chemically appropriate. The use of Cook's distance successfully identified outliers contained within in vivo fast-scan cyclic voltammetry training sets. This work also presents the first direct interpretation of a residual color plot and demonstrates the effect of peak shifts on predicted dopamine concentrations. Finally, separate analyses of smaller increments of a single continuous measurement could not be concatenated without substantial error in the predicted neurochemical concentrations due to electrode drift. Taken together, these tools allow for the construction of more robust multivariate calibration models and provide the first approach to assess the predictive ability of a procedure that is inherently impossible to validate because of the lack of in vivo standards.
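
    For readers unfamiliar with the technique, the following sketch shows principal component regression applied to synthetic stand-ins for background-subtracted voltammograms. The training data, component count, and concentrations are invented for illustration and do not reproduce the authors' calibration procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a training set: each row is a background-subtracted
# cyclic voltammogram, y is the known dopamine concentration (nM).
rng = np.random.default_rng(1)
n_train, n_points = 30, 1000
template = np.sin(np.linspace(0, np.pi, n_points))          # placeholder CV shape
y_train = rng.uniform(50, 1000, n_train)
X_train = np.outer(y_train, template) + rng.normal(0, 5, (n_train, n_points))

# Principal component regression: project onto a few PCs, then regress.
pcr = make_pipeline(PCA(n_components=3), LinearRegression())
pcr.fit(X_train, y_train)

# Predict the concentration for a new measurement.
x_new = 400 * template + rng.normal(0, 5, n_points)
print(f"predicted concentration: {pcr.predict(x_new[None, :])[0]:.0f} nM")
```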

  7. How self-organization can guide evolution.

    PubMed

    Glancy, Jonathan; Stone, James V; Wilson, Stuart P

    2016-11-01

    Self-organization and natural selection are fundamental forces that shape the natural world. Substantial progress in understanding how these forces interact has been made through the study of abstract models. Further progress may be made by identifying a model system in which the interaction between self-organization and selection can be investigated empirically. To this end, we investigate how the self-organizing thermoregulatory huddling behaviours displayed by many species of mammals might influence natural selection of the genetic components of metabolism. By applying a simple evolutionary algorithm to a well-established model of the interactions between environmental, morphological, physiological and behavioural components of thermoregulation, we arrive at a clear, but counterintuitive, prediction: rodents that are able to huddle together in cold environments should evolve a lower thermal conductance at a faster rate than animals reared in isolation. The model therefore explains how evolution can be accelerated as a consequence of relaxed selection, and it predicts how the effect may be exaggerated by an increase in the litter size, i.e. by an increase in the capacity to use huddling behaviours for thermoregulation. Confirmation of these predictions in future experiments with rodents would constitute strong evidence of a mechanism by which self-organization can guide natural selection.

  8. Structural vascular disease in Africans: Performance of ethnic-specific waist circumference cut points using logistic regression and neural network analyses: The SABPA study.

    PubMed

    Botha, J; de Ridder, J H; Potgieter, J C; Steyn, H S; Malan, L

    2013-10-01

    A recently proposed model for waist circumference cut points (RPWC), driven by increased blood pressure, was demonstrated in an African population. We therefore aimed to validate the RPWC by comparing the RPWC and the Joint Statement Consensus (JSC) models via Logistic Regression (LR) and Neural Networks (NN) analyses. Urban African gender groups (N=171) were stratified according to the JSC and RPWC cut point models. Ultrasound carotid intima media thickness (CIMT), blood pressure (BP) and fasting bloods (glucose, high density lipoprotein (HDL) and triglycerides) were obtained in a well-controlled setting. The RPWC male model (LR ROC AUC: 0.71, NN ROC AUC: 0.71) was practically equal to the JSC model (LR ROC AUC: 0.71, NN ROC AUC: 0.69) to predict structural vascular disease. Similarly, the female RPWC model (LR ROC AUC: 0.84, NN ROC AUC: 0.82) and JSC model (LR ROC AUC: 0.82, NN ROC AUC: 0.81) equally predicted CIMT as surrogate marker for structural vascular disease. Odds ratios supported validity where prediction of CIMT revealed clinical significance, well over 1, for both the JSC and RPWC models in African males and females (OR 3.75-13.98). In conclusion, the proposed RPWC model was substantially validated utilizing linear and non-linear analyses. We therefore propose ethnic-specific WC cut points (African males, ≥90 cm; females, ≥98 cm) to predict a surrogate marker for structural vascular disease. © J. A. Barth Verlag in Georg Thieme Verlag KG Stuttgart · New York.
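
    The sketch below illustrates the general form of such a comparison: fitting a logistic regression and a small neural network to a binary vascular-disease marker and comparing ROC AUC on held-out data. All data, cut points, and network settings are synthetic placeholders, not the SABPA data or models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in: waist circumference (cm) and a binary marker of
# structural vascular disease (e.g. CIMT above a cut point).
rng = np.random.default_rng(2)
wc = rng.normal(95, 12, 500)
p_disease = 1 / (1 + np.exp(-(wc - 95) / 6))
y = rng.binomial(1, p_disease)
X = wc.reshape(-1, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression().fit(X_tr, y_tr)
nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

for name, model in [("LR", lr), ("NN", nn)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name} ROC AUC: {auc:.2f}")
```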

  9. Age structure is critical to the population dynamics and survival of honeybee colonies.

    PubMed

    Betti, M I; Wahl, L M; Zamir, M

    2016-11-01

    Age structure is an important feature of the division of labour within honeybee colonies, but its effects on colony dynamics have rarely been explored. We present a model of a honeybee colony that incorporates this key feature, and use this model to explore the effects of both winter and disease on the fate of the colony. The model offers a novel explanation for the frequently observed phenomenon of 'spring dwindle', which emerges as a natural consequence of the age-structured dynamics. Furthermore, the results indicate that a model taking age structure into account markedly affects the predicted timing and severity of disease within a bee colony. The timing of the onset of disease with respect to the changing seasons may also have a substantial impact on the fate of a honeybee colony. Finally, simulations predict that an infection may persist in a honeybee colony over several years, with effects that compound over time. Thus, the ultimate collapse of the colony may be the result of events several years past.

  10. The risk of establishment of aquatic invasive species: joining invasibility and propagule pressure.

    PubMed

    Leung, Brian; Mandrak, Nicholas E

    2007-10-22

    Invasive species are increasingly becoming a policy priority. This has spurred researchers and managers to try to estimate the risk of invasion. Conceptually, invasions are dependent both on the receiving environment (invasibility) and on the ability to reach these new areas (propagule pressure). However, analyses of risk typically examine only one or the other. Here, we develop and apply a joint model of invasion risk that simultaneously incorporates invasibility and propagule pressure. We present arguments that the behaviour of these two elements of risk differs substantially (propagule pressure is a function of time, whereas invasibility is not) and that they therefore have different management implications. Further, we use the well-studied zebra mussel (Dreissena polymorpha) to contrast predictions made using the joint model to those made by separate invasibility and propagule pressure models. We show that predictions of invasion progress as well as of the long-term invasion pattern are strongly affected by using a joint model.
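
    A minimal sketch of the idea, with an assumed saturating form for the propagule-pressure term and a unitless invasibility score. The functional forms and parameter values are illustrative and are not the fitted zebra mussel model.

```python
import numpy as np

def joint_establishment_prob(propagules, invasibility, alpha=0.002):
    """Illustrative joint risk: establishment requires both arrival
    (a saturating function of propagule pressure) and a suitable
    receiving environment (invasibility in [0, 1])."""
    p_arrival = 1.0 - np.exp(-alpha * propagules)   # propagule-pressure term
    return invasibility * p_arrival

# Propagule pressure accumulates over time; invasibility does not.
years = np.arange(1, 21)
propagules_per_year = 150
cumulative_propagules = propagules_per_year * years
for v in (0.2, 0.8):   # a low- vs a high-invasibility water body
    risk = joint_establishment_prob(cumulative_propagules, v)
    print(f"invasibility={v}: 20-year establishment risk = {risk[-1]:.2f}")
```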

  11. A test of the facultative calibration/reactive heritability model of extraversion

    PubMed Central

    Haysom, Hannah J.; Mitchem, Dorian G.; Lee, Anthony J.; Wright, Margaret J.; Martin, Nicholas G.; Keller, Matthew C.; Zietsch, Brendan P.

    2015-01-01

    A model proposed by Lukaszewski and Roney (2011) suggests that each individual’s level of extraversion is calibrated to other traits that predict the success of an extraverted behavioural strategy. Under ‘facultative calibration’, extraversion is not directly heritable, but rather exhibits heritability through its calibration to directly heritable traits (“reactive heritability”). The current study uses biometrical modelling of 1659 identical and non-identical twins and their siblings to assess whether the genetic variation in extraversion is calibrated to variation in facial attractiveness, intelligence, height in men and body mass index (BMI) in women. Extraversion was significantly positively correlated with facial attractiveness in both males (r=.11) and females (r=.18), but correlations between extraversion and the other variables were not consistent with predictions. Further, twin modelling revealed that the genetic variation in facial attractiveness did not account for a substantial proportion of the variation in extraversion in either males (2.4%) or females (0.5%). PMID:26880866

  12. Effects of macromolecular crowding on biochemical reaction equilibria: a molecular thermodynamic perspective.

    PubMed

    Hu, Zhongqiao; Jiang, Jianwen; Rajagopalan, Raj

    2007-09-01

    A molecular thermodynamic model is developed to investigate the effects of macromolecular crowding on biochemical reactions. Three types of reactions, representing protein folding/conformational isomerization, coagulation/coalescence, and polymerization/association, are considered. The reactants, products, and crowders are modeled as coarse-grained spherical particles or as polymer chains, interacting through hard-sphere interactions with or without nonbonded square-well interactions, and the effects of crowder size and chain length as well as product size are examined. The results predicted by this model are consistent with experimentally observed crowding effects based on preferential binding or preferential exclusion of the crowders. Although simple hard-core excluded-volume arguments do in general predict the qualitative aspects of the crowding effects, the results show that other intermolecular interactions can substantially alter the extent of enhancement or reduction of the equilibrium and can even change the direction of the shift. An advantage of the approach presented here is that competing reactions can be incorporated within the model.

  13. Modeling causes of death: an integrated approach using CODEm

    PubMed Central

    2012-01-01

    Background Data on causes of death by age and sex are a critical input into health decision-making. Priority setting in public health should be informed not only by the current magnitude of health problems but by trends in them. However, cause of death data are often not available or are subject to substantial problems of comparability. We propose five general principles for cause of death model development, validation, and reporting. Methods We detail a specific implementation of these principles that is embodied in an analytical tool - the Cause of Death Ensemble model (CODEm) - which explores a large variety of possible models to estimate trends in causes of death. Possible models are identified using a covariate selection algorithm that yields many plausible combinations of covariates, which are then run through four model classes. The model classes include mixed effects linear models and spatial-temporal Gaussian Process Regression models for cause fractions and death rates. All models for each cause of death are then assessed using out-of-sample predictive validity and combined into an ensemble with optimal out-of-sample predictive performance. Results Ensemble models for cause of death estimation outperform any single component model in tests of root mean square error, frequency of predicting correct temporal trends, and achieving 95% coverage of the prediction interval. We present detailed results for CODEm applied to maternal mortality and summary results for several other causes of death, including cardiovascular disease and several cancers. Conclusions CODEm produces better estimates of cause of death trends than previous methods and is less susceptible to bias in model specification. We demonstrate the utility of CODEm for the estimation of several major causes of death. PMID:22226226
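
    As a schematic of the ensemble idea (not the CODEm algorithm itself), the sketch below fits several candidate regressors, weights them by out-of-sample error on a holdout set, and combines their predictions. The covariates and the weighting rule are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))            # stand-in covariates (e.g. income, education)
y = X @ np.array([0.4, -0.2, 0.1, 0.0, 0.3]) + rng.normal(0, 0.3, 300)

X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.3, random_state=0)

models = [LinearRegression(), Ridge(alpha=1.0),
          RandomForestRegressor(n_estimators=200, random_state=0)]
preds, weights = [], []
for m in models:
    m.fit(X_tr, y_tr)
    p = m.predict(X_ho)
    rmse = mean_squared_error(y_ho, p) ** 0.5
    preds.append(p)
    weights.append(1.0 / rmse)            # better holdout performance -> larger weight

weights = np.array(weights) / np.sum(weights)
ensemble = np.average(np.column_stack(preds), axis=1, weights=weights)
print("ensemble holdout RMSE:",
      round(mean_squared_error(y_ho, ensemble) ** 0.5, 3))
```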

  14. A multi-scale comparison of modeled and observed seasonal methane emissions in northern wetlands

    DOE PAGES

    Xu, Xiyan; Riley, William J.; Koven, Charles D.; ...

    2016-09-13

    Wetlands are the largest global natural methane (CH4) source, and emissions between 50 and 70° N latitude contribute 10-30% to this source. Predictive capability of land models for northern wetland CH4 emissions is still low due to limited site measurements, strong spatial and temporal variability in emissions, and complex hydrological and biogeochemical dynamics. To explore this issue, we compare wetland CH4 emission predictions from the Community Land Model 4.5 (CLM4.5-BGC) with site- to regional-scale observations. A comparison of the CH4 fluxes with eddy flux data highlighted needed changes to the model's estimate of aerenchyma area, which we implemented and tested. The model modification substantially reduced biases in CH4 emissions when compared with CarbonTracker CH4 predictions. CLM4.5 CH4 emission predictions agree well with growing season (May–September) CarbonTracker Alaskan regional-level CH4 predictions and site-level observations. However, CLM4.5 underestimated CH4 emissions in the cold season (October–April). The monthly atmospheric CH4 mole fraction enhancements due to wetland emissions are also assessed using the Weather Research and Forecasting-Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) model coupled with daily emissions from CLM4.5 and compared with aircraft CH4 mole fraction measurements from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) campaign. Both the tower and aircraft analyses confirm the underestimate of cold-season CH4 emissions by CLM4.5. The greatest uncertainties in predicting the seasonal CH4 cycle are from the wetland extent, cold-season CH4 production and CH4 transport processes. We recommend more cold-season experimental studies in high-latitude systems, which could improve the understanding and parameterization of ecosystem structure and function during this period. Predicted CH4 emissions remain uncertain, but we show here that benchmarking against observations across spatial scales can inform model structural and parameter improvements.

  15. A multi-scale comparison of modeled and observed seasonal methane emissions in northern wetlands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xiyan; Riley, William J.; Koven, Charles D.

    Wetlands are the largest global natural methane (CH4) source, and emissions between 50 and 70° N latitude contribute 10-30% to this source. Predictive capability of land models for northern wetland CH4 emissions is still low due to limited site measurements, strong spatial and temporal variability in emissions, and complex hydrological and biogeochemical dynamics. To explore this issue, we compare wetland CH4 emission predictions from the Community Land Model 4.5 (CLM4.5-BGC) with site- to regional-scale observations. A comparison of the CH4 fluxes with eddy flux data highlighted needed changes to the model's estimate of aerenchyma area, which we implemented and tested. The model modification substantially reduced biases in CH4 emissions when compared with CarbonTracker CH4 predictions. CLM4.5 CH4 emission predictions agree well with growing season (May–September) CarbonTracker Alaskan regional-level CH4 predictions and site-level observations. However, CLM4.5 underestimated CH4 emissions in the cold season (October–April). The monthly atmospheric CH4 mole fraction enhancements due to wetland emissions are also assessed using the Weather Research and Forecasting-Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) model coupled with daily emissions from CLM4.5 and compared with aircraft CH4 mole fraction measurements from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) campaign. Both the tower and aircraft analyses confirm the underestimate of cold-season CH4 emissions by CLM4.5. The greatest uncertainties in predicting the seasonal CH4 cycle are from the wetland extent, cold-season CH4 production and CH4 transport processes. We recommend more cold-season experimental studies in high-latitude systems, which could improve the understanding and parameterization of ecosystem structure and function during this period. Predicted CH4 emissions remain uncertain, but we show here that benchmarking against observations across spatial scales can inform model structural and parameter improvements.

  16. Parallel constraint satisfaction in memory-based decisions.

    PubMed

    Glöckner, Andreas; Hodges, Sara D

    2011-01-01

    Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.

  17. Alternative research funding to improve clinical outcomes: model of prediction and prevention of sudden cardiac death.

    PubMed

    Myerburg, Robert J; Ullmann, Steven G

    2015-04-01

    Although identification and management of cardiovascular risk markers have provided important population risk insights and public health benefits, individual risk prediction remains challenging. Using sudden cardiac death risk as a base case, the complex epidemiology of sudden cardiac death risk and the substantial new funding required to study individual risk are explored. Complex epidemiology derives from the multiple subgroups having different denominators and risk profiles, while funding limitations emerge from saturation of conventional sources of research funding without foreseeable opportunities for increases. A resolution to this problem would have to emerge from new sources of funding targeted to individual risk prediction. In this analysis, we explore the possibility of a research funding strategy that would offer business incentives to the insurance industries, while providing support for unresolved research goals. The model is developed for the case of sudden cardiac death risk, but the concept is applicable to other areas of the medical enterprise. © 2015 American Heart Association, Inc.

  18. Predicting fiber refractive index from a measured preform index profile

    NASA Astrophysics Data System (ADS)

    Kiiveri, P.; Koponen, J.; Harra, J.; Novotny, S.; Husu, H.; Ihalainen, H.; Kokki, T.; Aallos, V.; Kimmelma, O.; Paul, J.

    2018-02-01

    When producing fiber lasers and amplifiers, silica glass compositions consisting of three to six different materials are needed. Due to the varying needs of different applications, a substantial number of different glass compositions are used in active fiber structures. Often it is not possible to find material parameters for theoretical models to estimate the thermal and mechanical properties of those glass compositions. This makes it challenging to accurately predict fiber core refractive index values, even if the preform index profile is measured. Usually the desired fiber refractive index value is achieved experimentally, which is expensive. To overcome this problem, we statistically analyzed the changes between the measured preform and fiber index values. We searched for correlations that would help to predict the Δn-value change from preform to fiber in a situation where the values of the glass material parameters that define the change are not known. Our index change models were built using data collected from preforms and fibers made by the Direct Nanoparticle Deposition (DND) technology.

  19. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons.

    PubMed

    Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J

    2016-11-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.

  20. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons

    PubMed Central

    Willmore, Ben D. B.; Cui, Zhanfeng; Schnupp, Jan W. H.; King, Andrew J.

    2016-01-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context. PMID:27835647

  1. EGRET upper limits to the high-energy gamma-ray emission from the millisecond pulsars in nearby globular clusters

    NASA Technical Reports Server (NTRS)

    Michelson, P. F.; Bertsch, D. L.; Brazier, K.; Chiang, J.; Dingus, B. L.; Fichtel, C. E.; Fierro, J.; Hartman, R. C.; Hunter, S. D.; Kanbach, G.

    1994-01-01

    We report upper limits to the high-energy gamma-ray emission from the millisecond pulsars (MSPs) in a number of globular clusters. The observations were done as part of an all-sky survey by the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma Ray Observatory (CGRO) during Phase I of the CGRO mission (1991 June to 1992 November). Several theoretical models suggest that MSPs may be sources of high-energy gamma radiation emitted either as primary radiation from the pulsar magnetosphere or as secondary radiation generated by conversion into photons of a substantial part of the relativistic e± pair wind expected to flow from the pulsar. To date, no high-energy emission has been detected from an individual MSP. However, a large number of MSPs are expected in globular cluster cores where the formation rate of accreting binary systems is high. Model predictions of the total number of pulsars range in the hundreds for some clusters. These expectations have been reinforced by recent discoveries of a substantial number of radio MSPs in several clusters; for example, 11 have been found in 47 Tucanae (Manchester et al.). The EGRET observations have been used to obtain upper limits for the efficiency η of conversion of MSP spin-down power into hard gamma rays. The upper limits are also compared with the gamma-ray fluxes predicted from theoretical models of pulsar wind emission (Tavani). The EGRET limits put significant constraints on either the emission models or the number of pulsars in the globular clusters.

  2. Classical Mathematical Models for Description and Prediction of Experimental Tumor Growth

    PubMed Central

    Benzekry, Sébastien; Lamont, Clare; Beheshti, Afshin; Tracz, Amanda; Ebos, John M. L.; Hlatky, Lynn; Hahnfeldt, Philip

    2014-01-01

    Despite internal complexity, tumor growth kinetics follow relatively simple laws that can be expressed as mathematical models. To explore this further, quantitative analysis of the most classical of these was performed. The models were assessed against data from two in vivo experimental systems: an ectopic syngeneic tumor (Lewis lung carcinoma) and an orthotopically xenografted human breast carcinoma. The goals were threefold: 1) to determine a statistical model for description of the measurement error, 2) to establish the descriptive power of each model, using several goodness-of-fit metrics and a study of parametric identifiability, and 3) to assess the models' ability to forecast future tumor growth. The models included in the study comprised the exponential, exponential-linear, power law, Gompertz, logistic, generalized logistic, von Bertalanffy and a model with dynamic carrying capacity. For the breast data, the dynamics were best captured by the Gompertz and exponential-linear models. The latter also exhibited the highest predictive power, with excellent prediction scores (≥80%) extending out as far as 12 days in the future. For the lung data, the Gompertz and power law models provided the most parsimonious and parametrically identifiable description. However, not one of the models was able to achieve a substantial prediction rate (≥70%) beyond the next day data point. In this context, adjunction of a priori information on the parameter distribution led to considerable improvement. For instance, forecast success rates went from 14.9% to 62.7% when using the power law model to predict the full future tumor growth curves, using just three data points. These results not only have important implications for biological theories of tumor growth and the use of mathematical modeling in preclinical anti-cancer drug investigations, but also may assist in defining how mathematical models could serve as potential prognostic tools in the clinic. PMID:25167199

  3. Classical mathematical models for description and prediction of experimental tumor growth.

    PubMed

    Benzekry, Sébastien; Lamont, Clare; Beheshti, Afshin; Tracz, Amanda; Ebos, John M L; Hlatky, Lynn; Hahnfeldt, Philip

    2014-08-01

    Despite internal complexity, tumor growth kinetics follow relatively simple laws that can be expressed as mathematical models. To explore this further, quantitative analysis of the most classical of these was performed. The models were assessed against data from two in vivo experimental systems: an ectopic syngeneic tumor (Lewis lung carcinoma) and an orthotopically xenografted human breast carcinoma. The goals were threefold: 1) to determine a statistical model for description of the measurement error, 2) to establish the descriptive power of each model, using several goodness-of-fit metrics and a study of parametric identifiability, and 3) to assess the models' ability to forecast future tumor growth. The models included in the study comprised the exponential, exponential-linear, power law, Gompertz, logistic, generalized logistic, von Bertalanffy and a model with dynamic carrying capacity. For the breast data, the dynamics were best captured by the Gompertz and exponential-linear models. The latter also exhibited the highest predictive power, with excellent prediction scores (≥80%) extending out as far as 12 days in the future. For the lung data, the Gompertz and power law models provided the most parsimonious and parametrically identifiable description. However, not one of the models was able to achieve a substantial prediction rate (≥70%) beyond the next day data point. In this context, adjunction of a priori information on the parameter distribution led to considerable improvement. For instance, forecast success rates went from 14.9% to 62.7% when using the power law model to predict the full future tumor growth curves, using just three data points. These results not only have important implications for biological theories of tumor growth and the use of mathematical modeling in preclinical anti-cancer drug investigations, but also may assist in defining how mathematical models could serve as potential prognostic tools in the clinic.
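
    As a worked illustration of one of the models compared, the sketch below fits a Gompertz curve to hypothetical tumor-volume measurements and uses it to forecast the next time point. The data, initial guesses, and parameter bounds are invented and do not come from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, v0, alpha, beta):
    """Gompertz growth law: V(t) = V0 * exp((alpha/beta) * (1 - exp(-beta*t)))."""
    return v0 * np.exp(alpha / beta * (1.0 - np.exp(-beta * t)))

# Hypothetical caliper measurements (day, tumor volume in mm^3), for illustration only.
t_obs = np.array([5.0, 8.0, 11.0, 14.0, 17.0, 20.0])
v_obs = np.array([40.0, 110.0, 260.0, 500.0, 800.0, 1100.0])

params, _ = curve_fit(gompertz, t_obs, v_obs, p0=[20.0, 0.5, 0.1],
                      bounds=([1.0, 0.01, 0.01], [200.0, 3.0, 1.0]))
v0, alpha, beta = params
print(f"fitted V0={v0:.0f} mm^3, alpha={alpha:.2f}/day, beta={beta:.2f}/day")
print(f"forecast volume at day 23: {gompertz(23.0, *params):.0f} mm^3")
```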

  4. Scaling local species-habitat relations to the larger landscape with a hierarchical spatial count model

    USGS Publications Warehouse

    Thogmartin, W.E.; Knutson, M.G.

    2007-01-01

    Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which were counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.

  5. Dynamic evaluation of CMAQ part I: Separating the effects of ...

    EPA Pesticide Factsheets

    A dynamic evaluation of the Community Multiscale Air Quality (CMAQ) modeling system version 5.0.1 was conducted to evaluate the model's ability to predict changes in ozone levels between 2002 and 2005, a time period characterized by emission reductions associated with the EPA's Nitrogen Oxides State Implementation Plan as well as significant reductions in mobile source emissions. Model results for the summers of 2002 and 2005 were compared to simulations from a previous version of CMAQ to assess the impact of model updates on predicted pollutant response. Changes to the model treatment of emissions, meteorology and chemistry had substantial impacts on the simulated ozone concentrations. While the median bias for high summertime ozone decreased in both years compared to previous simulations, the observed decrease in ozone from 2002 to 2005 in the eastern US continued to be underestimated by the model. Additional "cross" simulations were used to decompose the model-predicted change in ozone into the change due to emissions, the change due to meteorology and any remaining change not explained individually by these two components. The decomposition showed that the emission controls led to a decrease in modeled high summertime ozone close to twice as large as the decrease attributable to changes in meteorology alone. Quantifying the impact of retrospective emission controls by removing the impacts of meteorology during the control period can be a valuable approach.

  6. A Lévy-flight diffusion model to predict transgenic pollen dispersal.

    PubMed

    Vallaeys, Valentin; Tyson, Rebecca C; Lane, W David; Deleersnijder, Eric; Hanert, Emmanuel

    2017-01-01

    The containment of genetically modified (GM) pollen is an issue of significant concern for many countries. For crops that are bee-pollinated, model predictions of outcrossing rates depend on the movement hypothesis used for the pollinators. Previous work studying pollen spread by honeybees, the most important pollinator worldwide, was based on the assumption that honeybee movement can be well approximated by Brownian motion. A number of recent studies, however, suggest that pollinating insects such as bees perform Lévy flights in their search for food. Such flight patterns yield much larger rates of spread, and so the Brownian motion assumption might significantly underestimate the risk associated with GM pollen outcrossing in conventional crops. In this work, we propose a mechanistic model for pollen dispersal in which the bees perform truncated Lévy flights. This assumption leads to a fractional-order diffusion model for pollen that can be tuned to model motion ranging from pure Brownian to pure Lévy. We parametrize our new model by taking the same pollen dispersal dataset used in Brownian motion modelling studies. By numerically solving the model equations, we show that the isolation distances required to keep outcrossing levels below a certain threshold are substantially increased by comparison with the original predictions, suggesting that isolation distances may need to be much larger than originally thought. © 2017 The Author(s).

  7. A Lévy-flight diffusion model to predict transgenic pollen dispersal

    PubMed Central

    Vallaeys, Valentin; Tyson, Rebecca C.; Lane, W. David; Deleersnijder, Eric

    2017-01-01

    The containment of genetically modified (GM) pollen is an issue of significant concern for many countries. For crops that are bee-pollinated, model predictions of outcrossing rates depend on the movement hypothesis used for the pollinators. Previous work studying pollen spread by honeybees, the most important pollinator worldwide, was based on the assumption that honeybee movement can be well approximated by Brownian motion. A number of recent studies, however, suggest that pollinating insects such as bees perform Lévy flights in their search for food. Such flight patterns yield much larger rates of spread, and so the Brownian motion assumption might significantly underestimate the risk associated with GM pollen outcrossing in conventional crops. In this work, we propose a mechanistic model for pollen dispersal in which the bees perform truncated Lévy flights. This assumption leads to a fractional-order diffusion model for pollen that can be tuned to model motion ranging from pure Brownian to pure Lévy. We parametrize our new model by taking the same pollen dispersal dataset used in Brownian motion modelling studies. By numerically solving the model equations, we show that the isolation distances required to keep outcrossing levels below a certain threshold are substantially increased by comparison with the original predictions, suggesting that isolation distances may need to be much larger than originally thought. PMID:28123097
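
    The sketch below illustrates the contrast the authors draw by sampling forager step lengths either from a truncated power law (Lévy-like) or from a Gaussian and comparing tail displacements. The exponent, truncation limits, and step scales are assumptions for illustration, not the fitted pollen-dispersal parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

def truncated_levy_steps(n, mu=2.0, l_min=1.0, l_max=500.0):
    """Draw step lengths from a truncated power law p(l) ~ l**(-mu),
    l_min <= l <= l_max, by inverse-transform sampling."""
    u = rng.uniform(size=n)
    a, b = l_min ** (1 - mu), l_max ** (1 - mu)
    return (a + u * (b - a)) ** (1.0 / (1 - mu))

def simulate_dispersal(n_bees=10000, n_steps=50, levy=True):
    """Net displacement of foragers taking random-direction steps."""
    if levy:
        steps = truncated_levy_steps(n_bees * n_steps).reshape(n_bees, n_steps)
    else:
        steps = np.abs(rng.normal(0, 5.0, (n_bees, n_steps)))   # Brownian-like walk
    angles = rng.uniform(0, 2 * np.pi, (n_bees, n_steps))
    x = np.sum(steps * np.cos(angles), axis=1)
    y = np.sum(steps * np.sin(angles), axis=1)
    return np.hypot(x, y)

for levy in (False, True):
    d = simulate_dispersal(levy=levy)
    label = "Levy    " if levy else "Brownian"
    print(label, "99th percentile displacement (arbitrary units):",
          round(np.percentile(d, 99), 1))
```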

  8. Does substantiated childhood maltreatment lead to poor quality of life in young adulthood? Evidence from an Australian birth cohort study.

    PubMed

    Abajobir, Amanuel Alemu; Kisely, Steve; Williams, Gail; Strathearn, Lane; Clavarino, Alexandra; Najman, Jake Moses

    2017-07-01

    To examine the independent effect of single and multiple forms of substantiated childhood maltreatment (CM) on quality of life (QoL), controlling for selected potential confounders and/or covariates, and concurrent depressive symptoms. We used data from a prospective pre-birth cohort of 8556 mothers recruited consecutively during their first antenatal clinic visit at the Mater Hospital from 1981 to 1983 in Brisbane, Australia. The data were linked to substantiated cases of CM reported to the child protection government agency up to the age of 14 years. The sample consisted of 3730 (49.7% female) young adults for whom there were complete data on QoL at the 21-year follow-up. The mean age of participants was 20.6 years. Logistic regression models were used to assess the association between CM and QoL measured at the 21-year follow-up. There were statistically significant associations between exposure to substantiated CM and poorer QoL. This also applied to the subcategories of childhood physical abuse, childhood emotional abuse (CEA), and neglect. These associations were generally stable after adjusting for confounders/covariates and concurrent depressive symptoms, except physical abuse. CEA with or without neglect significantly and particularly predicted worse subsequent QoL. Exposure to any substantiated maltreatment substantially contributed to worse QoL in young adulthood, with a particular association with CEA and neglect. Prior experiences of CM may have a substantial association with subsequent poorer QoL.

  9. Dynamic modeling of Tampa Bay urban development using parallel computing

    USGS Publications Warehouse

    Xian, G.; Crane, M.; Steinwand, D.

    2005-01-01

    Urban land use and land cover have changed significantly in the environs of Tampa Bay, Florida, over the past 50 years. Extensive urbanization has created substantial change to the region's landscape and ecosystems. This paper uses a dynamic urban-growth model, SLEUTH, which applies six geospatial data themes (slope, land use, exclusion, urban extent, transportation, hillshade), to study the process of urbanization and associated land use and land cover change in the Tampa Bay area. To reduce processing time and complete the modeling process within an acceptable period, the model is recoded and ported to a Beowulf cluster. The parallel-processing computer system accomplishes the massive amount of computation the modeling simulation requires. The SLEUTH calibration process for the Tampa Bay urban growth simulation required only 10 h of CPU time. The model predicts future land use/cover change trends for Tampa Bay from 1992 to 2025. Urban extent is predicted to double in the Tampa Bay watershed between 1992 and 2025. Results show an upward trend of urbanization at the expense of declines of 58% and 80% in agricultural and forested lands, respectively.

  10. Boric acid permeation in forward osmosis membrane processes: modeling, experiments, and implications.

    PubMed

    Jin, Xue; Tang, Chuyang Y; Gu, Yangshuo; She, Qianhong; Qi, Saren

    2011-03-15

    Forward osmosis (FO) is attracting increasing interest for its potential applications in desalination. In FO, permeation of contaminants from the feed solution into the draw solution through the semipermeable membrane can take place simultaneously with water diffusion. Understanding contaminant transport through, and rejection by, the FO membrane has significant technical implications for separating clean water from the diluted draw solution. In this study, a model was developed to predict boron flux in FO operation. A strong agreement between modeling results and experimental data indicates that the model developed in this study can accurately predict boron transport through FO membranes. Furthermore, the model can guide the fabrication of improved FO membranes with decreased boron permeability and structural parameter to minimize boron flux. Both the theoretical model and experimental results demonstrated that when the membrane active layer was facing the draw solution, boron flux was substantially greater compared to the other membrane orientation due to more severe internal concentration polarization. In this investigation, rejection of contaminants in FO processes was defined for the first time. This is critical to compare membrane performance between different membranes and experimental conditions.
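
    A minimal sketch of the kind of flux model involved, assuming the classical exponential internal-concentration-polarization (ICP) correction and neglecting external polarization. All permeabilities, the structural parameter, and the concentrations are illustrative, not the fitted values of the study; the sketch only reproduces the qualitative finding that the active-layer-facing-draw orientation passes more boron.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative membrane and solution parameters (not fitted values from the study).
A = 1.0e-12        # water permeability, m/(s*Pa)
B_boron = 2.0e-7   # boron permeability of the active layer, m/s
S = 5.0e-4         # support-layer structural parameter, m
D = 1.5e-9         # solute diffusivity in the support layer, m^2/s
pi_draw = 25e5     # draw osmotic pressure, Pa
pi_feed = 1e5      # feed osmotic pressure, Pa
K = S / D          # solute resistance to diffusion in the support layer, s/m

def water_flux(al_facing_draw):
    """Solve the implicit ICP-corrected water-flux equation for one orientation."""
    if al_facing_draw:   # feed in the support layer: concentrative ICP
        f = lambda Jw: A * (pi_draw - pi_feed * np.exp(Jw * K)) - Jw
    else:                # draw in the support layer: dilutive ICP
        f = lambda Jw: A * (pi_draw * np.exp(-Jw * K) - pi_feed) - Jw
    return brentq(f, 1e-9, 1e-4)

def boron_flux(Jw, c_feed, c_draw, al_facing_draw):
    """Boron permeation driven by the trans-membrane concentration difference,
    with the feed-side wall concentration amplified by ICP when the feed
    faces the support layer (flux in concentration-unit * m/s)."""
    c_wall = c_feed * np.exp(Jw * K) if al_facing_draw else c_feed
    return B_boron * (c_wall - c_draw)

for orientation, label in [(True, "active layer facing draw"),
                           (False, "active layer facing feed")]:
    Jw = water_flux(orientation)
    Jb = boron_flux(Jw, c_feed=5.0, c_draw=0.0, al_facing_draw=orientation)
    print(f"{label}: Jw = {Jw * 3.6e6:.1f} L/(m2 h), boron flux = {Jb:.2e}")
```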

  11. The multiaxial fatigue response of cylindrical geometry under proportional loading subject to fluctuating tractions

    NASA Astrophysics Data System (ADS)

    Martinez, Rudy D.

    A multiaxial fatigue model is proposed, as it would apply to cylindrical geometry in the form of industrial sized pressure vessels. The main focus of the multiaxial fatigue model will be based on using energy methods with the loading states confined to fluctuating tractions under proportional loading. The proposed fatigue model is an effort to support and enhance existing fatigue life prediction methods for pressure vessel design, beyond the ASME Boiler and Pressure Vessel Code, ASME Section VIII Divisions 2 and 3, which are currently used in industrial engineering practice for pressure vessel design. Both uniaxial and biaxial low alloy pearlitic-ferritic steel cylindrical cyclic test data are utilized to substantiate the proposed fatigue model. Approximate material hardening and softening aspects from applied load cycling states and the Bauschinger effect are accounted for by adjusting strain-control-generated hysteresis loops and the cyclic stress-strain curve. The proposed fatigue energy model and the current ASME fatigue model are then compared with regard to the accuracy of predicting fatigue life cycle consistencies.

  12. Effects of winglet on transonic flutter characteristics of a cantilevered twin-engine-transport wing model

    NASA Technical Reports Server (NTRS)

    Ruhlin, C. L.; Bhatia, K. G.; Nagaraja, K. S.

    1986-01-01

    A transonic model and a low-speed model were flutter tested in the Langley Transonic Dynamics Tunnel at Mach numbers up to 0.90. Transonic flutter boundaries were measured for 10 different model configurations, which included variations in wing fuel, nacelle pylon stiffness, and wingtip configuration. The winglet effects were evaluated by testing the transonic model, having a specific wing fuel and nacelle pylon stiffness, with each of three wingtips: a nominal tip, a winglet, and a nominal tip ballasted to simulate the winglet mass. The addition of the winglet substantially reduced the flutter speed of the wing at transonic Mach numbers. The winglet effect was configuration-dependent and was primarily due to winglet aerodynamics rather than mass. Flutter analyses using modified strip-theory aerodynamics (experimentally weighted) correlated reasonably well with test results. The four transonic flutter mechanisms predicted by analysis were obtained experimentally. The analysis satisfactorily predicted the mass-density-ratio effects on subsonic flutter obtained using the low-speed model. Additional analyses were made to determine the flutter sensitivity to several parameters at transonic speeds.

  13. Nonlinear damping for vibration isolation of microsystems using shear thickening fluid

    NASA Astrophysics Data System (ADS)

    Iyer, S. S.; Vedad-Ghavami, R.; Lee, H.; Liger, M.; Kavehpour, H. P.; Candler, R. N.

    2013-06-01

    This work reports the measurement and analysis of nonlinear damping of micro-scale actuators immersed in shear thickening fluids (STFs). A power-law damping term is added to the linear second-order model to account for the shear-dependent viscosity of the fluid. This nonlinear model is substantiated by measurements of oscillatory motion of a torsional microactuator. At high actuation forces, the vibration velocity amplitude saturates. The model accurately predicts the nonlinear damping characteristics of the STF using a power-law index extracted from independent rheology experiments. This result reveals the potential to use STFs as adaptive, passive dampers for vibration isolation of microelectromechanical systems.
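
    The sketch below integrates a second-order oscillator with a power-law damping term of the form c·|x'|^(n-1)·x', the kind of model described above. The mass, stiffness, damping coefficient, power-law index, and drive level are illustrative values, not measured parameters of the device.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters for a micro-actuator immersed in a shear thickening fluid.
m, k = 1.0e-9, 2.0e-3     # effective mass (kg) and stiffness (N/m)
c, n = 1.0e-4, 1.8        # power-law damping coefficient and index (n > 1: thickening)
F0 = 1.0e-8               # drive force amplitude (N)
w = np.sqrt(k / m)        # drive at the undamped natural frequency (rad/s)

def rhs(t, y):
    """Second-order model with power-law damping:
    m*x'' + c*|x'|**(n-1)*x' + k*x = F0*sin(w*t)."""
    x, v = y
    damping = c * np.abs(v) ** (n - 1) * v
    return [v, (F0 * np.sin(w * t) - damping - k * x) / m]

period = 2 * np.pi / w
sol = solve_ivp(rhs, (0.0, 60 * period), [0.0, 0.0],
                max_step=period / 50, rtol=1e-8, atol=1e-12)

# Velocity amplitude after the transient has decayed (last 20% of the run).
steady = sol.t > 0.8 * sol.t[-1]
v_amp = np.max(np.abs(sol.y[1][steady]))
print(f"steady-state velocity amplitude: {v_amp:.2e} m/s")
```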

  14. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform

    PubMed Central

    Poucke, Sven Van; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; Deyne, Cathy De

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner’s Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286

  15. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    PubMed

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  16. Species distribution models: A comparison of statistical approaches for livestock and disease epidemics.

    PubMed

    Hollings, Tracey; Robinson, Andrew; van Andel, Mary; Jewell, Chris; Burgman, Mark

    2017-01-01

    In livestock industries, reliable up-to-date spatial distribution and abundance records for animals and farms are critical for governments to manage and respond to risks. Yet few, if any, countries can afford to maintain comprehensive, up-to-date agricultural census data. Statistical modelling can be used as a proxy for such data but comparative modelling studies have rarely been undertaken for livestock populations. Widespread species, including livestock, can be difficult to model effectively due to complex spatial distributions that do not respond predictably to environmental gradients. We assessed three machine learning species distribution models (SDM) for their capacity to estimate national-level farm animal population numbers within property boundaries: boosted regression trees (BRT), random forests (RF) and K-nearest neighbour (K-NN). The models were built from a commercial livestock database and environmental and socio-economic predictor data for New Zealand. We used two spatial data stratifications to test (i) support for decision making in an emergency response situation, and (ii) the ability for the models to predict to new geographic regions. The performance of the three model types varied substantially, but the best performing models showed very high accuracy. BRTs had the best performance overall, but RF performed equally well or better in many simulations; RFs were superior at predicting livestock numbers for all but very large commercial farms. K-NN performed poorly relative to both RF and BRT in all simulations. The predictions of both multi species and single species models for farms and within hypothetical quarantine zones were very close to observed data. These models are generally applicable for livestock estimation with broad applications in disease risk modelling, biosecurity, policy and planning.
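
    A minimal sketch (not the study's code) of the three-model comparison described above, scored with a grouped cross-validation so that whole geographic regions are held out, mimicking prediction to new regions; the data file, column names and hyperparameters are hypothetical.

        import pandas as pd
        from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.model_selection import GroupKFold, cross_val_score

        farms = pd.read_csv("farm_covariates.csv")          # hypothetical predictor table
        X = farms.drop(columns=["livestock_count", "region"])
        y = farms["livestock_count"]
        groups = farms["region"]                             # hold out whole regions at a time

        models = {
            "BRT": GradientBoostingRegressor(),
            "RF": RandomForestRegressor(n_estimators=500),
            "K-NN": KNeighborsRegressor(n_neighbors=10),
        }
        cv = GroupKFold(n_splits=5)
        for name, model in models.items():
            scores = cross_val_score(model, X, y, groups=groups, cv=cv,
                                     scoring="neg_mean_absolute_error")
            print(name, -scores.mean())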

  17. Species distribution models: A comparison of statistical approaches for livestock and disease epidemics

    PubMed Central

    Robinson, Andrew; van Andel, Mary; Jewell, Chris; Burgman, Mark

    2017-01-01

    In livestock industries, reliable up-to-date spatial distribution and abundance records for animals and farms are critical for governments to manage and respond to risks. Yet few, if any, countries can afford to maintain comprehensive, up-to-date agricultural census data. Statistical modelling can be used as a proxy for such data but comparative modelling studies have rarely been undertaken for livestock populations. Widespread species, including livestock, can be difficult to model effectively due to complex spatial distributions that do not respond predictably to environmental gradients. We assessed three machine learning species distribution models (SDM) for their capacity to estimate national-level farm animal population numbers within property boundaries: boosted regression trees (BRT), random forests (RF) and K-nearest neighbour (K-NN). The models were built from a commercial livestock database and environmental and socio-economic predictor data for New Zealand. We used two spatial data stratifications to test (i) support for decision making in an emergency response situation, and (ii) the ability for the models to predict to new geographic regions. The performance of the three model types varied substantially, but the best performing models showed very high accuracy. BRTs had the best performance overall, but RF performed equally well or better in many simulations; RFs were superior at predicting livestock numbers for all but very large commercial farms. K-NN performed poorly relative to both RF and BRT in all simulations. The predictions of both multi species and single species models for farms and within hypothetical quarantine zones were very close to observed data. These models are generally applicable for livestock estimation with broad applications in disease risk modelling, biosecurity, policy and planning. PMID:28837685

  18. Increasing horizontal resolution in numerical weather prediction and climate simulations: illusion or panacea?

    PubMed

    Wedi, Nils P

    2014-06-28

    The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium Range Weather Forecasts may be substantially altered with emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular, when convective-scale motions start to be resolved. Blunt increases in the model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to accordingly adjust proven numerical techniques. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty--for each part of the model--and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in the forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in the spectral and physical space. One path is to reduce the formal accuracy by which the spectral transforms are computed. The other pathway explores the importance of the ratio used for the horizontal resolution in gridpoint space versus wavenumbers in spectral space. This is relevant for both high-resolution simulations as well as ensemble-based uncertainty estimation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  19. Integrating non-colocated well and geophysical data to capture subsurface heterogeneity at an aquifer recharge and recovery site

    NASA Astrophysics Data System (ADS)

    Gottschalk, Ian P.; Hermans, Thomas; Knight, Rosemary; Caers, Jef; Cameron, David A.; Regnery, Julia; McCray, John E.

    2017-12-01

    Geophysical data have proven to be very useful for lithological characterization. However, quantitatively integrating the information gained from acquiring geophysical data generally requires colocated lithological and geophysical data for constructing a rock-physics relationship. In this contribution, the issue of integrating non-colocated geophysical and lithological data is addressed, and the results are applied to simulate groundwater flow in a heterogeneous aquifer in the Prairie Waters Project North Campus aquifer recharge site, Colorado. Two methods of constructing a rock-physics transform between electrical resistivity tomography (ERT) data and lithology measurements are assessed. In the first approach, a maximum likelihood estimation (MLE) is used to fit a bimodal lognormal distribution to horizontal cross-sections of the ERT resistivity histogram. In the second approach, a spatial bootstrap is applied to approximate the rock-physics relationship. The rock-physics transforms provide soft data for multiple-point statistics (MPS) simulations. Subsurface models are used to run groundwater flow and tracer test simulations. Each model's uncalibrated, predicted breakthrough time is evaluated based on its agreement with measured subsurface travel time values from infiltration basins to selected groundwater recovery wells. We find that incorporating geophysical information into uncalibrated flow models reduces the difference with observed values, as compared to flow models without geophysical information incorporated. The integration of geophysical data also narrows the variance of predicted tracer breakthrough times substantially. Accuracy is highest and variance is lowest in breakthrough predictions generated by the MLE-based rock-physics transform. Calibrating the ensemble of geophysically constrained models would help produce a suite of realistic flow models for predictive purposes at the site. We find that the success of breakthrough predictions is highly sensitive to the definition of the rock-physics transform; it is therefore important to model this transfer function accurately.
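
    A hedged sketch of the first rock-physics approach described above, assuming that the bimodal lognormal fit by maximum likelihood can be expressed as a two-component Gaussian mixture on log-resistivity; the input file and the use of the posterior probability as MPS soft data are illustrative assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        resistivity = np.loadtxt("ert_slice.txt")            # hypothetical ERT cross-section values
        log_rho = np.log(resistivity).reshape(-1, 1)

        # Bimodal lognormal fit by maximum likelihood: two Gaussian components on log-resistivity
        gmm = GaussianMixture(n_components=2).fit(log_rho)

        # Posterior probability of the higher-resistivity mode, usable as soft data
        # for the multiple-point statistics simulation step
        coarse = int(np.argmax(gmm.means_.ravel()))
        p_coarse = gmm.predict_proba(log_rho)[:, coarse]
        print(p_coarse[:10])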

  20. The impact of climatological model biases in the North Atlantic jet on predicted future circulation change

    NASA Astrophysics Data System (ADS)

    Simpson, I.

    2015-12-01

    A long-standing bias among global climate models (GCMs) is their incorrect representation of the wintertime circulation of the North Atlantic region. Specifically, models tend to exhibit a North Atlantic jet (and associated storm track) that is too zonal, extending across central Europe, when it should tilt northward toward Scandinavia. GCMs consistently predict substantial changes in the large-scale circulation in this region, consisting of a localized anti-cyclonic circulation, centered over the Mediterranean and accompanied by increased aridity there and increased storminess over Northern Europe. Here, we present preliminary results from experiments that are designed to address the question of what the impact of the climatological circulation biases might be on this predicted future response. Climate change experiments will be compared in two versions of the Community Earth System Model: the first is a free-running version of the model, as typically used in climate prediction; the second is a bias-corrected version of the model in which a seasonally varying cycle of bias correction tendencies is applied to the wind and temperature fields. These bias correction tendencies are designed to account for deficiencies in the fast parameterized processes, with an aim to push the model toward a more realistic climatology. While these experiments come with the caveat that they assume the bias correction tendencies will remain constant with time, they allow for an initial assessment, through controlled experiments, of the impact that biases in the climatological circulation can have on future predictions in this region. They will also motivate future work that can make use of the bias correction tendencies to understand the underlying physical processes responsible for the incorrect tilt of the jet.

  1. Predictive Rate-Distortion for Infinite-Order Markov Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2016-06-01

    Predictive rate-distortion analysis suffers from the curse of dimensionality: clustering arbitrarily long pasts to retain information about arbitrarily long futures requires resources that typically grow exponentially with length. The challenge is compounded for infinite-order Markov processes, since conditioning on finite sequences cannot capture all of their past dependencies. Spectral arguments confirm a popular intuition: algorithms that cluster finite-length sequences fail dramatically when the underlying process has long-range temporal correlations and can fail even for processes generated by finite-memory hidden Markov models. We circumvent the curse of dimensionality in rate-distortion analysis of finite- and infinite-order processes by casting predictive rate-distortion objective functions in terms of the forward- and reverse-time causal states of computational mechanics. Examples demonstrate that the resulting algorithms yield substantial improvements.
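
    For orientation, the generic predictive rate-distortion objective referred to above can be written as follows; the notation is generic and not necessarily the paper's.

        % Compress the past \overleftarrow{X} into a representation R that retains
        % information about the future \overrightarrow{X}:
        \[
          R(D) \;=\; \min_{p(r \mid \overleftarrow{x}) \,:\, \mathbb{E}[d] \le D}
                     I\!\left[\overleftarrow{X}; R\right],
          \qquad
          \text{Lagrangian form:}\quad
          \min_{p(r \mid \overleftarrow{x})}
          \; I\!\left[\overleftarrow{X}; R\right] \;-\; \beta\, I\!\left[R; \overrightarrow{X}\right].
        \]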

  2. The Influence of Boundary Layer Parameters on Interior Noise

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Rocha, Joana

    2012-01-01

    Predictions of the wall pressure in the turbulent boundary of an aerospace vehicle can differ substantially from measurement due to phenomena that are not well understood. Characterizing the phenomena will require additional testing at considerable cost. Before expending scarce resources, it is desired to quantify the effect of the uncertainty in wall pressure predictions and measurements on structural response and acoustic radiation. A sensitivity analysis is performed on four parameters of the Corcos cross spectrum model: power spectrum, streamwise and cross stream coherence lengths and Mach number. It is found that at lower frequencies where high power levels and long coherence lengths exist, the radiated sound power prediction has up to 7 dB of uncertainty in power spectrum levels with streamwise and cross stream coherence lengths contributing equally to the total.
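
    The sketch below evaluates a commonly cited form of the Corcos wall-pressure cross-spectrum, with the quantities varied in the sensitivity analysis (point spectrum level, streamwise and cross-stream decay coefficients, and convection speed, which is tied to Mach number); the parameter values are illustrative, not the study's.

        import numpy as np

        def corcos_cross_spectrum(phi_pp, omega, xi_x, xi_y, Uc, alpha_x=0.1, alpha_y=0.7):
            """Cross-spectral density between two wall points separated by (xi_x, xi_y)."""
            decay = np.exp(-alpha_x * np.abs(omega * xi_x / Uc)
                           - alpha_y * np.abs(omega * xi_y / Uc))
            phase = np.exp(-1j * omega * xi_x / Uc)          # streamwise convection
            return phi_pp * decay * phase

        # Example: coherence decay at 1 kHz for a convection speed of 0.7 * 240 m/s
        omega = 2 * np.pi * 1000.0
        print(abs(corcos_cross_spectrum(1.0, omega, xi_x=0.05, xi_y=0.0, Uc=0.7 * 240.0)))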

  3. Challenges in Quantifying Pliocene Terrestrial Warming Revealed by Data-Model Discord

    NASA Technical Reports Server (NTRS)

    Salzmann, Ulrich; Dolan, Aisling M.; Haywood, Alan M.; Chan, Wing-Le; Voss, Jochen; Hill, Daniel J.; Abe-Ouchi, Ayako; Otto-Bliesner, Bette; Bragg, Frances J.; Chandler, Mark A.; hide

    2013-01-01

    Comparing simulations of key warm periods in Earth history with contemporaneous geological proxy data is a useful approach for evaluating the ability of climate models to simulate warm, high-CO2 climates that are unprecedented in the more recent past. Here we use a global data set of confidence-assessed, proxy-based temperature estimates and biome reconstructions to assess the ability of eight models to simulate warm terrestrial climates of the Pliocene epoch. The Late Pliocene, 3.6-2.6 million years ago, is an accessible geological interval to understand climate processes of a warmer world. We show that model-predicted surface air temperatures reveal a substantial cold bias in the Northern Hemisphere. Particularly strong data-model mismatches in mean annual temperatures (up to 18 °C) exist in northern Russia. Our model sensitivity tests identify insufficient temporal constraints hampering the accurate configuration of model boundary conditions as an important factor impacting on data-model discrepancies. We conclude that to allow a more robust evaluation of the ability of present climate models to predict warm climates, future Pliocene data-model comparison studies should focus on orbitally defined time slices.

  4. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    PubMed

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
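
    A toy simulation (not the paper's code) of the mechanism described above, in which exposure predicted by a land-use regression fitted at a small number of sites, with variable selection, is used in place of true exposure in a health model; all sizes and coefficients are illustrative.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.feature_selection import SelectKBest, f_regression

        rng = np.random.default_rng(0)
        n_sites, n_subjects, n_candidates = 40, 5000, 30
        beta_health = 1.0                                    # true health effect

        # True exposure depends on a few geographic covariates plus unexplained variability
        G_sites = rng.normal(size=(n_sites, n_candidates))
        G_subj = rng.normal(size=(n_subjects, n_candidates))
        w = np.zeros(n_candidates); w[:3] = 1.0
        exposure_sites = G_sites @ w + rng.normal(scale=1.0, size=n_sites)
        exposure_subj = G_subj @ w + rng.normal(scale=1.0, size=n_subjects)

        # LUR: variable selection and fitting at the monitoring sites only
        sel = SelectKBest(f_regression, k=5).fit(G_sites, exposure_sites)
        lur = LinearRegression().fit(sel.transform(G_sites), exposure_sites)
        exposure_hat = lur.predict(sel.transform(G_subj))

        # Health model uses the LUR prediction instead of the true exposure
        health = beta_health * exposure_subj + rng.normal(scale=1.0, size=n_subjects)
        beta_hat = LinearRegression().fit(exposure_hat.reshape(-1, 1), health).coef_[0]
        print("true effect 1.0, estimated:", round(beta_hat, 3))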

  5. Revisiting the synoptic-scale predictability of severe European winter storms using ECMWF ensemble reforecasts

    NASA Astrophysics Data System (ADS)

    Pantillon, Florian; Knippertz, Peter; Corsmeier, Ulrich

    2017-10-01

    New insights into the synoptic-scale predictability of 25 severe European winter storms of the 1995-2015 period are obtained using the homogeneous ensemble reforecast dataset from the European Centre for Medium-Range Weather Forecasts. The predictability of the storms is assessed with different metrics including (a) the track and intensity to investigate the storms' dynamics and (b) the Storm Severity Index to estimate the impact of the associated wind gusts. The storms are well predicted by the whole ensemble up to 2-4 days ahead. At longer lead times, the number of members predicting the observed storms decreases and the ensemble average is not clearly defined for the track and intensity. The Extreme Forecast Index and Shift of Tails are therefore computed from the deviation of the ensemble from the model climate. Based on these indices, the model has some skill in forecasting the area covered by extreme wind gusts up to 10 days, which indicates a clear potential for early warnings. However, large variability is found between the individual storms. The poor predictability of outliers appears related to their physical characteristics such as explosive intensification or small size. Longer datasets with more cases would be needed to further substantiate these points.

  6. Intersection crash prediction modeling with macro-level data from various geographic units.

    PubMed

    Lee, Jaeyoung; Abdel-Aty, Mohamed; Cai, Qing

    2017-05-01

    There have been great efforts to develop traffic crash prediction models for various types of facilities. The crash models have played a key role in identifying crash hotspots and evaluating safety countermeasures. Recently, many macro-level crash prediction models have been developed to incorporate highway safety considerations in the long-term transportation planning process. Although the numerous macro-level studies have found that a variety of demographic and socioeconomic zonal characteristics have substantial effects on traffic safety, few studies have attempted to coalesce micro-level with macro-level data from existing geographic units for estimating crash models. In this study, the authors have developed a series of intersection crash models for total, severe, pedestrian, and bicycle crashes with macro-level data for seven spatial units. The study revealed that the total, severe, and bicycle crash models with ZIP-code tabulation area data perform the best, and the pedestrian crash models with census tract-based data outperform the competing models. Furthermore, it was uncovered that intersection crash models can be drastically improved simply by including random effects for macro-level entities. Moreover, the intersection crash models are further enhanced by including other macro-level variables. Lastly, the pedestrian and bicycle crash modeling results imply that several macro-level variables (e.g., population density, proportions of specific age groups, commuters who walk, or commuters using bicycles, etc.) can be a good surrogate exposure for those crashes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Effect of correlated observation error on parameters, predictions, and uncertainty

    USGS Publications Warehouse

    Tiedeman, Claire; Green, Christopher T.

    2013-01-01

    Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
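
    A numerical sketch of the one-parameter, two-observation example discussed above, comparing the sandwich variance of the weighted estimate when the weight matrix includes the error correlation (full generalized least squares) and when the correlation is omitted; the sensitivities, standard deviations and correlation are illustrative values.

        import numpy as np

        x = np.array([[1.0], [2.0]])             # sensitivities of the two observations
        sigma = np.array([1.0, 1.0])             # observation error standard deviations
        rho = 0.8                                # error correlation
        Sigma = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                          [rho * sigma[0] * sigma[1], sigma[1] ** 2]])

        def param_variance(W):
            """Sandwich variance of the weighted least-squares estimate of the parameter."""
            A = np.linalg.inv(x.T @ W @ x)
            return (A @ x.T @ W @ Sigma @ W @ x @ A)[0, 0]

        W_full = np.linalg.inv(Sigma)            # weights include the correlation
        W_diag = np.diag(1.0 / sigma ** 2)       # correlation omitted
        print("variance with correlation:   ", param_variance(W_full))
        print("variance ignoring correlation:", param_variance(W_diag))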

  8. How preschool executive functioning predicts several aspects of math achievement in Grades 1 and 3: A longitudinal study.

    PubMed

    Viterbori, Paola; Usai, M Carmen; Traverso, Laura; De Franchis, Valentina

    2015-12-01

    This longitudinal study analyzes whether selected components of executive function (EF) measured during the preschool period predict several indices of math achievement in primary school. Six EF measures were assessed in a sample of 5-year-old children (N = 175). The math achievement of the same children was then tested in Grades 1 and 3 using both a composite math score and three single indices of written calculation, arithmetical facts, and problem solving. Using previous results obtained from the same sample of children, a confirmatory factor analysis examining the latent EF structure in kindergarten indicated that a two-factor model provided the best fit for the data. In this model, inhibition and working memory (WM)-flexibility were separate dimensions. A full structural equation model was then used to test the hypothesis that math achievement (the composite math score and single math scores) in Grades 1 and 3 could be explained by the two EF components comprising the kindergarten model. The results indicate that the WM-flexibility component measured during the preschool period substantially predicts mathematical achievement, especially in Grade 3. The math composite scores were predicted by the WM-flexibility factor at both grade levels. In Grade 3, both problem solving and arithmetical facts were predicted by the WM-flexibility component. The results empirically support interventions that target EF as an important component of early childhood mathematics education. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Evaluation of the CFSv2 CMIP5 decadal predictions

    NASA Astrophysics Data System (ADS)

    Bombardi, Rodrigo J.; Zhu, Jieshun; Marx, Lawrence; Huang, Bohua; Chen, Hua; Lu, Jian; Krishnamurthy, Lakshmi; Krishnamurthy, V.; Colfescu, Ioana; Kinter, James L.; Kumar, Arun; Hu, Zeng-Zhen; Moorthi, Shrinivas; Tripp, Patrick; Wu, Xingren; Schneider, Edwin K.

    2015-01-01

    Retrospective decadal forecasts were undertaken using the Climate Forecast System version 2 (CFSv2) as part of the Coupled Model Intercomparison Project 5. Decadal forecasts were performed separately by the National Center for Environmental Prediction (NCEP) and by the Center for Ocean-Land-Atmosphere Studies (COLA), with the centers using two different analyses for the ocean initial conditions: the NCEP Climate Forecast System Reanalysis (CFSR) and the NEMOVAR-COMBINE analysis. COLA also examined the sensitivity to the inclusion of forcing by specified volcanic aerosols. Biases in the CFSv2 for both sets of initial conditions include cold midlatitude sea surface temperatures, and rapid melting of sea ice associated with warm polar oceans. Forecasts from the NEMOVAR-COMBINE analysis showed strong weakening of the Atlantic Meridional Overturning Circulation (AMOC), eventually approaching the weaker AMOC associated with CFSR. The decadal forecasts showed high predictive skill over the Indian, the western Pacific, and the Atlantic Oceans and low skill over the central and eastern Pacific. The volcanic forcing shows only small regional differences in predictability of surface temperature at 2m (T2m) in comparison to forecasts without volcanic forcing, especially over the Indian Ocean. An ocean heat content (OHC) budget analysis showed that the OHC has substantial memory, indicating potential for the decadal predictability of T2m; however, the model has a systematic drift in global mean OHC. The results suggest that the reduction of model biases may be the most productive path towards improving the model's decadal forecasts.

  10. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    PubMed

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
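
    A hedged sketch (not the authors' code) of the comparison described above, contrasting a full-variable random forest with a backward-elimination reduced model while keeping the validation folds external to variable selection; the input arrays and the number of retained predictors are hypothetical.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import StratifiedKFold
        from sklearn.metrics import accuracy_score

        def backward_eliminate(X, y, keep=20):
            """Iteratively drop the least important predictor until `keep` remain (slow but simple)."""
            cols = list(range(X.shape[1]))
            while len(cols) > keep:
                rf = RandomForestClassifier(n_estimators=200).fit(X[:, cols], y)
                cols.pop(int(np.argmin(rf.feature_importances_)))
            return cols

        X, y = np.load("streamcat_X.npy"), np.load("condition_y.npy")   # hypothetical arrays
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        full_acc, reduced_acc = [], []
        for train, test in cv.split(X, y):
            full = RandomForestClassifier(n_estimators=500).fit(X[train], y[train])
            full_acc.append(accuracy_score(y[test], full.predict(X[test])))
            cols = backward_eliminate(X[train], y[train])                # selection inside the fold
            red = RandomForestClassifier(n_estimators=500).fit(X[train][:, cols], y[train])
            reduced_acc.append(accuracy_score(y[test], red.predict(X[test][:, cols])))
        print(np.mean(full_acc), np.mean(reduced_acc))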

  11. Tuning stochastic matrix models with hydrologic data to predict the population dynamics of a riverine fish

    USGS Publications Warehouse

    Sakaris, P.C.; Irwin, E.R.

    2010-01-01

    We developed stochastic matrix models to evaluate the effects of hydrologic alteration and variable mortality on the population dynamics of a lotic fish in a regulated river system. Models were applied to a representative lotic fish species, the flathead catfish (Pylodictis olivaris), for which two populations were examined: a native population from a regulated reach of the Coosa River (Alabama, USA) and an introduced population from an unregulated section of the Ocmulgee River (Georgia, USA). Size-classified matrix models were constructed for both populations, and residuals from catch-curve regressions were used as indices of year class strength (i.e., recruitment). A multiple regression model indicated that recruitment of flathead catfish in the Coosa River was positively related to the frequency of spring pulses between 283 and 566 m³/s. For the Ocmulgee River population, multiple regression models indicated that year class strength was negatively related to mean March discharge and positively related to June low flow. When the Coosa population was modeled to experience five consecutive years of favorable hydrologic conditions during a 50-year projection period, it exhibited a substantial spike in size and increased at an overall 0.2% annual rate. When modeled to experience five years of unfavorable hydrologic conditions, the Coosa population initially exhibited a decrease in size but later stabilized and increased at a 0.4% annual rate following the decline. When the Ocmulgee River population was modeled to experience five years of favorable conditions, it exhibited a substantial spike in size and increased at an overall 0.4% annual rate. After the Ocmulgee population experienced five years of unfavorable conditions, a sharp decline in population size was predicted. However, the population quickly recovered, with population size increasing at a 0.3% annual rate following the decline. In general, stochastic population growth in the Ocmulgee River was more erratic and variable than population growth in the Coosa River. We encourage ecologists to develop similar models for other lotic species, particularly in regulated river systems. Successful management of fish populations in regulated systems requires that we are able to predict how hydrology affects recruitment and will ultimately influence the population dynamics of fishes. © 2010 by the Ecological Society of America.
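
    A minimal, illustrative sketch of a size-classified stochastic matrix projection in which the recruitment entry varies from year to year with a hydrology-driven index; the matrix entries and the recruitment distribution are placeholders, not the fitted values from the study.

        import numpy as np

        rng = np.random.default_rng(1)
        survival = np.array([0.5, 0.6, 0.7])     # stage-specific survival (hypothetical)
        n = np.array([100.0, 50.0, 25.0])        # initial abundances in three size classes

        trajectory = [n.sum()]
        for year in range(50):
            recruitment = rng.lognormal(mean=1.0, sigma=0.6)   # year-class strength index
            A = np.array([[0.0,         0.0,         recruitment],
                          [survival[0], 0.0,         0.0],
                          [0.0,         survival[1], survival[2]]])
            n = A @ n
            trajectory.append(n.sum())

        growth_rate = (trajectory[-1] / trajectory[0]) ** (1 / 50) - 1
        print(f"realized annual growth rate: {growth_rate:.3%}")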

  12. Formation of Giant Planets and Brown Dwarves

    NASA Technical Reports Server (NTRS)

    Lissauer, Jack J.

    2003-01-01

    According to the prevailing core instability model, giant planets begin their growth by the accumulation of small solid bodies, as do terrestrial planets. However, unlike terrestrial planets, the growing giant planet cores become massive enough that they are able to accumulate substantial amounts of gas before the protoplanetary disk dissipates. Models predict that rocky planets should form in orbit about most stars. It is uncertain whether or not gas giant planet formation is common, because most protoplanetary disks may dissipate before solid planetary cores can grow large enough to gravitationally trap substantial quantities of gas. Ongoing theoretical modeling of accretion of giant planet atmospheres, as well as observations of protoplanetary disks, will help decide this issue. Observations of extrasolar planets around main sequence stars can only provide a lower limit on giant planet formation frequency. This is because after giant planets form, gravitational interactions with material within the protoplanetary disk may cause them to migrate inwards and be lost to the central star. The core instability model can only produce planets greater than a few jovian masses within protoplanetary disks that are more viscous than most such disks are believed to be. Thus, few brown dwarves (objects massive enough to undergo substantial deuterium fusion, estimated to occur above approximately 13 jovian masses) are likely to be formed in this manner. Most brown dwarves, as well as an unknown number of free-floating objects of planetary mass, are probably formed as are stars, by the collapse of extended gas/dust clouds into more compact objects.

  13. HRP's Healthcare Spin-Offs Through Computational Modeling and Simulation Practice Methodologies

    NASA Technical Reports Server (NTRS)

    Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Peng, Grace; Morrison, Tina; Erdemir, Ahmet; Myers, Jerry

    2014-01-01

    Spaceflight missions expose astronauts to novel operational and environmental conditions that pose health risks that are currently not well understood, and perhaps unanticipated. Furthermore, given the limited number of humans that have flown in long-duration missions and beyond low Earth orbit, the amount of research and clinical data necessary to predict and mitigate these health and performance risks is limited. Consequently, NASA's Human Research Program (HRP) conducts research and develops advanced methods and tools to predict, assess, and mitigate potential hazards to the health of astronauts. In this light, NASA has explored the possibility of leveraging computational modeling since the 1970s as a means to elucidate the physiologic risks of spaceflight and develop countermeasures. Since that time, substantial progress has been realized in this arena through a number of HRP-funded activities such as the Digital Astronaut Project (DAP) and the Integrated Medical Model (IMM). Much of this success can be attributed to HRP's endeavor to establish rigorous verification, validation, and credibility (VV&C) processes that ensure computational models and simulations (M&S) are sufficiently credible to address issues within their intended scope. This presentation summarizes HRP's activities in credibility of modeling and simulation, in particular through its outreach to the community of modeling and simulation practitioners. METHODS: The HRP requires that all M&S that can have a moderate to high impact on crew health or mission success be vetted in accordance with the NASA Standard for Models and Simulations, NASA-STD-7009 (7009) [5]. As this standard mostly focuses on engineering systems, the IMM and DAP have invested substantial efforts to adapt the processes established in this standard for their application to biological M&S, which is more prevalent in human health and performance (HHP) and space biomedical research and operations [6,7]. These methods have also generated substantial interest from the broader medical community through institutions like the National Institutes of Health (NIH) and the Food and Drug Administration (FDA) to develop similar standards and guidelines applicable to the larger medical operations and research community. DISCUSSION: Similar to NASA, many leading government agencies, health institutions and medical product developers around the world are recognizing the potential of computational M&S to support clinical research and decision making. In this light, substantial investments are being made in computational medicine and notable discoveries are being realized [8]. However, there is a lack of broadly applicable practice guidance for the development and implementation of M&S in clinical care and research in a manner that instills confidence among medical practitioners and biological researchers [9,10]. In this presentation, we will give an overview of how HRP is working with the NIH's Interagency Modeling and Analysis Group (IMAG), the FDA and the American Society of Mechanical Engineers (ASME) to leverage NASA's biomedical VV&C processes to establish a new regulatory standard for Verification and Validation in Computational Modeling of Medical Devices, and Guidelines for Credible Practice of Computational Modeling and Simulation in Healthcare.

  14. Interaction of Neuritic Plaques and Education Predicts Dementia

    PubMed Central

    Roe, Catherine M.; Xiong, Chengjie; Miller, J. Phillip; Cairns, Nigel J.; Morris, John C.

    2009-01-01

    In exploring the cognitive reserve hypothesis in persons with substantial Alzheimer disease neuropathology, we aimed to determine the extent to which educational attainment and densities of diffuse plaques, neuritic plaques, and neurofibrillary tangles predict dementia. Participants were 1563 individuals aged 65 years or above who were assessed for dementia within 1 year of death. Generalized linear mixed models were used to examine whether education and density ratings of diffuse plaques and neuritic plaques, and neurofibrillary tangle stage were associated with a dementia diagnosis. Education interacted with densities of neuritic plaques to predict dementia. Tangle density independently predicted dementia, but did not interact with education. Diffuse plaque density was unrelated to dementia when adjusted for densities of neuritic plaques and tangles. Among individuals with Alzheimer disease neuropathology, educational attainment, as a surrogate of cognitive reserve, modifies the influence of neuritic, but not diffuse, plaque neuropathology on the expression of dementia. PMID:18525294

  15. Prevalence Estimation of Protected Health Information in Swedish Clinical Text.

    PubMed

    Henriksson, Aron; Kvist, Maria; Dalianis, Hercules

    2017-01-01

    Obscuring protected health information (PHI) in the clinical text of health records facilitates the secondary use of healthcare data in a privacy-preserving manner. Although automatic de-identification of clinical text using machine learning holds much promise, little is known about the relative prevalence of PHI in different types of clinical text and whether there is a need for domain adaptation when learning predictive models from one particular domain and applying them to another. In this study, we address these questions by training a predictive model and using it to estimate the prevalence of PHI in clinical text written (1) in different clinical specialties, (2) in different types of notes (i.e., under different headings), and (3) by persons in different professional roles. It is demonstrated that the overall PHI density is 1.57%; however, substantial differences exist across domains.

  16. Poor health but not absent: prevalence, predictors, and outcomes of presenteeism.

    PubMed

    Robertson, Ivan; Leach, Desmond; Doerner, Nadin; Smeed, Matthew

    2012-11-01

    The objective of this study was to examine the prevalence of presenteeism, to develop and test a model of the relationship between workplace factors and presenteeism, and to assess the perceived influence of manager, coworkers, and self on presenteeism. We used survey data collected for 6309 employees from seven different organizations. Nearly 60% of the sample reported presenteeism during a 3-month period. The model was supported, with presenteeism linking workplace factors and health outcomes to productivity, as predicted. The majority of participants (67%) indicated that the primary pressure to attend work while sick came from themselves. A substantial minority (20%) also indicated the manager as a source of pressure. Psychosocial workplace factors are predictive of presenteeism, and efforts to control them, including the use of more effective management, may impact presenteeism rates and the resulting levels of productivity.

  17. Neonatal heart rate prediction.

    PubMed

    Abdel-Rahman, Yumna; Jeremic, Aleksander; Tan, Kenneth

    2009-01-01

    Technological advances have caused a decrease in the number of infant deaths. Pre-term infants now have a substantially increased chance of survival. One of the mechanisms that is vital to saving the lives of these infants is continuous monitoring and early diagnosis. With continuous monitoring, huge amounts of data are collected, with much information embedded in them. By using statistical analysis, this information can be extracted and used to aid diagnosis and to understand development. In this study we have a large dataset containing over 180 pre-term infants whose heart rates were recorded over the length of their stay in the Neonatal Intensive Care Unit (NICU). We test two types of models: empirical Bayesian and autoregressive moving average. We then attempt to predict future values. The autoregressive moving average model showed better results but required more computation.
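
    A small sketch of the autoregressive moving average approach described above, fitting an ARMA model with statsmodels to a single heart-rate series and forecasting ahead; the series here is synthetic, not NICU data.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(2)
        # Synthetic heart-rate trace standing in for one infant's monitoring record
        hr = 140 + 0.1 * np.cumsum(rng.normal(scale=0.5, size=500)) \
                 + rng.normal(scale=2.0, size=500)

        model = ARIMA(hr, order=(2, 0, 1))       # ARMA(2,1): no differencing
        fit = model.fit()
        print(fit.forecast(steps=10))            # predict the next 10 samples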

  18. Posttraumatic Stress Symptoms and Trajectories in Child Sexual Abuse Victims: An Analysis of Sex Differences Using the National Survey of Child and Adolescent Well-Being

    PubMed Central

    Koenen, Karestan C.; Jaffee, Sara R.

    2011-01-01

    Very few studies have prospectively examined sex differences in posttraumatic stress symptoms and symptom trajectories in youth victimized by childhood sexual abuse. This study addresses that question in a relatively large sample of children, drawn from the National Survey of Child and Adolescent Well-Being, who were between the ages of 8 and 16 years and who were reported to Child Protective Services for alleged sexual abuse. Sex differences were examined using t tests, logistic regression, and latent trajectory modeling. Results revealed that there were no sex differences in victims’ posttraumatic stress symptoms or trajectories. Whereas caseworkers substantiated girls’ abuse at higher rates than boys’ abuse and rated girls significantly higher than boys on level of harm, there were no sex differences in three more objective measures of abuse severity characteristics. Overall, higher caseworker ratings of harm predicted higher initial posttraumatic stress symptom levels, and substantiation status predicted shallower decreases in trauma symptoms over time. Implications for theory and intervention are discussed. PMID:19221872

  19. An improved correlation to predict molecular weight between crosslinks based on equilibrium degree of swelling of hydrogel networks.

    PubMed

    Jimenez-Vergara, Andrea C; Lewis, John; Hahn, Mariah S; Munoz-Pinto, Dany J

    2018-04-01

    Accurate characterization of hydrogel diffusional properties is of substantial importance for a range of biotechnological applications. The diffusional capacity of hydrogels has commonly been estimated using the average molecular weight between crosslinks (Mc), which is calculated based on the equilibrium degree of swelling. However, the existing correlation linking Mc and equilibrium swelling fails to accurately reflect the diffusional properties of highly crosslinked hydrogel networks. Also, as demonstrated herein, the current model fails to accurately predict the diffusional properties of hydrogels when polymer concentration and molecular weight are varied simultaneously. To address these limitations, we evaluated the diffusional properties of 48 distinct hydrogel formulations using two different photoinitiator systems, employing molecular size exclusion as an alternative methodology to calculate average hydrogel mesh size. The resulting data were then utilized to develop a revised correlation between Mc and hydrogel equilibrium swelling that substantially reduces the limitations associated with the current correlation. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 1339-1348, 2018. © 2017 Wiley Periodicals, Inc.
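
    For orientation, equilibrium-swelling estimates of Mc are usually based on a Flory-Rehner type relation of the following form; the paper's revised correlation modifies this kind of expression and is not reproduced here.

        \[
          \frac{1}{\bar{M}_c}
          \;=\;
          \frac{2}{\bar{M}_n}
          \;-\;
          \frac{(\bar{v}/V_1)\left[\ln(1 - v_{2,s}) + v_{2,s} + \chi\, v_{2,s}^{2}\right]}
               {v_{2,s}^{1/3} - \tfrac{1}{2}\, v_{2,s}}
        \]
        % \bar{M}_n: number-average molecular weight of the uncrosslinked polymer,
        % \bar{v}: polymer specific volume, V_1: molar volume of the solvent,
        % v_{2,s}: equilibrium polymer volume fraction, \chi: polymer-solvent interaction parameter.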

  20. Cumulative stress and substantiated maltreatment: the importance of caregiver vulnerability and adult partner violence.

    PubMed

    Wekerle, Christine; Wall, Anne-Marie; Leung, Eman; Trocmé, Nico

    2007-04-01

    Our goal is to assess the effect of caregiver vulnerabilities, singly and in combination, on the substantiation of child abuse (physical, sexual) and neglect, while controlling for relevant background variables. We test the moderator role of adult partner violence in qualifying the relationship between caregiver vulnerabilities and maltreatment substantiation. Secondary analyses of the 1998 Canadian Incidence Study of Reported Child Maltreatment (CIS) are used to predict child protective service investigation substantiation versus non-substantiation from a range of caregiver vulnerability factors. Involvement in partner violence was examined as a moderator in the relation between caregiver vulnerabilities and maltreatment substantiation. The CIS is an epidemiological survey of first-reported cases to child protective services, using a random sample of child welfare agencies across Canada. Child welfare workers completed a research form on the child, primary caregiver, family, perpetrator, severity and type of maltreatment, as well as services and court outcomes. All maltreatment classifications were assigned according to the Canadian legal definition of child abuse and neglect. Hierarchical logistic regression analyses were used, with stepped entry of: (1) demographic factors, socioeconomic disadvantage, and caregiver's own history of maltreatment; (2) caregiver vulnerability factors; (3) involvement in partner violence; (4) the interaction between caregiver vulnerability and partner violence. Caregiver substance abuse was found to be the single most potent kind of caregiver vulnerability in predicting maltreatment substantiation. When the total number of vulnerabilities was used as the predictor, prediction across all types of maltreatment increased, especially for substantiated neglect. Analyses also showed that the presence of partner violence in the home exacerbated the effect of caregiver vulnerability on substantiation. The total number of caregiver vulnerabilities was the best predictor of the substantiation of child abuse and neglect. This relationship was moderated by the existence of partner violence: high caregiver vulnerability and high partner violence increased the likelihood of substantiation versus non-substantiation. These results suggest that caregiver issues should be considered in tandem with partner relationships. Among child welfare cases, caregiver vulnerability and partner violence are critical targets for child maltreatment prevention and early child protective services intervention.
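
    A hedged sketch of the stepped logistic regression described above, entering background variables first, then the count of caregiver vulnerabilities, then partner violence, then their interaction, and comparing fit at each step; the data file and variable names are hypothetical.

        import pandas as pd
        import statsmodels.formula.api as smf

        cis = pd.read_csv("cis_cases.csv")       # hypothetical case-level extract
        steps = [
            "substantiated ~ caregiver_age + low_income + caregiver_maltx_history",
            "substantiated ~ caregiver_age + low_income + caregiver_maltx_history + n_vulnerabilities",
            "substantiated ~ caregiver_age + low_income + caregiver_maltx_history + n_vulnerabilities + partner_violence",
            "substantiated ~ caregiver_age + low_income + caregiver_maltx_history + n_vulnerabilities * partner_violence",
        ]
        fits = [smf.logit(f, data=cis).fit(disp=False) for f in steps]
        for prev, cur in zip(fits, fits[1:]):
            # Likelihood-ratio improvement contributed by each added block
            print(f"step improvement in 2*log-likelihood: {2 * (cur.llf - prev.llf):.2f}")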

  1. Prediction of cardiovascular disease risk among low-income urban dwellers in metropolitan Kuala Lumpur, Malaysia.

    PubMed

    Su, Tin Tin; Amiri, Mohammadreza; Mohd Hairi, Farizah; Thangiah, Nithiah; Bulgiba, Awang; Majid, Hazreen Abdul

    2015-01-01

    We aimed to predict the ten-year cardiovascular disease (CVD) risk among low-income urban dwellers of metropolitan Malaysia. Participants were selected from a cross-sectional survey conducted in Kuala Lumpur. To assess the 10-year CVD risk, we employed the Framingham risk scoring (FRS) models. Significant determinants of the ten-year CVD risk were identified using a General Linear Model (GLM). Altogether 882 adults (≥30 years old with no CVD history) were randomly selected. The classic FRS model (figures in parentheses are from the modified model) revealed that 20.5% (21.8%) and 38.46% (38.9%) of respondents were at high and moderate risk of CVD. The GLM models identified the importance of education, occupation, and marital status in predicting the future CVD risk. Our study indicated that one out of five low-income urban dwellers has a high chance of developing CVD within ten years. Health care expenditure, other illness-related costs and loss of productivity due to CVD would worsen the current situation of the low-income urban population. As such, public health professionals and policy makers should make a substantial effort to formulate public health policy and community-based interventions to minimize the potentially high future mortality and morbidity due to CVD among low-income urban dwellers.
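
    The sketch below shows the general Cox-type functional form on which Framingham-style ten-year risk scores are based; the coefficients, covariate means and baseline survival used in the example call are made-up placeholders, not the published FRS values.

        import math

        def ten_year_risk(x, beta, x_mean, s0_10yr):
            """10-year risk = 1 - S0(10) ** exp(sum(beta_i * (x_i - mean_i)))."""
            lp = sum(b * (xi - m) for b, xi, m in zip(beta, x, x_mean))
            return 1.0 - s0_10yr ** math.exp(lp)

        # Illustrative call with made-up numbers (age, total cholesterol, SBP, smoker flag)
        print(ten_year_risk(x=[55, 6.2, 140, 1],
                            beta=[0.05, 0.20, 0.02, 0.50],
                            x_mean=[49, 5.4, 129, 0.3],
                            s0_10yr=0.90))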

  2. Prediction of Cardiovascular Disease Risk among Low-Income Urban Dwellers in Metropolitan Kuala Lumpur, Malaysia

    PubMed Central

    Su, Tin Tin; Amiri, Mohammadreza; Mohd Hairi, Farizah; Thangiah, Nithiah; Majid, Hazreen Abdul

    2015-01-01

    We aimed to predict the ten-year cardiovascular disease (CVD) risk among low-income urban dwellers of metropolitan Malaysia. Participants were selected from a cross-sectional survey conducted in Kuala Lumpur. To assess the 10-year CVD risk, we employed the Framingham risk scoring (FRS) models. Significant determinants of the ten-year CVD risk were identified using a General Linear Model (GLM). Altogether 882 adults (≥30 years old with no CVD history) were randomly selected. The classic FRS model (figures in parentheses are from the modified model) revealed that 20.5% (21.8%) and 38.46% (38.9%) of respondents were at high and moderate risk of CVD. The GLM models identified the importance of education, occupation, and marital status in predicting the future CVD risk. Our study indicated that one out of five low-income urban dwellers has a high chance of developing CVD within ten years. Health care expenditure, other illness-related costs and loss of productivity due to CVD would worsen the current situation of the low-income urban population. As such, public health professionals and policy makers should make a substantial effort to formulate public health policy and community-based interventions to minimize the potentially high future mortality and morbidity due to CVD among low-income urban dwellers. PMID:25821810

  3. Enhanced outage prediction modeling for strong extratropical storms and hurricanes in the Northeastern United States

    NASA Astrophysics Data System (ADS)

    Cerrai, D.; Anagnostou, E. N.; Wanik, D. W.; Bhuiyan, M. A. E.; Zhang, X.; Yang, J.; Astitha, M.; Frediani, M. E.; Schwartz, C. S.; Pardakhti, M.

    2016-12-01

    The overwhelming majority of human activities need reliable electric power. Severe weather events can cause power outages, resulting in substantial economic losses and a temporary worsening of living conditions. Accurate prediction of these events and the communication of forecasted impacts to the affected utilities is necessary for efficient emergency preparedness and mitigation. The University of Connecticut Outage Prediction Model (OPM) uses regression tree models, high-resolution weather reanalysis and real-time weather forecasts (WRF and NCAR ensemble), airport station data, vegetation and electric grid characteristics and historical outage data to forecast the number and spatial distribution of outages in the power distribution grid located within dense vegetation. Recent OPM improvements consist of improved storm classification and the addition of new predictive weather-related variables, and are demonstrated using a leave-one-storm-out cross-validation based on 130 severe extratropical storms and two hurricanes (Sandy and Irene) in the Northeast US. We show that it is possible to predict the number of trouble spots causing outages in the electric grid with a median absolute percentage error as low as 27% for some storm types, and at most around 40%, on a scale that spans four orders of magnitude, from a few outages to tens of thousands. This outage information can be communicated to the electric utility to manage the allocation of crews and equipment and minimize the recovery time for an upcoming storm hazard.

  4. Usefulness of the rivermead postconcussion symptoms questionnaire and the trail-making test for outcome prediction in patients with mild traumatic brain injury.

    PubMed

    de Guise, Elaine; Bélanger, Sara; Tinawi, Simon; Anderson, Kirsten; LeBlanc, Joanne; Lamoureux, Julie; Audrit, Hélène; Feyz, Mitra

    2016-01-01

    The aim of the study was to determine if the Rivermead Postconcussion Symptoms Questionnaire (RPQ) is a better tool for outcome prediction than an objective neuropsychological assessment following mild traumatic brain injury (mTBI). The study included 47 patients with mTBI referred to an outpatient rehabilitation clinic. The RPQ and a brief neuropsychological battery were performed in the first few days following the trauma. The outcome measure used was the Mayo-Portland Adaptability Inventory-4 (MPAI-4) which was completed within the first 3 months. The only variable associated with results on the MPAI-4 was the RPQ score (p < .001). The predictive outcome model including age, education, and the results of the Trail-Making Test, Parts A and B (TMT), had a pseudo-R² of .02. When the RPQ score was added, the pseudo-R² climbed to .19. This model indicates that the usefulness of the RPQ score and the TMT in predicting moderate-to-severe limitations, while controlling for confounders, is substantial as suggested by a significant increase in the model chi-square value, Δχ²(1 df) = 6.517, p < .001. The RPQ and the TMT provide clinicians with a brief and reliable tool for predicting outcome functioning and can help target the need for further intervention and rehabilitation following mTBI.

  5. Land-atmosphere coupling and climate prediction over the U.S. Southern Great Plains

    NASA Astrophysics Data System (ADS)

    Williams, I. N.; Lu, Y.; Kueppers, L. M.; Riley, W. J.; Biraud, S.; Bagley, J. E.; Torn, M. S.

    2016-12-01

    Biases in land-atmosphere coupling in climate models can contribute to climate prediction biases, but land models are rarely evaluated in the context of this coupling. We tested land-atmosphere coupling and explored effects of land surface parameterizations on climate prediction in a single-column version of the NCAR Community Earth System Model (CESM1.2.2) and an offline Community Land Model (CLM4.5). The correlation between leaf area index (LAI) and surface evaporative fraction (ratio of latent to total turbulent heat flux) was substantially underpredicted compared to observations in the U.S. Southern Great Plains, while the correlation between soil moisture and evaporative fraction was overpredicted by CLM4.5. These correlations were improved by prescribing observed LAI, increasing soil resistance to evaporation, increasing minimum stomatal conductance, and increasing leaf reflectance. The modifications reduced the root mean squared error (RMSE) in daytime 2 m air temperature from 3.6 °C to 2 °C in summer (JJA), and reduced RMSE in total JJA precipitation from 133 to 84 mm. The modifications had the largest effect on prediction of summer drought in 2006, when a warm bias in daytime 2 m air temperature was reduced from +6 °C to a smaller cold bias of -1.3 °C, and a corresponding dry bias in total JJA precipitation was reduced from -111 mm to -23 mm. Thus, the role of vegetation in droughts and heat waves is likely underpredicted in CESM1.2.2, and improvements in land surface models can improve prediction of climate extremes.

  6. Understanding Longitudinal Wood Fiber Ultra-structure for Producing Cellulose Nanofibrils Using Disk Milling with Diluted Acid Prehydrolysis

    NASA Astrophysics Data System (ADS)

    Qin, Yanlin; Qiu, Xueqing; Zhu, J. Y.

    2016-10-01

    Here we used dilute oxalic acid to pretreat kraft-bleached Eucalyptus pulp (BEP) fibers to facilitate mechanical fibrillation in producing cellulose nanofibrils using disk milling with substantial mechanical energy savings. We successfully applied a reaction kinetics-based combined hydrolysis factor (CHFX) as a severity factor to quantitatively control xylan dissolution and BEP fibril depolymerization. More importantly, we were able to accurately predict the degree of polymerization (DP) of disk-milled fibrils using CHFX and milling time or milling energy consumption. The experimentally determined ratio of fibril DP to number-mean fibril height (diameter d), DP/d, an aspect ratio measure, was independent of the processing conditions. Therefore, we hypothesize that cellulose has a longitudinal hierarchical structure, as it does in the lateral direction. Acid hydrolysis and milling did not substantially cut the “natural” chain length of cellulose fibrils. This cellulose longitudinal hierarchical model provides support for using weak acid hydrolysis in the production of cellulose nanofibrils with substantially reduced energy input without negatively affecting fibril mechanical strength.

  7. Magnitude and Temporal Variability of Inter-stimulus EEG Modulate the Linear Relationship Between Laser-Evoked Potentials and Fast-Pain Perception

    PubMed Central

    Li, Linling; Huang, Gan; Lin, Qianqian; Liu, Jia; Zhang, Shengli; Zhang, Zhiguo

    2018-01-01

    The level of pain perception is correlated with the magnitude of pain-evoked brain responses, such as laser-evoked potentials (LEP), across trials. The positive LEP-pain relationship lays the foundation for pain prediction based on single-trial LEP, but cross-individual pain prediction does not have a good performance because the LEP-pain relationship exhibits substantial cross-individual difference. In this study, we aim to explain the cross-individual difference in the LEP-pain relationship using inter-stimulus EEG (isEEG) features. The isEEG features (root mean square as magnitude and mean square successive difference as temporal variability) were estimated from isEEG data (at full band and five frequency bands) recorded between painful stimuli. A linear model was fitted to investigate the relationship between pain ratings and LEP response for fast-pain trials on a trial-by-trial basis. Then the correlation between isEEG features and the parameters of LEP-pain model (slope and intercept) was evaluated. We found that the magnitude and temporal variability of isEEG could modulate the parameters of an individual's linear LEP-pain model for fast-pain trials. Based on this, we further developed a new individualized fast-pain prediction scheme, which only used training individuals with similar isEEG features as the test individual to train the fast-pain prediction model, and obtained improved accuracy in cross-individual fast-pain prediction. The findings could help elucidate the neural mechanism of cross-individual difference in pain experience and the proposed fast-pain prediction scheme could be potentially used as a practical and feasible pain prediction method in clinical practice. PMID:29904336
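
    A small sketch (not the authors' pipeline) of the two inter-stimulus EEG features named above, root mean square as a magnitude measure and mean square successive difference as a temporal-variability measure, computed per segment on synthetic data.

        import numpy as np

        def rms(segment):
            """Root mean square: magnitude of the inter-stimulus signal."""
            return np.sqrt(np.mean(np.square(segment)))

        def mssd(segment):
            """Mean square successive difference: temporal variability of the signal."""
            return np.mean(np.square(np.diff(segment)))

        rng = np.random.default_rng(3)
        is_eeg = rng.normal(size=(40, 2000))     # 40 synthetic inter-stimulus segments
        features = np.array([[rms(s), mssd(s)] for s in is_eeg])
        print(features.mean(axis=0))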

  8. Climate variability slows evolutionary responses of Colias butterflies to recent climate change.

    PubMed

    Kingsolver, Joel G; Buckley, Lauren B

    2015-03-07

    How do recent climate warming and climate variability alter fitness, phenotypic selection and evolution in natural populations? We combine biophysical, demographic and evolutionary models with recent climate data to address this question for the subalpine and alpine butterfly, Colias meadii, in the southern Rocky Mountains. We focus on predicting patterns of selection and evolution for a key thermoregulatory trait, melanin (solar absorptivity) on the posterior ventral hindwings, which affects patterns of body temperature, flight activity, adult and egg survival, and reproductive success in Colias. Both mean annual summer temperatures and thermal variability within summers have increased during the past 60 years at subalpine and alpine sites. At the subalpine site, predicted directional selection on wing absorptivity has shifted from generally positive (favouring increased wing melanin) to generally negative during the past 60 years, but there is substantial variation among years in the predicted magnitude and direction of selection and in the optimal absorptivity. The predicted magnitude of directional selection at the alpine site declined during the past 60 years and varies substantially among years, but selection has generally been positive at this site. The predicted evolutionary response to mean climate warming at the subalpine site since 1980 is small, because of the variability in selection and the asymmetry of the fitness function. At both sites, the predicted effects of adaptive evolution on mean population fitness are much smaller than the fluctuations in mean fitness due to climate variability among years. Our analyses suggest that variation in climate within and among years may strongly limit evolutionary responses of ectotherms to mean climate warming in these habitats. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  9. Pharmacokinetic modeling: Prediction and evaluation of route dependent dosimetry of bisphenol A in monkeys with extrapolation to humans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, Jeffrey W., E-mail: jeffrey.fisher@fda.hhs.gov; Twaddle, Nathan C.; Vanlandingham, Michelle

    A physiologically based pharmacokinetic (PBPK) model was developed for bisphenol A (BPA) in adult rhesus monkeys using intravenous (iv) and oral bolus doses of 100 µg d6-BPA/kg. This calibrated PBPK adult monkey model for BPA was then evaluated against published monkey kinetic studies with BPA. Using two versions of the adult monkey model, based on kinetic data from two published monkey studies, the aglycone BPA pharmacokinetics were simulated for human oral ingestion of 5 mg d16-BPA per person (Voelkel et al., 2002). Voelkel et al. were unable to detect the aglycone BPA in plasma, but were able to detect BPA metabolites. These human model predictions of aglycone BPA in plasma were then compared to previously published PBPK model predictions obtained by simulating the Voelkel et al. kinetic study. Both versions of our human BPA model, using parameter sets reflecting the two adult monkey studies, predicted lower aglycone levels in human serum than the previous human BPA PBPK model. BPA was metabolized at all ages of monkey (PND 5 to adult) by the gut wall and liver. However, the hepatic metabolism of BPA and the systemic clearance of its phase II metabolites appear to be slower in younger monkeys than in adults. The use of the current non-human primate BPA model parameters provides more confidence in predicting aglycone BPA serum levels in humans after oral ingestion of BPA. Highlights: A bisphenol A (BPA) PBPK model for the infant and adult monkey was constructed. The hepatic metabolic rate of BPA increased with the age of the monkey. The systemic clearance rate of metabolites increased with the age of the monkey. Gut wall metabolism of orally administered BPA was substantial across all ages of monkeys. Aglycone BPA plasma concentrations were predicted in humans given oral doses of deuterated BPA.
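
    For readers unfamiliar with pharmacokinetic modeling, the sketch below shows a drastically simplified one-compartment model with first-order gut absorption and elimination, not the multi-compartment PBPK model of the study; all rate constants and the dose are hypothetical.

    ```python
    # Highly simplified illustration, not the paper's PBPK model: one-compartment
    # kinetics for an oral bolus, written as a pair of ODEs. Parameters are assumed.
    import numpy as np
    from scipy.integrate import solve_ivp

    ka, ke, vd = 1.2, 0.6, 5.0   # absorption rate (1/h), elimination rate (1/h), volume (L) - assumed

    def pk_rhs(t, y):
        gut, central = y
        return [-ka * gut, ka * gut - ke * central]

    dose_ug = 100.0  # oral bolus, micrograms (illustrative)
    sol = solve_ivp(pk_rhs, (0.0, 12.0), [dose_ug, 0.0], t_eval=np.linspace(0.0, 12.0, 25))
    plasma_conc = sol.y[1] / vd  # ug/L in the central compartment
    print(np.round(plasma_conc, 2))
    ```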

  10. Internal structure of shock waves in disparate mass mixtures

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.

    1992-01-01

    The detailed flow structure of a normal shock wave for a gas mixture is investigated using the direct-simulation Monte Carlo method. A variable diameter hard-sphere (VDHS) model is employed to investigate the effect of different viscosity temperature exponents (VTE) for each species in a gas mixture. Special attention is paid to the irregular behavior in the density profiles that was previously observed in a helium-xenon experiment. It is shown that the VTE can have substantial effects on the predicted structure of shock waves. The variable hard-sphere model of Bird shows good agreement with the experimental data, with some limitations, if a common VTE is chosen properly for each case. The VDHS model shows better agreement with the experimental data without adjusting the VTE. The irregular behavior of the light-gas component in shock waves of disparate-mass mixtures is observed not only in the density profile, but also in the parallel temperature profile. The strength of the shock wave, the type of molecular interactions, and the mole fraction of heavy species have substantial effects on the existence and structure of the irregularities.
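
    The viscosity temperature exponent enters hard-sphere-family models through a power-law viscosity-temperature relation; the sketch below illustrates that relation with approximate, assumed reference viscosities and exponents for a light and a heavy species.

    ```python
    # Sketch of the power-law relation assumed by variable hard-sphere-type models,
    # mu(T) = mu_ref * (T / T_ref)**omega, where omega is the viscosity temperature
    # exponent (VTE) and differs by species in a VDHS treatment. Values are approximate.
    def vhs_viscosity(temperature_k, mu_ref, t_ref_k=273.0, omega=0.75):
        """Viscosity (Pa*s) from a VHS-style power law."""
        return mu_ref * (temperature_k / t_ref_k) ** omega

    # Example: a helium-like and a xenon-like species with assumed reference values.
    print(vhs_viscosity(1000.0, mu_ref=1.87e-5, omega=0.66))  # light species
    print(vhs_viscosity(1000.0, mu_ref=2.11e-5, omega=0.85))  # heavy species
    ```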

  11. Epidemiology of Plasmodium falciparum gametocytemia in India: prevalence, age structure, risk factors and the role of a predictive score for detection.

    PubMed

    Shah, Naman K; Poole, Charles; MacDonald, Pia D M; Srivastava, Bina; Schapira, Allan; Juliano, Jonathan J; Anvikar, Anup; Meshnick, Steven R; Valecha, Neena; Mishra, Neelima

    2013-07-01

    To characterise the epidemiology of Plasmodium falciparum gametocytemia and determine the prevalence, age structure and the viability of a predictive model for detection. We collected data from 21 therapeutic efficacy trials conducted in India during 2009-2010 and estimated the contribution of each age group to the reservoir of transmission. We built a predictive model for gametocytemia and calculated the diagnostic utility of different score cut-offs from our risk score. Gametocytemia was present in 18% (248/1335) of patients and decreased with age. Adults constituted 43%, school-age children 45% and under-fives 12% of the reservoir for potential transmission. Our model retained age, sex, region and previous antimalarial drug intake as predictors of gametocytemia. The area under the receiver operating characteristic curve was 0.76 (95% CI 0.73-0.78), and a cut-off of 14 or more on a risk score ranging from 0 to 46 provided 91% (95% CI 88-95) sensitivity and 33% (95% CI 31-36) specificity for detecting gametocytemia. Gametocytemia was common in India and varied by region. Notably, adults contributed substantially to the reservoir for potential transmission. Predictive modelling to generate a clinical algorithm for detecting gametocytemia did not provide sufficient discrimination for targeting interventions. © 2013 Blackwell Publishing Ltd.
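
    As an illustration of how such a score cut-off is evaluated, the sketch below computes sensitivity and specificity at a cut-off of 14 on synthetic scores drawn for a population with roughly the reported 18% prevalence; it does not use the study data.

    ```python
    # Illustrative only (synthetic scores): sensitivity and specificity of a
    # risk-score cut-off, mirroring the >=14 rule described above.
    import numpy as np

    def sensitivity_specificity(scores, has_gametocytes, cutoff):
        """Classify score >= cutoff as 'predicted gametocytemic' and compare to truth."""
        scores = np.asarray(scores)
        truth = np.asarray(has_gametocytes, dtype=bool)
        predicted = scores >= cutoff
        sens = np.mean(predicted[truth])        # true positives / all positives
        spec = np.mean(~predicted[~truth])      # true negatives / all negatives
        return sens, spec

    rng = np.random.default_rng(1)
    truth = rng.random(1000) < 0.18                       # ~18% prevalence, as reported
    scores = rng.integers(0, 47, size=1000) + 6 * truth   # scores on the 0-46 scale, shifted for cases

    sens, spec = sensitivity_specificity(scores, truth, cutoff=14)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
    ```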

  12. Improving rational thermal comfort prediction by using subpopulation characteristics: A case study at Hermitage Amsterdam

    PubMed Central

    Kramer, Rick; Schellen, Lisje; Schellen, Henk; Kingma, Boris

    2017-01-01

    This study aims to improve the prediction accuracy of the rational standard thermal comfort model, known as the Predicted Mean Vote (PMV) model, by (1) calibrating one of its input variables, “metabolic rate,” and (2) extending it by explicitly incorporating the variable running mean outdoor temperature (RMOT) that relates to adaptive thermal comfort. The analysis was performed with survey data (n = 1121) and climate measurements of the indoor and outdoor environment from a year-long case study undertaken at the Hermitage Amsterdam museum in the Netherlands. The PMVs were calculated for 35 survey days using (1) an a priori assumed metabolic rate, (2) a calibrated metabolic rate found by fitting the PMVs to the thermal sensation votes (TSVs) of each respondent using an optimization routine, and (3) the PMV model extended to include the RMOT. The results show that the calibrated metabolic rate is estimated to be 1.5 Met for this case study, which was predominantly visited by elderly females. However, significant differences in metabolic rates were revealed between adults and the elderly, showing the importance of differentiating between subpopulations. Hence, the standard tabular values, which only differentiate between various activities, may be oversimplified for many cases. Moreover, extending the PMV model with the RMOT substantially improves the thermal sensation prediction, but thermal sensation toward extreme cool and warm sensations remains partly underestimated. PMID:28680934
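
    A conceptual sketch of the calibration step is given below: a single metabolic-rate value is fitted by least squares against thermal sensation votes. The pmv_estimate() function is a deliberately crude, hypothetical stand-in for the Fanger PMV equations used in the study, and the survey numbers are synthetic.

    ```python
    # Conceptual sketch of metabolic-rate calibration by least squares; the PMV
    # function below is a crude placeholder, not the Fanger PMV model.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def pmv_estimate(met, operative_temp_c):
        """Hypothetical stand-in: PMV rises with operative temperature and metabolic rate."""
        return 0.3 * (operative_temp_c - 22.0) + 0.8 * (met - 1.2)

    # Synthetic survey data: operative temperatures (degC) and thermal sensation votes (-3..+3).
    temps = np.array([20.5, 21.0, 22.5, 23.0, 24.5, 25.0])
    tsv = np.array([-0.5, -0.3, 0.2, 0.4, 0.9, 1.1])

    def sse(met):
        """Sum of squared differences between predicted PMV and observed votes."""
        return np.sum((pmv_estimate(met, temps) - tsv) ** 2)

    result = minimize_scalar(sse, bounds=(0.8, 3.0), method="bounded")
    print("calibrated metabolic rate (Met):", round(result.x, 2))
    ```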

  13. Predicting bifurcation angle effect on blood flow in the microvasculature.

    PubMed

    Yang, Jiho; Pak, Y Eugene; Lee, Tae-Rin

    2016-11-01

    Since blood viscosity is a basic parameter for understanding hemodynamics in human physiology, a great amount of research has been done to accurately predict this highly non-Newtonian flow property. However, previous work lacked consideration of the hemodynamic changes induced by heterogeneous vessel networks. In this paper, the effect of bifurcation on hemodynamics in a microvasculature is quantitatively predicted. The flow resistance in a single bifurcating microvessel was calculated by combining a new simple mathematical model with 3-dimensional flow simulation for varying bifurcation angles under physiological flow conditions. Interestingly, the results indicate that the flow resistance induced by vessel bifurcation holds a constant value of approximately 0.44 over the whole single-bifurcation model below a diameter of 60 μm, regardless of geometric parameters including bifurcation angle. Flow solutions computed from this new model showed a substantial decrement in flow velocity relative to other mathematical models that do not include vessel bifurcation effects, while pressure remained the same. Furthermore, when the bifurcation angle effect was applied to the entire microvascular network, the simulation results gave better agreement with recent in vivo experimental measurements. This finding suggests a new paradigm in microvascular blood flow properties: vessel bifurcation itself, regardless of its angle, holds considerable influence on blood viscosity, and this phenomenon will help to develop new predictive tools in microvascular research. Copyright © 2016 Elsevier Inc. All rights reserved.
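
    For context, the sketch below computes the Poiseuille resistance of a straight microvessel segment, the baseline quantity that bifurcation-free network models rely on; the paper's additional bifurcation-induced resistance term is not reproduced, and the viscosity and geometry are assumed values.

    ```python
    # Background sketch only: Poiseuille segment resistance, R = 128*mu*L / (pi*D**4),
    # as used by network models without bifurcation effects. Values are assumed.
    import math

    def poiseuille_resistance(viscosity_pa_s, length_m, diameter_m):
        """Hydraulic resistance of a straight cylindrical vessel segment (Pa*s/m^3)."""
        return 128.0 * viscosity_pa_s * length_m / (math.pi * diameter_m ** 4)

    mu = 3.0e-3   # Pa*s, assumed apparent blood viscosity
    R = poiseuille_resistance(mu, length_m=1.0e-3, diameter_m=50.0e-6)  # 1 mm long, 50 um vessel
    print(f"segment resistance: {R:.3e} Pa*s/m^3")
    ```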

  14. Feature Selection Methods for Zero-Shot Learning of Neural Activity

    PubMed Central

    Caceres, Carlos A.; Roos, Matthew J.; Rupp, Kyle M.; Milsap, Griffin; Crone, Nathan E.; Wolmetz, Michael E.; Ratto, Christopher R.

    2017-01-01

    Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy. PMID:28690513
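
    The sketch below illustrates the correlation-based stability idea on synthetic data: each candidate feature is scored by how well its responses to the same stimuli correlate across two repetitions, and the most stable features are retained. It is a generic reconstruction, not the authors' code.

    ```python
    # Illustrative sketch (synthetic data) of correlation-based feature stability:
    # rank features by the correlation of their responses across two repetitions.
    import numpy as np

    def stability_scores(rep1, rep2):
        """Per-feature Pearson correlation between two repetitions (stimuli x features)."""
        r1 = rep1 - rep1.mean(axis=0)
        r2 = rep2 - rep2.mean(axis=0)
        num = (r1 * r2).sum(axis=0)
        den = np.sqrt((r1 ** 2).sum(axis=0) * (r2 ** 2).sum(axis=0))
        return num / den

    rng = np.random.default_rng(2)
    signal = rng.normal(size=(60, 500))                     # 60 stimuli x 500 candidate features
    rep1 = signal + rng.normal(scale=1.5, size=signal.shape)
    rep2 = signal + rng.normal(scale=1.5, size=signal.shape)

    scores = stability_scores(rep1, rep2)
    selected = np.argsort(scores)[-50:]                     # keep the 50 most stable features
    print("mean stability of selected features:", scores[selected].mean().round(3))
    ```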

  15. Improved Electrostatic Embedding for Fragment-Based Chemical Shift Calculations in Molecular Crystals.

    PubMed

    Hartman, Joshua D; Balaji, Ashwin; Beran, Gregory J O

    2017-12-12

    Fragment-based methods predict nuclear magnetic resonance (NMR) chemical shielding tensors in molecular crystals with high accuracy and computational efficiency. Such methods typically employ electrostatic embedding to mimic the crystalline environment, and the quality of the results can be sensitive to the embedding treatment. To improve the quality of this embedding environment for fragment-based molecular crystal property calculations, we borrow ideas from the embedded ion method to incorporate self-consistently polarized Madelung field effects. The self-consistent reproduction of the Madelung potential (SCRMP) model developed here constructs an array of point charges that incorporates self-consistent lattice polarization and reproduces the Madelung potential at all atomic sites involved in the quantum mechanical region of the system. The performance of fragment- and cluster-based 1H, 13C, 15N, and 17O chemical shift predictions using SCRMP and density functionals such as PBE and PBE0 is assessed. The improved embedding model results in substantial improvements in the predicted 17O chemical shifts and modest improvements in the 15N ones. Finally, the performance of the model is demonstrated by examining the assignment of the two oxygen chemical shifts in the challenging γ-polymorph of glycine. Overall, the SCRMP-embedded NMR chemical shift predictions are on par with or more accurate than those obtained with the widely used gauge-including projector augmented wave (GIPAW) model.

  16. Reionization and the Abundance of Galactic Satellites

    NASA Astrophysics Data System (ADS)

    Bullock, James S.; Kravtsov, Andrey V.; Weinberg, David H.

    2000-08-01

    One of the main challenges facing standard hierarchical structure formation models is that the predicted abundance of Galactic subhalos with circular velocities v_c ~ 10-30 km s^-1 is an order of magnitude higher than the number of satellites actually observed within the Local Group. Using a simple model for the formation and evolution of dark halos, based on the extended Press-Schechter formalism and tested against N-body results, we show that the theoretical predictions can be reconciled with observations if gas accretion in low-mass halos is suppressed after the epoch of reionization. In this picture, the observed dwarf satellites correspond to the small fraction of halos that accreted substantial amounts of gas before reionization. The photoionization mechanism naturally explains why the discrepancy between predicted halos and observed satellites sets in at v_c ~ 30 km s^-1, and for reasonable choices of the reionization redshift (z_re ~ 5-12) the model can reproduce both the amplitude and shape of the observed velocity function of galactic satellites. If this explanation is correct, then typical bright galaxy halos contain many low-mass dark matter subhalos. These might be detectable through their gravitational lensing effects, through their influence on stellar disks, or as dwarf satellites with very high mass-to-light ratios. This model also predicts a diffuse stellar component produced by large numbers of tidally disrupted dwarfs, perhaps sufficient to account for most of the Milky Way's stellar halo.

  17. Improving rational thermal comfort prediction by using subpopulation characteristics: A case study at Hermitage Amsterdam.

    PubMed

    Kramer, Rick; Schellen, Lisje; Schellen, Henk; Kingma, Boris

    2017-01-01

    This study aims to improve the prediction accuracy of the rational standard thermal comfort model, known as the Predicted Mean Vote (PMV) model, by (1) calibrating one of its input variables, "metabolic rate," and (2) extending it by explicitly incorporating the variable running mean outdoor temperature (RMOT) that relates to adaptive thermal comfort. The analysis was performed with survey data (n = 1121) and climate measurements of the indoor and outdoor environment from a year-long case study undertaken at the Hermitage Amsterdam museum in the Netherlands. The PMVs were calculated for 35 survey days using (1) an a priori assumed metabolic rate, (2) a calibrated metabolic rate found by fitting the PMVs to the thermal sensation votes (TSVs) of each respondent using an optimization routine, and (3) the PMV model extended to include the RMOT. The results show that the calibrated metabolic rate is estimated to be 1.5 Met for this case study, which was predominantly visited by elderly females. However, significant differences in metabolic rates were revealed between adults and the elderly, showing the importance of differentiating between subpopulations. Hence, the standard tabular values, which only differentiate between various activities, may be oversimplified for many cases. Moreover, extending the PMV model with the RMOT substantially improves the thermal sensation prediction, but thermal sensation toward extreme cool and warm sensations remains partly underestimated.

  18. Dependence of Microelastic-plastic Nonlinearity of Martensitic Stainless Steel on Fatigue Damage Accumulation

    NASA Technical Reports Server (NTRS)

    Cantrell, John H.

    2006-01-01

    Self-organized substructural arrangements of dislocations formed in wavy slip metals during cyclic stress-induced fatigue produce substantial changes in the material microelastic-plastic nonlinearity, a quantitative measure of which is the nonlinearity parameter Beta extracted from acoustic harmonic generation measurements. The contributions to Beta from the substructural evolution of dislocations and crack growth for fatigued martensitic 410Cb stainless steel are calculated from the Cantrell model as a function of percent full fatigue life to fracture. A wave interaction factor f(sub WI) is introduced into the model to account experimentally for the relative volume of material fatigue damage included in the volume of material swept out by an interrogating acoustic wave. For cyclic stress-controlled loading at 551 MPa and f(sub WI) = 0.013 the model predicts a monotonic increase in Beta from dislocation substructures of almost 100 percent from the virgin state to roughly 95 percent full life. Negligible contributions from cracks are predicted in this range of fatigue life. However, over the last five percent of fatigue life the model predicts a rapid monotonic increase of Beta by several thousand percent that is dominated by crack growth. The theoretical predictions are in good agreement with experimental measurements of 410Cb stainless steel samples fatigued in uniaxial, stress-controlled cyclic loading at 551 MPa from zero to full tensile load with a measured f(sub WI) of 0.013.
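
    For reference, the nonlinearity parameter is commonly extracted from harmonic generation measurements as beta = 8*A2/(k^2*x*A1^2); the sketch below applies that standard relation with illustrative amplitudes, not the 410Cb measurements.

    ```python
    # Sketch of the standard harmonic-generation estimate of the acoustic nonlinearity
    # parameter: beta = 8*A2 / (k**2 * x * A1**2), with fundamental amplitude A1,
    # second-harmonic amplitude A2, wavenumber k, and propagation distance x.
    # The numbers below are illustrative, not measured values.
    import math

    def nonlinearity_beta(a1, a2, frequency_hz, sound_speed_m_s, distance_m):
        k = 2.0 * math.pi * frequency_hz / sound_speed_m_s   # wavenumber (rad/m)
        return 8.0 * a2 / (k ** 2 * distance_m * a1 ** 2)

    beta = nonlinearity_beta(a1=1.0e-9, a2=4.0e-13, frequency_hz=5.0e6,
                             sound_speed_m_s=5900.0, distance_m=0.01)
    print(f"beta ~ {beta:.1f}")
    ```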

  19. Predictive aging results in radiation environments

    NASA Astrophysics Data System (ADS)

    Gillen, Kenneth T.; Clough, Roger L.

    1993-06-01

    We have previously derived a time-temperature-dose rate superposition methodology which, when applicable, can be used to predict polymer degradation versus dose rate, temperature and exposure time. This methodology results in predictive capabilities at the low dose rates and long time periods appropriate, for instance, to ambient nuclear power plant environments. The methodology was successfully applied to several polymeric cable materials and then verified for two of the materials by comparing the model predictions with 12 year, low-dose-rate aging data on these materials from a nuclear environment. In this paper, we provide a more detailed discussion of the methodology and apply it to data obtained on a number of additional nuclear power plant cable insulation (a hypalon, a silicone rubber and two ethylene-tetrafluoroethylenes) and jacket (a hypalon) materials. We then show that the predicted, low-dose-rate results for our materials are in excellent agreement with long-term (7-9 year) low-dose-rate results recently obtained for the same material types actually aged under nuclear power plant conditions. Based on a combination of the modelling and the long-term results, we find indications of reasonably similar degradation responses among several different commercial formulations for each of the following "generic" materials: hypalon, ethylene-tetrafluoroethylene, silicone rubber and PVC. If such "generic" behavior can be further substantiated through modelling and long-term results on additional formulations, predictions of cable life for other commercial materials of the same generic types would be greatly facilitated.
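
    A generic sketch of the Arrhenius shift factor that underlies time-temperature superposition is given below; the activation energy and temperatures are assumed values for illustration, not parameters from the cable study.

    ```python
    # Generic illustration of the Arrhenius shift factor used in time-temperature
    # superposition, a_T = exp[(Ea/R) * (1/T_ref - 1/T)]: data taken at temperature T
    # are shifted to the reference temperature by multiplying aging time (or dividing
    # dose rate) by a_T. The activation energy is an assumed value.
    import math

    R_GAS = 8.314  # J/(mol K)

    def arrhenius_shift(temp_c, ref_temp_c, activation_energy_j_mol=90e3):
        t = temp_c + 273.15
        t_ref = ref_temp_c + 273.15
        return math.exp((activation_energy_j_mol / R_GAS) * (1.0 / t_ref - 1.0 / t))

    # Shift accelerated-aging time at 100 degC to an assumed 40 degC ambient plant environment.
    a_t = arrhenius_shift(100.0, 40.0)
    print(f"shift factor a_T ~ {a_t:.0f}: 1 day at 100 degC ~ {a_t:.0f} days at 40 degC")
    ```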

  20. New Methods for Estimating Seasonal Potential Climate Predictability

    NASA Astrophysics Data System (ADS)

    Feng, Xia

    This study develops two new statistical approaches for assessing the seasonal potential predictability of observed climate variables. One is the univariate analysis of covariance (ANOCOVA) model, a combination of an autoregressive (AR) model and analysis of variance (ANOVA). It has the advantage of accounting for the uncertainty of the estimated parameters due to sampling errors in the statistical test, which is often neglected in AR-based methods, and of accounting for daily autocorrelation, which is not considered in traditional ANOVA. In the ANOCOVA model, the seasonal signals arising from external forcing are tested for being identical across years, to assess whether any interannual variability that exists is potentially predictable. The bootstrap is an attractive alternative that requires no hypothesized model and is applicable no matter how mathematically complicated the parameter estimator is. This method builds up the empirical distribution of the interannual variance from resamplings drawn with replacement from the given sample, under the null hypothesis that the only variability in seasonal means arises from weather noise. These two methods are applied to temperature and to water cycle components, including precipitation and evaporation, to measure the extent to which the interannual variance of seasonal means exceeds unpredictable weather noise, and the results are compared with previous methods, including Leith-Shukla-Gutzler (LSG), Madden, and Katz. The potential predictability of temperature from the ANOCOVA model, the bootstrap, LSG and Madden exhibits a pronounced tropical-extratropical contrast, with much larger predictability in the tropics, dominated by El Niño/Southern Oscillation (ENSO), than in higher latitudes, where strong internal variability lowers predictability. The bootstrap tends to display the highest predictability of the four methods, ANOCOVA lies in the middle, while LSG and Madden appear to generate lower predictability. Seasonal precipitation from ANOCOVA, the bootstrap, and Katz, resembling that for temperature, is more predictable over the tropical regions and less predictable in the extratropics. The bootstrap and ANOCOVA are in good agreement with each other, both methods generating larger predictability than Katz. The seasonal predictability of evaporation over land bears considerable similarity to that of temperature using ANOCOVA, the bootstrap, LSG and Madden. Remote SST forcing and soil moisture reveal substantial seasonality in their relations with the potentially predictable seasonal signals. For selected regions, either SST or soil moisture or both show significant relationships with the predictable signals, providing indirect insight into the slowly varying boundary processes that enable useful seasonal climate prediction. A multivariate analysis of covariance (MANOCOVA) model is established to identify distinctive predictable patterns that are uncorrelated with each other. Generally speaking, the seasonal predictability from the multivariate model is consistent with that from ANOCOVA. Besides unveiling the spatial variability of predictability, the MANOCOVA model also reveals the temporal variability of each predictable pattern, which could be linked to periodic oscillations.
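
    The sketch below illustrates the bootstrap approach described above on synthetic data: daily values are resampled with replacement to build a null distribution of the interannual variance of seasonal means under weather noise alone, and the observed variance is compared against it. Unlike the study's methods, this toy version ignores daily autocorrelation.

    ```python
    # Simplified bootstrap sketch (synthetic data): null distribution of the
    # interannual variance of seasonal means under weather noise alone.
    import numpy as np

    rng = np.random.default_rng(3)
    n_years, days_per_season = 30, 90
    # Synthetic daily values: weather noise plus a small predictable year-to-year signal.
    daily = rng.normal(size=(n_years, days_per_season)) + rng.normal(scale=0.3, size=(n_years, 1))

    observed_var = daily.mean(axis=1).var(ddof=1)   # interannual variance of seasonal means

    pooled = daily.ravel()
    null_vars = []
    for _ in range(2000):
        surrogate = rng.choice(pooled, size=(n_years, days_per_season), replace=True)
        null_vars.append(surrogate.mean(axis=1).var(ddof=1))

    p_value = np.mean(np.asarray(null_vars) >= observed_var)
    print(f"observed variance = {observed_var:.3f}, bootstrap p-value = {p_value:.3f}")
    ```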
