Sample records for model adequately predicted

  1. Using Multitheory Model of Health Behavior Change to Predict Adequate Sleep Behavior.

    PubMed

    Knowlden, Adam P; Sharma, Manoj; Nahar, Vinayak K

    The purpose of this article was to use the multitheory model of health behavior change in predicting adequate sleep behavior in college students. A valid and reliable survey was administered in a cross-sectional design (n = 151). For initiation of adequate sleep behavior, the construct of behavioral confidence (P < .001) was found to be significant and accounted for 24.4% of the variance. For sustenance of adequate sleep behavior, changes in social environment (P < .02), emotional transformation (P < .001), and practice for change (P < .001) were significant and accounted for 34.2% of the variance.

  2. Pre-clinical methods for detecting the hypersensitivity potential of pharmaceuticals: regulatory considerations.

    PubMed

    Hastings, K L

    2001-02-02

    Immune-based systemic hypersensitivities account for a significant number of adverse drug reactions. There appear to be no adequate nonclinical models to predict systemic hypersensitivity to small molecular weight drugs. Although there are very good methods for detecting drugs that can induce contact sensitization, these have not been successfully adapted for prediction of systemic hypersensitivity. Several factors have made the development of adequate models difficult. The term systemic hypersensitivity encompasses many discrete immunopathologies. Each type of immunopathology presumably is the result of a specific cluster of immunologic and biochemical phenomena. Certainly other factors, such as genetic predisposition, metabolic idiosyncrasies, and concomitant diseases, further complicate the problem. Therefore, it may be difficult to find common mechanisms upon which to construct adequate models to predict specific types of systemic hypersensitivity reactions. There is some reason to hope, however, that adequate methods could be developed for at least identifying drugs that have the potential to produce signs indicative of a general hazard for immune-based reactions.

  3. FACTORS INFLUENCING PREDICTION OF BROMODICHLOROMETHANE (BDCM) IN EXHALED BREATH: FURTHER EVALUATION OF A HUMAN BDCM PBPK MODEL

    EPA Science Inventory

    Confidence in the predictive capability of a PBPK model is increased when the model is demonstrated to predict multiple pharmacokinetic outcomes from diverse studies under different exposure conditions. We previously showed that our multi-route human BDCM PBPK model adequately (w...

  4. Geostatistical Prediction of Microbial Water Quality Throughout a Stream Network Using Meteorology, Land Cover, and Spatiotemporal Autocorrelation.

    PubMed

    Holcomb, David A; Messier, Kyle P; Serre, Marc L; Rowny, Jakob G; Stewart, Jill R

    2018-06-25

    Predictive modeling is promising as an inexpensive tool to assess water quality. We developed geostatistical predictive models of microbial water quality that empirically modeled spatiotemporal autocorrelation in measured fecal coliform (FC) bacteria concentrations to improve prediction. We compared five geostatistical models featuring different autocorrelation structures, fit to 676 observations from 19 locations in North Carolina's Jordan Lake watershed using meteorological and land cover predictor variables. Though stream distance metrics (with and without flow-weighting) failed to improve prediction over the Euclidean distance metric, incorporating temporal autocorrelation substantially improved prediction over the space-only models. We predicted FC throughout the stream network daily for one year, designating locations "impaired", "unimpaired", or "unassessed" if the probability of exceeding the state standard was ≥90%, ≤10%, or >10% but <90%, respectively. We could assign impairment status to more of the stream network on days any FC were measured, suggesting frequent sample-based monitoring remains necessary, though implementing spatiotemporal predictive models may reduce the number of concurrent sampling locations required to adequately assess water quality. Together, these results suggest that prioritizing sampling at different times and conditions using geographically sparse monitoring networks is adequate to build robust and informative geostatistical models of water quality impairment.
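
    The impairment rule above maps each predicted exceedance probability to one of three classes. A minimal sketch of that thresholding step, using made-up probabilities rather than the study's geostatistical output:

    ```python
    # Minimal sketch of the impairment-status rule described above: a location is
    # "impaired" if the modeled probability of exceeding the fecal coliform (FC)
    # standard is >= 0.90, "unimpaired" if <= 0.10, and "unassessed" otherwise.
    # The probabilities below are made-up placeholders, not study output.

    def impairment_status(p_exceed: float) -> str:
        if p_exceed >= 0.90:
            return "impaired"
        if p_exceed <= 0.10:
            return "unimpaired"
        return "unassessed"

    if __name__ == "__main__":
        for p in (0.95, 0.50, 0.03):
            print(f"P(exceed standard) = {p:.2f} -> {impairment_status(p)}")
    ```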

  5. Applicability of empirical data currently used in predicting solid propellant exhaust plumes

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.; Greenwood, T.; Roberts, B. B.

    1977-01-01

    Theoretical and experimental approaches to exhaust plume analysis are compared. A two-phase model is extended to include treatment of reacting gas chemistry, and thermodynamical modeling of the gaseous phase of the flow field is considered. The applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size, and particle size distributions is investigated. Experimental and analytical comparisons are presented for subscale solid rocket motors operating at three altitudes with attention to pitot total pressure and stagnation point heating rate measurements. The mathematical treatment input requirements are explained. The two-phase flow field solution adequately predicts gasdynamic properties in the inviscid portion of two-phase exhaust plumes. It is found that prediction of exhaust plume gas pressures requires an adequate model of flow field dynamics.

  6. Prediction of the dollar to the ruble rate. A system-theoretic approach

    NASA Astrophysics Data System (ADS)

    Borodachev, Sergey M.

    2017-07-01

    A simple state-space model of dollar-rate formation is proposed, based on changes in oil prices and some mechanisms of money transfer between the monetary and stock markets. Predictions from an input-output model and from the state-space model are compared. With proper use of statistical data (Kalman filtering), the second approach provides more adequate predictions of the dollar rate.

  7. An Improved MUSIC Model for Gibbsite Surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Scott C.; Bickmore, Barry R.; Tadanier, Christopher J.

    2004-06-01

    Here we use gibbsite as a model system with which to test a recently published bond-valence method for predicting intrinsic pKa values for surface functional groups on oxides. At issue is whether the method is adequate when valence parameters for the functional groups are derived from ab initio structure optimization of surfaces terminated by vacuum. If not, ab initio molecular dynamics (AIMD) simulations of solvated surfaces (which are much more computationally expensive) will have to be used. To do this, we had to evaluate extant gibbsite potentiometric titration data for which some estimate of edge and basal surface area was available. Applying BET and recently developed atomic force microscopy methods, we found that most of these data sets were flawed, in that their surface area estimates were probably wrong. Similarly, there may have been problems with many of the titration procedures. However, one data set was adequate on both counts, and we applied our method of intrinsic surface pKa prediction to fitting a MUSIC model to these data with considerable success—several features of the titration data were predicted well. However, the model fit was certainly not perfect, and we experienced some difficulties optimizing highly charged, vacuum-terminated surfaces. Therefore, we conclude that we probably need to do AIMD simulations of solvated surfaces to adequately predict intrinsic pKa values for surface functional groups.

  8. Using the Gamma-Poisson Model to Predict Library Circulations.

    ERIC Educational Resources Information Center

    Burrell, Quentin L.

    1990-01-01

    Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
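
    The gamma mixture of Poisson processes mentioned above is equivalent to a negative binomial distribution for circulation counts. A hedged sketch of that equivalence, with invented gamma parameters rather than fitted library data:

    ```python
    # Hedged illustration of the gamma-Poisson circulation model: per-title
    # borrowing rates are gamma-distributed and counts are Poisson given the rate,
    # so marginal counts follow a negative binomial distribution. The shape and
    # scale values are invented for the example, not fitted library data.
    from scipy.stats import nbinom

    shape, scale = 0.8, 2.5           # assumed gamma parameters for borrowing rate
    p = 1.0 / (1.0 + scale)           # corresponding negative binomial probability

    for k in range(4):
        print(f"P(title circulates {k} times next year) = {nbinom.pmf(k, shape, p):.3f}")
    print("expected circulations per title:", nbinom.mean(shape, p))
    ```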

  9. Decision-support models for empiric antibiotic selection in Gram-negative bloodstream infections.

    PubMed

    MacFadden, D R; Coburn, B; Shah, N; Robicsek, A; Savage, R; Elligsen, M; Daneman, N

    2018-04-25

    Early empiric antibiotic therapy in patients can improve clinical outcomes in Gram-negative bacteraemia. However, the widespread prevalence of antibiotic-resistant pathogens compromises our ability to provide adequate therapy while minimizing use of broad antibiotics. We sought to determine whether readily available electronic medical record data could be used to develop predictive models for decision support in Gram-negative bacteraemia. We performed a multi-centre cohort study, in Canada and the USA, of hospitalized patients with Gram-negative bloodstream infection from April 2010 to March 2015. We analysed multivariable models for prediction of antibiotic susceptibility at two empiric windows: Gram-stain-guided and pathogen-guided treatment. Decision-support models for empiric antibiotic selection were developed based on three clinical decision thresholds of acceptable adequate coverage (80%, 90% and 95%). A total of 1832 patients with Gram-negative bacteraemia were evaluated. Multivariable models showed good discrimination across countries and at both Gram-stain-guided (12 models, areas under the curve (AUCs) 0.68-0.89, optimism-corrected AUCs 0.63-0.85) and pathogen-guided (12 models, AUCs 0.75-0.98, optimism-corrected AUCs 0.64-0.95) windows. Compared to antibiogram-guided therapy, decision-support models of antibiotic selection incorporating individual patient characteristics and prior culture results have the potential to increase use of narrower-spectrum antibiotics (in up to 78% of patients) while reducing inadequate therapy. Multivariable models using readily available epidemiologic factors can be used to predict antimicrobial susceptibility in infecting pathogens with reasonable discriminatory ability. Implementation of sequential predictive models for real-time individualized empiric antibiotic decision-making has the potential to both optimize adequate coverage for patients while minimizing overuse of broad-spectrum antibiotics, and therefore requires further prospective evaluation. Readily available epidemiologic risk factors can be used to predict susceptibility of Gram-negative organisms among patients with bacteraemia, using automated decision-making models. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
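
    The abstract reports optimism-corrected AUCs alongside apparent AUCs. A rough sketch of one common way to compute such a correction (bootstrap optimism), here with simulated data and a generic logistic model rather than the study's predictors or software:

    ```python
    # Rough sketch of a bootstrap optimism correction for AUC, the kind of
    # adjustment reported above. Data are simulated and the model is a generic
    # logistic regression; this is not the study's predictors or software.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = (rng.random(500) < 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

    optimism = []
    for _ in range(200):
        idx = rng.integers(0, len(y), len(y))            # bootstrap resample
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)

    print("apparent AUC:", round(apparent_auc, 3))
    print("optimism-corrected AUC:", round(apparent_auc - float(np.mean(optimism)), 3))
    ```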

  10. Predicting fire spread in Arizona's oak chaparral

    Treesearch

    A. W. Lindenmuth; James R. Davis

    1973-01-01

    Five existing fire models, both experimental and theoretical, did not adequately predict rate-of-spread (ROS) when tested on single- and multiclump fires in oak chaparral in Arizona. A statistical model developed using essentially the same input variables but weighted differently accounted for 81 percent of the variation in ROS. A chemical coefficient that accounts for...

  11. Online Bayesian Learning with Natural Sequential Prior Distribution Used for Wind Speed Prediction

    NASA Astrophysics Data System (ADS)

    Cheggaga, Nawal

    2017-11-01

    Predicting wind speed is one of the most important and critical tasks in a wind farm. All approaches that directly describe the stochastic dynamics of the meteorological data face problems related to the nature of its non-Gaussian statistics and the presence of seasonal effects. In this paper, online Bayesian learning has been successfully applied to online learning for three-layer perceptrons used for wind speed prediction. First, a conventional transition model based on the squared norm of the difference between the current parameter vector and the previous parameter vector was used. We noticed that this transition model does not adequately consider the difference between the current and the previous wind speed measurement. To adequately consider this difference, we use a natural sequential prior. The proposed transition model uses a Fisher information matrix to consider the difference between the observation models more naturally. The obtained results showed a good agreement between the measured and predicted series. The mean relative error over the whole data set does not exceed 5%.

  12. Prediction of Fatigue Crack Growth in Rail Steels.

    DOT National Transportation Integrated Search

    1981-10-01

    Measures to prevent derailments due to fatigue failures of rails require adequate knowledge of the rate of propagation of fatigue cracks under service loading. The report presents a computational model for the prediction of crack growth in rails. The...

  13. Adapting the Water Erosion Prediction Project (WEPP) model for forest applications

    Treesearch

    Shuhui Dun; Joan Q. Wu; William J. Elliot; Peter R. Robichaud; Dennis C. Flanagan; James R. Frankenberger; Robert E. Brown; Arthur C. Xu

    2009-01-01

    There has been an increasing public concern over forest stream pollution by excessive sedimentation due to natural or human disturbances. Adequate erosion simulation tools are needed for sound management of forest resources. The Water Erosion Prediction Project (WEPP) watershed model has proved useful in forest applications where Hortonian flow is the major form of...

  14. Predictor characteristics necessary for building a clinically useful risk prediction model: a simulation study.

    PubMed

    Schummers, Laura; Himes, Katherine P; Bodnar, Lisa M; Hutcheon, Jennifer A

    2016-09-21

    Compelled by the intuitive appeal of predicting each individual patient's risk of an outcome, there is a growing interest in risk prediction models. While the statistical methods used to build prediction models are increasingly well understood, the literature offers little insight to researchers seeking to gauge a priori whether a prediction model is likely to perform well for their particular research question. The objective of this study was to inform the development of new risk prediction models by evaluating model performance under a wide range of predictor characteristics. Data from all births to overweight or obese women in British Columbia, Canada from 2004 to 2012 (n = 75,225) were used to build a risk prediction model for preeclampsia. The data were then augmented with simulated predictors of the outcome with pre-set prevalence values and univariable odds ratios. We built 120 risk prediction models that included known demographic and clinical predictors, and one, three, or five of the simulated variables. Finally, we evaluated standard model performance criteria (discrimination, risk stratification capacity, calibration, and Nagelkerke's R²) for each model. Findings from our models built with simulated predictors demonstrated the predictor characteristics required for a risk prediction model to adequately discriminate cases from non-cases and to adequately classify patients into clinically distinct risk groups. Several predictor characteristics can yield well performing risk prediction models; however, these characteristics are not typical of predictor-outcome relationships in many population-based or clinical data sets. Novel predictors must be both strongly associated with the outcome and prevalent in the population to be useful for clinical prediction modeling (e.g., one predictor with prevalence ≥20% and odds ratio ≥8, or 3 predictors with prevalence ≥10% and odds ratios ≥4). Area under the receiver operating characteristic curve values of >0.8 were necessary to achieve reasonable risk stratification capacity. Our findings provide a guide for researchers to estimate the expected performance of a prediction model before a model has been built based on the characteristics of available predictors.
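
    The simulation idea, adding a synthetic predictor with a preset prevalence and univariable odds ratio and then checking discrimination, can be sketched as follows; the baseline risk, prevalence, and odds ratio below are illustrative values, not those from the preeclampsia cohort:

    ```python
    # Sketch of the simulation idea described above: generate a binary predictor
    # with a preset prevalence and univariable odds ratio, simulate the outcome,
    # and check discrimination (AUC). Baseline risk, prevalence, and OR are
    # illustrative values, not those used for the preeclampsia cohort.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n, prevalence, odds_ratio, baseline_risk = 50_000, 0.20, 8.0, 0.05

    x = (rng.random(n) < prevalence).astype(int)          # simulated predictor
    base_odds = baseline_risk / (1 - baseline_risk)
    odds = base_odds * np.where(x == 1, odds_ratio, 1.0)
    y = (rng.random(n) < odds / (1 + odds)).astype(int)   # simulated outcome

    print(f"outcome rate: {y.mean():.3f}")
    print(f"AUC of the single simulated predictor: {roc_auc_score(y, x):.3f}")
    ```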

  15. Assessment of the Ability of Contemporary Climate Models to Assess Adequately the Risk of Possible Regional Anomalies and Trends

    NASA Astrophysics Data System (ADS)

    Mokhov, I. I.

    2018-04-01

    The results describing the ability of contemporary global and regional climate models not only to assess the risk of general trends of changes but also to predict qualitatively new regional effects are presented. In particular, model simulations predicted spatially inhomogeneous changes in the wind and wave conditions in the Arctic basins, which have been confirmed in recent years. According to satellite and reanalysis data, a qualitative transition to the regime predicted by model simulations occurred about a decade ago.

  16. Application of an Integrated HPC Reliability Prediction Framework to HMMWV Suspension System

    DTIC Science & Technology

    2010-09-13

    model number M966 (TOW Missile Carrier, Basic Armor without weapons), since they were available. Tires used for all simulations were the bias-type...vehicle fleet, including consideration of all kinds of uncertainty, especially including model uncertainty. The end result will be a tool to use...building an adequate vehicle reliability prediction framework for military vehicles is the accurate modeling of the integration of various types of

  17. Transitioning from adequate to inadequate sleep duration associated with higher smoking rate and greater nicotine dependence in a population sample

    PubMed Central

    Patterson, Freda; Grandner, Michael A.; Lozano, Alicia; Satti, Aditi; Ma, Grace

    2017-01-01

    Introduction Inadequate sleep (≤6 and ≥9 h) is more prevalent in smokers than non-smokers, but the extent to which sleep duration in smokers relates to smoking behaviors and cessation outcomes is not yet clear. To begin to address this knowledge gap, we investigated the extent to which sleep duration predicted smoking behaviors and quitting intention in a population sample. Methods Data from current smokers who completed the baseline (N = 635) and 5-year follow-up (N = 477) assessment in the United Kingdom Biobank cohort study were analyzed. Multivariable regression models using smoking behavior outcomes (cigarettes per day, time to first cigarette, difficulty not smoking for a day, quitting intention) and sleep duration (adequate (7–8 h) versus inadequate (≤6 and ≥9 h)) as the predictor were generated. All models adjusted for age, sex, race, and education. Results Worsening sleep duration (adequate to inadequate) predicted a more than three-fold higher odds of increased cigarettes per day (OR = 3.18; 95% CI = 1.25–8.06), a more than three-fold increased odds of not smoking for the day remaining difficult (OR = 3.90; 95% CI = 1.27–12.01), and a more than 8-fold increased odds of higher nicotine dependence (OR = 8.98; 95% CI = 2.81–28.66). Improving sleep duration (i.e., inadequate to adequate sleep) did not predict reduced cigarette consumption or nicotine dependence in this population sample. Conclusion Transitioning from adequate to inadequate sleep duration may be a risk factor for developing a more “hard-core” smoking profile. The extent to which achieving healthy sleep may promote, or optimize, smoking cessation treatment response warrants investigation. PMID:28950118

  18. Near-wall k-epsilon turbulence modeling

    NASA Technical Reports Server (NTRS)

    Mansour, N. N.; Kim, J.; Moin, P.

    1987-01-01

    The flow fields from a turbulent channel simulation are used to compute the budgets for the turbulent kinetic energy (k) and its dissipation rate (epsilon). Data from boundary layer simulations are used to analyze the dependence of the eddy-viscosity damping function on the Reynolds number and the distance from the wall. The computed budgets are used to test existing near-wall turbulence models of the k-epsilon type. It was found that the turbulent transport models should be modified in the vicinity of the wall. It was also found that existing models for the different terms in the epsilon-budget are adequate in the region away from the wall, but need modification near the wall. The channel flow is computed using a k-epsilon model with an eddy-viscosity damping function from the data and no damping functions in the epsilon-equation. These computations show that the k-profile can be adequately predicted, but to correctly predict the epsilon-profile, damping functions in the epsilon-equation are needed.
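
    For orientation, the eddy-viscosity damping referred to above enters the k-epsilon model as nu_t = C_mu * f_mu * k^2 / epsilon. The sketch below uses a generic van Driest-style f_mu for illustration; it is not the specific damping function derived from the simulation budgets:

    ```python
    # Illustrative damped eddy viscosity for a near-wall k-epsilon model:
    # nu_t = C_mu * f_mu * k**2 / eps, with a generic van Driest-style f_mu
    # that decays toward the wall. This is a textbook form for orientation,
    # not the damping function extracted from the simulation data above.
    import numpy as np

    C_MU, A_PLUS = 0.09, 26.0

    def eddy_viscosity(k, eps, y_plus):
        """Damped eddy viscosity from k, epsilon, and wall distance y+."""
        f_mu = (1.0 - np.exp(-y_plus / A_PLUS)) ** 2
        return C_MU * f_mu * k**2 / eps

    print(eddy_viscosity(k=0.02, eps=0.004, y_plus=np.array([1.0, 30.0, 100.0])))
    ```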

  19. A Predictive Statistical Model of Navy Career Enlisted Retention Behavior Utilizing Economic Variables.

    DTIC Science & Technology

    1980-12-01

    career retention rates, and to predict future career retention rates in the Navy. The statistical model utilizes economic variables as predictors...The model developed has a high correlation with Navy career retention rates. The problem of Navy career retention has not been adequately studied...findings indicate Navy policymakers must be cognizant of the relationships of economic factors to Navy career retention rates.

  20. Population pharmacokinetic-pharmacodynamic modeling and model-based prediction of docetaxel-induced neutropenia in Japanese patients with non-small cell lung cancer.

    PubMed

    Fukae, Masato; Shiraishi, Yoshimasa; Hirota, Takeshi; Sasaki, Yuka; Yamahashi, Mika; Takayama, Koichi; Nakanishi, Yoichi; Ieiri, Ichiro

    2016-11-01

    Docetaxel is used to treat many cancers, and neutropenia is the dose-limiting factor for its clinical use. A population pharmacokinetic-pharmacodynamic (PK-PD) model was introduced to predict the development of docetaxel-induced neutropenia in Japanese patients with non-small cell lung cancer (NSCLC). Forty-seven advanced or recurrent Japanese patients with NSCLC were enrolled. Patients received 50 or 60 mg/m² docetaxel as monotherapy, and blood samples for a PK analysis were collected up to 24 h after its infusion. Laboratory tests including absolute neutrophil count data and demographic information were used in population PK-PD modeling. The model was built by NONMEM 7.2 with a first-order conditional estimation using an interaction method. Based on the final model, a Monte Carlo simulation was performed to assess the impact of covariates on and the predictability of neutropenia. A three-compartment model was employed to describe PK data, and the PK model adequately described the docetaxel concentrations observed. Serum albumin (ALB) was detected as a covariate of clearance (CL): CL (L/h) = 32.5 × (ALB/3.6)^0.965 × (WGHT/70)^(3/4). In population PK-PD modeling, a modified semi-mechanistic myelosuppression model was applied, and characterization of the time course of neutrophil counts was adequate. The covariate selection indicated that α1-acid glycoprotein (AAG) was a predictor of neutropenia. The model-based simulation also showed that ALB and AAG negatively correlated with the development of neutropenia and that the time course of neutrophil counts was predictable. The developed model may facilitate the prediction and care of docetaxel-induced neutropenia.
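
    The clearance covariate relationship quoted above can be written directly as a small helper; the albumin unit (g/dL) and the example patient values are assumptions for illustration:

    ```python
    # The covariate model for clearance quoted above, written as a helper:
    # CL (L/h) = 32.5 * (ALB/3.6)**0.965 * (WGHT/70)**(3/4).
    # The albumin unit (g/dL) and the example patient values are assumptions.

    def docetaxel_clearance(albumin_g_dl: float, weight_kg: float) -> float:
        """Population clearance (L/h) from serum albumin and body weight."""
        return 32.5 * (albumin_g_dl / 3.6) ** 0.965 * (weight_kg / 70.0) ** 0.75

    print(f"{docetaxel_clearance(albumin_g_dl=3.2, weight_kg=60.0):.1f} L/h")
    ```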

  1. Stage-structured matrix models for organisms with non-geometric development times

    Treesearch

    Andrew Birt; Richard M. Feldman; David M. Cairns; Robert N. Coulson; Maria Tchakerian; Weimin Xi; James M. Guldin

    2009-01-01

    Matrix models have been used to model population growth of organisms for many decades. They are popular because of both their conceptual simplicity and their computational efficiency. For some types of organisms they are relatively accurate in predicting population growth; however, for others the matrix approach does not adequately model...

  2. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    USDA-ARS?s Scientific Manuscript database

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  3. Validation of behave fire behavior predictions in oak savannas

    USGS Publications Warehouse

    Grabner, Keith W.; Dwyer, John; Cutter, Bruce E.

    1997-01-01

    Prescribed fire is a valuable tool in the restoration and management of oak savannas. BEHAVE, a fire behavior prediction system developed by the United States Forest Service, can be a useful tool when managing oak savannas with prescribed fire. BEHAVE predictions of fire rate-of-spread and flame length were validated using four standardized fuel models: Fuel Model 1 (short grass), Fuel Model 2 (timber and grass), Fuel Model 3 (tall grass), and Fuel Model 9 (hardwood litter). Also, a customized oak savanna fuel model (COSFM) was created and validated. Results indicate that standardized fuel model 2 and the COSFM reliably estimate mean rate-of-spread (MROS). The COSFM did not appreciably reduce MROS variation when compared to fuel model 2. Fuel models 1, 3, and 9 did not reliably predict MROS. Neither the standardized fuel models nor the COSFM adequately predicted flame lengths. We concluded that standardized fuel model 2 should be used with BEHAVE when predicting fire rates-of-spread in established oak savannas.

  4. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    PubMed

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs in combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex model. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by Observer Assessment of Alertness/Sedation scores. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with objective function value, corrected Akaike Information Criterion (AICc), and Spearman ranked correlation. A model was arbitrarily defined as accurate if the predicted probability differed by <0.5 from the observed response. The effect-site concentrations tested ranged from 1 to 76 ng/mL and from 5 to 80 ng/mL for midazolam and alfentanil, respectively. Midazolam and alfentanil had synergistic effects in colonoscopy and EGD, but additivity was observed in the intersession group. Adequate prediction rates were 84% to 85% in the intersession group, 84% to 88% during colonoscopy, and 82% to 87% during EGD. The reduced Greco model and the Hierarchy model with a fixed alfentanil C50 (the alfentanil concentration required for 50% of patients to achieve the targeted response) performed better, with comparable predictive strength. The reduced Greco model had the lowest AICc with strong correlation in all 3 phases of endoscopy. Dynamic, rather than fixed, γ and γalf in the Hierarchy model improved model fit. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures with varying degrees of stimulation.

  5. Suitability of different comfort indices for the prediction of thermal conditions in tree-covered outdoor spaces in arid cities

    NASA Astrophysics Data System (ADS)

    Ruiz, María Angélica; Correa, Erica Norma

    2015-10-01

    Outdoor thermal comfort is one of the most influential factors in the habitability of a space. Thermal level is defined not only by climate variables but also by the adaptation of people to the environment. This study presents a comparison between inductive and deductive thermal comfort models, contrasted with subjective reports, in order to identify which of the models most correctly predicts thermal comfort in tree-covered outdoor spaces of the Mendoza Metropolitan Area, an intensely forested and open city located in an arid zone. Interviews and microclimatic measurements were carried out in winter 2010 and in summer 2011. Six widely used indices were selected according to different levels of complexity: the Temperature-Humidity Index (THI), Vinje's Comfort Index (PE), the Thermal Sensation Index (TS), the Predicted Mean Vote (PMV), the COMFA model's energy balance (S), and the Physiological Equivalent Temperature (PET). The results show that the predictive models evaluated achieve predictive ability below 25%. Despite this low indicator, inductive methods are adequate for obtaining a diagnosis of the degree and frequency in which a space is comfortable or not, whereas deductive methods are recommended to influence urban design strategies. In addition, it is necessary to develop local models to evaluate perceived thermal comfort more adequately. This type of tool is very useful in the design and evaluation of the thermal conditions in outdoor spaces, based not only on climatic criteria but also on subjective sensations.

  6. Abiotic/biotic coupling in the rhizosphere: a reactive transport modeling analysis

    USGS Publications Warehouse

    Lawrence, Corey R.; Steefel, Carl; Maher, Kate

    2014-01-01

    A new generation of models is needed to adequately simulate patterns of soil biogeochemical cycling in response to changing global environmental drivers. For example, predicting the influence of climate change on soil organic matter storage and stability requires models capable of addressing complex biotic/abiotic interactions of rhizosphere and weathering processes. Reactive transport modeling provides a powerful framework for simulating these interactions and the resulting influence on soil physical and chemical characteristics. Incorporation of organic reactions in an existing reactive transport model framework has yielded novel insights into soil weathering and development, but much more work is required to adequately capture root and microbial dynamics in the rhizosphere. This endeavor provides many advantages over traditional soil biogeochemical models but also many challenges.

  7. Developing an Adequately Specified Model of State Level Student Achievement with Multilevel Data.

    ERIC Educational Resources Information Center

    Bernstein, Lawrence

    Limitations of using linear, unilevel regression procedures in modeling student achievement are discussed. This study is a part of a broader study that is developing an empirically-based predictive model of variables associated with academic achievement from a multilevel perspective and examining the differences by which parameters are estimated…

  8. Evaluation of Lightning Induced Effects in a Graphite Composite Fairing Structure. Parts 1 and 2

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.

    2011-01-01

    Defining the electromagnetic environment inside a graphite composite fairing due to lightning is of interest to spacecraft developers. This paper is the first in a two-part series and studies the shielding effectiveness of a graphite composite model fairing using derived equivalent properties. A frequency domain Method of Moments (MoM) model is developed and comparisons are made with shielding test results obtained using a vehicle-like composite fairing. The comparison results show that the analytical models can adequately predict the test results. Both measured and model data indicate that graphite composite fairings provide significant attenuation to magnetic fields as frequency increases. Diffusion effects are also discussed. Part 2 examines the time-domain effects through the development of loop-based induced-field testing, and a Transmission-Line-Matrix (TLM) model is developed in the time domain to study how the composite fairing affects lightning-induced magnetic fields. Comparisons are made with shielding test results obtained using a vehicle-like composite fairing in the time domain. The comparison results show that the analytical models can adequately predict the test and industry results.

  9. The Implementation and Evaluation of the Emergency Response Dose Assessment System (ERDAS) at Cape Canaveral Air Station/Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Evans, Randolph J.; Tremback, Craig J.; Lyons, Walter A.

    1996-01-01

    The Emergency Response Dose Assessment System (ERDAS) is a system that combines the mesoscale meteorological prediction model RAMS with the diffusion models REEDM and HYPACT. Operators use a graphical user interface to run the models for emergency response and toxic hazard planning at CCAS/KSC. The Applied Meteorology Unit has been evaluating the ERDAS meteorological and diffusion models and obtained the following results: (1) RAMS adequately predicts the occurrence of the daily sea breeze during non-cloudy conditions for several cases. (2) RAMS shows a tendency to predict the sea breeze to occur slightly earlier and to move it further inland than observed. The sea breeze predictions could most likely be improved by better parameterizing the soil moisture and/or sea surface temperatures. (3) The HYPACT/REEDM/RAMS models accurately predict launch plume locations when RAMS winds are accurate and when the correct plume layer is modeled. (4) HYPACT does not adequately handle plume buoyancy for heated plumes since all plumes are presently treated as passive tracers. Enhancements should be incorporated into the ERDAS as it moves toward being a fully operational system and as computer workstations continue to increase in power and decrease in cost. These enhancements include the following: activate RAMS moisture physics; use finer RAMS grid resolution; add RAMS input parameters (e.g. soil moisture, radar, and/or satellite data); automate data quality control; implement four-dimensional data assimilation; modify HYPACT plume rise and deposition physics; and add cumulative dosage calculations in HYPACT.

  10. Predicting Drug Concentration‐Time Profiles in Multiple CNS Compartments Using a Comprehensive Physiologically‐Based Pharmacokinetic Model

    PubMed Central

    Yamamoto, Yumi; Välitalo, Pyry A.; Huntjens, Dymphy R.; Proost, Johannes H.; Vermeulen, An; Krauwinkel, Walter; Beukers, Margot W.; van den Berg, Dirk‐Jan; Hartman, Robin; Wong, Yin Cheong; Danhof, Meindert; van Hasselt, John G. C.

    2017-01-01

    Drug development targeting the central nervous system (CNS) is challenging due to poor predictability of drug concentrations in various CNS compartments. We developed a generic physiologically based pharmacokinetic (PBPK) model for prediction of drug concentrations in physiologically relevant CNS compartments. System‐specific and drug‐specific model parameters were derived from literature and in silico predictions. The model was validated using detailed concentration‐time profiles from 10 drugs in rat plasma, brain extracellular fluid, 2 cerebrospinal fluid sites, and total brain tissue. These drugs, all small molecules, were selected to cover a wide range of physicochemical properties. The concentration‐time profiles for these drugs were adequately predicted across the CNS compartments (symmetric mean absolute percentage error for the model prediction was <91%). In conclusion, the developed PBPK model can be used to predict temporal concentration profiles of drugs in multiple relevant CNS compartments, which we consider valuable information for efficient CNS drug development. PMID:28891201

  11. Modeling tree-level fuel connectivity to evaluate the effectiveness of thinning treatments for reducing crown fire potential

    Treesearch

    Marco A. Contreras; Russell A. Parsons; Woodam Chung

    2012-01-01

    Land managers have been using fire behavior and simulation models to assist in several fire management tasks. These widely-used models use average attributes to make stand-level predictions without considering spatial variability of fuels within a stand. Consequently, as the existing models have limitations in adequately modeling crown fire initiation and propagation,...

  12. Prediction of Drug-Drug Interactions with Crizotinib as the CYP3A Substrate Using a Physiologically Based Pharmacokinetic Model.

    PubMed

    Yamazaki, Shinji; Johnson, Theodore R; Smith, Bill J

    2015-10-01

    An orally available multiple tyrosine kinase inhibitor, crizotinib (Xalkori), is a CYP3A substrate, moderate time-dependent inhibitor, and weak inducer. The main objectives of the present study were to: 1) develop and refine a physiologically based pharmacokinetic (PBPK) model of crizotinib on the basis of clinical single- and multiple-dose results, 2) verify the crizotinib PBPK model from crizotinib single-dose drug-drug interaction (DDI) results with multiple-dose coadministration of ketoconazole or rifampin, and 3) apply the crizotinib PBPK model to predict crizotinib multiple-dose DDI outcomes. We also focused on gaining insights into the underlying mechanisms mediating crizotinib DDIs using a dynamic PBPK model, the Simcyp population-based simulator. First, PBPK model-predicted crizotinib exposures adequately matched clinically observed results in the single- and multiple-dose studies. Second, the model-predicted crizotinib exposures sufficiently matched clinically observed results in the crizotinib single-dose DDI studies with ketoconazole or rifampin, resulting in the reasonably predicted fold-increases in crizotinib exposures. Finally, the predicted fold-increases in crizotinib exposures in the multiple-dose DDI studies were roughly comparable to those in the single-dose DDI studies, suggesting that the effects of crizotinib CYP3A time-dependent inhibition (net inhibition) on the multiple-dose DDI outcomes would be negligible. Therefore, crizotinib dose-adjustment in the multiple-dose DDI studies could be made on the basis of currently available single-dose results. Overall, we believe that the crizotinib PBPK model developed, refined, and verified in the present study would adequately predict crizotinib oral exposures in other clinical studies, such as DDIs with weak/moderate CYP3A inhibitors/inducers and drug-disease interactions in patients with hepatic or renal impairment. Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.

  13. A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks

    NASA Astrophysics Data System (ADS)

    Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan

    2018-01-01

    Breakdown occurrence on expressways is considered to be related to various factors. Therefore, to investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on the breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 parameters before breakdowns are extracted, including dynamic environment conditions aggregated over 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdown. Results indicate that the models using data from 5-10 min prior to breakdown perform best, with prediction accuracies higher than 73%. Moreover, one unified model for all bottlenecks is also built and shows reasonably good prediction performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model parameter input, the random forests (RF) model is adopted to identify the key variables. Modeling with the selected 7 parameters, the refined BN model can predict breakdown with adequate accuracy.
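
    The variable-screening step described above (random forest importance ranking to pick 7 key inputs) can be sketched roughly as follows, with placeholder features and labels rather than the Shanghai expressway data:

    ```python
    # Sketch of the variable-screening step described above: rank candidate
    # pre-breakdown features with a random forest and keep the top 7 as inputs
    # to a simpler model. Features and labels are placeholders, not the
    # Shanghai expressway dataset.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)
    n_obs, n_features = 1000, 23
    X = rng.normal(size=(n_obs, n_features))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=n_obs) > 0.8).astype(int)

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    top7 = np.argsort(rf.feature_importances_)[::-1][:7]
    print("indices of the 7 most informative features:", top7)
    ```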

  14. Predictive model for the growth kinetics of Staphylococcus aureus in raw pork developed using Integrated Pathogen Modeling Program (IPMP) 2013.

    PubMed

    Lee, Yong Ju; Jung, Byeong Su; Kim, Kee-Tae; Paik, Hyun-Dong

    2015-09-01

    A predictive model was developed to describe the growth of Staphylococcus aureus in raw pork by using the Integrated Pathogen Modeling Program 2013, with a polynomial model as a secondary predictive model. S. aureus requires approximately 180 h to reach 5-6 log CFU/g at 10 °C. At 15 °C and 25 °C, approximately 48 and 20 h, respectively, are required to cause food poisoning. Predictions from the Gompertz model were the most accurate in this study. For the lag time (LT) model, bias factor (Bf) and accuracy factor (Af) values were both 1.014, showing that the predictions were within a reliable range. For the specific growth rate (SGR) model, Bf and Af were 1.188 and 1.190, respectively. Additionally, both Bf and Af values of the LT and SGR models were close to 1, indicating that the IPMP Gompertz model is more adequate for predicting the growth of S. aureus on raw pork than other models. Copyright © 2015 Elsevier Ltd. All rights reserved.
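
    A minimal sketch of a Gompertz growth curve together with the bias (Bf) and accuracy (Af) factors used above to judge model performance; the curve parameters and "observed" counts are invented for illustration, not the raw pork measurements:

    ```python
    # Minimal sketch of a Gompertz growth curve plus the bias (Bf) and accuracy
    # (Af) factors used above to judge the model. Curve parameters and the
    # "observed" counts are invented for illustration, not the raw pork data.
    import numpy as np

    def gompertz_log_count(t, a=2.0, c=4.0, b=0.05, m=60.0):
        """log10 CFU/g at time t (h): A + C * exp(-exp(-B * (t - M)))."""
        return a + c * np.exp(-np.exp(-b * (t - m)))

    t = np.array([0.0, 24.0, 48.0, 96.0, 180.0])
    predicted = gompertz_log_count(t)
    observed = predicted + np.array([0.10, -0.20, 0.15, -0.05, 0.10])  # fake data

    log_ratio = predicted - observed                # log10(predicted / observed)
    print(f"Bf = {10 ** log_ratio.mean():.3f}, Af = {10 ** np.abs(log_ratio).mean():.3f}")
    ```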

  15. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA s next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  16. What Matters from Admissions? Identifying Success and Risk Among Canadian Dental Students.

    PubMed

    Plouffe, Rachel A; Hammond, Robert; Goldberg, Harvey A; Chahine, Saad

    2018-05-01

    The aims of this study were to determine whether different student profiles would emerge in terms of high and low GPA performance in each year of dental school and to investigate the utility of preadmissions variables in predicting performance and performance stability throughout each year of dental school. Data from 11 graduating cohorts (2004-14) at the Schulich School of Medicine & Dentistry, University of Western Ontario, Canada, were collected and analyzed using bivariate correlations, latent profile analysis, and hierarchical generalized linear models (HGLMs). The data analyzed were for 616 students in total (332 males and 284 females). Four models were developed to predict adequate and poor performance throughout each of four dental school years. An additional model was developed to predict student performance stability across time. Two separate student profiles reflecting high and low GPA performance across each year of dental school were identified, and scores on cognitive preadmissions variables differentially predicted the probability of grouping into high and low performance profiles. Students with higher pre-dental GPAs and DAT chemistry were most likely to remain stable in a high-performance group across each year of dental school. Overall, the findings suggest that selection committees should consider pre-dental GPA and DAT chemistry scores as important tools for predicting dental school performance and stability across time. This research is important in determining how to better predict success and failure in various areas of preclinical dentistry courses and to provide low-performing students with adequate academic assistance.

  17. Predicting the digestible energy of corn determined with growing swine from nutrient composition and cross-species measurements.

    PubMed

    Smith, B; Hassen, A; Hinds, M; Rice, D; Jones, D; Sauber, T; Iiams, C; Sevenich, D; Allen, R; Owens, F; McNaughton, J; Parsons, C

    2015-03-01

    The DE values of corn grain for pigs will differ among corn sources. More accurate prediction of DE may improve diet formulation and reduce diet cost. Corn grain sources (n = 83) were assayed with growing swine (20 kg) in DE experiments with total collection of feces, with 3-wk-old broiler chicks in nitrogen-corrected apparent ME (AME) trials, and with cecectomized adult roosters in nitrogen-corrected true ME (TME) studies. Additional AME data for the corn grain source set were generated based on an existing near-infrared transmittance prediction model (near-infrared transmittance-predicted AME [NIT-AME]). Corn source nutrient composition was determined by wet chemistry methods. These data were then used to 1) test the accuracy of predicting swine DE of individual corn sources based on available literature equations and nutrient composition and 2) develop models for predicting DE of sources from nutrient composition and the cross-species information gathered above (AME, NIT-AME, and TME). The overall measured DE, AME, NIT-AME, and TME values were 4,105 ± 11, 4,006 ± 10, 4,004 ± 10, and 4,086 ± 12 kcal/kg DM, respectively. Prediction models were developed using 80% of the corn grain sources; the remaining 20% was reserved for validation of the developed prediction equation. Literature equations based on nutrient composition proved imprecise for predicting corn DE; the root mean square error of prediction ranged from 105 to 331 kcal/kg, equivalent to 2.6 to 8.8% error. Yet among the corn composition traits, 4-variable models developed in the current study provided adequate prediction of DE (model R² ranging from 0.76 to 0.79 and root mean square error [RMSE] of 50 kcal/kg). When prediction equations were tested using the validation set, these models had a 1 to 1.2% error of prediction. Simple linear equations from AME, NIT-AME, or TME provided an accurate prediction of DE for individual sources (R² ranged from 0.65 to 0.73 and RMSE ranged from 50 to 61 kcal/kg). Percentage error of prediction based on the validation data set was greater (1.4%) for the TME model than for the NIT-AME or AME models (1 and 1.2%, respectively), indicating that swine DE values could be accurately predicted by using AME or NIT-AME. In conclusion, regression equations developed from broiler measurements or from analyzed nutrient composition proved adequate to reliably predict the DE of commercially available corn hybrids for growing pigs.
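
    The validation scheme described above (fit on roughly 80% of sources, report RMSE and percent error on the held-out 20%) can be sketched as follows, with simulated composition traits and DE values rather than the actual corn data:

    ```python
    # Hedged sketch of the validation approach described above: fit a linear model
    # on ~80% of sources and report RMSE and percent error on the held-out ~20%.
    # Composition traits and DE values are simulated, not the corn dataset.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 83
    X = rng.normal(size=(n, 4))                     # stand-in for 4 composition traits
    de = 4105 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(scale=30, size=n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, de, test_size=0.2, random_state=0)
    pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)

    rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
    pct_error = float(np.mean(np.abs(pred - y_te) / y_te) * 100)
    print(f"validation RMSE = {rmse:.0f} kcal/kg DM, mean error = {pct_error:.1f}%")
    ```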

  18. Application of time series analysis in modelling and forecasting emergency department visits in a medical centre in Southern Taiwan

    PubMed Central

    Juang, Wang-Chuan; Huang, Sin-Jhih; Huang, Fong-Dee; Cheng, Pei-Wen; Wann, Shue-Ren

    2017-01-01

    Objective Emergency department (ED) overcrowding is acknowledged as an increasingly important issue worldwide. Hospital managers are increasingly paying attention to ED crowding in order to provide higher quality medical services to patients. One of the crucial elements of a good management strategy is demand forecasting. Our study sought to construct an adequate model and to forecast monthly ED visits. Methods We retrospectively gathered monthly ED visits from January 2009 to December 2016 to carry out a time series autoregressive integrated moving average (ARIMA) analysis. Initial development of the model was based on past ED visits from 2009 to 2016. A best-fit model was further employed to forecast the monthly data of ED visits for the next year (2016). Finally, we evaluated the predictive accuracy of the identified model with the mean absolute percentage error (MAPE). The software packages SAS/ETS V.9.4 and Office Excel 2016 were used for all statistical analyses. Results A series of statistical tests showed that six models, including ARIMA (0, 0, 1), ARIMA (1, 0, 0), ARIMA (1, 0, 1), ARIMA (2, 0, 1), ARIMA (3, 0, 1) and ARIMA (5, 0, 1), were candidate models. The model that gave the minimum Akaike information criterion and Schwartz Bayesian criterion and followed the assumptions of residual independence was selected as the adequate model. Finally, a suitable ARIMA (0, 0, 1) structure, yielding a MAPE of 8.91%, was identified and obtained as Visit_t = 7111.161 + (a_t + 0.37462 × a_(t−1)). Conclusion The ARIMA (0, 0, 1) model can be considered adequate for predicting future ED visits, and its forecast results can be used to aid decision-making processes. PMID:29196487
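
    The identified ARIMA (0, 0, 1) structure is a constant mean plus an MA(1) error term, so the one-step-ahead forecast is mu + theta × (last residual). The sketch below reuses the reported mu and theta; the residuals and "observed" visit counts are made up:

    ```python
    # Sketch of the identified ARIMA (0, 0, 1) structure: a constant mean plus an
    # MA(1) error, Visit_t = mu + a_t + theta * a_(t-1), so the one-step forecast
    # is mu + theta * (last residual). mu and theta reuse the reported values;
    # the residuals and "observed" monthly visit counts are made up.
    mu, theta = 7111.161, 0.37462

    def one_step_forecast(last_residual: float) -> float:
        return mu + theta * last_residual

    def mape(observed, forecast):
        return 100 * sum(abs(o - f) / o for o, f in zip(observed, forecast)) / len(observed)

    observed = [7300, 6950, 7180]                        # hypothetical monthly visits
    forecasts = [one_step_forecast(r) for r in (150.0, -80.0, 40.0)]
    print("forecasts:", [round(f) for f in forecasts])
    print(f"MAPE = {mape(observed, forecasts):.2f}%")
    ```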

  19. Effects of model spatial resolution on ecohydrologic predictions and their sensitivity to inter-annual climate variability

    Treesearch

    Kyongho Son; Christina Tague; Carolyn Hunsaker

    2016-01-01

    The effect of fine-scale topographic variability on model estimates of ecohydrologic responses to climate variability in California’s Sierra Nevada watersheds has not been adequately quantified and may be important for supporting reliable climate-impact assessments. This study tested the effect of digital elevation model (DEM) resolution on model accuracy and estimates...

  20. A Multilevel Latent Growth Curve Approach to Predicting Student Proficiency

    ERIC Educational Resources Information Center

    Choi, Kilchan; Goldschmidt, Pete

    2012-01-01

    Value-added models and growth-based accountability aim to evaluate school's performance based on student growth in learning. The current focus is on linking the results from value-added models to the ones from growth-based accountability systems including Adequate Yearly Progress decisions mandated by No Child Left Behind. We present a new…

  1. From soilscapes to landscapes: A landscape-oriented approach to simulate soil organic carbon dynamics in intensively managed landscapes

    USDA-ARS?s Scientific Manuscript database

    Most available biogeochemical models focus within a soil profile and cannot adequately resolve contributions of the lighter size fractions of organic rich soils for Enrichment Ratio (ER) estimates, thereby causing unintended errors in Soil Organic Carbon (SOC) storage predictions. These models set E...

  2. Integrating fAPARchl and PRInadir from EO-1/Hyperion to predict cornfield daily gross primary production (GPP)

    USDA-ARS?s Scientific Manuscript database

    Accurate estimates of terrestrial carbon sequestration are essential for evaluating changes in the carbon cycle due to global climate change. In a recent assessment of 26 carbon assimilation models at 39 FLUXNET tower sites across the United States and Canada, all models failed to adequately compute...

  3. Evaluating critical uncertainty thresholds in a spatial model of forest pest invasion risk

    Treesearch

    Frank H. Koch; Denys Yemshanov; Daniel W. McKenney; William D. Smith

    2009-01-01

    Pest risk maps can provide useful decision support in invasive species management, but most do not adequately consider the uncertainty associated with predicted risk values. This study explores how increased uncertainty in a risk model’s numeric assumptions might affect the resultant risk map. We used a spatial stochastic model, integrating components for...

  4. Study of improved modeling and solution procedures for nonlinear analysis. [aircraft-like structures

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1979-01-01

    An evaluation of the ACTION computer code on an aircraft-like structure is presented. This computer program proved adequate in predicting gross response parameters in structures that undergo severe localized cross-sectional deformations.

  5. Lower-extremity musculoskeletal geometry affects the calculation of patellofemoral forces in vertical jumping and weightlifting.

    PubMed

    Cleather, D I; Bull, A M J

    2010-01-01

    The calculation of the patellofemoral joint contact force using three-dimensional (3D) modelling techniques requires a description of the musculoskeletal geometry of the lower limb. In this study, the influence of the complexity of the muscle model was studied by considering two different muscle models, the Delp and Horsman models. Both models were used to calculate the patellofemoral force during standing, vertical jumping, and Olympic-style weightlifting. The patellofemoral forces predicted by the Horsman model were markedly lower than those predicted by the Delp model in all activities and represented more realistic values when compared with previous work. This was found to be a result of a lower level of redundancy in the Delp model, which forced a higher level of muscular activation in order to allow a viable solution. The higher level of complexity in the Horsman model resulted in a greater degree of redundancy and consequently lower activation and patellofemoral forces. The results of this work demonstrate that a well-posed muscle model must have an adequate degree of complexity to create a sufficient independence, variability, and number of moment arms in order to ensure adequate redundancy of the force-sharing problem such that muscle forces are not overstated.

  6. Modelling blast induced damage from a fully coupled explosive charge

    PubMed Central

    Onederra, Italo A.; Furtney, Jason K.; Sellers, Ewan; Iverson, Stephen

    2015-01-01

    This paper presents one of the latest developments in the blasting engineering modelling field—the Hybrid Stress Blasting Model (HSBM). HSBM includes a rock breakage engine to model detonation, wave propagation, rock fragmentation, and muck pile formation. Results from two controlled blasting experiments were used to evaluate the code’s ability to predict the extent of damage. Results indicate that the code is capable of adequately predicting both the extent and shape of the damage zone associated with the influence of point-of-initiation and free-face boundary conditions. Radial fractures extending towards a free face are apparent in the modelling output and matched those mapped after the experiment. In the stage 2 validation experiment, the maximum extent of visible damage was of the order of 1.45 m for the fully coupled 38-mm emulsion charge. Peak radial velocities were predicted within a relative difference of only 1.59% at the nearest history point at 0.3 m from the explosive charge. Discrepancies were larger further away from the charge, with relative differences of −22.4% and −42.9% at distances of 0.46 m and 0.61 m, respectively, meaning that the model overestimated particle velocities at these distances. This attenuation deficiency in the modelling produced an overestimation of the damage zone at the corner of the block due to excessive stress reflections. The extent of visible damage in the immediate vicinity of the blasthole adequately matched the measurements. PMID:26412978

  7. Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis

    NASA Technical Reports Server (NTRS)

    Mcanelly, W. B.; Young, C. T. K.

    1973-01-01

    Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.

  8. Ground-water models for water resources planning

    USGS Publications Warehouse

    Moore, John E.

    1980-01-01

    In the past decade hydrologists have emphasized the development of computer-based mathematical models to aid in the understanding of flow, the transport of solutes, transport of heat, and deformation in the groundwater system. These models have been used to provide information and predictions for water managers. Too frequently, groundwater was neglected in water-resource planning because managers believed that it could not be adequately evaluated in terms of availability, quality, and effect of development on surface water supplies. Now, however, with newly developed digital groundwater models, effects of development can be predicted. Such models have been used to predict hydrologic and quality changes under different stresses. These models have grown in complexity over the last 10 years from simple one-layer flow models to three-dimensional simulations of groundwater flow which may include solute transport, heat transport, effects of land subsidence, and encroachment of salt water. This paper illustrates, through case histories, how predictive groundwater models have provided the information needed for the sound planning and management of water resources in the United States. (USGS)

  9. Vacuum ultraviolet line radiation measurements of a shock-heated nitrogen plasma

    NASA Technical Reports Server (NTRS)

    Mcclenahan, J. O.

    1972-01-01

    Line radiation, in the wavelength region from 1040 to 2500 A from nitrogen plasmas, was measured at conditions typical of those produced in the shock layer in front of vehicles entering the earth's atmosphere at superorbital velocities. The radiation was also predicted with a typical radiation transport computer program to determine whether such calculations adequately model plasmas for the conditions tested. The results of the comparison show that the radiant intensities of the lines between 1040 and 1700 A are actually lower than are predicted by such computer models.

  10. Prediction of N-nitrosodimethylamine (NDMA) formation as a disinfection by-product.

    PubMed

    Kim, Jongo; Clevenger, Thomas E

    2007-06-25

    This study investigated the possibility of applying a statistical model to the prediction of N-nitrosodimethylamine (NDMA) formation. NDMA formation was studied as a function of monochloramine concentration (0.001-5 mM) at fixed dimethylamine (DMA) concentrations of 0.01 mM or 0.05 mM. Excellent linear correlations were observed between the molar ratio of monochloramine to DMA and the NDMA formation on a log scale at pH 7 and 8. When the developed prediction equation was applied to a previously reported study, a good result was obtained. The statistical model appears to adequately predict NDMA concentrations if other NDMA precursors are excluded. Using the predictive tool, a simple and approximate calculation of NDMA formation can be obtained for drinking water systems.
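    A minimal sketch of the log-log regression approach the record describes, fitting NDMA formation against the monochloramine:DMA molar ratio. The calibration points and resulting coefficients below are hypothetical placeholders, not the study's data or its published equation.

    ```python
    import numpy as np

    # Hypothetical calibration data: NH2Cl:DMA molar ratio and measured NDMA (ng/L)
    ratio = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
    ndma = np.array([2.0, 8.5, 15.0, 60.0, 110.0, 420.0])

    # Fit log10(NDMA) = a * log10(ratio) + b, mirroring the reported log-scale linearity
    a, b = np.polyfit(np.log10(ratio), np.log10(ndma), 1)

    def predict_ndma(molar_ratio):
        """Approximate NDMA formation (ng/L) from the NH2Cl:DMA molar ratio."""
        return 10 ** (a * np.log10(molar_ratio) + b)

    print(predict_ndma(2.0))
    ```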

  11. Examining the Predictive Validity of a Dynamic Assessment of Decoding to Forecast Response to Tier 2 Intervention

    ERIC Educational Resources Information Center

    Cho, Eunsoo; Compton, Donald L.; Fuchs, Douglas; Fuchs, Lynn S.; Bouton, Bobette

    2014-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small-group tutoring in a response-to-intervention model. First grade students (n = 134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in…

  12. Modeling of detachment experiments at DIII-D

    DOE PAGES

    Canik, John M.; Briesemeister, Alexis R.; Lasnier, C. J.; ...

    2014-11-26

    Edge fluid–plasma/kinetic–neutral modeling of well-diagnosed DIII-D experiments is performed in order to document in detail how well certain aspects of experimental measurements are reproduced within the model as the transition to detachment is approached. Results indicate that, at high densities near detachment onset, the poloidal temperature profile produced in the simulations agrees well with that measured in experiment. However, matching the heat flux in the model requires a significant increase in the radiated power compared to what is predicted using standard chemical sputtering rates. Lastly, these results suggest that the model is adequate to predict the divertor temperature, provided that the discrepancy in radiated power level can be resolved.

  13. Environmental Predictors of US County Mortality Patterns on a National Basis.

    PubMed

    Chan, Melissa P L; Weinhold, Robert S; Thomas, Reuben; Gohlke, Julia M; Portier, Christopher J

    2015-01-01

    A growing body of evidence has found that mortality rates are positively correlated with social inequalities, air pollution, elevated ambient temperature, availability of medical care and other factors. This study develops a model to predict the mortality rates for different diseases by county across the US. The model is applied to predict changes in mortality caused by changing environmental factors. A total of 3,110 counties in the US, excluding Alaska and Hawaii, were studied. A subset of 519 counties from the 3,110 counties was chosen by using systematic random sampling and these samples were used to validate the model. Step-wise and linear regression analyses were used to estimate the ability of environmental pollutants, socio-economic factors and other factors to explain variations in county-specific mortality rates for cardiovascular diseases, cancers, chronic obstructive pulmonary disease (COPD), all causes combined and lifespan across five population density groups. The estimated models fit adequately for all mortality outcomes for all population density groups and adequately predicted risks for the 519 validation counties. This study suggests that, at local county levels, average ozone (0.07 ppm) is the most important environmental predictor of mortality. The analysis also illustrates the complex inter-relationships of multiple factors that influence mortality and lifespan, and suggests the need for a better understanding of the pathways through which these factors, mortality, and lifespan are related at the community level.

  14. Environmental Predictors of US County Mortality Patterns on a National Basis

    PubMed Central

    Thomas, Reuben; Gohlke, Julia M.; Portier, Christopher J.

    2015-01-01

    A growing body of evidence has found that mortality rates are positively correlated with social inequalities, air pollution, elevated ambient temperature, availability of medical care and other factors. This study develops a model to predict the mortality rates for different diseases by county across the US. The model is applied to predict changes in mortality caused by changing environmental factors. A total of 3,110 counties in the US, excluding Alaska and Hawaii, were studied. A subset of 519 counties from the 3,110 counties was chosen by using systematic random sampling and these samples were used to validate the model. Step-wise and linear regression analyses were used to estimate the ability of environmental pollutants, socio-economic factors and other factors to explain variations in county-specific mortality rates for cardiovascular diseases, cancers, chronic obstructive pulmonary disease (COPD), all causes combined and lifespan across five population density groups. The estimated models fit adequately for all mortality outcomes for all population density groups and adequately predicted risks for the 519 validation counties. This study suggests that, at local county levels, average ozone (0.07 ppm) is the most important environmental predictor of mortality. The analysis also illustrates the complex inter-relationships of multiple factors that influence mortality and lifespan, and suggests the need for a better understanding of the pathways through which these factors, mortality, and lifespan are related at the community level. PMID:26629706

  15. Involving regional expertise in nationwide modeling for adequate prediction of climate change effects on different demands for fresh water

    NASA Astrophysics Data System (ADS)

    de Lange, Wim; Prinsen, Geert; Hoogewoud, Jacco; Veldhuizen, Ab; Ruijgh, Erik; Kroon, Timo

    2013-04-01

    Nationwide modeling aims to produce a balanced distribution of climate change effects (e.g., harm to crops) and possible compensation (e.g., volumes of fresh water) based on consistent calculation. The present work is based on the Netherlands Hydrological Instrument (NHI, www.nhi.nu), a national, integrated hydrological model that simulates the distribution, flow, and storage of all water in the surface water and groundwater systems. The instrument was developed to assess impacts on water use on the land surface (sprinkling of crops, drinking water) and in surface water (navigation, cooling). The regional expertise involved in the development of the NHI comes from all parties involved in the use, production, and management of water, such as water boards, drinking water supply companies, provinces, and NGOs. Adequate prediction implies that the model computes changes of the order of magnitude that is relevant to the effects. In scenarios related to drought, adequate prediction applies to the water demand and the hydrological effects during average, dry, very dry, and extremely dry periods. The NHI acts as part of the so-called Deltamodel (www.deltamodel.nl), which aims to predict the effects of climate change, and compensating measures, both on safety against flooding and on water shortage during drought. To assess the effects, a limited number of well-defined scenarios is used within the Deltamodel. The effects on the demand for fresh water consist of an increase in demand, e.g., for surface water level control to prevent dike bursts, for flushing salt from ditches, for sprinkling of crops, and for preserving wet nature. Many of the effects are dealt with by regional and local parties. These parties therefore have a large interest in the outcome of the scenario analyses, and they participate in the assessment of the NHI prior to the start of the analyses. Regional expertise is welcomed in the calibration phase of the NHI, which aims to reduce uncertainties by improving the rules for man-made redirection of surface water and the schematizations and parameters included in the model. This is carried out in workshops and in one-to-one expert meetings on regional models and the NHI. All results of the NHI are presented on the internet, and any expert may suggest improvements to the model. The final goal of involving regional parties is acceptance of the results by the authorities affected by the resulting decisions.

  16. Risk Assessment in Child Sexual Abuse Cases

    ERIC Educational Resources Information Center

    Levenson, Jill S.; Morin, John W.

    2006-01-01

    Despite continuing improvements in risk assessment for child protective services (CPS) and movement toward actuarial prediction of child maltreatment, current models have not adequately addressed child sexual abuse. Sexual abuse cases present unique and ambiguous indicators to the investigating professional, and risk factors differ from those…

  17. Population pharmacokinetic and pharmacodynamic analyses of safinamide in subjects with Parkinson's disease.

    PubMed

    Loprete, Luca; Leuratti, Chiara; Cattaneo, Carlo; Thapar, Mita M; Farrell, Colm; Sardina, Marco

    2016-10-01

    Safinamide is an orally administered α-aminoamide derivative with both dopaminergic and non-dopaminergic properties. Nonlinear mixed effects models for population pharmacokinetic (PK) and pharmacokinetic-pharmacodynamic (PKPD) analyses were developed using records from, respectively, 623 and 668 patients belonging to two Phase 3, randomized, placebo-controlled, double-blind efficacy studies. The aim was to estimate safinamide population PK parameters in patients with Parkinson's disease (PD) on stable levodopa therapy, and to develop a model of the safinamide effect on the PD phase of normal functioning (ON-time). The final models were internally evaluated using visual predictive checks (VPCs), prediction-corrected VPCs, and nonparametric bootstrap analysis. Safinamide profiles were adequately described by a linear one-compartment model with first-order absorption and elimination. CL/F, Vd/F, and KA (95% confidence interval [CI]) were 4.96 (4.73-5.21) L/h, 166 (158-174) L, and 0.582 (0.335-0.829) h⁻¹, respectively. CL/F and Vd/F increased with body weight, while age, gender, renal function, and exposure to levodopa did not influence safinamide PK. The observed ON-time values were adequately described by a linear model, with time in the study period as the dependent variable, and rate of ON-time change and baseline plus offset effect as slope and intercept parameters. Safinamide treatment resulted in an increase in ON-time of 0.73 h (week 4), with further ON-time increase at the same slope as placebo. The increase was not influenced by age, levodopa, or safinamide exposure. The population models adequately describe the population PK of safinamide and the safinamide effect on ON-time. No dose adjustments are required in elderly patients or in patients with mild to moderate renal impairment.
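    The reported one-compartment structure with first-order absorption and elimination corresponds to the standard single-dose concentration equation. The sketch below uses the population estimates quoted in the abstract; the dose is a hypothetical illustration, not a study value.

    ```python
    import numpy as np

    # Population estimates from the abstract
    CL_F = 4.96      # L/h, apparent clearance
    V_F = 166.0      # L, apparent volume of distribution
    KA = 0.582       # 1/h, first-order absorption rate constant
    KE = CL_F / V_F  # 1/h, first-order elimination rate constant

    def concentration(t_h, dose_mg=100.0):
        """One-compartment, first-order absorption/elimination concentration (mg/L).
        dose_mg is a hypothetical single oral dose, not taken from the study."""
        return (dose_mg * KA / (V_F * (KA - KE))) * (np.exp(-KE * t_h) - np.exp(-KA * t_h))

    t = np.linspace(0.0, 48.0, 7)
    print(np.round(concentration(t), 3))
    ```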

  18. A Stochastic Model of Plausibility in Live Virtual Constructive Environments

    DTIC Science & Technology

    2017-09-14

    objective in virtual environment research and design is the maintenance of adequate consistency levels in the face of limited system resources such as...provides some commentary with regard to system design considerations and future research directions. II. SYSTEM MODEL DVEs are often designed as a...exceed the system’s requirements. Research into predictive models of virtual environment consistency is needed to provide designers the tools to

  19. Why Bother to Calibrate? Model Consistency and the Value of Prior Information

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal

    2015-04-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  20. Why Bother and Calibrate? Model Consistency and the Value of Prior Information.

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.

    2014-12-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  1. Process consistency in models: The importance of system signatures, expert knowledge, and process complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.

    2014-09-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.

  2. Experimental Verification of a Progressive Damage Model for IM7/5260 Laminates Subjected to Tension-Tension Fatigue

    NASA Technical Reports Server (NTRS)

    Coats, Timothy W.; Harris, Charles E.

    1995-01-01

    The durability and damage tolerance of laminated composites are critical design considerations for airframe composite structures. Therefore, the ability to model damage initiation and growth and predict the life of laminated composites is necessary to achieve structurally efficient and economical designs. The purpose of this research is to experimentally verify the application of a continuum damage model to predict progressive damage development in a toughened material system. Damage due to monotonic and tension-tension fatigue was documented for IM7/5260 graphite/bismaleimide laminates. Crack density and delamination surface area were used to calculate matrix cracking and delamination internal state variables to predict stiffness loss in unnotched laminates. A damage dependent finite element code predicted the stiffness loss for notched laminates with good agreement to experimental data. It was concluded that the continuum damage model can adequately predict matrix damage progression in notched and unnotched laminates as a function of loading history and laminate stacking sequence.

  3. Modeling when and where a secondary accident occurs.

    PubMed

    Wang, Junhua; Liu, Boya; Fu, Ting; Liu, Shuo; Stipancic, Joshua

    2018-01-31

    The occurrence of secondary accidents leads to traffic congestion and road safety issues. Secondary accident prevention has become a major consideration in traffic incident management. This paper investigates the location and time of a potential secondary accident after the occurrence of an initial traffic accident. With accident data and traffic loop data collected over three years from California interstate freeways, a shock wave-based method was introduced to identify secondary accidents. A linear regression model and two machine learning algorithms, including a back-propagation neural network (BPNN) and a least squares support vector machine (LSSVM), were implemented to explore the distance and time gap between the initial and secondary accidents using inputs of crash severity, violation category, weather condition, tow away, road surface condition, lighting, parties involved, traffic volume, duration, and shock wave speed generated by the primary accident. From the results, the linear regression model was inadequate in describing the effect of most variables, and its goodness-of-fit and accuracy in prediction were relatively poor. During training, the BPNN and LSSVM demonstrated adequate goodness-of-fit, though the BPNN was superior with a higher CORR and lower MSE. The BPNN model also outperformed the LSSVM in time prediction, while both failed to provide adequate distance prediction. Therefore, the BPNN model could be used to forecast the time gap between initial and secondary accidents, which could be used by decision makers and incident management agencies to prevent or reduce secondary collisions. Copyright © 2018 Elsevier Ltd. All rights reserved.
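    As a rough illustration of the neural-network approach described above, the sketch below fits a small feed-forward (back-propagation) network to predict the time gap to a secondary accident from a few incident attributes. The feature set and values are placeholders, not the California dataset used in the study.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder features: [severity, traffic volume (veh/h), duration (min), shock wave speed (km/h)]
    X = np.array([[2, 4200, 35, 12.5],
                  [1, 3100, 20, 8.0],
                  [3, 5600, 55, 15.2],
                  [2, 4800, 40, 11.0]])
    y = np.array([28.0, 15.0, 47.0, 33.0])  # time gap to secondary accident (min)

    # Small back-propagation network on standardized inputs
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
    model.fit(X, y)
    print(model.predict([[2, 5000, 45, 13.0]]))
    ```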

  4. Predictability of North Atlantic Multidecadal Climate Variability

    PubMed

    Griffies; Bryan

    1997-01-10

    Atmospheric weather systems become unpredictable beyond a few weeks, but climate variations can be predictable over much longer periods because of the coupling of the ocean and atmosphere. With the use of a global coupled ocean-atmosphere model, it is shown that the North Atlantic may have climatic predictability on the order of a decade or longer. These results suggest that variations of the dominant multidecadal sea surface temperature patterns in the North Atlantic, which have been associated with changes in climate over Eurasia, can be predicted if an adequate and sustainable system for monitoring the Atlantic Ocean exists.

  5. Physically based approaches incorporating evaporation for early warning predictions of rainfall-induced landslides

    NASA Astrophysics Data System (ADS)

    Reder, Alfredo; Rianna, Guido; Pagano, Luca

    2018-02-01

    In the field of rainfall-induced landslides on sloping covers, models for early warning predictions require an adequate trade-off between two aspects: prediction accuracy and timeliness. When a cover's initial hydrological state is a determining factor in triggering landslides, taking evaporative losses into account (or not) could significantly affect both aspects. This study evaluates the performance of three physically based predictive models, converting precipitation and evaporative fluxes into hydrological variables useful in assessing slope safety conditions. Two of the models incorporate evaporation, with one representing evaporation as both a boundary and internal phenomenon, and the other only a boundary phenomenon. The third model totally disregards evaporation. Model performances are assessed by analysing a well-documented case study involving a 2 m thick sloping volcanic cover. The large amount of monitoring data collected for the soil involved in the case study, reconstituted in a suitably equipped lysimeter, makes it possible to propose procedures for calibrating and validating the parameters of the models. All predictions indicate a hydrological singularity at the landslide time (alarm). A comparison of the models' predictions also indicates that the greater the complexity and completeness of the model, the lower the number of predicted hydrological singularities when no landslides occur (false alarms).

  6. The evolution of life-history variation in fishes, with particular reference to flatfishes

    NASA Astrophysics Data System (ADS)

    Roff, Derek A.

    This paper explores four aspects of the evolution of life-history variation in fish, with particular reference to the flatfishes: 1. genetic variation and evolutionary response; 2. the size and age at first reproduction; 3. adult lifespan and variation in recruitment; 4. the relationship between reproductive effort and age. Evolutionary response may be limited by previous evolutionary pathways (phylogenetic variation) or by lack of genetic variation due to selection for a single trait. Estimates of heritability suggest, as predicted, that selection is stronger on life-history traits than morphological traits; but there is still adequate genetic variation to permit fairly rapid evolutionary changes. Several approaches to the analysis of the optimal age and size at first reproduction are discussed in the light of a general life-history model based on the assumption that natural selection maximizes r or R0. It is concluded that one of the most important areas of future research is the relationship between reproduction and mortality. Murphy's hypothesis that the reproductive lifespan should increase with variation in spawning success is shown to be incorrect for fish, at least at the level of interspecific comparison. The model of Charlesworth & León predicting the sufficient condition for reproductive effort to increase with age is tested: in 28 of 31 cases the model predicts an increase of reproductive effort with age. These results suggest that, in general, reproductive effort should increase with age in fish. This prediction is confirmed in the 15 species for which adequate data exist.

  7. Gain degradation and amplitude scintillation due to tropospheric turbulence

    NASA Technical Reports Server (NTRS)

    Theobold, D. M.; Hodge, D. B.

    1978-01-01

    It is shown that a simple physical model is adequate for the prediction of the long term statistics of both the reduced signal levels and increased peak-to-peak fluctuations. The model is based on conventional atmospheric turbulence theory and incorporates both amplitude and angle of arrival fluctuations. This model predicts the average variance of signals observed under clear air conditions at low elevation angles on earth-space paths at 2, 7.3, 20 and 30 GHz. Design curves based on this model for gain degradation, realizable gain, amplitude fluctuation as a function of antenna aperture size, frequency, and either terrestrial path length or earth-space path elevation angle are presented.

  8. Simulation of Chronic Liver Injury Due to Environmental Chemicals

    EPA Science Inventory

    US EPA Virtual Liver (v-Liver) is a cellular systems model of hepatic tissues to predict the effects of chronic exposure to chemicals. Tens of thousands of chemicals are currently in commerce and hundreds more are introduced every year. Few of these chemicals have been adequate...

  9. Interspecies Correlation Estimation (ICE) models predict supplemental toxicity data for SSDs

    EPA Science Inventory

    Species sensitivity distributions (SSD) require a large number of toxicity values for a diversity of taxa to define a hazard level protective of multiple species. For most chemicals, measured toxicity data are limited to a few standard test species that are unlikely to adequately...

  10. Concentration of folate in colorectal tissue biopsies predicts prevalence of adenomatous polyps

    USDA-ARS?s Scientific Manuscript database

    Background and aims: Folate has been implicated as a potential aetiological factor for colorectal cancer. Previous research has not adequately exploited concentrations of folate in normal colonic mucosal biopsies to examine the issue. Methods: Logistic regression models were used to estimate ORs ...

  11. Theoretical and experimental investigation of supersonic aerodynamic characteristics of a twin-fuselage concept

    NASA Technical Reports Server (NTRS)

    Wood, R. M.; Miller, D. S.; Brentner, K. S.

    1983-01-01

    A theoretical and experimental investigation has been conducted to evaluate the fundamental supersonic aerodynamic characteristics of a generic twin-body model at a Mach number of 2.70. Results show that existing aerodynamic prediction methods are adequate for making preliminary aerodynamic estimates.

  12. Nuclear fragmentation studies for microelectronic application

    NASA Technical Reports Server (NTRS)

    Ngo, Duc M.; Wilson, John W.; Buck, Warren W.; Fogarty, Thomas N.

    1989-01-01

    A formalism for target fragment transport is presented with application to energy loss spectra in thin silicon devices. Predicted results are compared to experiments with the surface barrier detectors of McNulty et al. The intranuclear cascade nuclear reaction model does not predict the McNulty experimental data for the highest energy events. A semiempirical nuclear cross section gives an adequate explanation of McNulty's experiments. Application of the formalism to specific electronic devices is discussed.

  13. VTOL in ground effect flows for closely spaced jets. [to predict pressure and upwash forces on aircraft structures

    NASA Technical Reports Server (NTRS)

    Migdal, D.; Hill, W. G., Jr.; Jenkins, R. C.

    1979-01-01

    Results of a series of in ground effect twin jet tests are presented along with flow models for closely spaced jets to help predict pressure and upwash forces on simulated aircraft surfaces. The isolated twin jet tests revealed unstable fountains over a range of spacings and jet heights, regions of below ambient pressure on the ground, and negative pressure differential in the upwash flow field. A separate computer code was developed for vertically oriented, incompressible jets. This model more accurately reflects fountain behavior without fully formed wall jets, and adequately predicts ground isobars, upwash dynamic pressure decay, and fountain lift force variation with height above ground.

  14. Self-Perceived Cooking Skills in Emerging Adulthood Predict Better Dietary Behaviors and Intake 10 Years Later: A Longitudinal Study.

    PubMed

    Utter, Jennifer; Larson, Nicole; Laska, Melissa N; Winkler, Megan; Neumark-Sztainer, Dianne

    2018-05-01

    To determine whether perceived cooking skills in emerging adulthood predicts better nutrition a decade later. Data were collected as part of the Project Eating and Activity in Teens and Young Adults longitudinal study. Participants reported on adequacy of cooking skills in 2002-2003 (age 18-23 years) and subsequently reported on nutrition-related outcomes in 2015-2016 (age 30-35 years) (n = 1,158). Separate regression models were used to examine associations between cooking skills at age 18-23 years and each subsequent outcome. One fourth of participants described their cooking skills as very adequate at 18-23 years, with no statistically significant differences by sociodemographic characteristics. Reports of very adequate cooking skills at age 18-23 years predicted better nutrition-related outcomes 10 years later, such as more frequent preparation of meals including vegetables (P < .001) and less frequent fast food consumption (P < .001). Developing adequate cooking skills by emerging adulthood may have long-term benefits for nutrition over a decade later. Ongoing and new interventions to enhance cooking skills during adolescence and emerging adulthood are warranted but require strong evaluation designs that observe young people over a number of years. Copyright © 2018 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  15. Application of time series analysis in modelling and forecasting emergency department visits in a medical centre in Southern Taiwan.

    PubMed

    Juang, Wang-Chuan; Huang, Sin-Jhih; Huang, Fong-Dee; Cheng, Pei-Wen; Wann, Shue-Ren

    2017-12-01

    Emergency department (ED) overcrowding is acknowledged as an increasingly important issue worldwide. Hospital managers are increasingly paying attention to ED crowding in order to provide higher quality medical services to patients. One of the crucial elements for a good management strategy is demand forecasting. Our study sought to construct an adequate model and to forecast monthly ED visits. We retrospectively gathered monthly ED visits from January 2009 to December 2016 to carry out a time series autoregressive integrated moving average (ARIMA) analysis. Initial development of the model was based on past ED visits from 2009 to 2016. A best-fit model was further employed to forecast the monthly data of ED visits for the next year (2016). Finally, we evaluated the predictive accuracy of the identified model with the mean absolute percentage error (MAPE). The software packages SAS/ETS V.9.4 and Office Excel 2016 were used for all statistical analyses. A series of statistical tests showed that six models, including ARIMA (0, 0, 1), ARIMA (1, 0, 0), ARIMA (1, 0, 1), ARIMA (2, 0, 1), ARIMA (3, 0, 1) and ARIMA (5, 0, 1), were candidate models. The model that gave the minimum Akaike information criterion and Schwartz Bayesian criterion and followed the assumptions of residual independence was selected as the adequate model. Finally, a suitable ARIMA (0, 0, 1) structure, yielding a MAPE of 8.91%, was identified and obtained as Visit(t) = 7111.161 + a(t) + 0.37462 a(t-1). The ARIMA (0, 0, 1) model can be considered adequate for predicting future ED visits, and its forecast results can be used to aid decision-making processes. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
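    A minimal sketch of fitting the identified ARIMA(0, 0, 1) structure and scoring a 12-month hold-out by MAPE with the statsmodels library. The monthly series below is synthetic; only the model order follows the abstract.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic monthly ED-visit counts standing in for the observed series
    rng = np.random.default_rng(0)
    visits = pd.Series(7100 + rng.normal(0, 300, 96),
                       index=pd.date_range("2009-01-01", periods=96, freq="MS"))

    train, test = visits[:-12], visits[-12:]
    fit = ARIMA(train, order=(0, 0, 1)).fit()   # ARIMA(0, 0, 1) as identified in the study
    forecast = fit.forecast(steps=12)

    mape = np.mean(np.abs((test.values - forecast.values) / test.values)) * 100
    print(f"MAPE: {mape:.2f}%")
    ```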

  16. Shuttle active thermal control system development testing. Volume 3: Modular radiator system test data correlation with thermal model

    NASA Technical Reports Server (NTRS)

    Phillips, M. A.

    1973-01-01

    Results are presented of an analysis which compares the performance predictions of a thermal model of a multi-panel modular radiator system with thermal vacuum test data. Comparisons between measured and predicted individual panel outlet temperatures and pressure drops and system outlet temperatures have been made over the full range of heat loads, environments and plumbing arrangements expected for the shuttle radiators. Both two sided and one sided radiation have been included. The model predictions show excellent agreement with the test data for the maximum design conditions of high load and hot environment. Predictions under minimum design conditions of low load-cold environments indicate good agreement with the measured data, but evaluation of low load predictions should consider the possibility of parallel flow instabilities due to main system freezing. Performance predictions under intermediate conditions in which the majority of the flow is not in either the main or prime system are adequate although model improvements in this area may be desired. The primary modeling objective of providing an analytical technique for performance predictions of a multi-panel radiator system under the design conditions has been met.

  17. Experimental and Numerical Analysis of Narrowband Coherent Rayleigh-Brillouin Scattering in Atomic and Molecular Species (Pre Print)

    DTIC Science & Technology

    2012-02-01

    use of polar gas species. While current simplified models have adequately predicted CRS and CRBS line shapes for a wide variety of cases, multiple ... published simplified models are presented for argon, molecular nitrogen, and methane at 300 & 500 K and 1 atm. The simplified models require uncertain gas properties

  18. Ground-water models for water resource planning

    USGS Publications Warehouse

    Moore, J.E.

    1983-01-01

    In the past decade hydrogeologists have emphasized the development of computer-based mathematical models to aid in the understanding of flow, the transport of solutes, transport of heat, and deformation in the ground-water system. These models have been used to provide information and predictions for water managers. Too frequently, ground-water was neglected in water resource planning because managers believed that it could not be adequately evaluated in terms of availability, quality, and effect of development on surface-water supplies. Now, however, with newly developed digital ground-water models, effects of development can be predicted. Such models have been used to predict hydrologic and quality changes under different stresses. These models have grown in complexity over the last ten years from simple one-layer models to three-dimensional simulations of ground-water flow, which may include solute transport, heat transport, effects of land subsidence, and encroachment of saltwater. Case histories illustrate how predictive ground-water models have provided the information needed for the sound planning and management of water resources in the USA. ?? 1983 D. Reidel Publishing Company.

  19. [Evaluating the performance of species distribution models Biomod2 and MaxEnt using the giant panda distribution data].

    PubMed

    Luo, Mei; Wang, Hao; Lyu, Zhi

    2017-12-01

    Species distribution models (SDMs) are widely used by researchers and conservationists. Results of prediction from different models vary significantly, which makes it difficult for users to select a model. In this study, we evaluated the performance of two commonly used SDMs, Biomod2 and Maximum Entropy (MaxEnt), with real presence/absence data for the giant panda, and used three indicators, i.e., area under the ROC curve (AUC), true skill statistic (TSS), and Cohen's Kappa, to evaluate the accuracy of the two models' predictions. The results showed that both models could produce accurate predictions with adequate occurrence inputs and simulation repeats. Compared to MaxEnt, Biomod2 made more accurate predictions, especially when occurrence inputs were few. However, Biomod2 was more difficult to apply, required longer running time, and had less data processing capability. To choose the right model, users should refer to the error requirements of their objectives. MaxEnt should be considered if the error requirement is clear and both models can achieve it; otherwise, we recommend the use of Biomod2 as much as possible.
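    The three accuracy indicators named in the record can be computed from presence/absence observations and predicted suitabilities as sketched below with scikit-learn; the data are toy values, not the giant panda records.

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score, confusion_matrix, roc_auc_score

    y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])                    # observed presence/absence
    y_prob = np.array([0.9, 0.7, 0.3, 0.2, 0.6, 0.4, 0.8, 0.1])    # predicted suitability
    y_pred = (y_prob >= 0.5).astype(int)                           # presence/absence at a 0.5 threshold

    auc = roc_auc_score(y_true, y_prob)
    kappa = cohen_kappa_score(y_true, y_pred)

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    tss = tp / (tp + fn) + tn / (tn + fp) - 1                      # sensitivity + specificity - 1

    print(auc, tss, kappa)
    ```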

  20. Importance of adequate local spatiotemporal transmission measures in malaria cohort studies: application to the relation between placental malaria and first malaria infection in infants.

    PubMed

    Le Port, Agnès; Cottrell, Gilles; Chandre, Fabrice; Cot, Michel; Massougbodji, Achille; Garcia, André

    2013-07-01

    According to several studies, infants whose mothers had a malaria-infected placenta (MIP) at delivery are at increased risk of a first malaria infection. Immune tolerance caused by intrauterine contact with the parasite could explain this phenomenon, but it is also known that infants who are highly exposed to Anopheles mosquitoes infected with Plasmodium are at greater risk of contracting malaria. Consequently, local malaria transmission must be taken into account to demonstrate the immune tolerance hypothesis. From data collected between 2007 and 2010 on 545 infants followed from birth to age 18 months in southern Benin, we compared estimates of the effect of MIP on time to first malaria infection obtained through different Cox models. In these models, MIP was adjusted for either 1) "village-like" time-independent exposure variables or 2) spatiotemporal exposure prediction derived from local climatic, environmental, and behavioral factors. Only the use of exposure prediction improved the model's goodness of fit (Bayesian Information Criterion) and led to clear conclusions regarding the effect of placental infection, whereas the models using the village-like variables were less successful than the univariate model. This demonstrated clearly the benefit of adequately taking transmission into account in cohort studies of malaria.

  1. Ground deposition of liquid droplets released from a point source in the atmospheric surface layer

    NASA Astrophysics Data System (ADS)

    Panneton, Bernard

    1989-01-01

    A series of field experiments is presented in which the ground deposition of liquid droplets, 120 and 150 microns in diameter, released from a point source at 7 m above ground level, was measured. A detailed description of the experimental technique is provided, and the results are presented and compared to the predictions of a few models. A new rotating droplet generator is described. Droplets are produced by the forced breakup of capillary liquid jets and droplet coalescence is inhibited by the rotational motion of the spray head. The two dimensional deposition patterns are presented in the form of plots of contours of constant density, normalized arcwise distributions and crosswind integrated distributions. The arcwise distributions follow a Gaussian distribution whose standard deviation is evaluated using a modified Pasquill's technique. Models of the crosswind integrated deposit from Godson, Csanady, Walker, Bache and Sayer, and Wilson et al are evaluated. The results indicate that the Wilson et al random walk model is adequate for predicting the ground deposition of the 150 micron droplets. In one case, where the ratio of the droplet settling velocity to the mean wind speed was largest, Walker's model proved to be adequate. Otherwise, none of the models were acceptable in light of the experimental data.

  2. Adaptation of clinical prediction models for application in local settings.

    PubMed

    Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M

    2012-01-01

    When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a different case mix than the original validation population may result in inaccurate risk predictions. The aim was to demonstrate how clinical information can direct the updating of a prediction model and the development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) reestimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than that of the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy for dealing with missing predictor values at the time of risk calculation. Extensive knowledge of local clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
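    A minimal sketch of two of the updating steps described above: recalibrating an existing logistic prediction model on local data, and re-estimating a single coefficient while keeping the rest of the model fixed. The original coefficients and the local data are hypothetical, not the postoperative nausea and vomiting model.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical original model: logit(p) = -1.2 + 0.8*x1 + 0.5*x2
    beta_orig = np.array([-1.2, 0.8, 0.5])

    # Local data (placeholder values): columns are intercept, x1, x2
    X = np.array([[1, 0.2, 1.0], [1, 1.5, 0.0], [1, 0.7, 1.0], [1, 2.2, 0.0],
                  [1, 0.1, 0.0], [1, 1.9, 1.0], [1, 0.5, 0.0], [1, 2.5, 1.0]])
    y = np.array([0, 1, 1, 1, 0, 0, 0, 1])

    # Step 1: recalibrate by regressing the outcome on the original linear predictor
    lp = X @ beta_orig
    recal = sm.Logit(y, sm.add_constant(lp)).fit(disp=0)
    print("calibration intercept and slope:", recal.params)

    # Step 2: re-estimate the x1 coefficient, offsetting the fixed parts of the model
    offset = beta_orig[0] + X[:, 2] * beta_orig[2]
    reest = sm.GLM(y, X[:, [1]], family=sm.families.Binomial(), offset=offset).fit()
    print("updated coefficient for x1:", reest.params)
    ```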

  3. Evaluating the Community Land Model (CLM4.5) at a coniferous forest site in northwestern United States using flux and carbon-isotope measurements

    Treesearch

    Henrique F. Duarte; Brett M. Raczka; Daniel M. Ricciuto; John C. Lin; Charles D. Koven; Peter E. Thornton; David R. Bowling; Chun-Ta Lai; Kenneth J. Bible; James R. Ehleringer

    2017-01-01

    Droughts in the western United States are expected to intensify with climate change. Thus, an adequate representation of ecosystem response to water stress in land models is critical for predicting carbon dynamics. The goal of this study was to evaluate the performance of the Community Land Model (CLM) version 4.5 against observations at an old-growth coniferous forest...

  4. Grid Resolution Effects on LES of a Piloted Methane-Air Flame

    DTIC Science & Technology

    2009-05-20

    respectively. In the LES momentum equation, Eq. (3), the Smagorinsky model is used to obtain the deviatoric part of the unclosed SGS stress τ_ij... accurately predicted from integration of their LES evolution equations; and (ii), the flamelet parametrization should adequately approximate the... effect of the complex small-scale turbulence/chemistry interactions is modeled in an affordable way by a combustion model. A question of how a particular

  5. Multi-step rhodopsin inactivation schemes can account for the size variability of single photon responses in Limulus ventral photoreceptors

    PubMed Central

    1994-01-01

    Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085

  6. Identifying and assessing critical uncertainty thresholds in a forest pest risk model

    Treesearch

    Frank H. Koch; Denys Yemshanov

    2015-01-01

    Pest risk maps can provide helpful decision support for invasive alien species management, but often fail to address adequately the uncertainty associated with their predicted risk values. This chapter explores how increased uncertainty in a risk model's numeric assumptions (i.e. its principal parameters) might affect the resulting risk map. We used a spatial...

  7. Predicting charmonium and bottomonium spectra with a quark harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Norbury, J. W.; Badavi, F. F.; Townsend, L. W.

    1986-01-01

    The nonrelativistic quark model is applied to heavy (nonrelativistic) meson (two-body) systems to obtain sufficiently accurate predictions of the spin-averaged mass levels of the charmonium and bottomonium spectra as an example of the three-dimensional harmonic oscillator. The present calculations do not include any spin dependence; rather, mass values are averaged over different spins. Results for a charmed quark mass of 1500 MeV/c^2 show that the simple harmonic oscillator model provides good agreement with experimental values for the 3P states, and adequate agreement for the 3S1 states.
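    As a worked illustration of the three-dimensional harmonic oscillator picture, spin-averaged masses follow M(n_r, l) = 2*m_q + (2*n_r + l + 3/2)*hbar_omega + C. Only the charmed quark mass is taken from the record; the oscillator spacing and offset below are hypothetical fit parameters, not values from the paper.

    ```python
    # Spin-averaged quarkonium masses in a 3D harmonic oscillator picture
    m_q = 1500.0        # MeV/c^2, charmed quark mass quoted in the record
    hbar_omega = 500.0  # MeV, hypothetical oscillator level spacing (fit parameter)
    C = -600.0          # MeV, hypothetical overall offset (fit parameter)

    def mass(n_r, l):
        """Spin-averaged mass level M = 2*m_q + (2*n_r + l + 1.5)*hbar_omega + C."""
        return 2.0 * m_q + (2.0 * n_r + l + 1.5) * hbar_omega + C

    for n_r, l, label in [(0, 0, "1S"), (0, 1, "1P"), (1, 0, "2S")]:
        print(f"{label}: {mass(n_r, l):.0f} MeV/c^2")
    ```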

  8. Shock compaction of molybdenum powder

    NASA Technical Reports Server (NTRS)

    Ahrens, T. J.; Kostka, D.; Vreeland, T., Jr.; Schwarz, R. B.; Kasiraj, P.

    1983-01-01

    Shock recovery experiments, carried out in the 9 to 12 GPa range on 1.4-distension Mo powders (45 µm), were examined; these pressures appear adequate to compact the powders to full density. The stress levels, however, are below those which a frictional heating model predicts are required to consolidate particles of approximately 10 to 50 µm, calculated to be from 100 to approximately 22 GPa, respectively. For powders with a distension of m = 1.6, the model predicts that shock pressures of 14 to 72 GPa are required to consolidate Mo powders in the 50 to 10 µm range.

  9. Spatially Detailed Porosity Prediction From Airborne Electromagnetics and Sparse Borehole Fluid Sampling

    NASA Astrophysics Data System (ADS)

    Macnae, J.; Ley-Cooper, Y.

    2009-05-01

    Sub-surface porosity is of importance in estimating fluid content and salt-load parameters for hydrological modelling. While sparse boreholes may adequately sample the depth to a sub-horizontal water table, and usually also adequately sample groundwater salinity, they do not provide adequate sampling of the spatial variations in porosity or hydraulic permeability caused by spatial variations in sedimentary and other geological processes. We show in this presentation that spatially detailed porosity can be estimated by applying Archie's law to conductivity estimates from airborne electromagnetic surveys together with interpolated groundwater conductivity values. The prediction was tested on data from the Chowilla flood plain in the Murray-Darling Basin of South Australia. A frequency-domain, helicopter-borne electromagnetic system collected data at 6 frequencies and 3 to 4 m spacings on lines spaced 100 m apart. These data were transformed into conductivity-depth sections, from which a 3D bulk-conductivity map could be created with about 30 m spatial resolution and 2 to 5 m vertical depth resolution. For that portion of the volume below the interpolated water table, we predicted porosity in each cell using Archie's law. Generally, predicted porosities were in the 30 to 50% range, consistent with expectations for the partially consolidated sediments in the flood plain. Porosities were directly measured on core from eight boreholes in the area, and compared quite well with the predictions. The predicted porosity map was spatially consistent, and when combined with measured salinities in the ground water, was able to provide a detailed 3D map of salt loads in the saturated zone, and as such contribute to a hazard assessment of the saline threat to the river.
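    The porosity estimate described above rests on Archie's law, which for the saturated zone relates bulk conductivity to pore-fluid conductivity and porosity. A minimal sketch follows; the cementation exponent and conductivity values are illustrative, not the Chowilla survey results.

    ```python
    import numpy as np

    def porosity_from_archie(sigma_bulk, sigma_fluid, m=1.8):
        """Invert Archie's law, sigma_bulk = sigma_fluid * phi**m, for porosity phi.
        Assumes full saturation below the water table; m is the cementation exponent."""
        return (sigma_bulk / sigma_fluid) ** (1.0 / m)

    # Illustrative values: bulk conductivity from the AEM inversion (S/m) and
    # groundwater conductivity interpolated from boreholes (S/m)
    sigma_bulk = np.array([0.20, 0.35, 0.50])
    sigma_fluid = 2.0
    print(porosity_from_archie(sigma_bulk, sigma_fluid))  # fractional porosity, roughly 0.28-0.46
    ```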

  10. Calibration and validation of a physiologically based model for soman intoxication in the rat, marmoset, guinea pig and pig.

    PubMed

    Chen, Kaizhen; Seng, Kok-Yong

    2012-09-01

    A physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model has been developed for low, medium and high levels of soman intoxication in the rat, marmoset, guinea pig and pig. The primary objective of this model was to describe the pharmacokinetics of soman after intravenous, intramuscular and subcutaneous administration in the rat, marmoset, guinea pig, and pig as well as its subsequent pharmacodynamic effects on blood acetylcholinesterase (AChE) levels, relating dosimetry to physiological response. The reactions modelled in each physiologically realistic compartment are: (1) partitioning of C(±)P(±) soman from the blood into the tissue; (2) inhibition of AChE and carboxylesterase (CaE) by soman; (3) elimination of soman by enzymatic hydrolysis; (4) de novo synthesis and degradation of AChE and CaE; and (5) aging of AChE-soman and CaE-soman complexes. The model was first calibrated for the rat, then extrapolated for validation in the marmoset, guinea pig and pig. Adequate fits to experimental data on the time course of soman pharmacokinetics and AChE inhibition were achieved in the mammalian models. In conclusion, the present model adequately predicts the dose-response relationship resulting from soman intoxication and can potentially be applied to predict soman pharmacokinetics and pharmacodynamics in other species, including human. Copyright © 2011 John Wiley & Sons, Ltd.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Fan; Parker, Jack C.; Watson, David B

    This study investigates uranium and technetium sorption onto aluminum and iron hydroxides during titration of acidic groundwater. The contaminated groundwater exhibits oxic conditions with high concentrations of NO₃⁻, SO₄²⁻, U, Tc, and various metal cations. More than 90% of U and Tc was removed from the aqueous phase as Al and Fe precipitated above pH 5.5, but was partially resolubilized at higher pH values. An equilibrium hydrolysis and precipitation reaction model adequately described variations in aqueous concentrations of metal cations. An anion exchange reaction model was incorporated to simulate sulfate, U and Tc sorption onto variably charged (pH-dependent) Al and Fe hydroxides. Modeling results indicate that competitive sorption/desorption on mixed mineral phases needs to be considered to adequately predict U and Tc mobility. The model could be useful for future studies of the speciation of U, Tc and co-existing ions during pre- and post-groundwater treatment practices.

  12. Using remote sensing environmental data to forecast malaria incidence at a rural district hospital in Western Kenya.

    PubMed

    Sewe, Maquins Odhiambo; Tozan, Yesim; Ahlm, Clas; Rocklöv, Joacim

    2017-06-01

    Malaria surveillance data provide an opportunity to develop forecasting models. Seasonal variability in environmental factors correlates with malaria transmission, thus the identification of transmission patterns is useful in developing prediction models. However, with changing seasonal transmission patterns, whether due to interventions or shifting weather seasons, traditional modelling approaches may not yield adequate predictive skill. Two statistical models, a general additive model (GAM) and a GAMBOOST model with boosted regression, were contrasted by assessing their predictive accuracy in forecasting malaria admissions at lead times of one to three months. Monthly admission data for children under five years with confirmed malaria at the Siaya district hospital in Western Kenya for the period 2003 to 2013 were used together with satellite-derived data on rainfall, average temperature and evapotranspiration (ET). There was a total of 8,476 confirmed malaria admissions. The peak of the malaria season changed and malaria admissions declined over time. The GAMBOOST model at a 1-month lead time had the highest predictive skill during both the training and test periods and thus can be utilized in a malaria early warning system.
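    A minimal sketch of the GAM side of the comparison, fitting smooth terms for lagged environmental covariates to monthly admission counts with the pygam library. The data are synthetic and the covariate lags are simplified relative to the study.

    ```python
    import numpy as np
    from pygam import PoissonGAM, s

    rng = np.random.default_rng(1)
    n = 120                                  # ten years of monthly records
    rain = rng.gamma(2.0, 40.0, n)           # lagged monthly rainfall (mm)
    temp = 22.0 + 5.0 * rng.random(n)        # lagged mean temperature (deg C)
    et = 80.0 + 30.0 * rng.random(n)         # lagged evapotranspiration (mm)
    X = np.column_stack([rain, temp, et])
    y = rng.poisson(40.0 + 0.1 * rain)       # monthly confirmed malaria admissions

    # Poisson GAM with one smooth term per environmental covariate
    gam = PoissonGAM(s(0) + s(1) + s(2)).fit(X, y)
    print(gam.predict(X[:3]))                # predicted admissions at a 1-month lead
    ```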

  13. Estimating West Nile virus transmission period in Pennsylvania using an optimized degree-day model.

    PubMed

    Chen, Shi; Blanford, Justine I; Fleischer, Shelby J; Hutchinson, Michael; Saunders, Michael C; Thomas, Matthew B

    2013-07-01

    We provide calibrated degree-day models to predict potential West Nile virus (WNV) transmission periods in Pennsylvania. We begin by following the standard approach of treating the degree-days necessary for the virus to complete the extrinsic incubation period (EIP) and mosquito longevity as constants. This approach failed to adequately explain virus transmission periods based on mosquito surveillance data from 4 locations (Harrisburg, Philadelphia, Pittsburgh, and Williamsport) in Pennsylvania from 2002 to 2008. Allowing the EIP and adult longevity to vary across time and space improved model fit substantially. The calibrated models increase the ability to successfully predict the WNV transmission period in Pennsylvania to 70-80%, compared to less than 30% for the uncalibrated model. Model validation showed the optimized models to be robust in 3 of the locations, although still showing errors for Philadelphia. These models and methods could provide useful tools to predict the WNV transmission period from surveillance datasets, assess potential WNV risk, and inform mosquito surveillance strategies.
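    The degree-day approach accumulates daily heat above a developmental threshold until the total needed for the virus to complete the extrinsic incubation period is reached. The sketch below illustrates the calculation; the threshold temperature and degree-day requirement are illustrative values, not the calibrated Pennsylvania parameters.

    ```python
    def days_to_complete_eip(daily_mean_temps, base_temp=14.3, dd_required=109.0):
        """Count days until accumulated degree-days above base_temp reach dd_required.
        Returns None if the requirement is not met within the series."""
        accumulated = 0.0
        for day, temp in enumerate(daily_mean_temps, start=1):
            accumulated += max(0.0, temp - base_temp)
            if accumulated >= dd_required:
                return day
        return None

    temps = [18.0, 20.5, 22.0, 25.0, 24.5, 23.0, 26.0, 27.5, 25.0, 24.0, 26.5, 28.0]
    print(days_to_complete_eip(temps))  # day on which the EIP would be completed
    ```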

  14. Validation of the Registry to Evaluate Early and Long-Term Pulmonary Arterial Hypertension Disease Management (REVEAL) pulmonary hypertension prediction model in a unique population and utility in the prediction of long-term survival.

    PubMed

    Cogswell, Rebecca; Kobashigawa, Erin; McGlothlin, Dana; Shaw, Robin; De Marco, Teresa

    2012-11-01

    The Registry to Evaluate Early and Long-Term Pulmonary Arterial Hypertension (PAH) Disease Management (REVEAL) model was designed to predict 1-year survival in patients with PAH. Multivariate prediction models need to be evaluated in cohorts distinct from the derivation set to determine external validity. In addition, limited data exist on the utility of this model in the prediction of long-term survival. REVEAL model performance was assessed to predict 1-year and 5-year outcomes, defined as survival or composite survival or freedom from lung transplant, in 140 patients with PAH. The validation cohort had a higher proportion of human immunodeficiency virus (7.9% vs 1.9%, p < 0.0001), methamphetamine use (19.3% vs 4.9%, p < 0.0001), and portal hypertension PAH (16.4% vs 5.1%, p < 0.0001) compared with the development cohort. The C-index of the model to predict survival was 0.765 at 1 year and 0.712 at 5 years of follow-up. The C-index of the model to predict composite survival or freedom from lung transplant was 0.805 and 0.724 at 1 and 5 years of follow-up, respectively. Prediction by the model, however, was weakest among patients with intermediate-risk predicted survival. The REVEAL model had adequate discrimination to predict 1-year survival in this small but clinically distinct validation cohort. Although the model also had predictive ability out to 5 years, prediction was limited among patients at intermediate risk, suggesting our prediction methods can still be improved. Copyright © 2012. Published by Elsevier Inc.
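    The discrimination statistic reported here, the C-index, can be computed from predicted risk scores and observed survival with the lifelines library, as sketched below on placeholder data.

    ```python
    import numpy as np
    from lifelines.utils import concordance_index

    # Placeholder follow-up times (years), event indicators (1 = death), and predicted risks
    time = np.array([0.8, 2.5, 5.0, 1.2, 4.1, 3.3, 0.5, 5.0])
    event = np.array([1, 1, 0, 1, 0, 1, 1, 0])
    risk = np.array([0.9, 0.6, 0.2, 0.8, 0.3, 0.5, 0.95, 0.1])

    # concordance_index expects scores where larger values imply longer survival,
    # so pass the negated risk score
    c_index = concordance_index(time, -risk, event_observed=event)
    print(round(c_index, 3))
    ```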

  15. Recent progress towards predicting aircraft ground handling performance

    NASA Technical Reports Server (NTRS)

    Yager, T. J.; White, E. J.

    1981-01-01

    The capability implemented for simulating aircraft ground handling performance is reviewed, and areas for further expansion and improvement are identified. Problems associated with providing the simulator input data necessary for adequate modeling of aircraft tire/runway friction behavior are discussed, and efforts to improve tire/runway friction definition and simulator fidelity are described. Aircraft braking performance data obtained on several wet runway surfaces are compared with ground vehicle friction measurements. Research to improve methods of predicting tire friction performance is discussed.

  16. Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Schorlemmer, D.; Jordan, T. H.; Zechar, J. D.; Gerstenberger, M. C.; Wiemer, S.; Maechling, P. J.

    2006-12-01

    Earthquake prediction is one of the most difficult problems in physical science and, owing to its societal implications, one of the most controversial. The study of earthquake predictability has been impeded by the lack of an adequate experimental infrastructure---the capability to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. To remedy this deficiency, the Southern California Earthquake Center (SCEC) is working with its international partners, which include the European Union (through the Swiss Seismological Service) and New Zealand (through GNS Science), to develop a virtual, distributed laboratory with a cyberinfrastructure adequate to support a global program of research on earthquake predictability. This Collaboratory for the Study of Earthquake Predictability (CSEP) will extend the testing activities of SCEC's Working Group on Regional Earthquake Likelihood Models, from which we will present first results. CSEP will support rigorous procedures for registering prediction experiments on regional and global scales, community-endorsed standards for assessing probability-based and alarm-based predictions, access to authorized data sets and monitoring products from designated natural laboratories, and software to allow researchers to participate in prediction experiments. CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community, governmental agencies, and the general public in an appropriate research context.

  17. Kinetic analysis of elastomeric lag damper for helicopter rotors

    NASA Astrophysics Data System (ADS)

    Liu, Yafang; Wang, Jidong; Tong, Yan

    2018-02-01

    Elastomeric lag dampers suppress the ground resonance and air resonance that play a significant role in helicopter stability. In this paper, an elastomeric lag damper made from silicone rubber is built, and a series of experiments is conducted on it. Stress-strain curves of the elastomeric lag damper subjected to shear loading at different frequencies are obtained, and a finite element model is established based on the Burgers model. The simulation and test results show that this simple, linear model yields good predictions of damper energy dissipation and is adequate for predicting the stress-strain hysteresis loop within the operating frequency range for small-amplitude oscillations.
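
    For illustration, the sketch below evaluates the steady-state stress of a Burgers model (a Maxwell element in series with a Kelvin-Voigt element) under sinusoidal shear strain, from which the stress-strain hysteresis loop and the energy dissipated per cycle follow. The parameter values are placeholders, not the silicone-rubber constants fitted in the paper.

```python
# Burgers-model sketch under sinusoidal strain (illustrative parameters only).
import numpy as np

def burgers_complex_modulus(omega, E1, eta1, E2, eta2):
    """G*(w) = 1/J*(w), with J*(w) = 1/E1 + 1/(i w eta1) + 1/(E2 + i w eta2)."""
    J = 1.0 / E1 + 1.0 / (1j * omega * eta1) + 1.0 / (E2 + 1j * omega * eta2)
    return 1.0 / J

def hysteresis_loop(omega, gamma0, params, n=400):
    """Strain and stress over one cycle, plus the energy dissipated per cycle."""
    G = burgers_complex_modulus(omega, *params)
    t = np.linspace(0.0, 2.0 * np.pi / omega, n)
    strain = gamma0 * np.sin(omega * t)
    stress = gamma0 * (G.real * np.sin(omega * t) + G.imag * np.cos(omega * t))
    energy_per_cycle = np.pi * gamma0**2 * G.imag   # area enclosed by the loop
    return strain, stress, energy_per_cycle
```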

  18. Evaluating diagnosis-based case-mix measures: how well do they apply to the VA population?

    PubMed

    Rosen, A K; Loveland, S; Anderson, J J; Rothendler, J A; Hankin, C S; Rakovski, C C; Moskowitz, M A; Berlowitz, D R

    2001-07-01

    Diagnosis-based case-mix measures are increasingly used for provider profiling, resource allocation, and capitation rate setting. Measures developed in one setting may not adequately capture the disease burden in other settings. To examine the feasibility of adapting two such measures, Adjusted Clinical Groups (ACGs) and Diagnostic Cost Groups (DCGs), to the Department of Veterans Affairs (VA) population, a 60% random sample of veterans who used health care services during FY 1997 was obtained from VA inpatient and outpatient administrative databases. A split-sample technique was used to obtain a 40% sample (n = 1,046,803) for development and a 20% sample (n = 524,461) for validation. Concurrent ACG and DCG risk adjustment models, using FY 1997 diagnoses and demographics to predict FY 1997 utilization (ambulatory provider encounters, and service days, defined as the sum of a patient's inpatient and outpatient visit days), were fitted and cross-validated. Patients were classified into groupings that indicated a population with multiple psychiatric and medical diseases. Model R-squares explained between 6% and 32% of the variation in service utilization. Although reparameterized models did better in predicting utilization than models with external weights, none of the models was adequate in characterizing the entire population. For predicting service days, DCGs were superior to ACGs in most categories, whereas ACGs did better at discriminating among veterans who had the lowest utilization. Although "off-the-shelf" case-mix measures perform moderately well when applied to another setting, modifications may be required to accurately characterize a population's disease burden with respect to the resource needs of all patients.

  19. Disinfection by-products (DBPs) in drinking water and predictive models for their occurrence: a review.

    PubMed

    Sadiq, Rehan; Rodriguez, Manuel J

    2004-04-05

    Disinfection of drinking water reduces the risk of pathogenic infection but may pose a chemical threat to human health through disinfection residues and their by-products (DBPs) when organic and inorganic precursors are present in the water. More than 250 DBPs have been identified, but the behavioural profile of only approximately 20 DBPs is adequately known. In the last two decades, many modelling attempts have been made to predict the occurrence of DBPs in drinking water, with models developed from data generated in laboratory-scale and field-scale investigations. The objective of this paper is to review DBP predictive models, identify their advantages and limitations, and examine their potential applications as decision-making tools for water treatment analysis, epidemiological studies and regulatory concerns. The paper concludes with a discussion of future research needs in this area.
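
    Many of the empirical DBP occurrence models reviewed in work of this kind are log-linear regressions of a DBP concentration on precursor and treatment variables. The form below is a generic illustration with invented coefficients; it is not a model endorsed by this review.

```python
# Generic log-linear DBP occurrence model (hypothetical coefficients, illustration only).
import math

def ln_tthm(toc, cl2_dose, contact_time_h, temp_c, b=(-0.5, 1.0, 0.9, 0.3, 0.02)):
    """ln(TTHM) = b0 + b1 ln(TOC) + b2 ln(Cl2 dose) + b3 ln(contact time) + b4 T."""
    return (b[0] + b[1] * math.log(toc) + b[2] * math.log(cl2_dose)
            + b[3] * math.log(contact_time_h) + b[4] * temp_c)

predicted_tthm = math.exp(ln_tthm(toc=3.0, cl2_dose=2.5, contact_time_h=24.0, temp_c=15.0))
```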

  20. Statistical distribution of mechanical properties for three graphite-epoxy material systems

    NASA Technical Reports Server (NTRS)

    Reese, C.; Sorem, J., Jr.

    1981-01-01

    Graphite-epoxy composites are playing an increasing role as viable alternative materials in structural applications, necessitating thorough investigation into the predictability and reproducibility of their material strength properties. This investigation was concerned with tension, compression, and short-beam shear coupon testing of large samples from three different material suppliers to determine their statistical strength behavior. Statistical results indicate that a two-parameter Weibull distribution model provides better overall characterization of material behavior for the graphite-epoxy systems tested than does the standard Normal distribution model employed for most design work. While either a Weibull or a Normal distribution model provides adequate predictions for average strength values, the Weibull model provides better characterization in the lower tail region, where the predictions are of maximum design interest. The two sets of the same material were found to have essentially the same material properties, indicating that repeatability can be achieved.
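
    The lower-tail comparison described above can be reproduced in outline by fitting both distributions to coupon strengths and comparing a low-percentile prediction. The sketch below uses scipy with made-up strength values; it is an illustration, not the study's data or fitting procedure.

```python
# Two-parameter Weibull vs Normal lower-tail comparison (hypothetical data).
import numpy as np
from scipy import stats

strengths = np.array([482., 455., 470., 498., 463., 441., 476., 489., 451., 468.])  # made-up coupon strengths

shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)   # two-parameter fit (location fixed at zero)
mu, sigma = strengths.mean(), strengths.std(ddof=1)

p = 0.01                                                       # lower-tail level of design interest
weibull_q = stats.weibull_min.ppf(p, shape, loc=0, scale=scale)
normal_q = stats.norm.ppf(p, loc=mu, scale=sigma)
print(f"1st-percentile strength: Weibull {weibull_q:.1f} vs Normal {normal_q:.1f}")
```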

  1. A new framework to enhance the interpretation of external validation studies of clinical prediction models.

    PubMed

    Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M

    2015-03-01

    It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from "different but related" samples as compared with that of the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Predicting forest dieback in Maine, USA: a simple model based on soil frost and drought

    Treesearch

    Allan N.D. Auclair; Warren E. Heilman; Blondel Brinkman

    2010-01-01

    Northern hardwoods are shallow rooted, with roots that remain winter active and are minimally frost hardened; dieback is a winter freezing injury to roots, incited by frost penetration in the absence of adequate snow cover and exacerbated by summer drought. High soil water content greatly increases the conduction of frost. We develop a model based on the sum of z-scores of soil...

  3. Developing strategies to initialize landscape-scale vegetation maps from FIA data to enhance resolution of individual species-size cohort representation in the landscape disturbance model SIMPPLLE

    Treesearch

    Jacob John Muller

    2014-01-01

    The ability of forest resource managers to understand and anticipate landscape-scale change in composition and structure relies upon an adequate characterization of the current forest composition and structure of various patches (or stands), along with the capacity of forest landscape models (FLMs) to predict patterns of growth, succession, and disturbance at multiple...

  4. Who or What Contributes to Student Satisfaction in Different Blended Learning Modalities?

    ERIC Educational Resources Information Center

    Diep, Anh-Nguyet; Zhu, Chang; Struyven, Katrien; Blieck, Yves

    2017-01-01

    Different blended learning (BL) modalities and the interaction effect between human and technological factors on student satisfaction need to be adequately researched to shed more light on successful BL implementation. The objective of the present article is three-fold: (1) to present a model to predict student satisfaction with BL programs, (2) to…

  5. Attitudes and Beliefs of Adolescent Experimental Smokers: A Smoking Prevention Perspective.

    ERIC Educational Resources Information Center

    Wang, Min Qi; And Others

    1996-01-01

    Examines the relationships of smoking-related beliefs, attitudes, and smoking status, with a focus on experimental smoking. Survey results of 9,774 respondents suggest that attitude and belief variables can adequately predict smoking stages of adolescents as defined by the stage model of smoking acquisition. Argues that intervention with…

  6. Simulation of κ-Carbide Precipitation Kinetics in Aged Low-Density Fe-Mn-Al-C Steels and Its Effects on Strengthening

    NASA Astrophysics Data System (ADS)

    Lee, Jaeeun; Park, Siwook; Kim, Hwangsun; Park, Seong-Jun; Lee, Keunho; Kim, Mi-Young; Madakashira, Phaniraj P.; Han, Heung Nam

    2018-03-01

    Fe-Al-Mn-C alloy systems are low-density austenite-based steels that show excellent mechanical properties. After aging such steels at appropriate temperatures for an adequate time, nano-scale precipitates such as κ-carbide form, which have profound effects on the mechanical properties. Therefore, it is important to predict the amount and size of the generated κ-carbide precipitates in order to control the mechanical properties of low-density steels. In this study, the microstructure and mechanical properties of aged low-density austenitic steel were characterized. Thermo-kinetic simulations of the aging process were used to predict the size and phase fraction of κ-carbide after different aging periods, and these results were validated by comparison with experimental data derived from dark-field transmission electron microscopy images. Based on these results, models for precipitation strengthening based on different mechanisms were assessed. The measured increase in the strength of aged specimens was compared with that calculated from the models to determine the exact precipitation strengthening mechanism.

  7. Dyadic associations between cancer-related stress and fruit and vegetable consumption among colorectal cancer patients and their family caregivers.

    PubMed

    Shaffer, Kelly M; Kim, Youngmee; Llabre, Maria M; Carver, Charles S

    2016-02-01

    This study examined how stress from cancer affects fruit and vegetable consumption (FVC) in cancer patients and their family caregivers during the year following diagnosis. Colorectal cancer patients and their caregivers (92 dyads) completed questionnaires at two (T1), six (T2), and 12 months post-diagnosis (T3). Individuals reported perceived cancer-related stress (CRS) at T1 and days of adequate FVC at T1 through T3. Both patients and caregivers reported inadequate FVC during the first year post-diagnosis. Latent growth modeling with actor-partner interdependence modeling revealed that, at T1, one's own greater CRS was associated with one's partner having fewer concurrent days of adequate FVC (ps = .01). Patients' greater CRS predicted their own more pronounced rebound pattern in FVC (p = .01); both patients' and caregivers' CRS marginally predicted their partners' change in FVC (p = .09). Findings suggest that perceived stress from cancer hinders FVC around the diagnosis, but motivates positive dietary changes by the end of the first year.

  8. Health-based risk adjustment: is inpatient and outpatient diagnostic information sufficient?

    PubMed

    Lamers, L M

    Adequate risk adjustment is critical to the success of market-oriented health care reforms in many countries. Currently used risk adjusters based on demographic and diagnostic cost groups (DCGs) do not reflect expected costs accurately. This study examines the simultaneous predictive accuracy of inpatient and outpatient morbidity measures and prior costs. DCGs, pharmacy cost groups (PCGs), and prior year's costs improve the predictive accuracy of the demographic model substantially. DCGs and PCGs seem complementary in their ability to predict future costs. However, this study shows that the combination of DCGs and PCGs still leaves room for cream skimming.

  9. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast‑growing Eucalyptus forest plantation using airborne LiDAR data

    Treesearch

    Carlos Alberto Silva; Andrew Thomas Hudak; Carine Klauberg; Lee Alexandre Vierling; Carlos Gonzalez‑Benecke; Samuel de Padua Chaves Carvalho; Luiz Carlos Estraviz Rodriguez; Adrian Cardil

    2017-01-01

    LiDAR measurements can be used to predict and map aboveground carbon (AGC) across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m-2 and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations...

  10. Runoff as a factor in USLE/RUSLE technology

    NASA Astrophysics Data System (ADS)

    Kinnell, Peter

    2014-05-01

    Modelling erosion for prediction purposes started with the development of the Universal Soil Loss Equation (USLE), the focus of which was the prediction of long-term (~20 year) average annual soil loss from field-sized areas. That purpose has been maintained in the subsequent revision, RUSLE, the most widely used erosion prediction model in the world. The inability to predict short-term soil loss saw the development of so-called process-based models such as WEPP and EUROSEM, which focused on predicting event erosion but failed to improve the prediction of long-term erosion, where the RUSLE worked well. One of the features of erosion recognised in the so-called process-based models is that runoff is a primary factor in rainfall erosion, and some proposed modifications of the USLE/RUSLE model have included runoff as an independent factor in determining event erosivity. However, these models have ignored fundamental mathematical rules. The USLE-M, which replaces the EI30 index by the product of the runoff ratio and EI30, was developed from the concept that soil loss is the product of runoff and sediment concentration, and it operates in a way that obeys the mathematical rules upon which the USLE/RUSLE model was based. It accounts for event soil loss better than the EI30 index where runoff values are known or predicted adequately. RUSLE2 now includes a capacity to model runoff-driven erosion.
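
    A minimal sketch of the USLE-M event calculation described above follows: the event erosivity is the runoff ratio multiplied by EI30, and the remaining factors keep their usual roles (with the soil erodibility recalibrated for the USLE-M). All factor values below are placeholders.

```python
# USLE-M event soil loss sketch: A_e = (Q_R * EI30) * K_UM * LS * C * P (placeholder values).
def usle_m_event_soil_loss(ei30, runoff_ratio, k_um, ls, c, p):
    erosivity = runoff_ratio * ei30          # runoff-weighted event erosivity
    return erosivity * k_um * ls * c * p

# Same storm, contrasting an event with appreciable runoff against one with little runoff.
wet = usle_m_event_soil_loss(ei30=120.0, runoff_ratio=0.45, k_um=0.03, ls=1.2, c=0.2, p=1.0)
dry = usle_m_event_soil_loss(ei30=120.0, runoff_ratio=0.05, k_um=0.03, ls=1.2, c=0.2, p=1.0)
```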

  11. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-01-20

    Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

  12. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-02-01

    Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Stichting European Society for Clinical Investigation Journal Foundation.

  13. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-01-06

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

  14. Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD)

    PubMed Central

    Reitsma, Johannes B.; Altman, Douglas G.; Moons, Karel G.M.

    2015-01-01

    Background— Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. Methods— The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. Results— The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. Conclusions— To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). PMID:25561516

  15. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD statement

    PubMed Central

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-01-01

    Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). PMID:25562432

  16. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-02-01

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Royal College of Obstetricians and Gynaecologists.

  17. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. The TRIPOD Group.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-01-13

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 The Authors.

  18. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-01-06

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

  19. Transparent reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-02-01

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Improving Flash Flood Prediction in Multiple Environments

    NASA Astrophysics Data System (ADS)

    Broxton, P. D.; Troch, P. A.; Schaffner, M.; Unkrich, C.; Goodrich, D.; Wagener, T.; Yatheendradas, S.

    2009-12-01

    Flash flooding is a major concern in many fast-responding headwater catchments. There are many efforts to model and to predict these flood events, though it is not currently possible to adequately predict the nature of flash flood events with a single model; furthermore, many of these efforts do not even consider snow, which can, by itself or in combination with rainfall events, cause destructive floods. The current research is aimed at broadening the applicability of flash flood modeling. Specifically, we will take a state-of-the-art flash flood model designed to work with warm-season precipitation in arid environments, the KINematic runoff and EROSion model (KINEROS2), and combine it with a continuous subsurface flow model and an energy-balance snow model. This should improve its predictive capacity in humid environments where lateral subsurface flow significantly contributes to streamflow, and it will make possible the prediction of flooding events that involve rain-on-snow or rapid snowmelt. By modeling changes in the hydrologic state of a catchment before a flood begins, we can also better understand the factors, or combinations of factors, that are necessary to produce large floods. Broadening the applicability of an already state-of-the-art flash flood model such as KINEROS2 is logical because flash floods can occur in all types of environments, and it may lead to better predictions, which are necessary to preserve life and property.

  1. Asteroid Bennu Temperature Maps for OSIRIS-REx Spacecraft and Instrument Thermal Analyses

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.; Emery, Josh; Delbo, Marco

    2014-01-01

    A thermophysical model has been developed to generate asteroid Bennu surface temperature maps for OSIRIS-REx spacecraft and instrument thermal design and analyses at the Critical Design Review (CDR). Two-dimensional temperature maps for worst hot and worst cold cases are used in Thermal Desktop to assure adequate thermal design margins. To minimize the complexity of the Bennu geometry in Thermal Desktop, it is modeled as a sphere instead of the radar shape. The post-CDR updated thermal inertia and a modified approach show that the new surface temperature predictions are more benign. Therefore the CDR Bennu surface temperature predictions are conservative.

  2. Elastohydrodynamic film thickness model for heavily loaded contacts

    NASA Technical Reports Server (NTRS)

    Loewenthal, S. H.; Parker, R. J.; Zaretsky, E. V.

    1973-01-01

    An empirical elastohydrodynamic (EHD) film thickness formula for predicting the minimum film thickness occurring within heavily loaded contacts (maximum Hertz stresses above 150,000 psi) was developed. The formula was based upon X-ray film thickness measurements made with synthetic paraffinic, fluorocarbon, Type II ester and polyphenyl ether fluids covering a wide range of test conditions. Comparisons were made between predictions from an isothermal EHD theory and the test data. The deduced relationship was found to adequately reflect the high-load dependence exhibited by the measured data. The effects of contact geometry, material and lubricant properties on the form of the empirical model are also discussed.

  3. Challenges in predicting climate change impacts on pome fruit phenology

    NASA Astrophysics Data System (ADS)

    Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W. R.

    2014-08-01

    Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in predicted full bloom timing. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results were mixed from the sequential chill-growth model. To further investigate how the sequential chill-growth model reacts under climate perturbed conditions, four simulations were created to represent a wider range of species physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period was common to most results. The relative dominance of the chill or growth component tended to predict whether full bloom advanced, remained similar or was delayed with climate warming. The simplistic structure of the fixed thermal time model and the exclusion of winter chill conditions in this method indicate it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations in using this model for impact analyses remain. The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate perturbed conditions with greater model development needed.

  4. Intensity dependence of focused ultrasound lesion position

    NASA Astrophysics Data System (ADS)

    Meaney, Paul M.; Cahill, Mark D.; ter Haar, Gail R.

    1998-04-01

    Knowledge of the spatial distribution of intensity loss from an ultrasonic beam is critical to predicting lesion formation in focused ultrasound surgery (FUS). To date, most models have used linear propagation to predict the intensity profiles needed to compute the temporally varying temperature distributions. These can be used to compute thermal dose contours that can, in turn, be used to predict the extent of thermal damage. However, these simulations fail to adequately describe the abnormal lesion formation behavior observed in in vitro experiments in which the transducer drive levels are varied over a wide range. In these experiments, the extent of thermal damage has been observed to move significantly closer to the transducer with increasing drive levels than would be predicted using linear propagation models. The simulations described herein utilize the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear propagation model, with the parabolic approximation for highly focused ultrasound waves, to demonstrate that the positions of the peak intensity and the lesion do indeed move closer to the transducer. This illustrates that nonlinear effects must be considered for accurate modeling of heating during FUS.

  5. Modeling of Pedestrian Flows Using Hybrid Models of Euler Equations and Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Bärwolff, Günter; Slawig, Thomas; Schwandt, Hartmut

    2007-09-01

    In recent years, various systems have been developed for controlling, planning and predicting the traffic of persons and vehicles, particularly under security aspects. Going beyond pure counting and statistical models, approaches based on well-known concepts originally developed in very different research areas, namely continuum mechanics and computer science, have been found to be very adequate and accurate. In the present paper, we outline a continuum mechanical approach for the description of pedestrian flow.

  6. [A prognostic model for assessment of outcome of root canal treatment in teeth with pulpitis or apical periodontitis].

    PubMed

    Zhang, M M; Zheng, Y D; Liang, Y H

    2018-02-18

    To present a prognostic model for evaluating the outcome of root canal treatment in teeth with pulpitis or apical periodontitis 2 years after treatment. The implementation of this study was based on a retrospective study of the 2-year outcome of root canal treatment. A cohort of 360 teeth that received treatment and review was chosen to make up the total sample. In the study, 143 teeth with vital pulp and 217 teeth with apical periodontitis were included. About 67% of the sample was selected randomly to derive a training data set for modeling, and the remainder was used as a validation data set for testing. Logistic regression models were used to produce the prognostic models. The dependent variable was defined as absence of periapical lesion or reduction of periapical lesion. The predictability of the models was evaluated by the area under the receiver-operating characteristic (ROC) curve (AUC). Four predictors were included in model one (absence of apical lesion): pre-operative periapical radiolucency, canal curvature, and density and apical extent of root fillings. The AUC was 0.802 (95%CI: 0.744-0.859), and the AUC for the testing data was 0.688. Only the density and apical extent of root fillings were included in model two (reduction of apical lesion). The AUCs for the training and testing data were 0.734 (95%CI: 0.612-0.856) and 0.681, respectively. As predicted by model one, the probability of absence of periapical lesion 2 years after endodontic treatment was 90% in pulpitis teeth with severe root-canal curvature and adequate root canal fillings, but 51% in teeth with apical periodontitis. When prognostic model two was used for prediction in teeth with apical periodontitis, the probabilities of detecting lesion reduction with adequate or inadequate root fillings were 95% and 39%, respectively, 2 years after treatment. The pre-operative periapical status, canal curvature and quality of root canal treatment could be used to predict the 2-year outcome of root canal treatment.
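
    The derivation-and-validation workflow above, logistic regression on a roughly 67% training split with discrimination checked by AUC, can be sketched as below. The column names are hypothetical, and the code is an outline rather than the authors' analysis.

```python
# Logistic prognostic model with train/test AUC (hypothetical predictor and outcome names).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def fit_prognostic_model(df: pd.DataFrame):
    X = df[["preop_radiolucency", "canal_curvature", "filling_density", "apical_extent"]]
    y = df["lesion_absent_2yr"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.67, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc_train = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
    auc_test = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return model, auc_train, auc_test
```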

  7. The strengths and weaknesses of inverted pendulum models of human walking.

    PubMed

    McGrath, Michael; Howard, David; Baker, Richard

    2015-02-01

    An investigation into the kinematic and kinetic predictions of two "inverted pendulum" (IP) models of gait was undertaken. The first model consisted of a single leg, with anthropometrically correct mass and moment of inertia, and a point mass at the hip representing the rest of the body. A second model incorporating the physiological extension of a head-arms-trunk (HAT) segment, held upright by an actuated hip moment, was developed for comparison. Simulations were performed, using both models, and quantitatively compared with empirical gait data. There was little difference between the two models' predictions of kinematics and ground reaction force (GRF). The models agreed well with empirical data through mid-stance (20-40% of the gait cycle) suggesting that IP models adequately simulate this phase (mean error less than one standard deviation). IP models are not cyclic, however, and cannot adequately simulate double support and step-to-step transition. This is because the forces under both legs augment each other during double support to increase the vertical GRF. The incorporation of an actuated hip joint was the most novel change and added a new dimension to the classic IP model. The hip moment curve produced was similar to those measured during experimental walking trials. As a result, it was interpreted that the primary role of the hip musculature in stance is to keep the HAT upright. Careful consideration of the differences between the models throws light on what the different terms within the GRF equation truly represent. Copyright © 2014 Elsevier B.V. All rights reserved.
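
    The single-stance behaviour of the point-mass IP model discussed above can be reproduced with a few lines of numerical integration; the sketch below integrates the pendulum equation about the stance foot and returns the vertical ground reaction force. The parameter values and the stance-termination rule are illustrative choices, not those of the paper.

```python
# Point-mass inverted-pendulum stance sketch (illustrative parameters).
import numpy as np

def ip_stance(theta0=-0.3, omega0=1.2, m=70.0, L=0.9, g=9.81, dt=1e-3):
    """Integrate theta'' = (g/L) sin(theta); theta is the leg angle from vertical."""
    ts, thetas, grfs = [], [], []
    theta, omega, t = theta0, omega0, 0.0
    while abs(theta) <= abs(theta0):                                      # stance ends when the leg is as inclined as at contact
        grf_v = m * np.cos(theta) * (g * np.cos(theta) - L * omega**2)    # vertical GRF on the constrained point mass
        ts.append(t); thetas.append(theta); grfs.append(grf_v)
        omega += (g / L) * np.sin(theta) * dt                             # gravity torque about the stance foot
        theta += omega * dt
        t += dt
    return np.array(ts), np.array(thetas), np.array(grfs)
```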

  8. Using modelling to predict impacts of sea level rise and increased turbidity on seagrass distributions in estuarine embayments

    NASA Astrophysics Data System (ADS)

    Davis, Tom R.; Harasti, David; Smith, Stephen D. A.; Kelaher, Brendan P.

    2016-11-01

    Climate change induced sea level rise will affect shallow estuarine habitats, which are already under threat from multiple anthropogenic stressors. Here, we present the results of modelling to predict potential impacts of climate change associated processes on seagrass distributions. We use a novel application of relative environmental suitability (RES) modelling to examine relationships between variables of physiological importance to seagrasses (light availability, wave exposure, and current flow) and seagrass distributions within 5 estuarine embayments. Models were constructed separately for Posidonia australis and Zostera muelleri subsp. capricorni using seagrass data from Port Stephens estuary, New South Wales, Australia. Subsequent testing of models used independent datasets from four other estuarine embayments (Wallis Lake, Lake Illawarra, Merimbula Lake, and Pambula Lake) distributed along 570 km of the east Australian coast. Relative environmental suitability models provided adequate predictions for seagrass distributions within Port Stephens and the other estuarine embayments, indicating that they may have broad regional application. Under the predictions of RES models, both sea level rise and increased turbidity are predicted to cause substantial seagrass losses in deeper estuarine areas, resulting in a net shoreward movement of seagrass beds. Seagrass species distribution models developed in this study provide a valuable tool to predict future shifts in estuarine seagrass distributions, allowing identification of areas for protection, monitoring and rehabilitation.

  9. Behavior of Americium in Simulated Wounds in Nonhuman Primates

    DOE PAGES

    Poudel, Deepesh; Guilmette, Raymond A.; Bertelli, Luiz; ...

    2017-06-01

    An americium solution injected intramuscularly into several nonhuman primates (NHPs) was found to behave differently than predicted by the wound models described in the NCRP Report 156. This was because the injection was made along with a citrate solution, which is known to be more soluble than chlorides, oxides, or nitrates on which the NCRP Report was based. We developed a multi-exponential wound model specific to the injected americium solution based on the retention in the intramuscular sites. The model was coupled with the americium systemic model to interpret the urinary excretion data and assess the intake, and it was determined that the models were adequate to predict early urinary excretion in most cases but unable to predict late urinary excretion. This was attributed to the differences in the systemic handling of americium between humans and nonhuman primates. Furthermore, information on the type of wounds, solubility, particle size, mass, chemical form, etc., should always be considered when performing wound dosimetry.

  10. Three-dimensional effects on pure tone fan noise due to inflow distortion. [rotor blade noise prediction

    NASA Technical Reports Server (NTRS)

    Kobayashi, H.

    1978-01-01

    Two-dimensional, quasi-three-dimensional and three-dimensional theories for the prediction of pure tone fan noise due to the interaction of inflow distortion with a subsonic annular blade row were studied with the aid of an unsteady three-dimensional lifting surface theory. The effects of compact and noncompact source distributions on pure tone fan noise in an annular cascade were investigated. Numerical results show that the strip theory and quasi-three-dimensional theory are reasonably adequate for fan noise prediction. The quasi-three-dimensional method is more accurate for acoustic power and modal structure prediction, with an acoustic power estimation error of about plus or minus 2 dB.

  11. One-dimensional wave bottom boundary layer model comparison: specific eddy viscosity and turbulence closure models

    USGS Publications Warehouse

    Puleo, J.A.; Mouraenko, O.; Hanes, D.M.

    2004-01-01

    Six one-dimensional-vertical wave bottom boundary layer models are analyzed based on different methods for estimating the turbulent eddy viscosity: laminar, linear, parabolic, k (one-equation turbulence closure), k-ε (two-equation turbulence closure), and k-ω (two-equation turbulence closure). Resultant velocity profiles, bed shear stresses, and turbulent kinetic energy are compared with laboratory data of oscillatory flow over smooth and rough beds. Bed shear stresses for the smooth-bed case were most closely predicted by the k-ω model. Normalized errors between model predictions and velocity profile measurements, collected over the entire computational domain at 15° intervals for one-half of a wave cycle, show that the linear model was the most accurate overall; the least accurate were the laminar and k-ε models. Normalized errors between model predictions and turbulent kinetic energy (TKE) profiles showed that the k-ω model was most accurate. Based on these findings, when the smallest overall velocity profile prediction error is required, the processing requirements and error analysis suggest that the linear eddy viscosity model is adequate. However, if accurate estimates of bed shear stress and TKE are required, then, of the models tested, the k-ω model should be used.
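
    Two of the simpler closures compared above are algebraic eddy viscosity profiles. The sketch below evaluates a linear profile and a parabolic profile that vanishes at the top of the wave boundary layer; the friction velocity and boundary layer thickness are illustrative values only.

```python
# Linear vs parabolic eddy viscosity profiles (illustrative values).
import numpy as np

KAPPA = 0.41                                   # von Karman constant

def nu_t_linear(z, u_star):
    return KAPPA * u_star * z

def nu_t_parabolic(z, u_star, delta):
    return KAPPA * u_star * z * np.maximum(1.0 - z / delta, 0.0)

z = np.linspace(1e-4, 0.05, 100)               # height above the bed (m)
nu_lin = nu_t_linear(z, u_star=0.02)
nu_par = nu_t_parabolic(z, u_star=0.02, delta=0.05)
```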

  12. A salaried compensation model for postanesthesia nurses.

    PubMed

    Mushala, M E; Henderson, M A

    1995-08-01

    Health care organizations involved in innovative and creative work redesign projects may find traditional pay structures inadequate to meet the needs of the changing environment. The idea of salaried compensation for registered nurses is not unprecedented. However, few salaried compensation models for nurses are described in the literature. This article presents a model that we believe will be of particular interest to nurses in PACUs, because its design allows for adequate call coverage plus flexibility in scheduling. In addition, this compensation model eliminates incidental overtime, thus allowing for a more predictable salary budget.

  13. Evaluation of a Wake Vortex Upset Model Based on Simultaneous Measurements of Wake Velocities and Probe-Aircraft Accelerations

    NASA Technical Reports Server (NTRS)

    Short, B. J.; Jacobsen, R. A.

    1979-01-01

    Simultaneous measurements were made of the upset responses experienced and the wake velocities encountered by an instrumented Learjet probe aircraft behind a Boeing 747 vortex-generating aircraft. The vortex-induced angular accelerations experienced could be predicted within 30% by a mathematical upset response model when the characteristics of the wake were well represented by the vortex model. The vortex model used in the present study adequately represented the wake flow field when the vortices dissipated symmetrically and only one vortex pair existed in the wake.

  14. Shape-based approach for the estimation of individual facial mimics in craniofacial surgery planning

    NASA Astrophysics Data System (ADS)

    Gladilin, Evgeny; Zachow, Stefan; Deuflhard, Peter; Hege, Hans-Christian

    2002-05-01

    Besides the static soft tissue prediction, the estimation of basic facial emotion expressions is another important criterion for the evaluation of craniofacial surgery planning. For a realistic simulation of facial mimics, an adequate biomechanical model of soft tissue including the mimic musculature is needed. In this work, we present an approach for the modeling of arbitrarily shaped muscles and the estimation of basic individual facial mimics, which is based on the geometrical model derived from the individual tomographic data and the general finite element modeling of soft tissue biomechanics.

  15. An analytical technique for predicting the characteristics of a flexible wing equipped with an active flutter-suppression system and comparison with wind-tunnel data

    NASA Technical Reports Server (NTRS)

    Abel, I.

    1979-01-01

    An analytical technique for predicting the performance of an active flutter-suppression system is presented. This technique is based on the use of an interpolating function to approximate the unsteady aerodynamics. The resulting equations are formulated in terms of linear, ordinary differential equations with constant coefficients. This technique is then applied to an aeroelastic model wing equipped with an active flutter-suppression system. Comparisons between wind-tunnel data and analysis are presented for the wing both with and without active flutter suppression. Results indicate that the wing flutter characteristics without flutter suppression can be predicted very well but that a more adequate model of wind-tunnel turbulence is required when the active flutter-suppression system is used.

  16. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): the TRIPOD Statement.

    PubMed

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-02-01

    Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. A complete checklist is available at http://www.tripod-statement.org. © 2015 American College of Physicians.

  17. Forecasting municipal solid waste generation using prognostic tools and regression analysis.

    PubMed

    Ghinea, Cristina; Drăgoi, Elena Niculina; Comăniţă, Elena-Diana; Gavrilescu, Marius; Câmpean, Teofil; Curteanu, Silvia; Gavrilescu, Maria

    2016-11-01

    For adequate planning of waste management systems, the accurate forecast of waste generation is an essential step, since various factors can affect waste trends. Predictive and prognostic models are useful tools in this respect, providing reliable support for decision-making processes. In this paper, indicators such as the number of residents, population age, urban life expectancy, and total municipal solid waste were used as input variables in prognostic models to predict the amounts of solid waste fractions. We applied the Waste Prognostic Tool, regression analysis, and time series analysis to forecast municipal solid waste generation and composition, considering the Iasi, Romania, case study. Regression equations were determined for six solid waste fractions (paper, plastic, metal, glass, biodegradable, and other waste). Accuracy measures were calculated, and the results showed that the S-curve trend model is the most suitable for municipal solid waste (MSW) prediction. Copyright © 2016 Elsevier Ltd. All rights reserved.
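
    Because the abstract identifies an S-curve trend as the best-performing model but does not reproduce its equation, the sketch below shows one common choice, a logistic trend fitted by least squares; the function name, parameter names, and data are illustrative assumptions only:

        # Minimal sketch: fitting a logistic ("S-curve") trend to annual MSW totals.
        import numpy as np
        from scipy.optimize import curve_fit

        def s_curve(t, upper, rate, midpoint):
            """Logistic trend with asymptote `upper`, growth `rate`, inflection year `midpoint`."""
            return upper / (1.0 + np.exp(-rate * (t - midpoint)))

        years = np.arange(2005, 2016)                        # hypothetical observation years
        waste = np.array([61, 64, 68, 71, 75, 78, 80, 82,    # hypothetical MSW totals, kt/year
                          83, 84, 85], dtype=float)

        params, _ = curve_fit(s_curve, years, waste, p0=[90.0, 0.3, 2010.0])
        print("fitted parameters:", params, "forecast for 2020:", s_curve(2020, *params))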

  18. Nonlinear Unsteady Aerodynamic Modeling Using Wind Tunnel and Computational Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.

    2016-01-01

    Extensions to conventional aircraft aerodynamic models are required to adequately predict responses when nonlinear unsteady flight regimes are encountered, especially at high incidence angles and under maneuvering conditions. For a number of reasons, such as loss of control, both military and civilian aircraft may extend beyond normal and benign aerodynamic flight conditions. In addition, military applications may require controlled flight beyond the normal envelope, and civilian flight may require adequate recovery or prevention methods from these adverse conditions. These requirements have led to the development of more general aerodynamic modeling methods and provided impetus for researchers to improve both techniques and the degree of collaboration between analytical and experimental research efforts. In addition to more general mathematical model structures, dynamic test methods have been designed to provide sufficient information to allow model identification. This paper summarizes research to develop a modeling methodology appropriate for modeling aircraft aerodynamics that include nonlinear unsteady behaviors using both experimental and computational test methods. This work was done at Langley Research Center, primarily under the NASA Aviation Safety Program, to address aircraft loss of control, prevention, and recovery aerodynamics.

  19. Dental caries and beverage consumption in young children.

    PubMed

    Marshall, Teresa A; Levy, Steven M; Broffitt, Barbara; Warren, John J; Eichenberger-Gilmore, Julie M; Burns, Trudy L; Stumbo, Phyllis J

    2003-09-01

    Dental caries is a common, chronic disease of childhood. The impact of contemporary changes in beverage patterns, specifically decreased milk intakes and increased 100% juice and soda pop intakes, on dental caries in young children is unknown. We describe associations among caries experience and intakes of dairy foods, sugared beverages, and nutrients and overall diet quality in young children. Subjects (n = 642) are members of the Iowa Fluoride Study, a cohort followed from birth. Food and nutrient intakes were obtained from 3-day diet records analyzed at 1 (n = 636), 2 (n = 525), 3 (n = 441), 4 (n = 410), and 5 (n = 417) years and cumulatively for 1 through 5 (n = 396) years of age. Diet quality was defined by nutrient adequacy ratios (NARs) and calculated as the ratio of nutrient intake to Recommended Dietary Allowance/Adequate Intake. Caries were identified during dental examinations by 2 trained and calibrated dentists at 4 to 7 years of age. Examinations were visual, but a dental explorer was used to confirm questionable findings. Caries experience was assessed at both the tooth and the surface levels. Data were analyzed using SAS. The Wilcoxon rank sum test was used to compare food intakes, nutrient intakes, and NARs of subjects with and without caries experience. Logistic and Tobit regression analyses were used to identify associations among diet variables and caries experience and to develop models to predict caries experience. Not all relationships between food intakes and NARs and caries experience were linear; therefore, categorical variables were used to develop models to predict caries experience. Food and beverage intakes were categorized as none, low, and high intakes, and NARs were categorized as inadequate, low adequate, and high adequate. Subjects with caries had lower median intakes of milk at 2 and 3 years of age than subjects without caries. Subjects with caries had higher median intakes of regular (sugared) soda pop at 2, 3, 4, and 5 years and for 1 through 5 years; regular beverages from powder at 1, 4, and 5 years and for 1 through 5 years; and total sugared beverages at 4 and 5 years than subjects without caries. Logistic regression models were developed for exposure variables at 1, 2, 3, 4, and 5 years and for 1 through 5 years to predict any caries experience at 4 to 7 years of age. Age at dental examination was retained in models at all ages. Children with 0 intake (vs low and high intakes) of regular beverages from powder at 1 year, regular soda pop at 2 and 3 years, and sugar-free beverages from powder at 5 years had a decreased risk of caries experience. High intakes of regular beverages from powder at 4 and 5 years and for 1 through 5 years and regular soda pop at 5 years and for 1 through 5 years were associated with significantly increased odds of caries experience relative to subjects with none or low intakes. Low (vs none or high) intakes of 100% juice at 5 years were associated with decreased caries experience. In general, inadequate intakes (vs low adequate or high adequate intakes) of nutrients (eg, riboflavin, copper, vitamin D, vitamin B(12)) were associated with increased caries experience and low adequate intakes (vs inadequate or high adequate intakes) of nutrients (eg, vitamin B(12), vitamin C) were associated with decreased caries experience. An exception was vitamin E; either low or high adequate intakes were associated with increased caries experience at various ages. 
Multivariable Tobit regression models were developed for 1- through 5-year exposure variables to predict the number of tooth surfaces with caries experience at 4 to 7 years of age. Age at dental examination showed a significant positive association and fluoride exposure showed a significant negative association with the number of tooth surfaces with caries experience in the final model. Low intakes of nonmilk dairy foods (vs high intakes; all subjects had some nonmilk dairy intakes) and high adequate intakes of vitamin C (vs inadequate and low adequate intakes) were associated with fewer tooth surfaces having caries experience. High intakes of regular soda pop (vs none and low intakes) were associated with more tooth surfaces having caries experience. Results of our study suggest that contemporary changes in beverage patterns, particularly the increase in soda pop consumption, have the potential to increase dental caries rates in children. Consumption of regular soda pop, regular powdered beverages, and, to a lesser extent, 100% juice was associated with increased caries risk. Milk had a neutral association with caries. Associations between different types of sugared beverages and caries experience were not equivalent, which could be attributable to the different sugar compositions of the beverages or different roles in the diet. Our data support contemporary dietary guidelines for children: consume 2 or more servings of dairy foods daily, limit intake of 100% juice to 4 to 6 oz daily, and restrict other sugared beverages to occasional use. Pediatricians, pediatric nurse practitioners, and dietitians are in a position to support pediatric dentists in providing preventive guidance to parents of young children.

  20. Use of statistical and neural net approaches in predicting toxicity of chemicals.

    PubMed

    Basak, S C; Grunwald, G D; Gute, B D; Balasubramanian, K; Opitz, D

    2000-01-01

    Hierarchical quantitative structure-activity relationships (H-QSAR) have been developed as a new approach in constructing models for estimating physicochemical, biomedicinal, and toxicological properties of interest. This approach uses increasingly more complex molecular descriptors in a graduated approach to model building. In this study, statistical and neural network methods have been applied to the development of H-QSAR models for estimating the acute aquatic toxicity (LC50) of 69 benzene derivatives to Pimephales promelas (fathead minnow). Topostructural, topochemical, geometrical, and quantum chemical indices were used as the four levels of the hierarchical method. It is clear from both the statistical and neural network models that topostructural indices alone cannot adequately model this set of congeneric chemicals. Not surprisingly, topochemical indices greatly increase the predictive power of both statistical and neural network models. Quantum chemical indices also add significantly to the modeling of this set of acute aquatic toxicity data.

  1. A physiologically based pharmacokinetic model for atrazine and its main metabolites in the adult male C57BL/6 mouse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin Zhoumeng; Interdisciplinary Toxicology Program, University of Georgia, Athens, GA 30602; Fisher, Jeffrey W.

    Atrazine (ATR) is a chlorotriazine herbicide that is widely used and relatively persistent in the environment. In laboratory rodents, excessive exposure to ATR is detrimental to the reproductive, immune, and nervous systems. To better understand the toxicokinetics of ATR and to fill the need for a mouse model, a physiologically based pharmacokinetic (PBPK) model for ATR and its main chlorotriazine metabolites (Cl-TRIs) desethyl atrazine (DE), desisopropyl atrazine (DIP), and didealkyl atrazine (DACT) was developed for the adult male C57BL/6 mouse. Taking advantage of all relevant and recently made available mouse-specific data, a flow-limited PBPK model was constructed. The ATR and DACT sub-models included blood, brain, liver, kidney, richly and slowly perfused tissue compartments, as well as plasma protein binding and red blood cell binding, whereas the DE and DIP sub-models were constructed as simple five-compartment models. The model adequately simulated plasma levels of ATR and Cl-TRIs and urinary dosimetry of Cl-TRIs at four single oral dose levels (250, 125, 25, and 5 mg/kg). Additionally, the model adequately described the dose dependency of brain and liver ATR and DACT concentrations. Cumulative urinary DACT amounts were accurately predicted across a wide dose range, suggesting the model's potential use for extrapolation to human exposures by performing reverse dosimetry. The model was validated using previously reported data for plasma ATR and DACT in mice and rats. Overall, besides being the first mouse PBPK model for ATR and its Cl-TRIs, this model, by analogy, provides insights into tissue dosimetry for rats. The model could be used in tissue dosimetry prediction and as an aid in the exposure assessment to this widely used herbicide.
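
    For readers unfamiliar with the structure, a flow-limited (perfusion-limited) PBPK tissue compartment is typically a simple mass balance of the form below; this is a generic sketch of the model class, not the specific equations of the mouse model described here:

        V_T \frac{dC_T}{dt} = Q_T \left( C_{art} - \frac{C_T}{P_T} \right)

    where V_T and C_T are the tissue volume and concentration, Q_T is the blood flow to the tissue, C_art is the arterial blood concentration, and P_T is the tissue:blood partition coefficient.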

  2. Directional stability of crack propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Streit, R.D.; Finnie, I.

    Despite many alternative models, the original Erdogan and Sih (1963) hypothesis that a crack will grow in the direction perpendicular to the maximum circumferential stress σ_θ is seen to be adequate for predicting the angle of crack growth under the condition of mixed mode loading. Their predictions, which were based on the singularity terms in the series expansion for the Mode I and Mode II stress fields, can be improved if the second term in the series is also included. Although conceptually simple, their predictions of the crack growth direction fit very closely to the data obtained from many sources.
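
    As a reminder of the criterion itself (a standard result restated here for context, not quoted from this report), the maximum circumferential stress hypothesis predicts the kink angle θ_c from the mixed-mode stress intensity factors via

        K_I \sin\theta_c + K_{II}\,(3\cos\theta_c - 1) = 0
        \quad\Longrightarrow\quad
        \theta_c = 2\arctan\!\left(\frac{K_I - \sqrt{K_I^{2} + 8K_{II}^{2}}}{4\,K_{II}}\right)

    so a pure Mode I crack (K_II = 0) grows straight ahead, while pure Mode II gives a kink of about -70.5 degrees.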

  3. An evaluation of the predictive performance of distributional models for flora and fauna in north-east New South Wales.

    PubMed

    Pearce, J; Ferrier, S; Scotts, D

    2001-06-01

    To use models of species distributions effectively in conservation planning, it is important to determine the predictive accuracy of such models. Extensive modelling of the distribution of vascular plant and vertebrate fauna species within north-east New South Wales has been undertaken by linking field survey data to environmental and geographical predictors using logistic regression. These models have been used in the development of a comprehensive and adequate reserve system within the region. We evaluate the predictive accuracy of models for 153 small reptile, arboreal marsupial, diurnal bird and vascular plant species for which independent evaluation data were available. The predictive performance of each model was evaluated using the relative operating characteristic curve to measure discrimination capacity. Good discrimination ability implies that a model's predictions provide an acceptable index of species occurrence. The discrimination capacity of 89% of the models was significantly better than random, with 70% of the models providing high levels of discrimination. Predictions generated by this type of modelling therefore provide a reasonably sound basis for regional conservation planning. The discrimination ability of models was highest for the less mobile biological groups, particularly the vascular plants and small reptiles. In the case of diurnal birds, poor performing models tended to be for species which occur mainly within specific habitats not well sampled by either the model development or evaluation data, highly mobile species, species that are locally nomadic or those that display very broad habitat requirements. Particular care needs to be exercised when employing models for these types of species in conservation planning.
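
    As an illustration of the evaluation step, the sketch below scores a presence/absence model on independent records using the area under the ROC curve; the data and variable names are hypothetical and are not drawn from the New South Wales surveys:

        # Minimal sketch: ROC-based discrimination of a species distribution model.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        env_train, env_eval = rng.normal(size=(200, 4)), rng.normal(size=(80, 4))
        occ_train = (env_train[:, 0] + rng.normal(size=200) > 0).astype(int)   # 0/1 occurrence records
        occ_eval = (env_eval[:, 0] + rng.normal(size=80) > 0).astype(int)

        model = LogisticRegression().fit(env_train, occ_train)
        auc = roc_auc_score(occ_eval, model.predict_proba(env_eval)[:, 1])
        print(f"evaluation AUC: {auc:.2f}")   # 0.5 = random; higher values = better discrimination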

  4. Evaluating Antarctic sea ice predictability at seasonal to interannual timescales in global climate models

    NASA Astrophysics Data System (ADS)

    Marchi, Sylvain; Fichefet, Thierry; Goosse, Hugues; Zunz, Violette; Tietsche, Steffen; Day, Jonny; Hawkins, Ed

    2016-04-01

    Unlike the rapid sea ice losses reported in the Arctic, satellite observations show an overall increase in Antarctic sea ice extent over recent decades. Although many processes have already been suggested to explain this positive trend, it remains the subject of current investigations. Understanding the evolution of Antarctic sea ice turns out to be more complicated than for the Arctic for two reasons: the lack of observations and the well-known biases of climate models in the Southern Ocean. Irrespective of those issues, a further question is whether the positive trend in sea ice extent would have been predictable if adequate observations and models had been available some decades ago. This study of Antarctic sea ice predictability is carried out using six global climate models (HadGEM1.2, MPI-ESM-LR, GFDL CM3, EC-Earth V2, MIROC 5.2 and ECHAM 6-FESOM), all part of the APPOSITE project. These models are used to perform hindcast simulations in a perfect-model approach. The predictive skill is estimated using the PPP (Potential Prognostic Predictability) and the ACC (Anomaly Correlation Coefficient); the former measures the spread (uncertainty) of the ensemble, while the latter assesses the accuracy of the prediction. These two indicators are applied to different variables related to sea ice, in particular the total sea ice extent and the ice edge location. This first model intercomparison study of sea ice predictability in the Southern Ocean aims to give a general overview of Antarctic sea ice predictability in current global climate models.
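
    For reference, the two skill measures are conventionally defined as below (standard perfect-model definitions, restated here for context rather than quoted from this abstract):

        \mathrm{PPP}(t) = 1 - \frac{\sigma^{2}_{\mathrm{ens}}(t)}{\sigma^{2}_{\mathrm{clim}}}, \qquad
        \mathrm{ACC}(t) = \frac{\sum_i x'_i(t)\,\hat{x}'_i(t)}{\sqrt{\sum_i x'^{2}_i(t)}\,\sqrt{\sum_i \hat{x}'^{2}_i(t)}}

    where σ²_ens is the variance across ensemble members, σ²_clim is the climatological variance, and the primed quantities are the truth-member and predicted anomalies; PPP near 1 indicates high potential predictability and PPP near 0 indicates no skill beyond climatology.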

  5. Nonlinear analysis of AS4/PEEK thermoplastic composite laminate using a one parameter plasticity model

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Yoon, K. J.

    1990-01-01

    A one-parameter plasticity model was shown to adequately describe the orthotropic plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The nonlinear stress-strain relations were measured and compared with those predicted by the finite element analysis using the one-parameter elastic-plastic constitutive model. The results show that the one-parameter orthotropic plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.
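
    For context, the one-parameter plastic potential usually associated with this model (a textbook form recalled here, not an excerpt from the report) uses a single anisotropy coefficient a66:

        2f = \sigma_{22}^{2} + 2\,a_{66}\,\sigma_{12}^{2}, \qquad
        \bar{\sigma} = \sqrt{\tfrac{3}{2}\left(\sigma_{22}^{2} + 2\,a_{66}\,\sigma_{12}^{2}\right)}

    where σ22 and σ12 are the transverse normal and in-plane shear stresses in material coordinates; the fibre-direction stress is assumed not to drive plastic flow, so a66 is the single parameter fitted from off-axis coupon tests.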

  6. Elastic-plastic analysis of AS4/PEEK composite laminate using a one-parameter plasticity model

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Yoon, K. J.

    1992-01-01

    A one-parameter plasticity model was shown to adequately describe the plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The elastic-plastic stress-strain relations of coupon specimens were measured and compared with those predicted by the finite element analysis using the one-parameter plasticity model. The results show that the one-parameter plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  7. Regression models for predicting peak and continuous three-dimensional spinal loads during symmetric and asymmetric lifting tasks.

    PubMed

    Fathallah, F A; Marras, W S; Parnianpour, M

    1999-09-01

    Most biomechanical assessments of spinal loading during industrial work have focused on estimating peak spinal compressive forces under static and sagittally symmetric conditions. The main objective of this study was to explore the potential of feasibly predicting three-dimensional (3D) spinal loading in industry from various combinations of trunk kinematics, kinetics, and subject-load characteristics. The study used spinal loading, predicted by a validated electromyography-assisted model, from 11 male participants who performed a series of symmetric and asymmetric lifts. Three classes of models were developed: (a) models using workplace, subject, and trunk motion parameters as independent variables (kinematic models); (b) models using workplace, subject, and measured moments variables (kinetic models); and (c) models incorporating workplace, subject, trunk motion, and measured moments variables (combined models). The results showed that peak 3D spinal loading during symmetric and asymmetric lifting was predicted equally well using all three types of regression models. Continuous 3D loading was predicted best using the combined models. When the use of such models is infeasible, the kinematic models can provide adequate predictions. Finally, lateral shear forces (peak and continuous) were consistently underestimated using all three types of models. The study demonstrated the feasibility of predicting 3D loads on the spine under specific symmetric and asymmetric lifting tasks without the need for collecting EMG information. However, further validation and development of the models should be conducted to assess and extend their applicability to lifting conditions other than those presented in this study. Actual or potential applications of this research include exposure assessment in epidemiological studies, ergonomic intervention, and laboratory task assessment.

  8. Progressive failure methodologies for predicting residual strength and life of laminated composites

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Allen, David H.; Obrien, T. Kevin

    1991-01-01

    Two progressive failure methodologies currently under development by the Mechanics of Materials Branch at NASA Langley Research Center are discussed. The damage tolerance/fail safety methodology developed by O'Brien is an engineering approach to ensuring adequate durability and damage tolerance by treating only delamination onset and the subsequent delamination accumulation through the laminate thickness. The continuum damage model developed by Allen and Harris employs continuum damage laws to predict laminate strength and life. The philosophy, mechanics framework, and current implementation status of each methodology are presented.

  9. Involving regional expertise in nationwide modeling for adequate prediction of climate change effects on different demands for fresh water

    NASA Astrophysics Data System (ADS)

    de Lange, W. J.

    2014-05-01

    Wim J. de Lange, Geert F. Prinsen, Jacco H. Hoogewoud, Ab A. Veldhuizen, Joachim Hunink, Erik F.W. Ruijgh, Timo Kroon. Nationwide modeling aims to produce a balanced distribution of climate change effects (e.g. harm to crops) and possible compensation (e.g. volumes of fresh water) based on consistent calculation. The present work is based on the Netherlands Hydrological Instrument (NHI, www.nhi.nu), a national, integrated, hydrological model that simulates the distribution, flow and storage of all water in the surface water and groundwater systems. The instrument has been developed to assess the impact on water use at the land surface (sprinkling of crops, drinking water) and in surface water (navigation, cooling). The regional expertise involved in the development of the NHI comes from all parties involved in the use, production and management of water, such as waterboards, drinking water supply companies, provinces, NGOs, and so on. Adequate prediction implies that the model computes changes of the order of magnitude that is relevant to the effects. In scenarios related to drought, adequate prediction applies to the water demand and the hydrological effects during average, dry, very dry and extremely dry periods. The NHI acts as part of the so-called Deltamodel (www.deltamodel.nl), which aims to predict the effects of climate change, and the compensating measures, on both safety against flooding and water shortage during drought. To assess the effects, a limited number of well-defined scenarios is used within the Deltamodel. The effects on the demand for fresh water consist of an increase in demand, e.g. for surface water level control to prevent dike bursts, for flushing salt from ditches, for sprinkling of crops, and for preserving wet nature. Many of the effects are dealt with by regional and local parties, which therefore have a large interest in the outcome of the scenario analyses; they participate in the assessment of the NHI prior to the start of the analyses. Regional expertise is also welcomed in the calibration phase of the NHI, where it helps to reduce uncertainties by improving the rules for man-made redirection of surface water and the schematizations and parameters included in the model. This is carried out in workshops and in one-to-one expert meetings on the regional models and the NHI. All results of the NHI are presented on the internet, and any expert may suggest improvements to the model. The final goal of involving the regional parties is acceptance of the results by the authorities that receive the impacts of the decisions. The presentation will give an overview of the experiences and results of the participation process, both technically and in the national policy-making context.

  10. In-hospital risk prediction for post-stroke depression: development and validation of the Post-stroke Depression Prediction Scale.

    PubMed

    de Man-van Ginkel, Janneke M; Hafsteinsdóttir, Thóra B; Lindeman, Eline; Ettema, Roelof G A; Grobbee, Diederick E; Schuurmans, Marieke J

    2013-09-01

    The timely detection of post-stroke depression is complicated by a decreasing length of hospital stay. Therefore, the Post-stroke Depression Prediction Scale was developed and validated. The Post-stroke Depression Prediction Scale is a clinical prediction model for the early identification of stroke patients at increased risk for post-stroke depression. The study included 410 consecutive stroke patients who were able to communicate adequately. Predictors were collected within the first week after stroke. Between 6 and 8 weeks after stroke, major depressive disorder was diagnosed using the Composite International Diagnostic Interview. Multivariable logistic regression models were fitted. A bootstrap-backward selection process resulted in a reduced model. Performance of the model was expressed by discrimination, calibration, and accuracy. The model included a medical history of depression or other psychiatric disorders, hypertension, angina pectoris, and the Barthel Index item dressing. The model had acceptable discrimination, based on an area under the receiver operating characteristic curve of 0.78 (0.72-0.85), and calibration (P value of the U-statistic, 0.96). After transforming the model into an easy-to-use risk-assessment table, the lowest risk category (sum score < -10) showed a 2% risk of depression, which increased to 82% in the highest category (sum score > 21). The clinical prediction model enables clinicians to estimate the degree of depression risk for an individual patient within the first week after stroke.

  11. Modeling Forest Biomass and Growth: Coupling Long-Term Inventory and Lidar Data

    NASA Technical Reports Server (NTRS)

    Babcock, Chad; Finley, Andrew O.; Cook, Bruce D.; Weiskittel, Andrew; Woodall, Christopher W.

    2016-01-01

    Combining spatially-explicit long-term forest inventory and remotely sensed information from Light Detection and Ranging (LiDAR) datasets through statistical models can be a powerful tool for predicting and mapping above-ground biomass (AGB) at a range of geographic scales. We present and examine a novel modeling approach to improve prediction of AGB and estimate AGB growth using LiDAR data. The proposed model accommodates temporal misalignment between field measurements and remotely sensed data (a problem pervasive in such settings) by including multiple time-indexed measurements at plot locations to estimate AGB growth. We pursue a Bayesian modeling framework that allows for appropriately complex parameter associations and uncertainty propagation through to prediction. Specifically, we identify a space-varying coefficients model to predict and map AGB and its associated growth simultaneously. The proposed model is assessed using LiDAR data acquired from NASA Goddard's LiDAR, Hyper-spectral & Thermal imager and field inventory data from the Penobscot Experimental Forest in Bradley, Maine. The proposed model outperformed the time-invariant counterpart models in predictive performance as indicated by a substantial reduction in root mean squared error. The proposed model adequately accounts for temporal misalignment through the estimation of forest AGB growth and accommodates residual spatial dependence. Results from this analysis suggest that future AGB models informed using remotely sensed data, such as LiDAR, may be improved by adapting traditional modeling frameworks to account for temporal misalignment and spatial dependence using random effects.

  12. Thermal math model analysis of FRSI test article subjected to cold soak and entry environments. [Flexible Reuseable Surface Insulation in Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Gallegos, J. J.

    1978-01-01

    A multi-objective test program was conducted at the NASA/JSC Radiant Heat Test Facility in which an aluminum skin/stringer test panel insulated with FRSI (Flexible Reusable Surface Insulation) was subjected to 24 simulated Space Shuttle Orbiter ascent/entry heating cycles with a cold soak in between the 10th and 20th cycles. A two-dimensional thermal math model was developed and utilized to predict the thermal performance of the FRSI. Results are presented which indicate that the modeling techniques and property values have proven adequate in predicting peak structure temperatures and entry thermal responses from both ambient and cold-soak conditions of an FRSI-covered aluminum structure.

  13. Refraction of microwave signals by water vapor

    NASA Technical Reports Server (NTRS)

    Goldfinger, A. D.

    1980-01-01

    Tropospheric water vapor causes a refractive path length effect which is typically 5-10% of the 'dry' tropospheric effect and as large as several meters at elevation angles below 5 deg. The vertical water vapor profile is quite variable, and measurements of intensive atmospheric parameters such as temperature and humidity limited to the surface do not adequately predict the refractive effect. It is suggested that a water vapor refraction model that is a function of the amount of precipitable water alone can be successful at low elevation angles. From an extensive study of numerical ray tracings through radiosonde balloon data, such a model has been constructed. The model predicts the effect at all latitudes and elevation angles between 2 and 10 deg to an accuracy of better than 4% (11 cm at 3 deg elevation angle).

  14. Post-Stall Aerodynamic Modeling and Gain-Scheduled Control Design

    NASA Technical Reports Server (NTRS)

    Wu, Fen; Gopalarathnam, Ashok; Kim, Sungwan

    2005-01-01

    A multidisciplinary research effort that combines aerodynamic modeling and gain-scheduled control design for aircraft flight at post-stall conditions is described. The aerodynamic modeling uses a decambering approach for rapid prediction of post-stall aerodynamic characteristics of multiple-wing configurations using known section data. The approach is successful in bringing to light multiple solutions at post-stall angles of attack right during the iteration process. The predictions agree fairly well with experimental results from wind tunnel tests. The control research was focused on actuator saturation and flight transition between low and high angle-of-attack regions for near- and post-stall aircraft using advanced LPV control techniques. The new control approaches maintain adequate control capability to handle high angle-of-attack aircraft control with stability and performance guarantees.

  15. Development of a Physiologically-Based Pharmacokinetic Model of the Rat Central Nervous System

    PubMed Central

    Badhan, Raj K. Singh; Chenel, Marylore; Penny, Jeffrey I.

    2014-01-01

    Central nervous system (CNS) drug disposition is dictated by a drug’s physicochemical properties and its ability to permeate physiological barriers. The blood–brain barrier (BBB), blood-cerebrospinal fluid barrier and centrally located drug transporter proteins influence drug disposition within the central nervous system. Attainment of adequate brain-to-plasma and cerebrospinal fluid-to-plasma partitioning is important in determining the efficacy of centrally acting therapeutics. We have developed a physiologically-based pharmacokinetic model of the rat CNS which incorporates brain interstitial fluid (ISF), choroidal epithelial and total cerebrospinal fluid (CSF) compartments and accurately predicts CNS pharmacokinetics. The model yielded reasonable predictions of unbound brain-to-plasma partition ratio (Kpuu,brain) and CSF:plasma ratio (CSF:Plasmau) using a series of in vitro permeability and unbound fraction parameters. When using in vitro permeability data obtained from L-mdr1a cells to estimate rat in vivo permeability, the model successfully predicted, to within 4-fold, Kpuu,brain and CSF:Plasmau for 81.5% of compounds simulated. The model presented allows for simultaneous simulation and analysis of both brain biophase and CSF to accurately predict CNS pharmacokinetics from preclinical drug parameters routinely available during discovery and development pathways. PMID:24647103

  16. Development and Evaluation of a Mobile Personalized Blood Glucose Prediction System for Patients With Gestational Diabetes Mellitus.

    PubMed

    Pustozerov, Evgenii; Popova, Polina; Tkachuk, Aleksandra; Bolotko, Yana; Yuldashev, Zafar; Grineva, Elena

    2018-01-09

    Personalized blood glucose (BG) prediction for diabetes patients is an important goal that is pursued by many researchers worldwide. Despite many proposals, only a few projects are dedicated to the development of complete recommender system infrastructures that incorporate BG prediction algorithms for diabetes patients. The development and implementation of such a system aided by mobile technology is of particular interest to patients with gestational diabetes mellitus (GDM), especially considering the significant importance of quickly achieving adequate BG control for these patients in a short period (ie, during pregnancy) and a typically higher acceptance rate for mobile health (mHealth) solutions for short- to midterm usage. This study was conducted with the objective of developing infrastructure comprising data processing algorithms, BG prediction models, and an appropriate mobile app for patients' electronic record management to guide BG prediction-based personalized recommendations for patients with GDM. A mobile app for electronic diary management was developed along with data exchange and continuous BG signal processing software. Both components were coupled to obtain the necessary data for use in the personalized BG prediction system. Necessary data on meals, BG measurements, and other events were collected via the implemented mobile app and continuous glucose monitoring (CGM) system processing software. These data were used to tune and evaluate the BG prediction model, which included an algorithm for dynamic coefficients tuning. In the clinical study, 62 participants (GDM: n=49; control: n=13) took part in a 1-week monitoring trial during which they used the mobile app to track their meals and self-measurements of BG, and the CGM system for continuous BG monitoring. The data on 909 food intakes and corresponding postprandial BG curves as well as the set of patients' characteristics (eg, glycated hemoglobin, body mass index [BMI], age, and lifestyle parameters) were selected as inputs for the BG prediction models. The prediction results of the models for BG levels 1 hour after food intake were root mean square error=0.87 mmol/L, mean absolute error=0.69 mmol/L, and mean absolute percentage error=12.8%, which correspond to an adequate prediction accuracy for BG control decisions. The mobile app for the collection and processing of relevant data, appropriate software for CGM system signals processing, and BG prediction models were developed for a recommender system. The developed system may help improve BG control in patients with GDM; this will be the subject of evaluation in a subsequent study. ©Evgenii Pustozerov, Polina Popova, Aleksandra Tkachuk, Yana Bolotko, Zafar Yuldashev, Elena Grineva. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 09.01.2018.
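
    To make the reported error metrics concrete, the sketch below computes them for a handful of hypothetical 1-hour postprandial predictions; the numbers are illustrative and are not the study's data:

        # Minimal sketch: accuracy metrics for 1-hour postprandial BG predictions (mmol/L).
        import numpy as np

        observed = np.array([6.1, 7.4, 5.8, 8.0, 6.9])     # hypothetical measured BG
        predicted = np.array([6.5, 7.0, 6.4, 7.5, 7.3])    # hypothetical model output

        err = predicted - observed
        rmse = np.sqrt(np.mean(err ** 2))                   # root mean square error
        mae = np.mean(np.abs(err))                          # mean absolute error
        mape = np.mean(np.abs(err / observed)) * 100.0      # mean absolute percentage error
        print(f"RMSE={rmse:.2f} mmol/L, MAE={mae:.2f} mmol/L, MAPE={mape:.1f}%")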

  17. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-02-01

    Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study, regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Joint copyright. The Authors and Annals of Internal Medicine. Diabetic Medicine published by John Wiley Ltd. on behalf of Diabetes UK.

  18. Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection

    USGS Publications Warehouse

    Francy, Donna S.; Stelzer, Erin A.; Duris, Joseph W.; Brady, Amie M.G.; Harrison, John H.; Johnson, Heather E.; Ware, Michael W.

    2013-01-01

    Predictive models, based on environmental and water quality variables, have been used to improve the timeliness and accuracy of recreational water quality assessments, but their effectiveness has not been studied in inland waters. Sampling at eight inland recreational lakes in Ohio was done in order to investigate using predictive models for Escherichia coli and to understand the links between E. coli concentrations, predictive variables, and pathogens. Based upon results from 21 beach sites, models were developed for 13 sites, and the most predictive variables were rainfall, wind direction and speed, turbidity, and water temperature. Models were not developed at sites where the E. coli standard was seldom exceeded. Models were validated at nine sites during an independent year. At three sites, the model resulted in increased correct responses, sensitivities, and specificities compared to use of the previous day's E. coli concentration (the current method). Drought conditions during the validation year precluded being able to adequately assess model performance at most of the other sites. Cryptosporidium, adenovirus, eaeA (E. coli), ipaH (Shigella), and spvC (Salmonella) were found in at least 20% of samples collected for pathogens at five sites. The presence or absence of the three bacterial genes was related to some of the model variables but was not consistently related to E. coli concentrations. Predictive models were not effective at all inland lake sites; however, their use at two lakes with high swimmer densities will provide better estimates of public health risk than current methods and will be a valuable resource for beach managers and the public.
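
    As an illustration of how model responses are scored against a beach standard, the sketch below tallies correct responses, sensitivity, and specificity for exceedance predictions; the threshold, arrays, and names are hypothetical and are not taken from the study:

        # Minimal sketch: scoring predicted E. coli exceedances against observations.
        import numpy as np

        observed_ecoli = np.array([120, 410, 90, 300, 150, 520])    # hypothetical CFU/100 mL
        predicted_ecoli = np.array([100, 380, 200, 260, 140, 610])  # hypothetical model output
        standard = 235.0                                             # illustrative single-sample standard

        obs_exceed = observed_ecoli > standard
        pred_exceed = predicted_ecoli > standard

        sensitivity = np.mean(pred_exceed[obs_exceed])      # exceedances correctly flagged
        specificity = np.mean(~pred_exceed[~obs_exceed])    # non-exceedances correctly cleared
        correct = np.mean(pred_exceed == obs_exceed)        # overall correct responses
        print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, correct={correct:.2f}")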

  19. Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection.

    PubMed

    Francy, Donna S; Stelzer, Erin A; Duris, Joseph W; Brady, Amie M G; Harrison, John H; Johnson, Heather E; Ware, Michael W

    2013-03-01

    Predictive models, based on environmental and water quality variables, have been used to improve the timeliness and accuracy of recreational water quality assessments, but their effectiveness has not been studied in inland waters. Sampling at eight inland recreational lakes in Ohio was done in order to investigate using predictive models for Escherichia coli and to understand the links between E. coli concentrations, predictive variables, and pathogens. Based upon results from 21 beach sites, models were developed for 13 sites, and the most predictive variables were rainfall, wind direction and speed, turbidity, and water temperature. Models were not developed at sites where the E. coli standard was seldom exceeded. Models were validated at nine sites during an independent year. At three sites, the model resulted in increased correct responses, sensitivities, and specificities compared to use of the previous day's E. coli concentration (the current method). Drought conditions during the validation year precluded being able to adequately assess model performance at most of the other sites. Cryptosporidium, adenovirus, eaeA (E. coli), ipaH (Shigella), and spvC (Salmonella) were found in at least 20% of samples collected for pathogens at five sites. The presence or absence of the three bacterial genes was related to some of the model variables but was not consistently related to E. coli concentrations. Predictive models were not effective at all inland lake sites; however, their use at two lakes with high swimmer densities will provide better estimates of public health risk than current methods and will be a valuable resource for beach managers and the public.

  20. Multiscale solute transport upscaling for a three-dimensional hierarchical porous medium

    NASA Astrophysics Data System (ADS)

    Zhang, Mingkan; Zhang, Ye

    2015-03-01

    A laboratory-generated hierarchical, fully heterogeneous aquifer model (FHM) provides a reference for developing and testing an upscaling approach that integrates large-scale connectivity mapping with flow and transport modeling. Based on the FHM, three hydrostratigraphic models (HSMs) that capture lithological (static) connectivity at different resolutions are created, each corresponding to a sedimentary hierarchy. Under increasing system lnK variances (0.1, 1.0, 4.5), flow upscaling is first conducted to calculate equivalent hydraulic conductivity for the individual connectivities (or units) of the HSMs. Given the computed flow fields, an instantaneous, conservative tracer test is simulated by all models. For the HSMs, two upscaling formulations are tested based on the advection-dispersion equation (ADE), implementing space- versus time-dependent macrodispersivity. Comparing flow and transport predictions of the HSMs against those of the reference model, HSMs capturing connectivity at increasing resolutions are more accurate, although upscaling errors increase with system variance. Results suggest: (1) by explicitly modeling connectivity, an enhanced degree of freedom in representing dispersion can improve the ADE-based upscaled models by capturing non-Fickian transport of the FHM; (2) when connectivity is sufficiently resolved, the type of data conditioning used to model transport becomes less critical. Data conditioning, however, is influenced by the prediction goal; (3) when the aquifer is weakly to moderately heterogeneous, the upscaled models adequately capture the transport simulation of the FHM, despite the existence of hierarchical heterogeneity at smaller scales. When the aquifer is strongly heterogeneous, the upscaled models become less accurate because lithological connectivity cannot adequately capture preferential flows; (4) three-dimensional transport connectivities of the hierarchical aquifer differ quantitatively from those analyzed for two-dimensional systems.

  1. Dual nozzle aerodynamic and cooling analysis study

    NASA Technical Reports Server (NTRS)

    Meagher, G. M.

    1981-01-01

    Analytical models to predict performance and operating characteristics of dual nozzle concepts were developed and improved. Aerodynamic models are available to define flow characteristics and bleed requirements for both the dual throat and dual expander concepts. Advanced analytical techniques were utilized to provide quantitative estimates of the bleed flow, boundary layer, and shock effects within dual nozzle engines. Thermal analyses were performed to define cooling requirements for baseline configurations, and special studies of unique dual nozzle cooling problems defined feasible means of achieving adequate cooling.

  2. Predicting human chronically paralyzed muscle force: a comparison of three mathematical models.

    PubMed

    Frey Law, Laura A; Shields, Richard K

    2006-03-01

    Chronic spinal cord injury (SCI) induces detrimental musculoskeletal adaptations that adversely affect health status, ranging from muscle paralysis and skin ulcerations to osteoporosis. SCI rehabilitative efforts may increasingly focus on preserving the integrity of paralyzed extremities to maximize health quality using electrical stimulation for isometric training and/or functional activities. Subject-specific mathematical muscle models could prove valuable for predicting the forces necessary to achieve therapeutic loading conditions in individuals with paralyzed limbs. Although numerous muscle models are available, three modeling approaches were chosen that can accommodate a variety of stimulation input patterns. To our knowledge, no direct comparisons between models using paralyzed muscle have been reported. The three models include 1) a simple second-order linear model with three parameters and 2) two six-parameter nonlinear models (a second-order nonlinear model and a Hill-derived nonlinear model). Soleus muscle forces from four individuals with complete, chronic SCI were used to optimize each model's parameters (using an increasing and decreasing frequency ramp) and to assess the models' predictive accuracies for constant and variable (doublet) stimulation trains at 5, 10, and 20 Hz in each individual. Despite the large differences in modeling approaches, the mean predicted force errors differed only moderately (8-15% error; P=0.0042), suggesting physiological force can be adequately represented by multiple mathematical constructs. The two nonlinear models predicted specific force characteristics better than the linear model in nearly all stimulation conditions, with minimal differences between the two nonlinear models. Either nonlinear mathematical model can provide reasonable force estimates; individual application needs may dictate the preferred modeling strategy.

  3. Comparison of Heat and Moisture Fluxes from a Modified Soil-plant-atmosphere Model with Observations from BOREAS. Chapter 3

    NASA Technical Reports Server (NTRS)

    Lee, Young-Hee; Mahrt, L.

    2005-01-01

    This study evaluates the prediction of heat and moisture fluxes from a new land surface scheme with eddy correlation data collected at the old aspen site during the Boreal Ecosystem-Atmosphere Study (BOREAS) in 1994. The model used in this study couples a multilayer vegetation model with a soil model. Inclusion of organic material in the upper soil layer is required to adequately simulate exchange between the soil and subcanopy air. Comparisons between the model and observations are discussed to reveal model misrepresentation of some aspects of the diurnal variation of subcanopy processes. Evapotranspiration

  4. Risk prediction models for graft failure in kidney transplantation: a systematic review.

    PubMed

    Kaboré, Rémi; Haller, Maria C; Harambat, Jérôme; Heinze, Georg; Leffondré, Karen

    2017-04-01

    Risk prediction models are useful for identifying kidney recipients at high risk of graft failure, thus optimizing clinical care. Our objective was to systematically review the models that have been recently developed and validated to predict graft failure in kidney transplantation recipients. We used PubMed and Scopus to search for English, German and French language articles published in 2005-15. We selected studies that developed and validated a new risk prediction model for graft failure after kidney transplantation, or validated an existing model with or without updating the model. Data on recipient characteristics and predictors, as well as modelling and validation methods were extracted. In total, 39 articles met the inclusion criteria. Of these, 34 developed and validated a new risk prediction model and 5 validated an existing one with or without updating the model. The most frequently predicted outcome was graft failure, defined as dialysis, re-transplantation or death with functioning graft. Most studies used the Cox model. There was substantial variability in predictors used. In total, 25 studies used predictors measured at transplantation only, and 14 studies used predictors also measured after transplantation. Discrimination performance was reported in 87% of studies, while calibration was reported in 56%. Performance indicators were estimated using both internal and external validation in 13 studies, and using external validation only in 6 studies. Several prediction models for kidney graft failure in adults have been published. Our study highlights the need to better account for competing risks when applicable in such studies, and to adequately account for post-transplant measures of predictors in studies aiming at improving monitoring of kidney transplant recipients. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  5. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
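
    For reference, the Ogata-Banks solution adopted as the data model is the classical result for one-dimensional advection-dispersion with a continuous source at x = 0 under steady flow (reproduced from the standard literature, not from this paper):

        \frac{C(x,t)}{C_0} = \frac{1}{2}\left[\operatorname{erfc}\!\left(\frac{x - vt}{2\sqrt{Dt}}\right)
        + \exp\!\left(\frac{vx}{D}\right)\operatorname{erfc}\!\left(\frac{x + vt}{2\sqrt{Dt}}\right)\right]

    where v is the average linear velocity and D is the dispersion coefficient; far from the source the second term is usually negligible.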

  6. A new model integrating short- and long-term aging of copper added to soils

    PubMed Central

    Zeng, Saiqi; Li, Jumei; Wei, Dongpu

    2017-01-01

    Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process of copper (Cu) added to soils and the factors that affect this process. We then developed a semi-mechanistic model to predict the lability of Cu during the aging process, with the diffusion process described by the complementary error function. In previous studies, two semi-mechanistic models had been developed to separately predict short-term and long-term aging of Cu added to soils, each with its own description of the diffusion process. In the short-term model, the diffusion process was linearly related to the square root of incubation time (t^1/2), and in the long-term model, it was linearly related to the natural logarithm of incubation time (ln t). Each model could predict the short-term or the long-term aging process separately, but neither could predict both with a single model. By analyzing and combining the two models, we found that the short- and long-term behavior of the diffusion process could be described adequately using the complementary error function. The effect of temperature on the diffusion process was also incorporated in this model. The model can predict the aging process continuously based on four factors: soil pH, incubation time, soil organic matter content, and temperature. PMID:28820888

  7. The viscoelastic standard nonlinear solid model: predicting the response of the lumbar intervertebral disk to low-frequency vibrations.

    PubMed

    Groth, Kevin M; Granata, Kevin P

    2008-06-01

    Due to the mathematical complexity of current musculoskeletal spine models, there is a need for computationally efficient models of the intervertebral disk (IVD). The aim of this study is to develop a mathematical model that will adequately describe the motion of the IVD under axial cyclic loading as well as maintain computational efficiency for use in future musculoskeletal spine models. Several studies have successfully modeled the creep characteristics of the IVD using the three-parameter viscoelastic standard linear solid (SLS) model. However, when the SLS model is subjected to cyclic loading, it underestimates the load relaxation, the cyclic modulus, and the hysteresis of the human lumbar IVD. A viscoelastic standard nonlinear solid (SNS) model was used to predict the response of the human lumbar IVD subjected to low-frequency vibration. Nonlinear behavior of the SNS model was simulated by a strain-dependent elastic modulus on the SLS model. Parameters of the SNS model were estimated from experimental load deformation and stress-relaxation curves obtained from the literature. The SNS model was able to predict the cyclic modulus of the IVD at frequencies of 0.01 Hz, 0.1 Hz, and 1 Hz. Furthermore, the SNS model was able to quantitatively predict the load relaxation at a frequency of 0.01 Hz. However, model performance was unsatisfactory when predicting load relaxation and hysteresis at higher frequencies (0.1 Hz and 1 Hz). The SLS model of the lumbar IVD may require strain-dependent elastic and viscous behavior to represent the dynamic response to compressive strain.
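
    For context, the three-parameter standard linear solid (Zener) model referred to above is conventionally written as the differential constitutive relation below; the strain-dependent modulus of the SNS variant is indicated only schematically, since the paper's exact parameterization is not reproduced here:

        \sigma + \frac{\eta}{E_2}\,\dot{\sigma} = E_1\,\varepsilon + \frac{\eta\,(E_1 + E_2)}{E_2}\,\dot{\varepsilon},
        \qquad E_1 \rightarrow E_1(\varepsilon)\ \text{(SNS)}

    where E_1 is the equilibrium spring, E_2 is the spring of the Maxwell arm, and η is the dashpot viscosity; making the elastic modulus a function of strain converts the SLS into the standard nonlinear solid used in this study.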

  8. Predicting mutant selection in competition experiments with ciprofloxacin-exposed Escherichia coli.

    PubMed

    Khan, David D; Lagerbäck, Pernilla; Malmberg, Christer; Kristoffersson, Anders N; Wistrand-Yuen, Erik; Sha, Cao; Cars, Otto; Andersson, Dan I; Hughes, Diarmaid; Nielsen, Elisabet I; Friberg, Lena E

    2018-03-01

    Predicting competition between antibiotic-susceptible wild-type (WT) and less susceptible mutant (MT) bacteria is valuable for understanding how drug concentrations influence the emergence of resistance. Pharmacokinetic/pharmacodynamic (PK/PD) models predicting the rate and extent of takeover by resistant bacteria under different antibiotic pressures can thus be a valuable tool in improving treatment regimens. The aim of this study was to evaluate a previously developed mechanism-based PK/PD model for its ability to predict in vitro mixed-population experiments with competition between Escherichia coli (E. coli) WT and three well-defined resistant E. coli MTs exposed to ciprofloxacin. Model predictions for each bacterial strain and ciprofloxacin concentration were made for in vitro static and dynamic time-kill experiments measuring CFU (colony forming units)/mL up to 24 h at concentrations close to or below the minimum inhibitory concentration (MIC), as well as for serial passage experiments at concentrations well below the MIC, in which the ratio between the two strains was measured with flow cytometry. The model was found to predict the initial bacterial growth and killing in most static and dynamic time-kill competition experiments reasonably well without the need for parameter re-estimation. With re-estimation of growth rate parameters, an adequate fit was also obtained for the 6-day serial passage competition experiments. No bacterial interaction in growth was observed. This study demonstrates the predictive capacity of a PK/PD model and further supports the application of PK/PD modelling for prediction of bacterial kill in different settings, including resistance selection. Copyright © 2017 Elsevier B.V. and International Society of Chemotherapy. All rights reserved.
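
    A generic illustration of the kind of competition simulation described above (not the published mechanism-based model) can be written as two coupled growth equations sharing a carrying capacity, with an Emax-type kill term whose EC50 differs between WT and MT. All parameter values in the sketch are hypothetical.

      # Generic sketch of a two-strain competition model with concentration-
      # dependent kill (an illustration of the idea, not the published
      # mechanism-based PK/PD model). Parameters are hypothetical.
      import numpy as np
      from scipy.integrate import solve_ivp

      def competition(t, y, conc, k_g, b_max, emax, ec50_wt, ec50_mt):
          wt, mt = y
          total = wt + mt
          growth = k_g * (1.0 - total / b_max)       # shared carrying capacity
          kill_wt = emax * conc / (ec50_wt + conc)   # Emax drug effect, WT
          kill_mt = emax * conc / (ec50_mt + conc)   # Emax drug effect, less susceptible MT
          return [wt * (growth - kill_wt), mt * (growth - kill_mt)]

      sol = solve_ivp(
          competition, (0.0, 24.0), y0=[1e6, 1e2],   # CFU/mL, mostly WT at start
          args=(0.05, 1.0, 1e9, 2.0, 0.02, 0.5),     # conc, k_g, Bmax, Emax, EC50_WT, EC50_MT
          dense_output=True,
      )
      wt_24h, mt_24h = sol.y[:, -1]
      print(f"WT: {wt_24h:.2e} CFU/mL, MT: {mt_24h:.2e} CFU/mL at 24 h")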

  9. Artificial neural network prediction of ischemic tissue fate in acute stroke imaging

    PubMed Central

    Huang, Shiliang; Shen, Qiang; Duong, Timothy Q

    2010-01-01

    Multimodal magnetic resonance imaging of acute stroke provides predictive value that can be used to guide stroke therapy. A flexible artificial neural network (ANN) algorithm was developed and applied to predict ischemic tissue fate on three stroke groups: 30-, 60-minute, and permanent middle cerebral artery occlusion in rats. Cerebral blood flow (CBF), apparent diffusion coefficient (ADC), and spin–spin relaxation time constant (T2) were acquired during the acute phase up to 3 hours and again at 24 hours followed by histology. Infarct was predicted on a pixel-by-pixel basis using only acute (30-minute) stroke data. In addition, neighboring pixel information and infarction incidence were also incorporated into the ANN model to improve prediction accuracy. Receiver-operating characteristic analysis was used to quantify prediction accuracy. The major findings were the following: (1) CBF alone poorly predicted the final infarct across three experimental groups; (2) ADC alone adequately predicted the infarct; (3) CBF+ADC improved the prediction accuracy; (4) inclusion of neighboring pixel information and infarction incidence further improved the prediction accuracy; and (5) prediction was more accurate for permanent occlusion, followed by 60- and 30-minute occlusion. The ANN predictive model could thus provide a flexible and objective framework for clinicians to evaluate stroke treatment options on an individual patient basis. PMID:20424631
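
    The pixel-wise idea can be sketched as follows: train a small feed-forward network on acute CBF and ADC values (plus a stand-in for neighbouring-pixel information) and score it with ROC analysis. The arrays below are synthetic and the architecture is not the authors' ANN; this is only an illustration of the workflow.

      # Hedged sketch of the pixel-wise prediction workflow with synthetic data.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 5000
      cbf = rng.normal(50, 15, n)                  # synthetic acute CBF values
      adc = rng.normal(0.7, 0.15, n)               # synthetic acute ADC values
      neigh_adc = adc + rng.normal(0, 0.05, n)     # stand-in for neighbouring-pixel info
      infarct = ((adc < 0.6) & (cbf < 40)).astype(int)   # toy ground truth

      X = np.column_stack([cbf, adc, neigh_adc])
      X_tr, X_te, y_tr, y_te = train_test_split(X, infarct, test_size=0.3, random_state=0)

      clf = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
      clf.fit(X_tr, y_tr)
      print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))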

  10. Design and validation of a model to predict early mortality in haemodialysis patients.

    PubMed

    Mauri, Joan M; Clèries, Montse; Vela, Emili

    2008-05-01

    Mortality and morbidity rates are higher in patients receiving haemodialysis therapy than in the general population. Detection of risk factors related to early death in these patients could aid clinical and administrative decision making. Objectives. The aims of this study were (1) to identify risk factors (comorbidity and variables specific to haemodialysis) associated with death in the first year following the start of haemodialysis and (2) to design and validate a prognostic model to quantify the probability of death for each patient. An analysis was carried out on all patients starting haemodialysis treatment in Catalonia during the period 1997-2003 (n = 5738). The data source was the Renal Registry of Catalonia, a mandatory population registry. Patients were randomly divided into two samples: 60% (n = 3455) of the total were used to develop the prognostic model and the remaining 40% (n = 2283) to validate the model. Logistic regression analysis was used to construct the model. One-year mortality in the total study population was 16.5%. The predictive model included the following variables: age, sex, primary renal disease, grade of functional autonomy, chronic obstructive pulmonary disease, malignant processes, chronic liver disease, cardiovascular disease, initial vascular access and malnutrition. The analyses showed adequate calibration for both the model development sample and the validation sample (Hosmer-Lemeshow test P values of 0.97 and 0.49, respectively) as well as adequate discrimination (area under the ROC curve of 0.78 in both cases). Risk factors implicated in mortality at one year following the start of haemodialysis have been determined and a prognostic model designed. The validated, easy-to-apply model quantifies individual patient risk attributable to various factors, some of them amenable to correction by directed interventions.
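
    A minimal sketch of the development/validation workflow described above, assuming synthetic data and placeholder predictors rather than the Renal Registry variables: fit a logistic regression on a development split and report discrimination on a held-out validation split.

      # Minimal development/validation sketch with synthetic placeholder data.
      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      n = 2000
      df = pd.DataFrame({
          "age": rng.normal(65, 12, n),
          "copd": rng.integers(0, 2, n),
          "malnutrition": rng.integers(0, 2, n),
      })
      # toy outcome loosely driven by the placeholder predictors
      logit = -6 + 0.06 * df["age"] + 0.7 * df["copd"] + 0.9 * df["malnutrition"]
      df["death_1y"] = rng.random(n) < 1 / (1 + np.exp(-logit))

      train, valid = train_test_split(df, test_size=0.4, random_state=1)
      model = LogisticRegression(max_iter=1000)
      model.fit(train[["age", "copd", "malnutrition"]], train["death_1y"])
      pred = model.predict_proba(valid[["age", "copd", "malnutrition"]])[:, 1]
      print("Validation ROC AUC:", round(roc_auc_score(valid["death_1y"], pred), 3))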

  11. A microstructurally based model of solder joints under conditions of thermomechanical fatigue

    NASA Astrophysics Data System (ADS)

    Frear, D. R.; Burchett, S. N.; Rashid, M. M.

    The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue. We present two computational methodologies, developed to predict the behavior of near-eutectic Sn-Pb solder joints under fatigue conditions, that use metallurgical tests as fundamental input for their constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near-eutectic Sn-Pb solder. The finite element simulations from this model agree well with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single-phase model is a computational technique that was developed to predict solder joint behavior using materials data for constitutive relation constants that could be determined through straightforward metallurgical experiments. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests and the results showed an adequate fit to experimental results. The single-phase model could be very useful for conditions where microstructural evolution is not a dominant factor in fatigue.

  12. Scientific reporting is suboptimal for aspects that characterize genetic risk prediction studies: a review of published articles based on the Genetic RIsk Prediction Studies statement.

    PubMed

    Iglesias, Adriana I; Mihaescu, Raluca; Ioannidis, John P A; Khoury, Muin J; Little, Julian; van Duijn, Cornelia M; Janssens, A Cecile J W

    2014-05-01

    Our main objective was to raise awareness of the areas that need improvement in the reporting of genetic risk prediction articles for future publications, based on the Genetic RIsk Prediction Studies (GRIPS) statement. We evaluated studies that developed or validated a prediction model based on multiple DNA variants, using empirical data, and were published in 2010. A data extraction form based on the 25 items of the GRIPS statement was created and piloted. Forty-two studies met our inclusion criteria. Overall, more than half of the evaluated items (34 of 62) were reported in at least 85% of included articles. Seventy-seven percent of the articles were identified as genetic risk prediction studies through title assessment, but only 31% used the keywords recommended by GRIPS in the title or abstract. Seventy-four percent mentioned which allele was the risk variant. Overall, only 10% of the articles reported all essential items needed to perform external validation of the risk model. Completeness of reporting in genetic risk prediction studies is adequate for general elements of study design but is suboptimal for several aspects that characterize genetic risk prediction studies, such as description of the model construction. Improvements in the transparency of reporting of these aspects would facilitate the identification, replication, and application of genetic risk prediction models. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD): The TRIPOD Statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-06-01

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  14. A site-specific screening comparison of modeled and monitored air dispersion and deposition for perfluorooctanoate.

    PubMed

    Barton, Catherine A; Zarzecki, Charles J; Russell, Mark H

    2010-04-01

    This work assessed the usefulness of a current air quality model (American Meteorological Society/Environmental Protection Agency Regulatory Model [AERMOD]) for predicting air concentrations and deposition of perfluorooctanoate (PFO) near a manufacturing facility. Air quality models play an important role in providing information for verifying permitting conditions and for exposure assessment purposes. It is important to ensure that traditional modeling approaches are applicable to perfluorinated compounds, which are known to have unusual properties. Measured field data were compared with modeling predictions to show that AERMOD adequately located the maximum air concentration in the study area, provided representative or conservative air concentration estimates, and demonstrated bias and scatter not significantly different from those reported for other compounds. Surface soil/grass concentrations resulting from modeled deposition flux also showed acceptable bias and scatter compared with measured concentrations of PFO in soil/grass samples. Errors in predictions of air concentrations or deposition may be best explained by meteorological input uncertainty and conservatism in the PRIME algorithm used to account for building downwash. In general, AERMOD was found to be a useful screening tool for modeling the dispersion and deposition of PFO in air near a manufacturing facility.

  15. Measurement and modeling of unsaturated hydraulic conductivity

    USGS Publications Warehouse

    Perkins, Kim S.; Elango, Lakshmanan

    2011-01-01

    The unsaturated zone plays an extremely important hydrologic role that influences water quality and quantity, ecosystem function and health, the connection between atmospheric and terrestrial processes, nutrient cycling, soil development, and natural hazards such as flooding and landslides. Unsaturated hydraulic conductivity is one of the main properties considered to govern flow; however it is very difficult to measure accurately. Knowledge of the highly nonlinear relationship between unsaturated hydraulic conductivity (K) and volumetric water content is required for widely-used models of water flow and solute transport processes in the unsaturated zone. Measurement of unsaturated hydraulic conductivity of sediments is costly and time consuming, therefore use of models that estimate this property from more easily measured bulk-physical properties is common. In hydrologic studies, calculations based on property-transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values with the use of neural networks has become increasingly common. Hydraulic properties predicted using databases may be adequate in some applications, but not others. This chapter will discuss, by way of examples, various techniques used to measure and model hydraulic conductivity as a function of water content, K. The parameters that describe the K curve obtained by different methods are used directly in Richards’ equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter will explore the complications of using laboratory measured or estimated properties for field scale investigations to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes will be discussed.
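
    One widely used parametric form for the K relationship mentioned above is the van Genuchten-Mualem model, K(θ) = Ks·Se^l·[1 - (1 - Se^(1/m))^m]^2 with Se the effective saturation and m = 1 - 1/n. The sketch below evaluates it for illustrative parameter values; it is a generic example, not a method taken from the chapter.

      # Generic illustration of a parametric unsaturated hydraulic conductivity
      # model (van Genuchten-Mualem form). Parameter values are hypothetical.
      import numpy as np

      def k_unsat(theta, theta_r, theta_s, n, k_s, l=0.5):
          """K as a function of volumetric water content, van Genuchten-Mualem form."""
          m = 1.0 - 1.0 / n
          se = np.clip((theta - theta_r) / (theta_s - theta_r), 1e-9, 1.0)
          return k_s * se**l * (1.0 - (1.0 - se**(1.0 / m))**m) ** 2

      theta = np.linspace(0.08, 0.40, 5)
      print(k_unsat(theta, theta_r=0.065, theta_s=0.41, n=1.89, k_s=106.1))  # cm/day, illustrative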

  16. Prediction of cavity growth by solution of salt around boreholes. (Report No. IITRI-C--6313-14)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snow, R.H.; Chang, D.S.

    1975-06-30

    A mathematical model is developed to simulate the process of salt dissolution in a salt formation. The calibration of this model using Detroit Mine data is done systematically by the method of nonlinear regression. The brine concentrations calculated from the regression fit the measured data from the Detroit Mine experiment within 10 percent. Because the Detroit data include periods when the inlet flow is shut off, the agreement with the Detroit data indicates that the model adequately represents natural convection effects to predict the cavity growth at very slow feed rates. Predictions were made of the cavity growth at feed rates of one gal/h and one gal/day over a period of 10,000 y. Results show that the cavity growth is of a wide-flaring type and that significant growth of the cavity occurs only at the top layer. The prediction involves a very great extrapolation in time from the Detroit data, but it will be valid if the mechanism of solution does not change.

  17. Poor early response to methotrexate portends inadequate long-term outcomes in patients with moderate-to-severe psoriasis: Evidence from 2 phase 3 clinical trials.

    PubMed

    Gordon, Kenneth B; Betts, Keith A; Sundaram, Murali; Signorovitch, James E; Li, Junlong; Xie, Meng; Wu, Eric Q; Okun, Martin M

    2017-12-01

    Most methotrexate-treated psoriasis patients do not achieve a long-term PASI75 (75% reduction from baseline Psoriasis Area and Severity Index score) response. Indications of nonresponse can be apparent after only 4 weeks of treatment. To develop a prediction rule to identify patients unlikely to respond adequately to methotrexate. Patient-level data from CHAMPION (NCT00235820, N = 110) were used to construct a prediction model for week 16 PASI75 by using patient baseline characteristics and week 4 PASI25. A prediction rule was determined on the basis of sensitivity and specificity and validated in terms of week 16 PASI75 response in an independent validation sample from trial M10-255 (NCT00679731, N = 163). PASI25 achievement at week 4 (odds ratio = 8.917) was highly predictive of response to methotrexate at week 16. Patients with a predicted response probability <30% were recommended to discontinue methotrexate. The rates of week 16 PASI75 response were 65.8% and 21.1% (P < .001) for patients recommended to continue and discontinue methotrexate, respectively. The CHAMPION trial excluded patients previously treated with biologics, whereas the M10-255 trial had no such restriction. A prediction rule was developed and validated to identify patients unlikely to respond adequately to methotrexate. The rule indicates that 4 weeks of methotrexate might be sufficient to predict long-term response with limited safety risk. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
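
    The kind of rule described above can be sketched as a logistic model that converts week-4 PASI25 status (and a baseline covariate) into a predicted probability of week-16 PASI75, with discontinuation flagged below the 30% threshold. The coefficients below are hypothetical placeholders chosen only so that the odds ratio has the reported magnitude; they are not the published CHAMPION model.

      # Illustrative prediction-rule sketch; coefficients are hypothetical.
      import math

      def predicted_pasi75_probability(week4_pasi25, baseline_pasi,
                                       coef_pasi25=2.188, coef_baseline=-0.02, intercept=-0.5):
          # exp(2.188) is roughly 8.9, i.e. an odds ratio of the magnitude reported above
          logit = intercept + coef_pasi25 * week4_pasi25 + coef_baseline * baseline_pasi
          return 1.0 / (1.0 + math.exp(-logit))

      def recommend(week4_pasi25, baseline_pasi, threshold=0.30):
          p = predicted_pasi75_probability(week4_pasi25, baseline_pasi)
          return "continue methotrexate" if p >= threshold else "consider discontinuation"

      print(recommend(week4_pasi25=1, baseline_pasi=18))   # early responder
      print(recommend(week4_pasi25=0, baseline_pasi=18))   # early non-responder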

  18. [Population pharmacokinetics applied to optimising cisplatin doses in cancer patients].

    PubMed

    Ramón-López, A; Escudero-Ortiz, V; Carbonell, V; Pérez-Ruixo, J J; Valenzuela, B

    2012-01-01

    To develop and internally validate a population pharmacokinetics model for cisplatin and assess its prediction capacity for personalising doses in cancer patients. Cisplatin plasma concentrations in forty-six cancer patients were used to determine the pharmacokinetic parameters of a two-compartment pharmacokinetic model implemented in NONMEM VI software. Pharmacokinetic parameter identification capacity was assessed using the parametric bootstrap method and the model was validated using the nonparametric bootstrap method and standardised visual and numerical predictive checks. The final model's prediction capacity was evaluated in terms of accuracy and precision during the first (a priori) and second (a posteriori) chemotherapy cycles. Mean population cisplatin clearance was 1.03 L/h with an interpatient variability of 78.0%. The estimated distribution volume at steady state was 48.3 L, with inter- and intrapatient variabilities of 31.3% and 11.7%, respectively. Internal validation confirmed that the population pharmacokinetics model is appropriate to describe changes over time in cisplatin plasma concentrations, as well as its variability in the study population. The accuracy and precision of a posteriori prediction of cisplatin concentrations improved by 21% and 54% compared to a priori prediction. The population pharmacokinetic model developed adequately described the changes in cisplatin plasma concentrations in cancer patients and can be used to optimise cisplatin dosing regimens accurately and precisely. Copyright © 2011 SEFH. Published by Elsevier Espana. All rights reserved.
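
    For orientation, a two-compartment model with first-order elimination from the central compartment can be simulated as two coupled differential equations. The sketch below uses the reported clearance but assumes the compartment volumes and infusion schedule for illustration; it is not the NONMEM model itself.

      # Two-compartment model with first-order elimination, short infusion.
      # CL is the reported value; V1, V2 and the dose are illustrative assumptions
      # (V1 + V2 chosen so the steady-state volume is close to the reported ~48 L).
      import numpy as np
      from scipy.integrate import solve_ivp

      CL, V1, Q, V2 = 1.03, 20.0, 5.0, 28.0        # L/h and L

      def two_comp(t, y, rate_in):
          a1, a2 = y                                # amounts in central / peripheral (mg)
          c1, c2 = a1 / V1, a2 / V2
          da1 = rate_in(t) - CL * c1 - Q * (c1 - c2)
          da2 = Q * (c1 - c2)
          return [da1, da2]

      infusion = lambda t: 100.0 if t <= 1.0 else 0.0   # 100 mg over 1 h, illustrative
      sol = solve_ivp(two_comp, (0.0, 24.0), [0.0, 0.0], args=(infusion,),
                      t_eval=np.linspace(0, 24, 49), max_step=0.1)
      conc_central = sol.y[0] / V1
      print("Cmax (mg/L):", round(conc_central.max(), 2))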

  19. Can species distribution models really predict the expansion of invasive species?

    PubMed

    Barbet-Massin, Morgane; Rome, Quentin; Villemant, Claire; Courchamp, Franck

    2018-01-01

    Predictive studies are of paramount importance for biological invasions, one of the biggest threats to biodiversity. To help prioritize management strategies, species distribution models (SDMs) are often used to predict the potential invasive range of introduced species. Yet, SDMs have been regularly criticized due to several strong limitations, such as violating the equilibrium assumption during the invasion process. Unfortunately, validation studies with independent data are too scarce to assess the predictive accuracy of SDMs in invasion biology. Yet, biological invasions make it possible to test the usefulness of SDMs by retrospectively assessing whether they would have accurately predicted the latest ranges of invasion. Here, we assess the accuracy of SDMs in predicting the expansion of invasive species. We used temporal occurrence data for the Asian hornet Vespa velutina nigrithorax, a species native to China that is invading Europe at a very fast rate. Specifically, we compared occurrence data from the last stage of invasion (independent validation points) to the climate suitability distribution predicted from models calibrated with data from the early stage of invasion. Despite the invasive species not yet being at equilibrium, the predicted climate suitability of validation points was high. SDMs can thus adequately predict the spread of V. v. nigrithorax, which appears to be, at least partially, climatically driven. In the case of V. v. nigrithorax, the predictive accuracy of SDMs was slightly but significantly better when models were calibrated with invasive data only, excluding native data. Although more validation studies for other invasion cases are needed to generalize our results, our findings are an important step towards validating the use of SDMs in invasion biology.

  20. Can species distribution models really predict the expansion of invasive species?

    PubMed Central

    Rome, Quentin; Villemant, Claire; Courchamp, Franck

    2018-01-01

    Predictive studies are of paramount importance for biological invasions, one of the biggest threats to biodiversity. To help prioritize management strategies, species distribution models (SDMs) are often used to predict the potential invasive range of introduced species. Yet, SDMs have been regularly criticized due to several strong limitations, such as violating the equilibrium assumption during the invasion process. Unfortunately, validation studies with independent data are too scarce to assess the predictive accuracy of SDMs in invasion biology. Yet, biological invasions make it possible to test the usefulness of SDMs by retrospectively assessing whether they would have accurately predicted the latest ranges of invasion. Here, we assess the accuracy of SDMs in predicting the expansion of invasive species. We used temporal occurrence data for the Asian hornet Vespa velutina nigrithorax, a species native to China that is invading Europe at a very fast rate. Specifically, we compared occurrence data from the last stage of invasion (independent validation points) to the climate suitability distribution predicted from models calibrated with data from the early stage of invasion. Despite the invasive species not yet being at equilibrium, the predicted climate suitability of validation points was high. SDMs can thus adequately predict the spread of V. v. nigrithorax, which appears to be, at least partially, climatically driven. In the case of V. v. nigrithorax, the predictive accuracy of SDMs was slightly but significantly better when models were calibrated with invasive data only, excluding native data. Although more validation studies for other invasion cases are needed to generalize our results, our findings are an important step towards validating the use of SDMs in invasion biology. PMID:29509789

  1. Micromechanics Modeling of Fracture in Nanocrystalline Metals

    NASA Technical Reports Server (NTRS)

    Glaessgen, E. H.; Piascik, R. S.; Raju, I. S.; Harris, C. E.

    2002-01-01

    Nanocrystalline metals have very high theoretical strength, but suffer from a lack of ductility and toughness. Therefore, it is critical to understand the mechanisms of deformation and fracture of these materials before their full potential can be achieved. Because classical fracture mechanics is based on the comparison of computed fracture parameters, such as stress intensity factors, to their empirically determined critical values, it does not adequately describe the fundamental physics of fracture required to predict the behavior of nanocrystalline metals. Thus, micromechanics-based techniques must be considered to quantify the physical processes of deformation and fracture within nanocrystalline metals. This paper discusses fundamental physics-based modeling strategies that may be useful for the prediction of deformation, crack formation and crack growth within nanocrystalline metals.

  2. An improved method for predicting the lightning performance of high and extra-high-voltage substation shielding

    NASA Astrophysics Data System (ADS)

    Vinh, T.

    1980-08-01

    There is a need for better and more effective lightning protection for transmission and switching substations. In the past, a number of empirical methods were utilized to design systems to protect substations and transmission lines from direct lightning strokes. The need exists for convenient analytical lightning models adequate for engineering usage. In this study, analytical lightning models were developed, along with a method for improved analysis of the physical properties of lightning through their use. This method of analysis is based upon the most recent statistical field data. The result is an improved method for predicting the occurrence of shielding failure and for designing more effective protection of high and extra-high-voltage substations from direct strokes.

  3. Prediction of medial and lateral contact force of the knee joint during normal and turning gait after total knee replacement.

    PubMed

    Purevsuren, Tserenchimed; Dorj, Ariunzaya; Kim, Kyungsoo; Kim, Yoon Hyuk

    2016-04-01

    The computational modeling approach has commonly been used to predict knee joint contact forces, muscle forces, and ligament loads during activities of daily living. Knowledge of these forces has several potential applications, for example, within design of equipment to protect the knee joint from injury and to plan adequate rehabilitation protocols, although clinical applications of computational models are still evolving and one of the limiting factors is model validation. The objective of this study was to extend a previous modeling technique and to improve the validity of the model prediction using the publicly available data set of the fifth "Grand Challenge Competition to Predict In Vivo Knee Loads." A two-stage modeling approach, which combines conventional inverse dynamic analysis (the first stage) with a multi-body subject-specific lower limb model (the second stage), was used to calculate medial and lateral compartment contact forces. The validation was performed by direct comparison of model predictions and experimental measurements of medial and lateral compartment contact forces during normal and turning gait. The model predictions of both medial and lateral contact forces showed strong correlations with experimental measurements in normal gait (r = 0.75 and 0.71) and in turning gait trials (r = 0.86 and 0.72), even though the current technique overestimated medial compartment contact forces in swing phase. The correlation coefficient, Sprague and Geers metrics, and root mean squared error indicated that the lateral contact forces were predicted better than medial contact forces in comparison with the experimental measurements during both normal and turning gait trials. © IMechE 2016.

  4. Monitoring, modelling and environmental exposure assessment of industrial chemicals in the aquatic environment.

    PubMed

    Holt, M S; Fox, K; Griessbach, E; Johnsen, S; Kinnunen, J; Lecloux, A; Murray-Smith, R; Peterson, D R; Schröder, R; Silvani, M; ten Berge, W F; Toy, R J; Feijtel, T C

    2000-12-01

    Monitoring and laboratory data play integral roles alongside fate and exposure models in comprehensive risk assessments. The principle in the European Union Technical Guidance Documents for risk assessment is that measured data may take precedence over model results but only after they are judged to be of adequate reliability and to be representative of the particular environmental compartments to which they are applied. In practice, laboratory and field data are used to provide parameters for the models, while monitoring data are used to validate the models' predictions. Thus, comprehensive risk assessments require the integration of laboratory and monitoring data with the model predictions. However, this interplay is often overlooked. Discrepancies between the results of models and monitoring should be investigated in terms of the representativeness of both. Certainly, in the context of the EU risk assessment of existing chemicals, the specific requirements for monitoring data have not been adequately addressed. The resources required for environmental monitoring, both in terms of manpower and equipment, can be very significant. The design of monitoring programmes to optimise the use of resources and the use of models as a cost-effective alternative are increasing in importance. Generic considerations and criteria for the design of new monitoring programmes to generate representative quality data for the aquatic compartment are outlined and the criteria for the use of existing data are discussed. In particular, there is a need to improve the accessibility to data sets, to standardise the data sets, to promote communication and harmonisation of programmes and to incorporate the flexibility to change monitoring protocols to amend the chemicals under investigation in line with changing needs and priorities.

  5. Monitoring cosmic radiation on aircraft

    NASA Astrophysics Data System (ADS)

    Bentley, Robert D.; Iles, R. H. A.; Jones, J. B. L.; Hunter, R.; Taylor, G. C.; Thomas, D. J.

    2002-03-01

    The Earth is constantly bombarded by cosmic radiation that can be either galactic or solar in origin. At aircraft altitudes, the radiation levels are much higher than at sea level and recent European legislation has classified aircrew as radiation workers. University College London is working with Virgin Atlantic Airways on a 3 year project to monitor the levels of cosmic radiation on long-haul flights. The study will determine whether models currently used to predict radiation exposure of aircrew are adequate. It will also try to determine whether solar flare activity can cause significant enhancement to the predicted doses.

  6. Neural network modeling of drying of rice in BAU-STR dryer

    NASA Astrophysics Data System (ADS)

    Alam, Md. Ashraful; Saha, Chayan Kumer; Alam, Md. Monjurul; Ashraf, Md. Ali; Bala, Bilash Kanti; Harvey, Jagger

    2018-05-01

    The experimental performance and artificial neural network modeling of rice drying in the BAU-STR dryer are presented in this paper. The dryer consists of a biomass stove as a heat source, a perforated inner bin and a perforated outer bin with annular space for grains, and a blower (1 hp) to supply heated air. The dryer capacity was 500 kg of freshly harvested rice. Twenty experimental runs were conducted to investigate the experimental performance of the dryer for drying of rice. An independent multilayer neural network approach was used to predict the performance of the BAU-STR dryer for drying of rice. Ten sets of experimental data were used for training with the backpropagation algorithm and another ten sets of data were used for testing the artificial neural network model. The prediction of the performance of the dryer was found to be excellent after the network was adequately trained. The statistical analysis showed that the errors (MSE and RMSE) were within an acceptable range of ±5%, with a coefficient of determination (R2) of 99%. The model can be used to predict the potential of the dryer for different locations, and can also be used in a predictive optimal control algorithm.
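
    A minimal sketch of the multilayer-network idea, assuming synthetic operating data rather than the BAU-STR experimental runs: a small feed-forward regressor maps drying conditions to final moisture content, is trained on half of the data, and is scored with RMSE and R2 on the other half.

      # Feed-forward regression sketch with synthetic placeholder drying data.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.neural_network import MLPRegressor
      from sklearn.metrics import mean_squared_error, r2_score

      rng = np.random.default_rng(7)
      n = 200
      drying_time = rng.uniform(1, 8, n)           # h
      air_temp = rng.uniform(40, 60, n)            # deg C
      initial_mc = rng.uniform(22, 28, n)          # % wet basis
      final_mc = (initial_mc - 0.9 * drying_time - 0.05 * (air_temp - 40)
                  + rng.normal(0, 0.3, n))         # toy response

      X = np.column_stack([drying_time, air_temp, initial_mc])
      X_train, X_test = X[:100], X[100:]
      y_train, y_test = final_mc[:100], final_mc[100:]

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=7))
      model.fit(X_train, y_train)
      pred = model.predict(X_test)
      print("RMSE:", round(np.sqrt(mean_squared_error(y_test, pred)), 3))
      print("R2:  ", round(r2_score(y_test, pred), 3))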

  7. Preliminary study: Moisture-polymer interaction. Study objectives

    NASA Technical Reports Server (NTRS)

    Wen, L. C.

    1985-01-01

    The problems associated with mathematically modeling water-module interaction phenomena, including sorption and desorption, diffusion, and permeation, are discussed. With reliable analytical models, an extensive materials data base, and solar radiation surface meteorological observations (SOLMET) weather data, predicting module lifetimes in realistic environments can become a practical reality. The status of present techniques for simulating the various transport mechanisms is reported. The Dent model (a modified Brunauer-Emmett-Teller approach) represented polyvinyl butyral (PVB) sorption data. A 100-layer material model and Fick's diffusion model gave diffusivity values exhibiting adequate agreement with those measured for PVB. Diffusivity of PVB is concentration dependent, decreasing as the water content in PVB increases. The temperature dependence of diffusion in PVB is well modeled by the Arrhenius rate equation. Equilibrium conductivity and leakage current data are well represented by Hearle's model for bulk ionic conductivity. A nodal network analysis using the Systems Improved Numerical Differencing Analyzer (SINDA) Thermal Analyzer gave reasonable correlation with measured data. It is concluded that realistic lifetime predictions seem to be feasible.

  8. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    PubMed

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where a method for selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to the GoF-optimal method was the strategy using a prediction base of 5 years. The GoF-optimal approach can be used as a selection criterion to find an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Development of a Novel Simplified PBPK Absorption Model to Explain the Higher Relative Bioavailability of the OROS® Formulation of Oxybutynin.

    PubMed

    Olivares-Morales, Andrés; Ghosh, Avijit; Aarons, Leon; Rostami-Hodjegan, Amin

    2016-11-01

    A new minimal Segmented Transit and Absorption model (mSAT) has been recently proposed and combined with intrinsic intestinal effective permeability (Peff,int) to predict the regional gastrointestinal (GI) absorption (fabs) of several drugs. Herein, this model was extended and applied for the prediction of oral bioavailability and pharmacokinetics of oxybutynin and its enantiomers to provide a mechanistic explanation of the higher relative bioavailability observed for oxybutynin's modified-release OROS® formulation compared to its immediate-release (IR) counterpart. The expansion of the model involved the incorporation of mechanistic equations for the prediction of release, transit, dissolution, permeation and first-pass metabolism. The predicted pharmacokinetics of oxybutynin enantiomers after oral administration of both the IR and OROS® formulations were in close agreement with the observed data. The predicted absolute bioavailability for the IR formulation was within 5% of the observed value, and the model adequately predicted the higher relative bioavailability observed for the OROS® formulation vs. the IR counterpart. From the model predictions, it can be noted that the higher bioavailability observed for the OROS® formulation was mainly attributable to differences in intestinal availability (FG) rather than to a higher colonic fabs, thus confirming previous hypotheses. The predicted fabs was almost 70% lower for the OROS® formulation compared to the IR formulation, whereas the FG was almost eightfold higher than in the IR formulation. These results provide further support to the hypothesis of an increased FG as the main factor responsible for the higher bioavailability of oxybutynin's OROS® formulation vs. the IR formulation.

  10. Predicting acute aquatic toxicity of structurally diverse chemicals in fish using artificial intelligence approaches.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali

    2013-09-01

    The research aims to develop global modeling tools capable of categorizing structurally diverse chemicals in various toxicity classes according to the EEC and European Community directives, and to predict their acute toxicity in fathead minnow using set of selected molecular descriptors. Accordingly, artificial intelligence approach based classification and regression models, such as probabilistic neural networks (PNN), generalized regression neural networks (GRNN), multilayer perceptron neural network (MLPN), radial basis function neural network (RBFN), support vector machines (SVM), gene expression programming (GEP), and decision tree (DT) were constructed using the experimental toxicity data. Diversity and non-linearity in the chemicals' data were tested using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Predictive and generalization abilities of various models constructed here were compared using several statistical parameters. PNN and GRNN models performed relatively better than MLPN, RBFN, SVM, GEP, and DT. Both in two and four category classifications, PNN yielded a considerably high accuracy of classification in training (95.85 percent and 90.07 percent) and validation data (91.30 percent and 86.96 percent), respectively. GRNN rendered a high correlation between the measured and model predicted -log LC50 values both for the training (0.929) and validation (0.910) data and low prediction errors (RMSE) of 0.52 and 0.49 for two sets. Efficiency of the selected PNN and GRNN models in predicting acute toxicity of new chemicals was adequately validated using external datasets of different fish species (fathead minnow, bluegill, trout, and guppy). The PNN and GRNN models showed good predictive and generalization abilities and can be used as tools for predicting toxicities of structurally diverse chemical compounds. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Predicting preparatory behaviours for condom use in female undergraduate students: a one-year follow-up study.

    PubMed

    Gebhardt, W A; van Empelen, P; van Beurden, D

    2009-03-01

    The objective of this study is to investigate whether the Theory of Planned Behaviour (i.e. attitude, subjective norm, perceived behavioural control and intention), fluctuations in motivation over time, and variables from the Prototype-Willingness Model (i.e. behavioural expectation and behavioural willingness to have unprotected sex) predict preparatory behaviours for condom use. Sixty-two female undergraduates completed baseline and one-year follow-up questionnaires. Having condoms at home and carrying condoms were predicted by behavioural willingness to have unsafe sex at baseline. Having bought condoms was predicted by the behavioural expectation to use condoms with new partners at baseline. Intention and fluctuations in motivation did not emerge as significant predictors of preparatory actions. Female undergraduates, who are more willing to have unprotected sex under risk-conducive circumstances, are also less likely to prepare adequately for condom use, and thereby increase their chances of encountering such situations. Overall, the findings are in support of the Prototype-Willingness Model.

  12. Examining Predictive Validity of Oral Reading Fluency Slope in Upper Elementary Grades Using Quantile Regression.

    PubMed

    Cho, Eunsoo; Capin, Philip; Roberts, Greg; Vaughn, Sharon

    2017-07-01

    Within multitiered instructional delivery models, progress monitoring is a key mechanism for determining whether a child demonstrates an adequate response to instruction. One measure commonly used to monitor the reading progress of students is oral reading fluency (ORF). This study examined the extent to which ORF slope predicts reading comprehension outcomes for fifth-grade struggling readers ( n = 102) participating in an intensive reading intervention. Quantile regression models showed that ORF slope significantly predicted performance on a sentence-level fluency and comprehension assessment, regardless of the students' reading skills, controlling for initial ORF performance. However, ORF slope was differentially predictive of a passage-level comprehension assessment based on students' reading skills when controlling for initial ORF status. Results showed that ORF explained unique variance for struggling readers whose posttest performance was at the upper quantiles at the end of the reading intervention, but slope was not a significant predictor of passage-level comprehension for students whose reading problems were the most difficult to remediate.
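
    The quantile-regression idea can be sketched with statsmodels: regress a comprehension outcome on ORF slope and initial ORF at several conditional quantiles, so the contribution of slope is allowed to differ across the outcome distribution. The data and variable names below are synthetic placeholders.

      # Quantile regression sketch with synthetic placeholder data.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n = 102
      df = pd.DataFrame({
          "orf_initial": rng.normal(90, 20, n),
          "orf_slope": rng.normal(1.0, 0.4, n),
      })
      df["comprehension"] = (0.2 * df["orf_initial"] + 8.0 * df["orf_slope"]
                             + rng.normal(0, 6, n))

      # fit the same conditional-quantile model at several quantiles
      for q in (0.25, 0.50, 0.75, 0.90):
          fit = smf.quantreg("comprehension ~ orf_slope + orf_initial", df).fit(q=q)
          print(f"q={q:.2f}  slope coef = {fit.params['orf_slope']:.2f}")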

  13. The Generalized Problematic Internet Use Scale 2: Validation and test of the model to Facebook use.

    PubMed

    Assunção, Raquel S; Matos, Paula Mena

    2017-01-01

    The main goals of the present study were to test the psychometric properties of a Portuguese version of the GPIUS2 (Generalized Problematic Internet Use Scale 2, Caplan, 2010) and to test whether the cognitive-behavioral model proposed by Caplan (2010) replicated in the context of Facebook use. We used a sample of 761 Portuguese adolescents (53.7% boys, 46.3% girls, mean age = 15.8). Our results showed that the data presented an adequate fit to the original model using confirmatory factor analysis. The scale also presented good internal consistency and adequate construct validity. The cognitive-behavioral model was also applicable to the Facebook context, presenting good fit. Consistent with previous findings, we found that preference for online social interaction and the use of Facebook for mood regulation purposes positively and significantly predicted deficient self-regulation of Facebook use, which in turn was a significant predictor of the negative outcomes associated with this use. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  14. Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches

    USGS Publications Warehouse

    Nevers, Meredith B.; Whitman, Richard L.

    2011-01-01

    Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce the risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R2) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.

  15. Modeling the risk of water pollution by pesticides from imbalanced data.

    PubMed

    Trajanov, Aneta; Kuzmanovski, Vladimir; Real, Benoit; Perreau, Jonathan Marks; Džeroski, Sašo; Debeljak, Marko

    2018-04-30

    The pollution of ground and surface waters with pesticides is a serious ecological issue that requires adequate treatment. Most of the existing water pollution models are mechanistic mathematical models. While they have made a significant contribution to understanding the transfer processes, they face the problem of validation because of their complexity, the user subjectivity in their parameterization, and the lack of empirical data for validation. In addition, the data describing water pollution with pesticides are, in most cases, very imbalanced. This is due to strict regulations for pesticide applications, which lead to only a few pollution events. In this study, we propose the use of data mining to build models for assessing the risk of water pollution by pesticides in field-drained outflow water. Unlike the mechanistic models, the models generated by data mining are based on easily obtainable empirical data, while the parameterization of the models is not influenced by the subjectivity of ecological modelers. We used empirical data from field trials at the La Jaillière experimental site in France and applied the random forests algorithm to build predictive models that predict "risky" and "not-risky" pesticide application events. To address the problems of the imbalanced classes in the data, cost-sensitive learning and different measures of predictive performance were used. Despite the high imbalance between risky and not-risky application events, we managed to build predictive models that make reliable predictions. The proposed modeling approach can be easily applied to other ecological modeling problems where we encounter empirical data with highly imbalanced classes.
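
    One common way to implement the cost-sensitive learning mentioned above is to weight classes inversely to their frequency in a random forest and evaluate with imbalance-aware metrics. The sketch below does this on synthetic placeholder features, not the La Jaillière field-trial variables.

      # Cost-sensitive random forest on an imbalanced synthetic data set.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import balanced_accuracy_score, f1_score

      rng = np.random.default_rng(5)
      n = 1000
      rain_after_application = rng.gamma(2.0, 5.0, n)      # mm, placeholder feature
      days_to_drain_flow = rng.uniform(0, 30, n)           # placeholder feature
      dose = rng.uniform(0.1, 2.0, n)                      # kg a.i./ha, placeholder feature
      # rare "risky" class, roughly a few percent of events
      risky = ((rain_after_application > 15) & (days_to_drain_flow < 7)).astype(int)

      X = np.column_stack([rain_after_application, days_to_drain_flow, dose])
      X_tr, X_te, y_tr, y_te = train_test_split(X, risky, test_size=0.3,
                                                stratify=risky, random_state=5)

      clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=5)
      clf.fit(X_tr, y_tr)
      pred = clf.predict(X_te)
      print("balanced accuracy:", round(balanced_accuracy_score(y_te, pred), 3))
      print("F1 (risky class): ", round(f1_score(y_te, pred), 3))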

  16. Model and system learners, optimal process constructors and kinetic theory-based goal-oriented design: A new paradigm in materials and processes informatics

    NASA Astrophysics Data System (ADS)

    Abisset-Chavanne, Emmanuelle; Duval, Jean Louis; Cueto, Elias; Chinesta, Francisco

    2018-05-01

    Traditionally, Simulation-Based Engineering Sciences (SBES) has relied on the use of static data inputs (model parameters, initial or boundary conditions, … obtained from adequate experiments) to perform simulations. A new paradigm in the field of Applied Sciences and Engineering has emerged in the last decade. Dynamic Data-Driven Application Systems [9, 10, 11, 12, 22] allow the linkage of simulation tools with measurement devices for real-time control of simulations and applications, entailing the ability to dynamically incorporate additional data into an executing application and, in reverse, the ability of an application to dynamically steer the measurement process. It is in this context that traditional "digital-twins" are giving rise to a new generation of goal-oriented data-driven application systems, also known as "hybrid-twins", embracing models based on physics and models based exclusively on data adequately collected and assimilated to fill the gap between usual model predictions and measurements. Within this framework, new methodologies based on model learners, machine learning and kinetic goal-oriented design are defining a new paradigm in materials, processes and systems engineering.

  17. Predictive Models for Escherichia coli Concentrations at Inland Lake Beaches and Relationship of Model Variables to Pathogen Detection

    PubMed Central

    Stelzer, Erin A.; Duris, Joseph W.; Brady, Amie M. G.; Harrison, John H.; Johnson, Heather E.; Ware, Michael W.

    2013-01-01

    Predictive models, based on environmental and water quality variables, have been used to improve the timeliness and accuracy of recreational water quality assessments, but their effectiveness has not been studied in inland waters. Sampling at eight inland recreational lakes in Ohio was done in order to investigate using predictive models for Escherichia coli and to understand the links between E. coli concentrations, predictive variables, and pathogens. Based upon results from 21 beach sites, models were developed for 13 sites, and the most predictive variables were rainfall, wind direction and speed, turbidity, and water temperature. Models were not developed at sites where the E. coli standard was seldom exceeded. Models were validated at nine sites during an independent year. At three sites, the model resulted in increased correct responses, sensitivities, and specificities compared to use of the previous day's E. coli concentration (the current method). Drought conditions during the validation year precluded being able to adequately assess model performance at most of the other sites. Cryptosporidium, adenovirus, eaeA (E. coli), ipaH (Shigella), and spvC (Salmonella) were found in at least 20% of samples collected for pathogens at five sites. The presence or absence of the three bacterial genes was related to some of the model variables but was not consistently related to E. coli concentrations. Predictive models were not effective at all inland lake sites; however, their use at two lakes with high swimmer densities will provide better estimates of public health risk than current methods and will be a valuable resource for beach managers and the public. PMID:23291550

  18. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    PubMed

    Durdu, Omer Faruk

    2010-10-01

    In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict boron content in the Büyük Menderes catchment. Initially, box-and-whisker plots and Kendall's tau test are used to identify trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. The comparison of the mean and variance of the 3-year (2002-2004) observed data vs predicted data from the selected best models shows that the boron models from the ARIMA modeling approach could be used in a safe manner, since the predicted values from these models preserve the basic statistics of the observed data in terms of the mean. The ARIMA modeling approach is recommended for predicting the boron concentration series of a river.
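
    The identification-estimation-diagnostics loop described above can be sketched with statsmodels: fit several candidate ARIMA orders, keep the one with the lowest AIC, check the residuals, and forecast ahead. The series below is synthetic, not the Büyük Menderes boron data.

      # ARIMA order selection by AIC plus a residual diagnostic, on synthetic data.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.stats.diagnostic import acorr_ljungbox

      rng = np.random.default_rng(11)
      series = pd.Series(np.cumsum(rng.normal(0, 0.1, 108)) + 1.0)   # 9 years of monthly values

      candidates = [(1, 1, 0), (0, 1, 1), (1, 1, 1), (2, 1, 1)]
      fits = {order: ARIMA(series, order=order).fit() for order in candidates}
      best_order = min(fits, key=lambda o: fits[o].aic)
      best = fits[best_order]
      print("selected order:", best_order, "AIC:", round(best.aic, 1))

      # diagnostic check: residuals should show no remaining autocorrelation
      lb = acorr_ljungbox(best.resid, lags=[12])
      print("Ljung-Box p-value at lag 12:", float(lb["lb_pvalue"].iloc[0]))

      # forecast the next 12 observations with the selected model
      print(best.forecast(steps=12).round(3).tolist())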

  19. Social cognitive predictors of first- and non-first-generation college students' academic and life satisfaction.

    PubMed

    Garriott, Patton O; Hudyma, Aaron; Keene, Chesleigh; Santiago, Dana

    2015-04-01

    The present study tested Lent's (2004) social-cognitive model of normative well-being in a sample (N = 414) of first- and non-first-generation college students. A model depicting relationships between positive affect, environmental supports, college self-efficacy, college outcome expectations, academic progress, academic satisfaction, and life satisfaction was examined using structural equation modeling. The moderating roles of perceived importance of attending college and intrinsic goal motivation were also explored. Results suggested the hypothesized model provided an adequate fit to the data, while hypothesized relationships in the model were partially supported. Environmental supports predicted college self-efficacy, college outcome expectations, and academic satisfaction. Furthermore, college self-efficacy predicted academic progress, while college outcome expectations predicted academic satisfaction. Academic satisfaction, but not academic progress, predicted life satisfaction. The structural model explained 44% of the variance in academic progress, 56% of the variance in academic satisfaction, and 28% of the variance in life satisfaction. Mediation analyses indicated several significant indirect effects between variables in the model, while moderation analyses revealed a 3-way interaction between academic satisfaction, intrinsic motivation for attending college, and first-generation college student status on life satisfaction. Results are discussed in terms of applying the normative model of well-being to promote first- and non-first-generation college students' academic and life satisfaction. (c) 2015 APA, all rights reserved.

  20. Is height the best predictor for adequacy of semitendinosus-alone anterior cruciate ligament reconstruction? A study of hamstring graft dimensions and anthropometric measurements.

    PubMed

    Sundararajan, S R; Rajagopalakrishnan, Ramakanth; Rajasekaran, S

    2016-05-01

    To predict the adequacy of semitendinosus (ST) graft dimensions for ACLR from anthropometric measures. Single-tendon harvest for autograft hamstring ACLR could be beneficial to limit donor site morbidity; however, concerns about reconstruction failure based upon inadequate graft size may limit this surgical technique. To predict adequacy, 108 patients who underwent ACLR with a hamstring (STG) graft were prospectively enrolled in the study. Mean age was 33.028 years ± 9.539 SD (14-59), with 88 males and 20 females. Anthropometric measurements (height, weight, BMI, thigh and total limb length) and intraoperative data (graft dimensions and bone tunnel measurements) were collected for analysis. The semitendinosus graft can be used as a 3-strand (ST3) or 4-strand (ST4) graft. Adequacy criteria for ST3 and ST4 graft dimensions were determined from the data analysis. SPSS (v.17) Pearson's correlation coefficient and ROC curves were used for statistical analyses. A total of 74 out of 108 patients (68.52 %) had adequate graft dimensions for ST3 reconstruction. Height equal to or greater than 158 cm was predictive of an adequate graft for ST3 reconstruction. Only 23 patients (21.3 %) had adequate graft dimensions for ST4 reconstruction. Height equal to or greater than 170 cm was predictive of an adequate graft for ST4 reconstruction. The height variable had the highest ROC curve areas, 0.840 and 0.910 for the ST3 and ST4 grafts, respectively. Hence, height was used as the best predictor to determine adequacy of the graft. Height can be predictive of an adequate graft for single-tendon ACL reconstruction.

  1. Effects of phase vector and history extension on prediction power of adaptive-network based fuzzy inference system (ANFIS) model for a real scale anaerobic wastewater treatment plant operating under unsteady state.

    PubMed

    Perendeci, Altinay; Arslan, Sever; Tanyolaç, Abdurrahman; Celebi, Serdar S

    2009-10-01

    A conceptual neural-fuzzy model based on an adaptive-network-based fuzzy inference system (ANFIS) was proposed, using available on-line and off-line operational variables, to estimate the effluent chemical oxygen demand (COD) of a sugar-factory anaerobic wastewater treatment plant operating under unsteady state. As a new approach, the predictive power of the model was improved by adding a phase vector and the recent values of COD over the last 5-10 days, longer than the overall retention time of wastewater in the system. A 10-day history of effluent COD with a two-valued phase vector in an input matrix including all parameters gave the highest predictive power, while a 7-day history with the two-valued phase vector in a matrix comprising only on-line variables yielded fairly good estimates. The developed ANFIS model with phase vector and history extension was able to adequately represent the behavior of the treatment system.
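
    The "history extension" above amounts to augmenting the input matrix with a phase indicator and lagged values of effluent COD. The sketch below, with fabricated operating data and hypothetical column names, shows only how such an input matrix might be assembled; the ANFIS training itself is not shown.

```python
# Minimal sketch of a history-extended input matrix: a two-valued phase vector
# plus the last k days of effluent COD appended to the on-line variables.
import numpy as np
import pandas as pd

def build_inputs(df: pd.DataFrame, history: int = 7) -> pd.DataFrame:
    """Append the last `history` days of effluent COD to each row of inputs."""
    out = df.copy()
    for lag in range(1, history + 1):
        out[f"COD_eff_lag{lag}"] = out["COD_eff"].shift(lag)
    return out.dropna()

days = pd.date_range("2005-01-01", periods=60, freq="D")
data = pd.DataFrame({
    "flow": np.random.rand(60) * 100.0,             # on-line variable (fabricated)
    "pH": 7.0 + np.random.randn(60) * 0.2,          # on-line variable (fabricated)
    "phase": (np.arange(60) > 30).astype(int),      # two-valued phase vector
    "COD_eff": 2000.0 + np.cumsum(np.random.randn(60) * 50.0),
}, index=days)

X = build_inputs(data, history=7)
print(X.columns.tolist())
```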

  2. Water injection into vapor- and liquid-dominated reservoirs: Modeling of heat transfer and mass transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruess, K.; Oldenburg, C.; Moridis, G.

    1997-12-31

    This paper summarizes recent advances in methods for simulating water and tracer injection, and presents illustrative applications to liquid- and vapor-dominated geothermal reservoirs. High-resolution simulations of water injection into heterogeneous, vertical fractures in superheated vapor zones were performed. Injected water was found to move in dendritic patterns, and to experience stronger lateral flow effects than predicted from homogeneous medium models. Higher-order differencing methods were applied to modeling water and tracer injection into liquid-dominated systems. Conventional upstream weighting techniques were shown to be adequate for predicting the migration of thermal fronts, while higher-order methods give far better accuracy for tracer transport. A new fluid property module for the TOUGH2 simulator is described which allows a more accurate description of geofluids, and includes mineral dissolution and precipitation effects with associated porosity and permeability change. Comparisons between numerical simulation predictions and data for laboratory and field injection experiments are summarized. Enhanced simulation capabilities include a new linear solver package for TOUGH2, and inverse modeling techniques for automatic history matching and optimization.

  3. Quality Evaluation of Shelled and Unshelled Macadamia Nuts by Means of Near-Infrared Spectroscopy (NIR).

    PubMed

    Canneddu, Giovanna; Júnior, Luis Carlos Cunha; de Almeida Teixeira, Gustavo Henrique

    2016-07-01

    The quality of shelled and unshelled macadamia nuts was assessed by means of Fourier-transform near-infrared (FT-NIR) spectroscopy. Shelled macadamia nuts were sorted as sound nuts; nuts infested by Ecdytolopha aurantiana and Leucoptera coffeella; and nuts cracked by germination. Unshelled nuts were sorted as intact nuts (<10% half nuts, 2014); half nuts (March 2013; November 2013); and crushed nuts (2014). Peroxide value (PV) and acidity index (AI) were determined according to AOAC. PCA-LDA classification of shelled macadamia nuts was 93.2% accurate. The PLS prediction model for PV gave a standard error of prediction (SEP) of 3.45 meq/kg and a prediction coefficient of determination (Rp²) of 0.72. The AI PLS prediction model performed better (SEP = 0.14%, Rp² = 0.80). Although adequate classification was possible (93.2%), shelled nuts must not contain live insects, so this classification accuracy was not satisfactory. FT-NIR spectroscopy can, however, be used successfully to predict PV and AI in unshelled macadamia nuts. © 2016 Institute of Food Technologists®
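
    A hedged sketch of a PLS prediction workflow of the kind reported above (predicting peroxide value from spectra and reporting SEP and Rp²), using scikit-learn on fabricated spectra; the numbers of samples, wavelengths and latent variables are assumptions, not the study's settings.

```python
# Hedged sketch (synthetic spectra) of a PLS prediction model: predict peroxide
# value from NIR spectra and report SEP and Rp^2 on a held-out validation set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_samples, n_wavelengths = 120, 300
spectra = rng.normal(size=(n_samples, n_wavelengths))                 # fabricated FT-NIR spectra
pv = spectra[:, :10].sum(axis=1) * 0.5 + rng.normal(0, 1, n_samples)  # fabricated peroxide values

X_cal, X_val, y_cal, y_val = train_test_split(spectra, pv, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()

sep = np.sqrt(np.mean((y_val - y_pred) ** 2))        # standard error of prediction
rp2 = np.corrcoef(y_val, y_pred)[0, 1] ** 2          # prediction coefficient of determination
print(f"SEP = {sep:.2f} meq/kg, Rp^2 = {rp2:.2f}")
```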

  4. Parenting and antisocial behavior: a model of the relationship between adolescent self-disclosure, parental closeness, parental control, and adolescent antisocial behavior.

    PubMed

    Vieno, Alessio; Nation, Maury; Pastore, Massimiliano; Santinello, Massimo

    2009-11-01

    This study used data collected from a sample of 840 Italian adolescents (418 boys; M age = 12.58) and their parents (657 mothers; M age = 43.78) to explore the relations between parenting, adolescent self-disclosure, and antisocial behavior. In the hypothesized model, parenting practices (e.g., parental monitoring and control) have direct effects on parental knowledge and antisocial behavior. Parenting style (e.g., parent-child closeness), on the other hand, is directly related to adolescent self-disclosure, which in turn is positively related to parental knowledge and negatively related to adolescents' antisocial behavior. A structural equation model, which incorporated data from parents and adolescents, largely supported the hypothesized model. Gender-specific models also found some gender differences among adolescents and parents, as the hypothesized model adequately fit the subsample of mothers but not fathers. Mothers' closeness to girls predicted their knowledge of their daughters' behavior; mothers' control predicted boys' antisocial behavior.

  5. Non-parallel coevolution of sender and receiver in the acoustic communication system of treefrogs.

    PubMed

    Schul, Johannes; Bush, Sarah L

    2002-09-07

    Advertisement calls of closely related species often differ in quantitative features such as the repetition rate of signal units. These differences are important in species recognition. Current models of signal-receiver coevolution predict two possible patterns in the evolution of the mechanism used by receivers to recognize the call: (i) classical sexual selection models (Fisher process, good genes/indirect benefits, direct benefits models) predict that close relatives use qualitatively similar signal recognition mechanisms tuned to different values of a call parameter; and (ii) receiver bias models (hidden preference, pre-existing bias models) predict that if different signal recognition mechanisms are used by sibling species, evidence of an ancestral mechanism will persist in the derived species, and evidence of a pre-existing bias will be detectable in the ancestral species. We describe qualitatively different call recognition mechanisms in sibling species of treefrogs. Whereas Hyla chrysoscelis uses pulse rate to recognize male calls, Hyla versicolor uses absolute measurements of pulse duration and interval duration. We found no evidence of either hidden preferences or pre-existing biases. The results are compared with similar data from katydids (Tettigonia sp.). In both taxa, the data are not adequately explained by current models of signal-receiver coevolution.

  6. Prediction models for clustered data: comparison of a random intercept and standard regression model

    PubMed Central

    2013-01-01

    Background: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods: Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results: The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, while calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436

  7. Prediction models for clustered data: comparison of a random intercept and standard regression model.

    PubMed

    Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne

    2013-02-15

    When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, while calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
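
    A hedged sketch of the comparison described in this abstract, on synthetic data: a standard logistic model versus one whose intercept varies by cluster. The cluster effect is approximated here with fixed cluster indicator columns rather than a true random intercept, which would normally be fit with a dedicated mixed-model package; the c-index is computed as the ROC AUC.

```python
# Hedged sketch (synthetic data): standard logistic regression versus a model
# with cluster-specific intercepts, compared by c-index (ROC AUC).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, n_clusters = 1642, 19
cluster = rng.integers(0, n_clusters, n)                  # e.g. treating anesthesiologist
cluster_effect = rng.normal(0.0, 0.7, n_clusters)         # between-cluster heterogeneity
x = rng.normal(size=(n, 3))                               # patient-level predictors
logit = x @ np.array([0.8, -0.5, 0.3]) + cluster_effect[cluster]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

standard = LogisticRegression().fit(x, y)
x_clust = np.hstack([x, np.eye(n_clusters)[cluster]])     # add cluster-specific intercepts
clustered = LogisticRegression().fit(x_clust, y)

print("c-index, standard model:    ", round(roc_auc_score(y, standard.predict_proba(x)[:, 1]), 3))
print("c-index, cluster intercepts:", round(roc_auc_score(y, clustered.predict_proba(x_clust)[:, 1]), 3))
```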

  8. Cost and mortality prediction using polymerase chain reaction pathogen detection in sepsis: evidence from three observational trials

    PubMed Central

    2010-01-01

    Introduction: Delays in adequate antimicrobial treatment contribute to high cost and mortality in sepsis. Polymerase chain reaction (PCR) assays are used alongside conventional cultures to accelerate the identification of microorganisms. We analyze the impact on medical outcomes and healthcare costs if improved adequacy of antimicrobial therapy is achieved by providing immediate coverage after positive PCR reports. Methods: A mathematical prediction model describes the impact of PCR-based rapid adjustment of antimicrobial treatment. The model is applied to predict cost and medical outcomes for 221 sepsis episodes of 189 post-surgical and intensive care unit (ICU) sepsis patients with available PCR data from a prospective, observational trial of a multiplex PCR assay in five hospitals. While this trial demonstrated a reduction of inadequate treatment days, data on outcomes associated with reduced inadequate initial antimicrobial treatment had to be obtained from two other, larger studies, which involved 1,147 medical or surgical ICU patients (of whom 316 were inadequately treated). Our results are reported with the (5% to 95%) percentile ranges from Monte Carlo simulation, in which the input parameters were randomly and independently varied according to their statistical characterization in the three underlying studies. The model also allows predictions for different patient groups or PCR assays. Results: A total of 13.1% of PCR tests enabled earlier adequate treatment. We predict that the cost of PCR testing (300 €/test) can be fully recovered for patients with daily treatment costs above 717 € (605 € to 1,710 €). A 2.6% (2.0 to 3.2%) absolute reduction of mortality is expected. The cost per incremental survivor is calculated at 11,477 € (9,321 € to 14,977 €) and the incremental cost-effectiveness ratio at 3,107 € (2,523 € to 4,055 €) per quality-adjusted life-year. Generally, for ICU patients with >25% incidence of inadequate empiric antimicrobial treatment, and at least 15% with a positive blood culture, PCR represents a cost-neutral adjunct method. Conclusions: Rapid PCR identification of microorganisms has the potential to become a cost-effective component of sepsis management. The prediction model, tested with data from three observational trials, should be used as a framework to deepen insights when integrating further complementary data on the utilization of molecular assays in the management of sepsis. PMID:20950442
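
    The percentile ranges quoted above come from varying the input parameters randomly and independently and reading off the 5th-95th percentiles of the simulated outcome. The sketch below shows only that Monte Carlo structure; every distribution in it is fabricated for illustration and none reproduces the trial parameters beyond the 300 €/test cost and the 13.1% fraction mentioned in the abstract.

```python
# Generic Monte Carlo sketch: vary inputs independently, report 5%-95% ranges.
# All distributions below are fabricated and hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_sim = 10_000
test_cost = 300.0                                                 # €/test, from the abstract
frac_earlier_adequate = rng.normal(0.131, 0.02, n_sim)            # fraction of tests that help
days_saved = rng.lognormal(mean=0.3, sigma=0.4, size=n_sim)       # hypothetical days of inadequate therapy avoided
daily_treatment_cost = rng.normal(900.0, 150.0, n_sim)            # hypothetical daily cost, €

savings_per_test = frac_earlier_adequate * days_saved * daily_treatment_cost
net_benefit = savings_per_test - test_cost
lo, med, hi = np.percentile(net_benefit, [5, 50, 95])
print(f"net benefit per test: {med:.0f} € ({lo:.0f} € to {hi:.0f} €)")
```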

  9. Strontium-90 Biokinetics from Simulated Wound Intakes in Non-human Primates Compared with Combined Model Predictions from National Council on Radiation Protection and Measurements Report 156 and International Commission on Radiological Protection Publication 67.

    PubMed

    Allen, Mark B; Brey, Richard R; Gesell, Thomas; Derryberry, Dewayne; Poudel, Deepesh

    2016-01-01

    The goal of this study was to evaluate the predictive capabilities of the National Council on Radiation Protection and Measurements (NCRP) wound model coupled to the International Commission on Radiological Protection (ICRP) systemic model for 90Sr-contaminated wounds using non-human primate data. Studies were conducted on 13 macaque (Macaca mulatta) monkeys, each receiving a one-time intramuscular injection of 90Sr solution. Urine and feces samples were collected up to 28 d post-injection and analyzed for 90Sr activity. Integrated Modules for Bioassay Analysis (IMBA) software was configured with default NCRP and ICRP model transfer coefficients to calculate the predicted 90Sr intake via the wound based on the radioactivity measured in bioassay samples. The default parameters of the combined models produced adequate fits to the bioassay data, but maximum likelihood predictions of intake were overestimated by a factor of 1.0 to 2.9 when bioassay data were used as predictors. Skeletal retention was also over-predicted, suggesting an underestimation of the excretion fraction. Bayesian statistics and Monte Carlo sampling were applied using IMBA to vary the default parameters, producing updated transfer coefficients for individual monkeys that improved the model fit and the predicted intake and skeletal retention. The geometric means of the optimized transfer rates for the 11 cases were computed, and these optimized sample population parameters were tested on two independent monkey cases and on the 11 monkeys from which the optimized parameters were derived. The optimized model parameters did not improve the model fit in most cases, and the predicted skeletal activity improved in only three of the 11 cases. The optimized parameters improved the predicted intake in all cases but still over-predicted the intake by an average of 50%. The results suggest that the modified transfer rates were not always an improvement over the default NCRP and ICRP model values.

  10. Aeroelastic stability analysis of a Darrieus wind turbine

    NASA Astrophysics Data System (ADS)

    Popelka, D.

    1982-02-01

    An aeroelastic stability analysis was developed for predicting flutter instabilities on vertical axis wind turbines. The analytical model and mathematical formulation of the problem are described as well as the physical mechanism that creates flutter in Darrieus turbines. Theoretical results are compared with measured experimental data from flutter tests of the Sandia 2 Meter turbine. Based on this comparison, the analysis appears to be an adequate design evaluation tool.

  11. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    PubMed

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
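
    For reference, a small sketch of the Ogata-Banks solution named above (one-dimensional advection-dispersion with steady flow and a constant-concentration inlet), with illustrative parameter values that are not those of the EIT site.

```python
# Ogata-Banks solution: C(x,t) = (C0/2) * [erfc((x - v t)/(2 sqrt(D t)))
#                                          + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """Concentration at distance x [m] and time t [s], for pore velocity v [m/s]
    and dispersion coefficient D [m^2/s]; c0 is the inlet concentration."""
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

x = np.linspace(0.1, 50.0, 200)                       # m, illustrative transect
c = ogata_banks(x, t=86400.0 * 30, v=1e-5, D=1e-4)    # 30 days, hypothetical v and D
print(np.round(c[:5], 4))
```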

  12. Water quality characterization and mathematical modeling of dissolved oxygen in the East and West Ponds, Jamaica Bay Wildlife Refuge.

    PubMed

    Maillacheruvu, Krishnanand; Roy, D; Tanacredi, J

    2003-09-01

    The current study was undertaken to characterize the East and West Ponds and to develop a mathematical model of the effects of nutrient and BOD loading on dissolved oxygen (DO) concentrations in these ponds. The model predicted that both ponds will recover adequately given the average expected range of nutrient and BOD loading due to waste from surface runoff and migratory birds. The predicted dissolved oxygen levels in both ponds were greater than 5.0 mg/L and were supported by DO levels in the field, which were typically above 5.0 mg/L during the period of this study. The model predicted a steady-state NBOD concentration of 12.0-14.0 mg/L in the East Pond, compared to an average measured value of 3.73 mg/L in 1994 and an average measured value of 12.51 mg/L in a 1996-97 study. The model predicted that the NBOD concentration in the West Pond would be under 3.0 mg/L, compared to average measured values of 7.50 mg/L in 1997 and 8.51 mg/L in 1994. The model predicted that the phosphorus (as PO4(3-)) concentration in the East Pond will approach 4.2 mg/L in 4 months, compared to a measured average value of 2.01 mg/L in a 1994 study. The model predicted that the phosphorus concentration in the West Pond will approach 1.00 mg/L, compared to a measured average phosphorus (as PO4(3-)) concentration of 1.57 mg/L in a 1994 study.

  13. Biogeochemical modeling of CO2 and CH4 production in anoxic Arctic soil microcosms

    NASA Astrophysics Data System (ADS)

    Tang, Guoping; Zheng, Jianqiu; Xu, Xiaofeng; Yang, Ziming; Graham, David E.; Gu, Baohua; Painter, Scott L.; Thornton, Peter E.

    2016-09-01

    Soil organic carbon turnover to CO2 and CH4 is sensitive to soil redox potential and pH conditions. However, land surface models do not consider redox and pH in the aqueous phase explicitly, thereby limiting their use for making predictions in anoxic environments. Using recent data from incubations of Arctic soils, we extend the Community Land Model with coupled carbon and nitrogen (CLM-CN) decomposition cascade to include simple organic substrate turnover, fermentation, Fe(III) reduction, and methanogenesis reactions, and assess the efficacy of various temperature and pH response functions. Incorporating the Windermere Humic Aqueous Model (WHAM) enables us to approximately describe the observed pH evolution without additional parameterization. Although Fe(III) reduction is normally assumed to compete with methanogenesis, the model predicts that Fe(III) reduction raises the pH from acidic to neutral, thereby reducing environmental stress to methanogens and accelerating methane production when substrates are not limiting. The equilibrium speciation predicts a substantial increase in CO2 solubility as pH increases, and taking into account CO2 adsorption to surface sites of metal oxides further decreases the predicted headspace gas-phase fraction at low pH. Without adequate representation of these speciation reactions, as well as the impacts of pH, temperature, and pressure, the CO2 production from closed microcosms can be substantially underestimated based on headspace CO2 measurements only. Our results demonstrate the efficacy of geochemical models for simulating soil biogeochemistry and provide predictive understanding and mechanistic representations that can be incorporated into land surface models to improve climate predictions.

  14. Near-Surface Wind Predictions in Complex Terrain with a CFD Approach Optimized for Atmospheric Boundary Layer Flows

    NASA Astrophysics Data System (ADS)

    Wagenbrenner, N. S.; Forthofer, J.; Butler, B.; Shannon, K.

    2014-12-01

    Near-surface wind predictions are important for a number of applications, including transport and dispersion, wind energy forecasting, and wildfire behavior. Researchers and forecasters would benefit from a wind model that could be readily applied to complex terrain for use in these various disciplines. Unfortunately, near-surface winds in complex terrain are not handled well by traditional modeling approaches. Numerical weather prediction models employ coarse horizontal resolutions which do not adequately resolve sub-grid terrain features important to the surface flow. Computational fluid dynamics (CFD) models are increasingly being applied to simulate atmospheric boundary layer (ABL) flows, especially in wind energy applications; however, the standard functionality provided in commercial CFD models is not suitable for ABL flows. Appropriate CFD modeling in the ABL requires modification of empirically-derived wall function parameters and boundary conditions to avoid erroneous streamwise gradients due to inconsistencies between inlet profiles and specified boundary conditions. This work presents a new version of a near-surface wind model for complex terrain called WindNinja. The new version of WindNinja offers two options for flow simulations: 1) the native, fast-running mass-consistent method available in previous model versions and 2) a CFD approach based on the OpenFOAM modeling framework and optimized for ABL flows. The model is described and evaluations of predictions with surface wind data collected from two recent field campaigns in complex terrain are presented. A comparison of predictions from the native mass-consistent method and the new CFD method is also provided.

  15. Diesel engine emissions and combustion predictions using advanced mixing models applicable to fuel sprays

    NASA Astrophysics Data System (ADS)

    Abani, Neerav; Reitz, Rolf D.

    2010-09-01

    An advanced mixing model was applied to study engine emissions and combustion with different injection strategies, ranging from multiple injections and early injection to grouped-hole nozzle injection, in light- and heavy-duty diesel engines. The model was implemented in the KIVA-CHEMKIN engine combustion code and simulations were conducted at different mesh resolutions. The model was compared with the standard KIVA spray model, which uses the Lagrangian-Drop and Eulerian-Fluid (LDEF) approach, and with a Gas Jet spray model that improves predictions of liquid sprays. A Vapor Particle Method (VPM) is introduced that accounts for sub-grid scale mixing of fuel vapor and more accurately predicts the mixing of fuel vapor over a range of mesh resolutions. The fuel vapor is transported as particles until a certain distance from the nozzle is reached where the local jet half-width is adequately resolved by the local mesh scale. Within this distance the vapor particle is transported while releasing fuel vapor locally, as determined by a weighting factor. The VPM model more accurately predicts fuel-vapor penetrations for early cycle injections and flame lift-off lengths for late cycle injections. Engine combustion computations show that, compared to the standard KIVA and Gas Jet spray models, the VPM spray model improves predictions of in-cylinder pressure, heat release rate and engine emissions of NOx, CO and soot with coarse mesh resolutions. The VPM spray model is thus a good tool for efficiently investigating diesel engine combustion with practical mesh resolutions, thereby saving computer time.

  16. A New Prediction Model for Evaluating Treatment-Resistant Depression.

    PubMed

    Kautzky, Alexander; Baldinger-Melich, Pia; Kranz, Georg S; Vanicek, Thomas; Souery, Daniel; Montgomery, Stuart; Mendlewicz, Julien; Zohar, Joseph; Serretti, Alessandro; Lanzenberger, Rupert; Kasper, Siegfried

    2017-02-01

    Despite a broad arsenal of antidepressants, about a third of patients suffering from major depressive disorder (MDD) do not respond sufficiently to adequate treatment. Using the data pool of the Group for the Study of Resistant Depression and machine learning, we aimed to draw new insights from 48 clinical, sociodemographic, and psychosocial predictors of treatment outcome. Patients were enrolled starting from January 2000 and diagnosed according to DSM-IV. Treatment-resistant depression (TRD) was defined by a 17-item Hamilton Depression Rating Scale (HDRS) score ≥ 17 after at least 2 antidepressant trials of adequate dosage and length. Remission was defined by an HDRS score < 8. Stepwise predictor reduction using randomForest was performed to find the optimal number of predictors for classification of treatment outcome. After importance values were generated, prediction of remission and resistance was performed in a training sample of 400 patients. For prediction, we used a set of 80 patients not featured in the training sample and computed receiver operating characteristics. The most useful predictors for treatment outcome were the timespan between first and last depressive episode, age at first antidepressant treatment, response to first antidepressant treatment, severity, suicidality, melancholia, number of lifetime depressive episodes, patients' admittance type, education, occupation, and comorbid diabetes, panic, and thyroid disorder. While single predictors could not reach a prediction accuracy much different from random guessing, by combining all predictors we could detect resistance with an accuracy of 0.737 and remission with an accuracy of 0.850. Consequently, 65.5% of predictions for TRD and 77.7% for remission can be expected to be accurate. Using machine learning algorithms, we could demonstrate success rates of 0.737 for predicting TRD and 0.850 for predicting remission, surpassing the predictive capabilities of clinicians. Our results support data mining approaches and suggest the benefit of focusing on interaction-based statistics. Considering that all predictors can easily be obtained in a clinical setting, we hope that our model can be tested by other research groups. © Copyright 2017 Physicians Postgraduate Press, Inc.
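
    A hedged sketch (on synthetic data) of the general workflow described: a random forest trained on 400 patients with 48 predictors, with feature importances available for stepwise predictor reduction, evaluated on a held-out set of 80 patients via ROC analysis. Feature meanings and effect sizes are invented.

```python
# Illustrative sketch, not the authors' pipeline: random forest classification
# of treatment outcome with a 400-patient training set and 80 held-out patients.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(480, 48))                            # 48 clinical/sociodemographic predictors (synthetic)
signal = X[:, :5] @ np.array([0.9, 0.7, 0.5, 0.4, 0.3])
y = (signal + rng.normal(0, 1.5, 480) > 0).astype(int)    # 1 = treatment-resistant (synthetic label)

X_train, y_train = X[:400], y[:400]
X_test, y_test = X[400:], y[400:]

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
importances = clf.feature_importances_                    # basis for stepwise predictor reduction
print("AUC on held-out patients:", round(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]), 3))
```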

  17. Mechanistic modeling to predict the transporter- and enzyme-mediated drug-drug interactions of repaglinide.

    PubMed

    Varma, Manthena V S; Lai, Yurong; Kimoto, Emi; Goosen, Theunis C; El-Kattan, Ayman F; Kumar, Vikas

    2013-04-01

    Quantitative prediction of complex drug-drug interactions (DDIs) is challenging. Repaglinide is mainly metabolized by cytochrome P450 (CYP) 2C8 and CYP3A4, and is also a substrate of organic anion transporting polypeptide (OATP) 1B1. The purpose of this study was to develop a physiologically based pharmacokinetic (PBPK) model to predict the pharmacokinetics and DDIs of repaglinide. In vitro hepatic transport of repaglinide, gemfibrozil and gemfibrozil 1-O-β-glucuronide was characterized using sandwich-culture human hepatocytes. A PBPK model, implemented in Simcyp (Sheffield, UK), was developed utilizing in vitro transport and metabolic clearance data. In vitro studies suggested significant active hepatic uptake of repaglinide. The mechanistic model adequately described repaglinide pharmacokinetics, and successfully predicted DDIs with several OATP1B1 and CYP3A4 inhibitors (<10% error). Furthermore, the repaglinide-gemfibrozil interaction at therapeutic dose was closely predicted using the in vitro fraction metabolized by CYP2C8 (0.71), when primarily considering reversible inhibition of OATP1B1 and mechanism-based inactivation of CYP2C8 by gemfibrozil and gemfibrozil 1-O-β-glucuronide. This study demonstrated that hepatic uptake is rate-determining in the systemic clearance of repaglinide. The model quantitatively predicted several repaglinide DDIs, including the complex interactions with gemfibrozil. Both OATP1B1 and CYP2C8 inhibition contribute significantly to the repaglinide-gemfibrozil interaction and need to be considered for quantitative rationalization of DDIs with either drug.

  18. An Efficient and Imperfect Model for Gravel-Bed Braided River Morphodynamics: Numerical Simulations as Exploratory Tools

    NASA Astrophysics Data System (ADS)

    Kasprak, A.; Brasington, J.; Hafen, K.; Wheaton, J. M.

    2015-12-01

    Numerical models that predict channel evolution through time are an essential tool for investigating processes that occur over timescales which render field observation intractable. However, available morphodynamic models generally take one of two approaches to the complex problem of computing morphodynamics, resulting in oversimplification of the relevant physics (e.g. cellular models) or faithful, yet computationally intensive, representations of the hydraulic and sediment transport processes at play. The practical implication of these approaches is that river scientists must often choose between unrealistic results, in the case of the former, or computational demands that render modeling realistic spatiotemporal scales of channel evolution impossible. Here we present a new modeling framework that operates at the timescale of individual competent flows (e.g. floods), and uses a highly-simplified sediment transport routine that moves volumes of material according to morphologically-derived characteristic transport distances, or path lengths. Using this framework, we have constructed an open-source morphodynamic model, termed MoRPHED, which is here applied, and its validity investigated, at timescales ranging from a single event to a decade on two braided rivers in the UK and New Zealand. We do not purport that MoRPHED is the best, nor even an adequate, tool for modeling braided river dynamics at this range of timescales. Rather, our goal in this research is to explore the utility, feasibility, and sensitivity of an event-scale, path-length-based modeling framework for predicting braided river dynamics. To that end, we further explore (a) which processes are naturally emergent and which must be explicitly parameterized in the model, (b) the sensitivity of the model to the choice of particle travel distance, and (c) whether an event-scale model timestep is adequate for producing braided channel dynamics. The results of this research may inform techniques for future morphodynamic modeling that seeks to maximize computational resources while modeling fluvial dynamics at the timescales of change.

  19. A systems approach to college drinking: development of a deterministic model for testing alcohol control policies.

    PubMed

    Scribner, Richard; Ackleh, Azmy S; Fitzpatrick, Ben G; Jacquez, Geoffrey; Thibodeaux, Jeremy J; Rommel, Robert; Simonsen, Neal

    2009-09-01

    The misuse and abuse of alcohol among college students remain persistent problems. Using a systems approach to understand the dynamics of student drinking behavior and thus forecasting the impact of campus policy to address the problem represents a novel approach. Toward this end, the successful development of a predictive mathematical model of college drinking would represent a significant advance for prevention efforts. A deterministic, compartmental model of college drinking was developed, incorporating three processes: (1) individual factors, (2) social interactions, and (3) social norms. The model quantifies these processes in terms of the movement of students between drinking compartments characterized by five styles of college drinking: abstainers, light drinkers, moderate drinkers, problem drinkers, and heavy episodic drinkers. Predictions from the model were first compared with actual campus-level data and then used to predict the effects of several simulated interventions to address heavy episodic drinking. First, the model provides a reasonable fit of actual drinking styles of students attending Social Norms Marketing Research Project campuses varying by "wetness" and by drinking styles of matriculating students. Second, the model predicts that a combination of simulated interventions targeting heavy episodic drinkers at a moderately "dry" campus would extinguish heavy episodic drinkers, replacing them with light and moderate drinkers. Instituting the same combination of simulated interventions at a moderately "wet" campus would result in only a moderate reduction in heavy episodic drinkers (i.e., 50% to 35%). A simple, five-state compartmental model adequately predicted the actual drinking patterns of students from a variety of campuses surveyed in the Social Norms Marketing Research Project study. The model predicted the impact on drinking patterns of several simulated interventions to address heavy episodic drinking on various types of campuses.
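
    A minimal sketch of a five-compartment model of the kind described, in which students move between drinking styles at constant per-capita rates; the published model also includes social-interaction and social-norm terms, and all rates below are invented for illustration.

```python
# Minimal five-compartment sketch: linear transfers between drinking styles.
import numpy as np
from scipy.integrate import solve_ivp

states = ["abstainer", "light", "moderate", "problem", "heavy_episodic"]
# K[i, j] = per-capita rate of moving from compartment j to compartment i (per semester; invented)
K = np.array([
    [0.00, 0.05, 0.02, 0.01, 0.01],
    [0.10, 0.00, 0.08, 0.02, 0.02],
    [0.02, 0.12, 0.00, 0.05, 0.05],
    [0.00, 0.02, 0.06, 0.00, 0.10],
    [0.00, 0.03, 0.08, 0.12, 0.00],
])

def rhs(t, x):
    inflow = K @ x                     # students arriving in each compartment
    outflow = K.sum(axis=0) * x        # students leaving each compartment
    return inflow - outflow

x0 = np.array([0.30, 0.30, 0.20, 0.10, 0.10])        # fractions of matriculating students (hypothetical)
sol = solve_ivp(rhs, (0.0, 8.0), x0, dense_output=True)  # eight semesters
print(dict(zip(states, np.round(sol.y[:, -1], 3))))
```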

  20. A Systems Approach to College Drinking: Development of a Deterministic Model for Testing Alcohol Control Policies*

    PubMed Central

    Scribner, Richard; Ackleh, Azmy S.; Fitzpatrick, Ben G.; Jacquez, Geoffrey; Thibodeaux, Jeremy J.; Rommel, Robert; Simonsen, Neal

    2009-01-01

    Objective: The misuse and abuse of alcohol among college students remain persistent problems. Using a systems approach to understand the dynamics of student drinking behavior and thus forecasting the impact of campus policy to address the problem represents a novel approach. Toward this end, the successful development of a predictive mathematical model of college drinking would represent a significant advance for prevention efforts. Method: A deterministic, compartmental model of college drinking was developed, incorporating three processes: (1) individual factors, (2) social interactions, and (3) social norms. The model quantifies these processes in terms of the movement of students between drinking compartments characterized by five styles of college drinking: abstainers, light drinkers, moderate drinkers, problem drinkers, and heavy episodic drinkers. Predictions from the model were first compared with actual campus-level data and then used to predict the effects of several simulated interventions to address heavy episodic drinking. Results: First, the model provides a reasonable fit of actual drinking styles of students attending Social Norms Marketing Research Project campuses varying by “wetness” and by drinking styles of matriculating students. Second, the model predicts that a combination of simulated interventions targeting heavy episodic drinkers at a moderately “dry” campus would extinguish heavy episodic drinkers, replacing them with light and moderate drinkers. Instituting the same combination of simulated interventions at a moderately “wet” campus would result in only a moderate reduction in heavy episodic drinkers (i.e., 50% to 35%). Conclusions: A simple, five-state compartmental model adequately predicted the actual drinking patterns of students from a variety of campuses surveyed in the Social Norms Marketing Research Project study. The model predicted the impact on drinking patterns of several simulated interventions to address heavy episodic drinking on various types of campuses. PMID:19737506

  1. Environmental influences on the 137Cs kinetics of the yellow-bellied turtle (Trachemys scripta)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, E.L.; Brisbin, L.I. Jr.

    1996-02-01

    Assessments of ecological risk require accurate predictions of contaminant dynamics in natural populations. However, simple deterministic models that assume constant uptake rates and elimination fractions may compromise both their ecological realism and their general application to animals with variable metabolism or diets. In particular, the temperature-dependent metabolic rates characteristic of ectotherms may lead to significant differences between observed and predicted contaminant kinetics. We examined the influence of a seasonally variable thermal environment on predicting the uptake and annual cycling of contaminants by ectotherms, using a temperature-dependent model of 137Cs kinetics in free-living yellow-bellied turtles, Trachemys scripta. We compared predictions from this model with those of deterministic negative exponential and flexibly shaped Richards sigmoidal models. Concentrations of 137Cs in a population of this species in Pond B, a radionuclide-contaminated nuclear reactor cooling reservoir, and 137Cs uptake by uncontaminated turtles held captive in Pond B for 4 yr confirmed both the pattern of uptake and the equilibrium concentrations predicted by the temperature-dependent model. Almost 90% of the variance in the predicted time-integrated 137Cs concentration was explainable by linear relationships with model parameters. The model was also relatively insensitive to uncertainties in the estimates of ambient temperature, suggesting that adequate estimates of temperature-dependent ingestion and elimination may require relatively few measurements of ambient conditions at sites of interest. Analyses of Richards sigmoidal models of 137Cs uptake indicated significant differences from a negative exponential trajectory in the first year after the turtles' release into Pond B. 76 refs., 7 figs., 5 tabs.

  2. Validity of Models for Predicting BRCA1 and BRCA2 Mutations

    PubMed Central

    Parmigiani, Giovanni; Chen, Sining; Iversen, Edwin S.; Friebel, Tara M.; Finkelstein, Dianne M.; Anton-Culver, Hoda; Ziogas, Argyrios; Weber, Barbara L.; Eisen, Andrea; Malone, Kathleen E.; Daling, Janet R.; Hsu, Li; Ostrander, Elaine A.; Peterson, Leif E.; Schildkraut, Joellen M.; Isaacs, Claudine; Corio, Camille; Leondaridis, Leoni; Tomlinson, Gail; Amos, Christopher I.; Strong, Louise C.; Berry, Donald A.; Weitzel, Jeffrey N.; Sand, Sharon; Dutson, Debra; Kerber, Rich; Peshkin, Beth N.; Euhus, David M.

    2008-01-01

    Background Deleterious mutations of the BRCA1 and BRCA2 genes confer susceptibility to breast and ovarian cancer. At least 7 models for estimating the probabilities of having a mutation are used widely in clinical and scientific activities; however, the merits and limitations of these models are not fully understood. Objective To systematically quantify the accuracy of the following publicly available models to predict mutation carrier status: BRCAPRO, family history assessment tool, Finnish, Myriad, National Cancer Institute, University of Pennsylvania, and Yale University. Design Cross-sectional validation study, using model predictions and BRCA1 or BRCA2 mutation status of patients different from those used to develop the models. Setting Multicenter study across Cancer Genetics Network participating centers. Patients 3 population-based samples of participants in research studies and 8 samples from genetic counseling clinics. Measurements Discrimination between individuals testing positive for a mutation in BRCA1 or BRCA2 from those testing negative, as measured by the c-statistic, and sensitivity and specificity of model predictions. Results The 7 models differ in their predictions. The better-performing models have a c-statistic around 80%. BRCAPRO has the largest c-statistic overall and in all but 2 patient subgroups, although the margin over other models is narrow in many strata. Outside of high-risk populations, all models have high false-negative and false-positive rates across a range of probability thresholds used to refer for mutation testing. Limitation Three recently published models were not included. Conclusions All models identify women who probably carry a deleterious mutation of BRCA1 or BRCA2 with adequate discrimination to support individualized genetic counseling, although discrimination varies across models and populations. PMID:17909205

  3. On the use of three hydrological models as hypotheses to investigate the behaviour of a small Mediterranean catchment

    NASA Astrophysics Data System (ADS)

    Ruiz Pérez, Guiomar; Latron, Jérôme; Llorens, Pilar; Gallart, Francesc; Francés, Félix

    2017-04-01

    Selecting an adequate hydrological model is the first step in carrying out a rainfall-runoff modelling exercise. A hydrological model is a hypothesis of catchment functioning, encompassing a description of dominant hydrological processes and predicting how these processes interact to produce the catchment's response to external forcing. Current research lines emphasize the importance of multiple working hypotheses for hydrological modelling instead of relying on a single model. In line with this philosophy, different hypotheses were considered and analysed here to simulate the nonlinear response of a small Mediterranean catchment and to progress in the analysis of its hydrological behaviour. In particular, three hydrological models were considered, representing different potential hypotheses: two lumped models called LU3 and LU4, and one distributed model called TETIS. To determine how well each specific model performed, and to assess whether a model was more adequate than another, we applied three complementary tests: one based on the analysis of residual error series, another based on a sensitivity analysis, and the last based on multiple evaluation criteria associated with the concept of the Pareto frontier. This modelling approach, based on multiple working hypotheses, helped to improve our perceptual model of catchment behaviour and, furthermore, could be used as guidance to improve the performance of other environmental models.

  4. High-resolution vertical profiles of groundwater electrical conductivity (EC) and chloride from direct-push EC logs

    NASA Astrophysics Data System (ADS)

    Bourke, Sarah A.; Hermann, Kristian J.; Hendry, M. Jim

    2017-11-01

    Elevated groundwater salinity associated with produced water, leaching from landfills or secondary salinity can degrade arable soils and potable water resources. Direct-push electrical conductivity (EC) profiling enables rapid, relatively inexpensive, high-resolution in-situ measurements of subsurface salinity, without requiring core collection or installation of groundwater wells. However, because the direct-push tool measures the bulk EC of both solid and liquid phases (ECa), incorporation of ECa data into regional or historical groundwater data sets requires the prediction of pore water EC (ECw) or chloride (Cl-) concentrations from measured ECa. Statistical linear regression and physically based models for predicting ECw and Cl- from ECa profiles were tested on a brine plume in central Saskatchewan, Canada. A linear relationship between ECa/ECw and porosity was more accurate for predicting ECw and Cl- concentrations than a power-law relationship (Archie's Law). Despite clay contents of up to 96%, the addition of terms to account for electrical conductance in the solid phase did not improve model predictions. In the absence of porosity data, statistical linear regression models adequately predicted ECw and Cl- concentrations from direct-push ECa profiles (ECw = 5.48 ECa + 0.78, R² = 0.87; Cl- = 1,978 ECa - 1,398, R² = 0.73). These statistical models can be used to predict ECw in the absence of lithologic data and will be particularly useful for initial site assessments. The more accurate linear physically based model can be used to predict ECw and Cl- as porosity data become available and the site-specific ECw-Cl- relationship is determined.
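
    A small sketch applying the reported regression equations to a direct-push ECa log. The coefficients are those quoted above; the depth profile is fabricated, and EC units are left as in the original study (not specified here).

```python
# Apply the reported statistical regressions to a (fabricated) ECa depth profile.
import numpy as np

def predict_ecw(eca):
    """Pore-water EC from bulk EC, using the reported regression (units as in the study)."""
    return 5.48 * eca + 0.78

def predict_cl(eca):
    """Chloride concentration (mg/L) from bulk EC, using the reported regression."""
    return 1978.0 * eca - 1398.0

depth = np.arange(0.0, 10.0, 0.5)                    # m, hypothetical log
eca = 0.5 + 0.3 * np.exp(-((depth - 6.0) ** 2))      # fabricated ECa profile
for d, e in zip(depth, eca):
    print(f"{d:4.1f} m  ECa={e:5.2f}  ECw={predict_ecw(e):5.2f}  Cl={predict_cl(e):7.0f} mg/L")
```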

  5. Software sensors for biomass concentration in a SSC process using artificial neural networks and support vector machine.

    PubMed

    Acuña, Gonzalo; Ramirez, Cristian; Curilem, Millaray

    2014-01-01

    The lack of sensors for some relevant state variables in fermentation processes can be addressed by developing appropriate software sensors. In this work, NARX-ANN, NARMAX-ANN, NARX-SVM and NARMAX-SVM models are compared when acting as software sensors of biomass concentration for a solid substrate cultivation (SSC) process. Results show that NARMAX-SVM outperforms the other models, with an SMAPE index under 9 for 20% amplitude noise. In addition, NARMAX models perform better than NARX models under the same noise conditions because of their better predictive capabilities, as they include prediction errors as inputs. In the case of perturbation of the initial conditions of the autoregressive variable, NARX models exhibited better convergence capabilities. This work also confirms that a variable that is difficult to measure, like biomass concentration, can be estimated on-line from easy-to-measure variables like CO₂ and O₂ using an adequate software sensor based on computational intelligence techniques.

  6. Forecasting dengue hemorrhagic fever cases using ARIMA model: a case study in Asahan district

    NASA Astrophysics Data System (ADS)

    Siregar, Fazidah A.; Makmur, Tri; Saprin, S.

    2018-01-01

    Time series analysis has been increasingly used to forecast the number of dengue hemorrhagic fever cases in many studies. Since no vaccine exists and public health infrastructure is poor, predicting the occurrence of dengue hemorrhagic fever (DHF) is crucial. This study was conducted to determine the trend and to forecast the occurrence of DHF in Asahan district, North Sumatera Province. Monthly reported dengue cases for the years 2012-2016 were obtained from the district health offices. A time series analysis was conducted by autoregressive integrated moving average (ARIMA) modeling to forecast the occurrence of DHF. The results demonstrated that the reported DHF cases showed a seasonal variation. The SARIMA (1,0,0)(0,1,1)12 model was the best model and was adequate for the data. The SARIMA model is needed and could be applied to predict the incidence of DHF in Asahan district and to assist in designing public health measures to prevent and control the disease.
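
    A hedged sketch of fitting the SARIMA(1,0,0)(0,1,1)12 model named above to a monthly case series with statsmodels; the series here is simulated, not the Asahan district data.

```python
# Fit SARIMA(1,0,0)(0,1,1)12 to a simulated monthly case series and forecast 12 months.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)
months = pd.date_range("2012-01-01", periods=60, freq="MS")
seasonal = 20 + 10 * np.sin(2 * np.pi * np.arange(60) / 12)       # seasonal pattern (simulated)
cases = np.maximum(0, seasonal + rng.normal(0, 4, 60)).round()
series = pd.Series(cases, index=months)

model = SARIMAX(series, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
forecast = fit.get_forecast(steps=12).predicted_mean              # next year's expected cases
print(forecast.round(1))
```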

  7. Vibration-translation energy transfer in vibrationally excited diatomic molecules. Ph.D. Thesis - York Univ., Toronto

    NASA Technical Reports Server (NTRS)

    Mckenzie, R. L.

    1976-01-01

    A semiclassical collision model is applied to the study of energy transfer rates between a vibrationally excited diatomic molecule and a structureless atom. The molecule is modeled as an anharmonic oscillator with a multitude of dynamically coupled vibrational states. Three main aspects in the prediction of vibrational energy transfer rates are considered. The applicability of the semiclassical model to an anharmonic oscillator is first evaluated for collinear encounters. Second, the collinear semiclassical model is applied to obtain numerical predictions of the dependence of the vibrational energy transfer rate on the initial vibrational state quantum number. Thermally averaged vibration-translation rate coefficients are predicted and compared with CO-He experimental values for both ground and excited initial states. The numerical model is also used as a basis for evaluating several less complete but analytic models. Third, the role of rotational motion in the dynamics of vibrational energy transfer is examined. A three-dimensional semiclassical collision model is constructed with coupled rotational motion included. Energy transfer within the molecule is shown to be dominated by vibration-rotation transitions with small changes in angular momentum. The rates of vibrational energy transfer in molecules with rotational frequencies that are very small in comparison to their vibrational frequency are shown to be adequately treated by the preceding collinear models.

  8. Physiologically Based Pharmacokinetic Modeling in Lead Optimization. 1. Evaluation and Adaptation of GastroPlus To Predict Bioavailability of Medchem Series.

    PubMed

    Daga, Pankaj R; Bolger, Michael B; Haworth, Ian S; Clark, Robert D; Martin, Eric J

    2018-03-05

    When medicinal chemists need to improve bioavailability (%F) within a chemical series during lead optimization, they synthesize new series members with systematically modified properties mainly by following experience and general rules of thumb. More quantitative models that predict %F of proposed compounds from chemical structure alone have proven elusive. Global empirical %F quantitative structure-property (QSPR) models perform poorly, and projects have too little data to train local %F QSPR models. Mechanistic oral absorption and physiologically based pharmacokinetic (PBPK) models simulate the dissolution, absorption, systemic distribution, and clearance of a drug in preclinical species and humans. Attempts to build global PBPK models based purely on calculated inputs have not achieved the <2-fold average error needed to guide lead optimization. In this work, local GastroPlus PBPK models are instead customized for individual medchem series. The key innovation was building a local QSPR for a numerically fitted effective intrinsic clearance (CLloc). All inputs are subsequently computed from structure alone, so the models can be applied in advance of synthesis. Training CLloc on the first 15-18 rat %F measurements gave adequate predictions, with clear improvements up to about 30 measurements, and incremental improvements beyond that.

  9. Dynamic contraction behaviour of pneumatic artificial muscle

    NASA Astrophysics Data System (ADS)

    Doumit, Marc D.; Pardoel, Scott

    2017-07-01

    The development of a dynamic model for the Pneumatic Artificial Muscle (PAM) is an imperative undertaking for understanding and analyzing the behaviour of the PAM as a function of time. This paper proposes a Newtonian-based dynamic PAM model that includes the modeling of the muscle geometry, force, inertia, fluid dynamics, static and dynamic friction, heat transfer and valve flow, while ignoring the effect of bladder elasticity. This modeling contribution allows the designer to predict, analyze and optimize PAM performance prior to its development, thus advancing successful implementations of PAM-based powered exoskeletons and medical systems. To date, most muscle dynamic properties are determined experimentally; furthermore, no analytical models that can accurately predict the muscle's dynamic behaviour are found in the literature. Most developed analytical models adequately predict the muscle force in static cases but neglect the behaviour of the system in the transient response. This could be attributed to the highly challenging task of deriving such a dynamic model, given the number of system elements that need to be identified and the system's highly non-linear properties. The proposed dynamic model is successfully simulated in MATLAB, and the predicted pressure, contraction distance and muscle temperature are validated against experimental testing conducted with an in-house built prototype PAM.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Zheng, Jianqiu; Xu, Xiaofeng

    Soil organic carbon turnover to CO2 and CH4 is sensitive to soil redox potential and pH conditions. However, land surface models do not consider redox and pH in the aqueous phase explicitly, thereby limiting their use for making predictions in anoxic environments. Using recent data from incubations of Arctic soils, we extend the Community Land Model with coupled carbon and nitrogen (CLM-CN) decomposition cascade to include simple organic substrate turnover, fermentation, Fe(III) reduction, and methanogenesis reactions, and assess the efficacy of various temperature and pH response functions. Incorporating the Windermere Humic Aqueous Model (WHAM) enables us to approximately describe the observed pH evolution without additional parameterization. Though Fe(III) reduction is normally assumed to compete with methanogenesis, the model predicts that Fe(III) reduction raises the pH from acidic to neutral, thereby reducing environmental stress to methanogens and accelerating methane production when substrates are not limiting. Furthermore, the equilibrium speciation predicts a substantial increase in CO2 solubility as pH increases, and taking into account CO2 adsorption to surface sites of metal oxides further decreases the predicted headspace gas-phase fraction at low pH. Without adequate representation of these speciation reactions, as well as the impacts of pH, temperature, and pressure, the CO2 production from closed microcosms can be substantially underestimated based on headspace CO2 measurements only. Our results demonstrate the efficacy of geochemical models for simulating soil biogeochemistry and provide predictive understanding and mechanistic representations that can be incorporated into land surface models to improve climate predictions.

  11. Validating a spatially distributed hydrological model with soil morphology data

    NASA Astrophysics Data System (ADS)

    Doppler, T.; Honti, M.; Zihlmann, U.; Weisskopf, P.; Stamm, C.

    2014-09-01

    Spatially distributed models are popular tools in hydrology claimed to be useful to support management decisions. Despite the high spatial resolution of the computed variables, calibration and validation is often carried out only on discharge time series at specific locations due to the lack of spatially distributed reference data. Because of this restriction, the predictive power of these models, with regard to predicted spatial patterns, can usually not be judged. An example of spatial predictions in hydrology is the prediction of saturated areas in agricultural catchments. These areas can be important source areas for inputs of agrochemicals to the stream. We set up a spatially distributed model to predict saturated areas in a 1.2 km2 catchment in Switzerland with moderate topography and artificial drainage. We translated soil morphological data available from soil maps into an estimate of the duration of soil saturation in the soil horizons. This resulted in a data set with high spatial coverage on which the model predictions were validated. In general, these saturation estimates corresponded well to the measured groundwater levels. We worked with a model that would be applicable for management decisions because of its fast calculation speed and rather low data requirements. We simultaneously calibrated the model to observed groundwater levels and discharge. The model was able to reproduce the general hydrological behavior of the catchment in terms of discharge and absolute groundwater levels. However, the groundwater level predictions were not accurate enough to be used for the prediction of saturated areas. Groundwater level dynamics were not adequately reproduced and the predicted spatial saturation patterns did not correspond to those estimated from the soil map. Our results indicate that an accurate prediction of the groundwater level dynamics of the shallow groundwater in our catchment that is subject to artificial drainage would require a model that better represents processes at the boundary between the unsaturated and the saturated zone. However, data needed for such a more detailed model are not generally available. This severely hampers the practical use of such models despite their usefulness for scientific purposes.

  12. Rough-to-smooth transition of an equilibrium neutral constant stress layer

    NASA Technical Reports Server (NTRS)

    Logan, E., Jr.; Fichtl, G. H.

    1975-01-01

    Purpose of research on rough-to-smooth transition of an equilibrium neutral constant stress layer is to develop a model for low-level atmospheric flow over terrains of abruptly changing roughness, such as those occurring near the windward end of a landing strip, and to use the model to derive functions which define the extent of the region affected by the roughness change and allow adequate prediction of wind and shear stress profiles at all points within the region. A model consisting of two bounding logarithmic layers and an intermediate velocity defect layer is assumed, and dimensionless velocity and stress distribution functions which meet all boundary and matching conditions are hypothesized. The functions are used in an asymptotic form of the equation of motion to derive a relation which governs the growth of the internal boundary layer. The growth relation is used to predict variation of surface shear stress.

  13. Spring-back simulation of unidirectional carbon/epoxy L- shaped laminate composites manufactured through autoclave processing

    NASA Astrophysics Data System (ADS)

    Nasir, M. N. M.; Mezeix, L.; Aminanda, Y.; Seman, M. A.; Rivai, A.; Ali, K. M.

    2016-02-01

    This paper presents an original method for predicting the spring-back of composite aircraft structures using non-linear Finite Element Analysis (FEA) and is an extension of the previous accompanying study on flat geometry samples. Firstly, unidirectional prepreg lay-up samples are fabricated on moulds with different corner angles (30°, 45° and 90°) and the effect on spring-back deformation is observed. Then, the FEA model that was developed in the previous study on flat samples is utilized. The model maintains the physical mechanisms of spring-back such as ply stretching and tool-part interface properties, with an additional mechanism for the corner effect and geometrical changes in the tool, part and the tool-part interface components. The comparative study between the experimental data and FEA results shows that the FEA model adequately predicts the spring-back deformation within the range of corner angles tested.

  14. Annual Research Briefs, 1987

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Reynolds, William C.

    1988-01-01

    Lagrangian techniques have found widespread application to the prediction and understanding of turbulent transport phenomena and have yielded satisfactory results for different cases of shear flow problems. However, it must be kept in mind that in most experiments what is really available are Eulerian statistics, and it is far from obvious how to extract from them the information relevant to the Lagrangian behavior of the flow; in consequence, Lagrangian models still include some hypotheses for which no adequate supporting evidence has been available until now. Direct numerical simulation of turbulence offers a new way to obtain Lagrangian statistics and so verify the validity of the current predictive models and the accuracy of their results. After the pioneering work of Riley (Riley and Patterson, 1974) in the 1970s, some such results have just appeared in the literature (Lee et al.; Yeung and Pope). The present contribution follows in part similar lines, but focuses on two-particle statistics and comparison with existing models.

  15. Recent Progress Towards Predicting Aircraft Ground Handling Performance

    NASA Technical Reports Server (NTRS)

    Yager, T. J.; White, E. J.

    1981-01-01

    The significant progress which has been achieved in development of aircraft ground handling simulation capability is reviewed and additional improvements in software modeling are identified. The problem associated with providing necessary simulator input data for adequate modeling of aircraft tire/runway friction behavior is discussed and efforts to improve this complex model, and hence simulator fidelity, are described. Aircraft braking performance data obtained on several wet runway surfaces are compared to ground vehicle friction measurements and, by use of empirically derived methods, good agreement between actual and estimated aircraft braking friction from ground vehicle data is shown. The performance of a relatively new friction measuring device, the friction tester, showed great promise in providing data applicable to aircraft friction performance. Additional research efforts to improve methods of predicting tire friction performance are discussed including use of an instrumented tire test vehicle to expand the tire friction data bank and a study of surface texture measurement techniques.

  16. Introducing the VRT gas turbine combustor

    NASA Technical Reports Server (NTRS)

    Melconian, Jerry O.; Mostafa, Abdu A.; Nguyen, Hung Lee

    1990-01-01

    An innovative annular combustor configuration is being developed for aircraft and other gas turbine engines. This design has the potential of permitting higher turbine inlet temperatures by reducing the pattern factor and providing a major reduction in NO(x) emission. The design concept is based on a Variable Residence Time (VRT) technique which allows large fuel particles adequate time to completely burn in the circumferentially mixed primary zone. High durability of the combustor is achieved by dual function use of the incoming air. The feasibility of the concept was demonstrated by water analogue tests and 3-D computer modeling. The computer model predicted a 50 percent reduction in pattern factor when compared to a state of the art conventional combustor. The VRT combustor uses only half the number of fuel nozzles of the conventional configuration. The results of the chemical kinetics model require further investigation, as the NO(x) predictions did not correlate with the available experimental and analytical data base.

  17. Predicting future protection of respirator users: Statistical approaches and practical implications.

    PubMed

    Hu, Chengcheng; Harber, Philip; Su, Jing

    2016-01-01

    The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
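
    As a sketch of the conditional-prediction idea underlying this approach, the snippet below predicts a worker's future log10 fit factors from two initial tests, assuming the joint multivariate normal distribution implied by a simple variance-components (mixed-effects) decomposition. The mean and variance components are invented placeholders, not the study's estimates.

```python
import numpy as np

# Assumed variance components on the log10 fit-factor scale (placeholders).
var_subject = 0.10   # between-worker variability
var_day     = 0.04   # day-to-day (longer-term) variability
var_within  = 0.02   # within-day (test-to-test) variability
mu = 3.0             # overall mean log10 fit factor

def conditional_future(past, n_future=2):
    """Conditional mean of future log fit factors given past tests.

    Past tests: two on the initial day; future tests: two on a later day.
    The joint covariance follows from the shared subject effect (all tests)
    and the shared day effect (tests on the same day).
    """
    n_past = len(past)
    n = n_past + n_future
    cov = np.full((n, n), var_subject)
    cov[:n_past, :n_past] += var_day
    cov[n_past:, n_past:] += var_day
    cov += np.eye(n) * var_within
    S11 = cov[:n_past, :n_past]
    S21 = cov[n_past:, :n_past]
    # Conditional mean of a multivariate normal given the observed block:
    return mu + S21 @ np.linalg.solve(S11, np.asarray(past) - mu)

print(conditional_future([2.4, 2.6]))  # predicted future log10 fit factors
```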

  18. A microstructurally based model of solder joints under conditions of thermomechanical fatigue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frear, D.R.; Burchett, S.N.; Rashid, M.M.

    The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue. In this paper we present two computational methodologies that have been developed to predict the behavior of near eutectic Sn-Pb solder joints under fatigue conditions that are based on metallurgical tests as fundamental input for constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near eutectic Sn-Pb solder. The finite element simulations from this model agree well with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single-phase model is a computational technique that was developed to predict solder joint behavior using materials data for constitutive relation constants that could be determined through straightforward metallurgical experiments. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests and the results showed an adequate fit to experimental results. The single-phase model could be very useful for conditions where microstructural evolution is not a dominant factor in fatigue.

  19. Use of mobile and passive badge air monitoring data for NOX and ozone air pollution spatial exposure prediction models.

    PubMed

    Xu, Wei; Riley, Erin A; Austin, Elena; Sasakura, Miyoko; Schaal, Lanae; Gould, Timothy R; Hartin, Kris; Simpson, Christopher D; Sampson, Paul D; Yost, Michael G; Larson, Timothy V; Xiu, Guangli; Vedal, Sverre

    2017-03-01

    Air pollution exposure prediction models can make use of many types of air monitoring data. Fixed-location passive samplers typically measure concentrations averaged over several days to weeks. Mobile monitoring data can generate near-continuous concentration measurements. It is not known whether mobile monitoring data are suitable for generating well-performing exposure prediction models or how they compare with other types of monitoring data in generating exposure models. Measurements from fixed-site passive samplers and a mobile monitoring platform were made over a 2-week period in Baltimore in the summer and winter months in 2012. Performance of exposure prediction models for long-term nitrogen oxides (NOX) and ozone (O3) concentrations was compared using a state-of-the-art approach for model development based on land use regression (LUR) and geostatistical smoothing. Model performance was evaluated using leave-one-out cross-validation (LOOCV). Models performed well using the mobile peak traffic monitoring data for both NOX and O3, with LOOCV R2 values of 0.70 and 0.71, respectively, in the summer, and 0.90 and 0.58, respectively, in the winter. Models using 2-week passive samples for NOX had LOOCV R2 values of 0.60 and 0.65 in the summer and winter months, respectively. The passive badge sampling data were not adequate for developing models for O3. Mobile air monitoring data can be used to successfully build well-performing LUR exposure prediction models for NOX and O3 and are a better source of data for these models than 2-week passive badge data.
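
    As an illustration of the LOOCV evaluation step for a land-use-regression-style model, the sketch below fits a linear model to synthetic site covariates and reports the cross-validated R2; the covariates and concentrations are invented, and the geostatistical smoothing of residuals used in the study is omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_sites = 30
# Hypothetical land-use covariates at each monitoring site (e.g. road length,
# distance to highway, impervious fraction) and observed log NOx averages.
X = rng.normal(size=(n_sites, 3))
y = 1.5 + X @ np.array([0.8, -0.3, 0.2]) + rng.normal(scale=0.3, size=n_sites)

loo = LeaveOneOut()
preds = np.empty(n_sites)
for train_idx, test_idx in loo.split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

ss_res = np.sum((y - preds) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(f"LOOCV R2 = {1 - ss_res / ss_tot:.2f}")
```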

  20. Bioconcentration of gaseous organic chemicals in plant leaves: Comparison of experimental data with model predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polder, M.D.; Hulzebos, E.M.; Jager, D.T.

    1998-01-01

    This literature study is performed to support the implementation of two models in a risk assessment system for the evaluation of chemicals and their risk for human health and the environment. One of the exposure pathways for humans and cattle is the uptake of chemicals by plants. In this risk assessment system, the transfer of gaseous organic substances from air to plants as modeled by Riederer is included. A similar model with a more refined approach, including dilution by growth, is proposed by Trapp and Matthies, which was implemented in the European version of this risk assessment system (EUSES). In this study both models are evaluated by comparison with experimental data on leaf/air partition coefficients found in the literature. For herbaceous plants both models give good estimations for the leaf/air partition coefficient up to 10^7, with deviations for most substances within a factor of five. For the azalea and spruce group the fit between experimental BCF values and the calculated model values is less adequate. For substances for which Riederer estimates a leaf/air partition coefficient above 10^7, the approach of Trapp and Matthies seems more adequate; however, few data were available.

  1. A mobile-mobile transport model for simulating reactive transport in connected heterogeneous fields

    NASA Astrophysics Data System (ADS)

    Lu, Chunhui; Wang, Zhiyuan; Zhao, Yue; Rathore, Saubhagya Singh; Huo, Jinge; Tang, Yuening; Liu, Ming; Gong, Rulan; Cirpka, Olaf A.; Luo, Jian

    2018-05-01

    Mobile-immobile transport models can be effective in reproducing heavily tailed breakthrough curves of concentration. However, such models may not adequately describe transport along multiple flow paths with intermediate velocity contrasts in connected fields. We propose using the mobile-mobile model for simulating subsurface flow and associated mixing-controlled reactive transport in connected fields. This model includes two local concentrations, one in the fast- and the other in the slow-flow domain, which predict both the concentration mean and variance. The normalized total concentration variance within the flux is found to be a non-monotonic function of the discharge ratio with a maximum concentration variance at intermediate values of the discharge ratio. We test the mobile-mobile model for mixing-controlled reactive transport with an instantaneous, irreversible bimolecular reaction in structured and connected random heterogeneous domains, and compare the performance of the mobile-mobile to the mobile-immobile model. The results indicate that the mobile-mobile model generally predicts the concentration breakthrough curves (BTCs) of the reactive compound better. Particularly, for cases of an elliptical inclusion with intermediate hydraulic-conductivity contrasts, where the travel-time distribution shows bimodal behavior, the prediction of both the BTCs and maximum product concentration is significantly improved. Our results exemplify that the conceptual model of two mobile domains with diffusive mass transfer in between is in general good for predicting mixing-controlled reactive transport, and particularly so in cases where the transfer in the low-conductivity zones is by slow advection rather than diffusion.

  2. Development of a 3D finite element acoustic model to predict the sound reduction index of stud based double-leaf walls

    NASA Astrophysics Data System (ADS)

    Arjunan, A.; Wang, C. J.; Yahiaoui, K.; Mynors, D. J.; Morgan, T.; Nguyen, V. B.; English, M.

    2014-11-01

    Building standards incorporating quantitative acoustical criteria to ensure adequate sound insulation are now being implemented. Engineers are making great efforts to design acoustically efficient double-wall structures. Accordingly, efficient simulation models to predict the acoustic insulation of double-leaf wall structures are needed. This paper presents the development of a numerical tool that can predict the frequency-dependent sound reduction index R of stud-based double-leaf walls in the one-third-octave band frequency range. A fully vibro-acoustic 3D model, consisting of two rooms partitioned by a double-leaf wall and considering the structure-acoustic fluid coupling within the existing fluid and structural solvers, is presented. The validity of the finite element (FE) model is assessed by comparison with experimental test results carried out in a certified laboratory. Accurate representation of the structural damping matrix to effectively predict the R values is studied. The possibility of minimising the simulation time using a frequency-dependent mesh model was also investigated. The FEA model presented in this work is capable of predicting the weighted sound reduction index Rw along with A-weighted pink noise C and A-weighted urban noise Ctr within an error of 1 dB. The model developed can also be used to analyse the acoustically induced frequency-dependent geometrical behaviour of the double-leaf wall components to optimise them for best acoustic performance. The FE modelling procedure reported in this paper can be extended to other building components undergoing fluid-structure interaction (FSI) to evaluate their acoustic insulation.

  3. Dietary Iodine Sufficiency and Moderate Insufficiency in the Lactating Mother and Nursing Infant: A Computational Perspective

    PubMed Central

    Fisher, W.; Wang, Jian; George, Nysia I.; Gearhart, Jeffery M.; McLanahan, Eva D.

    2016-01-01

    The Institute of Medicine recommends that lactating women ingest 290 μg iodide/d and a nursing infant, less than two years of age, 110 μg/d. The World Health Organization, United Nations Children’s Fund, and International Council for the Control of Iodine Deficiency Disorders recommend population maternal and infant urinary iodide concentrations ≥ 100 μg/L to ensure iodide sufficiency. For breast milk, researchers have proposed an iodide concentration range of 150–180 μg/L indicates iodide sufficiency for the mother and infant, however no national or international guidelines exist for breast milk iodine concentration. For the first time, a lactating woman and nursing infant biologically based model, from delivery to 90 days postpartum, was constructed to predict maternal and infant urinary iodide concentration, breast milk iodide concentration, the amount of iodide transferred in breast milk to the nursing infant each day and maternal and infant serum thyroid hormone kinetics. The maternal and infant models each consisted of three sub-models, iodide, thyroxine (T4), and triiodothyronine (T3). Using our model to simulate a maternal intake of 290 μg iodide/d, the average daily amount of iodide ingested by the nursing infant, after 4 days of life, gradually increased from 50 to 101 μg/day over 90 days postpartum. The predicted average lactating mother and infant urinary iodide concentrations were both in excess of 100 μg/L and the predicted average breast milk iodide concentration, 157 μg/L. The predicted serum thyroid hormones (T4, free T4 (fT4), and T3) in both the nursing infant and lactating mother were indicative of euthyroidism. The model was calibrated using serum thyroid hormone concentrations for lactating women from the United States and was successful in predicting serum T4 and fT4 levels (within a factor of two) for lactating women in other countries. T3 levels were adequately predicted. Infant serum thyroid hormone levels were adequately predicted for most data. For moderate iodide deficient conditions, where dietary iodide intake may range from 50 to 150 μg/d for the lactating mother, the model satisfactorily described the iodide measurements, although with some variation, in urine and breast milk. Predictions of serum thyroid hormones in moderately iodide deficient lactating women (50 μg/d) and nursing infants did not closely agree with mean reported serum thyroid hormone levels, however, predictions were usually within a factor of two. Excellent agreement between prediction and observation was obtained for a recent moderate iodide deficiency study in lactating women. Measurements included iodide levels in urine of infant and mother, iodide in breast milk, and serum thyroid hormone levels in infant and mother. A maternal iodide intake of 50 μg/d resulted in a predicted 29–32% reduction in serum T4 and fT4 in nursing infants, however the reduced serum levels of T4 and fT4 were within most of the published reference intervals for infant. This biologically based model is an important first step at integrating the rapid changes that occur in the thyroid system of the nursing newborn in order to predict adverse outcomes from exposure to thyroid acting chemicals, drugs, radioactive materials or iodine deficiency. PMID:26930410

  4. Dietary Iodine Sufficiency and Moderate Insufficiency in the Lactating Mother and Nursing Infant: A Computational Perspective.

    PubMed

    Fisher, W; Wang, Jian; George, Nysia I; Gearhart, Jeffery M; McLanahan, Eva D

    2016-01-01

    The Institute of Medicine recommends that lactating women ingest 290 μg iodide/d and a nursing infant, less than two years of age, 110 μg/d. The World Health Organization, United Nations Children's Fund, and International Council for the Control of Iodine Deficiency Disorders recommend population maternal and infant urinary iodide concentrations ≥ 100 μg/L to ensure iodide sufficiency. For breast milk, researchers have proposed an iodide concentration range of 150-180 μg/L indicates iodide sufficiency for the mother and infant, however no national or international guidelines exist for breast milk iodine concentration. For the first time, a lactating woman and nursing infant biologically based model, from delivery to 90 days postpartum, was constructed to predict maternal and infant urinary iodide concentration, breast milk iodide concentration, the amount of iodide transferred in breast milk to the nursing infant each day and maternal and infant serum thyroid hormone kinetics. The maternal and infant models each consisted of three sub-models, iodide, thyroxine (T4), and triiodothyronine (T3). Using our model to simulate a maternal intake of 290 μg iodide/d, the average daily amount of iodide ingested by the nursing infant, after 4 days of life, gradually increased from 50 to 101 μg/day over 90 days postpartum. The predicted average lactating mother and infant urinary iodide concentrations were both in excess of 100 μg/L and the predicted average breast milk iodide concentration, 157 μg/L. The predicted serum thyroid hormones (T4, free T4 (fT4), and T3) in both the nursing infant and lactating mother were indicative of euthyroidism. The model was calibrated using serum thyroid hormone concentrations for lactating women from the United States and was successful in predicting serum T4 and fT4 levels (within a factor of two) for lactating women in other countries. T3 levels were adequately predicted. Infant serum thyroid hormone levels were adequately predicted for most data. For moderate iodide deficient conditions, where dietary iodide intake may range from 50 to 150 μg/d for the lactating mother, the model satisfactorily described the iodide measurements, although with some variation, in urine and breast milk. Predictions of serum thyroid hormones in moderately iodide deficient lactating women (50 μg/d) and nursing infants did not closely agree with mean reported serum thyroid hormone levels, however, predictions were usually within a factor of two. Excellent agreement between prediction and observation was obtained for a recent moderate iodide deficiency study in lactating women. Measurements included iodide levels in urine of infant and mother, iodide in breast milk, and serum thyroid hormone levels in infant and mother. A maternal iodide intake of 50 μg/d resulted in a predicted 29-32% reduction in serum T4 and fT4 in nursing infants, however the reduced serum levels of T4 and fT4 were within most of the published reference intervals for infant. This biologically based model is an important first step at integrating the rapid changes that occur in the thyroid system of the nursing newborn in order to predict adverse outcomes from exposure to thyroid acting chemicals, drugs, radioactive materials or iodine deficiency.

  5. Multi-objective optimization for model predictive control.

    PubMed

    Wojsznis, Willy; Mehta, Ashish; Wojsznis, Peter; Thiele, Dirk; Blevins, Terry

    2007-06-01

    This paper presents a technique of multi-objective optimization for Model Predictive Control (MPC) where the optimization has three levels of the objective function, in order of priority: handling constraints, maximizing economics, and maintaining control. The greatest weights are assigned dynamically to control or constraint variables that are predicted to be out of their limits. The weights assigned for economics have to out-weigh those assigned for control objectives. Control variables (CV) can be controlled at fixed targets or within one- or two-sided ranges around the targets. Manipulated Variables (MV) can have assigned targets too, which may be predefined values or current actual values. This MV functionality is extremely useful when economic objectives are not defined for some or all the MVs. To achieve this complex operation, handle process outputs predicted to go out of limits, and have a guaranteed solution for any condition, the technique makes use of the priority structure, penalties on slack variables, and redefinition of the constraint and control model. An engineering implementation of this approach is shown in the MPC embedded in an industrial control system. The optimization and control of a distillation column, the standard Shell heavy oil fractionator (HOF) problem, is adequately achieved with this MPC.
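
    As a toy illustration of the prioritized-weights idea described above (constraint handling above economics above control), the following single-step sketch softens CV limits with penalized slack variables. The gain matrix, limits, and weights are invented for illustration, and a real MPC would optimize moves over a full prediction horizon with the vendor's redefinition of the constraint and control model.

```python
import cvxpy as cp
import numpy as np

# Toy steady-state gain model: CV deviation change = G @ MV move (invented numbers).
G = np.array([[1.2, -0.4],
              [0.3,  0.9]])
cv_now = np.array([2.5, -1.0])          # current CV deviations from targets
cv_hi = np.array([2.0, 2.0])            # CV upper limits (deviation form)
mv = cp.Variable(2)                     # MV moves
slack = cp.Variable(2, nonneg=True)     # constraint-violation slack variables

cv_pred = cv_now + G @ mv

# Priority expressed through weights: constraints >> economics >> control.
w_constraint, w_econ, w_control = 1e4, 1e2, 1.0
econ_cost = np.array([1.0, 0.5]) @ mv        # e.g. energy/feed cost of the moves
control_cost = cp.sum_squares(cv_pred)       # keep CVs near their targets

objective = cp.Minimize(w_constraint * cp.sum(slack)
                        + w_econ * econ_cost
                        + w_control * control_cost)
constraints = [cv_pred <= cv_hi + slack,     # softened CV limits
               cp.abs(mv) <= 1.0]            # hard MV move limits
cp.Problem(objective, constraints).solve()
print("MV moves:", mv.value, "slack:", slack.value)
```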

  6. Recent advances, and unresolved issues, in the application of computational modelling to the prediction of the biological effects of nanomaterials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkler, David A., E-mail: dave.winkler@csiro.au

    2016-05-15

    Nanomaterials research is one of the fastest growing contemporary research areas. The unprecedented properties of these materials have meant that they are being incorporated into products very quickly. Regulatory agencies are concerned they cannot assess the potential hazards of these materials adequately, as data on the biological properties of nanomaterials are still relatively limited and expensive to acquire. Computational modelling methods have much to offer in helping understand the mechanisms by which toxicity may occur, and in predicting the likelihood of adverse biological impacts of materials not yet tested experimentally. This paper reviews the progress these methods, particularly those QSAR-based, have made in understanding and predicting potentially adverse biological effects of nanomaterials, and also the limitations and pitfalls of these methods. - Highlights: • Nanomaterials regulators need good information to make good decisions. • Nanomaterials and their interactions with biology are very complex. • Computational methods use existing data to predict properties of new nanomaterials. • Statistical, data driven modelling methods have been successfully applied to this task. • Much more must be learnt before robust toolkits will be widely usable by regulators.

  7. Independent Review of Simulation of Net Infiltration for Present-Day and Potential Future Climates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Review Panel: Soroosh Sorooshian, Ph.D., Panel Chairperson, University of California, Irvine; Jan M. H. Hendrickx, Ph.D., New Mexico Institute of Mining and Technology; Binayak P. Mohanty, Ph.D., Texas A&M University

    The DOE Office of Civilian Radioactive Waste Management (OCRWM) tasked Oak Ridge Institute for Science and Education (ORISE) with providing an independent expert review of the documented model and prediction results for net infiltration of water into the unsaturated zone at Yucca Mountain. The specific purpose of the model, as documented in the report MDL-NBS-HS-000023, Rev. 01, is “to provide a spatial representation, including epistemic and aleatory uncertainty, of the predicted mean annual net infiltration at the Yucca Mountain site ...” (p. 1-1) The expert review panel assembled by ORISE concluded that the model report does not provide a technically credible spatial representation of net infiltration at Yucca Mountain. Specifically, the ORISE Review Panel found that: • A critical lack of site-specific meteorological, surface, and subsurface information prevents verification of (i) the net infiltration estimates, (ii) the uncertainty estimates of parameters caused by their spatial variability, and (iii) the assumptions used by the modelers (ranges and distributions) for the characterization of parameters. The paucity of site-specific data used by the modeling team for model implementation and validation is a major deficiency in this effort. • The model does not incorporate at least one potentially important hydrologic process. Subsurface lateral flow is not accounted for by the model, and the assumption that the effect of subsurface lateral flow is negligible is not adequately justified. This issue is especially critical for the wetter climate periods. This omission may be one reason the model results appear to underestimate net infiltration beneath wash environments and therefore imprecisely represent the spatial variability of net infiltration. • While the model uses assumptions consistently, such as uniform soil depths and a constant vegetation rooting depth, such assumptions may not be appropriate for this net infiltration simulation because they oversimplify a complex landscape and associated hydrologic processes, especially since the model assumptions have not been adequately corroborated by field and laboratory observations at Yucca Mountain.

  8. The Modern Design of Experiments: A Technical and Marketing Framework

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    A new wind tunnel testing process under development at NASA Langley Research Center, called Modern Design of Experiments (MDOE), differs from conventional wind tunnel testing techniques on a number of levels. Chief among these is that MDOE focuses on the generation of adequate prediction models rather than high-volume data collection. Some cultural issues attached to this and other distinctions between MDOE and conventional wind tunnel testing are addressed in this paper.

  9. The Effects of Educational Technology Usage Profiles and Legally Protected Bio-Demographic Data on Behaviorally-Based Predictive Student Success Models in Learning Analytics: An Exploratory Study

    ERIC Educational Resources Information Center

    Arnold, Kimberly E.

    2017-01-01

    In the 21st century, attainment of a college degree is more important than ever to achieve economic self-sufficiency, employment, and an adequate standard of living. Projections suggest that by 2020, 65% of jobs available in the U.S. will require postsecondary education. This reality creates an unprecedented demand for higher education, and…

  10. Doppler lidar wind measurement on Eos

    NASA Technical Reports Server (NTRS)

    Fitzjarrald, D.; Bilbro, J.; Beranek, R.; Mabry, J.

    1985-01-01

    A polar-orbiting platform segment of the Earth Observing System (EOS) could carry a CO2-laser based Doppler lidar for recording global wind profiles. Development goals would include the manufacture of a 10 J laser with a 2 yr operational life, space-rating the optics and associated software, and the definition of models for global aerosol distributions. Techniques will be needed for optimal scanning and generating computer simulations which will provide adequately accurate weather predictions.

  11. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    NASA Astrophysics Data System (ADS)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv

    2007-04-01

    In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and applying different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict the cutting forces reasonably well, as well as the change in chip morphology from continuous to segmented chips as the material hardness changes.

  12. Prediction models for the risk of spontaneous preterm birth based on maternal characteristics: a systematic review and independent external validation.

    PubMed

    Meertens, Linda J E; van Montfort, Pim; Scheepers, Hubertina C J; van Kuijk, Sander M J; Aardenburg, Robert; Langenveld, Josje; van Dooren, Ivo M A; Zwaan, Iris M; Spaanderman, Marc E A; Smits, Luc J M

    2018-04-17

    Prediction models may contribute to personalized risk-based management of women at high risk of spontaneous preterm delivery. Although prediction models are published frequently, often with promising results, external validation generally is lacking. We performed a systematic review of prediction models for the risk of spontaneous preterm birth based on routine clinical parameters. Additionally, we externally validated and evaluated the clinical potential of the models. Prediction models based on routinely collected maternal parameters obtainable during first 16 weeks of gestation were eligible for selection. Risk of bias was assessed according to the CHARMS guidelines. We validated the selected models in a Dutch multicenter prospective cohort study comprising 2614 unselected pregnant women. Information on predictors was obtained by a web-based questionnaire. Predictive performance of the models was quantified by the area under the receiver operating characteristic curve (AUC) and calibration plots for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation. Clinical value was evaluated by means of decision curve analysis and calculating classification accuracy for different risk thresholds. Four studies describing five prediction models fulfilled the eligibility criteria. Risk of bias assessment revealed a moderate to high risk of bias in three studies. The AUC of the models ranged from 0.54 to 0.67 and from 0.56 to 0.70 for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation, respectively. A subanalysis showed that the models discriminated poorly (AUC 0.51-0.56) for nulliparous women. Although we recalibrated the models, two models retained evidence of overfitting. The decision curve analysis showed low clinical benefit for the best performing models. This review revealed several reporting and methodological shortcomings of published prediction models for spontaneous preterm birth. Our external validation study indicated that none of the models had the ability to predict spontaneous preterm birth adequately in our population. Further improvement of prediction models, using recent knowledge about both model development and potential risk factors, is necessary to provide an added value in personalized risk assessment of spontaneous preterm birth. © 2018 The Authors Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
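
    As an illustration of the validation metrics used above (discrimination via the AUC and clinical value via decision-curve net benefit), the sketch below evaluates a set of predicted risks against binary outcomes. The outcome prevalence and risk scores are synthetic stand-ins, not the cohort data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 2000
y = rng.binomial(1, 0.05, size=n)  # synthetic outcome: spontaneous preterm birth
# Synthetic predicted risks, weakly informative by construction.
risk = np.clip(0.05 + 0.04 * y + rng.normal(0, 0.03, n), 0.001, 0.999)

print("AUC:", round(roc_auc_score(y, risk), 3))

def net_benefit(y, risk, threshold):
    """Net benefit at a given risk threshold (decision curve analysis)."""
    treat = risk >= threshold
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    n_total = len(y)
    return tp / n_total - fp / n_total * threshold / (1 - threshold)

for t in (0.05, 0.10, 0.20):
    print(f"net benefit at threshold {t:.2f}: {net_benefit(y, risk, t):.4f}")
```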

  13. Predicting Patients with Inadequate 24- or 48-Hour Urine Collections at Time of Metabolic Stone Evaluation.

    PubMed

    McGuire, Barry B; Bhanji, Yasin; Sharma, Vidit; Frainey, Brendan T; McClean, Megan; Dong, Caroline; Rimar, Kalen; Perry, Kent T; Nadler, Robert B

    2015-06-01

    We aimed to understand the characteristics of patients who are less likely to submit adequate urine collections at metabolic stone evaluation. Inadequate urine collection was defined using two definitions: (1) Reference ranges for 24-hour creatinine/kilogram (Cr/24) and (2) discrepancy in total 24-hour urine Cr between 24-hour urine collections. There were 1502 patients with ≥1 kidney stone between 1998 and 2014 who performed a 24- or 48-hour urine collection at Northwestern Memorial Hospital and who were identified retrospectively. Multivariate analysis was performed to analyze predictor variables for adequate urine collection. A total of 2852 urine collections were analyzed. Mean age for males was 54.4 years (range 17-86), and for females was 50.2 years (range 8-90). One patient in the study was younger than 17 years old. (1) Analysis based on the Cr 24/kg definition: There were 50.7% of patients who supplied an inadequate sample. Females were nearly 50% less likely to supply an adequate sample compared with men, P<0.001. Diabetes (odds ratio [OR] 1.42 [1.04-1.94], P=0.026) and vitamin D supplementation (OR 0.64 [0.43-0.95], P=0.028) predicted receiving an adequate/inadequate sample, respectively. (2) Analysis based on differences between total urinary Cr: The model was stratified based on percentage differences between samples up to 50%. At 10%, 20%, 30%, 40%, and 50% differences, inadequate collections were achieved in 82.8%, 66.9%, 51.7%, 38.5%, and 26.4% of patients, respectively. Statistical significance was observed based on differences of ≥40%, and this was defined as the threshold for an inadequate sample. Female sex (OR 0.73 [0.54-0.98], P=0.037) predicted supplying inadequate samples. Adequate collections were more likely to be received on a Sunday (OR 1.6 [1.03-2.58], P=0.038) and by sedentary workers (OR 2.3 [1.12-4.72], P=0.023). Urine collections from patients during metabolic evaluation for nephrolithiasis may be considered inadequate based on two commonly used clinical definitions. This may have therapeutic or economic ramifications and the propensity for females to supply inadequate samples should be investigated further.

  14. A physiologically based biodynamic (PBBD) model for estragole DNA binding in rat liver based on in vitro kinetic data and estragole DNA adduct formation in primary hepatocytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paini, Alicia, E-mail: alicia.paini@rdls.nestle.co; Nestle Research Center, PO Box 44, Lausanne; Punt, Ans

    2010-05-15

    Estragole has been shown to be hepatocarcinogenic in rodent species at high-dose levels. Translation of these results into the likelihood of formation of DNA adducts, mutation, and ultimately cancer upon more realistic low-dose exposures remains a challenge. Recently we have developed physiologically based biokinetic (PBBK) models for rat and human predicting bioactivation of estragole. These PBBK models, however, predict only kinetic characteristics. The present study describes the extension of the PBBK model to a so-called physiologically based biodynamic (PBBD) model predicting in vivo DNA adduct formation of estragole in rat liver. This PBBD model was developed using in vitro data on DNA adduct formation in rat primary hepatocytes exposed to 1'-hydroxyestragole. The model was extended by linking the area under the curve for 1'-hydroxyestragole formation predicted by the PBBK model to the area under the curve for 1'-hydroxyestragole in the in vitro experiments. The outcome of the PBBD model revealed a linear increase in DNA adduct formation with increasing estragole doses up to 100 mg/kg bw. Although DNA adduct formation of genotoxic carcinogens is generally seen as a biomarker of exposure rather than a biomarker of response, the PBBD model now developed is one step closer to the ultimate toxic effect of estragole than the PBBK model described previously. Comparison of the PBBD model outcome to available data showed that the model adequately predicts the dose-dependent level of DNA adduct formation. The PBBD model predicts DNA adduct formation at low levels of exposure up to a dose level shown to cause cancer in rodent bioassays, providing a proof of principle for modeling a toxicodynamic in vivo endpoint on the basis of solely in vitro experimental data.

  15. Predicting Bacillus coagulans spores inactivation in tomato pulp under nonisothermal heat treatments.

    PubMed

    Zimmermann, Morgana; Longhi, Daniel A; Schaffner, Donald W; Aragão, Gláucia M F

    2014-05-01

    The knowledge and understanding of Bacillus coagulans inactivation during a thermal treatment in tomato pulp, as well as the influence of temperature variation during thermal processes are essential for design, calculation, and optimization of the process. The aims of this work were to predict B. coagulans spores inactivation in tomato pulp under varying time-temperature profiles with a Gompertz-inspired inactivation model and to validate the model's predictions by comparing the predicted values with experimental data. B. coagulans spores in pH 4.3 tomato pulp at 4 °Brix were sealed in capillary glass tubes and heated in thermostatically controlled circulating oil baths. Seven different nonisothermal profiles in the range from 95 to 105 °C were studied. Predicted inactivation kinetics showed similar behavior to experimentally observed inactivation curves when the samples were exposed to temperatures in the upper range of this study (99 to 105 °C). Profiles that resulted in less accurate predictions were those where the range of temperatures analyzed was comparatively lower (inactivation profiles starting at 95 °C). The link between failed predictions and both the lower starting temperature and the magnitude of the temperature shift suggests some chemical or biological mechanism at work. Statistical analysis showed that overall model predictions were acceptable, with bias factors from 0.781 to 1.012 and accuracy factors from 1.049 to 1.351, confirming that the models used were adequate to estimate B. coagulans spores inactivation under fluctuating temperature conditions in the range from 95 to 105 °C. How can we estimate Bacillus coagulans inactivation during sudden temperature shifts in heat processing? This article provides a validated model that can be used to predict B. coagulans under changing temperature conditions. B. coagulans is a spore-forming bacillus that spoils acidified food products. The mathematical model developed here can be used to predict the spoilage risk following thermal process deviations for tomato products. © 2014 Institute of Food Technologists®
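
    As an illustration of the bias-factor and accuracy-factor comparison reported above, the sketch below computes both indices (in the usual log-ratio sense) from paired predicted and observed survivor counts; the counts are placeholders, not the study's data.

```python
import numpy as np

def bias_accuracy_factors(predicted, observed):
    """Bias factor Bf and accuracy factor Af for predicted vs observed counts.

    Bf = 10^mean(log10(pred/obs)) indicates systematic over/under-prediction;
    Af = 10^mean(|log10(pred/obs)|) indicates average spread around the 1:1 line.
    """
    ratio = np.log10(np.asarray(predicted) / np.asarray(observed))
    return 10 ** ratio.mean(), 10 ** np.abs(ratio).mean()

# Placeholder survivor counts (CFU/mL) at successive sampling times.
obs = np.array([1e6, 3e5, 8e4, 1.5e4, 2e3])
pred = np.array([9e5, 3.5e5, 6e4, 1.8e4, 2.5e3])
bf, af = bias_accuracy_factors(pred, obs)
print(f"bias factor = {bf:.3f}, accuracy factor = {af:.3f}")
```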

  16. Modelling malaria incidence with environmental dependency in a locality of Sudanese savannah area, Mali

    PubMed Central

    Gaudart, Jean; Touré, Ousmane; Dessay, Nadine; Dicko, Alassane; Ranque, Stéphane; Forest, Loic; Demongeot, Jacques; Doumbo, Ogobara K

    2009-01-01

    Background The risk of Plasmodium falciparum infection is variable over space and time and this variability is related to environmental variability. Environmental factors affect the biological cycle of both vector and parasite. Despite this strong relationship, environmental effects have rarely been included in malaria transmission models. Remote sensing data on environment were incorporated into a temporal model of the transmission, to forecast the evolution of malaria epidemiology, in a locality of Sudanese savannah area. Methods A dynamic cohort was constituted in June 1996 and followed up until June 2001 in the locality of Bancoumana, Mali. The 15-day composite vegetation index (NDVI), issued from satellite imagery series (NOAA) from July 1981 to December 2006, was used as remote sensing data. The statistical relationship between NDVI and incidence of P. falciparum infection was assessed by ARIMA analysis. ROC analysis provided an NDVI value for the prediction of an increase in incidence of parasitaemia. Malaria transmission was modelled using an SIRS-type model, adapted to Bancoumana's data. Environmental factors influenced vector mortality and aggressiveness, as well as length of the gonotrophic cycle. NDVI observations from 1981 to 2001 were used for the simulation of the extrinsic variable of a hidden Markov chain model. Observations from 2002 to 2006 served as external validation. Results The seasonal pattern of P. falciparum incidence was significantly explained by NDVI, with a delay of 15 days (p = 0.001). An NDVI threshold of 0.361 (p = 0.007) provided a Diagnostic Odd Ratio (DOR) of 2.64 (CI95% [1.26;5.52]). The deterministic transmission model, with stochastic environmental factor, predicted an endemo-epidemic pattern of malaria infection. The incidences of parasitaemia were adequately modelled, using the observed NDVI as well as the NDVI simulations. Transmission pattern have been modelled and observed values were adequately predicted. The error parameters have shown the smallest values for a monthly model of environmental changes. Conclusion Remote-sensed data were coupled with field study data in order to drive a malaria transmission model. Several studies have shown that the NDVI presents significant correlations with climate variables, such as precipitations particularly in Sudanese savannah environments. Non-linear model combining environmental variables, predisposition factors and transmission pattern can be used for community level risk evaluation. PMID:19361335

  17. Modelling malaria incidence with environmental dependency in a locality of Sudanese savannah area, Mali.

    PubMed

    Gaudart, Jean; Touré, Ousmane; Dessay, Nadine; Dicko, Alassane; Ranque, Stéphane; Forest, Loic; Demongeot, Jacques; Doumbo, Ogobara K

    2009-04-10

    The risk of Plasmodium falciparum infection is variable over space and time and this variability is related to environmental variability. Environmental factors affect the biological cycle of both vector and parasite. Despite this strong relationship, environmental effects have rarely been included in malaria transmission models.Remote sensing data on environment were incorporated into a temporal model of the transmission, to forecast the evolution of malaria epidemiology, in a locality of Sudanese savannah area. A dynamic cohort was constituted in June 1996 and followed up until June 2001 in the locality of Bancoumana, Mali. The 15-day composite vegetation index (NDVI), issued from satellite imagery series (NOAA) from July 1981 to December 2006, was used as remote sensing data.The statistical relationship between NDVI and incidence of P. falciparum infection was assessed by ARIMA analysis. ROC analysis provided an NDVI value for the prediction of an increase in incidence of parasitaemia.Malaria transmission was modelled using an SIRS-type model, adapted to Bancoumana's data. Environmental factors influenced vector mortality and aggressiveness, as well as length of the gonotrophic cycle. NDVI observations from 1981 to 2001 were used for the simulation of the extrinsic variable of a hidden Markov chain model. Observations from 2002 to 2006 served as external validation. The seasonal pattern of P. falciparum incidence was significantly explained by NDVI, with a delay of 15 days (p = 0.001). An NDVI threshold of 0.361 (p = 0.007) provided a Diagnostic Odd Ratio (DOR) of 2.64 (CI95% [1.26;5.52]).The deterministic transmission model, with stochastic environmental factor, predicted an endemo-epidemic pattern of malaria infection. The incidences of parasitaemia were adequately modelled, using the observed NDVI as well as the NDVI simulations. Transmission pattern have been modelled and observed values were adequately predicted. The error parameters have shown the smallest values for a monthly model of environmental changes. Remote-sensed data were coupled with field study data in order to drive a malaria transmission model. Several studies have shown that the NDVI presents significant correlations with climate variables, such as precipitations particularly in Sudanese savannah environments. Non-linear model combining environmental variables, predisposition factors and transmission pattern can be used for community level risk evaluation.
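
    As a small illustration of the threshold-based evaluation described above, the sketch below forms a 2x2 table from an NDVI cut-off versus observed increases in parasitaemia incidence and computes the diagnostic odds ratio with an approximate 95% confidence interval; the counts are invented, not the Bancoumana data.

```python
import numpy as np

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP*TN)/(FP*FN) with a log-scale Wald 95% confidence interval."""
    dor = (tp * tn) / (fp * fn)
    se_log = np.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    z = 1.96  # approximate 95% normal quantile
    lo, hi = np.exp(np.log(dor) + np.array([-z, z]) * se_log)
    return dor, (lo, hi)

# Invented counts: fortnights flagged by NDVI >= threshold (test positive)
# versus whether parasitaemia incidence actually increased (reference).
print(diagnostic_odds_ratio(tp=30, fp=20, fn=15, tn=35))
```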

  18. Effect of abdominopelvic abscess drain size on drainage time and probability of occlusion

    PubMed Central

    Rotman, Jessica A.; Getrajdman, George I.; Maybody, Majid; Erinjeri, Joseph P.; Yarmohammadi, Hooman; Sofocleous, Constantinos T.; Solomon, Stephen B.; Boas, F. Edward

    2016-01-01

    Background: The purpose of this study is to determine whether larger abdominopelvic abscess drains reduce the time required for abscess resolution, or the probability of tube occlusion. Methods: 144 consecutive patients who underwent abscess drainage at a single institution were reviewed retrospectively. Results: Larger initial drain size did not reduce drainage time, drain occlusion, or drain exchanges (p>0.05). Subgroup analysis did not find any type of collection that benefitted from larger drains. A multivariate model predicting drainage time showed that large collections (>200 ml) required 16 days longer drainage time than small collections (<50 ml). Collections with a fistula to bowel required 17 days longer drainage time than collections without a fistula. Initial drain size and the viscosity of the fluid in the collection had no significant effect on drainage time in the multivariate model. Conclusions: 8 F drains are adequate for initial drainage of most serous and serosanguineous collections. 10 F drains are adequate for initial drainage of most purulent or bloody collections. PMID:27634422

  19. Statistical optimization of process parameters for lipase-catalyzed synthesis of triethanolamine-based esterquats using response surface methodology in 2-liter bioreactor.

    PubMed

    Masoumi, Hamid Reza Fard; Basri, Mahiran; Kassim, Anuar; Abdullah, Dzulkefly Kuang; Abdollahi, Yadollah; Abd Gani, Siti Salwa; Rezaee, Malahat

    2013-01-01

    Lipase-catalyzed production of triethanolamine-based esterquat by esterification of oleic acid (OA) with triethanolamine (TEA) in n-hexane was performed in a 2 L stirred-tank reactor. A set of experiments was designed by central composite design for process modeling and statistical evaluation of the findings. Five independent process variables, including enzyme amount, reaction time, reaction temperature, substrate molar ratio of OA to TEA, and agitation speed, were studied under the given conditions designed by Design Expert software. Experimental data were examined for normality before the data processing stage, and skewness and kurtosis indices were determined. The mathematical model developed was found to be adequate and statistically accurate to predict the optimum conversion of product. Response surface methodology with central composite design gave the best performance in this study, and the methodology as a whole has been proven to be adequate for the design and optimization of the enzymatic process.
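
    As a sketch of the response-surface step in this kind of workflow, the snippet below fits a full quadratic model to central-composite-style design points and scans for the predicted optimum. The two synthetic coded factors and responses are stand-ins for the study's five-variable design, not its data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
# Synthetic two-factor stand-in for coded CCD settings (e.g. enzyme amount
# and temperature): factorial points, axial points, and center replicates.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
              [0, 0], [0, 0], [0, 0]])
true = 80 + 5 * X[:, 0] + 3 * X[:, 1] - 4 * X[:, 0] ** 2 - 2 * X[:, 1] ** 2
y = true + rng.normal(scale=1.0, size=len(X))   # "conversion" response (%)

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Scan a grid of coded settings for the predicted maximum conversion.
grid = np.array([[a, b] for a in np.linspace(-1.5, 1.5, 61)
                        for b in np.linspace(-1.5, 1.5, 61)])
pred = model.predict(poly.transform(grid))
print("predicted optimum (coded units):", grid[pred.argmax()], round(pred.max(), 1))
```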

  20. Martian tension fractures and the formation of grabens and collapse features at Valles Marineris

    NASA Technical Reports Server (NTRS)

    Tanaka, K. L.; Golombek, M. P.

    1989-01-01

    Simple models of the Martian crust are summarized that predict extensional deformation style on the basis of depth, material friction and strength, and hydraulic conditions appropriate to the planet. These models indicate that tension fractures may be common features on Mars, given adequate differential stress conditions. Examples of tension fractures on Mars inferred from morphological criteria are examined based on the probable geologic conditions in which they formed and on model constraints. It is proposed that the grabens and collapse features of Valles Marineris are controlled by tension fractures in intact basement rocks that lie below impact ejecta.

  1. Momentum loss in proton-nucleus and nucleus-nucleus collisions

    NASA Technical Reports Server (NTRS)

    Khan, Ferdous; Townsend, Lawrence W.

    1993-01-01

    An optical model description, based on multiple scattering theory, of longitudinal momentum loss in proton-nucleus and nucleus-nucleus collisions is presented. The crucial role of the imaginary component of the nucleon-nucleon transition matrix in accounting for longitudinal momentum transfer is demonstrated. Results obtained with this model are compared with Intranuclear Cascade (INC) calculations, as well as with predictions from Vlasov-Uehling-Uhlenbeck (VUU) and quantum molecular dynamics (QMD) simulations. Comparisons are also made with experimental data where available. These indicate that the present model is adequate to account for longitudinal momentum transfer in both proton-nucleus and nucleus-nucleus collisions over a wide range of energies.

  2. Modelling the growth of Listeria monocytogenes in Mediterranean fish species from aquaculture production.

    PubMed

    Bolívar, Araceli; Costa, Jean Carlos Correia Peres; Posada-Izquierdo, Guiomar D; Valero, Antonio; Zurera, Gonzalo; Pérez-Rodríguez, Fernando

    2018-04-02

    Over the last couple of decades, several studies have evaluated growth dynamics of L. monocytogenes in lightly processed and ready-to-eat (RTE) fishery products mostly consumed in Nordic European countries. Other aquaculture fish species, such as sea bream and sea bass, are of special interest because of their relevant consumption patterns and added value in Mediterranean countries. In the present study, the growth of L. monocytogenes was evaluated in fish-based juice (FBJ) by means of optical density (OD) measurements over the temperature range 2-20 °C under different atmosphere conditions (i.e. reduced oxygen and aerobic). The Baranyi and Roberts model was used to estimate the maximum growth rate (μmax) from the observed growth curves. The effect of storage temperature on μmax was modelled using the Ratkowsky square root model. The developed models were validated using experimental growth data for L. monocytogenes in sea bream and sea bass fillets stored under static and dynamic temperature conditions. Overall, models developed in FBJ provided fail-safe predictions for L. monocytogenes growth. For the model generated under reduced oxygen conditions, bias and accuracy factors for growth rate predictions were 1.15 and 1.25, respectively, showing good performance in adequately predicting L. monocytogenes growth in Mediterranean fish products. The present study provides validated predictive models for L. monocytogenes growth in Mediterranean fish species to be used in microbial risk assessment and shelf-life studies. Copyright © 2018 Elsevier B.V. All rights reserved.
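
    As an illustration of the secondary-model step named above, the sketch below fits the Ratkowsky square-root model, sqrt(μmax) = b(T - Tmin), to growth rates estimated at each storage temperature and uses it to predict μmax at a new temperature; the data points are placeholders, not the study's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder maximum specific growth rates (1/h) at storage temperatures (degC).
T = np.array([2.0, 5.0, 8.0, 12.0, 16.0, 20.0])
mu_max = np.array([0.005, 0.012, 0.022, 0.045, 0.075, 0.115])

def ratkowsky(T, b, T_min):
    """Square-root secondary model: sqrt(mu_max) = b * (T - T_min)."""
    return b * (T - T_min)

(b, T_min), _ = curve_fit(ratkowsky, T, np.sqrt(mu_max), p0=[0.02, -2.0])
print(f"b = {b:.4f} sqrt(1/h)/degC, T_min = {T_min:.1f} degC")

# Predicted mu_max at a new storage temperature, e.g. 10 degC:
print("mu_max(10 degC) ~", round(ratkowsky(10.0, b, T_min) ** 2, 4), "1/h")
```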

  3. A method for reducing the order of nonlinear dynamic systems

    NASA Astrophysics Data System (ADS)

    Masri, S. F.; Miller, R. K.; Sassi, H.; Caughey, T. K.

    1984-06-01

    An approximate method that uses conventional condensation techniques for linear systems together with the nonparametric identification of the reduced-order model generalized nonlinear restoring forces is presented for reducing the order of discrete multidegree-of-freedom dynamic systems that possess arbitrary nonlinear characteristics. The utility of the proposed method is demonstrated by considering a redundant three-dimensional finite-element model half of whose elements incorporate hysteretic properties. A nonlinear reduced-order model, of one-third the order of the original model, is developed on the basis of wideband stationary random excitation and the validity of the reduced-order model is subsequently demonstrated by its ability to predict with adequate accuracy the transient response of the original nonlinear model under a different nonstationary random excitation.
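
    As a pointer to the linear condensation step this method builds on, the sketch below performs static (Guyan) condensation of a stiffness matrix onto a set of master degrees of freedom. The 4-DOF chain is a placeholder, and the nonparametric identification of the reduced-order nonlinear restoring forces is not shown.

```python
import numpy as np

def guyan_condense(K, master):
    """Statically condense stiffness matrix K onto the master DOFs.

    Partitioning K into master (kept) and slave (eliminated) DOFs gives
        K_red = K_mm - K_ms K_ss^{-1} K_sm
    """
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    Kmm = K[np.ix_(master, master)]
    Kms = K[np.ix_(master, slave)]
    Ksm = K[np.ix_(slave, master)]
    Kss = K[np.ix_(slave, slave)]
    return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

# Small placeholder 4-DOF spring chain; keep DOFs 0 and 3 as masters.
k = 1000.0
K = k * np.array([[ 2, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  2]])
print(guyan_condense(K, master=[0, 3]))
```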

  4. Application of Probability Methods to Assess Crash Modeling Uncertainty

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2003-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.

  5. Application of Probability Methods to Assess Crash Modeling Uncertainty

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2007-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.

  6. Evaluation of CFD Turbulent Heating Prediction Techniques and Comparison With Hypersonic Experimental Data

    NASA Technical Reports Server (NTRS)

    Dilley, Arthur D.; McClinton, Charles R. (Technical Monitor)

    2001-01-01

    Results from a study to assess the accuracy of turbulent heating and skin friction prediction techniques for hypersonic applications are presented. The study uses the original and a modified Baldwin-Lomax turbulence model with a space marching code. Grid converged turbulent predictions using the wall damping formulation (original model) and local damping formulation (modified model) are compared with experimental data for several flat plates. The wall damping and local damping results are similar for hot wall conditions, but differ significantly for cold walls, i.e., T_w/T_t < 0.3, with the wall damping heating and skin friction 10-30% above the local damping results. Furthermore, the local damping predictions have reasonable or good agreement with the experimental heating data for all cases. The impact of the two formulations on the van Driest damping function and the turbulent eddy viscosity distribution for a cold wall case indicates the importance of including temperature gradient effects. Grid requirements for accurate turbulent heating predictions are also studied. These results indicate that a cell Reynolds number of 1 is required for grid converged heating predictions, but coarser grids with a y+ less than 2 are adequate for design of hypersonic vehicles. Based on the results of this study, it is recommended that the local damping formulation be used with the Baldwin-Lomax and Cebeci-Smith turbulence models in design and analysis of Hyper-X and future hypersonic vehicles.

  7. Evaluation of Interindividual Human Variation in Bioactivation and DNA Adduct Formation of Estragole in Liver Predicted by Physiologically Based Kinetic/Dynamic and Monte Carlo Modeling.

    PubMed

    Punt, Ans; Paini, Alicia; Spenkelink, Albertus; Scholz, Gabriele; Schilter, Benoit; van Bladeren, Peter J; Rietjens, Ivonne M C M

    2016-04-18

    Estragole is a known hepatocarcinogen in rodents at high doses following metabolic conversion to the DNA-reactive metabolite 1'-sulfooxyestragole. The aim of the present study was to model possible levels of DNA adduct formation in (individual) humans upon exposure to estragole. This was done by extending a previously defined PBK model for estragole in humans to include (i) new data on interindividual variation in the kinetics for the major PBK model parameters influencing the formation of 1'-sulfooxyestragole, (ii) an equation describing the relationship between 1'-sulfooxyestragole and DNA adduct formation, (iii) Monte Carlo modeling to simulate interindividual human variation in DNA adduct formation in the population, and (iv) a comparison of the predictions made to human data on DNA adduct formation for the related alkenylbenzene methyleugenol. Adequate model predictions could be made, with the predicted DNA adduct levels at the estimated daily intake of estragole of 0.01 mg/kg bw ranging between 1.6 and 8.8 adducts in 10^8 nucleotides (nts) (50th and 99th percentiles, respectively). This is somewhat lower than values reported in the literature for the related alkenylbenzene methyleugenol in surgical human liver samples. The predicted levels seem to be below DNA adduct levels that are linked with tumor formation by alkenylbenzenes in rodents, which were estimated to amount to 188-500 adducts per 10^8 nts at the BMD10 values of estragole and methyleugenol. Although this does not seem to point to a significant health concern for human dietary exposure, drawing firm conclusions may have to await further validation of the model's predictions.
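
    A minimal sketch of the Monte Carlo step described above, assuming a lognormal distribution for interindividual variation in bioactivation and a simple linear relation between the bioactivated dose and adduct formation; the distribution, its parameters and the slope are illustrative placeholders rather than the study's PBK/PD equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_individuals = 100_000

    # Illustrative lognormal variability in the fraction of an estragole dose
    # converted to 1'-sulfooxyestragole (the kinetic step driving adduct formation).
    frac_bioactivated = rng.lognormal(mean=np.log(0.002), sigma=0.6, size=n_individuals)

    # Hypothetical linear relation between bioactivated dose and DNA adducts per 10^8 nts.
    dose_mg_per_kg = 0.01                                              # estimated daily intake of estragole
    adducts_per_1e8_nts = frac_bioactivated * dose_mg_per_kg * 2.5e5   # slope is a placeholder

    # Population percentiles, analogous to the 50th and 99th percentiles reported above.
    print(np.percentile(adducts_per_1e8_nts, [50, 99]))
    ```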

  8. Biogeochemical modeling of CO2 and CH4 production in anoxic Arctic soil microcosms

    DOE PAGES

    Tang, Guoping; Zheng, Jianqiu; Xu, Xiaofeng; ...

    2016-09-12

    Soil organic carbon turnover to CO2 and CH4 is sensitive to soil redox potential and pH conditions. However, land surface models do not consider redox and pH in the aqueous phase explicitly, thereby limiting their use for making predictions in anoxic environments. Using recent data from incubations of Arctic soils, we extend the Community Land Model with coupled carbon and nitrogen (CLM-CN) decomposition cascade to include simple organic substrate turnover, fermentation, Fe(III) reduction, and methanogenesis reactions, and assess the efficacy of various temperature and pH response functions. Incorporating the Windermere Humic Aqueous Model (WHAM) enables us to approximately describe the observed pH evolution without additional parameterization. Though Fe(III) reduction is normally assumed to compete with methanogenesis, the model predicts that Fe(III) reduction raises the pH from acidic to neutral, thereby reducing environmental stress to methanogens and accelerating methane production when substrates are not limiting. Furthermore, the equilibrium speciation predicts a substantial increase in CO2 solubility as pH increases, and taking into account CO2 adsorption to surface sites of metal oxides further decreases the predicted headspace gas-phase fraction at low pH. Without adequate representation of these speciation reactions, as well as the impacts of pH, temperature, and pressure, the CO2 production from closed microcosms can be substantially underestimated based on headspace CO2 measurements only. Our results demonstrate the efficacy of geochemical models for simulating soil biogeochemistry and provide predictive understanding and mechanistic representations that can be incorporated into land surface models to improve climate predictions.

  9. Getting the tail to wag the dog: Incorporating groundwater transport into catchment solute transport models using rank StorAge Selection (rSAS) functions

    NASA Astrophysics Data System (ADS)

    Harman, C. J.

    2015-12-01

    Surface water hydrologic models are increasingly used to analyze the transport of solutes through the landscape, such as nitrate. However, many of these models cannot adequately capture the effect of groundwater flow paths, which can have long travel times and accumulate legacy contaminants, releasing them to streams over decades. If these long lag times are not accounted for, the short-term efficacy of management activities to reduce nitrogen loads may be overestimated. Models that adopt a simple 'well-mixed' assumption, leading to an exponential transit time distribution at steady state, cannot adequately capture the broadly skewed nature of groundwater transit times in typical watersheds. Here I will demonstrate how StorAge Selection functions can be used to capture the long lag times of groundwater in a typical subwatershed-based hydrologic model framework typical of models like SWAT, HSPF, HBV, PRMS and others. These functions can be selected and calibrated to reproduce historical data where available, but can also be fitted to the results of a steady-state groundwater transport model like MODFLOW/MODPATH, allowing those results to directly inform the parameterization of an unsteady surface water model. The long tails of the transit time distribution predicted by the groundwater model can then be completely captured by the surface water model. Examples of this application in the Chesapeake Bay watersheds and elsewhere will be given.

  10. Thick Galactic Cosmic Radiation Shielding Using Atmospheric Data

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Nurge, Mark A.; Starr, Stanley O.; Koontz, Steven L.

    2013-01-01

    NASA is concerned with protecting astronauts from the effects of galactic cosmic radiation and has expended substantial effort in the development of computer models to predict the shielding obtained from various materials. However, these models were only developed for shields up to about 120 g/cm2 in thickness and have predicted that shields of this thickness are insufficient to provide adequate protection for extended deep space flights. Consequently, effort is underway to extend the range of these models to thicker shields and experimental data is required to help confirm the resulting code. In this paper empirically obtained effective dose measurements from aircraft flights in the atmosphere are used to obtain the radiation shielding function of the earth's atmosphere, a very thick shield. Obtaining this result required solving an inverse problem and the method for solving it is presented. The results are shown to be in agreement with current code in the ranges where they overlap. These results are then checked and used to predict the radiation dosage under thick shields such as planetary regolith and the atmosphere of Venus.

  11. Evaluation of blocking performance in ensemble seasonal integrations

    NASA Astrophysics Data System (ADS)

    Casado, M. J.; Doblas-Reyes, F. J.; Pastor, M. A.

    2003-04-01

    Climate models have shown a robust inability to reliably predict blocking onset and frequency. This systematic error has been evaluated using multi-model ensemble seasonal integrations carried out in the framework of the Prediction Of climate Variations On Seasonal and interannual Timescales (PROVOST) project and compared against a blocking assessment of the NCEP re-analyses. The PROVOST GCMs adequately reproduce the spatial NCEP teleconnection patterns over the Northern Hemisphere, with notably high spatial correlations with the corresponding NCEP patterns. In spite of that, the different models show a consistent underestimation of blocking frequency, which may affect their ability to predict the seasonal amplitude of the leading modes of variability over the Northern Hemisphere.

  12. Can biomechanical variables predict improvement in crouch gait?

    PubMed Central

    Hicks, Jennifer L.; Delp, Scott L.; Schwartz, Michael H.

    2011-01-01

    Many patients respond positively to treatments for crouch gait, yet surgical outcomes are inconsistent and unpredictable. In this study, we developed a multivariable regression model to determine if biomechanical variables and other subject characteristics measured during a physical exam and gait analysis can predict which subjects with crouch gait will demonstrate improved knee kinematics on a follow-up gait analysis. We formulated the model and tested its performance by retrospectively analyzing 353 limbs of subjects who walked with crouch gait. The regression model was able to predict which subjects would demonstrate ‘improved’ and ‘unimproved’ knee kinematics with over 70% accuracy, and was able to explain approximately 49% of the variance in subjects’ change in knee flexion between gait analyses. We found that improvement in stance phase knee flexion was positively associated with three variables that were drawn from knowledge about the biomechanical contributors to crouch gait: i) adequate hamstrings lengths and velocities, possibly achieved via hamstrings lengthening surgery, ii) normal tibial torsion, possibly achieved via tibial derotation osteotomy, and iii) sufficient muscle strength. PMID:21616666

  13. Testing Small Variance Priors Using Prior-Posterior Predictive p Values.

    PubMed

    Hoijtink, Herbert; van de Schoot, Rens

    2017-04-03

    Muthén and Asparouhov (2012) propose to evaluate model fit in structural equation models based on approximate (using small variance priors) instead of exact equality of (combinations of) parameters to zero. This is an important development that adequately addresses Cohen's (1994) The Earth is Round (p < .05), which stresses that point null-hypotheses are so precise that small and irrelevant differences from the null-hypothesis may lead to their rejection. It is tempting to evaluate small variance priors using readily available approaches like the posterior predictive p value and the DIC. However, as will be shown, neither is suited for the evaluation of models based on small variance priors. In this article, a well-behaved alternative, the prior-posterior predictive p value, will be introduced. It will be shown to be consistent, the distributions under the null and alternative hypotheses will be elaborated, and it will be applied to testing whether the difference between 2 means and the size of a correlation are relevantly different from zero. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. [Methodological approach to the use of artificial neural networks for predicting results in medicine].

    PubMed

    Trujillano, Javier; March, Jaume; Sorribas, Albert

    2004-01-01

    In clinical practice, there is increasing interest in obtaining adequate prediction models. Among the available alternatives, artificial neural networks (ANN) are used more and more frequently. In this review we first introduce ANN methodology, describing the most common type of ANN, the multilayer perceptron trained with the backpropagation algorithm (MLP). We then compare the MLP with logistic regression (LR). Finally, we show a practical scheme for building an ANN-based application by means of an example with real data. The main advantage of ANN is their capacity to incorporate nonlinear effects and interactions between the variables of the model without the need to specify them a priori. Their main disadvantages are the difficult interpretation of their parameters and the considerable empiricism involved in their construction and training. ANN are useful for computing the probability of a given outcome based on a set of predictor variables, and in some cases they obtain better results than LR. Both methodologies, ANN and LR, are complementary and help us to obtain more valid models.
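
    The comparison outlined above can be sketched with scikit-learn, training a backpropagation MLP and a logistic regression on the same data and comparing discrimination; the synthetic data and model settings below are illustrative assumptions, not taken from the review.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Synthetic clinical-style data: 8 predictor variables, binary outcome.
    X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

    # The MLP can capture nonlinearities and interactions without specifying them
    # a priori, at the cost of less interpretable parameters.
    print("LR  AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
    print("MLP AUC:", roc_auc_score(y_te, mlp.predict_proba(X_te)[:, 1]))
    ```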

  15. Aerothermal Testing for Project Orion Crew Exploration Vehicle

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Horvath, Thomas J.; Lillard, Randolph P.; Kirk, Benjamin S.; Fischer-Cassady, Amy

    2009-01-01

    The Project Orion Crew Exploration Vehicle aerothermodynamic experimentation strategy, as it relates to flight database development, is reviewed. Experimental data has been obtained to both validate the computational predictions utilized as part of the database and support the development of engineering models for issues not adequately addressed with computations. An outline is provided of the working groups formed to address the key deficiencies in data and knowledge for blunt reentry vehicles. The facilities utilized to address these deficiencies are reviewed, along with some of the important results obtained thus far. For smooth wall comparisons of computational convective heating predictions against experimental data from several facilities, confidence was gained with the use of algebraic turbulence model solutions as part of the database. For cavities and protuberances, experimental data is being used for screening various designs, plus providing support to the development of engineering models. With the reaction-control system testing, experimental data were acquired on the surface in combination with off-body flow visualization of the jet plumes and interactions. These results are being compared against predictions for improved understanding of aftbody thermal environments and uncertainties.

  16. Characterization of Neutropenia in Advanced Cancer Patients Following Palbociclib Treatment Using a Population Pharmacokinetic-Pharmacodynamic Modeling and Simulation Approach.

    PubMed

    Sun, Wan; O'Dwyer, Peter J; Finn, Richard S; Ruiz-Garcia, Ana; Shapiro, Geoffrey I; Schwartz, Gary K; DeMichele, Angela; Wang, Diane

    2017-09-01

    Neutropenia is the most commonly reported hematologic toxicity following treatment with palbociclib, a cyclin-dependent kinase 4/6 inhibitor approved for metastatic breast cancer. Using data from 185 advanced cancer patients receiving palbociclib in 3 clinical trials, a pharmacokinetic-pharmacodynamic model was developed to describe the time course of absolute neutrophil count (ANC) and quantify the exposure-response relationship for neutropenia. These analyses help in understanding palbociclib-associated neutropenia and how it compares with chemotherapy-induced neutropenia. In the model, palbociclib plasma concentration was related to its antiproliferative effect on precursor cells through drug-related parameters (ie, maximum estimated drug effect and concentration corresponding to 50% of the maximum effect), and neutrophil physiology was mimicked through system-related parameters (ie, mean transit time, baseline ANC, and feedback parameter). Sex and baseline albumin level were significant covariates for baseline ANC. Different model evaluation approaches (eg, prediction-corrected visual predictive check and standardized visual predictive check) demonstrated that the final model adequately described longitudinal ANC with good predictive capability. The established model suggested that higher palbociclib exposure was associated with lower longitudinal neutrophil counts. The ANC nadir was reached approximately 21 days after palbociclib treatment initiation. Consistent with their mechanisms of action, neutropenia associated with palbociclib (cytostatic) was rapidly reversible and noncumulative, with a notably weaker antiproliferative effect on precursor cells relative to chemotherapies (cytotoxic). This pharmacokinetic-pharmacodynamic model aids in predicting neutropenia and optimizing dosing for future palbociclib trials with different dosing regimen combinations. © 2017, The American College of Clinical Pharmacology.
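
    The structure described above (drug inhibition of precursor proliferation, transit compartments with a mean transit time, and feedback on the baseline ANC) corresponds to a Friberg-type semi-mechanistic myelosuppression model; the minimal sketch below uses illustrative parameter values and a crude constant-concentration, 3-weeks-on/1-week-off input, not the published palbociclib estimates.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Friberg-type model: proliferating precursors, three transit compartments,
    # circulating neutrophils, and feedback (ANC0 / ANC)^gamma on proliferation.
    ANC0, MTT, gamma = 4.0, 120.0, 0.2   # baseline ANC (10^9/L), mean transit time (h), feedback
    Emax, EC50 = 1.0, 60.0               # drug effect parameters (illustrative)
    ktr = 4.0 / MTT                      # transit rate constant

    def conc(t):
        # Crude 3-weeks-on / 1-week-off exposure (ng/mL); a placeholder, not a PK model.
        return 80.0 if t < 21 * 24 else 0.0

    def rhs(t, y):
        prol, t1, t2, t3, circ = y
        drug = Emax * conc(t) / (EC50 + conc(t))      # inhibition of proliferation
        feedback = (ANC0 / circ) ** gamma
        return [ktr * prol * (1 - drug) * feedback - ktr * prol,
                ktr * (prol - t1),
                ktr * (t1 - t2),
                ktr * (t2 - t3),
                ktr * (t3 - circ)]

    sol = solve_ivp(rhs, (0, 28 * 24), [ANC0] * 5, max_step=1.0)
    nadir_idx = sol.y[4].argmin()
    print(f"ANC nadir {sol.y[4][nadir_idx]:.2f} x10^9/L at day {sol.t[nadir_idx] / 24:.1f}")
    ```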

  17. A Lagrangian stochastic model for aerial spray transport above an oak forest

    USGS Publications Warehouse

    Wang, Yansen; Miller, David R.; Anderson, Dean E.; McManus, Michael L.

    1995-01-01

    An aerial spray droplet transport model has been developed by applying recent advances in Lagrangian stochastic simulation of heavy particles. A two-dimensional Lagrangian stochastic model was adopted to simulate spray droplet dispersion in atmospheric turbulence by adjusting the Lagrangian integral time scale along the drop trajectory. The other major physical processes affecting the transport of spray droplets above a forest canopy, the aircraft wingtip vortices and droplet evaporation, were also included in each time step of the droplets' transport. The model was evaluated using data from an aerial spray field experiment. In generally neutral stability conditions, the accuracy of the model predictions varied from run to run as expected. The average root-mean-square error was 24.61 IU cm-2, and the average relative error was 15%. The model prediction was adequate in two-dimensional steady wind conditions, but was less accurate in variable wind conditions. The results indicated that the model can successfully simulate the ensemble-average transport of aerial spray droplets under neutral, steady atmospheric wind conditions.

  18. Comparison of different classification methods for analyzing electronic nose data to characterize sesame oils and blends.

    PubMed

    Shao, Xiaolong; Li, Hui; Wang, Nan; Zhang, Qiang

    2015-10-21

    An electronic nose (e-nose) was used to characterize sesame oils processed by three different methods (hot-pressed, cold-pressed, and refined), as well as blends of the sesame oils and soybean oil. Seven classification and prediction methods, namely PCA, LDA, PLS, KNN, SVM, LASSO and RF, were used to analyze the e-nose data. Classification accuracy and MAUC were employed to evaluate the performance of these methods. The results indicated that sesame oils processed with different methods produced different sensor responses, with cold-pressed sesame oil producing the strongest sensor signals, followed by hot-pressed sesame oil. Blends of pressed sesame oils with refined sesame oil were more difficult to distinguish than blends of pressed sesame oils and refined soybean oil. LDA, KNN, and SVM outperformed the other classification methods in distinguishing sesame oil blends. KNN, LASSO, PLS, SVM (with linear kernel), and RF models could adequately predict the adulteration level (% of added soybean oil) in the sesame oil blends. Among the prediction models, KNN with k = 1 and 2 yielded the best prediction results.
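
    A minimal sketch of the classification-and-evaluation workflow described above, using scikit-learn classifiers on synthetic stand-in sensor data; the data, classifier settings and the one-vs-one averaged AUC used here as the MAUC are illustrative assumptions, not the study's e-nose data or exact MAUC implementation.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_predict
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score, roc_auc_score

    # Synthetic stand-in for e-nose data: 10 sensor features, 3 oil classes.
    X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                               n_classes=3, n_clusters_per_class=1, random_state=1)

    for name, clf in [("KNN (k=1)", KNeighborsClassifier(n_neighbors=1)),
                      ("SVM (linear)", SVC(kernel="linear", probability=True, random_state=1))]:
        proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
        pred = proba.argmax(axis=1)
        # Multi-class AUC via one-vs-one (Hand-and-Till style) averaging.
        print(name, "accuracy:", accuracy_score(y, pred),
              "MAUC:", round(roc_auc_score(y, proba, multi_class="ovo"), 3))
    ```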

  19. Modeling the compliance of polyurethane nanofiber tubes for artificial common bile duct

    NASA Astrophysics Data System (ADS)

    Moazeni, Najmeh; Vadood, Morteza; Semnani, Dariush; Hasani, Hossein

    2018-02-01

    The common bile duct is one of the body's most sensitive organs, and a polyurethane nanofiber tube can be used as a prosthesis for the common bile duct. Compliance is one of the most important properties of such a prosthesis, which should remain adequately compliant for as long as possible to preserve its behavioral integrity. In the present paper, the prosthesis compliance was measured and modeled using a regression method and an artificial neural network (ANN) based on electrospinning process parameters such as polymer concentration, voltage, tip-to-collector distance and flow rate. Because the ANN model contains several parameters that directly affect prediction accuracy, a genetic algorithm (GA) was used to optimize the ANN parameters. The ANN model optimized by the GA was found to predict the compliance with high accuracy (mean absolute percentage error = 8.57%). Moreover, the contribution of the variables to the compliance was investigated through relative importance analysis, and the optimum parameter values for ideal compliance were determined.

  20. Test of the technology acceptance model for a Web-based information system in a Hong Kong Chinese sample.

    PubMed

    Cheung, Emily Yee Man; Sachs, John

    2006-12-01

    The modified technology acceptance model was used to predict actual Blackboard usage (a web-based information system) in a sample of 57 Hong Kong student teachers whose mean age was 27.8 yr. (SD = 6.9). While the general form of the model was supported, Application-specific Self-efficacy was a more powerful predictor of system use than Behavioural Intention as predicted by the theory of reasoned action. Thus in this cultural and educational context, it has been shown that the model does not fully mediate the effect of Self-efficacy on System Use. Also, users' Enjoyment exerted considerable influence on the component variables of Usefulness and Ease of Use and on Application-specific Self-efficacy, thus indirectly influencing system usage. Consequently, efforts to gain students' acceptance and, therefore, use of information systems such as Blackboard must pay adequate attention to users' Self-efficacy and motivational variables such as Enjoyment.

  1. Mixing and unmixedness in plasma jets 1: Near-field analysis

    NASA Technical Reports Server (NTRS)

    Ilegbusi, Olusegun J.

    1993-01-01

    The flow characteristics in the near-field of a plasma jet are simulated with a two-fluid model. This model accounts for both gradient-diffusion mixing and the uni-directional sifting motion resulting from the pressure-gradient-body-force imbalance. This latter mechanism is believed to be responsible for the unmixedness observed in plasma jets, which is considered to be essentially a Rayleigh-Taylor-type instability. Transport equations are solved for the individual plasma and ambient gas velocities, temperatures and volume fractions. Empirical relations are employed for the interface transfers of mass, momentum and heat. The empirical coefficients are first established by comparing predictions with available experimental data for shear flows. The model is then applied to an argon plasma jet issuing into stagnant air. The predicted results show a significant build-up of unmixed air within the plasma gas, even relatively far downstream of the torch. By adjusting the inlet condition, the model adequately reproduces the experimental data.

  2. Link-prediction to tackle the boundary specification problem in social network surveys

    PubMed Central

    De Wilde, Philippe; Buarque de Lima-Neto, Fernando

    2017-01-01

    Diffusion processes in social networks often cause the emergence of global phenomena from individual behavior within a society. The study of those global phenomena and the simulation of those diffusion processes frequently require a good model of the global network. However, survey data and data from online sources are often restricted to single social groups or features, such as age groups, single schools, companies, or interest groups. Hence, a modeling approach is required that extrapolates the locally restricted data to a global network model. We tackle this Missing Data Problem using Link-Prediction techniques from social network research, network generation techniques from the area of Social Simulation, as well as a combination of both. We found that techniques employing less information may be more adequate to solve this problem, especially when data granularity is an issue. We validated the network models created with our techniques on a number of real-world networks, investigating degree distributions as well as the likelihood of links given the geographical distance between two nodes. PMID:28426826
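
    A minimal sketch of the link-prediction idea described above, scoring unobserved node pairs of a partially observed network with a standard index (Adamic-Adar) via networkx; the toy graph is an illustrative stand-in for locally restricted survey data.

    ```python
    import networkx as nx

    # Toy "observed" subnetwork standing in for locally restricted survey data.
    G = nx.karate_club_graph()

    # Score currently unconnected node pairs; highly scored pairs are candidate
    # edges when extrapolating the local data towards a global network model.
    scores = nx.adamic_adar_index(G)
    for u, v, score in sorted(scores, key=lambda t: t[2], reverse=True)[:5]:
        print(f"predicted link {u}-{v}: score {score:.2f}")
    ```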

  3. The development of a VBHOM-based outcome model for lower limb amputation performed for critical ischaemia.

    PubMed

    Tang, T Y; Prytherch, D R; Walsh, S R; Athanassoglou, V; Seppi, V; Sadat, U; Lees, T A; Varty, K; Boyle, J R

    2009-01-01

    VBHOM (Vascular Biochemistry and Haematology Outcome Models) adopts the approach of using a minimum data set to model outcome and has previously been shown to be feasible after index arterial operations. This study attempts to model mortality following lower limb amputation for critical limb ischaemia using the VBHOM concept. A binary logistic regression model of risk of mortality was built using National Vascular Database (NVD) items that contained the complete data required by the model from 269 admissions for lower limb amputation. The subset of NVD data items used were urea, creatinine, sodium, potassium, haemoglobin, white cell count, age on admission, and mode of admission. This model was applied prospectively to a test set of data (n=269) that was not part of the original training set used to develop the predictor equation. Outcome following lower limb amputation could be described accurately using the same model. The overall mean predicted risk of mortality was 32%, predicting 86 deaths; the actual number of deaths was 86 (chi-squared=8.05, 8 d.f., p=0.429; no evidence of lack of fit). The model demonstrated adequate discrimination (c-index=0.704). VBHOM provides a single unified model that allows good prediction of surgical mortality in this high-risk group of individuals. It uses a small, simple and objective clinical data set that may also simplify comparative audit within vascular surgery.
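
    A minimal sketch of the modelling-and-testing workflow described above (fit a binary logistic regression on a training set, then compare predicted and observed deaths and the c-index on a separate test set); the synthetic data and coefficients are illustrative stand-ins for the NVD minimum data set, not the published VBHOM model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 538  # training + test admissions (illustrative)

    # Synthetic stand-ins for the minimum data set: urea, creatinine, sodium, potassium,
    # haemoglobin, white cell count, age on admission, emergency admission.
    X = rng.normal(size=(n, 8))
    logit = -1.0 + 0.8 * X[:, 0] + 0.5 * X[:, 6] + 0.7 * X[:, 7]   # arbitrary "true" model
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    train, test = slice(0, 269), slice(269, None)
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])

    p = model.predict_proba(X[test])[:, 1]
    print("predicted deaths:", round(p.sum(), 1), "observed deaths:", int(y[test].sum()))
    print("c-index:", round(roc_auc_score(y[test], p), 3))   # discrimination
    ```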

  4. External validation of preexisting first trimester preeclampsia prediction models.

    PubMed

    Allen, Rebecca E; Zamora, Javier; Arroyo-Manzano, David; Velauthar, Luxmilar; Allotey, John; Thangaratinam, Shakila; Aquilina, Joseph

    2017-10-01

    To validate the increasing number of prognostic models being developed for preeclampsia using our own prospective study. A systematic review of the literature that assessed biomarkers, uterine artery Doppler and maternal characteristics in the first trimester for the prediction of preeclampsia was performed, and models were selected based on predefined criteria. Validation was performed by applying the regression coefficients published in the different derivation studies to our cohort. We assessed the models' discrimination ability and calibration. Twenty models were identified for validation. The discrimination ability observed in the derivation studies (area under the curve, AUC) ranged from 0.70 to 0.96; when these models were validated against the validation cohort, the AUCs varied considerably, ranging from 0.504 to 0.833. Comparing the AUCs obtained in the derivation studies to those in the validation cohort, we found statistically significant differences in several studies. There is currently no definitive prediction model with adequate discrimination for preeclampsia that performs as well when applied to a different population and that can differentiate well between the highest and lowest risk groups within the tested population. The large number of pre-existing models limits the value of further model development; future research should focus on further attempts to validate existing models and on assessing whether their implementation improves patient care. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
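
    External validation of a published model amounts to applying its fixed regression coefficients to a new cohort and re-assessing discrimination and calibration; the sketch below is a minimal illustration with placeholder coefficients, predictors and outcome prevalence, not any of the twenty models assessed above.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 1000

    # Hypothetical validation cohort: three first-trimester predictors (placeholders).
    X = rng.normal(size=(n, 3))
    y = rng.binomial(1, 0.05, size=n)        # ~5% outcome prevalence, illustrative

    # Coefficients "as published" in a hypothetical derivation study (intercept first).
    beta = np.array([-3.2, 0.9, 0.6, -0.4])
    risk = 1 / (1 + np.exp(-(beta[0] + X @ beta[1:])))

    # Discrimination in the validation cohort (here the predictors carry no real signal,
    # so the AUC drops towards 0.5, mimicking the loss of performance described above).
    print("validation AUC:", round(roc_auc_score(y, risk), 3))
    # Calibration-in-the-large: mean predicted risk vs observed event rate.
    print("mean predicted risk:", round(risk.mean(), 3), "observed rate:", round(y.mean(), 3))
    ```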

  5. The Separation and Quantitation of Peptides with and without Oxidation of Methionine and Deamidation of Asparagine Using Hydrophilic Interaction Liquid Chromatography with Mass Spectrometry (HILIC-MS)

    NASA Astrophysics Data System (ADS)

    Badgett, Majors J.; Boyes, Barry; Orlando, Ron

    2017-05-01

    Peptides with deamidated asparagine residues and oxidized methionine residues are often not resolved sufficiently to allow quantitation of their native and modified forms using reversed phase (RP) chromatography. The accurate quantitation of these modifications is vital in protein biotherapeutic analysis because they can affect a protein's function, activity, and stability. We demonstrate here that hydrophilic interaction liquid chromatography (HILIC) adequately and predictably separates peptides with these modifications from their native counterparts. Furthermore, coefficients describing the extent of the hydrophilicity of these modifications have been derived and were incorporated into a previously made peptide retention prediction model that is capable of predicting the retention times of peptides with and without these modifications.

  6. Discovering a vaccine against neosporosis using computers: is it feasible?

    PubMed

    Goodswen, Stephen J; Kennedy, Paul J; Ellis, John T

    2014-08-01

    A vaccine is urgently needed to prevent cattle neosporosis. This infectious disease is caused by the parasite Neospora caninum, a complex biological system with multifaceted life cycles. An in silico vaccine discovery approach attempts to transform digital abstractions of this system into adequate knowledge to predict candidates. Researchers need current information to implement such an approach, such as understanding evasion mechanisms of the immune system, type of immune response to elicit, availability of data and prediction programs, and statistical models to analyze predictions. Taken together, an in silico approach involves assembly of an intricate jigsaw of interdisciplinary and interdependent knowledge. In this review, we focus on the approach influencing vaccine development against Neospora caninum, which can be generalized to other pathogenic apicomplexans. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. An experimental investigation of nacelle-pylon installation on an unswept wing at subsonic and transonic speeds

    NASA Technical Reports Server (NTRS)

    Carlson, J. R.; Compton, W. B., III

    1984-01-01

    A wind tunnel investigation was conducted to determine the aerodynamic interference associated with the installation of a long duct, flow-through nacelle on a straight unswept untapered supercritical wing. Experimental data was obtained for the verification of computational prediction techniques. The model was tested in the 16-Foot Transonic Tunnel at Mach numbers from 0.20 to 0.875 and at angles of attack from about 0 deg to 5 deg. The results of the investigation show that strong viscous and compressibility effects are present at the transonic Mach numbers. Numerical comparisons show that linear theory is adequate for subsonic Mach number flow prediction, but is inadequate for prediction of the extreme flow conditions that exist at the transonic Mach numbers.

  8. The Separation and Quantitation of Peptides with and without Oxidation of Methionine and Deamidation of Asparagine Using Hydrophilic Interaction Liquid Chromatography with Mass Spectrometry (HILIC-MS).

    PubMed

    Badgett, Majors J; Boyes, Barry; Orlando, Ron

    2017-05-01

    Peptides with deamidated asparagine residues and oxidized methionine residues are often not resolved sufficiently to allow quantitation of their native and modified forms using reversed phase (RP) chromatography. The accurate quantitation of these modifications is vital in protein biotherapeutic analysis because they can affect a protein's function, activity, and stability. We demonstrate here that hydrophilic interaction liquid chromatography (HILIC) adequately and predictably separates peptides with these modifications from their native counterparts. Furthermore, coefficients describing the extent of the hydrophilicity of these modifications have been derived and were incorporated into a previously made peptide retention prediction model that is capable of predicting the retention times of peptides with and without these modifications.

  9. Worldwide multi-model intercomparison of clear-sky solar irradiance predictions

    NASA Astrophysics Data System (ADS)

    Ruiz-Arias, Jose A.; Gueymard, Christian A.; Cebecauer, Tomas

    2017-06-01

    Accurate modeling of solar radiation in the absence of clouds is highly important because solar power production peaks during cloud-free situations. The conventional validation approach for clear-sky solar radiation models relies on the comparison between model predictions and ground observations. Therefore, this approach is limited to locations with high-quality ground observations, which are scarce worldwide. As a consequence, many areas of interest for, e.g., solar energy development still remain sub-validated. Here, a worldwide inter-comparison of the global horizontal irradiance (GHI) and direct normal irradiance (DNI) calculated by a number of appropriate clear-sky solar radiation models is proposed, without direct intervention of any weather or solar radiation ground-based observations. The model inputs are all gathered from atmospheric reanalyses covering the globe. The model predictions are compared to each other and only their relative disagreements are quantified. The largest differences between model predictions are found over central and northern Africa, the Middle East, and all over Asia. This coincides with areas of high aerosol optical depth and highly variable aerosol size distributions. Overall, the differences in modeled DNI are found to be about twice as large as those for GHI. It is argued that the prevailing weather regimes (most importantly, aerosol conditions) over regions exhibiting substantial divergences are not adequately parameterized by all models. Further validation and scrutiny using conventional methods based on ground observations should be pursued as a priority over those specific regions to correctly evaluate the performance of clear-sky models, and to select those that can be recommended for solar concentrating applications in particular.

  10. Cryogenic Tank Modeling for the Saturn AS-203 Experiment

    NASA Technical Reports Server (NTRS)

    Grayson, Gary D.; Lopez, Alfredo; Chandler, Frank O.; Hastings, Leon J.; Tucker, Stephen P.

    2006-01-01

    A computational fluid dynamics (CFD) model is developed for the Saturn S-IVB liquid hydrogen (LH2) tank to simulate the 1966 AS-203 flight experiment. This significant experiment is the only known, adequately-instrumented, low-gravity, cryogenic self-pressurization test that is well suited for CFD model validation. A 4000-cell, axisymmetric model predicts motion of the LH2 surface including boil-off and thermal stratification in the liquid and gas phases. The model is based on a modified version of the commercially available FLOW3D software. During the experiment, heat enters the LH2 tank through the tank forward dome, side wall, aft dome, and common bulkhead. In both model and test, the liquid and gases thermally stratify in the low-gravity natural convection environment. LH2 boils at the free surface, which in turn increases the pressure within the tank during the 5360-second experiment. The Saturn S-IVB tank model is shown to accurately simulate the self-pressurization and thermal stratification in the 1966 AS-203 test. The average predicted pressurization rate is within 4% of the pressure rise rate suggested by test data. Ullage temperature results are also in good agreement with the test, where the model predicts an ullage temperature rise rate within 6% of the measured data. The model is based on first principles only and includes no adjustments to bring the predictions closer to the test data. Although quantitative model validation is achieved for one specific case, a significant step is taken towards demonstrating general use of CFD for low-gravity cryogenic fluid modeling.

  11. Terrestrial population models for ecological risk assessment: A state-of-the-art review

    USGS Publications Warehouse

    Emlen, J.M.

    1989-01-01

    Few attempts have been made to formulate models for predicting impacts of xenobiotic chemicals on wildlife populations. However, considerable effort has been invested in wildlife optimal exploitation models. Because death from intoxication has a similar effect on population dynamics as death by harvesting, these management models are applicable to ecological risk assessment. An underlying Leslie-matrix bookkeeping formulation is widely applicable to vertebrate wildlife populations. Unfortunately, however, the various submodels that track birth, death, and dispersal rates as functions of the physical, chemical, and biotic environment are by their nature almost inevitably highly species- and locale-specific. Short-term prediction of one-time chemical applications requires only information on mortality before and after contamination. In such cases a simple matrix formulation may be adequate for risk assessment. But generally, risk must be projected over periods of a generation or more. This precludes generic protocols for risk assessment and also the ready and inexpensive predictions of a chemical's influence on a given population. When designing and applying models for ecological risk assessment at the population level, the endpoints (output) of concern must be carefully and rigorously defined. The most easily accessible and appropriate endpoints are (1) pseudoextinction (the frequency or probability of a population falling below a prespecified density), and (2) temporal mean population density. Spatial and temporal extent of predicted changes must be clearly specified a priori to avoid apparent contradictions and confusion.
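
    A minimal sketch of the Leslie-matrix bookkeeping described above, with added chemical mortality applied to the survival terms and the two suggested endpoints (pseudoextinction probability and mean density) estimated by simple stochastic projection; the matrix, threshold and stochasticity are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 3-age-class Leslie matrix: fecundities on the first row, survivals below.
    L = np.array([[0.0, 1.2, 1.5],
                  [0.6, 0.0, 0.0],
                  [0.0, 0.7, 0.0]])

    def project(extra_mortality, years=30, n_runs=1000, threshold=20.0):
        """Pseudoextinction probability and mean density under added chemical mortality."""
        A = L.copy()
        A[1:, :] *= (1.0 - extra_mortality)          # intoxication reduces survival only
        below, means = 0, []
        for _ in range(n_runs):
            n = np.array([50.0, 30.0, 20.0])
            traj = []
            for _ in range(years):
                n = A @ n * rng.lognormal(0.0, 0.1)  # simple environmental stochasticity
                traj.append(n.sum())
            below += min(traj) < threshold           # fell below the prespecified density?
            means.append(np.mean(traj))
        return below / n_runs, np.mean(means)

    for m in (0.0, 0.2, 0.4):
        print("extra mortality", m, "->", project(m))
    ```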

  12. Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, Fred J.

    1984-01-01

    A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.

  13. Adding-point strategy for reduced-order hypersonic aerothermodynamics modeling based on fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang

    2016-09-01

    Reduced order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROMs is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean squared error prediction algorithm while maintaining the same level of prediction accuracy.

  14. The status and challenge of global fire modelling

    DOE PAGES

    Hantson, Stijn; Arneth, Almut; Harrison, Sandy P.; ...

    2016-06-09

    Biomass burning impacts vegetation dynamics, biogeochemical cycling, atmospheric chemistry, and climate, with sometimes deleterious socio-economic impacts. Under future climate projections it is often expected that the risk of wildfires will increase. Our ability to predict the magnitude and geographic pattern of future fire impacts rests on our ability to model fire regimes, using either well-founded empirical relationships or process-based models with good predictive skill. While a large variety of models exist today, it is still unclear which type of model or degree of complexity is required to model fire adequately at regional to global scales. This is the central question underpinning the creation of the Fire Model Intercomparison Project (FireMIP), an international initiative to compare and evaluate existing global fire models against benchmark data sets for present-day and historical conditions. In this paper we review how fires have been represented in fire-enabled dynamic global vegetation models (DGVMs) and give an overview of the current state of the art in fire-regime modelling. In conclusion, we indicate which challenges still remain in global fire modelling and stress the need for a comprehensive model evaluation and outline what lessons may be learned from FireMIP.

  15. The status and challenge of global fire modelling

    NASA Astrophysics Data System (ADS)

    Hantson, Stijn; Arneth, Almut; Harrison, Sandy P.; Kelley, Douglas I.; Prentice, I. Colin; Rabin, Sam S.; Archibald, Sally; Mouillot, Florent; Arnold, Steve R.; Artaxo, Paulo; Bachelet, Dominique; Ciais, Philippe; Forrest, Matthew; Friedlingstein, Pierre; Hickler, Thomas; Kaplan, Jed O.; Kloster, Silvia; Knorr, Wolfgang; Lasslop, Gitta; Li, Fang; Mangeon, Stephane; Melton, Joe R.; Meyn, Andrea; Sitch, Stephen; Spessa, Allan; van der Werf, Guido R.; Voulgarakis, Apostolos; Yue, Chao

    2016-06-01

    Biomass burning impacts vegetation dynamics, biogeochemical cycling, atmospheric chemistry, and climate, with sometimes deleterious socio-economic impacts. Under future climate projections it is often expected that the risk of wildfires will increase. Our ability to predict the magnitude and geographic pattern of future fire impacts rests on our ability to model fire regimes, using either well-founded empirical relationships or process-based models with good predictive skill. While a large variety of models exist today, it is still unclear which type of model or degree of complexity is required to model fire adequately at regional to global scales. This is the central question underpinning the creation of the Fire Model Intercomparison Project (FireMIP), an international initiative to compare and evaluate existing global fire models against benchmark data sets for present-day and historical conditions. In this paper we review how fires have been represented in fire-enabled dynamic global vegetation models (DGVMs) and give an overview of the current state of the art in fire-regime modelling. We indicate which challenges still remain in global fire modelling and stress the need for a comprehensive model evaluation and outline what lessons may be learned from FireMIP.

  16. The status and challenge of global fire modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hantson, Stijn; Arneth, Almut; Harrison, Sandy P.

    Biomass burning impacts vegetation dynamics, biogeochemical cycling, atmospheric chemistry, and climate, with sometimes deleterious socio-economic impacts. Under future climate projections it is often expected that the risk of wildfires will increase. Our ability to predict the magnitude and geographic pattern of future fire impacts rests on our ability to model fire regimes, using either well-founded empirical relationships or process-based models with good predictive skill. While a large variety of models exist today, it is still unclear which type of model or degree of complexity is required to model fire adequately at regional to global scales. This is the central question underpinning the creation of the Fire Model Intercomparison Project (FireMIP), an international initiative to compare and evaluate existing global fire models against benchmark data sets for present-day and historical conditions. In this paper we review how fires have been represented in fire-enabled dynamic global vegetation models (DGVMs) and give an overview of the current state of the art in fire-regime modelling. In conclusion, we indicate which challenges still remain in global fire modelling and stress the need for a comprehensive model evaluation and outline what lessons may be learned from FireMIP.

  17. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described by a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method yields significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
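
    The core of the original BMA combination can be written in a few lines: the predictive pdf is a weighted mixture of member-conditional pdfs centred on the (bias-corrected) forecasts. The sketch below uses fixed Gaussian components with illustrative weights and variances; in the method described above, the conditional pdfs would instead be evolving mixtures obtained from the particle filter.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Bias-corrected forecasts from three ensemble members for one verification time,
    # their BMA weights and conditional standard deviations (all illustrative; in
    # practice weights and variances are estimated over a training period).
    forecasts = np.array([2.1, 2.6, 3.4])   # hypothetical discharge forecasts
    weights = np.array([0.5, 0.3, 0.2])
    sigmas = np.array([0.4, 0.5, 0.8])

    x = np.linspace(0.0, 6.0, 601)
    # BMA predictive pdf: weighted average of the member-conditional pdfs.
    bma_pdf = sum(w * norm.pdf(x, loc=f, scale=s)
                  for w, f, s in zip(weights, forecasts, sigmas))

    # Point forecast (mixture mean) and a 90% predictive interval from the mixture cdf.
    mean = np.sum(weights * forecasts)
    cdf = np.cumsum(bma_pdf) * (x[1] - x[0])
    lo, hi = x[np.searchsorted(cdf, 0.05)], x[np.searchsorted(cdf, 0.95)]
    print(f"BMA mean {mean:.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
    ```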

  18. Large eddy simulation of turbulent premixed combustion using tabulated detailed chemistry and presumed probability density function

    NASA Astrophysics Data System (ADS)

    Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin

    2016-03-01

    A method of chemistry tabulation combined with a presumed probability density function (PDF) is applied to large eddy simulation of piloted premixed jet burner flames with high Karlovitz number. Thermo-chemistry states are tabulated by a combination of the auto-ignition and extended auto-ignition models. To evaluate the capability of the proposed tabulation method to represent the thermo-chemistry states for different fresh-gas temperatures, an a priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. A presumed PDF is used to account for the interaction of turbulence and flame, with a beta PDF modelling the distribution of the reaction progress variable. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, are applied to represent the interaction between the two mixture fractions associated with the three inlet streams. Comparisons of statistical results show that both presumed PDF models for the two mixture fractions are capable of predicting temperature and major species profiles; however, they have a significant effect on the predictions of intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and trend of intermediate species. Aspects regarding model extensions to adequately predict the peak location of intermediate species are discussed.

  19. Statistical Approaches for Spatiotemporal Prediction of Low Flows

    NASA Astrophysics Data System (ADS)

    Fangmann, A.; Haberlandt, U.

    2017-12-01

    An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion in model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require climatic input in the form of meteorological drought indices, derived from observed daily climatic variables averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all of them is multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. For the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three is subject to a spatiotemporal prediction of an index value, method four to estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows high performance in the near-future period, but since it relies on a stationary distribution, its application for prediction of far-future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments, resulting in unrealistic future low flow values. All in all, the results support the inclusion of simple statistical methods in climate change impact assessment.
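
    As a minimal illustration of the pooled ("panel") regression idea in the second method, the sketch below regresses annual low-flow indices on a meteorological drought index plus two time-invariant catchment descriptors across all station-years at once; the synthetic data and coefficients are illustrative assumptions, not the study's records.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n_stations, n_years = 51, 40

    # Synthetic panel: one meteorological drought index per station-year, plus two
    # time-invariant catchment descriptors (placeholders, e.g. area and baseflow index).
    drought_index = rng.normal(size=(n_stations, n_years))
    area = rng.uniform(50, 500, size=n_stations)
    bfi = rng.uniform(0.2, 0.8, size=n_stations)

    low_flow = (0.5 * drought_index + 0.002 * area[:, None] + 1.5 * bfi[:, None]
                + rng.normal(0, 0.2, size=(n_stations, n_years)))

    # Pool all station-years into one design matrix: space and time in a single model.
    X = np.column_stack([drought_index.ravel(),
                         np.repeat(area, n_years),
                         np.repeat(bfi, n_years)])
    y = low_flow.ravel()

    model = LinearRegression().fit(X, y)
    print("R^2 over all station-years:", round(model.score(X, y), 3))
    ```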

  20. A screening model analysis of mercury sources, fate and bioaccumulation in the Gulf of Mexico.

    PubMed

    Harris, Reed; Pollman, Curtis; Hutchinson, David; Landing, William; Axelrad, Donald; Morey, Steven L; Dukhovskoy, Dmitry; Vijayaraghavan, Krish

    2012-11-01

    A mass balance model of mercury (Hg) cycling and bioaccumulation was applied to the Gulf of Mexico (Gulf), coupled with outputs from hydrodynamic and atmospheric Hg deposition models. The dominant overall source of Hg to the Gulf is the Atlantic Ocean. Gulf waters do not mix fully, however, resulting in predicted spatial differences in the relative importance of external Hg sources to Hg levels in water, sediments and biota. Direct atmospheric Hg deposition, riverine inputs, and Atlantic inputs were each predicted to be the most important source of Hg to at least one of the modeled regions in the Gulf. Although incomplete, the mixing of Gulf waters is predicted to be sufficient for fish Hg levels in any given location to be affected by Hg entering other regions of the Gulf. This suggests that a Gulf-wide approach is warranted to reduce Hg loading and the elevated Hg concentrations currently observed in some fish species. Basic data to characterize Hg concentrations and cycling in the Gulf are lacking but needed to adequately understand the relationship between Hg sources and fish Hg concentrations. Copyright © 2012. Published by Elsevier Inc.

  1. A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises.

    PubMed

    Marquis-Favre, Catherine; Morel, Julien

    2015-07-21

    Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment where participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors and potential interactions between the combined noise sources. The second one was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with high differences in sound pressure level. Thus, these results reinforced the need to focus on perceptual models and to improve the prediction of partial annoyances.

  2. [Approximation to the dynamics of meningococcal meningitis through dynamic systems and time series].

    PubMed

    Canals, M

    1996-02-01

    Meningococcal meningitis is subject to epidemiological surveillance due to its severity and the occasional occurrence of epidemic outbreaks. This work analyses previous disease models, generates new ones, and analyses monthly case counts using ARIMA time series models. The results show that the disease dynamics in closed populations is epidemic, and the epidemic size is related to the proportion of carriers and the transmissiveness of the agent. In open populations, the disease dynamics depends on the admission rate of susceptibles and the relative admission of infected individuals. Our model considers logistic population growth and carrier admission proportional to population size, generating an endemic dynamics. Considering a non-instantaneous system response, greater realism is obtained, establishing that the endemic situation may present dynamics highly sensitive to initial conditions, depending on the transmissiveness and the proportion of susceptible individuals in the population. The time series models showed adequate predictive capacity for horizons no longer than 10 months. The lack of long-term predictability was attributed to local changes in the proportion of carriers or in transmissiveness that lead to chaotic dynamics superimposed on a seasonal pattern. Predictions for 1995 and 1996 were obtained.
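
    A minimal sketch of the time-series step described above, fitting a seasonal ARIMA to monthly case counts and forecasting 10 months ahead with statsmodels; the synthetic series and model orders are illustrative assumptions rather than the surveillance data or the orders identified in the study.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(3)

    # Synthetic monthly case counts with a 12-month seasonal pattern, standing in
    # for the meningococcal meningitis surveillance series.
    months = pd.date_range("1985-01-01", periods=120, freq="MS")
    seasonal = 10 + 4 * np.sin(2 * np.pi * np.arange(120) / 12)
    cases = pd.Series(np.maximum(0, seasonal + rng.normal(0, 2, 120)).round(), index=months)

    # Seasonal ARIMA of illustrative order; in practice the orders are identified
    # from ACF/PACF diagnostics in the Box-Jenkins fashion.
    model = SARIMAX(cases, order=(1, 0, 1), seasonal_order=(1, 0, 0, 12)).fit(disp=False)
    print(model.forecast(steps=10).round(1))   # predictions limited to ~10-month horizons
    ```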

  3. The pitch of vibrato tones: a model based on instantaneous frequency decomposition.

    PubMed

    Mesz, Bruno A; Eguia, Manuel C

    2009-07-01

    We study vibrato as the most ubiquitous manifestation of a nonstationary tone that can evoke a single overall pitch. Some recent results using nonsymmetrical vibrato tones suggest that the perceived pitch could be governed by some stability-sensitive mechanism. For nonstationary sounds, the appropriate tools are time-frequency representations (TFRs). We show that a recently proposed TFR could be the simplest framework to explain this hypothetical stability-sensitive mechanism. We propose a one-parameter model within this framework that is able to predict previously reported results, and we present new results obtained from psychophysical experiments performed in our laboratory.

  4. Ways that our Solar System helps us understand the formation of other planetary systems and ways that it doesn't

    NASA Technical Reports Server (NTRS)

    Wetherill, G. W.

    1996-01-01

    Models of planetary formation can be tested by comparison of their ability to predict features of our Solar System in a consistent way, and then extrapolated to other hypothetical planetary systems by different choice of parameters. When this is done, it is found that the resulting systems are insensitive to direct effects of the mass of the star, but do strongly depend on the properties of the disk, principally its surface density. Major uncertainty results from lack of an adequate theoretical model that predicts the existence, size, and distribution of analogs of our Solar System, particularly the gas giants Jupiter and Saturn. Nevertheless, reasons can be found for expecting that planetary systems, including those containing biologically habitable planets similar to Earth, may be abundant in the Galaxy and Universe.

  5. Ways that our Solar System helps us understand the formation of other planetary systems and ways that it doesn't.

    PubMed

    Wetherill, G W

    1996-01-01

    Models of planetary formation can be tested by comparison of their ability to predict features of our Solar System in a consistent way, and then extrapolated to other hypothetical planetary systems by different choice of parameters. When this is done, it is found that the resulting systems are insensitive to direct effects of the mass of the star, but do strongly depend on the properties of the disk, principally its surface density. Major uncertainty results from lack of an adequate theoretical model that predicts the existence, size, and distribution of analogs of our Solar System, particularly the gas giants Jupiter and Saturn. Nevertheless, reasons can be found for expecting that planetary systems, including those containing biologically habitable planets similar to Earth, may be abundant in the Galaxy and Universe.

  6. Inference for multivariate regression model based on multiply imputed synthetic data generated via posterior predictive sampling

    NASA Astrophysics Data System (ADS)

    Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.

    2017-06-01

    The recent popularity of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods for generating and analyzing such data, but these almost always rely on asymptotic distributions and are consequently not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may be used even with small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.

  7. Dynamic slip of polydisperse linear polymers using partitioned plate

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Marzieh; Konaganti, Vinod Kumar; Hatzikiriakos, Savvas G.

    2018-03-01

    The slip velocity of an industrial grade high molecular weight high-density polyethylene (HDPE) is studied in steady and dynamic shear experiments using a stress/strain controlled rotational rheometer equipped with a parallel partitioned plate geometry. Moreover, a fluoroalkyl silane-based coating is used to understand the effect of surface energy on slip under steady and dynamic conditions. The multimode integral Kaye-Bernstein-Kearsley-Zapas constitutive model is applied to predict the transient shear response of the HDPE melt obtained from the rotational rheometer. It is found that a dynamic slip model with a slip relaxation time is needed to adequately predict the experimental data at large shear deformations. Comparison of the results before and after coating shows that the slip velocity is strongly affected by surface energy. Decreasing the surface energy by coating increases the slip velocity and decreases the slip relaxation time.
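
    The idea of a slip relaxation time can be illustrated with a first-order "memory" slip law in which the slip velocity relaxes toward its steady-state value. The sketch below is only a schematic of that idea, not the fitted model from the study; the power-law steady slip relation, the relaxation time and the imposed wall stress are all assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed dynamic slip law: lambda_s * du_s/dt + u_s = u_steady(sigma_w),
# with a power-law steady slip relation u_steady = a * sigma_w**m.
a, m = 1e-3, 2.0        # steady slip parameters (assumed)
lambda_s = 0.5          # slip relaxation time, s (assumed)

def wall_stress(t):
    # Oscillatory wall shear stress standing in for a large-amplitude dynamic test.
    return 0.10 + 0.05 * np.sin(2 * np.pi * t)

def rhs(t, y):
    u_s = y[0]
    u_steady = a * wall_stress(t) ** m
    return [(u_steady - u_s) / lambda_s]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0], max_step=0.01)
# The computed slip velocity lags the instantaneous steady-state value,
# which is the signature of a finite slip relaxation time.
print(sol.y[0, -1])
```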

  8. [Academic performance in first year medical students: an explanatory multivariate model].

    PubMed

    Urrutia Aguilar, María Esther; Ortiz León, Silvia; Fouilloux Morales, Claudia; Ponce Rosas, Efrén Raúl; Guevara Guzmán, Rosalinda

    2014-12-01

    Current education focuses on intellectual, affective, and ethical aspects, thus acknowledging their significance in students' metacognition. Nowadays, it is known that an adequate and motivating environment, together with a positive attitude towards studies, is fundamental to induce learning. Medical students are under multiple stressful academic, personal, and vocational situations. The aim was to identify psychosocial, vocational, and academic variables of 2010-2011 first-year medical students at UNAM that may help predict their academic performance. Surveys of psychological and vocational factors were applied, and an academic follow-up was carried out to obtain a multivariate model. The data were analyzed using descriptive, comparative, correlational, and predictive statistics. The main variables that affect students' academic performance are related to previous knowledge and to psychological variables. The results show the importance of implementing institutional programs to support students throughout their adaptation to college.

  9. On the Selection of Models for Runtime Prediction of System Resources

    NASA Astrophysics Data System (ADS)

    Casolari, Sara; Colajanni, Michele

    Applications and services delivered through large Internet Data Centers are now feasible thanks to network and server improvements, but also to virtualization, dynamic allocation of resources and dynamic migrations. The large number of servers and resources involved in these systems requires autonomic management strategies, because no number of human administrators would be capable of cloning and migrating virtual machines in time, or of re-distributing and re-mapping the underlying hardware. At the basis of most autonomic management decisions is the need for a system to evaluate its own global behavior and change it when the evaluation indicates that it is not accomplishing what it was intended to do or that some relevant anomalies are occurring. Decision algorithms have to satisfy constraints at different time scales. In this chapter we are interested in short-term contexts where runtime prediction models work on the basis of time series coming from samples of monitored system resources, such as disk, CPU and network utilization. In such environments, we have to address two main issues. First, the original time series have limited predictability because measurements are affected by noise due to system instability, variable offered load, heavy-tailed distributions, and hardware and software interactions. Second, there are no existing criteria that can help us choose a suitable prediction model and related parameters so as to guarantee adequate prediction quality. In this chapter, we evaluate the impact that different choices of prediction model have on different time series, and we suggest how to treat input data and whether it is convenient to choose the parameters of a prediction model in a static or dynamic way. Our conclusions are supported by a large set of analyses on realistic and synthetic data traces.
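
    As a toy illustration of short-term runtime prediction from a noisy resource trace (not the chapter's own predictors), the sketch below applies a one-step-ahead exponentially weighted moving average to a synthetic CPU-utilization series; the smoothing factor and the trace are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Noisy synthetic CPU-utilization trace standing in for monitored samples.
trace = 0.5 + 0.1 * np.sin(np.arange(300) / 20) + rng.normal(0, 0.05, 300)

def ewma_one_step(series, alpha=0.3):
    """Return one-step-ahead EWMA predictions for each sample."""
    preds = np.empty_like(series)
    level = series[0]
    for i, x in enumerate(series):
        preds[i] = level            # prediction issued before observing x
        level = alpha * x + (1 - alpha) * level
    return preds

preds = ewma_one_step(trace)
mse = np.mean((preds[1:] - trace[1:]) ** 2)
print(f"one-step-ahead MSE: {mse:.5f}")
```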

  10. A Micromechanics-Based Method for Multiscale Fatigue Prediction

    NASA Astrophysics Data System (ADS)

    Moore, John Allan

    An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which depends on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non-metallic inclusions. Based on this analysis, a novel multiscale method for fatigue prediction is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions, thus providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue, while also developing a tool to aid in geometric design and material optimization for fatigue-critical devices such as biomedical stents and artificial heart valves.

  11. Cross Validation of Selection of Variables in Multiple Regression.

    DTIC Science & Technology

    1979-12-01

    [Fragmentary OCR excerpt; only partially legible. The recoverable text indicates that one regression model had adequate predictive capabilities, with the remark about the other two models truncated; the remaining fragments are regression coefficients and F111D equipment listings.]

  12. Follow-on Low Noise Fan Aerodynamic Study

    NASA Technical Reports Server (NTRS)

    Heidegger, Nathan J.; Hall, Edward J.; Delaney, Robert A.

    1999-01-01

    The focus of the project was to investigate the effects of turbulence models on the prediction of rotor wake structures. The Advanced Ducted Propfan Analysis (ADPAC) code was modified through the incorporation of the Spalart-Allmaras one-equation turbulence model. Suitable test cases were solved numerically using ADPAC employing the Spalart-Allmaras turbulence model and another prediction code for comparison. A near-wall spacing study was also completed to determine the adequate spacing of the first computational cell off the wall. Solutions were also collected using two versions of the algebraic Baldwin-Lomax turbulence model in ADPAC. The effects of the turbulence model on the rotor wake definition were examined by obtaining ADPAC solutions for the Low Noise Fan rotor-only steady-flow case using the standard algebraic Baldwin-Lomax turbulence model, a modified version of the Baldwin-Lomax turbulence model and the one-equation Spalart-Allmaras turbulence model. The results from the three different turbulence modeling techniques were compared with each other and with the available experimental data. These results include overall rotor performance, spanwise exit profiles, and contours of axial velocity taken along constant axial locations and along blade-to-blade surfaces. Wake characterizations were also performed on the experimental and ADPAC-predicted results, including the definition of a wake correlation function. Correlations were evaluated for wake width and wake depth. Similarity profiles of the wake shape were also compared between all numerical solutions and experimental data.

  13. A computer model of solar panel-plasma interactions

    NASA Technical Reports Server (NTRS)

    Cooke, D. L.; Freeman, J. W.

    1980-01-01

    High power solar arrays for satellite power systems are presently being planned with dimensions of kilometers and with tens of kilovolts distributed over their surface. Such systems face many plasma interaction problems, such as power leakage to the plasma, particle focusing, and anomalous arcing. These effects cannot be adequately modeled without detailed knowledge of the plasma sheath structure and space charge effects. Laboratory studies of a 1 by 10 meter solar array in a simulated low Earth orbit plasma are discussed. The plasma screening process is discussed, the theory behind the PANEL program is outlined, and a series of calibration models is presented. These models are designed to demonstrate that PANEL is capable of accurate, self-consistent space charge calculations. They include PANEL predictions for the Child-Langmuir diode problem.

  14. Benchmarking hydrological model predictive capability for UK River flows and flood peaks.

    NASA Astrophysics Data System (ADS)

    Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten

    2017-04-01

    Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped, conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models run at a daily timestep, but they differ in model structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over a 20-year period (1988-2008) within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure and parameter set using standard performance metrics, calculated both for the whole time series and to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time series and, additionally, annual maximum prediction bounds for each catchment. The results show that model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models systematically fail to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and have explored which structural component or parameterisation enables certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment by examining the ability of the models to produce discharge prediction bounds that successfully bound the observed discharge. These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches that better represent different catchment and climate typologies.
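
    A minimal sketch of the GLUE procedure referred to above, using an assumed toy recession model in place of FUSE: sample parameter sets from prior ranges, retain the "behavioural" sets whose Nash-Sutcliffe efficiency exceeds a threshold, and take the 5th/95th percentiles of their simulations as prediction bounds.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(200)
obs = 5 + 3 * np.exp(-t / 50) + rng.normal(0, 0.2, t.size)   # synthetic "observed" flow

def toy_model(a, b):
    # Stand-in for a rainfall-runoff model with two parameters.
    return a + 3 * np.exp(-t / b)

def nse(sim, obs):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

n_samples, threshold = 5000, 0.7           # sample size and behavioural threshold (assumed)
a_prior = rng.uniform(0, 10, n_samples)    # prior range for parameter a (assumed)
b_prior = rng.uniform(10, 100, n_samples)  # prior range for parameter b (assumed)

sims = np.array([toy_model(a, b) for a, b in zip(a_prior, b_prior)])
scores = np.array([nse(s, obs) for s in sims])
behavioural = sims[scores > threshold]

lower = np.percentile(behavioural, 5, axis=0)
upper = np.percentile(behavioural, 95, axis=0)
coverage = np.mean((obs >= lower) & (obs <= upper))
print(f"{behavioural.shape[0]} behavioural sets, bound coverage = {coverage:.2f}")
```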

  15. Statistical and engineering methods for model enhancement

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Jung

    Models which describe the performance of a physical process are essential for quality prediction, experimental planning, process control and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system which may introduce a discrepancy between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely from the data generated by the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies include two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring and decision optimization. Among data-driven enhancement approaches, the Gaussian Process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model with a linear regression model and then applying simultaneous variable selection over the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. In contrast to enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of a nanocomposite polymer, quantitatively representing the nanomaterial quality presented through image data. The model parameters are estimated with a Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes. In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and applied to various applications. These research activities produce engineering-compliant models for adequate system predictions based on observational data with complex variable relationships and uncertainty, facilitating process planning, monitoring, and real-time control.
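
    As a minimal sketch of the data-driven enhancement idea (not the dissertation's Minimal Adjustment procedure), the example below fits a Gaussian Process to the residuals between a toy "physics" model and synthetic observations and adds the learned discrepancy back to the physics prediction; the functions and noise levels are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 40).reshape(-1, 1)
truth = 2.0 * np.sin(x).ravel() + 0.3 * x.ravel()   # unknown "true" process
physics = 2.0 * np.sin(x).ravel()                   # physics model missing a trend term
obs = truth + rng.normal(0, 0.1, x.shape[0])

# Fit a GP to the physics-model residuals (the discrepancy), then correct the
# physics prediction with the learned discrepancy.
residuals = obs - physics
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(x, residuals)

discrepancy, std = gp.predict(x, return_std=True)
enhanced = physics + discrepancy
print(f"RMSE physics only: {np.sqrt(np.mean((physics - obs) ** 2)):.3f}")
print(f"RMSE enhanced:     {np.sqrt(np.mean((enhanced - obs) ** 2)):.3f}")
```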

  16. A comparison of two infiltration models applied to simulation of overland flow over a two-dimensional flume.

    PubMed

    Mallari, K J B; Kim, H; Pak, G; Aksoy, H; Yoon, J

    2015-01-01

    At the hillslope scale, where the rill-interrill configuration plays a significant role, infiltration is one of the major hydrologic processes affecting the generation of overland flow. As such, it is important to achieve a good understanding and accurate modelling of this process. Horton's infiltration equation has been widely used in many hydrologic models, though it has occasionally been found limited in adequately handling the antecedent moisture conditions (AMC) of the soil. Holtan's model, conversely, is thought to provide better estimation of infiltration rates, as it can directly account for initial soil water content in its formulation. In this study, the Holtan model is coupled to an existing overland flow model, which originally used Horton's model to account for infiltration, in an attempt to improve the prediction of runoff. For calibration and validation, experimental data from a two-dimensional flume incorporating a hillslope configuration have been used. Calibration and validation results showed that Holtan's model improved the modelling results, with better performance statistics than the Horton-coupled model. Holtan's infiltration equation, which accounts for AMC, provided an advantage and resulted in better runoff prediction.
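
    For reference, the two infiltration laws being compared can be written down directly. The sketch below implements the textbook forms (Horton: f(t) = fc + (f0 - fc)e^(-kt); Holtan: f = GI·a·SA^1.4 + fc) with illustrative parameter values, not the calibrated values from the flume experiments.

```python
import numpy as np

def horton(t, f0=50.0, fc=5.0, k=2.0):
    """Horton infiltration capacity (mm/h) as a function of time t (h):
    decays from f0 to fc regardless of soil moisture."""
    return fc + (f0 - fc) * np.exp(-k * t)

def holtan(sa, gi=1.0, a=0.6, fc=5.0):
    """Holtan infiltration capacity (mm/h) as a function of available storage sa (mm);
    the storage term lets the rate respond directly to antecedent soil moisture."""
    return gi * a * sa ** 1.4 + fc

t = np.linspace(0.0, 2.0, 5)                     # hours into the event
sa = np.array([40.0, 30.0, 20.0, 10.0, 0.0])     # remaining storage as the soil wets up
print(horton(t))
print(holtan(sa))
```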

  17. In vitro ovine articular chondrocyte proliferation: experiments and modelling.

    PubMed

    Mancuso, L; Liuzzo, M I; Fadda, S; Pisu, M; Cincotti, A; Arras, M; La Nasa, G; Concas, A; Cao, G

    2010-06-01

    This study focuses on analysis of in vitro cultures of chondrocytes from ovine articular cartilage. Isolated cells were seeded in Petri dishes, then expanded to confluence and phenotypically characterized by flow cytometry. The sigmoidal temporal profile of total counts was obtained by classic haemocytometry and corresponding cell size distributions were measured electronically using a Coulter Counter. A mathematical model recently proposed (1) was adopted for quantitative interpretation of these experimental data. The model is based on a 1-D (that is, mass-structured), single-staged population balance approach capable of taking into account contact inhibition at confluence. The model's parameters were determined by fitting measured total cell counts and size distributions. Model reliability was verified by predicting cell proliferation counts and corresponding size distributions at culture times longer than those used when tuning the model's parameters. It was found that adoption of cell mass as the intrinsic characteristic of a growing chondrocyte population enables sigmoidal temporal profiles of total counts in the Petri dish, as well as cell size distributions at 'balanced growth', to be adequately predicted.

  18. Wastewater Biosolid Composting Optimization Based on UV-VNIR Spectroscopy Monitoring

    PubMed Central

    Temporal-Lara, Beatriz; Melendez-Pastor, Ignacio; Gómez, Ignacio; Navarro-Pedreño, Jose

    2016-01-01

    Conventional wastewater treatment generates large amounts of organic matter–rich sludge that requires adequate treatment to avoid public health and environmental problems. The mixture of wastewater sludge and some bulking agents produces a biosolid to be composted at adequate composting facilities. The composting process is chemically and microbiologically complex and requires an adequate aeration of the biosolid (e.g., with a turner machine) for proper maturation of the compost. Adequate (near) real-time monitoring of the compost maturity process is highly difficult and the operation of composting facilities is not as automatized as other industrial processes. Spectroscopic analysis of compost samples has been successfully employed for compost maturity assessment but the preparation of the solid compost samples is difficult and time-consuming. This manuscript presents a methodology based on a combination of a less time-consuming compost sample preparation and ultraviolet, visible and short-wave near-infrared spectroscopy. Spectroscopic measurements were performed with liquid compost extract instead of solid compost samples. Partial least square (PLS) models were developed to quantify chemical fractions commonly employed for compost maturity assessment. Effective regression models were obtained for total organic matter (residual predictive deviation—RPD = 2.68), humification ratio (RPD = 2.23), total exchangeable carbon (RPD = 2.07) and total organic carbon (RPD = 1.66) with a modular and cost-effective visible and near infrared (VNIR) spectroradiometer. This combination of a less time-consuming compost sample preparation with a versatile sensor system provides an easy-to-implement, efficient and cost-effective protocol for compost maturity assessment and near-real-time monitoring. PMID:27854280

  19. Wastewater Biosolid Composting Optimization Based on UV-VNIR Spectroscopy Monitoring.

    PubMed

    Temporal-Lara, Beatriz; Melendez-Pastor, Ignacio; Gómez, Ignacio; Navarro-Pedreño, Jose

    2016-11-15

    Conventional wastewater treatment generates large amounts of organic matter-rich sludge that requires adequate treatment to avoid public health and environmental problems. The mixture of wastewater sludge and some bulking agents produces a biosolid to be composted at adequate composting facilities. The composting process is chemically and microbiologically complex and requires an adequate aeration of the biosolid (e.g., with a turner machine) for proper maturation of the compost. Adequate (near) real-time monitoring of the compost maturity process is highly difficult and the operation of composting facilities is not as automatized as other industrial processes. Spectroscopic analysis of compost samples has been successfully employed for compost maturity assessment but the preparation of the solid compost samples is difficult and time-consuming. This manuscript presents a methodology based on a combination of a less time-consuming compost sample preparation and ultraviolet, visible and short-wave near-infrared spectroscopy. Spectroscopic measurements were performed with liquid compost extract instead of solid compost samples. Partial least square (PLS) models were developed to quantify chemical fractions commonly employed for compost maturity assessment. Effective regression models were obtained for total organic matter (residual predictive deviation-RPD = 2.68), humification ratio (RPD = 2.23), total exchangeable carbon (RPD = 2.07) and total organic carbon (RPD = 1.66) with a modular and cost-effective visible and near infrared (VNIR) spectroradiometer. This combination of a less time-consuming compost sample preparation with a versatile sensor system provides an easy-to-implement, efficient and cost-effective protocol for compost maturity assessment and near-real-time monitoring.
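
    A minimal sketch of the calibration step described above: fit a PLS regression to (here, synthetic) spectra and report the residual predictive deviation, RPD = SD of the reference values / RMSEP, the statistic quoted for each chemical fraction. The number of latent components and all data are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_samples, n_wavelengths = 120, 200
X = rng.normal(size=(n_samples, n_wavelengths))              # stand-in spectra
y = X[:, :10].sum(axis=1) + rng.normal(0, 0.5, n_samples)    # stand-in reference chemistry

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8)                          # component count is an assumption
pls.fit(X_tr, y_tr)

y_pred = pls.predict(X_te).ravel()
rmsep = np.sqrt(np.mean((y_pred - y_te) ** 2))
rpd = np.std(y_te, ddof=1) / rmsep                           # RPD > 2 is often read as usable
print(f"RMSEP = {rmsep:.3f}, RPD = {rpd:.2f}")
```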

  20. Modelling the growth of the brown frog (Rana dybowskii)

    PubMed Central

    Du, Xiao-peng; Hu, Zong-fu; Cui, Li-yong

    2018-01-01

    Well-controlled development leads to uniform body size and a better growth rate; therefore, the ability to determine the growth rate of frogs and their period of sexual maturity is essential for producing healthy, high-quality descendant frogs. To establish a working model that can best predict the growth performance of frogs, the present study examined the growth of one-year-old and two-year-old brown frogs (Rana dybowskii) from metamorphosis to hibernation (18 weeks) and out-hibernation to hibernation (20 weeks) under the same environmental conditions. Brown frog growth was studied and mathematically modelled using various nonlinear, linear, and polynomial functions. The model input values were statistically evaluated using parameters such as the Akaike’s information criterion. The body weight/size ratio (Kwl) and Fulton’s condition factor (K) were used to compare the weight and size of groups of frogs during the growth period. The results showed that the third- and fourth-order polynomial models provided the most consistent predictions of body weight for age 1 and age 2 brown frogs, respectively. Both the Gompertz and third-order polynomial models yielded similarly adequate results for the body size of age 1 brown frogs, while the Janoschek model produced a similarly adequate result for the body size of age 2 brown frogs. The Brody and Janoschek models yielded the highest and lowest estimates of asymptotic weight, respectively, for the body weights of all frogs. The Kwl value of all frogs increased from 0.40 to 3.18. The K value of age 1 frogs decreased from 23.81 to 9.45 in the first four weeks. The K value of age 2 frogs remained close to 10. Graphically, a sigmoidal trend was observed for body weight and body size with increasing age. The results of this study will be useful not only for amphibian research but also for frog farming management strategies and decisions.
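
    As a minimal illustration of the model-comparison step (not the paper's dataset or code), the sketch below fits a Gompertz curve and a third-order polynomial to a synthetic weekly body-weight series and compares them with a Gaussian AIC; starting values and data are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(11)
weeks = np.arange(1, 21, dtype=float)
true_w = 30 * np.exp(-np.exp(-(weeks - 8) / 3))     # sigmoidal "true" growth, g
obs_w = true_w + rng.normal(0, 1.0, weeks.size)

def gompertz(t, a, b, k):
    return a * np.exp(-np.exp(-k * (t - b)))

def gaussian_aic(obs, pred, n_params):
    rss = np.sum((obs - pred) ** 2)
    n = obs.size
    return n * np.log(rss / n) + 2 * n_params       # AIC up to an additive constant

p_gomp, _ = curve_fit(gompertz, weeks, obs_w, p0=[30.0, 8.0, 0.3])
poly = np.polyfit(weeks, obs_w, deg=3)

print("AIC Gompertz:        ", gaussian_aic(obs_w, gompertz(weeks, *p_gomp), 3))
print("AIC cubic polynomial:", gaussian_aic(obs_w, np.polyval(poly, weeks), 4))
```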

  1. Modeling thermal sensation in a Mediterranean climate—a comparison of linear and ordinal models

    NASA Astrophysics Data System (ADS)

    Pantavou, Katerina; Lykoudis, Spyridon

    2014-08-01

    A simple thermo-physiological model of outdoor thermal sensation, adjusted with psychological factors, is developed with the aim of predicting thermal sensation in Mediterranean climates. Microclimatic measurements, together with interviews on personal and psychological conditions, were carried out in a square, a street canyon and a coastal location of the greater urban area of Athens, Greece. Multiple linear and ordinal regression were applied to estimate thermal sensation, making allowance for all the recorded parameters or for specific, empirically selected subsets, producing so-called extensive and empirical models, respectively. Meteorological, thermo-physiological and overall models - the latter considering psychological factors as well - were developed. Predictions improved when personal and psychological factors were taken into account, as compared to the meteorological models. The model based on ordinal regression reproduced extreme values of the thermal sensation vote more adequately than the linear regression model, while the empirical model produced satisfactory results relative to the extensive model. The effects of adaptation and expectation on the thermal sensation vote were introduced in the models by means of exposure time, season, and preferences related to air temperature and irradiation. The assessment of thermal sensation could be a useful criterion in decision making regarding public health, outdoor space planning and tourism.
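
    A minimal sketch of the linear-versus-ordinal comparison, using synthetic stand-ins for the measured predictors and a 5-point thermal sensation vote. It relies on statsmodels' OrderedModel (available in recent versions); the data-generating process is an assumption and is not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(5)
n = 400
air_temp = rng.uniform(10, 38, n)
humidity = rng.uniform(20, 80, n)
latent = 0.25 * air_temp - 0.02 * humidity + rng.normal(0, 1, n)
vote = pd.cut(latent, bins=[-np.inf, 2, 4, 6, 8, np.inf], labels=False)   # 0..4 ordinal vote

X = pd.DataFrame({"air_temp": air_temp, "humidity": humidity})

ols_res = sm.OLS(vote, sm.add_constant(X)).fit()
ord_res = OrderedModel(vote, X, distr="logit").fit(method="bfgs", disp=False)

ols_pred = np.clip(np.rint(ols_res.predict(sm.add_constant(X))), 0, 4)
ord_pred = np.asarray(ord_res.predict(X)).argmax(axis=1)    # most probable category
print("OLS exact-category hit rate:    ", np.mean(ols_pred == vote))
print("Ordinal exact-category hit rate:", np.mean(ord_pred == vote))
```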

  2. Natural Language Processing for Cohort Discovery in a Discharge Prediction Model for the Neonatal ICU.

    PubMed

    Temple, Michael W; Lehmann, Christoph U; Fabbri, Daniel

    2016-01-01

    Discharging patients from the Neonatal Intensive Care Unit (NICU) can be delayed for non-medical reasons including the procurement of home medical equipment, parental education, and the need for children's services. We previously created a model to identify patients that will be medically ready for discharge in the subsequent 2-10 days. In this study we use Natural Language Processing to improve upon that model and discern why the model performed poorly on certain patients. We retrospectively examined the text of the Assessment and Plan section from daily progress notes of 4,693 patients (103,206 patient-days) from the NICU of a large, academic children's hospital. A matrix was constructed using words from NICU notes (single words and bigrams) to train a supervised machine learning algorithm to determine the most important words differentiating poorly performing patients compared to well performing patients in our original discharge prediction model. NLP using a bag of words (BOW) analysis revealed several cohorts that performed poorly in our original model. These included patients with surgical diagnoses, pulmonary hypertension, retinopathy of prematurity, and psychosocial issues. The BOW approach aided in cohort discovery and will allow further refinement of our original discharge model prediction. Adequately identifying patients discharged home on g-tube feeds alone could improve the AUC of our original model by 0.02. Additionally, this approach identified social issues as a major cause for delayed discharge. A BOW analysis provides a method to improve and refine our NICU discharge prediction model and could potentially avoid over 900 (0.9%) hospital days.
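
    A minimal sketch of the bag-of-words step described above: vectorize note text into unigram and bigram counts, fit a classifier separating poorly-predicted from well-predicted patients, and inspect the most discriminative terms. The four toy notes and labels are invented placeholders; recent scikit-learn is assumed.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

notes = [
    "s/p bowel surgery, g-tube feeds, awaiting home equipment",
    "pulmonary hypertension on sildenafil, oxygen requirement",
    "feeding well, weight gain adequate, anticipate discharge",
    "room air, full oral feeds, parents completed education",
]
poorly_predicted = np.array([1, 1, 0, 0])   # 1 = original discharge model erred badly

vec = CountVectorizer(ngram_range=(1, 2), min_df=1)
X = vec.fit_transform(notes)
clf = LogisticRegression(max_iter=1000).fit(X, poorly_predicted)

terms = np.array(vec.get_feature_names_out())
top = np.argsort(clf.coef_[0])[-5:]         # terms most associated with poor prediction
print(terms[top])
```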

  3. Development and application of a multiroute physiologically based pharmacokinetic model for oxytetracycline in dogs and humans.

    PubMed

    Lin, Zhoumeng; Li, Mengjie; Gehring, Ronette; Riviere, Jim E

    2015-01-01

    Oxytetracycline (OTC) is a commonly used tetracycline antibiotic in veterinary and human medicine. To establish a quantitative model for predicting OTC plasma and tissue exposure, a permeability-limited multiroute physiologically based pharmacokinetic model was developed in dogs. The model was calibrated with plasma pharmacokinetic data in beagle dogs following single intravenous (5 mg/kg), oral (100 mg/kg), and intramuscular (20 mg/kg) administrations. The model predicted other available dog data well, including drug concentrations in the liver, kidney, and muscle after repeated exposure, and data in the mixed-breed dog. The model was extrapolated to humans, and the human model adequately simulated measured plasma OTC concentrations after intravenous (7.14 mg/kg) and oral exposures (6.67 mg/kg). The dog model was applied to predict 24-h OTC area-under-the-curve after three therapeutic treatments. Results were 27.75, 51.76, and 64.17 μg/mL*h in the plasma, and 120.93, 225.64, and 279.67 μg/mL*h in the kidney for oral (100 mg/kg), intravenous (10 mg/kg), and intramuscular (20 mg/kg) administrations, respectively. This model can be used to predict plasma and tissue concentrations to aid in designing optimal therapeutic regimens with OTC in veterinary, and potentially, human medicine; and as a foundation for scaling to other tetracycline antibiotics and to other animal species. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association. J Pharm Sci 104:233-243, 2015.
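
    To illustrate the kind of ODE system a PBPK model integrates, the sketch below solves a deliberately simplified, flow-limited two-compartment (plasma + kidney) model after an intravenous bolus and computes a 24-h AUC. It is not the published permeability-limited multiroute model; every parameter value is an invented placeholder.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

BW = 10.0                           # body weight, kg (illustrative dog)
dose = 10.0 * BW                    # mg, "intravenous 10 mg/kg"
V_plasma, V_kidney = 0.5, 0.05      # L, assumed volumes
Q_kidney = 6.0                      # L/h, assumed kidney blood flow
P_kidney = 3.0                      # kidney:plasma partition coefficient, assumed
CL_renal = 1.5                      # L/h, assumed clearance acting on kidney tissue

def pbpk(t, y):
    A_pl, A_kid = y                                   # drug amounts, mg
    C_pl, C_kid = A_pl / V_plasma, A_kid / V_kidney
    flux = Q_kidney * (C_pl - C_kid / P_kidney)       # flow-limited exchange
    return [-flux, flux - CL_renal * C_kid / P_kidney]

sol = solve_ivp(pbpk, (0.0, 24.0), [dose, 0.0], dense_output=True)
times = np.linspace(0.0, 24.0, 97)
C_plasma = sol.sol(times)[0] / V_plasma               # mg/L == ug/mL
auc_24h = trapezoid(C_plasma, times)
print(f"24-h plasma AUC (toy model): {auc_24h:.1f} ug/mL*h")
```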

  4. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-01-07

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a three-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). To encourage dissemination of the TRIPOD Statement, this article is freely accessible on the Annals of Internal Medicine Web site (www.annals.org) and will also be published in BJOG, British Journal of Cancer, British Journal of Surgery, BMC Medicine, The BMJ, Circulation, Diabetic Medicine, European Journal of Clinical Investigation, European Urology, and Journal of Clinical Epidemiology. The authors jointly hold the copyright of this article. An accompanying explanation and elaboration article is freely available only on www.annals.org; Annals of Internal Medicine holds copyright for that article. © BMJ Publishing Group Ltd 2014.

  5. Accounting for energy and protein reserve changes in predicting diet-allowable milk production in cattle.

    PubMed

    Tedeschi, L O; Seo, S; Fox, D G; Ruiz, R

    2006-12-01

    Current ration formulation systems used to formulate diets on farms and to evaluate experimental data estimate metabolizable energy (ME)-allowable and metabolizable protein (MP)-allowable milk production from the intake above animal requirements for maintenance, pregnancy, and growth. The changes in body reserves, measured via the body condition score (BCS), are not accounted for in predicting ME and MP balances. This paper presents 2 empirical models developed to adjust predicted diet-allowable milk production based on changes in BCS. Empirical reserves model 1 was based on the reserves model described by the 2001 National Research Council (NRC) Nutrient Requirements of Dairy Cattle, whereas empirical reserves model 2 was developed based on published data of body weight and composition changes in lactating dairy cows. A database containing 134 individually fed lactating dairy cows from 3 trials was used to evaluate these adjustments in milk prediction based on predicted first-limiting ME or MP by the 2001 Dairy NRC and Cornell Net Carbohydrate and Protein System models. The analysis of first-limiting ME or MP milk production without adjustments for BCS changes indicated that the predictions of both models were consistent (r2 of the regression between observed and model-predicted values of 0.90 and 0.85), had mean biases different from zero (12.3 and 5.34%), and had moderate but different root mean square errors of prediction (5.42 and 4.77 kg/d) for the 2001 NRC model and the Cornell Net Carbohydrate and Protein System model, respectively. The adjustment of first-limiting ME- or MP-allowable milk to BCS changes improved the precision and accuracy of both models. We further investigated 2 methods of adjustment; the first method used only the first and last BCS values, whereas the second method used the mean of weekly BCS values to adjust ME- and MP-allowable milk production. The adjustment to BCS changes based on first and last BCS values was more accurate than the adjustment based on the mean of all BCS values, suggesting that adjusting milk production for mean weekly variations in BCS added more variability to model-predicted milk production. We concluded that both models adequately predicted the first-limiting ME- or MP-allowable milk after adjusting for changes in BCS.
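
    The evaluation statistics quoted above can be reproduced from paired observed and predicted milk yields. The sketch below computes the mean bias (also as a percentage of the observed mean), the root mean square error of prediction and r2 on a small synthetic example; the values are illustrative, not the study's data.

```python
import numpy as np

observed = np.array([28.5, 31.2, 25.9, 34.0, 29.8, 27.3])    # kg/d, illustrative
predicted = np.array([30.1, 33.5, 27.0, 36.2, 31.0, 29.4])   # kg/d, illustrative

mean_bias = np.mean(predicted - observed)
mean_bias_pct = 100 * mean_bias / np.mean(observed)
rmsep = np.sqrt(np.mean((predicted - observed) ** 2))
r2 = np.corrcoef(observed, predicted)[0, 1] ** 2

print(f"mean bias = {mean_bias:.2f} kg/d ({mean_bias_pct:.1f}%), "
      f"RMSEP = {rmsep:.2f} kg/d, r2 = {r2:.2f}")
```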

  6. Micromechanical constitutive model for low-temperature constant strain rate deformation of limestones in the brittle and semi-brittle regime

    NASA Astrophysics Data System (ADS)

    Nicolas, A.; Fortin, J.; Guéguen, Y.

    2017-10-01

    Deformation and failure of rocks are important for a better understanding of many crustal geological phenomena such as faulting and compaction. In carbonate rocks among others, low-temperature deformation can either occur with dilatancy or compaction, having implications for porosity changes, failure and petrophysical properties. Hence, a thorough understanding of all the micromechanisms responsible for deformation is of great interest. In this study, a constitutive model for the low-temperature deformation of low-porosity (<20 per cent) carbonate rocks is derived from the micromechanisms identified in previous studies. The micromechanical model is based on (1) brittle crack propagation, (2) a plasticity law (interpreted in terms of dislocation glide without possibility to climb) for porous media with hardening and (3) crack nucleation due to dislocation pile-ups. The model predicts stress-strain relations and the evolution of damage during deformation. The model adequately predicts brittle behaviour at low confining pressures, which switches to a semi-brittle behaviour characterized by inelastic compaction followed by dilatancy at higher confining pressures. Model predictions are compared to experimental results from previous studies and are found to be in close agreement with experimental results. This suggests that microphysical phenomena responsible for the deformation are sufficiently well captured by the model although twinning, recovery and cataclasis are not considered. The porosity range of applicability and limits of the model are discussed.

  7. On INM's Use of Corrected Net Thrust for the Prediction of Jet Aircraft Noise

    NASA Technical Reports Server (NTRS)

    McAninch, Gerry L.; Shepherd, Kevin P.

    2011-01-01

    The Federal Aviation Administration's (FAA) Integrated Noise Model (INM) employs a prediction methodology that relies on corrected net thrust as the sole correlating parameter between aircraft and engine operating states and aircraft noise. Thus, aircraft noise measured for one set of atmospheric and aircraft operating conditions is assumed to be applicable to all other conditions as long as the corrected net thrust remains constant. This hypothesis is investigated under two primary assumptions: (1) the sound field generated by the aircraft is dominated by jet noise, and (2) the sound field generated by the jet flow is adequately described by Lighthill's theory of noise generated by turbulence.

  8. Development of a modeling approach to estimate indoor-to-outdoor sulfur ratios and predict indoor PM2.5 and black carbon concentrations for Eastern Massachusetts households

    PubMed Central

    Tang, Chia Hsi; Garshick, Eric; Grady, Stephanie; Coull, Brent; Schwartz, Joel; Koutrakis, Petros

    2018-01-01

    The effects of indoor air pollution on human health have drawn increasing attention among the scientific community as individuals spend most of their time indoors. However, indoor air sampling is labor-intensive and costly, which limits the ability to study the adverse health effects related to indoor air pollutants. To overcome this challenge, many researchers have attempted to predict indoor exposures based on outdoor pollutant concentrations, home characteristics, and weather parameters. Typically, these models require knowledge of the infiltration factor, which indicates the fraction of ambient particles that penetrates indoors. For estimating indoor fine particulate matter (PM2.5) exposure, a common approach is to use the indoor-to-outdoor sulfur ratio (Sindoor/Soutdoor) as a proxy of the infiltration factor. The objective of this study was to develop a robust model that estimates Sindoor/Soutdoor for individual households that can be incorporated into models to predict indoor PM2.5 and black carbon (BC) concentrations. Overall, our model adequately estimated Sindoor/Soutdoor with an out-of-sample by home-season R2 of 0.89. Estimated Sindoor/Soutdoor reflected behaviors that influence particle infiltration, including window opening, use of forced air heating, and air purifier. Sulfur ratio-adjusted models predicted indoor PM2.5 and BC with high precision, with out-of-sample R2 values of 0.79 and 0.76, respectively. PMID:29064481
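
    A minimal sketch of the infiltration-factor idea: treat Sindoor/Soutdoor as the fraction of ambient PM2.5 that penetrates indoors, use the infiltrated outdoor PM2.5 as the predictor in a regression for indoor PM2.5, and read the intercept as an indoor-source term. The data are synthetic and the model is far simpler than the study's home- and season-specific predictors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n_homes = 60
s_ratio = rng.uniform(0.3, 0.9, n_homes)          # Sindoor/Soutdoor per home
pm_out = rng.uniform(4, 20, n_homes)              # outdoor PM2.5, ug/m3
indoor_sources = rng.uniform(0, 6, n_homes)       # e.g. cooking contribution
pm_in = s_ratio * pm_out + indoor_sources + rng.normal(0, 1, n_homes)

X = (s_ratio * pm_out).reshape(-1, 1)             # infiltrated outdoor PM as the predictor
model = LinearRegression().fit(X, pm_in)
print(f"slope = {model.coef_[0]:.2f}, "
      f"intercept (indoor-source term) = {model.intercept_:.2f}, "
      f"R2 = {model.score(X, pm_in):.2f}")
```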

  9. Wrong, but useful: regional species distribution models may not be improved by range-wide data under biased sampling.

    PubMed

    El-Gabbas, Ahmed; Dormann, Carsten F

    2018-02-01

    Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate national presence-only data are for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt, and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset for describing the species' niche. We inspected the effect of incorporating predictions from global models as an additional predictor ("prior") in regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance, whichever prior was used, making the effect of priors less pronounced. Under biased and incomplete sampling, the use of global bat data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche. However, we still see great potential for global model predictions to guide future surveys and improve regional sampling in data-poor regions.

  10. DSM-5 section III personality traits and section II personality disorders in a Flemish community sample.

    PubMed

    Bastiaens, Tim; Smits, Dirk; De Hert, Marc; Vanwalleghem, Dominique; Claes, Laurence

    2016-04-30

    The Personality Inventory for DSM-5 (PID-5; Krueger et al., 2012) is a dimensional self-report questionnaire designed to measure personality pathology according to Criterion B of the DSM-5 Section III personality model. In the current edition of the DSM, this dimensional Section III personality model co-exists with the Section II categorical personality model derived from DSM-IV-TR. Therefore, investigation of the inter-relatedness of both models across populations and languages is warranted. In this study, we first examined the factor structure and reliability of the PID-5 in a Flemish community sample (N=509) by means of exploratory structural equation modeling and alpha coefficients. Next, we investigated the predictive ability of Section III personality traits in relation to Section II personality disorders through correlations and stepwise regression analyses. Results revealed a five-factor solution for the PID-5, with adequate reliability of the facet scales. The variance in Section II personality disorders could be predicted by their theoretically comprising Section III personality traits, but additional Section III personality traits augmented this prediction. Based on current results, we discuss the Section II personality disorder conceptualization and the Section III personality disorder operationalization. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Mid-frequency Band Dynamics of Large Space Structures

    NASA Technical Reports Server (NTRS)

    Coppolino, Robert N.; Adams, Douglas S.

    2004-01-01

    High and low intensity dynamic environments experienced by a spacecraft during launch and on-orbit operations, respectively, induce structural loads and motions, which are difficult to reliably predict. Structural dynamics in low- and mid-frequency bands are sensitive to component interface uncertainty and non-linearity as evidenced in laboratory testing and flight operations. Analytical tools for prediction of linear system response are not necessarily adequate for reliable prediction of mid-frequency band dynamics and analysis of measured laboratory and flight data. A new MATLAB toolbox, designed to address the key challenges of mid-frequency band dynamics, is introduced in this paper. Finite-element models of major subassemblies are defined following rational frequency-wavelength guidelines. For computational efficiency, these subassemblies are described as linear, component mode models. The complete structural system model is composed of component mode subassemblies and linear or non-linear joint descriptions. Computation and display of structural dynamic responses are accomplished employing well-established, stable numerical methods, modern signal processing procedures and descriptive graphical tools. Parametric sensitivity and Monte-Carlo based system identification tools are used to reconcile models with experimental data and investigate the effects of uncertainties. Models and dynamic responses are exported for employment in applications, such as detailed structural integrity and mechanical-optical-control performance analyses.

  12. Dynamics of a multimode semiconductor laser with optical feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koryukin, I. V.

    A new model of a multi-longitudinal-mode semiconductor laser with weak optical feedback is proposed. This model generalizes the well-known Tang-Statz-deMars equations, which are derived from first principles and adequately describe solid-state lasers, to a semiconductor active medium. Steady states of the model and the spectrum of relaxation oscillations are found, and the laser dynamics in the chaotic regime of low-frequency intensity fluctuations is investigated. It is established that the dynamic properties of the proposed model depend mainly on carrier diffusion, which controls mode-mode coupling in the active medium via the spread of spatial inversion gratings. The results obtained are compared with the predictions of previous semiphenomenological models, and the scope of applicability of these models is determined.

  13. Development of a mathematical model for the growth associated Polyhydroxybutyrate fermentation by Azohydromonas australica and its use for the design of fed-batch cultivation strategies.

    PubMed

    Gahlawat, Geeta; Srivastava, Ashok K

    2013-06-01

    In the present investigation, batch cultivation of Azohydromonas australica DSM 1124 was carried out in a bioreactor for growth-associated PHB production. The observed batch PHB production kinetics data were then used to develop a mathematical model which adequately described the substrate limitation and inhibition during the cultivation. A statistical validity test demonstrated that the proposed mathematical model predictions were significant at the 99% confidence level. The model was thereafter extrapolated to fed-batch mode to identify nutrient feeding regimes during bioreactor cultivation that improve PHB accumulation. The distinct capability of the mathematical model to predict highly dynamic fed-batch cultivation strategies was demonstrated by experimental implementation of two fed-batch cultivation strategies. A significantly high PHB concentration of 22.65 g/L and an overall PHB content of 76% were achieved during constant-feed-rate fed-batch cultivation, which is the highest PHB content reported so far using A. australica. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Monte Carlo modeling and meteor showers

    NASA Technical Reports Server (NTRS)

    Kulikova, N. V.

    1987-01-01

    Prediction of short-lived increases in the cosmic dust influx and in the lower-thermosphere concentration of atoms and ions of meteor origin, as well as determination of the frequency of micrometeor impacts on spacecraft, are of scientific and practical interest and require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers; this scheme is described. According to the scheme, the formation of ten well-known meteor streams was simulated and the possibility of a genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.

  15. Prediction of Phase Behavior of Spray-Dried Amorphous Solid Dispersions: Assessment of Thermodynamic Models, Standard Screening Methods and a Novel Atomization Screening Device with Regard to Prediction Accuracy

    PubMed Central

    Chavez, Pierre-François; Meeus, Joke; Robin, Florent; Schubert, Martin Alexander; Somville, Pascal

    2018-01-01

    The evaluation of drug–polymer miscibility in the early phase of drug development is essential to ensure successful amorphous solid dispersion (ASD) manufacturing. This work investigates the comparison of thermodynamic models, conventional experimental screening methods (solvent casting, quench cooling), and a novel atomization screening device based on their ability to predict drug–polymer miscibility, solid state properties (Tg value and width), and adequate polymer selection during the development of spray-dried amorphous solid dispersions (SDASDs). Binary ASDs of four drugs and seven polymers were produced at 20:80, 40:60, 60:40, and 80:20 (w/w). Samples were systematically analyzed using modulated differential scanning calorimetry (mDSC) and X-ray powder diffraction (XRPD). Principal component analysis (PCA) was used to qualitatively assess the predictability of screening methods with regards to SDASD development. Poor correlation was found between theoretical models and experimentally-obtained results. Additionally, the limited ability of usual screening methods to predict the miscibility of SDASDs did not guarantee the appropriate selection of lead excipient for the manufacturing of robust SDASDs. Contrary to standard approaches, our novel screening device allowed the selection of optimal polymer and drug loading and established insight into the final properties and performance of SDASDs at an early stage, therefore enabling the optimization of the scaled-up late-stage development. PMID:29518936

  16. Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction.

    PubMed

    Park, Seong Ho; Han, Kyunghwa

    2018-03-01

    The use of artificial intelligence in medicine is currently an issue of great interest, especially with regard to the diagnostic or predictive analysis of medical images. Adoption of an artificial intelligence tool in clinical practice requires careful confirmation of its clinical utility. Herein, the authors explain key methodology points involved in a clinical evaluation of artificial intelligence technology for use in medicine, especially high-dimensional or overparameterized diagnostic or predictive models in which artificial deep neural networks are used, mainly from the standpoints of clinical epidemiology and biostatistics. First, statistical methods for assessing the discrimination and calibration performances of a diagnostic or predictive model are summarized. Next, the effects of the disease manifestation spectrum and disease prevalence on the performance results are explained. This is followed by a discussion of the difference between evaluating performance on internal versus external datasets, the importance of using an adequate external dataset obtained from a well-defined clinical cohort to avoid overestimating clinical performance as a result of overfitting in high-dimensional or overparameterized classification models and spectrum bias, and the essentials for achieving a more robust clinical evaluation. Finally, the authors review the role of clinical trials and observational outcome studies for ultimate clinical verification of diagnostic or predictive artificial intelligence tools through patient outcomes, beyond performance metrics, and how to design such studies. © RSNA, 2018.
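
    As a minimal illustration of the discrimination and calibration assessments mentioned above, the sketch below computes the ROC AUC and a 10-bin reliability curve for synthetic predicted risks evaluated against synthetic outcomes standing in for an external cohort.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(2024)
y_true = rng.integers(0, 2, 500)                            # external-cohort outcomes
# Synthetic predicted risks, loosely correlated with the outcome.
y_prob = np.clip(0.3 * y_true + rng.beta(2, 3, 500), 0, 1)

auc = roc_auc_score(y_true, y_prob)                         # discrimination
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)   # calibration
print(f"AUC = {auc:.3f}")
print(np.column_stack([mean_pred, frac_pos]))               # predicted vs observed risk per bin
```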

  17. Toward improved prediction of the bedrock depth underneath hillslopes: Bayesian inference of the bottom-up control hypothesis using high-resolution topographic data

    NASA Astrophysics Data System (ADS)

    Gomes, Guilherme J. C.; Vrugt, Jasper A.; Vargas, Eurípedes A.

    2016-04-01

    The depth to bedrock controls a myriad of processes by influencing subsurface flow paths, erosion rates, soil moisture, and water uptake by plant roots. As hillslope interiors are very difficult and costly to illuminate and access, the topography of the bedrock surface is largely unknown. This essay is concerned with the prediction of spatial patterns in the depth to bedrock (DTB) using high-resolution topographic data, numerical modeling, and Bayesian analysis. Our DTB model builds on the bottom-up control on fresh-bedrock topography hypothesis of Rempe and Dietrich (2014) and includes a mass movement and bedrock-valley morphology term to extend the usefulness and general applicability of the model. We reconcile the DTB model with field observations using Bayesian analysis with the DREAM algorithm. We investigate explicitly the benefits of using spatially distributed parameter values to account implicitly, and in a relatively simple way, for rock mass heterogeneities that are very difficult, if not impossible, to characterize adequately in the field. We illustrate our method using an artificial data set of bedrock depth observations and then evaluate our DTB model with real-world data collected at the Papagaio river basin in Rio de Janeiro, Brazil. Our results demonstrate that the DTB model accurately predicts the observed bedrock depth data. The posterior mean DTB simulation is shown to be in good agreement with the measured data. The posterior prediction uncertainty of the DTB model can be propagated forward through hydromechanical models to derive probabilistic estimates of factors of safety.

  18. CRCM + BATS-R-US two-way coupling

    NASA Astrophysics Data System (ADS)

    Glocer, A.; Fok, M.; Meng, X.; Toth, G.; Buzulukova, N.; Chen, S.; Lin, K.

    2013-04-01

    We present the coupling methodology and validation of a fully coupled inner and global magnetosphere code using the infrastructure provided by the Space Weather Modeling Framework (SWMF). In this model, the Comprehensive Ring Current Model (CRCM) represents the inner magnetosphere, while the Block-Adaptive-Tree Solar-Wind Roe-Type Upwind Scheme (BATS-R-US) represents the global magnetosphere. The combined model is a global magnetospheric code with a realistic ring current and consistent electric and magnetic fields. The computational performance of the model was improved to surpass real-time execution by the use of the Message Passing Interface (MPI) to parallelize the CRCM. Initial simulations under steady driving found that the coupled model resulted in a higher pressure in the inner magnetosphere and an inflated closed field-line region as compared to simulations without inner-magnetosphere coupling. Our validation effort was split into two studies. The first study examined the ability of the model to reproduce Dst for a range of events from the Geospace Environment Modeling (GEM) Dst Challenge. It also investigated the possibility of a baseline shift and compared two approaches to calculating Dst from the model. We found that the model did a reasonable job predicting Dst and Sym-H according to our two metrics of prediction efficiency and predicted yield. The second study focused on the specific case of the 22 July 2009 moderate geomagnetic storm. In this study, we directly compare model predictions and observations for Dst, THEMIS energy spectrograms, TWINS ENA images, and GOES 11 and 12 magnetometer data. The model did an adequate job reproducing trends in the data. Moreover, we found that composition can have a large effect on the result.

  19. Comparison of modeling methods to predict the spatial distribution of deep-sea coral and sponge in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.

    2017-08-01

    Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance as measured by the area under the receiver operating characteristic curve (AUC). The models also performed well on the test data for presence and absence, with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models) and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (approximately 50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where the data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated and non-normal distributions, such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
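
    The sketch below illustrates, on synthetic presence/absence data, the kind of comparison described above: several model types are fitted, each is scored with AUC, and an ensemble is formed by averaging their predicted probabilities. A GAM is not readily available in scikit-learn and is omitted; all data and settings are illustrative assumptions, not the study's.

```python
# Minimal sketch: compare model types on presence/absence data and average them into an ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GLM (logistic)": LogisticRegression(max_iter=1000),
    "Boosted trees": GradientBoostingClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}

probs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    probs[name] = model.predict_proba(X_te)[:, 1]
    print(f"{name:15s} AUC = {roc_auc_score(y_te, probs[name]):.3f}")

# Ensemble: unweighted mean of the per-model predicted probabilities.
ensemble = np.mean(list(probs.values()), axis=0)
print(f"{'Ensemble':15s} AUC = {roc_auc_score(y_te, ensemble):.3f}")
```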

  20. How important is embeddedness in predicting Australian speech-language pathologists' intentions to leave their jobs and the profession?

    PubMed

    Heritage, Brody; Quail, Michelle; Cocks, Naomi

    2018-03-05

    This study explored the predictors of the outcomes of turnover and occupation attrition intentions for speech-language pathologists. The researchers examined the mediating effects of job satisfaction and strain on the relationship between stress and the latter outcomes. Additionally, the researchers examined the importance of embeddedness in predicting turnover intentions after accounting for stress, strain and job satisfaction. An online questionnaire was used to explore turnover and attrition intentions in 293 Australian speech-language pathologists. Job satisfaction contributed to a significant indirect effect on the stress and turnover intention relationship; however, strain did not. There was a significant direct effect between stress and turnover intention after accounting for covariates. Embeddedness and the perceived availability of alternative jobs were also found to be significant predictors of turnover intentions. The mediating model used to predict turnover intentions also predicted occupation attrition intentions. The effect of stress on occupation attrition intentions was indirect in nature, with the direct effect negated by mediating variables. Qualitative data provided complementary evidence to the quantitative model. The findings indicate that the proposed parsimonious model adequately captures predictors of speech-language pathologists' turnover and occupation attrition intentions. Workplaces and the profession may wish to consider these retention factors.

  1. Application of a prediction model for work-related sensitisation in bakery workers.

    PubMed

    Meijer, E; Suarthana, E; Rooijackers, J; Grobbee, D E; Jacobs, J H; Meijster, T; de Monchy, J G R; van Otterloo, E; van Rooy, F G B G J; Spithoven, J J G; Zaat, V A C; Heederik, D J J

    2010-10-01

    Identification of work-related allergy, particularly work-related asthma, in a (nationwide) medical surveillance programme among bakery workers requires an effective and efficient strategy. Bakers at high risk of having work-related allergy were identified by use of a questionnaire-based prediction model for work-related sensitisation. The questionnaire was applied among 5,325 participating bakers. Sequential diagnostic investigations were performed only in those with an elevated risk. Performance of the model was evaluated in 674 randomly selected bakers who participated in the medical surveillance programme and the validation study. Clinical investigations were evaluated in the first 73 bakers referred at high risk. Overall, 90% of bakers at risk of having asthma could be identified. Individuals at low risk showed 0.3-3.8% work-related respiratory symptoms, medication use or absenteeism. Predicting flour sensitisation by a simple questionnaire and score chart seems more effective at detecting work-related allergy than serology testing followed by clinical investigation in all immunoglobulin E class II-positive individuals. This prediction-based stratification procedure appeared effective in detecting work-related allergy among bakers and can accurately be used for periodic examination, especially in small enterprises where delivery of adequate care is difficult. This approach may contribute to cost reduction.

  2. Predicting critical micelle concentration and micelle molecular weight of polysorbate 80 using compendial methods.

    PubMed

    Braun, Alexandra C; Ilko, David; Merget, Benjamin; Gieseler, Henning; Germershaus, Oliver; Holzgrabe, Ulrike; Meinel, Lorenz

    2015-08-01

    This manuscript addresses the capability of compendial methods to control polysorbate 80 (PS80) functionality. Based on the analysis of sixteen batches, functionality related characteristics (FRCs) including critical micelle concentration (CMC), cloud point, hydrophilic-lipophilic balance (HLB) value and micelle molecular weight were correlated to chemical composition, including fatty acids before and after hydrolysis, content of non-esterified polyethylene glycols and sorbitan polyethoxylates, sorbitan- and isosorbide polyethoxylate fatty acid mono- and diesters, polyoxyethylene diesters, and peroxide values. Batches from some suppliers had high variability in FRCs, questioning the ability of the current monograph to control these. Interestingly, the combined use of the input parameters oleic acid content and peroxide value - both of which are monographed methods - resulted in a model adequately predicting CMC. Confining the batches to those complying with specifications for peroxide value proved oleic acid content alone to be predictive of CMC. Similarly, a four-parameter model based on chemical analyses alone was instrumental in predicting the molecular weight of PS80 micelles. Improved models based on analytical outcomes from fingerprint analyses are also presented. A road map for controlling PS80 batches with respect to FRCs, based on chemical analyses alone, is provided for the formulator. Copyright © 2014 Elsevier B.V. All rights reserved.
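
    The following minimal sketch shows the general idea of a two-variable linear model for CMC from oleic acid content and peroxide value; the numbers are made up for illustration and are not the batch data analyzed in the study.

```python
# Minimal sketch (invented batch values) of a two-variable linear model for CMC.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical batch measurements: [oleic acid content (%), peroxide value].
X = np.array([[70.1, 1.2], [72.5, 0.8], [68.3, 2.1], [74.0, 0.5],
              [69.2, 1.8], [71.7, 1.0], [73.1, 0.7], [67.9, 2.4]])
cmc = np.array([13.5, 12.1, 15.8, 11.4, 15.0, 12.8, 11.9, 16.3])  # e.g. mg/L

model = LinearRegression().fit(X, cmc)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2 on the fitted batches:", round(model.score(X, cmc), 3))

# Predict CMC for a new batch with known composition.
print("predicted CMC:", model.predict([[71.0, 1.1]])[0])
```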

  3. Non-destructive prediction of pigment content in lettuce based on visible-NIR spectroscopy.

    PubMed

    Steidle Neto, Antonio José; Moura, Lorena de Oliveira; Lopes, Daniela de Carvalho; Carlos, Lanamar de Almeida; Martins, Luma Moreira; Ferraz, Leila de Castro Louback

    2017-05-01

    Lettuce (Lactuca sativa L.) is one of the most important salad vegetables in the world, with a number of head shapes, leaf types and colors. The lettuce pigments play important physiological functions, such as photosynthetic processes and light stress defense, but they also benefit human health because of their antioxidant action and anticarcinogenic properties. In this study three lettuce cultivars were grown under different farming systems, and partial least squares models were built to predict the leaf chlorophyll, carotenoid and anthocyanin content. The three proposed models resulted in high coefficients of determination and variable importance in projection (VIP) values, as well as low estimation errors for the calibration and external validation datasets. These results confirmed that it is possible to accurately predict chlorophyll, carotenoid and anthocyanin content of green and red lettuces, grown in different farming systems, based on the spectral reflectance from 500 to 1000 nm. The proposed models were adequate for estimating lettuce pigments in a quick and non-destructive way, representing an alternative to conventional measurement methods. Prediction accuracies were improved by applying the detrending, smoothing and first derivative pretreatments to the original spectral signatures prior to estimating lettuce chlorophyll, carotenoid and anthocyanin content, respectively. © 2016 Society of Chemical Industry.
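
    A minimal sketch of the modeling approach described above, using synthetic visible-NIR spectra: a first-derivative pretreatment followed by a partial least squares regression against a pigment concentration. The wavelength grid, number of components, and all data are assumptions for illustration only.

```python
# Minimal sketch: PLS regression of a pigment concentration on first-derivative spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
wavelengths = np.linspace(500, 1000, 200)            # nm
pigment = rng.uniform(5, 50, size=120)               # e.g. mg per 100 g
# Synthetic reflectance: a pigment-dependent absorption feature plus noise.
spectra = (0.8
           - 0.004 * pigment[:, None] * np.exp(-((wavelengths - 670) / 40) ** 2)
           + rng.normal(0, 0.005, size=(120, 200)))

X = np.gradient(spectra, axis=1)                      # first-derivative pretreatment
X_tr, X_te, y_tr, y_te = train_test_split(X, pigment, test_size=0.3, random_state=1)

pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print("external-validation R^2:", round(r2_score(y_te, pls.predict(X_te).ravel()), 3))
```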

  4. Data-intensive drug development in the information age: applications of Systems Biology/Pharmacology/Toxicology.

    PubMed

    Kiyosawa, Naoki; Manabe, Sunao

    2016-01-01

    Pharmaceutical companies continuously face challenges to deliver new drugs with true medical value. R&D productivity of drug development projects depends on 1) the value of the drug concept and 2) data and in-depth knowledge that are used rationally to evaluate the drug concept's validity. A model-based data-intensive drug development approach is a key competitive factor used by innovative pharmaceutical companies to reduce information bias and rationally demonstrate the value of drug concepts. Owing to the accumulation of publicly available biomedical information, our understanding of the pathophysiological mechanisms of diseases has developed considerably; it is the basis for identifying the right drug target and creating a drug concept with true medical value. Our understanding of the pathophysiological mechanisms of disease animal models can also be improved; it can thus support rational extrapolation of animal experiment results to clinical settings. The Systems Biology approach, which leverages publicly available transcriptome data, is useful for these purposes. Furthermore, applying Systems Pharmacology enables dynamic simulation of drug responses, from which key research questions to be addressed in the subsequent studies can be adequately informed. Application of Systems Biology/Pharmacology to toxicology research, namely Systems Toxicology, should considerably improve the predictability of drug-induced toxicities in clinical situations that are difficult to predict from conventional preclinical toxicology studies. Systems Biology/Pharmacology/Toxicology models can be continuously improved using iterative learn-confirm processes throughout preclinical and clinical drug discovery and development processes. Successful implementation of data-intensive drug development approaches requires cultivation of an adequate R&D culture to appreciate this approach.

  5. Regional Earthquake Likelihood Models: A realm on shaky grounds?

    NASA Astrophysics Data System (ADS)

    Kossobokov, V.

    2005-12-01

    Seismology is juvenile and its appropriate statistical tools to date may have a "medieval flavor" for those who hurry to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic," earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivists' viewpoint on probability, we cannot claim "probabilities" are adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure"), and, therefore, cannot objectively quantify the performance of a forecast/prediction method. Likelihood scoring is one of the delicate tools of statistics, which could be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as other statistical methods, in practice. The flaw could be avoided by an accurate verification of generic probability models on the empirical data. It is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor allows a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for `tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in automatic calculations. As a result, since the date of publication in Nature, the United States Geological Survey website has delivered to the public, emergency planners and the media a forecast product that is based on wrong assumptions violating the best-documented earthquake statistics in California, whose accuracy was not investigated, and whose forecasts were not tested in a rigorous way.

  6. Analysis of model development strategies: predicting ventral hernia recurrence.

    PubMed

    Holihan, Julie L; Li, Linda T; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, J Scott; Liang, Mike K

    2016-11-01

    There have been many attempts to identify variables associated with ventral hernia recurrence; however, it is unclear which statistical modeling approach results in models with greatest internal and external validity. We aim to assess the predictive accuracy of models developed using five common variable selection strategies to determine variables associated with hernia recurrence. Two multicenter ventral hernia databases were used. Database 1 was randomly split into "development" and "internal validation" cohorts. Database 2 was designated "external validation". The dependent variable for model development was hernia recurrence. Five variable selection strategies were used: (1) "clinical"-variables considered clinically relevant, (2) "selective stepwise"-all variables with a P value <0.20 were assessed in a step-backward model, (3) "liberal stepwise"-all variables were included and step-backward regression was performed, (4) "restrictive internal resampling," and (5) "liberal internal resampling." Variables were included with P < 0.05 for the Restrictive model and P < 0.10 for the Liberal model. A time-to-event analysis using Cox regression was performed using these strategies. The predictive accuracy of the developed models was tested on the internal and external validation cohorts using Harrell's C-statistic where C > 0.70 was considered "reasonable". The recurrence rate was 32.9% (n = 173/526; median/range follow-up, 20/1-58 mo) for the development cohort, 36.0% (n = 95/264, median/range follow-up 20/1-61 mo) for the internal validation cohort, and 12.7% (n = 155/1224, median/range follow-up 9/1-50 mo) for the external validation cohort. Internal validation demonstrated reasonable predictive accuracy (C-statistics = 0.772, 0.760, 0.767, 0.757, 0.763), while on external validation, predictive accuracy dipped precipitously (C-statistic = 0.561, 0.557, 0.562, 0.553, 0.560). Predictive accuracy was equally adequate on internal validation among models; however, on external validation, all five models failed to demonstrate utility. Future studies should report multiple variable selection techniques and demonstrate predictive accuracy on external data sets for model validation. Copyright © 2016 Elsevier Inc. All rights reserved.
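
    As a sketch of the validation strategy described above (fit a Cox model on a development cohort, then check Harrell's C-statistic on a held-out cohort), the code below uses the lifelines library on synthetic data; the covariates, follow-up times, and event rates are invented for illustration and do not reproduce the study's variable selection strategies.

```python
# Minimal sketch: Cox regression on a development cohort, C-statistic on a validation cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "bmi": rng.normal(30, 5, n),
    "wound_class": rng.integers(0, 2, n),
    "mesh_used": rng.integers(0, 2, n),
})
# Synthetic time-to-recurrence (months) with roughly 60% observed events.
risk = 0.04 * df["bmi"] + 0.5 * df["wound_class"] - 0.6 * df["mesh_used"]
df["time"] = rng.exponential(20 / np.exp(risk))
df["recurrence"] = (rng.uniform(size=n) < 0.6).astype(int)

dev, val = df.iloc[:400], df.iloc[400:]
cph = CoxPHFitter().fit(dev, duration_col="time", event_col="recurrence")

# Higher partial hazard means shorter expected time, hence the sign flip.
c_val = concordance_index(val["time"], -cph.predict_partial_hazard(val), val["recurrence"])
print("Harrell's C on the validation cohort:", round(c_val, 3))
```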

  7. The NASA Space Radiation Health Program

    NASA Technical Reports Server (NTRS)

    Schimmerling, W.; Sulzman, F. M.

    1994-01-01

    The NASA Space Radiation Health Program is a part of the Life Sciences Division in the Office of Space Science and Applications (OSSA). The goal of the Space Radiation Health Program is development of scientific bases for assuring adequate radiation protection in space. A proposed research program will determine long-term health risks from exposure to cosmic rays and other radiation. Ground-based animal models will be used to predict risk of exposures at varying levels from various sources and the safe levels for manned space flight.

  8. An Evaluation of the Effective Block Approach Using P-3C and F-111 Crack Growth Data

    DTIC Science & Technology

    2008-09-01

    ... the end of 2006, where his research interests included modelling of fatigue crack growth, infrared NDT technologies and fibre optic corrosion ... (2006). It was claimed that the growth of these cracks in structures made of 7050 aluminium alloy could not be adequately predicted using classical ... the crack growth behaviour of 7050 aluminium alloy subjected to the service load of the F/A-18 fighter planes. To make the matter worse, the ...

  9. Prediction of recombinant protein overexpression in Escherichia coli using a machine learning based model (RPOLP).

    PubMed

    Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip

    2015-11-01

    Recombinant protein overexpression, an important biotechnological process, is governed by complex biological rules that are mostly unknown; it therefore calls for an intelligent algorithm that avoids resource-intensive, lab-based trial-and-error experiments for determining the expression level of a recombinant protein. The purpose of this study is to propose, for the first time in the literature, a predictive model to estimate the level of recombinant protein overexpression using a machine learning approach based on the sequence, expression vector, and expression host. The expression host was confined to Escherichia coli, which is the most popular bacterial host for overexpressing recombinant proteins. To provide a handle on the problem, the overexpression level was categorized as low, medium and high. A set of features likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from the related literature. Then, a representative subset of the features generated in the previous step was determined using feature selection techniques. Finally, a predictive model was developed using a random forest classifier, which was able to adequately classify the multi-class, imbalanced, small dataset constructed. The results showed that the predictive model provided a promising accuracy of 80% on average in estimating the overexpression level of a recombinant protein. Copyright © 2015 Elsevier Ltd. All rights reserved.
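
    A minimal sketch, on synthetic data, of the two modeling steps described above: selecting a representative feature subset and then classifying a small, imbalanced, three-class dataset (standing in for low/medium/high overexpression) with a random forest. Feature counts, class weights, and dataset size are assumptions, not the study's settings.

```python
# Minimal sketch: feature selection + random forest on a small, imbalanced, multi-class dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Small, imbalanced, 3-class dataset standing in for low/medium/high overexpression.
X, y = make_classification(n_samples=180, n_features=40, n_informative=10,
                           n_classes=3, weights=[0.6, 0.25, 0.15],
                           n_clusters_per_class=1, random_state=0)

model = make_pipeline(
    SelectKBest(f_classif, k=15),                       # keep a representative feature subset
    RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0),
)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", scores.mean().round(3))
```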

  10. Flight motor set 360L001 (STS-26R). Volume 1: System overview, revision A

    NASA Technical Reports Server (NTRS)

    Garecht, Diane M.

    1990-01-01

    The NASA space shuttle flight STS-26R, launched at 11:37.00.009 am, EDT on 29 Sep. 1988, used the redesigned solid rocket motors (RSRM) 360L001A and 360L001B. Evaluation of the ground environment instrumentation (GEI) data recorded prior to flight showed no launch commit criteria violations; that the field joint heater and aft skirt thermal conditioning systems performed adequately; and that the GEI data showed good agreement with thermal model predictions. Evaluation of the developmental flight instrumentation (DFI) revealed excellent agreement with both the predicted and required ballistic specifications. All parameters were well within the CEI specification requirements, including propellant burn rates, specific impulse values, and thrust imbalance. Recorded strain values also indicated satisfactory radial growth and stress levels, as well as verification of adequate safety factors. Postflight inspection of the insulation, seals, case, and nozzles showed overall excellent performance. Some thermal DFI protective cork was missing, and inoperative field joint vent valves on the thermal protection cork allowed water entry into the field joints upon splashdown. Evaluation of these anomalies, as well as complete evaluation of all Redesigned Solid Rocket Motor components, is contained in this report.

  11. Flight motor set 360L001 (STS-26R), volume 1

    NASA Technical Reports Server (NTRS)

    Ricks, Glen A.

    1988-01-01

    The NASA space shuttle flight STS-26R, launched at 11:37.00.009 a.m. EDT on 29 Sep. 1988, used the redesigned solid rocket motors (RSRM) 360L001A and 360L001B. Evaluation of the ground environment instrumentation (GEI) data recorded prior to flight showed: (1) no launch commit criteria violations, (2) that the field joint heater and aft skirt thermal conditioning systems performed adequately, and (3) that the GEI data showed good agreement with thermal model predictions. Evaluation of the developmental flight instrumentation (DFI) revealed excellent agreement with both the predicted and required ballistic specifications. All parameters were well within the CEI specification requirements including propellant burn rates, specific impulse values, and thrust imbalance. Recorded strain values also indicated satisfactory radial growth and stress levels, as well as verification of adequate safety factors. Postflight inspection of the insulation, seals, case, and nozzles showed overall excellent performance. Some thermal DFI protective cork was missing, and inoperative field joint vent valves on the thermal protection cork allowed water entry into the field joints upon splashdown. Evaluation of these anomalies, as well as complete evaluation of all RSRM components, is presented.

  12. Coupled rotor/airframe vibration analysis

    NASA Technical Reports Server (NTRS)

    Sopher, R.; Studwell, R. E.; Cassarino, S.; Kottapalli, S. B. R.

    1982-01-01

    A coupled rotor/airframe vibration analysis developed as a design tool for predicting helicopter vibrations and a research tool to quantify the effects of structural properties, aerodynamic interactions, and vibration reduction devices on vehicle vibration levels is described. The analysis consists of a base program utilizing an impedance matching technique to represent the coupled rotor/airframe dynamics of the system supported by inputs from several external programs supplying sophisticated rotor and airframe aerodynamic and structural dynamic representation. The theoretical background, computer program capabilities and limited correlation results are presented in this report. Correlation results using scale model wind tunnel results show that the analysis can adequately predict trends of vibration variations with airspeed and higher harmonic control effects. Predictions of absolute values of vibration levels were found to be very sensitive to modal characteristics and results were not representative of measured values.

  13. Development of Decision Support Formulas for the Prediction of Bladder Outlet Obstruction and Prostatic Surgery in Patients With Lower Urinary Tract Symptom/Benign Prostatic Hyperplasia: Part II, External Validation and Usability Testing of a Smartphone App.

    PubMed

    Choo, Min Soo; Jeong, Seong Jin; Cho, Sung Yong; Yoo, Changwon; Jeong, Chang Wook; Ku, Ja Hyeon; Oh, Seung-June

    2017-04-01

    We aimed to externally validate the prediction model we developed for having bladder outlet obstruction (BOO) and requiring prostatic surgery using 2 independent data sets from tertiary referral centers, and also aimed to validate a mobile app for using this model through usability testing. Formulas and nomograms predicting whether a subject has BOO and needs prostatic surgery were validated with an external validation cohort from Seoul National University Bundang Hospital and Seoul Metropolitan Government-Seoul National University Boramae Medical Center between January 2004 and April 2015. A smartphone-based app was developed, and 8 young urologists were enrolled for usability testing to identify any human factor issues of the app. A total of 642 patients were included in the external validation cohort. No significant differences were found in the baseline characteristics of major parameters between the original (n=1,179) and the external validation cohort, except for the maximal flow rate. Predictions of requiring prostatic surgery in the validation cohort showed a sensitivity of 80.6%, a specificity of 73.2%, a positive predictive value of 49.7%, a negative predictive value of 92.0%, and an area under the receiver operating characteristic curve of 0.84. The calibration plot indicated that the predictions have good correspondence. The decision curve also showed a high net benefit. Similar evaluation results using the external validation cohort were seen in the predictions of having BOO. Overall results of the usability test demonstrated that the app was user-friendly with no major human factor issues. External validation of this newly developed prediction model demonstrated a moderate level of discrimination, adequate calibration, and high net benefit gains for predicting both having BOO and requiring prostatic surgery. The smartphone app implementing the prediction model was also user-friendly, with no major human factor issues.
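
    The sketch below simply recomputes the kind of validation metrics reported above (sensitivity, specificity, PPV, NPV) from a confusion matrix; the labels and predictions are hypothetical and are not the study data.

```python
# Minimal sketch: diagnostic metrics from a confusion matrix on hypothetical validation data.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
needs_surgery = rng.integers(0, 2, size=642)                      # hypothetical ground truth
predicted = np.where(rng.uniform(size=642) < 0.8, needs_surgery,  # mostly correct calls...
                     1 - needs_surgery)                           # ...with some errors

tn, fp, fn, tp = confusion_matrix(needs_surgery, predicted).ravel()
print("sensitivity:", round(tp / (tp + fn), 3))
print("specificity:", round(tn / (tn + fp), 3))
print("PPV:", round(tp / (tp + fp), 3))
print("NPV:", round(tn / (tn + fn), 3))
```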

  14. Refining Prediction in Treatment-Resistant Depression: Results of Machine Learning Analyses in the TRD III Sample.

    PubMed

    Kautzky, Alexander; Dold, Markus; Bartova, Lucie; Spies, Marie; Vanicek, Thomas; Souery, Daniel; Montgomery, Stuart; Mendlewicz, Julien; Zohar, Joseph; Fabbri, Chiara; Serretti, Alessandro; Lanzenberger, Rupert; Kasper, Siegfried

    The study objective was to generate a prediction model for treatment-resistant depression (TRD) using machine learning featuring a large set of 47 clinical and sociodemographic predictors of treatment outcome. 552 Patients diagnosed with major depressive disorder (MDD) according to DSM-IV criteria were enrolled between 2011 and 2016. TRD was defined as failure to reach response to antidepressant treatment, characterized by a Montgomery-Asberg Depression Rating Scale (MADRS) score below 22 after at least 2 antidepressant trials of adequate length and dosage were administered. RandomForest (RF) was used for predicting treatment outcome phenotypes in a 10-fold cross-validation. The full model with 47 predictors yielded an accuracy of 75.0%. When the number of predictors was reduced to 15, accuracies between 67.6% and 71.0% were attained for different test sets. The most informative predictors of treatment outcome were baseline MADRS score for the current episode; impairment of family, social, and work life; the timespan between first and last depressive episode; severity; suicidal risk; age; body mass index; and the number of lifetime depressive episodes as well as lifetime duration of hospitalization. With the application of the machine learning algorithm RF, an efficient prediction model with an accuracy of 75.0% for forecasting treatment outcome could be generated, thus surpassing the predictive capabilities of clinical evaluation. We also supply a simplified algorithm of 15 easily collected clinical and sociodemographic predictors that can be obtained within approximately 10 minutes, which reached an accuracy of 70.6%. Thus, we are confident that our model will be validated within other samples to advance an accurate prediction model fit for clinical usage in TRD. © Copyright 2017 Physicians Postgraduate Press, Inc.

  15. Impact of a cost constraint on nutritionally adequate food choices for French women: an analysis by linear programming.

    PubMed

    Darmon, Nicole; Ferguson, Elaine L; Briend, André

    2006-01-01

    To predict, for French women, the impact of a cost constraint on the food choices required to provide a nutritionally adequate diet. Isocaloric daily diets fulfilling both palatability and nutritional constraints were modeled in linear programming, using different cost constraint levels. For each modeled diet, total departure from an observed French population's average food group pattern ("mean observed diet") was minimized. To achieve the nutritional recommendations without a cost constraint, the modeled diet provided more energy from fish, fresh fruits and green vegetables and less energy from animal fats and cheese than the "mean observed diet." Introducing and strengthening a cost constraint decreased the energy provided by meat, fresh vegetables, fresh fruits, vegetable fat, and yogurts and increased the energy from processed meat, eggs, offal, and milk. For the lowest cost diet (ie, 3.18 euros/d), marked changes from the "mean observed diet" were required, including a marked reduction in the amount of energy from fresh fruits (-85%) and green vegetables (-70%), and an increase in the amount of energy from nuts, dried fruits, roots, legumes, and fruit juices. Nutrition education for low-income French women must emphasize these affordable food choices.
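
    A toy sketch of the linear-programming formulation described above: choose daily food amounts that satisfy nutrient constraints at or below a cost ceiling while minimizing total departure from an observed diet. The food list, nutrient contents, and constraint levels are illustrative assumptions; only the 3.18 euros/d cost ceiling is taken from the abstract.

```python
# Minimal sketch: diet optimization by linear programming with a cost constraint.
import numpy as np
from scipy.optimize import linprog

# Per 100-g portion: energy (kcal), protein (g), cost (EUR); observed daily portions.
foods = ["fruit", "vegetables", "meat", "dairy", "grains"]
energy = np.array([50.0, 30.0, 200.0, 90.0, 350.0])
protein = np.array([1.0, 2.0, 25.0, 6.0, 10.0])
cost = np.array([0.40, 0.35, 1.20, 0.50, 0.20])
observed = np.array([2.0, 2.5, 1.0, 2.0, 3.0])

# Variables: x (portions) and d (departure), minimize sum(d) with d >= |x - observed|.
n = len(foods)
I = np.eye(n)
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.vstack([
    np.hstack([I, -I]),                    # x - d <= observed
    np.hstack([-I, -I]),                   # -x - d <= -observed
    np.hstack([cost, np.zeros(n)]),        # total cost <= 3.18 EUR/day
    np.hstack([-energy, np.zeros(n)]),     # energy >= 2000 kcal/day
    np.hstack([-protein, np.zeros(n)]),    # protein >= 60 g/day
])
b_ub = np.concatenate([observed, -observed, [3.18], [-2000.0], [-60.0]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
print("feasible:", res.success)
print(dict(zip(foods, np.round(res.x[:n], 2))))
```

    The split-variable trick (one pair of inequalities per food) is one standard way to keep an absolute-value departure objective linear; the published model's palatability and full nutrient constraints would add further rows of the same form.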

  16. Quadrupedal locomotor simulation: producing more realistic gaits using dual-objective optimization

    PubMed Central

    Hirasaki, Eishi

    2018-01-01

    In evolutionary biomechanics it is often considered that gaits should evolve to minimize the energetic cost of travelling a given distance. In gait simulation this goal often leads to convincing gait generation. However, as the musculoskeletal models used get increasingly sophisticated, it becomes apparent that such a single goal can lead to extremely unrealistic gait patterns. In this paper, we explore the effects of requiring adequate lateral stability and show how this increases both energetic cost and the realism of the generated walking gait in a high biofidelity chimpanzee musculoskeletal model. We also explore the effects of changing the footfall sequences in the simulation so it mimics both the diagonal sequence walking gaits that primates typically use and also the lateral sequence walking gaits that are much more widespread among mammals. It is apparent that adding a lateral stability criterion has an important effect on the footfall phase relationship, suggesting that lateral stability may be one of the key drivers behind the observed footfall sequences in quadrupedal gaits. The observation that single optimization goals are no longer adequate for generating gait in current models has important implications for the use of biomimetic virtual robots to predict the locomotor patterns in fossil animals. PMID:29657790

  17. A model for foam formation, stability, and breakdown in glass-melting furnaces.

    PubMed

    van der Schaaf, John; Beerkens, Ruud G C

    2006-03-01

    A dynamic model for describing the build-up and breakdown of a glass-melt foam is presented. The foam height is determined by the gas flux to the glass-melt surface and the drainage rate of the liquid lamellae between the gas bubbles. The drainage rate is determined by the average gas bubble radius and the physical properties of the glass melt: density, viscosity, surface tension, and interfacial mobility. Neither the assumption of a fully mobile nor that of a fully immobile glass-melt interface describes the observed foam formation on glass melts adequately. The glass-melt interface appears partially mobile due to the presence of surface active species, e.g., sodium sulfate and silanol groups. The partial mobility can be represented by a single, glass-melt-composition-specific parameter psi. The value of psi can be estimated from gas bubble lifetime experiments under furnace conditions. With this parameter, laboratory experiments of foam build-up and breakdown in a glass melt are adequately described, qualitatively and quantitatively, by a set of ordinary differential equations. An approximate explicit relationship for the prediction of the steady-state foam height is derived from the fundamental model.
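
    The toy integration below illustrates the general structure of such a foam balance (growth from the gas flux to the melt surface, decay from drainage-controlled lamella rupture, with a single mobility-like parameter psi scaling the drainage term); the equation and parameter values are simplified assumptions, not the authors' model.

```python
# Minimal sketch: integrate a toy foam build-up/breakdown balance with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

def foam_height(t, h, gas_flux, psi, k_drain):
    """dh/dt = bubble supply - drainage-controlled collapse (toy form)."""
    drainage = psi * k_drain * h          # faster drainage -> faster breakdown
    return gas_flux - drainage

params = dict(gas_flux=2.0e-3, psi=0.3, k_drain=5.0e-2)   # assumed units: m/s, -, 1/s
sol = solve_ivp(foam_height, t_span=(0, 600), y0=[0.0],
                args=tuple(params.values()), dense_output=True)

for ti in np.linspace(0, 600, 7):
    print(f"t = {ti:5.0f} s   foam height = {sol.sol(ti)[0] * 100:5.2f} cm")

# Steady state where supply balances drainage: h_ss = gas_flux / (psi * k_drain).
print("steady-state height:", params["gas_flux"] / (params["psi"] * params["k_drain"]), "m")
```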

  18. Development of flood regressions and climate change scenarios to explore estimates of future peak flows

    USGS Publications Warehouse

    Burns, Douglas A.; Smith, Martyn J.; Freehafer, Douglas A.

    2015-12-31

    The application uses predictions of future annual precipitation from five climate models and two future greenhouse gas emissions scenarios and provides results that are averaged over three future periods—2025 to 2049, 2050 to 2074, and 2075 to 2099. Results are presented in ensemble form as the mean, median, maximum, and minimum values among the five climate models for each greenhouse gas emissions scenario and period. These predictions of future annual precipitation are substituted into either the precipitation variable or a water balance equation for runoff to calculate potential future peak flows. This application is intended to be used only as an exploratory tool because (1) the regression equations on which the application is based have not been adequately tested outside the range of the current climate and (2) forecasting future precipitation with climate models and downscaling these results to a fine spatial resolution have a high degree of uncertainty. This report includes a discussion of the assumptions, uncertainties, and appropriate use of this exploratory application.

  19. Thermal Damage Analysis in Biological Tissues Under Optical Irradiation: Application to the Skin

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, Félix; Ortega-Quijano, Noé; Solana-Quirós, José Ramón; Arce-Diego, José Luis

    2009-07-01

    The use of optical sources in medical practice is increasing nowadays. In this study, different approaches using thermo-optical principles that allow us to predict thermal damage in irradiated tissues are analyzed. Optical propagation is studied by means of the radiation transport theory (RTT) equation, solved via a Monte Carlo analysis. Data obtained are included in a bio-heat equation, solved via a numerical finite difference approach. Optothermal properties are considered for the model to be accurate and reliable. The thermal distribution is calculated as a function of optical source parameters, mainly optical irradiance, wavelength and exposure time. Two thermal damage models, the cumulative equivalent minutes (CEM) 43 °C approach and the Arrhenius analysis, are used. The former is appropriate when dealing with dosimetry considerations at constant temperature. The latter is adequate to predict thermal damage with arbitrary temperature time dependence. Both models are applied and compared for the particular application of skin thermotherapy irradiation.
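
    As a small illustration of the first of the two damage models named above, the sketch below evaluates the cumulative equivalent minutes at 43 °C (CEM43) for an assumed temperature-time history, using the standard R constants (0.25 below 43 °C, 0.5 at or above); the heating profile itself is invented for illustration.

```python
# Minimal sketch: CEM43 thermal dose for a sampled temperature history.
import numpy as np

def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 degC, with R = 0.5 above and 0.25 below 43 degC."""
    temps_c = np.asarray(temps_c, dtype=float)
    r = np.where(temps_c >= 43.0, 0.5, 0.25)
    return np.sum(r ** (43.0 - temps_c)) * dt_s / 60.0

# Hypothetical skin heating profile sampled every second during irradiation.
t = np.arange(0, 120, 1.0)
temperature = 37.0 + 12.0 * (1 - np.exp(-t / 30.0))   # rises toward ~49 degC

dose = cem43(temperature, dt_s=1.0)
print(f"thermal dose = {dose:.1f} CEM43 (240 CEM43 is a commonly cited necrosis threshold)")
```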

  20. High-quality unsaturated zone hydraulic property data for hydrologic applications

    USGS Publications Warehouse

    Perkins, Kimberlie; Nimmo, John R.

    2009-01-01

    In hydrologic studies, especially those using dynamic unsaturated zone moisture modeling, calculations based on property transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values has become increasingly common with the use of neural networks. High-quality data are needed for databases used in this way and for theoretical and property transfer model development and testing. Hydraulic properties predicted on the basis of existing databases may be adequate in some applications but not others. An obvious problem occurs when the available database has few or no data for samples that are closely related to the medium of interest. The data set presented in this paper includes saturated and unsaturated hydraulic conductivity, water retention, particle-size distributions, and bulk properties. All samples are minimally disturbed, all measurements were performed using the same state-of-the-art techniques, and the environments represented are diverse.

  1. Herpes Zoster Risk Reduction through Exposure to Chickenpox Patients: A Systematic Multidisciplinary Review.

    PubMed

    Ogunjimi, Benson; Van Damme, Pierre; Beutels, Philippe

    2013-01-01

    Varicella-zoster virus (VZV) causes chickenpox and may subsequently reactivate to cause herpes zoster later in life. The exogenous boosting hypothesis states that re-exposure to circulating VZV can inhibit VZV reactivation and consequently also herpes zoster in VZV-immune individuals. Using this hypothesis, mathematical models predicted widespread chickenpox vaccination to increase herpes zoster incidence over more than 30 years. Some countries have postponed universal chickenpox vaccination, at least partially based on this prediction. After a systematic search and selection procedure, we analyzed different types of exogenous boosting studies. We graded 13 observational studies on herpes zoster incidence after widespread chickenpox vaccination, 4 longitudinal studies on VZV immunity after re-exposure, 9 epidemiological risk factor studies, 7 mathematical modeling studies as well as 7 other studies. We conclude that exogenous boosting exists, although not for all persons, nor in all situations. Its magnitude is yet to be determined adequately in any study field.

  2. Numerical Prediction of SERN Performance using WIND code

    NASA Technical Reports Server (NTRS)

    Engblom, W. A.

    2003-01-01

    Computational results are presented for the performance and flow behavior of single-expansion ramp nozzles (SERNs) during overexpanded operation and transonic flight. Three-dimensional Reynolds-Averaged Navier Stokes (RANS) results are obtained for two vehicle configurations, including the NASP Model 5B and ISTAR RBCC (a variant of X-43B) using the WIND code. Numerical predictions for nozzle integrated forces and pitch moments are directly compared to experimental data for the NASP Model 5B, and adequate-to-excellent agreement is found. The sensitivity of SERN performance and separation phenomena to freestream static pressure and Mach number is demonstrated via a matrix of cases for both vehicles. 3-D separation regions are shown to be induced by either lateral (e.g., sidewall) shocks or vertical (e.g., cowl trailing edge) shocks. Finally, the implications of this work to future preliminary design efforts involving SERNs are discussed.

  3. An Analysis of Measured Pressure Signatures From Two Theory-Validation Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2003-01-01

    Two wing/fuselage/nacelle/fin concepts were designed to check the validity and the applicability of sonic-boom minimization theory, sonic-boom analysis methods, and low-boom design methodology in use at the end of the 1980s. Models of these concepts were built, and the pressure signatures they generated were measured in the wind tunnel. The results of these measurements led to three conclusions: (1) the existing methods could adequately predict sonic-boom characteristics of wing/fuselage/fin(s) configurations if the equivalent area distributions of each component were smooth and continuous; (2) these methods needed revision so the engine-nacelle volume and the nacelle-wing interference lift disturbances could be accurately predicted; and (3) current nacelle-configuration integration methods had to be updated. With these changes in place, the existing sonic-boom analysis and minimization methods could be effectively applied to supersonic-cruise concepts for acceptable/tolerable sonic-boom overpressures during cruise.

  4. Dependence of the Radiation Pressure on the Background Refractive Index

    NASA Astrophysics Data System (ADS)

    Webb, Kevin J.

    2013-07-01

    The 1978 experiments by Jones and Leslie showing that the radiation pressure on a mirror depends on the background medium refractive index have yet to be adequately explained using a force model and have provided a leading challenge to the Abraham form of the electromagnetic momentum. Those experimental results are predicted for the first time using a force representation that incorporates the Abraham momentum by utilizing the power calibration method employed in the Jones and Leslie experiments. With an extension of the same procedure, the polarization and angle independence of the experimental data are also explained by this model. Prospects are good for this general form of the electromagnetic force density to be effective in predicting other experiments with macroscopic materials. Furthermore, the rigorous representation of material dispersion makes the representation important for metamaterials that operate in the vicinity of homogenized material resonances.

  5. The insertion of human dynamics models in the flight control loops of V/STOL research aircraft. Appendix 2: The optimal control model of a pilot in V/STOL aircraft control loops

    NASA Technical Reports Server (NTRS)

    Zipf, Mark E.

    1989-01-01

    An overview is presented of research work focused on the design and insertion of classical models of human pilot dynamics within the flight control loops of V/STOL aircraft. The pilot models were designed and configured for use in integrated control system research and design. The models of human behavior that were considered are the McRuer-Krendel model (a single-variable transfer function model) and the Optimal Control Model (a multi-variable approach based on optimal control and stochastic estimation theory). These models attempt to predict human control response characteristics when confronted with compensatory tracking and state regulation tasks. An overview, mathematical description, and discussion of the predictive limitations of the pilot models is presented. Design strategies and closed-loop insertion configurations are introduced and considered for various flight control scenarios. Models of aircraft dynamics (both transfer function and state space based) are developed and discussed for their use in pilot design and application. Pilot design and insertion are illustrated for various flight control objectives. Results of pilot insertion within the control loops of two V/STOL research aircraft (Sikorsky Black Hawk UH-60A, McDonnell Douglas Harrier II AV-8B) are presented and compared against actual pilot flight data. Conclusions are reached on the ability of the pilot models to adequately predict human behavior when confronted with similar control objectives.

  6. Feasibility study of modeling liver thermal damage using minimally invasive optical method adequate for in situ measurement.

    PubMed

    Zhao, Jinzhe; Zhao, Qi; Jiang, Yingxu; Li, Weitao; Yang, Yamin; Qian, Zhiyu; Liu, Jia

    2018-06-01

    Liver thermal ablation techniques have been widely used for the treatment of liver cancer. Kinetic models of damage propagation play an important role in ablation prediction and real-time efficacy assessment. However, practical methods for modeling liver thermal damage are rare. A minimally invasive optical method especially suited to in situ liver thermal damage modeling is introduced in this paper. Porcine liver tissue was heated in a water bath at different temperatures. During thermal treatment, the diffuse reflectance spectrum of the liver was measured by optical fiber and used to deduce the reduced scattering coefficient (μs'). Arrhenius parameters were obtained through a non-isothermal heating approach with μs' as the damage marker. The activation energy (Ea) and frequency factor (A) were deduced from these experiments; the averaged values are 1.200 × 10^5 J mol^-1 and 4.016 × 10^17 s^-1. The results were verified for their reasonableness and practicality. Therefore, it is feasible to model liver thermal damage based on minimally invasive measurement of optical properties and in situ kinetic analysis of damage progression with the Arrhenius model. These parameters and this method are beneficial for preoperative planning and real-time efficacy assessment of liver ablation therapy. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
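
    A minimal sketch of an Arrhenius damage integral evaluated with the activation energy and frequency factor reported above (Ea = 1.200 × 10^5 J mol^-1, A = 4.016 × 10^17 s^-1); the constant-temperature water-bath histories are assumptions used only to show how such parameters would be applied.

```python
# Minimal sketch: Arrhenius damage integral Omega for assumed water-bath heating histories.
import numpy as np

R_GAS = 8.314          # J mol^-1 K^-1
E_A = 1.200e5          # J mol^-1 (value reported above)
A_FREQ = 4.016e17      # s^-1 (value reported above)

def arrhenius_damage(temps_k, dt_s):
    """Omega = sum over the history of A * exp(-Ea / (R*T)) * dt."""
    temps_k = np.asarray(temps_k, dtype=float)
    return np.sum(A_FREQ * np.exp(-E_A / (R_GAS * temps_k))) * dt_s

# Assumed 10-minute constant-temperature water-bath exposures, sampled once per second.
for bath_c in (50.0, 60.0, 70.0):
    temps = np.full(600, bath_c + 273.15)
    omega = arrhenius_damage(temps, dt_s=1.0)
    print(f"{bath_c:.0f} degC for 10 min: Omega = {omega:.2f} "
          f"(Omega = 1 is the usual damage threshold)")
```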

  7. A review of some fish nutrition methodologies.

    PubMed

    Belal, Ibrahim E H

    2005-03-01

    Several classical warm-blooded animal (poultry, sheep, cows, etc.) methods for dietary nutrient evaluation (digestibility, metabolizability, and energy budget) are applied to fish, even though fish live in a different environment in addition to being cold-blooded animals. These applications have caused significant errors that have made these methods non-additive and meaningless, as is explained in the text. In other words, dietary digestion and absorption cannot adequately be measured because of the aquatic environment fish live in. Therefore, net nutrient deposition and/or growth are the only accurate measurements left to evaluate dietary nutrient intake in fish. To understand and predict the dietary nutrient intake-growth response relationship, several mathematical models are generally used: (1) the simple linear equation, (2) the logarithmic equation, and (3) the quadratic equation. These models, however, do not describe a full range of growth and have no biological meaning, as explained in the text. The saturation kinetic model, on the other hand, has a biological basis (the law of mass action and enzyme kinetics) and describes the full range of the growth curve. Additionally, it has four parameters that summarize the growth curve and can also be used in comparing the effects of diets or nutrients on fish growth and/or net nutrient deposition. The saturation kinetic model is proposed to be adequate for dietary nutrient evaluation for fish. The theoretical derivation of this model is illustrated in the text.
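
    The sketch below fits a common four-parameter saturation kinetics form, r(I) = (b·K^n + Rmax·I^n) / (K^n + I^n), to simulated intake-response data; the specific functional form, parameter values, and data are assumptions chosen only to match the four-parameter description above, not the review's own analysis.

```python
# Minimal sketch: fit a four-parameter saturation kinetics curve to simulated intake-response data.
import numpy as np
from scipy.optimize import curve_fit

def saturation_kinetics(intake, b, r_max, k_half, n):
    """b: response at zero intake, r_max: maximal response,
    k_half: intake at half-maximal response, n: apparent kinetic order."""
    return (b * k_half**n + r_max * intake**n) / (k_half**n + intake**n)

rng = np.random.default_rng(4)
intake = np.linspace(0.1, 40, 25)                       # e.g. g nutrient/kg diet
true = saturation_kinetics(intake, b=-2.0, r_max=30.0, k_half=12.0, n=2.2)
growth = true + rng.normal(0, 0.8, size=intake.size)    # simulated weight gain

popt, _ = curve_fit(saturation_kinetics, intake, growth,
                    p0=[0.0, 25.0, 10.0, 2.0],
                    bounds=([-10, 0, 0.1, 0.5], [10, 100, 50, 6]))
print("fitted [b, Rmax, K0.5, n]:", np.round(popt, 2))
```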

  8. Improving operational flood ensemble prediction by the assimilation of satellite soil moisture: comparison between lumped and semi-distributed schemes

    NASA Astrophysics Data System (ADS)

    Alvarez-Garreton, C.; Ryu, D.; Western, A. W.; Su, C.-H.; Crow, W. T.; Robertson, D. E.; Leahy, C.

    2014-09-01

    Assimilation of remotely sensed soil moisture data (SM-DA) to correct soil water stores of rainfall-runoff models has shown skill in improving streamflow prediction. In the case of large and sparsely monitored catchments, SM-DA is a particularly attractive tool. Within this context, we assimilate active and passive satellite soil moisture (SSM) retrievals using an ensemble Kalman filter to improve operational flood prediction within a large semi-arid catchment in Australia (>40 000 km2). We assess the importance of accounting for channel routing and the spatial distribution of forcing data by applying SM-DA to a lumped and a semi-distributed scheme of the probability distributed model (PDM). Our scheme also accounts for model error representation and seasonal biases and errors in the satellite data. Before assimilation, the semi-distributed model provided more accurate streamflow prediction (Nash-Sutcliffe efficiency, NS = 0.77) than the lumped model (NS = 0.67) at the catchment outlet. However, this did not ensure good performance at the "ungauged" inner catchments. After SM-DA, the streamflow ensemble prediction at the outlet was improved in both the lumped and the semi-distributed schemes: the root mean square error of the ensemble was reduced by 27 and 31%, respectively; the NS of the ensemble mean increased by 7 and 38%, respectively; the false alarm ratio was reduced by 15 and 25%, respectively; and the ensemble prediction spread was reduced while its reliability was maintained. Our findings imply that even when rainfall is the main driver of flooding in semi-arid catchments, adequately processed SSM can be used to reduce errors in the model soil moisture, which in turn provides better streamflow ensemble prediction. We demonstrate that SM-DA efficacy is enhanced when the spatial distribution in forcing data and routing processes are accounted for. At ungauged locations, SM-DA is effective at improving streamflow ensemble prediction, however, the updated prediction is still poor since SM-DA does not address systematic errors in the model.
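
    The fragment below sketches a single perturbed-observation ensemble Kalman filter update of a scalar soil-moisture state with a satellite retrieval, which is the core operation behind an SM-DA scheme of the kind described above; ensemble size, error variances, and state values are assumptions, and the real scheme additionally handles bias correction, routing, and the rainfall-runoff model states.

```python
# Minimal sketch: one perturbed-observation EnKF update of a scalar soil-moisture state.
import numpy as np

rng = np.random.default_rng(5)
n_ens = 100

# Prior ensemble of model soil-moisture states (volumetric fraction).
prior = np.clip(rng.normal(loc=0.22, scale=0.04, size=n_ens), 0.0, 0.5)

# Satellite retrieval (after bias correction) and its error variance.
obs, obs_var = 0.30, 0.03**2

# Scalar, fully observed state: Kalman gain K = P / (P + R).
p_prior = np.var(prior, ddof=1)
kalman_gain = p_prior / (p_prior + obs_var)
obs_perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=n_ens)
posterior = prior + kalman_gain * (obs_perturbed - prior)

print(f"prior mean/std:     {prior.mean():.3f} / {prior.std(ddof=1):.3f}")
print(f"posterior mean/std: {posterior.mean():.3f} / {posterior.std(ddof=1):.3f}")
```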

  9. Biomarker-Based Risk Model to Predict Cardiovascular Mortality in Patients With Stable Coronary Disease.

    PubMed

    Lindholm, Daniel; Lindbäck, Johan; Armstrong, Paul W; Budaj, Andrzej; Cannon, Christopher P; Granger, Christopher B; Hagström, Emil; Held, Claes; Koenig, Wolfgang; Östlund, Ollie; Stewart, Ralph A H; Soffer, Joseph; White, Harvey D; de Winter, Robbert J; Steg, Philippe Gabriel; Siegbahn, Agneta; Kleber, Marcus E; Dressel, Alexander; Grammer, Tanja B; März, Winfried; Wallentin, Lars

    2017-08-15

    Currently, there is no generally accepted model to predict outcomes in stable coronary heart disease (CHD). This study evaluated and compared the prognostic value of biomarkers and clinical variables to develop a biomarker-based prediction model in patients with stable CHD. In a prospective, randomized trial cohort of 13,164 patients with stable CHD, we analyzed several candidate biomarkers and clinical variables and used multivariable Cox regression to develop a clinical prediction model based on the most important markers. The primary outcome was cardiovascular (CV) death, but model performance was also explored for other key outcomes. It was internally bootstrap validated, and externally validated in 1,547 patients in another study. During a median follow-up of 3.7 years, there were 591 cases of CV death. The 3 most important biomarkers were N-terminal pro-B-type natriuretic peptide (NT-proBNP), high-sensitivity cardiac troponin T (hs-cTnT), and low-density lipoprotein cholesterol, where NT-proBNP and hs-cTnT had greater prognostic value than any other biomarker or clinical variable. The final prediction model included age (A), biomarkers (B) (NT-proBNP, hs-cTnT, and low-density lipoprotein cholesterol), and clinical variables (C) (smoking, diabetes mellitus, and peripheral arterial disease). This "ABC-CHD" model had high discriminatory ability for CV death (c-index 0.81 in derivation cohort, 0.78 in validation cohort), with adequate calibration in both cohorts. This model provided a robust tool for the prediction of CV death in patients with stable CHD. As it is based on a small number of readily available biomarkers and clinical factors, it can be widely employed to complement clinical assessment and guide management based on CV risk. (The Stabilization of Atherosclerotic Plaque by Initiation of Darapladib Therapy Trial [STABILITY]; NCT00799903). Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  10. Re-Analysis of the Solar Phase Curves of the Icy Galilean Satellites

    NASA Technical Reports Server (NTRS)

    Domingue, Deborah; Verbiscer, Anne

    1997-01-01

    Re-analysis of the solar phase curves of the icy Galilean satellites demonstrates that the quantitative results are dependent on the single particle scattering function incorporated into the photometric model; however, the qualitative properties are independent. The results presented here show that the general physical characteristics predicted by a Hapke model (B. Hapke, 1986, Icarus 67, 264-280) incorporating a two parameter double Henyey-Greenstein scattering function are similar to the predictions given by the same model incorporating a three parameter double Henyey-Greenstein scattering function as long as the data set being modeled has adequate coverage in phase angle. Conflicting results occur when the large phase angle coverage is inadequate. Analysis of the role of isotropic versus anisotropic multiple scattering shows that for surfaces as bright as Europa the two models predict very similar results over phase angles covered by the data. Differences arise only at those phase angles for which there are no data. The single particle scattering behavior between the leading and trailing hemispheres of Europa and Ganymede is commensurate with magnetospheric alterations of their surfaces. Ion bombardment will produce more forward scattering single scattering functions due to annealing of potential scattering centers within regolith particles (N. J. Sack et al., 1992, Icarus 100, 534-540). Both leading and trailing hemispheres of Europa are consistent with a high porosity model and commensurate with a frost surface. There are no strong differences in predicted porosity between the two hemispheres of Callisto, both are consistent with model porosities midway between that deduced for Europa and the Moon. Surface roughness model estimates predict that surface roughness increases with satellite distance from Jupiter, with lunar surface roughness values falling midway between those measured for Ganymede and Callisto. There is no obvious variation in predicted surface roughness with hemisphere for any of the Galilean satellites.

  11. Extensions to the visual predictive check to facilitate model performance evaluation.

    PubMed

    Post, Teun M; Freijer, Jan I; Ploeger, Bart A; Danhof, Meindert

    2008-04-01

    The Visual Predictive Check (VPC) is a valuable and supportive instrument for evaluating model performance. However, in its most commonly applied form, the method largely depends on a subjective comparison of the distribution of the simulated data with the observed data, without explicitly quantifying and relating the information in both. In recent adaptations to the VPC this drawback is taken into consideration by presenting the observed and predicted data as percentiles. In addition, in some of these adaptations the uncertainty in the predictions is represented visually. However, it is not assessed whether the expected random distribution of the observations around the predicted median trend is realised in relation to the number of observations. Moreover, the influence of, and the information residing in, missing data at each time point are not taken into consideration. Therefore, in this investigation the VPC is extended with two methods to support a less subjective and thereby more adequate evaluation of model performance: (i) the Quantified Visual Predictive Check (QVPC) and (ii) the Bootstrap Visual Predictive Check (BVPC). The QVPC presents the distribution of the observations as percentages above and below the predicted median at each time point, regardless of the density of the data, while also visualising the percentage of unavailable data. The BVPC weighs the predicted median against the 5th, 50th and 95th percentiles resulting from a bootstrap of the observed data median at each time point, while accounting for the number and the theoretical position of unavailable data. The proposed extensions to the VPC are illustrated by a pharmacokinetic simulation example and applied to a pharmacodynamic disease progression example.
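
    The two extensions lend themselves to a simple numerical implementation. The sketch below, written with NumPy and entirely hypothetical data, computes the QVPC quantities (percentage of observations above and below the predicted median, plus the percentage of unavailable data) and the BVPC comparison of the predicted median against bootstrap percentiles of the observed median; it is a schematic reading of the two diagnostics, not the authors' reference code.

      import numpy as np

      rng = np.random.default_rng(0)

      # obs[i, t]: observation for subject i at time point t (NaN where missing);
      # pred_median[t]: model-predicted median at time point t. Both are synthetic
      # placeholders here.
      obs = rng.lognormal(mean=0.0, sigma=0.5, size=(40, 8))
      obs[rng.random(obs.shape) < 0.15] = np.nan
      pred_median = np.nanmedian(obs, axis=0) * 1.05

      # QVPC: percentages above/below the predicted median and percentage missing
      n = obs.shape[0]
      above = np.nansum(obs > pred_median, axis=0) / n * 100
      below = np.nansum(obs < pred_median, axis=0) / n * 100
      unavailable = np.isnan(obs).sum(axis=0) / n * 100

      # BVPC: bootstrap the observed median at each time point and check whether
      # the predicted median falls within the 5th-95th percentile band
      boot = np.empty((1000, obs.shape[1]))
      for b in range(boot.shape[0]):
          boot[b] = np.nanmedian(obs[rng.integers(0, n, n)], axis=0)
      p5, p95 = np.nanpercentile(boot, [5, 95], axis=0)
      within_band = (pred_median >= p5) & (pred_median <= p95)
      print(np.column_stack([above, below, unavailable, within_band]))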

  12. The intensity dependence of lesion position shift during focused ultrasound surgery.

    PubMed

    Meaney, P M; Cahill, M D; ter Haar, G R

    2000-03-01

    Knowledge of the spatial distribution of intensity loss from an ultrasonic beam is critical for predicting lesion formation in focused ultrasound (US) surgery (FUS). To date, most models have used linear propagation models to predict intensity profiles required to compute the temporally varying temperature distributions used to compute thermal dose contours. These are used to predict the extent of thermal damage. However, these simulations fail to describe adequately the abnormal lesion formation behaviour observed during ex vivo experiments in cases for which the transducer drive levels are varied over a wide range. In such experiments, the extent of thermal damage has been observed to move significantly closer to the transducer with increased transducer drive levels than would be predicted using linear-propagation models. The first set of simulations described herein uses the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear propagation model with the parabolic approximation for highly focused US waves to demonstrate that both the peak intensity and the lesion positions do, indeed, move closer to the transducer. This illustrates that, for accurate modelling of heating during FUS, nonlinear effects should be considered. Additionally, a first-order approximation has been employed that attempts to account for the abnormal heat deposition distributions that accompany high transducer drive level FUS exposures where cavitation and boiling may be present. The results of these simulations are presented. It is suggested that this type of approach may be a useful tool in understanding thermal damage mechanisms.

  13. Assessing participation in community-based physical activity programs in Brazil.

    PubMed

    Reis, Rodrigo S; Yan, Yan; Parra, Diana C; Brownson, Ross C

    2014-01-01

    This study aimed to develop and validate a risk prediction model to examine the characteristics that are associated with participation in community-based physical activity programs in Brazil. We used pooled data from three surveys conducted from 2007 to 2009 in state capitals of Brazil with 6166 adults. A risk prediction model was built considering program participation as an outcome. The predictive accuracy of the model was quantified through discrimination (C statistic) and calibration (Brier score) properties. Bootstrapping methods were used to validate the predictive accuracy of the final model. The final model showed sex (women: odds ratio [OR] = 3.18, 95% confidence interval [CI] = 2.14-4.71), having less than a high school degree (OR = 1.71, 95% CI = 1.16-2.53), reporting good health (OR = 1.58, 95% CI = 1.02-2.24) or very good/excellent health (OR = 1.62, 95% CI = 1.05-2.51), having any comorbidity (OR = 1.74, 95% CI = 1.26-2.39), and perceiving the environment as safe to walk at night (OR = 1.59, 95% CI = 1.18-2.15) as predictors of participation in physical activity programs. Accuracy indices were adequate (C index = 0.778, Brier score = 0.031) and similar to those obtained from bootstrapping (C index = 0.792, Brier score = 0.030). Sociodemographic and health characteristics as well as perceptions of the environment are strong predictors of participation in community-based programs in selected cities of Brazil.
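
    For readers who want to reproduce this style of evaluation, the sketch below fits a logistic regression, computes the C statistic and Brier score, and runs a naive bootstrap check, using scikit-learn. The file name and predictor column names are hypothetical stand-ins for the pooled survey variables described above, not the actual study data.

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score, brier_score_loss
      from sklearn.utils import resample

      df = pd.read_csv("pooled_surveys.csv")              # assumed file/columns
      X = df[["female", "less_than_high_school", "good_health",
              "very_good_or_excellent_health", "any_comorbidity",
              "safe_to_walk_at_night"]]
      y = df["program_participation"]

      model = LogisticRegression(max_iter=1000).fit(X, y)
      p = model.predict_proba(X)[:, 1]
      print("C statistic:", roc_auc_score(y, p))          # discrimination
      print("Brier score:", brier_score_loss(y, p))       # calibration

      # Bootstrap refits to gauge the stability of the apparent C statistic
      aucs = [roc_auc_score(y, LogisticRegression(max_iter=1000)
                            .fit(*resample(X, y)).predict_proba(X)[:, 1])
              for _ in range(200)]
      print("bootstrap C statistic:", np.mean(aucs))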

  14. Modelling the growth of Populus species using Ecosystem Demography (ED) model

    NASA Astrophysics Data System (ADS)

    Wang, D.; Lebauer, D. S.; Feng, X.; Dietze, M. C.

    2010-12-01

    Hybrid poplar plantations are being evaluated as an important source of biomass production. Effective management of such plantations requires adequate growth and yield models. The Ecosystem Demography model (ED) makes predictions about the large scales of interest in above- and belowground ecosystem structure and the fluxes of carbon and water from a description of the fine-scale physiological processes. In this study, we used a workflow management tool, the Predictive Ecophysiological Carbon flux Analyzer (PECAn), to integrate literature data, field measurements, and the ED model to provide predictions of ecosystem functioning. Parameters for the ED ensemble runs were sampled from the posterior distribution of ecophysiological traits of Populus species compiled from the literature using a Bayesian meta-analysis approach. Sensitivity analysis was performed to identify the parameters which contribute the most to the uncertainties of the ED model output. Model emulation techniques were used to update parameter posterior distributions using field-observed data in northern Wisconsin hybrid poplar plantations. Model results were evaluated with 5-year field-observed data in a hybrid poplar plantation at New Franklin, MO. ED was then used to predict the spatial variability of poplar yield in the coterminous United States (i.e., excluding Alaska and Hawaii). Sensitivity analysis showed that root respiration, dark respiration, growth respiration, stomatal slope and specific leaf area contribute the most to the uncertainty, which suggests that our field measurements and data collection should focus on these parameters. The ED model successfully captured the inter-annual and spatial variability of the yield of poplar. Analyses in progress with the ED model focus on evaluating the ecosystem services of short-rotation woody plantations, such as impacts on soil carbon storage, water use, and nutrient retention.

  15. Hydrogen production by the hyperthermophilic bacterium Thermotoga maritima Part II: modeling and experimental approaches for hydrogen production.

    PubMed

    Auria, Richard; Boileau, Céline; Davidson, Sylvain; Casalot, Laurence; Christen, Pierre; Liebgott, Pierre Pol; Combet-Blanc, Yannick

    2016-01-01

    Thermotoga maritima is a hyperthermophilic bacterium known to produce hydrogen from a large variety of substrates. The aim of the present study is to propose a mathematical model incorporating kinetics of growth, consumption of substrates, product formations, and inhibition by hydrogen in order to predict hydrogen production depending on defined culture conditions. Our mathematical model, incorporating data concerning growth, substrates, and products, was developed to predict hydrogen production from batch fermentations of the hyperthermophilic bacterium T. maritima. It includes the inhibition by hydrogen and the liquid-to-gas mass transfer of H2, CO2, and H2S. Most kinetic parameters of the model were obtained from batch experiments without any fitting. The mathematical model is adequate for glucose, yeast extract, and thiosulfate concentrations ranging from 2.5 to 20 mmol/L, 0.2 to 0.5 g/L, and 0.01 to 0.06 mmol/L, respectively, corresponding to one of these compounds being the growth-limiting factor of T. maritima. When glucose, yeast extract, and thiosulfate concentrations are all higher than these ranges, the model overestimates all the variables. In the window of the model validity, predictions of the model show that the combination of both variables (increase in limiting factor concentration and in inlet gas stream) leads to up to a twofold increase of the maximum H2-specific productivity with the lowest inhibition. A mathematical model predicting H2 production in T. maritima was successfully designed and confirmed in this study. However, it shows the limit of validity of such mathematical models. Their limit of applicability must take into account the range of validity in which the parameters were established.
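
    As a schematic of the kind of model structure described (Monod-type growth, hydrogen inhibition, and liquid-to-gas transfer), the sketch below integrates a toy batch-fermentation ODE system with SciPy. All parameter values, units, and the single-substrate simplification are illustrative assumptions; they are not the fitted parameters or full structure of the published T. maritima model.

      import numpy as np
      from scipy.integrate import solve_ivp

      # State: biomass X, glucose S, dissolved hydrogen H (illustrative units)
      mu_max, Ks, Yxs, Yhx = 0.5, 1.0, 0.1, 2.0   # illustrative kinetic constants
      Ki_h = 1.5                                   # hydrogen inhibition constant
      kla_h, H_star = 10.0, 0.0                    # liquid-to-gas transfer of H2

      def rhs(t, y):
          X, S, H = y
          mu = mu_max * S / (Ks + S) * Ki_h / (Ki_h + H)   # growth with H2 inhibition
          dX = mu * X
          dS = -mu * X / Yxs
          dH = Yhx * mu * X - kla_h * (H - H_star)         # production minus stripping
          return [dX, dS, dH]

      sol = solve_ivp(rhs, (0.0, 24.0), [0.01, 10.0, 0.0])
      print(sol.y[:, -1])   # final biomass, residual glucose, dissolved H2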

  16. Recollection is a continuous process: Evidence from plurality memory receiver operating characteristics.

    PubMed

    Slotnick, Scott D; Jeye, Brittany M; Dodson, Chad S

    2016-01-01

    Is recollection a continuous/graded process or a threshold/all-or-none process? Receiver operating characteristic (ROC) analysis can answer this question as the continuous model and the threshold model predict curved and linear recollection ROCs, respectively. As memory for plurality, an item's previous singular or plural form, is assumed to rely on recollection, the nature of recollection can be investigated by evaluating plurality memory ROCs. The present study consisted of four experiments. During encoding, words (singular or plural) or objects (single/singular or duplicate/plural) were presented. During retrieval, old items with the same plurality or different plurality were presented. For each item, participants made a confidence rating ranging from "very sure old", which was correct for same plurality items, to "very sure new", which was correct for different plurality items. Each plurality memory ROC was the proportion of same versus different plurality items classified as "old" (i.e., hits versus false alarms). Chi-squared analysis revealed that all of the plurality memory ROCs were adequately fit by the continuous unequal variance model, whereas none of the ROCs were adequately fit by the two-high threshold model. These plurality memory ROC results indicate recollection is a continuous process, which complements previous source memory and associative memory ROC findings.
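
    A standard companion analysis to the model fits reported above is the z-transformed ROC: the continuous unequal-variance signal detection model predicts a roughly linear zROC whose slope reflects the ratio of lure to target evidence variability, whereas a threshold account predicts a linear ROC in probability space. The sketch below computes that diagnostic from hypothetical confidence-rating counts with NumPy and SciPy; it is not the chi-squared model comparison used in the study.

      import numpy as np
      from scipy.stats import norm

      # Hypothetical response counts over 6 confidence categories, from
      # "very sure old" to "very sure new"; targets = same-plurality items,
      # lures = different-plurality items.
      targets = np.array([120, 80, 50, 30, 15, 5])
      lures = np.array([10, 25, 45, 70, 80, 70])

      # Cumulative hit and false-alarm rates across confidence criteria
      hits = np.cumsum(targets)[:-1] / targets.sum()
      fas = np.cumsum(lures)[:-1] / lures.sum()

      # Linear fit in z-space; a slope different from 1 is the signature of
      # unequal target/lure evidence variances
      slope, intercept = np.polyfit(norm.ppf(fas), norm.ppf(hits), 1)
      print(f"zROC slope = {slope:.2f}, intercept = {intercept:.2f}")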

  17. Strange baryon resonance production in √s_NN = 200 GeV p+p and Au+Au collisions.

    PubMed

    Abelev, B I; Aggarwal, M M; Ahammed, Z; Amonett, J; Anderson, B D; Anderson, M; Arkhipkin, D; Averichev, G S; Bai, Y; Balewski, J; Barannikova, O; Barnby, L S; Baudot, J; Bekele, S; Belaga, V V; Bellingeri-Laurikainen, A; Bellwied, R; Benedosso, F; Bhardwaj, S; Bhasin, A; Bhati, A K; Bichsel, H; Bielcik, J; Bielcikova, J; Bland, L C; Blyth, S-L; Bonner, B E; Botje, M; Bouchet, J; Brandin, A V; Bravar, A; Burton, T P; Bystersky, M; Cadman, R V; Cai, X Z; Caines, H; Calderón de la Barca Sánchez, M; Castillo, J; Catu, O; Cebra, D; Chajecki, Z; Chaloupka, P; Chattopadhyay, S; Chen, H F; Chen, J H; Cheng, J; Cherney, M; Chikanian, A; Christie, W; Coffin, J P; Cormier, T M; Cosentino, M R; Cramer, J G; Crawford, H J; Das, D; Das, S; Dash, S; Daugherity, M; de Moura, M M; Dedovich, T G; DePhillips, M; Derevschikov, A A; Didenko, L; Dietel, T; Djawotho, P; Dogra, S M; Dong, W J; Dong, X; Draper, J E; Du, F; Dunin, V B; Dunlop, J C; Dutta Mazumdar, M R; Eckardt, V; Edwards, W R; Efimov, L G; Emelianov, V; Engelage, J; Eppley, G; Erazmus, B; Estienne, M; Fachini, P; Fatemi, R; Fedorisin, J; Filimonov, K; Filip, P; Finch, E; Fine, V; Fisyak, Y; Fu, J; Gagliardi, C A; Gaillard, L; Ganti, M S; Gaudichet, L; Ghazikhanian, V; Ghosh, P; Gonzalez, J E; Gorbunov, Y G; Gos, H; Grebenyuk, O; Grosnick, D; Guertin, S M; Guimaraes, K S F F; Gupta, N; Gutierrez, T D; Haag, B; Hallman, T J; Hamed, A; Harris, J W; He, W; Heinz, M; Henry, T W; Hepplemann, S; Hippolyte, B; Hirsch, A; Hjort, E; Hoffman, A M; Hoffmann, G W; Horner, M J; Huang, H Z; Huang, S L; Hughes, E W; Humanic, T J; Igo, G; Jacobs, P; Jacobs, W W; Jakl, P; Jia, F; Jiang, H; Jones, P G; Judd, E G; Kabana, S; Kang, K; Kapitan, J; Kaplan, M; Keane, D; Kechechyan, A; Khodyrev, V Yu; Kim, B C; Kiryluk, J; Kisiel, A; Kislov, E M; Klein, S R; Kocoloski, A; Koetke, D D; Kollegger, T; Kopytine, M; Kotchenda, L; Kouchpil, V; Kowalik, K L; Kramer, M; Kravtsov, P; Kravtsov, V I; Krueger, K; Kuhn, C; Kulikov, A I; Kumar, A; Kuznetsov, A A; Lamont, M A C; Landgraf, J M; Lange, S; LaPointe, S; Laue, F; Lauret, J; Lebedev, A; Lednicky, R; Lee, C-H; Lehocka, S; LeVine, M J; Li, C; Li, Q; Li, Y; Lin, G; Lin, X; Lindenbaum, S J; Lisa, M A; Liu, F; Liu, H; Liu, J; Liu, L; Liu, Z; Ljubicic, T; Llope, W J; Long, H; Longacre, R S; Love, W A; Lu, Y; Ludlam, T; Lynn, D; Ma, G L; Ma, J G; Ma, Y G; Magestro, D; Mahapatra, D P; Majka, R; Mangotra, L K; Manweiler, R; Margetis, S; Markert, C; Martin, L; Matis, H S; Matulenko, Yu A; McClain, C J; McShane, T S; Melnick, Yu; Meschanin, A; Millane, J; Miller, M L; Minaev, N G; Mioduszewski, S; Mironov, C; Mischke, A; Mishra, D K; Mitchell, J; Mohanty, B; Molnar, L; Moore, C F; Morozov, D A; Munhoz, M G; Nandi, B K; Nattrass, C; Nayak, T K; Nelson, J M; Netrakanti, P K; Nogach, L V; Nurushev, S B; Odyniec, G; Ogawa, A; Okorokov, V; Oldenburg, M; Olson, D; Pachr, M; Pal, S K; Panebratsev, Y; Panitkin, S Y; Pavlinov, A I; Pawlak, T; Peitzmann, T; Perevoztchikov, V; Perkins, C; Peryt, W; Phatak, S C; Picha, R; Planinic, M; Pluta, J; Poljak, N; Porile, N; Porter, J; Poskanzer, A M; Potekhin, M; Potrebenikova, E; Potukuchi, B V K S; Prindle, D; Pruneau, C; Putschke, J; Rakness, G; Raniwala, R; Raniwala, S; Ray, R L; Razin, S V; Reinnarth, J; Relyea, D; Retiere, F; Ridiger, A; Ritter, H G; Roberts, J B; Rogachevskiy, O V; Romero, J L; Rose, A; Roy, C; Ruan, L; Russcher, M J; Sahoo, R; Sakuma, T; Salur, S; Sandweiss, J; Sarsour, M; Sazhin, P S; Schambach, J; Scharenberg, R P; Schmitz, N; Schweda, K; Seger, J; Selyuzhenkov, I; Seyboth, 
P; Shabetai, A; Shahaliev, E; Shao, M; Sharma, M; Shen, W Q; Shimanskiy, S S; Sichtermann, E; Simon, F; Singaraju, R N; Smirnov, N; Snellings, R; Sood, G; Sorensen, P; Sowinski, J; Speltz, J; Spinka, H M; Srivastava, B; Stadnik, A; Stanislaus, T D S; Stock, R; Stolpovsky, A; Strikhanov, M; Stringfellow, B; Suaide, A A P; Sugarbaker, E; Sumbera, M; Sun, Z; Surrow, B; Swanger, M; Symons, T J M; Szanto de Toledo, A; Tai, A; Takahashi, J; Tang, A H; Tarnowsky, T; Thein, D; Thomas, J H; Timmins, A R; Timoshenko, S; Tokarev, M; Trainor, T A; Trentalange, S; Tribble, R E; Tsai, O D; Ulery, J; Ullrich, T; Underwood, D G; Buren, G Van; van der Kolk, N; van Leeuwen, M; Molen, A M Vander; Varma, R; Vasilevski, I M; Vasiliev, A N; Vernet, R; Vigdor, S E; Viyogi, Y P; Vokal, S; Voloshin, S A; Waggoner, W T; Wang, F; Wang, G; Wang, J S; Wang, X L; Wang, Y; Watson, J W; Webb, J C; Westfall, G D; Wetzler, A; Whitten, C; Wieman, H; Wissink, S W; Witt, R; Wood, J; Wu, J; Xu, N; Xu, Q H; Xu, Z; Yepes, P; Yoo, I-K; Yurevich, V I; Zhan, W; Zhang, H; Zhang, W M; Zhang, Y; Zhang, Z P; Zhao, Y; Zhong, C; Zoulkarneev, R; Zoulkarneeva, Y; Zubarev, A N; Zuo, J X

    2006-09-29

    We report the measurements of Sigma(1385) and Lambda(1520) production in p+p and Au+Au collisions at √s_NN = 200 GeV from the STAR Collaboration. The yields and the p_T spectra are presented and discussed in terms of chemical and thermal freeze-out conditions and compared to model predictions. Thermal and microscopic models do not adequately describe the yields of all the resonances produced in central Au+Au collisions. Our results indicate that there may be a time span between chemical and thermal freeze-out during which elastic hadronic interactions occur.

  18. Beyond the Band Function Paradigm: a New Model for GRB Prompt Emission and Possible Impact in Cosmology

    NASA Astrophysics Data System (ADS)

    Guiriec, Sylvain

    Gamma Ray Bursts (GRBs) are the most violent phenomena in the Universe. They are associated with the birth of stellar mass black holes either from the collapse of hypermassive stars or the merger of compact objects. The Fireball model is the most popular scenario to explain GRBs. In this theoretical framework, GRB central engines release collimated, bipolar and highly relativistic jets mainly composed of electrons, positrons, photons, and a small amount of baryons. During the first phase of the Fireball model, charged particles are accelerated and release non-thermal radiation. The Fireball model also predicts a thermal-like component coming from the jet photosphere. This first phase would be responsible for the GRB prompt emission observed by gamma ray telescopes such as Fermi/GBM in the keV-MeV energy range, and it is the only phase discussed in this talk. Until now, GRB prompt emission spectra have been considered adequately fitted by the empirical Band function, which is a smoothly broken power law. However, its parameters are very often incompatible with the Fireball model predictions for both the thermal and non-thermal components. We will see that observations with the Fermi Gamma Ray Space Telescope break the paradigm of the Band function and that deviations from this function exist in many GRBs. Those deviations are adequately fitted with an additional thermal-like component (which we interpret as the jet photosphere) and/or an additional power law. Importantly, with the three components together, theory and observations are in much better agreement. We will also see how this new model for prompt emission spectra may have an impact beyond the physics of GRBs. Indeed, this work may confirm a relation between the hardness of the GRB prompt emission and its luminosity, which may be used to scale GRBs as standard-like candles for use in cosmology.
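
    For reference, the empirical Band function referred to above is usually written (following Band et al. 1993) as a low-energy power law with an exponential cutoff joined smoothly to a high-energy power law; a commonly quoted parameterization is

      N(E) = A \left( \frac{E}{100\ \mathrm{keV}} \right)^{\alpha} \exp\left( -\frac{E}{E_0} \right), \qquad E \le (\alpha - \beta) E_0

      N(E) = A \left[ \frac{(\alpha - \beta) E_0}{100\ \mathrm{keV}} \right]^{\alpha - \beta} \exp(\beta - \alpha) \left( \frac{E}{100\ \mathrm{keV}} \right)^{\beta}, \qquad E \ge (\alpha - \beta) E_0

    where A is the amplitude, α and β are the low- and high-energy photon indices, and E_0 sets the spectral break (the νF_ν peak energy is E_peak = (2 + α) E_0). The additional components discussed in the talk are fitted on top of, or in place of, this baseline form.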

  19. Effect of abdominopelvic abscess drain size on drainage time and probability of occlusion.

    PubMed

    Rotman, Jessica A; Getrajdman, George I; Maybody, Majid; Erinjeri, Joseph P; Yarmohammadi, Hooman; Sofocleous, Constantinos T; Solomon, Stephen B; Boas, F Edward

    2017-04-01

    The purpose of this study is to determine whether larger abdominopelvic abscess drains reduce the time required for abscess resolution or the probability of tube occlusion. 144 consecutive patients who underwent abscess drainage at a single institution were reviewed retrospectively. Larger initial drain size did not reduce drainage time, drain occlusion, or drain exchanges (P > .05). Subgroup analysis did not find any type of collection that benefitted from larger drains. A multivariate model predicting drainage time showed that large collections (>200 mL) required 16 days longer drainage time than small collections (<50 mL). Collections with a fistula to bowel required 17 days longer drainage time than collections without a fistula. Initial drain size and the viscosity of the fluid in the collection had no significant effect on drainage time in the multivariate model. 8 F drains are adequate for initial drainage of most serous and serosanguineous collections. 10 F drains are adequate for initial drainage of most purulent or bloody collections. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Comparison of Different Classification Methods for Analyzing Electronic Nose Data to Characterize Sesame Oils and Blends

    PubMed Central

    Shao, Xiaolong; Li, Hui; Wang, Nan; Zhang, Qiang

    2015-01-01

    An electronic nose (e-nose) was used to characterize sesame oils processed by three different methods (hot-pressed, cold-pressed, and refined), as well as blends of the sesame oils and soybean oil. Seven classification and prediction methods, namely PCA, LDA, PLS, KNN, SVM, LASSO and RF, were used to analyze the e-nose data. The classification accuracy and MAUC were employed to evaluate the performance of these methods. The results indicated that sesame oils processed with different methods resulted in different sensor responses, with cold-pressed sesame oil producing the strongest sensor signals, followed by the hot-pressed sesame oil. The blends of pressed sesame oils with refined sesame oil were more difficult to distinguish than the blends of pressed sesame oils and refined soybean oil. LDA, KNN, and SVM outperformed the other classification methods in distinguishing sesame oil blends. KNN, LASSO, PLS, SVM (with linear kernel), and RF models could adequately predict the adulteration level (% of added soybean oil) in the sesame oil blends. Among the prediction models, KNN with k = 1 and 2 yielded the best prediction results. PMID:26506350
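
    As a worked illustration of this kind of method comparison, the sketch below cross-validates a few of the named classifiers with scikit-learn. The sensor matrix is randomly generated purely as a placeholder for e-nose responses; it is not the published dataset, and the chosen hyperparameters are arbitrary.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.normal(size=(90, 12))        # placeholder sensor features
      y = rng.integers(0, 3, size=90)      # placeholder oil-class labels

      models = {
          "LDA": LinearDiscriminantAnalysis(),
          "KNN (k=1)": KNeighborsClassifier(n_neighbors=1),
          "SVM (linear)": SVC(kernel="linear"),
          "RF": RandomForestClassifier(n_estimators=200, random_state=0),
      }
      for name, clf in models.items():
          acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
          print(f"{name}: mean CV accuracy = {acc:.2f}")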

  1. Computational studies of horizontal axis wind turbines in high wind speed condition using advanced turbulence models

    NASA Astrophysics Data System (ADS)

    Benjanirat, Sarun

    Next generation horizontal-axis wind turbines (HAWTs) will operate at very high wind speeds. Existing engineering approaches for modeling the flow phenomena are based on blade element theory, and cannot adequately account for 3-D separated, unsteady flow effects. Therefore, researchers around the world are beginning to model these flows using first principles-based computational fluid dynamics (CFD) approaches. In this study, an existing first principles-based Navier-Stokes approach is being enhanced to model HAWTs at high wind speeds. The enhancements include improved grid topology, implicit time-marching algorithms, and advanced turbulence models. The advanced turbulence models include the Spalart-Allmaras one-equation model, k-epsilon, k-omega and Shear Stress Transport (k-omega SST) models. These models are also integrated with detached eddy simulation (DES) models. Results are presented for a range of wind speeds, for a configuration termed the National Renewable Energy Laboratory Phase VI rotor, tested at NASA Ames Research Center. Grid sensitivity studies are also presented. Additionally, effects of existing transition models on the predictions are assessed. Data presented include power/torque production, radial distribution of normal and tangential pressure forces, root bending moments, and surface pressure fields. Good agreement was obtained between the predictions and experiments for most of the conditions, particularly with the Spalart-Allmaras-DES model.

  2. The effect of learning styles and study behavior on success of preclinical students in pharmacology.

    PubMed

    Asci, Halil; Kulac, Esin; Sezik, Mekin; Cankara, F Nihan; Cicek, Ekrem

    2016-01-01

    To evaluate the effect of learning styles and study behaviors on preclinical medical students' pharmacology exam scores in a non-Western setting. The Grasha-Riechmann Student Learning Style Scale and a modified Study Behavior Inventory were used to assess learning styles and study behaviors of preclinical medical students (n = 87). Logistic regression models were used to evaluate the independent effect of gender, age, learning style, and study behavior on pharmacology success. Collaborative (40%) and competitive (27%) dominant learning styles were frequent in the cohort. The most common study behavior subcategories were study reading (40%) and general study habits (38%). Adequate listening and note-taking skills were associated with pharmacology success, whereas students with adequate writing skills had lower exam scores. These effects were independent of gender. Preclinical medical students' study behaviors are independent predictive factors for short-term pharmacology success.

  3. Striatal dopamine transporter binding for predicting the development of delayed neuropsychological sequelae in suicide attempters by carbon monoxide poisoning: A SPECT study.

    PubMed

    Yang, Kai-Chun; Ku, Hsiao-Lun; Wu, Chia-Liang; Wang, Shyh-Jen; Yang, Chen-Chang; Deng, Jou-Fang; Lee, Ming-Been; Chou, Yuan-Hwa

    2011-12-30

    Carbon monoxide poisoning (COP) after charcoal burning results in delayed neuropsychological sequelae (DNS), which show clinical resemblance to Parkinson's disease, without adequate predictors at present. This study examined the role of dopamine transporter (DAT) binding for the prediction of DNS. Twenty-seven suicide attempters with COP were recruited. Seven of them developed DNS, while the remainder did not. The striatal DAT binding was measured by single photon emission computed tomography with (99m)Tc-TRODAT. The specific uptake ratio was derived based on a ratio equilibrium model. Using a logistic regression model, multiple clinical variables were examined as potential predictors for DNS. COP patients with DNS had lower left striatal DAT binding than patients without DNS. Logistic regression analysis showed that a combination of initial loss of consciousness and lower left striatal DAT binding predicted the development of DNS. Our data indicate that left striatal DAT binding could help to predict the development of DNS. This finding not only demonstrates the feasibility of brain imaging techniques for predicting the development of DNS but will also help clinicians to improve the quality of care for COP patients. 2011 Elsevier Ireland Ltd. All rights reserved.

  4. A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises

    PubMed Central

    Marquis-Favre, Catherine; Morel, Julien

    2015-01-01

    Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment where participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors and potential interactions between the combined noise sources. The second one was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with high differences in sound pressure level. Thus, these results reinforced the need to focus on perceptual models and to improve the prediction of partial annoyances. PMID:26197326

  5. Constituent bioconcentration in rainbow trout exposed to a complex chemical mixture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linder, G.; Bergman, H.L.; Meyer, J.S.

    1984-09-01

    Classically, aquatic contaminant fate models predicting a chemical's bioconcentration factor (BCF) are based upon single-compound derived models, yet such BCF predictions may deviate from observed BCFs when physicochemical interactions or biological responses to complex chemical mixture exposures are not adequately considered in the predictive model. Rainbow trout were exposed to oil-shale retort waters. The study was designed to model the potential biological effects produced by exposure to complex chemical mixtures such as solid waste leachates, agricultural runoff, and industrial process waste waters. Chromatographic analysis of aqueous and nonaqueous liquid-liquid reservoir components yielded differences in mixed extraction solvent HPLC profiles of whole fish exposed for 1 and 3 weeks to the highest dilution of the complex chemical mixture when compared to their corresponding control, yet subsequent whole fish extractions at 6, 9, 12, and 15 weeks into exposure demonstrated no qualitative differences between control and exposed fish. Liver extractions and deproteinized bile samples from exposed fish were qualitatively different from their corresponding controls. These findings support the projected NOEC of 0.0045% dilution, even though the differences in bioconcentration profiles suggest hazard assessment strategies may be useful in evaluating environmental fate processes associated with complex chemical mixtures. 12 references, 4 figures, 2 tables.

  6. Investigating the Link Between Radiologists Gaze, Diagnostic Decision, and Image Content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.

  7. Spatial prediction of wheat Septoria leaf blotch (Septoria tritici) disease severity in central Ethiopia

    USGS Publications Warehouse

    Wakie, Tewodros; Kumar, Sunil; Senay, Gabriel; Takele, Abera; Lencho, Alemu

    2016-01-01

    A number of studies have reported the presence of wheat septoria leaf blotch (Septoria tritici; SLB) disease in Ethiopia. However, the environmental factors associated with SLB disease, and areas under risk of SLB disease, have not been studied. Here, we tested the hypothesis that environmental variables can adequately explain observed SLB disease severity levels in West Shewa, Central Ethiopia. Specifically, we identified 50 environmental variables and assessed their relationships with SLB disease severity. Geographically referenced disease severity data were obtained from the field, and linear regression and Boosted Regression Trees (BRT) modeling approaches were used for developing spatial models. Moderate-resolution imaging spectroradiometer (MODIS) derived vegetation indices and land surface temperature (LST) variables highly influenced SLB model predictions. Soil and topographic variables did not sufficiently explain observed SLB disease severity variation in this study. Our results show that wheat growing areas in Central Ethiopia, including highly productive districts, are at risk of SLB disease. The study demonstrates the integration of field data with modeling approaches such as BRT for predicting the spatial patterns of severity of a pathogenic wheat disease in Central Ethiopia. Our results can aid Ethiopia's wheat disease monitoring efforts, while our methods can be replicated for testing related hypotheses elsewhere.
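
    Boosted Regression Trees are most often fitted with dedicated R packages; as a rough Python analogue of the workflow, the sketch below uses scikit-learn's gradient boosting with permutation importance to rank covariates. The file name, covariate names, and hyperparameters are assumptions for illustration, not the study's data or settings.

      import pandas as pd
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      df = pd.read_csv("slb_severity_covariates.csv")     # assumed file/columns
      X = df[["ndvi", "evi", "lst_day", "lst_night", "elevation", "slope", "soil_ph"]]
      y = df["slb_severity"]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      brt = GradientBoostingRegressor(n_estimators=1000, learning_rate=0.01,
                                      max_depth=3, subsample=0.5, random_state=0)
      brt.fit(X_tr, y_tr)
      print("held-out R^2:", brt.score(X_te, y_te))

      # Rank covariates by permutation importance on the held-out data
      imp = permutation_importance(brt, X_te, y_te, n_repeats=20, random_state=0)
      for name, val in sorted(zip(X.columns, imp.importances_mean),
                              key=lambda t: -t[1]):
          print(f"{name}: {val:.3f}")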

  8. Quantitative prediction of repaglinide-rifampicin complex drug interactions using dynamic and static mechanistic models: delineating differential CYP3A4 induction and OATP1B1 inhibition potential of rifampicin.

    PubMed

    Varma, Manthena V S; Lin, Jian; Bi, Yi-An; Rotter, Charles J; Fahmi, Odette A; Lam, Justine L; El-Kattan, Ayman F; Goosen, Theunis C; Lai, Yurong

    2013-05-01

    Repaglinide is mainly metabolized by cytochrome P450 enzymes CYP2C8 and CYP3A4, and it is also a substrate to a hepatic uptake transporter, organic anion transporting polypeptide (OATP)1B1. The purpose of this study is to predict the dosing time-dependent pharmacokinetic interactions of repaglinide with rifampicin, using mechanistic models. In vitro hepatic transport of repaglinide, characterized using sandwich-cultured human hepatocytes, and intrinsic metabolic parameters were used to build a dynamic whole-body physiologically-based pharmacokinetic (PBPK) model. The PBPK model adequately described repaglinide plasma concentration-time profiles and successfully predicted area under the plasma concentration-time curve ratios of repaglinide (within ± 25% error), dosed (staggered 0-24 hours) after rifampicin treatment when primarily considering induction of CYP3A4 and reversible inhibition of OATP1B1 by rifampicin. Further, a static mechanistic "extended net-effect" model incorporating transport and metabolic disposition parameters of repaglinide and interaction potency of rifampicin was devised. Predictions based on the static model are similar to those observed in the clinic (average error ∼19%) and to those based on the PBPK model. Both the models suggested that the combined effect of increased gut extraction and decreased hepatic uptake caused minimal repaglinide systemic exposure change when repaglinide is dosed simultaneously or 1 hour after the rifampicin dose. On the other hand, isolated induction effect as a result of temporal separation of the two drugs translated to an approximate 5-fold reduction in repaglinide systemic exposure. In conclusion, both dynamic and static mechanistic models are instrumental in delineating the quantitative contribution of transport and metabolism in the dosing time-dependent repaglinide-rifampicin interactions.

  9. A dynamic model for plant growth: validation study under changing temperatures

    NASA Technical Reports Server (NTRS)

    Wann, M.; Raper, C. D. Jr; Raper CD, J. r. (Principal Investigator)

    1984-01-01

    A dynamic simulation model to describe vegetative growth of plants, for which some functions and parameter values have been estimated previously by optimization search techniques and numerical experimentation based on data from constant temperature experiments, is validated under conditions of changing temperatures. To test the predictive capacity of the model, dry matter accumulation in the leaves, stems, and roots of tobacco plants (Nicotiana tabacum L.) was measured at 2- or 3-day intervals during a 5-week period when temperatures in controlled-environment rooms were programmed for changes at weekly and daily intervals and in ascending or descending sequences within a range of 14 to 34 degrees C. Simulations of dry matter accumulation and distribution were carried out using the programmed changes for experimental temperatures and compared with the measured values. The agreement between measured and predicted values was close and indicates that the temperature-dependent functional forms derived from constant-temperature experiments are adequate for modelling plant growth responses to conditions of changing temperatures with switching intervals as short as 1 day.

  10. A Particle Representation Model for the Deformation of Homogeneous Turbulence

    NASA Technical Reports Server (NTRS)

    Kassinos, S. C.; Reynolds, W. C.

    1996-01-01

    In simple flows, where the mean deformation rates are mild and the turbulence has time to come to equilibrium with the mean flow, the Reynolds stresses are determined by the applied strain rate. Hence in these flows, it is often adequate to use an eddy-viscosity representation. The modern family of kappa-epsilon models has been very useful in predicting near equilibrium turbulent flows, where the rms deformation rate S is small compared to the reciprocal time scale of the turbulence (epsilon/kappa). In modern engineering applications, turbulence models are quite often required to predict flows with very rapid deformations (large S kappa/epsilon). In these flows, the structure takes some time to respond and eddy viscosity models are inadequate. The response of turbulence to rapid deformations is given by rapid distortion theory (RDT). Under RDT the nonlinear effects due to turbulence-turbulence interactions are neglected in the governing equations, but even when linearized in this fashion, the governing equations are unclosed at the one-point level due to the non-locality of the pressure fluctuations.

  11. Vibration test of 1/5 scale H-II launch vehicle

    NASA Astrophysics Data System (ADS)

    Morino, Yoshiki; Komatsu, Keiji; Sano, Masaaki; Minegishi, Masakatsu; Morita, Toshiyuki; Kohsetsu, Y.

    In order to predict dynamic loads on the newly designed Japanese H-II launch vehicle, the adequacy of prediction methods has been assessed by dynamic scale-model testing. The three-dimensional dynamic model was used in the analysis to express coupling effects among axial, lateral (pitch and yaw) and torsional vibrations. The liquid/tank interaction was considered by use of a boundary element method. The 1/5 scale model of the H-II launch vehicle was designed to simulate stiffness and mass properties of important structural parts, such as core/SRB junctions, first and second stage LOX tanks and engine mount structures. Modal excitation of the test vehicle was accomplished with 100-1000 N shakers which produced random or sinusoidal vibrational forces. The vibrational response of the test vehicle was measured at various locations with accelerometers and pressure sensors. In the lower frequency range, correspondence between analysis and experiment was generally good. The basic procedures in analysis seem to be adequate so far, but some improvements in mathematical modeling are suggested by comparison of test and analysis.

  12. An investigation of the information propagation and entropy transport aspects of Stirling machine numerical simulation

    NASA Technical Reports Server (NTRS)

    Goldberg, Louis F.

    1992-01-01

    Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.

  13. Body shape changes in the elderly and the influence of density assumptions on segment inertia parameters

    NASA Astrophysics Data System (ADS)

    Jensen, Robert K.; Fletcher, P.; Abraham, C.

    1991-04-01

    The segment mass proportions and moments of inertia of a sample of twelve females and seven males with mean ages of 67.4 and 69.5 years were estimated using textbook proportions based on cadaver studies. These were then compared with the parameters calculated using a mathematical model, the zone method. The methodology of the model was fully evaluated for accuracy and precision and judged to be adequate. The results of the comparisons show that for some segments female parameters are quite different from male parameters and inadequately predicted by the cadaver proportions. The largest discrepancies were for the thigh and the trunk. The cadaver predictions were generally less than satisfactory, although the common variance for some segments was moderately high. The use of non-linear regression and segment anthropometry was illustrated for the thigh moments of inertia and appears to be appropriate. However, the predictions from cadaver data need to be examined fully. These results are dependent on the changes in mass and density distribution which occur with aging and the changes which occur with cadaver samples prior to and following death.

  14. Optimization of Computational Performance and Accuracy in 3-D Transient CFD Model for CFB Hydrodynamics Predictions

    NASA Astrophysics Data System (ADS)

    Rampidis, I.; Nikolopoulos, A.; Koukouzas, N.; Grammelis, P.; Kakaras, E.

    2007-09-01

    This work aims to present a pure 3-D CFD model, accurate and efficient, for the simulation of pilot-scale CFB hydrodynamics. The accuracy of the model was investigated as a function of the numerical parameters, in order to derive an optimum model setup with respect to computational cost. The necessity of an in-depth examination of hydrodynamics emerges from the trend to scale up CFBCs. This scale-up brings forward numerous design problems and uncertainties, which can be successfully elucidated by CFD techniques. Deriving guidelines for setting up a computationally efficient model is important as the scale of CFBs grows fast, while computational power is limited. However, the optimum efficiency matter has not been investigated thoroughly in the literature, as authors have been more concerned with their models' accuracy and validity. The objective of this work is to investigate the parameters that influence the efficiency and accuracy of CFB computational fluid dynamics models, find the optimum set of these parameters and thus establish this technique as a competitive method for the simulation and design of industrial, large scale beds, where the computational cost is otherwise prohibitive. During the tests that were performed in this work, the influence of the turbulence modelling approach, temporal and spatial grid density, and discretization schemes was investigated on a 1.2 MWth CFB test rig. Using Fourier analysis, dominant frequencies were extracted in order to estimate an adequate time period for the averaging of all instantaneous values. Agreement with the experimental measurements was very good. The basic differences between the predictions that arose from the various model setups were pointed out and analyzed. The results showed that a model with high-order spatial discretization schemes applied on a coarse grid, with averaging of the instantaneous scalar values over a 20 s period, adequately described the transient hydrodynamic behaviour of a pilot CFB while keeping the computational cost low. Flow patterns inside the bed, such as the core-annulus flow and the transport of clusters, were at least qualitatively captured.
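
    The Fourier step described above (extracting a dominant frequency from a monitored signal to choose an averaging window) is straightforward to reproduce. The sketch below applies it to a synthetic pressure trace with NumPy; the signal, sampling interval, and the choice of roughly 25 dominant-frequency cycles for the window are illustrative assumptions.

      import numpy as np

      dt = 0.005                                        # sampling interval, s
      t = np.arange(0.0, 40.0, dt)
      rng = np.random.default_rng(0)
      p = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.normal(size=t.size)  # placeholder signal

      spec = np.abs(np.fft.rfft(p - p.mean()))
      freqs = np.fft.rfftfreq(p.size, d=dt)
      f_dom = freqs[np.argmax(spec[1:]) + 1]            # skip the zero-frequency bin

      n_cycles = 25                                     # cover many dominant cycles
      print(f"dominant frequency ~ {f_dom:.2f} Hz, "
            f"averaging window ~ {n_cycles / f_dom:.1f} s")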

  15. Equilibrium, kinetic, and reactive transport models for plutonium

    NASA Astrophysics Data System (ADS)

    Schwantes, Jon Michael

    Equilibrium, kinetic, and reactive transport models for plutonium (Pu) have been developed to help meet environmental concerns posed by past war-related and present and future peacetime nuclear technologies. A thorough review of the literature identified several hurdles that needed to be overcome in order to develop capable predictive tools for Pu. These hurdles include: (1) missing or ill-defined chemical equilibrium and kinetic constants for environmentally important Pu species; (2) no adequate conceptual model describing the formation of Pu oxy/hydroxide colloids and solids; and (3) an inability of two-phase reactive transport models to adequately simulate Pu behavior in the presence of colloids. A computer program called INVRS K was developed that integrates the geochemical modeling software of PHREEQC with a nonlinear regression routine. This program provides a tool for estimating equilibrium and kinetic constants from experimental data. INVRS K was used to regress on binding constants for Pu sorbing onto various mineral and humic surfaces. These constants enhance the thermodynamic database for Pu and improve the capability of current predictive tools. Time and temperature studies of the Pu intrinsic colloid were also conducted and results of these studies were presented here. Formation constants for the fresh and aged Pu intrinsic colloid were regressed upon using INVRS K. From these results, it was possible to develop a cohesive diagenetic model that describes the formation of Pu oxy/hydroxide colloids and solids. This model provides for the first time a means of deciphering historically unexplained observations with respect to the Pu intrinsic colloid, as well as a basis for simulating the behavior within systems containing these solids. Discussion of the development and application of reactive transport models is also presented and includes: (1) the general application of a 1-D in flow, three-phase (i.e., dissolved, solid, and colloidal), reactive transport model; (2) a simulation of the effects of dissolution of PuO2 solid and radiolysis on the behavior of Pu diffusing out of a confined pore space; and (3) application of a steady-state three phase reactive transport model to groundwater at the Nevada Test Site.
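
    The regression step performed by a tool of this kind can be pictured with a much simpler stand-in. The sketch below fits a single-site Langmuir sorption isotherm to synthetic sorption data with SciPy's curve_fit; the data, the isotherm form, and the starting values are illustrative assumptions and are not taken from INVRS K, PHREEQC, or the actual Pu constants regressed in this work.

      import numpy as np
      from scipy.optimize import curve_fit

      # Synthetic sorption data: aqueous concentration C (mol/L) vs sorbed q (mol/kg)
      C = np.array([1e-9, 5e-9, 1e-8, 5e-8, 1e-7, 5e-7])
      q = np.array([3.0e-7, 1.4e-6, 2.7e-6, 1.0e-5, 1.5e-5, 2.5e-5])

      def langmuir(C, K, q_max):
          # Single-site Langmuir isotherm; K plays the role of a binding constant
          return q_max * K * C / (1.0 + K * C)

      (K, q_max), cov = curve_fit(langmuir, C, q, p0=(1e7, 1e-5))
      K_err = np.sqrt(cov[0, 0])
      print(f"log10 K = {np.log10(K):.2f} +/- {K_err / (K * np.log(10)):.2f}")
      print(f"q_max = {q_max:.2e} mol/kg")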

  16. A modeling and simulation approach to characterize methadone QT prolongation using pooled data from five clinical trials in MMT patients.

    PubMed

    Florian, J; Garnett, C E; Nallani, S C; Rappaport, B A; Throckmorton, D C

    2012-04-01

    Pharmacokinetic (PK)-pharmacodynamic modeling and simulation were used to establish a link between methadone dose, concentrations, and Fridericia rate-corrected QT (QTcF) interval prolongation, and to identify a dose that was associated with increased risk of developing torsade de pointes. A linear relationship between concentration and QTcF described the data from five clinical trials in patients on methadone maintenance treatment (MMT). A previously published population PK model adequately described the concentration-time data, and this model was used for simulation. QTcF was increased by a mean (90% confidence interval (CI)) of 17 (12, 22) ms per 1,000 ng/ml of methadone. Based on this model, doses >120 mg/day would increase the QTcF interval by >20 ms. The model predicts that 1-3% of patients would have ΔQTcF >60 ms, and 0.3-2.0% of patients would have QTcF >500 ms at doses of 160-200 mg/day. Our predictions are consistent with available observational data and support the need for electrocardiogram (ECG) monitoring and arrhythmia risk factor assessment in patients receiving methadone doses >120 mg/day.
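
    The reported exposure-response slope can be applied directly. The short sketch below converts hypothetical methadone concentrations into predicted mean QTcF changes using the 17 ms (90% CI 12-22 ms) per 1,000 ng/ml relationship quoted above; the example concentrations are arbitrary inputs, not simulated trial exposures.

      # Linear concentration-QTcF slope from the abstract: 17 ms per 1,000 ng/ml
      slope, lo, hi = 17 / 1000.0, 12 / 1000.0, 22 / 1000.0   # ms per ng/ml

      for conc in (250, 500, 1000, 1500):                      # ng/ml, illustrative
          print(f"{conc:5d} ng/ml -> mean dQTcF ~ {slope * conc:4.1f} ms "
                f"(90% CI {lo * conc:.1f}-{hi * conc:.1f} ms)")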

  17. A clustering-based fuzzy wavelet neural network model for short-term load forecasting.

    PubMed

    Kodogiannis, Vassilis S; Amina, Mahdi; Petrounias, Ilias

    2013-10-01

    Load forecasting is a critical element of power system operation, involving prediction of the future level of demand to serve as the basis for supply and demand planning. This paper presents the development of a novel clustering-based fuzzy wavelet neural network (CB-FWNN) model and validates its prediction on the short-term electric load forecasting of the Power System of the Greek Island of Crete. The proposed model is obtained from the traditional Takagi-Sugeno-Kang fuzzy system by replacing the THEN part of fuzzy rules with a "multiplication" wavelet neural network (MWNN). Multidimensional Gaussian activation functions have been used in the IF part of the fuzzy rules. A Fuzzy Subtractive Clustering scheme is employed as a pre-processing technique to find the initial set and an adequate number of clusters, and ultimately the number of multiplication nodes in the MWNN, while Gaussian Mixture Models with the Expectation Maximization algorithm are utilized for the definition of the multidimensional Gaussians. The results corresponding to the minimum and maximum power load indicate that the proposed load forecasting model provides significantly more accurate forecasts than conventional neural network models.
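
    The mixture-model part of the pre-processing can be illustrated with scikit-learn. The sketch below fits Gaussian Mixture Models by EM to synthetic load patterns and selects the number of components by BIC; note that the paper uses Fuzzy Subtractive Clustering rather than BIC to choose the cluster count, so this is a simplified stand-in, and the data are placeholders.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      # Placeholder "daily load profiles" (rows = days, columns = 24 hourly loads)
      X = np.vstack([rng.normal(loc=m, scale=0.5, size=(100, 24))
                     for m in (1.0, 2.5, 4.0)])

      # Fit mixtures of increasing size; the selected components would seed the
      # multidimensional Gaussians in the IF part of the fuzzy rules
      fits = [GaussianMixture(n_components=k, random_state=0).fit(X)
              for k in range(2, 8)]
      best = min(fits, key=lambda g: g.bic(X))
      print("selected components:", best.n_components, "BIC:", round(best.bic(X), 1))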

  18. Modeling Soil Organic Carbon at Regional Scale by Combining Multi-Spectral Images with Laboratory Spectra.

    PubMed

    Peng, Yi; Xiong, Xiong; Adhikari, Kabindra; Knadel, Maria; Grunwald, Sabine; Greve, Mogens Humlekrog

    2015-01-01

    There is a great challenge in combining soil proximal spectra and remote sensing spectra to improve the accuracy of soil organic carbon (SOC) models. This is primarily because mixing of spectral data from different sources and technologies to improve soil models is still in its infancy. The first objective of this study was to integrate information of SOC derived from visible near-infrared reflectance (Vis-NIR) spectra in the laboratory with remote sensing (RS) images to improve predictions of topsoil SOC in the Skjern river catchment, Denmark. The second objective was to improve SOC prediction results by separately modeling uplands and wetlands. A total of 328 topsoil samples were collected and analyzed for SOC. Satellite Pour l'Observation de la Terre (SPOT5), Landsat Data Continuity Mission (Landsat 8) images, laboratory Vis-NIR and other ancillary environmental data including terrain parameters and soil maps were compiled to predict topsoil SOC using Cubist regression and Bayesian kriging. The results showed that the model developed from RS data, ancillary environmental data and laboratory spectral data yielded a lower root mean square error (RMSE) (2.8%) and higher R2 (0.59) than the model developed from only RS data and ancillary environmental data (RMSE: 3.6%, R2: 0.46). Plant-available water (PAW) was the most important predictor for all the models because of its close relationship with soil organic matter content. Moreover, vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), were very important predictors in SOC spatial models. Furthermore, the 'upland model' was able to more accurately predict SOC compared with the 'upland & wetland model'. However, the separately calibrated 'upland and wetland model' did not improve the prediction accuracy for wetland sites, since it was not possible to adequately discriminate the vegetation in the RS summer images. We conclude that laboratory Vis-NIR spectroscopy adds critical information that significantly improves the prediction accuracy of SOC compared to using RS data alone. We recommend the incorporation of laboratory spectra with RS data and other environmental data to improve soil spatial modeling and digital soil mapping (DSM).

  19. Updating Known Distribution Models for Forecasting Climate Change Impact on Endangered Species

    PubMed Central

    Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo

    2013-01-01

    To plan endangered species conservation and to design adequate management programmes, it is necessary to predict their distributional response to climate change, especially under the current situation of rapid change. However, these predictions are customarily done by relating de novo the distribution of the species with climatic conditions with no regard of previously available knowledge about the factors affecting the species distribution. We propose to take advantage of known species distribution models, but proceeding to update them with the variables yielded by climatic models before projecting them to the future. To exemplify our proposal, the availability of suitable habitat across Spain for the endangered Bonelli's Eagle (Aquila fasciata) was modelled by updating a pre-existing model based on current climate and topography to a combination of different general circulation models and Special Report on Emissions Scenarios. Our results suggested that the main threat for this endangered species would not be climate change, since all forecasting models show that its distribution will be maintained and increased in mainland Spain for all the XXI century. We remark on the importance of linking conservation biology with distribution modelling by updating existing models, frequently available for endangered species, considering all the known factors conditioning the species' distribution, instead of building new models that are based on climate change variables only. PMID:23840330

  20. Updating known distribution models for forecasting climate change impact on endangered species.

    PubMed

    Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo

    2013-01-01

    To plan endangered species conservation and to design adequate management programmes, it is necessary to predict their distributional response to climate change, especially under the current situation of rapid change. However, these predictions are customarily done by relating de novo the distribution of the species with climatic conditions with no regard of previously available knowledge about the factors affecting the species distribution. We propose to take advantage of known species distribution models, but proceeding to update them with the variables yielded by climatic models before projecting them to the future. To exemplify our proposal, the availability of suitable habitat across Spain for the endangered Bonelli's Eagle (Aquila fasciata) was modelled by updating a pre-existing model based on current climate and topography to a combination of different general circulation models and Special Report on Emissions Scenarios. Our results suggested that the main threat for this endangered species would not be climate change, since all forecasting models show that its distribution will be maintained and increased in mainland Spain for all the XXI century. We remark on the importance of linking conservation biology with distribution modelling by updating existing models, frequently available for endangered species, considering all the known factors conditioning the species' distribution, instead of building new models that are based on climate change variables only.

  1. Remotely sensed rice yield prediction using multi-temporal NDVI data derived from NOAA's-AVHRR.

    PubMed

    Huang, Jingfeng; Wang, Xiuzhen; Li, Xinxing; Tian, Hanqin; Pan, Zhuokun

    2013-01-01

    Grain-yield prediction using remotely sensed data have been intensively studied in wheat and maize, but such information is limited in rice, barley, oats and soybeans. The present study proposes a new framework for rice-yield prediction, which eliminates the influence of the technology development, fertilizer application, and management improvement and can be used for the development and implementation of provincial rice-yield predictions. The technique requires the collection of remotely sensed data over an adequate time frame and a corresponding record of the region's crop yields. Longer normalized-difference-vegetation-index (NDVI) time series are preferable to shorter ones for the purposes of rice-yield prediction because the well-contrasted seasons in a longer time series provide the opportunity to build regression models with a wide application range. A regression analysis of the yield versus the year indicated an annual gain in the rice yield of 50 to 128 kg ha(-1). Stepwise regression models for the remotely sensed rice-yield predictions have been developed for five typical rice-growing provinces in China. The prediction models for the remotely sensed rice yield indicated that the influences of the NDVIs on the rice yield were always positive. The association between the predicted and observed rice yields was highly significant without obvious outliers from 1982 to 2004. Independent validation found that the overall relative error is approximately 5.82%, and a majority of the relative errors were less than 5% in 2005 and 2006, depending on the study area. The proposed models can be used in an operational context to predict rice yields at the provincial level in China. The methodologies described in the present paper can be applied to any crop for which a sufficient time series of NDVI data and the corresponding historical yield information are available, as long as the historical yield increases significantly.
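
    The framework sketched above (detrend the yield series, then select NDVI periods by stepwise regression) can be mocked up in a few lines. The sketch below uses synthetic data and scikit-learn, with forward selection driven by cross-validated R^2 rather than the F-tests of classical stepwise regression; all variable names and values are placeholders, not the AVHRR series or provincial yield statistics.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      years = 23                                          # placeholder for 1982-2004
      ndvi = rng.normal(0.5, 0.1, size=(years, 12))       # 12 NDVI composites per season
      yield_t = 4.0 + 0.08 * np.arange(years) + 2.0 * ndvi[:, 6] \
                + rng.normal(0, 0.1, years)

      # Remove the technology/management trend (the 50-128 kg/ha annual gain)
      trend = np.polyval(np.polyfit(np.arange(years), yield_t, 1), np.arange(years))
      anomaly = yield_t - trend

      # Forward stepwise selection of NDVI periods by cross-validated R^2
      selected, remaining, best = [], list(range(ndvi.shape[1])), -np.inf
      while remaining:
          scores = {j: cross_val_score(LinearRegression(), ndvi[:, selected + [j]],
                                       anomaly, cv=5, scoring="r2").mean()
                    for j in remaining}
          j_star = max(scores, key=scores.get)
          if scores[j_star] <= best:
              break
          best = scores[j_star]
          selected.append(j_star)
          remaining.remove(j_star)
      print("selected NDVI periods:", selected, "CV R^2:", round(best, 2))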

  2. Remotely Sensed Rice Yield Prediction Using Multi-Temporal NDVI Data Derived from NOAA's-AVHRR

    PubMed Central

    Huang, Jingfeng; Wang, Xiuzhen; Li, Xinxing; Tian, Hanqin; Pan, Zhuokun

    2013-01-01

    Grain-yield prediction using remotely sensed data has been intensively studied in wheat and maize, but such information is limited for rice, barley, oats and soybeans. The present study proposes a new framework for rice-yield prediction, which eliminates the influence of technology development, fertilizer application, and management improvement and can be used for the development and implementation of provincial rice-yield predictions. The technique requires the collection of remotely sensed data over an adequate time frame and a corresponding record of the region's crop yields. Longer normalized-difference-vegetation-index (NDVI) time series are preferable to shorter ones for the purposes of rice-yield prediction because the well-contrasted seasons in a longer time series provide the opportunity to build regression models with a wide application range. A regression analysis of the yield versus the year indicated an annual gain in the rice yield of 50 to 128 kg ha−1. Stepwise regression models for the remotely sensed rice-yield predictions were developed for five typical rice-growing provinces in China. The prediction models for the remotely sensed rice yield indicated that the influences of the NDVIs on the rice yield were always positive. The association between the predicted and observed rice yields was highly significant without obvious outliers from 1982 to 2004. Independent validation found that the overall relative error is approximately 5.82%, and a majority of the relative errors were less than 5% in 2005 and 2006, depending on the study area. The proposed models can be used in an operational context to predict rice yields at the provincial level in China. The methodologies described in the present paper can be applied to any crop for which a sufficient time series of NDVI data and the corresponding historical yield information are available, as long as the historical yield increases significantly. PMID:23967112

  3. Modelling impacts of performance on the probability of reproducing, and thereby on productive lifespan, allow prediction of lifetime efficiency in dairy cows.

    PubMed

    Phuong, H N; Blavy, P; Martin, O; Schmidely, P; Friggens, N C

    2016-01-01

    Reproductive success is a key component of lifetime efficiency, defined as the ratio of energy in milk (MJ) to energy intake (MJ) over the lifespan of a cow. At the animal level, breeding and feeding management can substantially impact milk yield, body condition and energy balance of cows, which are known as major contributors to reproductive failure in dairy cattle. This study extended an existing lifetime performance model to incorporate the impacts that performance changes due to changing breeding and feeding strategies have on the probability of reproducing and thereby on the productive lifespan, and thus allow the prediction of a cow's lifetime efficiency. The model is dynamic and stochastic, with an individual cow being the unit modelled and one day being the unit of time. To evaluate the model, data from a French study including Holstein and Normande cows fed high-concentrate diets and data from a Scottish study including Holstein cows selected for high and average genetic merit for fat plus protein that were fed high- v. low-concentrate diets were used. Generally, the model consistently simulated the productive and reproductive performance of various genotypes of cows across feeding systems. In the French data, the model adequately simulated the reproductive performance of Holsteins but significantly under-predicted that of Normande cows. In the Scottish data, conception to first service was comparably simulated, whereas interval traits were slightly under-predicted. Selection for greater milk production impaired reproductive performance and lifespan but not lifetime efficiency. The definition of lifetime efficiency used in this model did not include associated costs or herd-level effects. Further work should include such economic indicators to allow more accurate simulation of lifetime profitability in different production scenarios.

  4. Quantifying the uncertainty of nonpoint source attribution in distributed water quality models: A Bayesian assessment of SWAT's sediment export predictions

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Long, Tanya; Boyd, Duncan

    2014-11-01

    Spatially distributed nonpoint source watershed models are essential tools to estimate the magnitude and sources of diffuse pollution. However, little work has been undertaken to understand the sources and ramifications of the uncertainty involved in their use. In this study we conduct the first Bayesian uncertainty analysis of the water quality components of the SWAT model, one of the most commonly used distributed nonpoint source models. Working in Southern Ontario, we apply three Bayesian configurations for calibrating SWAT to Redhill Creek, an urban catchment, and Grindstone Creek, an agricultural one. We answer four interrelated questions: can SWAT determine suspended sediment sources with confidence when end-of-basin data are used for calibration? How does uncertainty propagate from the discharge submodel to the suspended sediment submodels? Do the estimated sediment sources vary when different calibration approaches are used? Can we combine the knowledge gained from different calibration approaches? We show that: (i) despite reasonable fit at the basin outlet, the simulated sediment sources are subject to uncertainty sufficient to undermine the typical approach of reliance on a single, best fit simulation; (ii) more than a third of the uncertainty of sediment load predictions may stem from the discharge submodel; (iii) estimated sediment sources do vary significantly across the three statistical configurations of model calibration despite end-of-basin predictions being virtually identical; and (iv) Bayesian model averaging is an approach that can synthesize predictions when a number of adequate distributed models make divergent source apportionments. We conclude with recommendations for future research to reduce the uncertainty encountered when using distributed nonpoint source models for source apportionment.
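
    The Bayesian model averaging step mentioned in point (iv) can be illustrated with a short sketch: each calibrated configuration contributes posterior samples of the sediment-source shares, weighted by an estimate of its marginal likelihood. Everything below (the three configurations, their log marginal likelihoods, and the Dirichlet-style samples) is invented illustrative data, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Posterior samples of sediment-source fractions (urban, bank, agricultural)
# from three calibration configurations (synthetic stand-ins).
configs = {
    "cfg_A": rng.dirichlet([8, 3, 2], size=2000),
    "cfg_B": rng.dirichlet([4, 5, 4], size=2000),
    "cfg_C": rng.dirichlet([2, 2, 9], size=2000),
}

# Relative log marginal likelihoods (assumed numbers for illustration).
log_ml = {"cfg_A": -512.3, "cfg_B": -510.8, "cfg_C": -514.1}
w = np.array([log_ml[k] for k in configs])
w = np.exp(w - w.max())
w /= w.sum()                       # BMA weights

# Mix the posteriors: resample each configuration in proportion to its weight.
n_total = 6000
draws = np.vstack([
    samples[rng.integers(0, len(samples), int(round(wi * n_total)))]
    for wi, samples in zip(w, configs.values())
])

mean = draws.mean(axis=0)
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
for name, m, a, b in zip(["urban", "bank", "agricultural"], mean, lo, hi):
    print(f"{name:13s} BMA share = {m:.2f}  (95% CrI {a:.2f}-{b:.2f})")
```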

  5. Longitudinal train dynamics model for a rail transit simulation system

    DOE PAGES

    Wang, Jinghui; Rakha, Hesham A.

    2018-01-01

    The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
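
    A calibration of the kind described above, formulated as a constrained non-linear optimization, can be sketched with scipy: a simple traction-minus-resistance acceleration model whose parameters are fit by minimizing the squared error between predicted and observed accelerations, subject to bounds. The functional form and the synthetic observations below are assumptions for illustration; the paper's actual formulation may differ.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic "field" data: speed (m/s) and observed acceleration (m/s^2).
v_obs = np.linspace(0.5, 22.0, 120)
a_obs = 1.1 * (1 - v_obs / 25.0) - (0.02 + 0.0009 * v_obs**2) + rng.normal(0, 0.03, v_obs.size)

def model_accel(params, v):
    """Illustrative longitudinal model: linear traction fade minus a
    Davis-style quadratic resistance term (assumed functional form)."""
    a_max, v_lim, r0, r2 = params
    return a_max * (1 - v / v_lim) - (r0 + r2 * v**2)

def objective(params):
    return np.mean((model_accel(params, v_obs) - a_obs) ** 2)

res = minimize(
    objective,
    x0=[1.0, 30.0, 0.05, 0.001],
    bounds=[(0.1, 2.0), (10.0, 40.0), (0.0, 0.2), (0.0, 0.01)],
    method="L-BFGS-B",
)

print("calibrated parameters:", np.round(res.x, 4))
print("RMSE (m/s^2): %.4f" % np.sqrt(res.fun))
```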

  6. Longitudinal train dynamics model for a rail transit simulation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jinghui; Rakha, Hesham A.

    The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.

  7. Modeling of Hydrate Formation Mode in Raw Natural Gas Air Coolers

    NASA Astrophysics Data System (ADS)

    Scherbinin, S. V.; Prakhova, M. Yu; Krasnov, A. N.; Khoroshavina, E. A.

    2018-05-01

    Air cooling units (ACU) are used at all gas fields for cooling natural gas after compression. When ACUs operate on raw (wet) gas under low-temperature conditions, there is a danger of hydrate plug formation in the heat-exchanging tubes of the ACU. To predict possible hydrate formation, the mathematical model of the air cooler's thermal behavior used in the control system must adequately calculate not only the gas temperature at the cooler's outlet but also the dew point (the temperature at which condensation onsets) and the gas hydrate formation point. This paper proposes a mathematical model that allows one to determine the pressure in the air cooler at which hydrate formation becomes possible for a given gas composition.

  8. A simple model to estimate the impact of sea-level rise on platform beaches

    NASA Astrophysics Data System (ADS)

    Taborda, Rui; Ribeiro, Mónica Afonso

    2015-04-01

    Estimates of future beach evolution in response to sea-level rise are needed to assess coastal vulnerability. A research gap is identified in providing adequate predictive methods to use for platform beaches. This work describes a simple model to evaluate the effects of sea-level rise on platform beaches that relies on the conservation of beach sand volume and assumes an invariant beach profile shape. In closed systems, when compared with the Inundation Model, results show larger retreats; the differences are higher for beaches with wide berms and when the shore platform develops at shallow depths. The application of the proposed model to Cascais (Portugal) beaches, using 21st century sea-level rise scenarios, shows that there will be a significant reduction in beach width.
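
    A sand-volume-conservation retreat estimate of the kind described above can be written down in a few lines: the active profile is translated upward by the sea-level rise and landward until the eroded and deposited sand volumes balance, with the profile shape held fixed. The Bruun-type geometry below (an active profile of fixed width between the berm crest and the shore platform) is an assumed simplification standing in for the authors' formulation, and the numbers are illustrative rather than taken from the Cascais case study.

```python
def volume_balance_retreat(slr, profile_width, berm_height, platform_depth):
    """Shoreline retreat from a volume-conservation argument (Bruun-type):
    the sand needed to raise the active profile by `slr` is supplied by
    landward retreat R, so R * (berm_height + platform_depth) = slr * profile_width.
    All lengths in metres. This is an assumed stand-in for the paper's model."""
    return slr * profile_width / (berm_height + platform_depth)

def inundation_retreat(slr, beach_slope):
    """Passive-flooding estimate for comparison: R = slr / tan(beta)."""
    return slr / beach_slope

# Illustrative numbers (not from the study).
slr = 0.5                     # m of sea-level rise by 2100
print("volume-balance retreat: %.1f m" % volume_balance_retreat(slr, profile_width=80.0,
                                                                berm_height=2.5,
                                                                platform_depth=1.0))
print("inundation-only retreat: %.1f m" % inundation_retreat(slr, beach_slope=0.05))
```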

  9. ABM and GIS-based multi-scenarios volcanic evacuation modelling of Merapi

    NASA Astrophysics Data System (ADS)

    Jumadi; Carver, Steve; Quincey, Duncan

    2016-05-01

    Conducting an effective evacuation is one of the keys to dealing with such a crisis. Therefore, a plan that considers the probability of the spatial extent of the hazard occurrences is needed. An evacuation plan for Merapi had already been prepared before the eruption in 2010; however, the plan could not be executed because the eruption magnitude was larger than predicted. Under these conditions, the extent of the hazardous area grew beyond that of the prepared hazard model. Managing such an unpredicted situation requires adequate information that is flexible and adaptable to the current situation. Therefore, we applied an agent-based model (ABM) and a Geographic Information System (GIS) with multi-scenario hazard models to support evacuation management. The methodology and a case study of Merapi are provided.

  10. Population pharmacokinetics of rifampin in the treatment of Mycobacterium tuberculosis in Asian elephants.

    PubMed

    Egelund, E F; Isaza, R; Brock, A P; Alsultan, A; An, G; Peloquin, C A

    2015-04-01

    The objective of this study was to develop a population pharmacokinetic model for rifampin in elephants. Rifampin concentration data from three sources were pooled to provide a total of 233 oral concentrations from 37 Asian elephants. The population pharmacokinetic models were created using Monolix (version 4.2). Simulations were conducted using ModelRisk. We examined the influence of age, food, sex, and weight as model covariates. We further optimized the dosing of rifampin based upon simulations using the population pharmacokinetic model. Rifampin pharmacokinetics were best described by a one-compartment open model including first-order absorption with a lag time and first-order elimination. Body weight was a significant covariate for volume of distribution, and food intake was a significant covariate for lag time. The median Cmax of 6.07 μg/mL was below the target range of 8-24 μg/mL. Monte Carlo simulations predicted the highest treatable MIC of 0.25 μg/mL with the current initial dosing recommendation of 10 mg/kg, based upon a previously published target AUC0-24/MIC > 271 (fAUC > 41). Simulations from the population model indicate that the current dose of 10 mg/kg may be adequate for MICs up to 0.25 μg/mL. While the targeted AUC/MIC may be adequate for most MICs, the median Cmax for all elephants is below the human and elephant targeted ranges. © 2014 John Wiley & Sons Ltd.
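
    The structural model reported above (one compartment, first-order absorption with a lag time, first-order elimination) can be sketched directly from its analytical solution. The parameter values, the body weight and the dosing below are illustrative assumptions, not the published population estimates.

```python
import numpy as np

def conc_one_compartment(t, dose, ka, ke, V_F, tlag):
    """Oral one-compartment model with absorption lag:
    C(t) = (dose*ka)/(V_F*(ka-ke)) * (exp(-ke*(t-tlag)) - exp(-ka*(t-tlag))) for t > tlag."""
    t_eff = np.clip(t - tlag, 0.0, None)
    c = dose * ka / (V_F * (ka - ke)) * (np.exp(-ke * t_eff) - np.exp(-ka * t_eff))
    return np.where(t > tlag, c, 0.0)

# Illustrative parameters for a 10 mg/kg dose in a 3000 kg elephant (assumed values).
dose_mg = 10 * 3000
t = np.linspace(0, 24, 481)                      # hours
c = conc_one_compartment(t, dose_mg, ka=0.8, ke=0.15, V_F=2500.0, tlag=1.5)

cmax = c.max()
auc_0_24 = np.trapz(c, t)
mic = 0.25
print(f"Cmax = {cmax:.2f} ug/mL, AUC0-24 = {auc_0_24:.0f} ug*h/mL, AUC/MIC = {auc_0_24/mic:.0f}")
```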

  11. Ecological Niche Modeling for Filoviruses: A Risk Map for Ebola and Marburg Virus Disease Outbreaks in Uganda.

    PubMed

    Nyakarahuka, Luke; Ayebare, Samuel; Mosomtai, Gladys; Kankya, Clovice; Lutwama, Julius; Mwiine, Frank Norbert; Skjerve, Eystein

    2017-09-05

    Uganda has reported eight outbreaks caused by filoviruses between 2000 and 2016, more than any other country in the world. We used species distribution modeling to predict where filovirus outbreaks are likely to occur in Uganda to help in epidemic preparedness and surveillance. The MaxEnt software, a machine learning modeling approach that uses presence-only data, was used to establish filovirus-environment relationships. Presence-only data for filovirus outbreaks were collected from the field and online sources. Environmental covariates from Africlim that have been downscaled to a nominal resolution of 1 km x 1 km were used. The final model gave the relative probability of the presence of filoviruses in the study area, obtained from an average of 100 bootstrap runs. Model evaluation was carried out using Receiver Operating Characteristic (ROC) plots. Maps were created using ArcGIS 10.3 mapping software. We showed that bats, as potential reservoirs of filoviruses, are distributed all over Uganda. Potential outbreak areas for Ebola and Marburg virus disease were predicted in the western, southwestern and central parts of Uganda, which correspond to bat distribution and previous filovirus outbreak areas. Additionally, the models predicted the eastern Uganda region and other areas that have not reported outbreaks before to be potential outbreak hotspots. Rainfall variables were the most important in influencing model prediction compared to temperature variables. Despite the limitations of the prediction model due to the lack of adequate sample records for outbreaks, especially for the Marburg cases, the models provided risk maps to the Uganda surveillance system on filovirus outbreaks. The risk maps will aid in identifying areas on which to focus filovirus surveillance for early detection and response, hence curtailing potential epidemics. The results from this study also confirm previous findings that suggest that filoviruses are mainly limited by the amount of rainfall received in an area.
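
    The evaluation step (averaging bootstrap runs and scoring the resulting suitability map with ROC plots) can be illustrated with a small sketch. A regularized logistic model trained on presences versus background points stands in for MaxEnt here, and the synthetic covariates are assumptions, not the Africlim layers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Synthetic stand-ins: 40 outbreak presences and 400 background points with
# two covariates (e.g. annual rainfall, mean temperature).
X_pres = rng.normal([1200, 22], [150, 2], size=(40, 2))
X_back = rng.normal([900, 24], [300, 3], size=(400, 2))
X = np.vstack([X_pres, X_back])
y = np.r_[np.ones(len(X_pres)), np.zeros(len(X_back))]

# Average relative suitability over bootstrap replicates (MaxEnt stand-in).
n_boot, probs = 100, np.zeros(len(X))
for _ in range(n_boot):
    idx = rng.integers(0, len(X), len(X))
    clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    probs += clf.predict_proba(X)[:, 1]
probs /= n_boot

print("bootstrap-averaged ROC AUC: %.3f" % roc_auc_score(y, probs))
```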

  12. Ecological Niche Modeling for Filoviruses: A Risk Map for Ebola and Marburg Virus Disease Outbreaks in Uganda

    PubMed Central

    Nyakarahuka, Luke; Ayebare, Samuel; Mosomtai, Gladys; Kankya, Clovice; Lutwama, Julius; Mwiine, Frank Norbert; Skjerve, Eystein

    2017-01-01

    Introduction: Uganda has reported eight outbreaks caused by filoviruses between 2000 and 2016, more than any other country in the world. We used species distribution modeling to predict where filovirus outbreaks are likely to occur in Uganda to help in epidemic preparedness and surveillance. Methods: The MaxEnt software, a machine learning modeling approach that uses presence-only data, was used to establish filovirus-environment relationships. Presence-only data for filovirus outbreaks were collected from the field and online sources. Environmental covariates from Africlim that have been downscaled to a nominal resolution of 1 km x 1 km were used. The final model gave the relative probability of the presence of filoviruses in the study area, obtained from an average of 100 bootstrap runs. Model evaluation was carried out using Receiver Operating Characteristic (ROC) plots. Maps were created using ArcGIS 10.3 mapping software. Results: We showed that bats, as potential reservoirs of filoviruses, are distributed all over Uganda. Potential outbreak areas for Ebola and Marburg virus disease were predicted in the western, southwestern and central parts of Uganda, which correspond to bat distribution and previous filovirus outbreak areas. Additionally, the models predicted the eastern Uganda region and other areas that have not reported outbreaks before to be potential outbreak hotspots. Rainfall variables were the most important in influencing model prediction compared to temperature variables. Conclusions: Despite the limitations of the prediction model due to the lack of adequate sample records for outbreaks, especially for the Marburg cases, the models provided risk maps to the Uganda surveillance system on filovirus outbreaks. The risk maps will aid in identifying areas on which to focus filovirus surveillance for early detection and response, hence curtailing potential epidemics. The results from this study also confirm previous findings that suggest that filoviruses are mainly limited by the amount of rainfall received in an area. PMID:29034123

  13. Inclusion of Regional Poroelastic Material Properties Better Predicts Biomechanical Behavior of Lumbar Discs Subjected to Dynamic Loading

    PubMed Central

    Williams, Jamie R.; Natarajan, Raghu N.; Andersson, Gunnar B.J.

    2009-01-01

    Understanding the relationship between repetitive lifting and the breakdown of disc tissue over several years of exposure is difficult to study in vivo and in vitro. The aim of this investigation was to develop a three-dimensional poroelastic finite element model of a lumbar motion segment that reflects the biological properties and behaviors of in vivo disc tissues, including swelling pressure due to the proteoglycans and strain-dependent permeability and porosity. It was hypothesized that, when modeling the annulus, prescribing tissue-specific material properties would not be adequate for studying the in vivo loading and unloading behavior of the disc. Rather, regional variations of these properties, which are known to exist within the annulus, must also be included. Finite element predictions were compared to in vivo measurements of percent change in total stature published by Tyrrell et al. (1985) for two loading protocols: short-term creep loading with standing recovery, and short-term cyclic loading with standing recovery. The model in which the regional variations of material properties in the annulus had been included provided an overall better prediction of the in vivo behavior than the model in which the annulus properties were assumed to be homogeneous. This model will now be used to study the relationship between repetitive lifting and disc degeneration. PMID:17156786

  14. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.

  15. MultiBLUP: improved SNP-based prediction for complex traits.

    PubMed

    Speed, Doug; Balding, David J

    2014-09-01

    BLUP (best linear unbiased prediction) is widely used to predict complex traits in plant and animal breeding, and increasingly in human genetics. The BLUP mathematical model, which consists of a single random effect term, was adequate when kinships were measured from pedigrees. However, when genome-wide SNPs are used to measure kinships, the BLUP model implicitly assumes that all SNPs have the same effect-size distribution, which is a severe and unnecessary limitation. We propose MultiBLUP, which extends the BLUP model to include multiple random effects, allowing greatly improved prediction when the random effects correspond to classes of SNPs with distinct effect-size variances. The SNP classes can be specified in advance, for example, based on SNP functional annotations, and we also provide an adaptive procedure for determining a suitable partition of SNPs. We apply MultiBLUP to genome-wide association data from the Wellcome Trust Case Control Consortium (seven diseases), and from much larger studies of celiac disease and inflammatory bowel disease, finding that it consistently provides better prediction than alternative methods. Moreover, MultiBLUP is computationally very efficient; for the largest data set, which includes 12,678 individuals and 1.5 M SNPs, the total analysis can be run on a single desktop PC in less than a day and can be parallelized to run even faster. Tools to perform MultiBLUP are freely available in our software LDAK. © 2014 Speed and Balding; Published by Cold Spring Harbor Laboratory Press.
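
    The extension from a single random effect to multiple SNP-class effects amounts to summing class-specific kinship matrices, each scaled by its own variance component, in the BLUP equations. The sketch below assumes the variance components are already known (in practice they are estimated, e.g. by REML) and uses simulated genotypes; it illustrates the algebra only and is not the LDAK implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated genotypes: 400 training + 100 test individuals, 1000 SNPs,
# the first 100 SNPs forming a high-variance class (assumed partition).
n_train, n_test, p = 400, 100, 1000
X = rng.binomial(2, 0.3, size=(n_train + n_test, p)).astype(float)
X -= X.mean(axis=0)                       # centre genotypes
class1, class2 = np.arange(100), np.arange(100, p)

def kinship(G):
    return G @ G.T / G.shape[1]

K1, K2 = kinship(X[:, class1]), kinship(X[:, class2])

# Simulate phenotypes with a larger per-SNP effect in the first class.
b = np.r_[rng.normal(0, 0.3, 100), rng.normal(0, 0.03, p - 100)]
g = X @ b
y = g + rng.normal(0, g.std(), len(g))

# MultiBLUP-style prediction with assumed (known) variance components.
s1, s2, se = 0.3, 0.1, 1.0
tr, te = slice(0, n_train), slice(n_train, None)
V_train = s1 * K1[tr, tr] + s2 * K2[tr, tr] + se * np.eye(n_train)
C_test_train = s1 * K1[te, tr] + s2 * K2[te, tr]
g_hat_test = C_test_train @ np.linalg.solve(V_train, y[tr] - y[tr].mean())

r = np.corrcoef(g_hat_test, g[te])[0, 1]
print("prediction accuracy (correlation with true genetic value): %.2f" % r)
```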

  16. The linear relationship between the Vulnerable Elders Survey-13 score and mortality in an Asian population of community-dwelling older persons.

    PubMed

    Wang, Jye; Lin, Wender; Chang, Ling-Hui

    2018-01-01

    The Vulnerable Elders Survey-13 (VES-13) has been used as a screening tool to identify vulnerable community-dwelling older persons for more in-depth assessment and targeted interventions. Although many studies supported its use in different populations, few have addressed Asian populations. The optimal scaling system for the VES-13 in predicting health outcomes also has not been adequately tested. This study (1) assesses the applicability of the VES-13 to predict the mortality of community-dwelling older persons in Taiwan, (2) identifies the best scaling system for the VES-13 in predicting mortality using generalized additive models (GAMs), and (3) determines whether including covariates, such as socio-demographic factors and common geriatric syndromes, improves model fitting. This retrospective longitudinal cohort study analyzed the data of 2184 community-dwelling persons 65 years old or older from the 2003 wave of the nationwide Taiwan Longitudinal Study on Aging. Cox proportional hazards models and GAMs were used. The VES-13 significantly predicted the mortality of Taiwan's community-dwelling elders. A one-point increase in the VES-13 score raised the risk of death by 26% (hazard ratio, 1.26; 95% confidence interval, 1.21-1.32). The hazard ratio of death increased linearly with each additional VES-13 score point, suggesting that using a continuous scale is appropriate. Inclusion of socio-demographic factors and geriatric syndromes improved the model fitting. The VES-13 is appropriate for an Asian population. VES-13 scores linearly predict the mortality of this population. Adjusting the weighting of the physical activity items may improve the performance of the VES-13. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Assessing Ecosystem Model Performance in Semiarid Systems

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and a tendency for the models to simulate much larger carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
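
    The benchmarking comparison itself reduces to computing error and association statistics between each model's output and the flux-tower observations. A minimal sketch of the RMSE, correlation and bias step is shown below, with made-up daily NEE series standing in for the PEcAn workflow outputs.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic daily NEE (gC m-2 d-1): observations and two model runs (assumed data).
t = np.arange(365)
nee_obs = -1.5 * np.sin(2 * np.pi * (t - 200) / 365).clip(min=0) + rng.normal(0, 0.3, t.size)
nee_models = {
    "model_A": -1.2 * np.sin(2 * np.pi * (t - 140) / 365).clip(min=0) + 0.4,  # poor seasonality, source bias
    "model_B": -1.4 * np.sin(2 * np.pi * (t - 195) / 365).clip(min=0) + 0.1,
}

for name, sim in nee_models.items():
    rmse = np.sqrt(np.mean((sim - nee_obs) ** 2))
    r = np.corrcoef(sim, nee_obs)[0, 1]
    bias = np.mean(sim - nee_obs)
    print(f"{name}: RMSE = {rmse:.2f}, r = {r:.2f}, bias = {bias:+.2f} (positive = too much source)")
```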

  18. Ground Deposition of Liquid Droplets Released from a Point Source in the Atmospheric Surface Layer.

    NASA Astrophysics Data System (ADS)

    Panneton, Bernard

    1989-09-01

    A series of field experiments is presented in which the ground deposition of liquid droplets, 120 and 150 μm in diameter, released from a point source at 7 meters above the ground level, was measured. A detailed description of the experimental technique is provided, and the results are presented and compared to the predictions of a few models. A new rotating droplet generator is described. Droplets are produced by the forced breakup of capillary liquid jets and droplet coalescence is inhibited by the rotational motion of the spray head. A system for analyzing spray samples has been developed. This is a specialized image analysis system based on an electronic digitizing camera which measures the area and perimeter of stains left by dyed droplets collected on Kromekote™ cards. A complete set of meteorological data supports the ground-deposition data. The turbulent air velocities at two levels above the ground and the temperature of the air at one level were measured with one sonic anemometer and a sonic anemometer-thermometer. The vertical heat and momentum fluxes were estimated using the eddy-correlation technique. The two-dimensional deposition patterns are presented in the form of plots of contours of constant density, normalized arcwise distributions and crosswind integrated distributions. The arcwise distributions follow a Gaussian distribution whose standard deviation is evaluated using a modified Pasquill's beta technique. Models of the crosswind integrated deposit from Godson, Csanady, Walker, Bache and Sayer, and Wilson et al. are evaluated. The results indicate that the Wilson et al. random-walk model is adequate for predicting the ground deposition of the 150 μm droplets. In one case, where the ratio of the droplet settling velocity to the mean wind speed was largest, Walker's model proved to be adequate. Otherwise, none of the models were acceptable in light of our experimental data.

  19. Measurements in a Transitional Boundary Layer Under Low-Pressure Turbine Airfoil Conditions

    NASA Technical Reports Server (NTRS)

    Simon, Terrence W.; Qiu, Songgang; Yuan, Kebiao; Ashpis, David (Technical Monitor); Simon, Fred (Technical Monitor)

    2000-01-01

    This report presents the results of an experimental study of transition from laminar to turbulent flow in boundary layers or in shear layers over separation zones on a convex-curved surface which simulates the suction surface of a low-pressure turbine airfoil. Flows with various free-stream turbulence intensity (FSTI) values (0.5%, 2.5% and 10%) and various Reynolds numbers (50,000, 100,000, 200,000 and 300,000) are investigated. Reynolds numbers in the present study are based on suction surface length and passage exit mean velocity. Flow separation followed by transition within the separated flow region is observed for the lower-Re cases at each of the FSTI levels. At the highest Reynolds numbers and at elevated FSTI, transition of the attached boundary layer begins before separation, and the separation zone is small. Transition proceeds in the shear layer over the separation bubble. For both the transitional boundary layer and the transitional shear layer, mean velocity, turbulence intensity and intermittency (the fraction of the time the flow is turbulent) distributions are presented. The present data are compared to published distribution models for bypass transition, intermittency distribution through transition, transition start position, and transition length. A model developed for transition of separated flows is shown to adequately predict the location of the beginning of transition for these cases, and a model developed for transitional boundary layer flows seems to adequately predict the path of intermittency through transition when the transition start and end are known. These results are useful for the design of low-pressure turbine stages, which are known to operate under conditions replicated by these tests.

  20. Modeling the impact of the indigenous microbial population on the maximum population density of Salmonella on alfalfa.

    PubMed

    Rijgersberg, Hajo; Franz, Eelco; Nierop Groot, Masja; Tromp, Seth-Oscar

    2013-07-01

    Within a microbial risk assessment framework, modeling the maximum population density (MPD) of a pathogenic microorganism is important but often not considered. This paper describes a model predicting the MPD of Salmonella on alfalfa as a function of the initial contamination level, the total count of the indigenous microbial population, the maximum pathogen growth rate and the maximum population density of the indigenous microbial population. The model is parameterized with experimental data describing the growth of Salmonella on sprouting alfalfa seeds under varying inoculum sizes and native microbial loads, and in the presence of Pseudomonas fluorescens 2-79. The model fits the experimental data well, with standard errors less than ten percent of the fitted average values. The results show that the MPD of Salmonella is dictated not only by performance characteristics of Salmonella but also by characteristics of the indigenous microbial population, such as its total number of cells and its growth rate. The model can improve the predictions of microbiological growth in quantitative microbial risk assessments. Using this model, the effects of preventive measures to reduce pathogenic load and a concurrent effect on the background population can be better evaluated. If competing microorganisms are more sensitive to a particular decontamination method, a pathogenic microorganism may grow faster and reach a higher level. More knowledge regarding the effect of the indigenous microbial population (size, diversity, composition) of food products on pathogen dynamics is needed in order to make adequate predictions of pathogen dynamics on various food products.
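
    The qualitative behaviour described above, pathogen growth that stops once the total microbial population reaches the carrying capacity of the indigenous flora (the so-called Jameson effect), can be illustrated with a small competition model. The logistic form, parameter values and Euler integration below are assumptions for illustration rather than the paper's fitted model.

```python
import numpy as np

def simulate(n0_path, n0_indig, mu_path, mu_indig, nmax_indig, hours=48, dt=0.05):
    """Jameson-effect style competition: both populations grow logistically and
    stop growing when the *total* count reaches the indigenous carrying capacity."""
    t = np.arange(0, hours + dt, dt)
    p, q = np.empty(t.size), np.empty(t.size)
    p[0], q[0] = n0_path, n0_indig
    for i in range(1, t.size):
        total = p[i - 1] + q[i - 1]
        brake = max(0.0, 1.0 - total / nmax_indig)
        p[i] = p[i - 1] + dt * mu_path * p[i - 1] * brake
        q[i] = q[i - 1] + dt * mu_indig * q[i - 1] * brake
    return t, p, q

# Higher initial indigenous load -> lower maximum population density of the pathogen.
for n0_indig in (1e4, 1e6):
    _, p, _ = simulate(n0_path=1e2, n0_indig=n0_indig, mu_path=0.6,
                       mu_indig=0.5, nmax_indig=1e9)
    print(f"indigenous inoculum {n0_indig:.0e} CFU/g -> Salmonella MPD ~ {p[-1]:.2e} CFU/g")
```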

  1. Molecular factor computing for predictive spectroscopy.

    PubMed

    Dai, Bin; Urbas, Aaron; Douglas, Craig C; Lodder, Robert A

    2007-08-01

    The concept of molecular factor computing (MFC)-based predictive spectroscopy was demonstrated here with quantitative analysis of ethanol-in-water mixtures in an MFC-based prototype instrument. Molecular computing of vectors for transformation matrices enabled spectra to be represented in a desired coordinate system. New coordinate systems were selected to reduce the dimensionality of the spectral hyperspace and simplify the mechanical/electrical/computational construction of a new MFC spectrometer employing transmission MFC filters. A library search algorithm was developed to calculate the chemical constituents of the MFC filters. The prototype instrument was used to collect data from 39 ethanol-in-water mixtures (range 0-14%). For each sample, four different voltage outputs from the detector (forming two factor scores) were measured by using four different MFC filters. Twenty samples were used to calibrate the instrument and build a multivariate linear regression prediction model, and the remaining samples were used to validate the predictive ability of the model. In engineering simulations, four MFC filters gave an adequate calibration model (r² = 0.995, RMSEC = 0.229%, RMSECV = 0.339%, p = 0.05 by F-test). This result is slightly better than a corresponding PCR calibration model based on corrected transmission spectra (r² = 0.993, RMSEC = 0.359%, RMSECV = 0.551%, p = 0.05 by F-test). The first actual MFC prototype gave an RMSECV = 0.735%. MFC was a viable alternative to conventional spectrometry with the potential to be more simply implemented and more rapid and accurate.
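
    The calibration step reported above (a multivariate linear regression from the four MFC-filter detector voltages to ethanol concentration, assessed by RMSEC and cross-validated RMSECV) can be sketched as follows. The synthetic voltage matrix and the leave-one-out scheme are assumptions standing in for the prototype's measurements and the authors' validation split.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in: 39 ethanol-in-water samples (0-14 %v/v), 4 detector voltages.
conc = np.linspace(0, 14, 39)
sensitivities = np.array([0.12, -0.05, 0.30, 0.08])
volts = 1.0 + np.outer(conc, sensitivities) + rng.normal(0, 0.02, (conc.size, 4))

def fit(X, y):
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

beta = fit(volts, conc)
rmsec = np.sqrt(np.mean((predict(beta, volts) - conc) ** 2))

# Leave-one-out cross-validation (RMSECV).
errs = []
for i in range(conc.size):
    mask = np.arange(conc.size) != i
    b = fit(volts[mask], conc[mask])
    errs.append(predict(b, volts[i:i + 1])[0] - conc[i])
rmsecv = np.sqrt(np.mean(np.square(errs)))

print(f"RMSEC = {rmsec:.3f} %v/v, RMSECV = {rmsecv:.3f} %v/v")
```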

  2. Self-controlled learning benefits: exploring contributions of self-efficacy and intrinsic motivation via path analysis.

    PubMed

    Ste-Marie, Diane M; Carter, Michael J; Law, Barbi; Vertes, Kelly; Smith, Victoria

    2016-09-01

    Research has shown learning advantages for self-controlled practice contexts relative to yoked (i.e., experimenter-imposed) contexts; yet, explanations for this phenomenon remain relatively untested. We examined, via path analysis, whether self-efficacy and intrinsic motivation are important constructs for explaining self-controlled learning benefits. The path model was created using theory-based and empirically supported relationships to examine causal links between these psychological constructs and physical performance. We hypothesised that self-efficacy and intrinsic motivation would have greater predictive power for learning under self-controlled compared to yoked conditions. Participants learned double-mini trampoline progressions, and measures of physical performance, self-efficacy and intrinsic motivation were collected over two practice days and a delayed retention day. The self-controlled group (M = 2.04, SD = .98) completed significantly more skill progressions in retention than their yoked counterparts (M = 1.3, SD = .65). The path model displayed adequate fit, and similar significant path coefficients were found for both groups wherein each variable was predominantly predicted by its preceding time point (e.g., self-efficacy time 1 predicts self-efficacy time 2). Interestingly, the model was not moderated by group; thus, failing to support the hypothesis that self-efficacy and intrinsic motivation have greater predictive power for learning under self-controlled relative to yoked conditions.

  3. Application of the Rosner-Wei Risk-Prediction Model to Estimate Sexual Orientation Patterns in Colon Cancer Risk in a Prospective Cohort of U.S. Women

    PubMed Central

    Austin, S. Bryn; Pazaris, Mathew J.; Wei, Esther K.; Rosner, Bernard; Kennedy, Grace A.; Bowen, Deborah; Spiegelman, Donna

    2014-01-01

    Purpose: We examined whether lesbian and bisexual women may be at greater risk of colon cancer (CC) than heterosexual women. Methods: Working with a large cohort of U.S. women ages 25-64 years, we analyzed 20 years of prospective data to estimate CC incidence based on known risk factors by applying the Rosner-Wei CC risk-prediction model. Based on each woman's comprehensive risk factor profile, we calculated for lesbian and bisexual women the predicted one-year incidence rate (IR) per 100,000 person-years and estimated incidence rate ratios (IRR) and 95% confidence intervals (CI) relative to heterosexual women. Results: Analyses included 1,373,817 person-years of data from 66,257 women. The mean predicted one-year CC IR was slightly over 12 cases per 100,000 person-years for each sexual orientation group. After controlling for confounders in fully adjusted models and compared with heterosexuals, no significant differences in IRR were observed for lesbians (IRR 1.01; 95% CI 0.99, 1.04) or bisexuals (IRR 1.01; 95% CI 0.98, 1.04). Conclusions: CC risk is similar across all sexual orientation subgroups, with all groups comparably affected. Health professionals must ensure that prevention, screening, and treatment programs are adequately reaching each of these communities. PMID:24852207

  4. Neither Single nor a Combination of Routine Laboratory Parameters can Discriminate between Gram-positive and Gram-negative Bacteremia

    PubMed Central

    Ratzinger, Franz; Dedeyan, Michel; Rammerstorfer, Matthias; Perkmann, Thomas; Burgmann, Heinz; Makristathis, Athanasios; Dorffner, Georg; Loetsch, Felix; Blacky, Alexander; Ramharter, Michael

    2015-01-01

    Adequate early empiric antibiotic therapy is pivotal for the outcome of patients with bloodstream infections. In clinical practice the use of surrogate laboratory parameters is frequently proposed to predict underlying bacterial pathogens; however there is no clear evidence for this assumption. In this study, we investigated the discriminatory capacity of predictive models consisting of routinely available laboratory parameters to predict the presence of Gram-positive or Gram-negative bacteremia. Major machine learning algorithms were screened for their capacity to maximize the area under the receiver operating characteristic curve (ROC-AUC) for discriminating between Gram-positive and Gram-negative cases. Data from 23,765 patients with clinically suspected bacteremia were screened and 1,180 bacteremic patients were included in the study. A relative predominance of Gram-negative bacteremia (54.0%), which was more pronounced in females (59.1%), was observed. The final model achieved 0.675 ROC-AUC resulting in 44.57% sensitivity and 79.75% specificity. Various parameters presented a significant difference between both genders. In gender-specific models, the discriminatory potency was slightly improved. The results of this study do not support the use of surrogate laboratory parameters for predicting classes of causative pathogens. In this patient cohort, gender-specific differences in various laboratory parameters were observed, indicating differences in the host response between genders. PMID:26522966
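
    Screening several learners by cross-validated ROC-AUC, as done in the study, can be sketched with scikit-learn. The synthetic "laboratory parameters" below are assumptions, and the two-model list is a generic stand-in for whatever algorithms were actually screened.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for routine laboratory parameters of bacteremic patients,
# labelled Gram-negative (0) vs Gram-positive (1).
X, y = make_classification(n_samples=1180, n_features=20, n_informative=4,
                           weights=[0.54, 0.46], random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:20s} ROC-AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```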

  5. Biomechanical evaluation of tibial bone adaptation after revision total knee arthroplasty: A comparison of different implant systems

    PubMed Central

    Quilez, María Paz; Seral, Belen; Pérez, María Angeles

    2017-01-01

    The best methods to manage tibial bone defects following total knee arthroplasty remain under debate. Different fixation systems exist to help surgeons reconstruct knee osseous bone loss (such as tantalum cones, cement, modular metal augments, autografts, allografts and porous metaphyseal sleeves). However, the effects of the various solutions on the long-term outcome remain unknown. In the present work, a bone remodeling mathematical model was used to predict bone remodeling after total knee arthroplasty (TKA) revision. Five different types of prostheses were analyzed: one with a straight stem; two with offset stems, with and without supplements; and two with sleeves, with and without stems. Alterations in tibia bone density distribution and implant Von Mises stresses were quantified. In all cases, the bone density decreased in the proximal epiphysis and medullary channels, and an increase in bone density was predicted in the diaphysis and around stem tips. The highest bone resorption was predicted for the offset prosthesis without the supplement, and the highest bone formation was computed for the straight stem. The highest Von Mises stress was obtained for the straight tibial stem, and the lowest was observed for the stemless metaphyseal sleeves prosthesis. The computational model predicted different behaviors among the five systems. We were able to demonstrate the importance of choosing an adequate revision system and that in silico models may help surgeons choose patient-specific treatments. PMID:28886100

  6. Interactive vs. Non-Interactive Ensembles for Weather Prediction and Climate Projection

    NASA Astrophysics Data System (ADS)

    Duane, Gregory

    2013-04-01

    If the members of an ensemble of different models are allowed to interact with one another in run time, predictive skill can be improved as compared to that of any individual model or any average of individual model outputs. Inter-model connections in such an interactive ensemble can be trained, using historical data, so that the resulting "supermodel" synchronizes with reality when used in weather-prediction mode, where the individual models perform data assimilation from each other (with trainable inter-model "observation error") as well as from real observations. In climate-projection mode, parameters of the individual models are changed, as might occur from an increase in GHG levels, and one obtains relevant statistical properties of the new supermodel attractor. In simple cases, it has been shown that training of the inter-model connections with the old parameter values gives a supermodel that is still predictive when the parameter values are changed. Here we inquire as to the circumstances under which supermodel performance can be expected to exceed that of the customary weighted average of model outputs. We consider a supermodel formed from quasigeostrophic channel models with different forcing coefficients, and introduce an effective training scheme for the inter-model connections. We show that the blocked-zonal index cycle is reproduced better by the supermodel than by any non-interactive ensemble in the extreme case where the forcing coefficients of the different models are very large or very small. With realistic differences in forcing coefficients, as would be representative of actual differences among IPCC-class models, the usual linearity assumption is justified and a weighted average of model outputs is adequate. It is therefore hypothesized that supermodeling is likely to be useful in situations where there are qualitative model differences, as arising from sub-gridscale parameterizations, that affect overall model behavior; otherwise the usual ex post facto averaging will probably suffice. Previous results from an ENSO-prediction supermodel [Kirtman et al.] are re-examined in light of the hypothesis about the importance of qualitative inter-model differences.
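
    A minimal toy version of an interactive ensemble can be written with two imperfect Lorenz-63 models nudged towards each other through connection coefficients. In the study the connections are trained against data; here they are simply prescribed, all parameter values are illustrative, and the sketch only demonstrates the coupling mechanism (members synchronizing into a single shared trajectory), not the trained skill gain.

```python
import numpy as np

def lorenz_rhs(state, sigma, rho, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(k, steps=50000, dt=0.002):
    """Integrate two imperfect Lorenz-63 'models' that are nudged towards each
    other with connection strength k (k = 0 gives a non-interactive ensemble)."""
    m1 = np.array([1.5, 1.5, 20.5])   # model 1: sigma too low
    m2 = np.array([0.5, 0.5, 19.5])   # model 2: sigma too high
    z_diff = np.empty(steps)
    for i in range(steps):
        m1 = m1 + dt * (lorenz_rhs(m1, 7.0, 28.0) + k * (m2 - m1))
        m2 = m2 + dt * (lorenz_rhs(m2, 13.0, 28.0) + k * (m1 - m2))
        z_diff[i] = m1[2] - m2[2]
    return z_diff

for k in (0.0, 5.0):
    z_diff = run(k)
    print(f"k = {k}: RMS difference between the two members' z = "
          f"{np.sqrt(np.mean(z_diff ** 2)):.3f}")
```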

  7. Posttest Analyses of the Steel Containment Vessel Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costello, J.F.; Hessheimer, M.F.; Ludwigsen, J.S.

    A high pressure test of a scale model of a steel containment vessel (SCV) was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scale model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. This test is part of a program to investigate the response of representative models of nuclear containment structures to pressure loads beyond the design basis accident. The posttest analyses of this test focused on three areas where the pretest analysis effort did not adequately predict the model behavior during the test. These areas are the onset of global yielding, the strain concentrations around the equipment hatch, and the strain concentrations that led to a small tear near a weld relief opening that was not modeled in the pretest analysis.

  8. Common carotid artery intima-media thickness is as good as carotid intima-media thickness of all carotid artery segments in improving prediction of coronary heart disease risk in the Atherosclerosis Risk in Communities (ARIC) study.

    PubMed

    Nambi, Vijay; Chambless, Lloyd; He, Max; Folsom, Aaron R; Mosley, Tom; Boerwinkle, Eric; Ballantyne, Christie M

    2012-01-01

    Carotid intima-media thickness (CIMT) and plaque information can improve coronary heart disease (CHD) risk prediction when added to traditional risk factors (TRF). However, obtaining adequate images of all carotid artery segments (A-CIMT) may be difficult. Of A-CIMT, the common carotid artery intima-media thickness (CCA-IMT) is relatively more reliable and easier to measure. We evaluated whether CCA-IMT is comparable to A-CIMT when added to TRF and plaque information in improving CHD risk prediction in the Atherosclerosis Risk in Communities (ARIC) study. Ten-year CHD risk prediction models using TRF alone, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque were developed for the overall cohort, men, and women. The area under the receiver operator characteristic curve (AUC), per cent of individuals reclassified, net reclassification index (NRI), and model calibration by the Grønnesby-Borgan test were estimated. There were 1722 incident CHD events in 12 576 individuals over a mean follow-up of 15.2 years. The AUC for the TRF-only, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque models was 0.741, 0.754, and 0.753, respectively. Although there was some discordance when the CCA-IMT + plaque- and A-CIMT + plaque-based risk estimates were compared, the NRI and clinical NRI (NRI in the intermediate-risk group) when comparing the CIMT models with the TRF-only model, the per cent reclassified, and the test for model calibration were not significantly different. Coronary heart disease risk prediction can be improved by adding A-CIMT + plaque or CCA-IMT + plaque information to TRF. Therefore, evaluating the carotid artery for plaque presence and measuring CCA-IMT, which is easier and more reliable than measuring A-CIMT, provide a good alternative to measuring A-CIMT for CHD risk prediction.
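
    The net reclassification index used above has a compact definition: among events, the proportion moving up a risk category minus the proportion moving down, plus the reverse among non-events. A short sketch with made-up risk estimates is given below; the <5% / 5-20% / >20% categories are an assumption for illustration and may not match the cut-points used in the ARIC analysis.

```python
import numpy as np

def categorize(risk, cuts=(0.05, 0.20)):
    return np.digitize(risk, cuts)

def nri(risk_old, risk_new, event):
    """Net reclassification index: (up - down | events) + (down - up | non-events)."""
    old_cat, new_cat = categorize(risk_old), categorize(risk_new)
    up, down = new_cat > old_cat, new_cat < old_cat
    ev, ne = event == 1, event == 0
    return (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())

rng = np.random.default_rng(7)
n = 2000
event = rng.binomial(1, 0.14, n)
risk_trf = np.clip(rng.beta(2, 12, n) + 0.05 * event, 0, 1)           # TRF-only model (synthetic)
risk_cimt = np.clip(risk_trf + rng.normal(0.02, 0.03, n) * event
                    - rng.normal(0.01, 0.02, n) * (1 - event), 0, 1)  # TRF + CIMT + plaque (synthetic)

print("NRI = %.3f" % nri(risk_trf, risk_cimt, event))
```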

  9. Predicting Response to Neoadjuvant Chemoradiotherapy in Esophageal Cancer with Textural Features Derived from Pretreatment 18F-FDG PET/CT Imaging.

    PubMed

    Beukinga, Roelof J; Hulshoff, Jan B; van Dijk, Lisanne V; Muijs, Christina T; Burgerhof, Johannes G M; Kats-Ugurlu, Gursah; Slart, Riemer H J A; Slump, Cornelis H; Mul, Véronique E M; Plukker, John Th M

    2017-05-01

    Adequate prediction of tumor response to neoadjuvant chemoradiotherapy (nCRT) in esophageal cancer (EC) patients is important in a more personalized treatment. The current best clinical method to predict pathologic complete response is SUVmax in 18F-FDG PET/CT imaging. To improve the prediction of response, we constructed a model to predict complete response to nCRT in EC based on pretreatment clinical parameters and 18F-FDG PET/CT-derived textural features. Methods: From a prospectively maintained single-institution database, we reviewed 97 consecutive patients with locally advanced EC and a pretreatment 18F-FDG PET/CT scan between 2009 and 2015. All patients were treated with nCRT (carboplatin/paclitaxel/41.4 Gy) followed by esophagectomy. We analyzed clinical, geometric, and pretreatment textural features extracted from both 18F-FDG PET and CT. The current most accurate prediction model with SUVmax as a predictor variable was compared with 6 different response prediction models constructed using least absolute shrinkage and selection operator regularized logistic regression. Internal validation was performed to estimate the model's performances. Pathologic response was defined as complete versus incomplete response (Mandard tumor regression grade system 1 vs. 2-5). Results: Pathologic examination revealed 19 (19.6%) complete and 78 (80.4%) incomplete responders. Least absolute shrinkage and selection operator regularization selected the clinical parameters: histologic type and clinical T stage, the 18F-FDG PET-derived textural feature long run low gray level emphasis, and the CT-derived textural feature run percentage. Introducing these variables to a logistic regression analysis showed areas under the receiver-operating-characteristic curve (AUCs) of 0.78 compared with 0.58 in the SUVmax model. The discrimination slopes were 0.17 compared with 0.01, respectively. After internal validation, the AUCs decreased to 0.74 and 0.54, respectively. Conclusion: The predictive values of the constructed models were superior to the standard method (SUVmax). These results can be considered as an initial step in predicting tumor response to nCRT in locally advanced EC. Further research in refining the predictive value of these models is needed to justify omission of surgery. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
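
    The variable-selection step (LASSO-regularized logistic regression over clinical, geometric and textural candidates) can be sketched with scikit-learn. The synthetic feature matrix below stands in for the real 18F-FDG PET/CT features, and the penalty grid and cross-validation scheme are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)

# Synthetic stand-in: 97 patients, 40 candidate predictors, ~20% complete responders.
n, p = 97, 40
X = rng.normal(size=(n, p))
logit = -1.6 + 1.2 * X[:, 0] - 0.9 * X[:, 1]           # only two truly informative features
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

Xs = StandardScaler().fit_transform(X)
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=5,
                             scoring="roc_auc").fit(Xs, y)

selected = np.flatnonzero(lasso.coef_.ravel() != 0)
auc = roc_auc_score(y, lasso.predict_proba(Xs)[:, 1])
print("selected feature indices:", selected)
print("apparent AUC = %.2f (internal validation would shrink this, as in the study)" % auc)
```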

  10. Development of specifications for surface and subsurface oceanic environmental data

    NASA Technical Reports Server (NTRS)

    Wolff, P. M.

    1976-01-01

    The existing need for synoptic subsurface observations was demonstrated giving special attention to the requirements of meteorology. The current state of synoptic oceanographic observations was assessed; a preliminary design for the Basic Observational Network needed to fulfill the minimum needs of synoptic meteorology and oceanography was presented. There is an existing critical need for such a network in the support of atmospheric modeling and operational meteorological prediction, and through utilization of the regional water mass concept an adequate observational system can be designed which is realistic in terms of cost and effort.

  11. Leaf bidirectional reflectance and transmittance in corn and soybean

    NASA Technical Reports Server (NTRS)

    Walter-Shea, E. A.; Norman, J. M.; Blad, B. L.

    1989-01-01

    Bidirectional optical properties of leaves must be adequately characterized to develop comprehensive and reliably predictive canopy radiative-transfer models. Directional reflectance and transmittance factors of individual corn and soybean leaves were measured at source incidence angles (SIAs) 20, 45, and 70 deg and numerous view angles in the visible and NIR. Bidirectional reflectance distributions changed with increasing SIA, with forward scattering most pronounced at 70 deg. Directional-hemispherical reflectance generally increased and transmittance decreased with increased SIA. Directional-hemispherical reflectance factors were higher and transmittances were lower than the nadir-viewed reflectance component.

  12. Postural effects on intracranial pressure: modeling and clinical evaluation.

    PubMed

    Qvarlander, Sara; Sundström, Nina; Malm, Jan; Eklund, Anders

    2013-11-01

    The physiological effect of posture on intracranial pressure (ICP) is not well described. This study defined and evaluated three mathematical models describing the postural effects on ICP, designed to predict ICP at different head-up tilt angles from the supine ICP value. Model I was based on a hydrostatic indifference point for the cerebrospinal fluid (CSF) system, i.e., the existence of a point in the system where pressure is independent of body position. Models II and III were based on Davson's equation for CSF absorption, which relates ICP to venous pressure, and postulated that gravitational effects within the venous system are transferred to the CSF system. Model II assumed a fully communicating venous system, and model III assumed that collapse of the jugular veins at higher tilt angles creates two separate hydrostatic compartments. Evaluation of the models was based on ICP measurements at seven tilt angles (0-71°) in 27 normal pressure hydrocephalus patients. ICP decreased with tilt angle (ANOVA: P < 0.01). The reduction was well predicted by model III (ANOVA lack-of-fit: P = 0.65), which showed excellent fit against measured ICP. Neither model I nor II adequately described the reduction in ICP (ANOVA lack-of-fit: P < 0.01). Postural changes in ICP could not be predicted based on the currently accepted theory of a hydrostatic indifference point for the CSF system, but a new model combining Davson's equation for CSF absorption and hydrostatic gradients in a collapsible venous system performed well and can be useful in future research on gravity and CSF physiology.

  13. Merging information from multi-model flood projections in a hierarchical Bayesian framework

    NASA Astrophysics Data System (ADS)

    Le Vine, Nataliya

    2016-04-01

    Multi-model ensembles are becoming widely accepted for flood frequency change analysis. The use of multiple models results in large uncertainty around estimates of flood magnitudes, due to both uncertainty in model selection and natural variability of river flow. The challenge is therefore to extract the most meaningful signal from the multi-model predictions, accounting for both model quality and uncertainties in individual model estimates. The study demonstrates the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach facilitates explicit treatment of shared multi-model discrepancy as well as the probabilistic nature of the flood estimates, by treating the available models as a sample from a hypothetical complete (but unobserved) set of models. The advantages of the approach are: 1) to ensure adequate 'baseline' conditions against which to compare future changes; 2) to reduce flood estimate uncertainty; 3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; 4) to adjust multi-model consistency criteria when model biases are large; and 5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
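
    The pooling idea, treating the available models as a sample from a hypothetical complete set and separating shared discrepancy from individual-model uncertainty, is structurally similar to a random-effects meta-analysis. The sketch below uses a method-of-moments (DerSimonian-Laird type) estimate of the between-model variance as a simplified stand-in for the full hierarchical Bayesian machinery; the flood-magnitude numbers are invented.

```python
import numpy as np

# Each model's estimate of the 100-year flood (m^3/s) and its standard error (invented numbers).
estimates = np.array([520.0, 610.0, 575.0, 480.0, 650.0])
std_errs = np.array([60.0, 55.0, 70.0, 80.0, 65.0])

w_fixed = 1.0 / std_errs**2
mu_fixed = np.sum(w_fixed * estimates) / np.sum(w_fixed)

# Between-model (shared discrepancy) variance, method-of-moments estimate.
Q = np.sum(w_fixed * (estimates - mu_fixed) ** 2)
k = len(estimates)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

# Partial pooling: weights account for both within- and between-model variance.
w = 1.0 / (std_errs**2 + tau2)
mu = np.sum(w * estimates) / np.sum(w)
se_mu = np.sqrt(1.0 / np.sum(w))

print(f"between-model SD = {np.sqrt(tau2):.0f} m^3/s")
print(f"pooled 100-year flood = {mu:.0f} +/- {1.96 * se_mu:.0f} m^3/s (95% interval)")
```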

  14. Influence of nonelectrostatic ion-ion interactions on double-layer capacitance

    NASA Astrophysics Data System (ADS)

    Zhao, Hui

    2012-11-01

    Recently a Poisson-Helmholtz-Boltzmann (PHB) model [Bohinc et al., Phys. Rev. E 85, 031130 (2012)] was developed by accounting for solvent-mediated nonelectrostatic ion-ion interactions. Nonelectrostatic interactions are described by a Yukawa-like pair potential. In the present work, we modify the PHB model by adding steric effects (finite ion size) into the free energy to derive governing equations. The modified PHB model is capable of capturing both ion specificity and ion crowding. This modified model is then employed to study the capacitance of the double layer. More specifically, we focus on the influence of nonelectrostatic ion-ion interactions on charging a double layer near a flat surface in the presence of steric effects. We numerically compute the differential capacitance as a function of the voltage under various conditions. At small voltages and low salt concentrations (dilute solution), we find out that the predictions from the modified PHB model are the same as those from the classical Poisson-Boltzmann theory, indicating that nonelectrostatic ion-ion interactions and steric effects are negligible. At moderate voltages, nonelectrostatic ion-ion interactions play an important role in determining the differential capacitance. Generally speaking, nonelectrostatic interactions decrease the capacitance because of additional nonelectrostatic repulsion among excess counterions inside the double layer. However, increasing the voltage gradually favors steric effects, which induce a condensed layer with crowding of counterions near the electrode. Accordingly, the predictions from the modified PHB model collapse onto those computed by the modified Poisson-Boltzmann theory considering steric effects alone. Finally, theoretical predictions are compared and favorably agree with experimental data, in particular, in concentrated solutions, leading one to conclude that the modified PHB model adequately predicts the diffuse-charge dynamics of the double layer with ion specificity and steric effects.

  15. Radiation Transport Modeling and Assessment to Better Predict Radiation Exposure, Dose, and Toxicological Effects to Human Organs on Long Duration Space Flights

    NASA Technical Reports Server (NTRS)

    Denkins, Pamela; Badhwar, Gautam; Obot, Victor

    2000-01-01

    NASA's long-range plans include possible human exploratory missions to the moon and Mars within the next quarter century. Such missions beyond low Earth orbit will expose crews to high-energy galactic cosmic rays and to transient radiation from solar particle events, which consist largely of high-energy protons. Because the radiation levels in space are high and the missions long, adequate shielding is needed to minimize the deleterious health effects of exposure to radiation. The focus of this study is radiation exposure to the blood-forming organs of the NASA astronauts. NASA/JSC developed the Phantom Torso Experiment for Organ Dose Measurements, which housed active and passive dosimeters that would monitor and record absorbed radiation levels at vital organ locations. This experiment was conducted during the STS-91 mission in 1998 and provided the necessary space radiation data for correlation with results obtained from the current analytical models used to predict exposure to the blood-forming organs. Numerous models (e.g., BRYNTRN and HZETRN) have been developed and used to predict radiation exposure. However, new models are continually being developed and evaluated. The Space Environment Information Systems (SPENVIS) modeling program, developed by the Belgian Institute for Space Aeronomy, is to be used and evaluated as a part of the research activity. It is the intent of this research effort to compare the modeled data with the findings from the STS-91 mission, assess the accuracy and efficiency of this model, and determine its usefulness for predicting radiation exposure and developing better guidelines for shielding requirements for long-duration manned missions.

  16. Micromechanics of failure waves in glass. 2: Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Espinosa, H.D.; Xu, Y.; Brar, N.S.

    1997-08-01

    In an attempt to elucidate the failure mechanism responsible for the so-called failure waves in glass, numerical simulations of plate and rod impact experiments, with a multiple-plane model, have been performed. These simulations show that the failure wave phenomenon can be modeled by the nucleation and growth of penny-shaped shear defects from the specimen surface to its interior. Lateral stress increase, reduction of spall strength, and progressive attenuation of axial stress behind the failure front are properly predicted by the multiple-plane model. Numerical simulations of high-strain-rate pressure-shear experiments indicate that the model predicts reasonably well the shear resistance of the material at strain rates as high as 1 × 10⁶/s. The agreement is believed to be the result of the model's capability to simulate damage-induced anisotropy. By examining the kinetics of the failure process in plate experiments, the authors show that the progressive glass spallation in the vicinity of the failure front and the rate of increase in lateral stress are more consistent with a representation of inelasticity based on shear-activated flow surfaces, inhomogeneous flow, and microcracking, rather than pure microcracking. In the former mechanism, microcracks are likely formed at a later time at the intersection of flow surfaces. In the case of rod-on-rod impact, stress and radial velocity histories predicted by the microcracking model are in agreement with the experimental measurements. Stress attenuation, pulse duration, and release structure are properly simulated. It is shown that failure wave speeds in excess of 3,600 m/s are required for adequate prediction of rod radial expansion.

  17. Intraocular Pressure, Blood Pressure, and Retinal Blood Flow Autoregulation: A Mathematical Model to Clarify Their Relationship and Clinical Relevance

    PubMed Central

    Guidoboni, Giovanna; Harris, Alon; Cassani, Simone; Arciero, Julia; Siesky, Brent; Amireskandari, Annahita; Tobe, Leslie; Egan, Patrick; Januleviciene, Ingrida; Park, Joshua

    2014-01-01

    Purpose. This study investigates the relationship between intraocular pressure (IOP) and retinal hemodynamics and predicts how arterial blood pressure (BP) and blood flow autoregulation (AR) influence this relationship. Methods. A mathematical model is developed to simulate blood flow in the central retinal vessels and retinal microvasculature as current flowing through a network of resistances and capacitances. Variable resistances describe active and passive diameter changes due to AR and IOP. The model is validated by using clinically measured values of retinal blood flow and velocity. The model simulations for six theoretical patients with high, normal, and low BP (HBP-, NBP-, LBP-) and functional or absent AR (-wAR, -woAR) are compared with clinical data. Results. The model predicts that NBPwAR and HBPwAR patients can regulate retinal blood flow (RBF) as IOP varies between 15 and 23 mm Hg and between 23 and 29 mm Hg, respectively, whereas LBPwAR patients do not adequately regulate blood flow if IOP is 15 mm Hg or higher. Hemodynamic alterations would be noticeable only if IOP changes occur outside of the regulating range, which, most importantly, depend on BP. The model predictions are consistent with clinical data for IOP reduction via surgery and medications and for cases of induced IOP elevation. Conclusions. The theoretical model results suggest that the ability of IOP to induce noticeable changes in retinal hemodynamics depends on the levels of BP and AR of the individual. These predictions might help to explain the inconsistencies found in the clinical literature concerning the relationship between IOP and retinal hemodynamics. PMID:24876284
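
    As a schematic aside (not taken from the paper), the electric-analog idea described above reduces in steady state to an Ohm's-law relation, flow Q = ΔP / R_total, for vascular segments in series; the segment names and numbers in this short Python sketch are purely illustrative.

      def flow_through_series(p_in_mmHg, p_out_mmHg, resistances):
          """Steady flow through segments in series: Q = (p_in - p_out) / sum(R)."""
          return (p_in_mmHg - p_out_mmHg) / sum(resistances)

      # Hypothetical resistances for central retinal artery, arterioles,
      # capillaries, venules and central retinal vein (arbitrary units)
      segments = [2.0, 5.0, 8.0, 3.0, 1.5]
      print(flow_through_series(62.0, 15.0, segments))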

  18. Intraocular pressure, blood pressure, and retinal blood flow autoregulation: a mathematical model to clarify their relationship and clinical relevance.

    PubMed

    Guidoboni, Giovanna; Harris, Alon; Cassani, Simone; Arciero, Julia; Siesky, Brent; Amireskandari, Annahita; Tobe, Leslie; Egan, Patrick; Januleviciene, Ingrida; Park, Joshua

    2014-05-29

    This study investigates the relationship between intraocular pressure (IOP) and retinal hemodynamics and predicts how arterial blood pressure (BP) and blood flow autoregulation (AR) influence this relationship. A mathematical model is developed to simulate blood flow in the central retinal vessels and retinal microvasculature as current flowing through a network of resistances and capacitances. Variable resistances describe active and passive diameter changes due to AR and IOP. The model is validated by using clinically measured values of retinal blood flow and velocity. The model simulations for six theoretical patients with high, normal, and low BP (HBP-, NBP-, LBP-) and functional or absent AR (-wAR, -woAR) are compared with clinical data. The model predicts that NBPwAR and HBPwAR patients can regulate retinal blood flow (RBF) as IOP varies between 15 and 23 mm Hg and between 23 and 29 mm Hg, respectively, whereas LBPwAR patients do not adequately regulate blood flow if IOP is 15 mm Hg or higher. Hemodynamic alterations would be noticeable only if IOP changes occur outside of the regulating range, which, most importantly, depend on BP. The model predictions are consistent with clinical data for IOP reduction via surgery and medications and for cases of induced IOP elevation. The theoretical model results suggest that the ability of IOP to induce noticeable changes in retinal hemodynamics depends on the levels of BP and AR of the individual. These predictions might help to explain the inconsistencies found in the clinical literature concerning the relationship between IOP and retinal hemodynamics. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  19. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  20. Vertical structure of mean cross-shore currents across a barred surf zone

    USGS Publications Warehouse

    Haines, John W.; Sallenger, Asbury H.

    1994-01-01

    Mean cross-shore currents observed across a barred surf zone are compared to model predictions. The model is based on a simplified momentum balance with a turbulent boundary layer at the bed. Turbulent exchange is parameterized by an eddy viscosity formulation, with the eddy viscosity Aυ independent of time and the vertical coordinate. Mean currents result from gradients due to wave breaking and shoaling, and the presence of a mean setup of the free surface. Descriptions of the wave field are provided by the wave transformation model of Thornton and Guza [1983]. The wave transformation model adequately reproduces the observed wave heights across the surf zone. The mean current model successfully reproduces the observed cross-shore flows. Both observations and predictions show predominantly offshore flow with onshore flow restricted to a relatively thin surface layer. Successful application of the mean flow model requires an eddy viscosity which varies horizontally across the surf zone. Attempts are made to parameterize this variation with some success. The data does not discriminate between alternative parameterizations proposed. The overall variability in eddy viscosity suggested by the model fitting should be resolvable by field measurements of the turbulent stresses. Consistent shortcomings of the parameterizations, and the overall modeling effort, suggest avenues for further development and data collection.

  1. Biomimetic three-dimensional tissue models for advanced high-throughput drug screening

    PubMed Central

    Nam, Ki-Hwan; Smith, Alec S.T.; Lone, Saifullah; Kwon, Sunghoon; Kim, Deok-Ho

    2015-01-01

    Most current drug screening assays used to identify new drug candidates are 2D cell-based systems, even though such in vitro assays do not adequately recreate the in vivo complexity of 3D tissues. Inadequate representation of the human tissue environment during a preclinical test can result in inaccurate predictions of compound effects on overall tissue functionality. Screening for compound efficacy by focusing on a single pathway or protein target, coupled with difficulties in maintaining long-term 2D monolayers, can serve to exacerbate these issues when utilizing such simplistic model systems for physiological drug screening applications. Numerous studies have shown that cell responses to drugs in 3D culture are improved from those in 2D, with respect to modeling in vivo tissue functionality, which highlights the advantages of using 3D-based models for preclinical drug screens. In this review, we discuss the development of microengineered 3D tissue models which accurately mimic the physiological properties of native tissue samples, and highlight the advantages of using such 3D micro-tissue models over conventional cell-based assays for future drug screening applications. We also discuss biomimetic 3D environments, based-on engineered tissues as potential preclinical models for the development of more predictive drug screening assays for specific disease models. PMID:25385716

  2. Flow through a very porous obstacle in a shallow channel.

    PubMed

    Creed, M J; Draper, S; Nishino, T; Borthwick, A G L

    2017-04-01

    A theoretical model, informed by numerical simulations based on the shallow water equations, is developed to predict the flow passing through and around a uniform porous obstacle in a shallow channel, where background friction is important. This problem is relevant to a number of practical situations, including flow through aquatic vegetation, the performance of arrays of turbines in tidal channels and hydrodynamic forces on offshore structures. To demonstrate this relevance, the theoretical model is used to (i) reinterpret core flow velocities in existing laboratory-based data for an array of emergent cylinders in shallow water emulating aquatic vegetation and (ii) reassess the optimum arrangement of tidal turbines to generate power in a tidal channel. Comparison with laboratory-based data indicates a maximum obstacle resistance (or minimum porosity) for which the present theoretical model is valid. When the obstacle resistance is above this threshold the shallow water equations do not provide an adequate representation of the flow, and the theoretical model over-predicts the core flow passing through the obstacle. The second application of the model confirms that natural bed resistance increases the power extraction potential for a partial tidal fence in a shallow channel and alters the optimum arrangement of turbines within the fence.

  3. Assessment of RANS and LES Turbulence Modeling for Buoyancy-Aided/Opposed Forced and Mixed Convection

    NASA Astrophysics Data System (ADS)

    Clifford, Corey; Kimber, Mark

    2017-11-01

    Over the last 30 years, an industry-wide shift within the nuclear community has led to increased utilization of computational fluid dynamics (CFD) to supplement nuclear reactor safety analyses. One such area that is of particular interest to the nuclear community, specifically to those performing loss-of-flow accident (LOFA) analyses for next-generation very-high temperature reactors (VHTR), is the capacity of current computational models to predict heat transfer across a wide range of buoyancy conditions. In the present investigation, a critical evaluation of Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) turbulence modeling techniques is conducted based on CFD validation data collected from the Rotatable Buoyancy Tunnel (RoBuT) at Utah State University. Four different experimental flow conditions are investigated: (1) buoyancy-aided forced convection; (2) buoyancy-opposed forced convection; (3) buoyancy-aided mixed convection; (4) buoyancy-opposed mixed convection. Overall, good agreement is found for both forced convection-dominated scenarios, but an overly-diffusive prediction of the normal Reynolds stress is observed for the RANS-based turbulence models. Low-Reynolds number RANS models perform adequately for mixed convection, while higher-order RANS approaches underestimate the influence of buoyancy on the production of turbulence.

  4. Gene flow from domesticated species to wild relatives: migration load in a model of multivariate selection.

    PubMed

    Tufto, Jarle

    2010-01-01

    Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the distance by which the immigrants deviate from the local optimum is, in effect, reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate in terms of predicting mean population fitness and viability.

  5. Safety focused modeling of lithium-ion batteries: A review

    NASA Astrophysics Data System (ADS)

    Abada, S.; Marlair, G.; Lecocq, A.; Petit, M.; Sauvant-Moynot, V.; Huet, F.

    2016-02-01

    Safety issues pertaining to Li-ion batteries justify intensive testing all along their value chain. However, progress in scientific knowledge regarding lithium-based battery failure modes, as well as remarkable technological breakthroughs in computing science, now allows for the development and use of prediction tools to assist designers in developing safer batteries. Accordingly, this paper offers a review of significant modeling work performed in the area, with a focus on the characterization of the thermal runaway hazard and its triggering events. Progress made in models that aim to integrate battery ageing effects and the related physics is also discussed, as well as the strong interaction with modeling-focused use of testing, and the main achievements obtained towards marketing safer systems. Current limitations and new challenges or opportunities that are expected to shape future modeling activity are also put into perspective. According to market trends, it is anticipated that safety may still act as a restraint in the search for an acceptable compromise between overall performance and cost of lithium-ion based and post lithium-ion rechargeable batteries of the future. In that context, high-throughput prediction tools capable of screening candidate component properties with respect to both functional and safety-related aspects are highly desirable.

  6. Early Numeracy Indicators: Examining Predictive Utility Across Years and States

    ERIC Educational Resources Information Center

    Conoyer, Sarah J.; Foegen, Anne; Lembke, Erica S.

    2016-01-01

    Two studies using similar methods in two states investigated the long-term predictive utility of two single-skill early numeracy Curriculum Based Measures (CBMs) and the degree to which they can adequately predict high-stakes test scores. Data were drawn from kindergarten and first-grade students. State standardized assessment data from the…

  7. Field-level validation of a CLIMEX model for Cactoblastis cactorum (Lepidoptera: Pyralidae) using estimated larval growth rates.

    PubMed

    Legaspi, Benjamin C; Legaspi, Jesusa Crisostomo

    2010-04-01

    Invasive pests, such as the cactus moth, Cactoblastis cactorum (Berg) (Lepidoptera: Pyralidae), have not reached equilibrium distributions and present unique opportunities to validate models by comparing predicted distributions with eventual realized geographic ranges. A CLIMEX model was developed for C. cactorum. Model validation was attempted at the global scale by comparing worldwide distribution against known occurrence records and at the field scale by comparing CLIMEX "growth indices" against field measurements of larval growth. Globally, CLIMEX predicted limited potential distribution in North America (from the Caribbean Islands to Florida, Texas, and Mexico), Africa (South Africa and parts of the eastern coast), southern India, parts of Southeast Asia, and the northeastern coast of Australia. Actual records indicate the moth has been found in the Caribbean (Antigua, Barbuda, Montserrat Saint Kitts and Nevis, Cayman Islands, and U.S. Virgin Islands), Cuba, Bahamas, Puerto Rico, southern Africa, Kenya, Mexico, and Australia. However, the model did not predict that distribution would extend from India to the west into Pakistan. In the United States, comparison of the predicted and actual distribution patterns suggests that the moth may be close to its predicted northern range along the Atlantic coast. Parts of Texas and most of Mexico may be vulnerable to geographic range expansion of C. cactorum. Larval growth rates in the field were estimated by measuring differences in head capsules and body lengths of larval cohorts at weekly intervals. Growth indices plotted against measures of larval growth rates compared poorly when CLIMEX was run using the default historical weather data. CLIMEX predicted a single period conducive to insect development, in contrast to the three generations observed in the field. Only time and more complete records will tell whether C. cactorum will extend its geographical distribution to regions predicted by the CLIMEX model. In terms of small scale temporal predictions, this study suggests that CLIMEX indices may agree with field-specific population dynamics, provided an adequate metric for insect growth rate is used and weather data are location and time specific.

  8. Assessing Participation in Community-Based Physical Activity Programs in Brazil

    PubMed Central

    REIS, RODRIGO S.; YAN, YAN; PARRA, DIANA C.; BROWNSON, ROSS C.

    2015-01-01

    Purpose This study aimed to develop and validate a risk prediction model to examine the characteristics that are associated with participation in community-based physical activity programs in Brazil. Methods We used pooled data from three surveys conducted from 2007 to 2009 in state capitals of Brazil with 6166 adults. A risk prediction model was built considering program participation as an outcome. The predictive accuracy of the model was quantified through discrimination (C statistic) and calibration (Brier score) properties. Bootstrapping methods were used to validate the predictive accuracy of the final model. Results The final model showed sex (women: odds ratio [OR] = 3.18, 95% confidence interval [CI] = 2.14–4.71), having less than high school degree (OR = 1.71, 95% CI = 1.16–2.53), reporting a good health (OR = 1.58, 95% CI = 1.02–2.24) or very good/excellent health (OR = 1.62, 95% CI = 1.05–2.51), having any comorbidity (OR = 1.74, 95% CI = 1.26–2.39), and perceiving the environment as safe to walk at night (OR = 1.59, 95% CI = 1.18–2.15) as predictors of participation in physical activity programs. Accuracy indices were adequate (C index = 0.778, Brier score = 0.031) and similar to those obtained from bootstrapping (C index = 0.792, Brier score = 0.030). Conclusions Sociodemographic and health characteristics as well as perceptions of the environment are strong predictors of participation in community-based programs in selected cities of Brazil. PMID:23846162
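
    To make the reported accuracy indices concrete, here is a small Python sketch (using scikit-learn, with simulated stand-in data rather than the Brazilian survey) that fits a logistic risk model and computes the same discrimination (C statistic) and calibration (Brier score) measures:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score, brier_score_loss

      # Simulated stand-ins for the survey predictors (e.g. sex, education,
      # self-rated health, comorbidity, perceived safety) and participation outcome
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 5))
      p_true = 1.0 / (1.0 + np.exp(-(X[:, 0] - 1.5)))
      y = (rng.random(500) < p_true).astype(int)

      model = LogisticRegression().fit(X, y)
      p_hat = model.predict_proba(X)[:, 1]

      print("C statistic (discrimination):", roc_auc_score(y, p_hat))
      print("Brier score (calibration):", brier_score_loss(y, p_hat))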

  9. Predicting Individual Tree and Shrub Species Distributions with Empirically Derived Microclimate Surfaces in a Complex Mountain Ecosystem in Northern Idaho, USA

    NASA Astrophysics Data System (ADS)

    Holden, Z.; Cushman, S.; Evans, J.; Littell, J. S.

    2009-12-01

    The resolution of current climate interpolation models limits our ability to adequately account for temperature variability in complex mountainous terrain. We empirically derive 30 meter resolution models of June-October day and nighttime temperature and April nighttime Vapor Pressure Deficit (VPD) using hourly data from 53 Hobo dataloggers stratified by topographic setting in mixed conifer forests near Bonners Ferry, ID. 66% of the variability in average June-October daytime temperature is explained by 3 variables (elevation, relative slope position and topographic roughness) derived from 30 meter digital elevation models. 69% of the variability in nighttime temperatures among stations is explained by elevation, relative slope position and topographic dissection (450 meter window). 54% of variability in April nighttime VPD is explained by elevation, soil wetness and the NDVIc derived from Landsat. We extract temperature and VPD predictions at 411 intensified Forest Inventory and Analysis (FIA) plots. We use these variables with soil wetness and solar radiation indices derived from a 30 meter DEM to predict the presence or absence of 10 common forest tree species and 25 shrub species. Classification accuracies range from 87% for Pinus ponderosa to >97% for most other tree species. Shrub model accuracies are also high, with greater than 90% accuracy for the majority of species. Species distribution models based on the physical variables that drive species occurrence, rather than their topographic surrogates, will eventually allow us to predict potential future distributions of these species under a warming climate at fine spatial scales.
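
    A schematic illustration (not from the study) of the kind of terrain-based temperature regression described above; all predictor values and coefficients in this Python sketch are synthetic:

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Synthetic stand-ins for the 53 datalogger sites and three terrain predictors
      rng = np.random.default_rng(1)
      n = 53
      elevation = rng.uniform(500, 2000, n)            # m
      rel_slope_position = rng.uniform(0, 1, n)
      roughness = rng.uniform(0, 50, n)

      # Hypothetical response: mean June-October daytime temperature (deg C)
      temp = 25 - 0.006 * elevation + 2.0 * rel_slope_position - 0.02 * roughness + rng.normal(0, 1, n)

      X = np.column_stack([elevation, rel_slope_position, roughness])
      fit = LinearRegression().fit(X, temp)
      print("variance explained (R^2):", fit.score(X, temp))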

  10. The protection motivation theory within the stages of the transtheoretical model - stage-specific interplay of variables and prediction of exercise stage transitions.

    PubMed

    Lippke, Sonia; Plotnikoff, Ronald C

    2009-05-01

    Two different theories of health behaviour have been chosen with the aim of theory integration: a continuous theory (protection motivation theory, PMT) and a stage model (transtheoretical model, TTM). This is the first study to test whether the stages of the TTM moderate the interrelation of PMT-variables and the mediation of motivation, as well as PMT-variables' interactions in predicting stage transitions. Hypotheses were tested regarding (1) mean patterns, stage pair-comparisons and nonlinear trends using ANOVAs; (2) prediction patterns for the different stage groups employing multi-group structural equation modelling (MSEM) and nested model analyses; and (3) stage transitions using binary logistic regression analyses. Adults (N=1,602) were assessed over a 6-month period on their physical activity stages, PMT-variables and subsequent behaviour. (1) Particular mean differences and nonlinear trends in all test variables were found. (2) The PMT adequately fitted the five stage groups. The MSEM revealed that covariances within threat appraisal and coping appraisal were invariant and all other constraints were stage-specific, i.e. stage was a moderator. Except for self-efficacy, motivation fully mediated the relationship between the social-cognitive variables and behaviour. (3) Predicting stage transitions with the PMT-variables underscored the importance of self-efficacy. In the preparation stage, a stage transition was more likely only when both threat appraisal and coping appraisal were high. Results emphasize stage-specific differences in the PMT mechanisms and hence support the stage construct. The findings may guide further theory building and research integrating different theoretical approaches.

  11. Predictive vs. Empiric Assessment of Schistosomiasis: Implications for Treatment Projections in Ghana

    PubMed Central

    Kabore, Achille; Biritwum, Nana-Kwadwo; Downs, Philip W.; Soares Magalhaes, Ricardo J.; Zhang, Yaobi; Ottesen, Eric A.

    2013-01-01

    Background Mapping the distribution of schistosomiasis is essential to determine where control programs should operate, but because it is impractical to assess infection prevalence in every potentially endemic community, model-based geostatistics (MBG) is increasingly being used to predict prevalence and determine intervention strategies. Methodology/Principal Findings To assess the accuracy of MBG predictions for Schistosoma haematobium infection in Ghana, school surveys were evaluated at 79 sites to yield empiric prevalence values that could be compared with values derived from recently published MBG predictions. Based on these findings schools were categorized according to WHO guidelines so that practical implications of any differences could be determined. Using the mean predicted values alone, 21 of the 25 empirically determined ‘high-risk’ schools requiring yearly praziquantel would have been undertreated and almost 20% of the remaining schools would have been treated despite empirically-determined absence of infection – translating into 28% of the children in the 79 schools being undertreated and 12% receiving treatment in the absence of any demonstrated need. Conclusions/Significance Using the current predictive map for Ghana as a spatial decision support tool by aggregating prevalence estimates to the district level was clearly not adequate for guiding the national program, but the alternative of assessing each school in potentially endemic areas of Ghana or elsewhere is not at all feasible; modelling must be a tool complementary to empiric assessments. Thus for practical usefulness, predictive risk mapping should not be thought of as a one-time exercise but must, as in the current study, be an iterative process that incorporates empiric testing and model refining to create updated versions that meet the needs of disease control operational managers. PMID:23505584

  12. Measurement properties of a screening questionnaire of obstructive sleep apnea risk: Little information, great prediction?

    PubMed

    Sargento, Paulo; Perea, Victoria; Ladera, Valentina; Lopes, Paulo; Oliveira, Jorge

    2014-06-01

    Previous research has shown the suitability of several questionnaires for predicting the obstructive sleep apnea syndrome. Measurement properties of an online screening questionnaire were studied. The sample consisted of 184 Portuguese adults (89 men and 95 women); 46 of them were polysomnographically diagnosed with the untreated obstructive sleep apnea syndrome. The participants were assessed with an online questionnaire of sleep apnea risk, from the University of Maryland. A principal component factor analysis was performed, revealing a single factor (49.24% of the total variance). Internal consistency was minimally adequate (α = 0.74). The mean inter-item correlation was 0.35 (range 0.12–0.61), whereas the item-total correlations were considered good (0.52–0.81). The total score for patients was significantly higher than for healthy participants (p < .001), but no statistically significant differences between severity groups of patients were found (p > .05). Furthermore, the ability of the measure to discriminate between healthy subjects and OSA subjects was good. Overall, data from the Rasch analysis were consistent with Linacre's guidelines; scores show good model fit and psychometric adequacy. The measure showed adequate structural, internal and criterion validity, suggesting it is a useful and effective screening tool for sleep apnea risk in Portuguese adults.
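
    For readers unfamiliar with the internal-consistency index reported above, a small Python sketch of Cronbach's alpha follows; the item responses are simulated, not the study's data:

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

      # Simulated responses to a hypothetical 8-item screening questionnaire
      rng = np.random.default_rng(2)
      latent = rng.normal(size=(184, 1))
      scores = (latent + rng.normal(scale=1.0, size=(184, 8)) > 0).astype(int)
      print("alpha:", cronbach_alpha(scores))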

  13. A three-dimensional topology optimization model for tooth-root morphology.

    PubMed

    Seitz, K-F; Grabe, J; Köhne, T

    2018-02-01

    To obtain the root of a lower incisor through structural optimization, we used two methods: optimization with Solid Isotropic Material with Penalization (SIMP) and Soft-Kill Option (SKO). The optimization was carried out in combination with a finite element analysis in Abaqus/Standard. The model geometry was based on cone-beam tomography scans of 10 adult males with healthy bone-tooth interface. Our results demonstrate that the optimization method using SIMP for minimum compliance could not adequately predict the actual root shape. The SKO method, however, provided optimization results that were comparable to the natural root form and is therefore suitable to set up the basic topology of a dental root.

  14. Wafer hot spot identification through advanced photomask characterization techniques: part 2

    NASA Astrophysics Data System (ADS)

    Choi, Yohan; Green, Michael; Cho, Young; Ham, Young; Lin, Howard; Lan, Andy; Yang, Richer; Lung, Mike

    2017-03-01

    Historically, 1D metrics such as Mean to Target (MTT) and CD Uniformity (CDU) have been adequate for mask end users to evaluate and predict the mask impact on the wafer process. However, the wafer lithographer's process margin is shrinking at advanced nodes to a point that classical mask CD metrics are no longer adequate to gauge the mask contribution to wafer process error. For example, wafer CDU error at advanced nodes is impacted by mask factors such as 3-dimensional (3D) effects and mask pattern fidelity on sub-resolution assist features (SRAFs) used in Optical Proximity Correction (OPC) models of ever-increasing complexity. To overcome the limitation of 1D metrics, there are numerous on-going industry efforts to better define wafer-predictive metrics through both standard mask metrology and aerial CD methods. Even with these improvements, the industry continues to struggle to define useful correlative metrics that link the mask to final device performance. In part 1 of this work, we utilized advanced mask pattern characterization techniques to extract potential hot spots on the mask and link them, theoretically, to issues with final wafer performance. In this paper, part 2, we complete the work by verifying these techniques at wafer level. The test vehicle (TV) that was used for hot spot detection on the mask in part 1 will be used to expose wafers. The results will be used to verify the mask-level predictions. Finally, wafer performance with predicted and verified mask/wafer condition will be shown as the result of advanced mask characterization. The goal is to maximize mask end user yield through mask-wafer technology harmonization. This harmonization will provide the necessary feedback to determine optimum design, mask specifications, and mask-making conditions for optimal wafer process margin.

  15. Neonatal Nutrition Predicts Energy Balance in Young Adults Born Preterm at Very Low Birth Weight

    PubMed Central

    Matinolli, Hanna-Maria; Hovi, Petteri; Levälahti, Esko; Kaseva, Nina; Silveira, Patricia P.; Hemiö, Katri; Järvenpää, Anna-Liisa; Eriksson, Johan G.; Andersson, Sture; Lindström, Jaana; Männistö, Satu; Kajantie, Eero

    2017-01-01

    Epidemiological studies and animal models suggest that early postnatal nutrition and growth can influence adult health. However, few human studies have objective recordings of early nutrient intake. We studied whether nutrient intake and growth during the first 9 weeks after preterm birth with very low birth weight (VLBW, <1500 g) predict total energy intake, resting energy expenditure (REE), physical activity and food preferences in young adulthood. We collected daily nutritional intakes and weights during the initial hospital stay from hospital records for 127 unimpaired VLBW participants. At an average age 22.5 years, they completed a three-day food record and a physical activity questionnaire and underwent measurements of body composition (dual X-ray absorptiometry; n = 115 with adequate data) and REE (n = 92 with adequate data). We used linear regression and path analysis to investigate associations between neonatal nutrient intake and adult outcomes. Higher energy, protein and fat intakes during the first three weeks of life predicted lower relative (=per unit lean body mass) energy intake and relative REE in adulthood, independent of other pre- and neonatal factors. In path analysis, total effects of early nutrition and growth on relative energy intake were mostly explained by direct effects of early life nutrition. A path mediated by early growth reached statistical significance only for protein intake. There were no associations of neonatal intakes with physical activity or food preferences in adulthood. As a conclusion, higher intake of energy and nutrients during first three weeks of life of VLBW infants predicts energy balance after 20 years. This association is partly mediated through postnatal growth. PMID:29186804

  16. Seizure threshold increases can be predicted by EEG quality in right unilateral ultrabrief ECT.

    PubMed

    Gálvez, Verònica; Hadzi-Pavlovic, Dusan; Waite, Susan; Loo, Colleen K

    2017-12-01

    Increases in seizure threshold (ST) over a course of brief pulse ECT can be predicted by decreases in EEG quality, informing ECT dose adjustment to maintain adequate supra-threshold dosing. ST increases also occur over a course of right unilateral ultrabrief (RUL UB) ECT, but no data exist on the relationship between ST increases and EEG indices. This study (n = 35) investigated if increases in ST over RUL UB ECT treatments could be predicted by a decline in seizure quality. ST titration was performed at ECT session one and seven, with treatment dosing maintained stable (at 6-8 times ST) in intervening sessions. Seizure quality indices (slow-wave onset, mid-ictal amplitude, regularity, stereotypy, and post-ictal suppression) were manually rated at the first supra-threshold treatment, and last supra-threshold treatment before re-titration, using a structured rating scale, by a single trained rater blinded to the ECT session being rated. Twenty-one subjects (60%) had a ST increase. The association between ST changes and EEG quality indices was analysed by logistic regression, yielding a significant model (p < 0.001). Initial ST (p < 0.05) and percentage change in mid-ictal amplitude (p < 0.05) were significant predictors of change in ST. Percentage change in post-ictal suppression reached trend level significance (p = 0.065). Increases in ST over a RUL UB ECT course may be predicted by decreases in seizure quality, specifically decline in mid-ictal amplitude and potentially in post-ictal suppression. Such EEG indices may be able to inform when dose adjustments are necessary to maintain adequate supra-threshold dosing in RUL UB ECT.

  17. Hierarchical Heteroclinics in Dynamical Model of Cognitive Processes: Chunking

    NASA Astrophysics Data System (ADS)

    Afraimovich, Valentin S.; Young, Todd R.; Rabinovich, Mikhail I.

    Combining the results of brain imaging and nonlinear dynamics provides a new hierarchical vision of brain network functionality that is helpful in understanding the relationship of the network to different mental tasks. Using these ideas it is possible to build adequate models for the description and prediction of different cognitive activities in which the number of variables is usually small enough for analysis. The dynamical images of different mental processes depend on their temporal organization and, as a rule, cannot be just simple attractors since cognition is characterized by transient dynamics. The mathematical image for a robust transient is a stable heteroclinic channel consisting of a chain of saddles connected by unstable separatrices. We focus here on hierarchical chunking dynamics that can represent several cognitive activities. Chunking is the dynamical phenomenon that means dividing a long information chain into shorter items. Chunking is known to be important in many processes of perception, learning, memory and cognition. We prove that in the phase space of the model that describes chunking there exists a new mathematical object — heteroclinic sequence of heteroclinic cycles — using the technique of slow-fast approximations. This new object serves as a skeleton of motions reflecting sequential features of hierarchical chunking dynamics and is an adequate image of the chunking processing.

  18. Parameter dimensionality reduction of a conceptual model for streamflow prediction in Canadian, snowmelt dominated ungauged basins

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Poissant, Dominique; Brissette, François

    2015-11-01

    This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was found to be minimal and did not carry over to the regionalization setting. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.

  19. New Equation of State Models for Hydrodynamic Applications

    NASA Astrophysics Data System (ADS)

    Young, David A.; Barbee, Troy W., III; Rogers, Forrest J.

    1997-07-01

    Accurate models of the equation of state of matter at high pressures and temperatures are increasingly required for hydrodynamic simulations. We have developed two new approaches to accurate EOS modeling: 1) ab initio phonons from electron band structure theory for condensed matter and 2) the ACTEX dense plasma model for ultrahigh pressure shocks. We have studied the diamond and high pressure phases of carbon with the ab initio model and find good agreement between theory and experiment for shock Hugoniots, isotherms, and isobars. The theory also predicts a comprehensive phase diagram for carbon. For ultrahigh pressure shock states, we have studied the comparison of ACTEX theory with experiments for deuterium, beryllium, polystyrene, water, aluminum, and silicon dioxide. The agreement is good, showing that complex multispecies plasmas are treated adequately by the theory. These models will be useful in improving the numerical EOS tables used by hydrodynamic codes.

  20. Modeling Soil Organic Carbon at Regional Scale by Combining Multi-Spectral Images with Laboratory Spectra

    PubMed Central

    Peng, Yi; Xiong, Xiong; Adhikari, Kabindra; Knadel, Maria; Grunwald, Sabine; Greve, Mogens Humlekrog

    2015-01-01

    There is a great challenge in combining soil proximal spectra and remote sensing spectra to improve the accuracy of soil organic carbon (SOC) models. This is primarily because mixing of spectral data from different sources and technologies to improve soil models is still in its infancy. The first objective of this study was to integrate information of SOC derived from visible near-infrared reflectance (Vis-NIR) spectra in the laboratory with remote sensing (RS) images to improve predictions of topsoil SOC in the Skjern river catchment, Denmark. The second objective was to improve SOC prediction results by separately modeling uplands and wetlands. A total of 328 topsoil samples were collected and analyzed for SOC. Satellite Pour l’Observation de la Terre (SPOT5), Landsat Data Continuity Mission (Landsat 8) images, laboratory Vis-NIR and other ancillary environmental data including terrain parameters and soil maps were compiled to predict topsoil SOC using Cubist regression and Bayesian kriging. The results showed that the model developed from RS data, ancillary environmental data and laboratory spectral data yielded a lower root mean square error (RMSE) (2.8%) and higher R2 (0.59) than the model developed from only RS data and ancillary environmental data (RMSE: 3.6%, R2: 0.46). Plant-available water (PAW) was the most important predictor for all the models because of its close relationship with soil organic matter content. Moreover, vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), were very important predictors in SOC spatial models. Furthermore, the ‘upland model’ was able to more accurately predict SOC compared with the ‘upland & wetland model’. However, the separately calibrated ‘upland and wetland model’ did not improve the prediction accuracy for wetland sites, since it was not possible to adequately discriminate the vegetation in the RS summer images. We conclude that laboratory Vis-NIR spectroscopy adds critical information that significantly improves the prediction accuracy of SOC compared to using RS data alone. We recommend the incorporation of laboratory spectra with RS data and other environmental data to improve soil spatial modeling and digital soil mapping (DSM). PMID:26555071
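
    For reference, the two skill metrics quoted above can be computed as in the Python sketch below; the observation and prediction vectors are invented for illustration only:

      import numpy as np

      def rmse(y_true, y_pred):
          y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
          return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

      def r_squared(y_true, y_pred):
          y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
          ss_res = np.sum((y_true - y_pred) ** 2)
          ss_tot = np.sum((y_true - y_true.mean()) ** 2)
          return float(1.0 - ss_res / ss_tot)

      # Invented SOC (%) observations and predictions from two candidate models
      obs = np.array([2.1, 3.4, 1.8, 5.2, 2.9, 4.0])
      pred_rs_only = np.array([2.9, 2.8, 2.5, 4.0, 3.5, 3.1])
      pred_with_visnir = np.array([2.3, 3.1, 2.0, 4.8, 3.1, 3.7])
      for name, pred in [("RS only", pred_rs_only), ("RS + Vis-NIR", pred_with_visnir)]:
          print(name, "RMSE:", round(rmse(obs, pred), 2), "R2:", round(r_squared(obs, pred), 2))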

  1. Development of a dynamic growth-death model for Escherichia coli O157:H7 in minimally processed leafy green vegetables.

    PubMed

    McKellar, Robin C; Delaquis, Pascal

    2011-11-15

    Escherichia coli O157:H7, an occasional contaminant of fresh produce, can present a serious health risk in minimally processed leafy green vegetables. A good predictive model that adequately describes the growth or die-off of this pathogen under the variable temperature conditions experienced during processing, storage and shipping is needed for Quantitative Risk Assessment (QRA) purposes. Literature data on the behaviour of this pathogen on fresh-cut lettuce and spinach were taken from published graphs by digitization, from published tables or from personal communications. A three-phase growth function was fitted to the data from 13 studies, and a square root model for growth rate (μ) as a function of temperature was derived: μ = (0.023 × (Temperature − 1.20))². Variability in the published data was incorporated into the growth model by the use of weighted regression and the 95% prediction limits. A log-linear die-off function was fitted to the data from 13 studies, and the resulting rate constants were fitted to a shifted lognormal distribution (mean 0.013; standard deviation 0.010; shift 0.001). The combined growth-death model successfully predicted pathogen behaviour under both isothermal and non-isothermal conditions when compared to new published data. By incorporating variability, the resulting model is an improvement over existing ones, and is suitable for QRA applications. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
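
    As a worked illustration of the fitted parameters reported above (not code from the paper), the following Python sketch evaluates the square-root growth model and draws a die-off rate constant; the shifted-lognormal parameterization shown is one plausible reading of the reported mean/SD/shift, and units follow the source data:

      import numpy as np

      def growth_rate(temp_C):
          """Square-root-type growth model quoted above: mu = (0.023*(T - 1.20))^2 for T above 1.20 C."""
          return (0.023 * (temp_C - 1.20)) ** 2 if temp_C > 1.20 else 0.0

      def sample_dieoff_rate(rng):
          """Draw a die-off rate constant from the shifted lognormal fit (mean 0.013, SD 0.010, shift 0.001).
          Assumes the mean/SD describe the lognormal part after removing the shift (one plausible reading)."""
          mean, sd, shift = 0.013, 0.010, 0.001
          m = mean - shift
          sigma2 = np.log(1.0 + (sd / m) ** 2)
          mu = np.log(m) - sigma2 / 2.0
          return shift + rng.lognormal(mu, np.sqrt(sigma2))

      rng = np.random.default_rng(3)
      print("growth rate at 10 C:", growth_rate(10.0))
      print("sampled die-off rate:", sample_dieoff_rate(rng))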

  2. Impulsive Personality Traits and Alcohol Use: Does Sleeping Help with Thinking?

    PubMed Central

    Miller, Mary Beth; DiBello, Angelo M.; Lust, Sarah A.; Meisel, Matthew K.; Carey, Kate B.

    2016-01-01

    Objective Both impulsivity and sleep disturbance have been associated with heavy alcohol use among young adults; yet studies to date have not examined their interactive effects. The current study aimed to determine if adequate sleep moderates the association between impulsive personality traits and alcohol use among young adults. Method College students (N = 568) who had been mandated to alcohol treatment following violation of campus alcohol policy provided information regarding alcohol use and related consequences, impulsive personality traits (measured using the UPPS Impulsive Behavior Scale), and perception of sleep adequacy as part of a larger intervention trial. Results Higher urgency, lower premeditation, and higher sensation-seeking predicted greater levels of alcohol consumption, while higher urgency predicted more alcohol-related consequences. As hypothesized, there was a significant interaction between premeditation and sleep adequacy in the prediction of drinks per week; in contrast to hypotheses, however, premeditation was associated with drinking only among those reporting adequate (rather than inadequate) sleep. Specifically, the tendency to premeditate was associated with less drinking among those who reported adequate sleep and was not associated with drinking among those reporting inadequate sleep. Conclusion Sensation-seeking and urgency are associated with greater alcohol involvement among young adults, regardless of sleep adequacy. Conversely, the ability to plan ahead and anticipate the consequences of one’s behaviors (premeditation) is only protective against heavy drinking among individuals receiving adequate sleep. With replication, these findings may inform alcohol prevention and intervention efforts. PMID:28094998

  3. Impulsive personality traits and alcohol use: Does sleeping help with thinking?

    PubMed

    Miller, Mary Beth; DiBello, Angelo M; Lust, Sarah A; Meisel, Matthew K; Carey, Kate B

    2017-02-01

    Both impulsivity and sleep disturbance have been associated with heavy alcohol use among young adults; yet studies to date have not examined their interactive effects. The current study aimed to determine if adequate sleep moderates the association between impulsive personality traits and alcohol use among young adults. College students (N = 568) who had been mandated to alcohol treatment following violation of campus alcohol policy provided information regarding alcohol use and related consequences, impulsive personality traits (measured using the UPPS Impulsive Behavior Scale), and perception of sleep adequacy as part of a larger intervention trial. Higher urgency, lower premeditation, and higher sensation-seeking predicted greater levels of alcohol consumption, while higher urgency predicted more alcohol-related consequences. As hypothesized, there was a significant interaction between premeditation and sleep adequacy in the prediction of drinks per week; in contrast to hypotheses, however, premeditation was associated with drinking only among those reporting adequate (rather than inadequate) sleep. Specifically, the tendency to premeditate was associated with less drinking among those who reported adequate sleep and was not associated with drinking among those reporting inadequate sleep. Sensation-seeking and urgency are associated with greater alcohol involvement among young adults, regardless of sleep adequacy. Conversely, the ability to plan ahead and anticipate the consequences of one's behaviors (premeditation) is only protective against heavy drinking among individuals receiving adequate sleep. With replication, these findings may inform alcohol prevention and intervention efforts. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Adequate sleep moderates the prospective association between alcohol use and consequences.

    PubMed

    Miller, Mary Beth; DiBello, Angelo M; Lust, Sarah A; Carey, Michael P; Carey, Kate B

    2016-12-01

    Inadequate sleep and heavy alcohol use have been associated with negative outcomes among college students; however, few studies have examined the interactive effects of sleep and drinking quantity in predicting alcohol-related consequences. This study aimed to determine if adequate sleep moderates the prospective association between weekly drinking quantity and consequences. College students (N=568) who were mandated to an alcohol prevention intervention reported drinks consumed per week, typical sleep quantity (calculated from sleep/wake times), and perceptions of sleep adequacy as part of a larger research trial. Assessments were completed at baseline and one-, three-, and five-month follow-ups. Higher baseline quantities of weekly drinking and inadequate sleep predicted alcohol-related consequences at baseline and one-month follow-up. Significant interactions emerged between baseline weekly drinking quantity and adequate sleep in the prediction of alcohol-related consequences at baseline, one-, three-, and five-month assessments. Simple slopes analyses revealed that weekly drinking quantity was positively associated with alcohol-related consequences for those reporting both adequate and inadequate sleep, but this association was consistently stronger among those who reported inadequate sleep. Subjective evaluation of sleep adequacy moderates both the concurrent and prospective associations between weekly drinking quantity and consequences, such that heavy-drinking college students reporting inadequate sleep experience more consequences as a result of drinking. Research needs to examine the mechanism(s) by which inadequate sleep affects alcohol risk among young adults. Copyright © 2016 Elsevier Ltd. All rights reserved.
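
    A compact Python sketch of the kind of moderation (interaction) analysis described above, using statsmodels on simulated data; the variable names and coefficients are illustrative and are not the study's estimates:

      import numpy as np
      import statsmodels.api as sm

      # Simulated data for a moderation analysis: consequences ~ drinks + sleep + drinks*sleep
      rng = np.random.default_rng(4)
      n = 568
      drinks = rng.poisson(10, n).astype(float)
      adequate_sleep = rng.integers(0, 2, n).astype(float)
      consequences = (1 + 0.4 * drinks - 0.5 * adequate_sleep
                      - 0.15 * drinks * adequate_sleep + rng.normal(0, 2, n))

      X = sm.add_constant(np.column_stack([drinks, adequate_sleep, drinks * adequate_sleep]))
      fit = sm.OLS(consequences, X).fit()
      # The last coefficient is the interaction term; simple slopes for drinking are obtained
      # by evaluating the drinks effect separately for adequate vs. inadequate sleep.
      print(fit.params)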

  5. Suicide Prevention: College Students' Intention to Intervene.

    PubMed

    Aldrich, Rosalie S

    2017-07-03

    The objective of this article was to examine college students' intention to intervene with a suicidal individual and examine the Willingness to Intervene against Suicide questionnaire (WIS). College students (n = 1065) completed an online questionnaire about their attitudes, subjective norms, and perceived behavioral control regarding suicide and suicide intervention as well as their intention to intervene with a suicidal individual. The data were analyzed using confirmatory factor analysis, reliability analysis, and multiple regression. It was found that the WIS significantly predicted intention to intervene with a suicidal individual. The WIS was internally consistent with adequate goodness-of-fit indices for three of the four sub-scales. The WIS is an effective tool for predicting intention to intervene; however, the subjective norms sub-scale should be revised to improve the model.

  6. A CCIR-based prediction model for Earth-Space propagation

    NASA Technical Reports Server (NTRS)

    Zhang, Zengjun; Smith, Ernest K.

    1991-01-01

    At present there is no single 'best way' to predict propagation impairments to an Earth-Space path. However, there is an internationally accepted way, namely that given in the most recent version of CCIR Report 564 of Study Group 5. This paper treats a computer code conforming as far as possible to Report 564. It was prepared for an IBM PS/2 using a 386 chip and for Macintosh SE or Mach II. It is designed to be easy to write, read, and modify; to run fast; and to offer strong graphics capability, an adequate set of functions, and dialog and window support. Computer languages considered included the following: (1) Turbo BASIC, (2) Turbo PASCAL, (3) FORTRAN, (4) SMALL TALK, (5) C++, (6) MS SPREADSHEET, (7) MS Excel-Macro, (8) SIMSCRIPT II.5, and (9) WINGZ.

  7. Tricyclic [1,2,4]triazine 1,4-dioxides as hypoxia selective cytotoxins.

    PubMed

    Hay, Michael P; Hicks, Kevin O; Pchalek, Karin; Lee, Ho H; Blaser, Adrian; Pruijn, Frederik B; Anderson, Robert F; Shinde, Sujata S; Wilson, William R; Denny, William A

    2008-11-13

    A series of novel tricyclic triazine-di- N-oxides (TTOs) related to tirapazamine have been designed and prepared. A wide range of structural arrangements with cycloalkyl, oxygen-, and nitrogen-containing saturated rings fused to the triazine core, coupled with various side chains linked to either hemisphere, resulted in TTO analogues that displayed hypoxia-selective cytotoxicity in vitro. Optimal rates of hypoxic metabolism and tissue diffusion coefficients were achieved with fused cycloalkyl rings in combination with both the 3-aminoalkyl or 3-alkyl substituents linked to weakly basic soluble amines. The selection was further refined using pharmacokinetic/pharmacodynamic model predictions of the in vivo hypoxic potency (AUC req) and selectivity (HCD) with 12 TTO analogues predicted to be active in vivo, subject to the achievement of adequate plasma pharmacokinetics.

  8. Tricyclic [1,2,4]Triazine 1,4-Dioxides As Hypoxia Selective Cytotoxins

    PubMed Central

    Hay, Michael P.; Hicks, Kevin O.; Pchalek, Karin; Lee, Ho H.; Blaser, Adrian; Pruijn, Frederik B.; Anderson, Robert F.; Shinde, Sujata S.; Wilson, William R.; Denny, William A.

    2009-01-01

    A series of novel tricyclic triazine-di-N-oxides (TTOs) related to tirapazamine have been designed and prepared. A wide range of structural arrangements with cycloalkyl, oxygen- and nitrogen-containing saturated rings fused to the triazine core, coupled with various side chains linked to either hemisphere, resulted in TTO analogues that displayed hypoxia-selective cytotoxicity in vitro. Optimal rates of hypoxic metabolism and tissue diffusion coefficients were achieved with fused cycloalkyl rings in combination with both the 3-aminoalkyl or 3-alkyl substituents linked to weakly basic soluble amines. The selection was further refined using pharmacokinetic/pharmacodynamic model predictions of the in vivo hypoxic potency (AUCreq) and selectivity (HCD) with 12 TTO analogues predicted to be active in vivo, subject to the achievement of adequate plasma pharmacokinetics. PMID:18847185

  9. Finite element based simulation on friction stud welding of metal matrix composites to steel

    NASA Astrophysics Data System (ADS)

    Hynes, N. Rajesh Jesudoss; Tharmaraj, R.; Velu, P. Shenbaga; Kumar, R.

    2016-05-01

    Friction welding is a solid-state joining technique used for joining similar and dissimilar materials with high integrity. The technique is being applied successfully in the aerospace, automobile, and shipbuilding industries and is attracting growing research interest. The quality of friction stud welded joints depends on the frictional heat generated at the interface. Hence, a thermal analysis of friction stud welding of a stainless steel (AISI 304) and aluminium silicon carbide (AlSiC) combination is carried out in the present work. Numerical simulation is performed using ANSYS software and temperature profiles are predicted at successive time increments. The developed numerical model is found to be adequate for predicting the temperature distribution of friction stud welded aluminium silicon carbide/stainless steel joints.
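
    As a rough illustration of the kind of transient thermal calculation summarized above (a sketch only, not the authors' ANSYS model), the following code integrates 1-D heat conduction through the workpiece with an assumed frictional heat flux applied at the weld interface; the material properties, flux magnitude, and geometry are placeholder values.

```python
import numpy as np

# Minimal 1-D transient conduction sketch (explicit FTCS scheme).
# All values are illustrative placeholders, not data from the study.
k, rho, cp = 16.0, 7900.0, 500.0      # stainless-steel-like properties (W/m.K, kg/m3, J/kg.K)
alpha = k / (rho * cp)                # thermal diffusivity (m2/s)
L, nx = 0.02, 101                     # 20 mm long domain, number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha              # below the FTCS stability limit dx^2 / (2*alpha)

q_friction = 2.0e6                    # assumed frictional heat flux at the weld face (W/m2)
T = np.full(nx, 25.0)                 # initial temperature (deg C)

t, t_end = 0.0, 2.0                   # simulate 2 s of frictional heating
while t < t_end:
    Tn = T.copy()
    # interior nodes: explicit finite-difference update of the heat equation
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # weld interface (x = 0): imposed heat-flux boundary via a ghost node
    T[0] = Tn[0] + alpha * dt / dx**2 * (2 * Tn[1] - 2 * Tn[0]) + 2 * dt * q_friction / (rho * cp * dx)
    T[-1] = 25.0                      # far end held at ambient temperature
    t += dt

print(f"Predicted interface temperature after {t_end:.1f} s: {T[0]:.0f} deg C")
```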

  10. Simulating Space Capsule Water Landing with Explicit Finite Element Method

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Lyle, Karen H.

    2007-01-01

    A study of using an explicit nonlinear dynamic finite element code for simulating the water landing of a space capsule was performed. The finite element model contains Lagrangian shell elements for the space capsule and Eulerian solid elements for the water and air. An Arbitrary Lagrangian Eulerian (ALE) solver and a penalty coupling method were used for predicting the fluid and structure interaction forces. The space capsule was first assumed to be rigid, so the numerical results could be correlated with closed form solutions. The water and air meshes were continuously refined until the solution was converged. The converged maximum deceleration predicted is bounded by the classical von Karman and Wagner solutions and is considered to be an adequate solution. The refined water and air meshes were then used in the models for simulating the water landing of a capsule model that has a flexible bottom. For small pitch angle cases, the maximum deceleration from the flexible capsule model was found to be significantly greater than the maximum deceleration obtained from the corresponding rigid model. For large pitch angle cases, the difference between the maximum deceleration of the flexible model and that of its corresponding rigid model is smaller. Test data of Apollo space capsules with a flexible heat shield qualitatively support the findings presented in this paper.

  11. Simulating smoke transport from wildland fires with a regional-scale air quality model: sensitivity to spatiotemporal allocation of fire emissions.

    PubMed

    Garcia-Menendez, Fernando; Hu, Yongtao; Odman, Mehmet T

    2014-09-15

    Air quality forecasts generated with chemical transport models can provide valuable information about the potential impacts of fires on pollutant levels. However, significant uncertainties are associated with fire-related emission estimates as well as their distribution on gridded modeling domains. In this study, we explore the sensitivity of fine particulate matter concentrations predicted by a regional-scale air quality model to the spatial and temporal allocation of fire emissions. The assessment was completed by simulating a fire-related smoke episode in which air quality throughout the Atlanta metropolitan area was affected on February 28, 2007. Sensitivity analyses were carried out to evaluate the significance of emission distribution among the model's vertical layers, along the horizontal plane, and into hourly inputs. Predicted PM2.5 concentrations were highly sensitive to emission injection altitude relative to planetary boundary layer height. Simulations were also responsive to the horizontal allocation of fire emissions and their distribution into single or multiple grid cells. Additionally, modeled concentrations were greatly sensitive to the temporal distribution of fire-related emissions. The analyses demonstrate that, in addition to adequate estimates of emitted mass, successfully modeling the impacts of fires on air quality depends on an accurate spatiotemporal allocation of emissions. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. A Bayesian Hierarchical Model for Glacial Dynamics Based on the Shallow Ice Approximation and its Evaluation Using Analytical Solutions

    NASA Astrophysics Data System (ADS)

    Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur

    2018-03-01

    Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach allows glaciologists to make fully probabilistic predictions of glacier thickness at unobserved spatio-temporal coordinates and to derive posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model that uses exact analytical solutions of the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that the approach is able to adequately model numerical errors and produce useful posterior distributions and predictions for the physical parameters. A byproduct of developing the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction of numerical errors through a statistical model; this error-correcting process accounts for numerical errors that accumulate forward in time and for the spatial variation of numerical errors between the dome, interior, and margin of a glacier.
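
    For readers unfamiliar with the SIA, the sketch below integrates its 1-D, flat-bed form, ∂H/∂t = ∂/∂x(D ∂H/∂x) + M with D = Γ H^(n+2) |∂H/∂x|^(n-1), using a plain explicit finite-difference step. This is a generic illustration of the forward model only, not the paper's novel scheme or its Bayesian hierarchy, and all parameter values are placeholders.

```python
import numpy as np

# Generic 1-D shallow ice approximation (flat bed), explicit finite differences.
# Parameters are illustrative, not those used in the paper.
n = 3.0                                  # Glen flow-law exponent
A = 1.0e-16                              # flow-rate factor (Pa^-3 a^-1)
rho, g = 910.0, 9.81                     # ice density (kg/m3), gravity (m/s2)
Gamma = 2.0 * A * (rho * g) ** n / (n + 2.0)

L, nx = 100e3, 201                       # 100 km domain
dx = L / (nx - 1)
H = np.zeros(nx)                         # ice thickness (m); flat bed, so surface = H
M = 0.3 * np.ones(nx)                    # accumulation rate (m ice / a), uniform placeholder

dt = 0.01                                # time step (a); kept small for explicit stability
for step in range(int(500 / dt)):        # integrate 500 years of accumulation-driven growth
    dHdx = np.diff(H) / dx               # surface slope at cell interfaces
    H_face = 0.5 * (H[1:] + H[:-1])      # thickness averaged to interfaces
    D = Gamma * H_face ** (n + 2) * np.abs(dHdx) ** (n - 1)   # nonlinear diffusivity
    flux = -D * dHdx                     # ice flux at interfaces
    H[1:-1] += dt * (-(flux[1:] - flux[:-1]) / dx + M[1:-1])
    H[0] = H[-1] = 0.0                   # ice-free margins
    np.clip(H, 0.0, None, out=H)         # thickness cannot be negative

print(f"Maximum thickness after 500 a: {H.max():.0f} m")
```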

  13. Stochastic modeling for river pollution of Sungai Perlis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yunus, Nurul Izzaty Mohd.; Rahman, Haliza Abd.; Bahar, Arifah

    2015-02-03

    River pollution has been recognized as a contributor to a wide range of health problems and disorders in humans. It can pose health dangers to people who come into contact with it, either directly or indirectly. It is therefore important to measure the concentration of Biochemical Oxygen Demand (BOD) as a water quality parameter, since this parameter has long been the basic means of determining the degree of water pollution in rivers. In this study, BOD is used as a parameter to estimate the water quality of Sungai Perlis. It has been observed that Sungai Perlis is polluted due to lack of management and improper use of resources. Therefore, it is important to model the Sungai Perlis water quality in order to describe and predict the water quality system. A secondary data set of BOD concentrations was used, extracted from the website of the Drainage and Irrigation Department of Perlis State. The first-order differential equation from the Streeter-Phelps model was utilized as a deterministic model, and this model was then developed into a stochastic model. Results from this study show that the stochastic model is more adequate for describing and predicting the BOD concentration and the water quality system in Sungai Perlis, having a smaller mean squared error (MSE).
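
    As a rough sketch of the modeling route described above (deterministic Streeter-Phelps BOD decay extended to a stochastic differential equation), the code below integrates first-order BOD decay dL/dt = -k·L and an Euler-Maruyama version with an added noise term. The rate constant, initial BOD, and noise intensity are assumed placeholder values, not the Sungai Perlis estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder parameters -- not the values estimated for Sungai Perlis.
k = 0.25        # first-order BOD decay rate (1/day)
L0 = 12.0       # initial BOD concentration (mg/L)
sigma = 0.4     # noise intensity for the stochastic extension
dt, days = 0.1, 30
n_steps = int(days / dt)
t = np.linspace(0.0, days, n_steps + 1)

# Deterministic Streeter-Phelps BOD decay: dL/dt = -k * L  =>  L(t) = L0 * exp(-k t)
L_det = L0 * np.exp(-k * t)

# Stochastic version via Euler-Maruyama: dL = -k * L dt + sigma * L dW
L_sto = np.empty(n_steps + 1)
L_sto[0] = L0
for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    L_sto[i + 1] = max(L_sto[i] - k * L_sto[i] * dt + sigma * L_sto[i] * dW, 0.0)

# The mean squared error of each model against observed BOD would then be
# compared, as done in the study, to decide which description is more adequate.
print(f"Deterministic BOD after {days} days: {L_det[-1]:.2f} mg/L")
print(f"One stochastic realisation:          {L_sto[-1]:.2f} mg/L")
```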

  14. Streamflow predictions in Alpine Catchments by using artificial neural networks. Application in the Alto Genil Basin (South Spain)

    NASA Astrophysics Data System (ADS)

    Jimeno-Saez, Patricia; Pegalajar-Cuellar, Manuel; Pulido-Velazquez, David

    2017-04-01

    This study explores techniques for modeling water inflow series, focusing on short-term streamflow prediction. An appropriate estimation of streamflow in advance is necessary to anticipate measures that mitigate the impacts and risks related to drought conditions. The study analyzes the prediction of future streamflow in nineteen subbasins of the Alto Genil basin in Granada (southeast Spain). Streamflow in some of these subbasins has an important snowmelt component, because part of the system lies in the Sierra Nevada, the highest mountain range in continental Spain. The prediction models have been calibrated using time series of historical natural streamflows. The available streamflow measurements were downloaded from several public data sources and preprocessed to restore the natural regime by removing anthropic effects. Missing values in the calibration horizon were estimated with a Temez hydrological balance model, approximating the snowmelt processes with a hybrid degree-day method. ARIMA models are used as the baseline method, and Elman recurrent neural networks and nonlinear autoregressive (NAR) neural networks are tested to determine whether prediction accuracy can be improved. After performing multiple experiments with these models, non-parametric statistical tests are applied to select the best technique. The experiments show that ARIMA models are not adequate in this case study because of a nonlinear component that they cannot capture. The Elman and NAR networks are trained with multi-start training for each network structure to deal with the local-optimum problem, since neural network training depends strongly on the initial weights. The results suggest that both neural networks are efficient for short-term prediction, surpassing the limitations of the ARIMA models, and that NAR networks have the greatest generalization capability. NAR networks are therefore chosen as the starting point for further work studying streamflow predictions that incorporate exogenous variables (such as snow cover area), the sensitivity of the prediction to initial conditions, multivariate streamflow predictions considering the spatial correlation between subbasin streamflows, and synthetic generation for assessing drought statistics. This research has been partially supported by the CGL2013-48424-C2-2-R (MINECO) and the PMAFI/06/14 (UCAM) projects.
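
    A minimal illustration of a nonlinear autoregressive (NAR) predictor of the kind compared against ARIMA above: a feed-forward network fed with lagged streamflow values, here built with scikit-learn's MLPRegressor on synthetic data. The lag depth, network size, and data are assumptions for the sketch, not the configuration used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic monthly streamflow series with seasonality + noise (stand-in for observed data).
months = np.arange(360)
flow = 20 + 12 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, months.size)

def make_lagged(series, n_lags):
    """Build (X, y) pairs where X holds the previous n_lags values and y the next value."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

n_lags = 12
X, y = make_lagged(flow, n_lags)
split = 300 - n_lags                      # hold out the final years for validation
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

# Multi-start training: keep the network with the lowest validation error,
# mitigating the dependence on random initial weights noted in the abstract.
best_model, best_err = None, np.inf
for seed in range(5):
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=seed)
    net.fit(X_tr, y_tr)
    err = np.mean((net.predict(X_te) - y_te) ** 2)
    if err < best_err:
        best_model, best_err = net, err

print(f"Best validation MSE over 5 restarts: {best_err:.2f}")
```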

  15. Developing a diagnostic model for estimating terrestrial vegetation gross primary productivity using the photosynthetic quantum yield and Earth Observation data.

    PubMed

    Ogutu, Booker O; Dash, Jadunandan; Dawson, Terence P

    2013-09-01

    This article develops a new carbon exchange diagnostic model [i.e. Southampton CARbon Flux (SCARF) model] for estimating daily gross primary productivity (GPP). The model exploits the maximum quantum yields of two key photosynthetic pathways (i.e. C3 and C4) to estimate the conversion of absorbed photosynthetically active radiation into GPP. Furthermore, this is the first model to use only the fraction of photosynthetically active radiation absorbed by photosynthetic elements of the canopy (i.e. FAPARps), rather than the total canopy, to predict GPP. The GPP predicted by the SCARF model was comparable to in situ GPP measurements (R² > 0.7) in most of the evaluated biomes. Overall, the SCARF model predicted high GPP in regions dominated by forests and croplands, and low GPP in shrublands and dry grasslands across the USA and Europe. The spatial distribution of GPP from the SCARF model over Europe and the conterminous USA was comparable to that of the MOD17 GPP product except in regions dominated by croplands. The SCARF model GPP predictions were positively correlated (R² > 0.5) with climatic and biophysical input variables, indicating its sensitivity to factors controlling vegetation productivity. The new model has three advantages: first, it prescribes only two quantum yield terms rather than species-specific light use efficiency terms; second, it uses only the fraction of PAR absorbed by photosynthetic elements of the canopy (FAPARps), hence capturing the actual PAR used in photosynthesis; and third, it does not need a detailed land cover map, which is a major source of uncertainty in most remote sensing based GPP models. The Sentinel satellites planned for launch in 2014 by the European Space Agency have adequate spectral channels to derive FAPARps at relatively high spatial resolution (20 m). This provides a unique opportunity to produce global GPP operationally using the SCARF model at high spatial resolution. © 2013 John Wiley & Sons Ltd.
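
    The abstract does not reproduce the SCARF equations, but the general light-use-efficiency structure it describes (a pathway-specific conversion factor applied to the PAR absorbed by photosynthetic canopy elements) can be sketched as follows; the conversion values, the optional stress scalar, and the inputs are illustrative assumptions rather than the published parameterization.

```python
# Generic light-use-efficiency GPP sketch in the spirit of quantum-yield models.
# Values are illustrative placeholders; the actual SCARF formulation is in the paper.
CONVERSION = {"C3": 1.8, "C4": 2.4}   # assumed g C per MJ of absorbed PAR, by pathway

def daily_gpp(par_mj_m2, fapar_ps, pathway="C3", stress=1.0):
    """Daily GPP (g C m-2 d-1) from PAR absorbed by photosynthetic canopy elements.

    par_mj_m2 : incident photosynthetically active radiation (MJ m-2 d-1)
    fapar_ps  : fraction of PAR absorbed by photosynthetic elements (0-1)
    pathway   : 'C3' or 'C4' photosynthetic pathway
    stress    : optional 0-1 scalar for temperature/water limitation (assumption)
    """
    apar = par_mj_m2 * fapar_ps
    return CONVERSION[pathway] * apar * stress

print(daily_gpp(par_mj_m2=9.0, fapar_ps=0.6, pathway="C3", stress=0.85))
```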

  16. Measurement of local deformations on thermoformed composite parts under different process conditions

    NASA Astrophysics Data System (ADS)

    Vanclooster, K.; Lomov, S. V.; Willems, A.; Verpoest, I.

    2007-04-01

    The growing use of thermoplastic composites demands tools to analyze the deformed parts accurately. Intraply shear is the most pronounced deformation mode that occurs when a 2D fabric is draped into a complex 3D shape. The paper uses a 3D image correlation method to investigate the intraply shear deformation of a woven fabric reinforced composite. The thickness distribution of the formed ply is determined by using a dial indicator. The fabric is deformed by non-isothermal stamping into a matched "half-salami" shaped mould. The influence of processing conditions, especially pre-heating temperature, stamp speed and the blankholder force is investigated. The effect of the ply-orientation on the shear angle distribution is discussed. The measured shear angles are compared with a kinematical drape model. It was concluded that the local deformations are not influenced by the processing conditions. For 0 and 90° ply-orientation, the draping model adequately predicts the shear angle up to about 40°; for higher angles the shear is overestimated. In case of other ply orientations the model was unable to predict the correct shear angles.

  17. Comparative study of wine tannin classification using Fourier transform mid-infrared spectrometry and sensory analysis.

    PubMed

    Fernández, Katherina; Labarca, Ximena; Bordeu, Edmundo; Guesalaga, Andrés; Agosin, Eduardo

    2007-11-01

    Wine tannins are fundamental to the determination of wine quality. However, the chemical and sensorial analysis of these compounds is not straightforward and a simple and rapid technique is necessary. We analyzed the mid-infrared spectra of white, red, and model wines spiked with known amounts of skin or seed tannins, collected using Fourier transform mid-infrared (FT-MIR) transmission spectroscopy (400-4000 cm⁻¹). The spectral data were classified according to their tannin source, skin or seed, and tannin concentration by means of discriminant analysis (DA) and soft independent modeling of class analogy (SIMCA) to obtain a probabilistic classification. Wines were also classified sensorially by a trained panel and compared with FT-MIR. SIMCA models gave the most accurate classification (over 97%) and prediction (over 60%) among the wine samples. The prediction was increased (over 73%) using the leave-one-out cross-validation technique. Sensory classification of the wines was less accurate than that obtained with FT-MIR and SIMCA. Overall, these results show the potential of FT-MIR spectroscopy, in combination with adequate statistical tools, to discriminate wines with different tannin levels.

  18. Lunar gravitational field estimation and the effects of mismodeling upon lunar satellite orbit prediction. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Davis, John H.

    1993-01-01

    Lunar spherical harmonic gravity coefficients are estimated from simulated observations of a near-circular low altitude polar orbiter disturbed by lunar mascons. Lunar gravity sensing missions using earth-based nearside observations with and without satellite-based far-side observations are simulated and least squares maximum likelihood estimates are developed for spherical harmonic expansion fit models. Simulations and parameter estimations are performed by a modified version of the Smithsonian Astrophysical Observatory's Planetary Ephemeris Program. Two different lunar spacecraft mission phases are simulated to evaluate the estimated fit models. Results for predicting state covariances one orbit ahead are presented along with the state errors resulting from the mismodeled gravity field. The position errors from planning a lunar landing maneuver with a mismodeled gravity field are also presented. These simulations clearly demonstrate the need to include observations of satellite motion over the far side in estimating the lunar gravity field. The simulations also illustrate that the eighth degree and order expansions used in the simulated fits were unable to adequately model lunar mascons.

  19. Inelastic Deformation of Metal Matrix Composites. Part 1; Plasticity and Damage Mechanisms

    NASA Technical Reports Server (NTRS)

    Majumdar, B. S.; Newaz, G. M.

    1992-01-01

    The deformation mechanisms of a Ti 15-3/SCS6 (SiC fiber) metal matrix composite (MMC) were investigated using a combination of mechanical measurements and microstructural analysis. The objectives were to evaluate the contributions of plasticity and damage to the overall inelastic response, and to confirm the mechanisms by rigorous microstructural evaluations. Results of room-temperature experiments performed on 0 degree and 90 degree systems are reported here; results of experiments performed on other laminate systems and at high temperatures will be provided in a forthcoming report. Inelastic deformation of the 0 degree MMC (fibers parallel to the load direction) was dominated by the plasticity of the matrix. In contrast, inelastic deformation of the 90 degree composite (fibers perpendicular to the loading direction) occurred by both damage and plasticity. The predictions of a continuum elastic-plastic model were compared with the experimental data. The model was adequate for predicting the 0 degree response; however, it was inadequate for the 90 degree response, largely because it neglects damage. The importance of validating constitutive models using a combination of mechanical measurements and microstructural analysis is pointed out, and the deformation mechanisms and likely sequence of events associated with the inelastic deformation of MMCs are indicated.

  20. Thermodynamics of concentrated electrolyte mixtures and the prediction of mineral solubilities to high temperatures for mixtures in the system Na-K-Mg-Cl-SO4-OH-H2O

    NASA Astrophysics Data System (ADS)

    Pabalan, Roberto T.; Pitzer, Kenneth S.

    1987-09-01

    Mineral solubilities in binary and ternary electrolyte mixtures in the system Na-K-Mg-Cl-SO4-OH-H2O are calculated to high temperatures using available thermodynamic data for solids and for aqueous electrolyte solutions. Activity and osmotic coefficients are derived from the ion-interaction model of Pitzer (1973, 1979) and co-workers, the parameters of which are evaluated from experimentally determined solution properties or from solubility data in binary and ternary mixtures. Excellent to good agreement with experimental solubilities for binary and ternary mixtures indicates that the model can be successfully used to predict mineral-solution equilibria to high temperatures. Although there are currently no theoretical forms for the temperature dependencies of the various model parameters, the solubility data in ternary mixtures can be adequately represented by constant values of the mixing term θij and values of ψijk which are either constant or have a simple temperature dependence. Since no additional parameters are needed to describe the thermodynamic properties of more complex electrolyte mixtures, the calculations can be extended to equilibrium studies relevant to natural systems. Examples of predicted solubilities are given for the quaternary system NaCl-KCl-MgCl2-H2O.

  1. Examining the Predictive Validity of a Dynamic Assessment of Decoding to Forecast Response to Tier 2 Intervention

    PubMed Central

    Cho, Eunsoo; Compton, Donald L.; Fuchs, Doug; Fuchs, Lynn S.; Bouton, Bobette

    2013-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small group tutoring in a response-to-intervention model. First-grade students (n=134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses were completed that modeled the correlates of WIF growth (final level of performance and growth). Its purpose was to examine the predictive validity of DA in the presence of 3 sets of variables: static decoding measures, Tier 1 responsiveness indicators, and pre-reading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% – 13% of the variance in Tier 2 responsiveness depending on the competing predictors in the model and WIF outcome (final level of performance or growth). Although the additional variances explained uniquely by DA were relatively small, results indicate the potential of DA in identifying Tier 2 nonresponders. PMID:23213050

  2. Examining the predictive validity of a dynamic assessment of decoding to forecast response to tier 2 intervention.

    PubMed

    Cho, Eunsoo; Compton, Donald L; Fuchs, Douglas; Fuchs, Lynn S; Bouton, Bobette

    2014-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small-group tutoring in a response-to-intervention model. First grade students (n = 134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses were completed that modeled the correlates of WIF growth (final level of performance and growth). Its purpose was to examine the predictive validity of DA in the presence of three sets of variables: static decoding measures, Tier 1 responsiveness indicators, and prereading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% to 13% of the variance in Tier 2 responsiveness depending on the competing predictors in the model and WIF outcome (final level of performance or growth). Although the additional variances explained uniquely by DA were relatively small, results indicate the potential of DA in identifying Tier 2 nonresponders. © Hammill Institute on Disabilities 2012.

  3. Chemical structure-based predictive model for the oxidation of trace organic contaminants by sulfate radical.

    PubMed

    Ye, Tiantian; Wei, Zongsu; Spinney, Richard; Tang, Chong-Jian; Luo, Shuang; Xiao, Ruiyang; Dionysiou, Dionysios D

    2017-06-01

    Second-order rate constants (kSO4•-) for the reaction of sulfate radical anion (SO4•-) with trace organic contaminants (TrOCs) are of scientific and practical importance for assessing their environmental fate and removal efficiency in water treatment systems. Here, we developed a chemical structure-based model for predicting kSO4•- using 32 molecular fragment descriptors, as this type of model provides a quick estimate at low computational cost. The model was constructed using the multiple linear regression (MLR) and artificial neural network (ANN) methods. The MLR method yielded an adequate fit for the training set (R² = 0.88, n = 75) and reasonable predictability for the validation set (R² = 0.62, n = 38). In contrast, the ANN method produced more statistical robustness but rather poor predictability (R² = 0.99 for training and R² = 0.42 for validation). The reaction mechanisms of SO4•- reactivity with TrOCs were elucidated. Our results show that the coefficients of functional groups reflect their electron donating/withdrawing characters: electron donating groups typically exhibit positive coefficients, indicating enhanced SO4•- reactivity, while electron withdrawing groups exhibit negative values, indicating reduced reactivity. With its quick and accurate features, we applied this structure-based model to 55 discrete TrOCs culled from the Contaminant Candidate List 4, and quantitatively compared their removal efficiency with SO4•- and OH in the presence of environmental matrices. This high-throughput model helps prioritize TrOCs that are persistent to SO4•--based oxidation technologies at the screening level, and provides diagnostics of SO4•- reaction mechanisms. Copyright © 2017 Elsevier Ltd. All rights reserved.
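
    A stripped-down version of the MLR branch of such a structure-based model is sketched below: regress the logarithm of the rate constant on fragment-count descriptors and report training and validation R². The fragment set and data are synthetic placeholders, not the 32 descriptors or the measured rate constants from the study; only the 75/38 training/validation split mirrors the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in data: 113 compounds x 8 fragment-count descriptors.
# The real model uses 32 fragment descriptors and measured rate constants.
n_compounds, n_fragments = 113, 8
X = rng.integers(0, 4, size=(n_compounds, n_fragments)).astype(float)
true_coefs = rng.normal(0, 0.3, n_fragments)   # positive ~ electron donating, negative ~ withdrawing
log_k = 9.0 + X @ true_coefs + rng.normal(0, 0.15, n_compounds)

X_tr, X_val, y_tr, y_val = train_test_split(X, log_k, test_size=38, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
print(f"R2 (training):   {r2_score(y_tr, mlr.predict(X_tr)):.2f}")
print(f"R2 (validation): {r2_score(y_val, mlr.predict(X_val)):.2f}")
print("Fragment coefficients:", np.round(mlr.coef_, 3))
```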

  4. Evaluation of current prediction models for Lynch syndrome: updating the PREMM5 model to identify PMS2 mutation carriers.

    PubMed

    Goverde, A; Spaander, M C W; Nieboer, D; van den Ouweland, A M W; Dinjens, W N M; Dubbink, H J; Tops, C J; Ten Broeke, S W; Bruno, M J; Hofstra, R M W; Steyerberg, E W; Wagner, A

    2018-07-01

    Until recently, no prediction models for Lynch syndrome (LS) had been validated for PMS2 mutation carriers. We aimed to evaluate MMRpredict and PREMM5 in a clinical cohort and for PMS2 mutation carriers specifically. In a retrospective, clinic-based cohort we calculated predictions for LS according to MMRpredict and PREMM5. The area under the receiver operating characteristic curve (AUC) was compared between MMRpredict and PREMM5 for LS patients in general and for the different LS genes specifically. Of 734 index patients, 83 (11%) were diagnosed with LS: 23 MLH1, 17 MSH2, 31 MSH6 and 12 PMS2 mutation carriers. Both prediction models performed well for MLH1 and MSH2 (AUC 0.80 and 0.83 for PREMM5 and 0.79 for MMRpredict) and fair for MSH6 mutation carriers (0.69 for PREMM5 and 0.66 for MMRpredict). MMRpredict performed fair for PMS2 mutation carriers (AUC 0.72), while PREMM5 failed to discriminate PMS2 mutation carriers from non-mutation carriers (AUC 0.51). The only statistically significant difference between PMS2 mutation carriers and non-mutation carriers was the proximal location of colorectal cancer (77 vs. 28%, p < 0.001). Adding location of colorectal cancer to PREMM5 considerably improved the model's performance for PMS2 mutation carriers (AUC 0.77) and overall (AUC 0.81 vs. 0.72). We validated these results in an external cohort of 376 colorectal cancer patients, including 158 LS patients. MMRpredict and PREMM5 cannot adequately identify PMS2 mutation carriers. Adding location of colorectal cancer to PREMM5 may improve the performance of this model, which should be validated in larger cohorts.

  5. Validating a spatially distributed hydrological model with soil morphology data

    NASA Astrophysics Data System (ADS)

    Doppler, T.; Honti, M.; Zihlmann, U.; Weisskopf, P.; Stamm, C.

    2013-10-01

    Spatially distributed hydrological models are popular tools in hydrology and they are claimed to be useful to support management decisions. Despite the high spatial resolution of the computed variables, calibration and validation is often carried out only on discharge time-series at specific locations due to the lack of spatially distributed reference data. Because of this restriction, the predictive power of these models, with regard to predicted spatial patterns, can usually not be judged. An example of spatial predictions in hydrology is the prediction of saturated areas in agricultural catchments. These areas can be important source areas for the transport of agrochemicals to the stream. We set up a spatially distributed model to predict saturated areas in a 1.2 km² catchment in Switzerland with moderate topography. Around 40% of the catchment area is artificially drained. We measured weather data, discharge and groundwater levels in 11 piezometers for 1.5 yr. For broadening the spatially distributed data sets that can be used for model calibration and validation, we translated soil morphological data available from soil maps into an estimate of the duration of soil saturation in the soil horizons. We used redox-morphology signs for these estimates. This resulted in a data set with high spatial coverage on which the model predictions were validated. In general, these saturation estimates corresponded well to the measured groundwater levels. We worked with a model that would be applicable for management decisions because of its fast calculation speed and rather low data requirements. We simultaneously calibrated the model to the groundwater levels in the piezometers and discharge. The model was able to reproduce the general hydrological behavior of the catchment in terms of discharge and absolute groundwater levels. However, the accuracy of the groundwater level predictions was not high enough to be used for the prediction of saturated areas. The groundwater level dynamics were not adequately reproduced and the predicted spatial patterns of soil saturation did not correspond to the patterns estimated from the soil map. Our results indicate that an accurate prediction of the groundwater level dynamics of the shallow groundwater in our catchment, which is subject to artificial drainage, would require a more complex model. Especially high spatial resolution and very detailed process representations at the boundary between the unsaturated and the saturated zone are expected to be crucial. The data needed for such a detailed model are not generally available. The high computational demand and the complex model setup would require more resources than the direct identification of saturated areas in the field. This severely hampers the practical use of such models despite their usefulness for scientific purposes.

  6. Development and Internal Validation of a Prediction Model to Estimate the Probability of Needing Aggressive Immunosuppressive Therapy With Cytostatics in de Novo Lupus Nephritis Patients.

    PubMed

    Restrepo-Escobar, Mauricio; Granda-Carvajal, Paula Andrea; Jaimes, Fabián

    2017-07-18

    To develop a multivariable clinical prediction model for the requirement of aggressive immunosuppression with cytostatics, based on simple clinical record data and laboratory tests. The model outcome is defined in accordance with the results of the kidney biopsies. Retrospective study conducted with data from patients 16 years and older, with SLE and nephritis of less than 6 months of evolution. An initial bivariate analysis was conducted to select the variables to be included in a multiple logistic regression model. Goodness of fit was evaluated using the Hosmer-Lemeshow (H-L) test and the discrimination capacity of the model by means of the area under the ROC curve (AUC). Data from 242 patients were gathered; of these, 18.2% (n=44) did not need the addition of cytostatics according to the findings of their kidney biopsies. The variables included in the final model were 24-h proteinuria, diastolic blood pressure, creatinine, C3 complement and the interaction of hematuria with leukocyturia in the urinary sediment. The model showed excellent discrimination (AUC=0.929; 95% CI=0.894-0.963) and adequate calibration (H-L, P=.959). In recent-onset LN patients, the decision to use or not to use intensive immunosuppressive therapy could be made based on our prediction model as an alternative to kidney biopsy. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
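
    The abstract describes a standard multivariable logistic regression workflow (variable selection, model fitting, calibration and discrimination checks). A minimal sketch of the fitting and discrimination step on synthetic data is shown below; the variable names mirror those listed above, but the data and coefficients are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# Synthetic stand-in for the clinical dataset (n = 242 in the study).
n = 242
proteinuria_24h = rng.lognormal(mean=0.5, sigma=0.8, size=n)   # g/24 h
diastolic_bp    = rng.normal(80, 12, n)                        # mmHg
creatinine      = rng.lognormal(0.0, 0.4, n)                   # mg/dL
c3_complement   = rng.normal(90, 25, n)                        # mg/dL
hematuria       = rng.integers(0, 2, n)
leukocyturia    = rng.integers(0, 2, n)

X = np.column_stack([proteinuria_24h, diastolic_bp, creatinine, c3_complement,
                     hematuria * leukocyturia])                 # interaction term, as in the abstract
# Synthetic outcome: 1 = aggressive immunosuppression required per biopsy (invented relationship).
logit = -5 + 0.6 * proteinuria_24h + 0.03 * diastolic_bp + 0.8 * creatinine \
        - 0.01 * c3_complement + 0.9 * hematuria * leukocyturia
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)[:, 1]
print(f"Apparent AUC: {roc_auc_score(y, probs):.3f}")   # the study reported AUC = 0.929
```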

  7. Nowcast modeling of Escherichia coli concentrations at multiple urban beaches of southern Lake Michigan

    USGS Publications Warehouse

    Nevers, Meredith B.; Whitman, Richard L.

    2005-01-01

    Predictive modeling for Escherichia coli concentrations at effluent-dominated beaches may be a favorable alternative to current, routinely criticized monitoring standards. The ability to model numerous beaches simultaneously and provide real-time data decreases the cost and effort associated with beach monitoring. In 2004, five Lake Michigan beaches and the nearby Little Calumet River outfall were monitored for E. coli 7 days a week; on nine occasions, samples were analyzed for coliphage to indicate a sewage source. Ambient lake, river, and weather conditions were measured or obtained from independent monitoring sources. Positive coliphage tests indicated that sewage was present in the river and on bathing beaches following heavy rainfall. Models were developed separately for days with prevailing onshore and offshore winds due to the strong influence of wind direction in determining the river's impact on the beaches. Using regression modeling, it was determined that during onshore winds, E. coli could be adequately predicted using wave height, lake chlorophyll and turbidity, and river turbidity (R² = 0.635, N = 94); model performance decreased for offshore winds using wave height, wave period, and precipitation (R² = 0.320, N = 124). Variation was better explained at individual beaches. Overall, the models failed to predict E. coli levels above the EPA closure limit (235 CFU/100 ml) on only five of eleven occasions, indicating that the model is a more reliable alternative to the monitoring approach employed at most recreational beaches.
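
    Once such a regression is fitted, the nowcast step reduces to predicting log-transformed E. coli from the day's conditions and flagging a predicted exceedance of the 235 CFU/100 ml limit. The sketch below illustrates that decision step with a hypothetical, already-fitted onshore-wind model; the coefficients are invented for illustration.

```python
# Hypothetical, already-fitted onshore-wind regression on log10 E. coli
# (coefficients are illustrative; the study fit separate onshore/offshore models).
INTERCEPT = 1.2
COEFS = {"wave_height_m": 0.45, "lake_chlorophyll": 0.02,
         "lake_turbidity": 0.015, "river_turbidity": 0.004}
CLOSURE_LIMIT = 235.0   # EPA single-sample closure limit, CFU/100 ml

def nowcast_ecoli(conditions):
    """Return predicted E. coli (CFU/100 ml) and whether an advisory is indicated."""
    log10_ec = INTERCEPT + sum(COEFS[k] * conditions[k] for k in COEFS)
    predicted = 10 ** log10_ec
    return predicted, predicted > CLOSURE_LIMIT

today = {"wave_height_m": 1.4, "lake_chlorophyll": 8.0,
         "lake_turbidity": 25.0, "river_turbidity": 120.0}
ec, advisory = nowcast_ecoli(today)
print(f"Predicted E. coli: {ec:.0f} CFU/100 ml; advisory: {advisory}")
```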

  8. A motivational model for environmentally responsible behavior.

    PubMed

    Tabernero, Carmen; Hernández, Bernardo

    2012-07-01

    This paper presents a study examining whether self-efficacy and intrinsic motivation are related to environmentally responsible behavior (ERB). The study analysed past environmental behavior, self-regulatory mechanisms (self-efficacy, satisfaction, goals), and intrinsic and extrinsic motivation in relation to ERBs in a sample of 156 university students. Results show that all the motivational variables studied are linked to ERB. The effects of self-efficacy on ERB are mediated by the intrinsic motivation responses of the participants. A theoretical model was created by means of path analysis, revealing the power of motivational variables to predict ERB. Structural equation modeling was used to test and fit the research model. The role of motivational variables is discussed with a view to creating adequate learning contexts and experiences to generate interest and new sensations in which self-efficacy and affective reactions play an important role.

  9. An approximate model for cancellous bone screw fixation.

    PubMed

    Brown, C J; Sinclair, R A; Day, A; Hess, B; Procter, P

    2013-04-01

    This paper presents a finite element (FE) model to identify parameters that affect the performance of an improved cancellous bone screw fixation technique, and hence potentially improve fracture treatment. In cancellous bone of low apparent density, it can be difficult to achieve adequate screw fixation and hence provide stable fracture fixation that enables bone healing. Data from predictive FE models indicate that cements can have a significant potential to improve screw holding power in cancellous bone. These FE models are used to demonstrate the key parameters that determine pull-out strength in a variety of screw, bone and cement set-ups, and to compare the effectiveness of different configurations. The paper concludes that significant advantages, up to an order of magnitude, in screw pull-out strength in cancellous bone might be gained by the appropriate use of a currently approved calcium phosphate cement.

  10. The development of the asparagus miner (Ophiomyia simplex Loew; Diptera: Agromyzidae) in temperate zones: a degree-day model.

    PubMed

    Morrison, William R; Andresen, Jeffrey; Szendrei, Zsofia

    2014-07-01

    The asparagus miner is a putative vector of Fusarium spp., which have been implicated in globally declining asparagus production. Growers currently apply broad-spectrum insecticides against the asparagus miner but lack management guidelines for adequately controlling the pest. Our aims were (1) to determine the lower developmental threshold of the asparagus miner, (2) to develop and validate a degree-day model describing its phenology, and (3) to create a developmental time budget for the asparagus miner to help guide growers' management decisions. We found that the lower developmental threshold for the asparagus miner was 12.1 °C, and that the phenology of the asparagus miner could be reliably predicted over the course of a two-year study. Predictions from the model match well with previously published information on the bionomics of the asparagus miner, but fit better for sampling data collected from the midwestern and eastern United States than for the United Kingdom. The life cycle of the asparagus miner likely requires between 1500 and 2000 degree-days to complete; the longest developmental time requirement was for the pupal stage. This study provides tools for the targeted management of the asparagus miner by offering a degree-day model that may be used to predict its life stages in the north-eastern United States. © 2013 Society of Chemical Industry.
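
    To make the degree-day bookkeeping concrete, the sketch below accumulates degree-days above the reported 12.1 °C base with the simple averaging method and flags when the lower end of the 1500-2000 degree-day generation requirement is reached. The temperature series is synthetic, and the choice of accumulation method is an assumption, since the abstract does not state which method was used.

```python
import numpy as np

BASE_TEMP = 12.1          # lower developmental threshold reported for the asparagus miner (deg C)
GENERATION_DD = 1500.0    # lower end of the 1500-2000 degree-day requirement noted above

def daily_degree_days(t_min, t_max, base=BASE_TEMP):
    """Simple averaging method: DD = max(0, (Tmin + Tmax)/2 - base)."""
    return max(0.0, (t_min + t_max) / 2.0 - base)

# Synthetic season of daily minima/maxima (stand-in for weather-station data).
rng = np.random.default_rng(3)
days = np.arange(1, 184)                               # roughly April-September
t_mean = 12 + 14 * np.sin(np.pi * days / 183)          # seasonal cycle of daily mean temperature
t_min = t_mean - 5 + rng.normal(0, 1.5, days.size)
t_max = t_mean + 5 + rng.normal(0, 1.5, days.size)

cumulative = 0.0
for day, lo, hi in zip(days, t_min, t_max):
    cumulative += daily_degree_days(lo, hi)
    if cumulative >= GENERATION_DD:
        print(f"~{GENERATION_DD:.0f} degree-days reached on day {day} of the season")
        break
else:
    print(f"Season ended with {cumulative:.0f} degree-days accumulated")
```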

  11. Perceptual inference.

    PubMed

    Aggelopoulos, Nikolaos C

    2015-08-01

    Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli enabling long term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the response invariance in the responses of some neurons to variations in the stimulus, as well as from situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, in using sensory feedback to correct internal models, producing predictions of the outside world on the basis of past experience. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Estimating non-isothermal bacterial growth in foods from isothermal experimental data.

    PubMed

    Corradini, M G; Peleg, M

    2005-01-01

    To develop a mathematical method to estimate non-isothermal microbial growth curves in foods from experiments performed under isothermal conditions and demonstrate the method's applicability with published growth data. Published isothermal growth curves of Pseudomonas spp. in refrigerated fish at 0-8 degrees C and Escherichia coli 1952 in a nutritional broth at 27.6-36 degrees C were fitted with two different three-parameter 'primary models' and the temperature dependence of their parameters was fitted by ad hoc empirical 'secondary models'. These were used to generate non-isothermal growth curves by solving, numerically, a differential equation derived on the premise that the momentary non-isothermal growth rate is the isothermal rate at the momentary temperature, at a time that corresponds to the momentary growth level of the population. The predicted non-isothermal growth curves were in agreement with the reported experimental ones and, as expected, the quality of the predictions did not depend on the 'primary model' chosen for the calculation. A common type of sigmoid growth curve can be adequately described by three-parameter 'primary models'. At least in the two systems examined, these could be used to predict growth patterns under a variety of continuous and discontinuous non-isothermal temperature profiles. The described mathematical method whenever validated experimentally will enable the simulation of the microbial quality of stored and transported foods under a large variety of existing or contemplated commercial temperature histories.
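
    The described method can be sketched numerically as follows: pick a three-parameter isothermal 'primary model', let its parameters depend on temperature through 'secondary models', and integrate dY/dt as the isothermal rate at the momentary temperature, evaluated at the time t* where that isothermal curve has reached the current growth level. The shifted-logistic primary model and the secondary-model expressions below are invented placeholders, not the fitted Pseudomonas or E. coli parameters from the study.

```python
import numpy as np

# Primary model: shifted logistic  Y(t) = a/(1+exp(k(tc - t))) - a/(1+exp(k*tc)), so Y(0) = 0.
# Secondary models a(T), k(T), tc(T) are invented placeholders for illustration.
def a_T(T):  return 6.5                      # asymptotic growth (log units), ~constant here
def k_T(T):  return 0.015 * (T - 20.0)       # rate parameter (1/h)
def tc_T(T): return 400.0 / (T - 20.0)       # inflection time (h)

def dYdt_iso(t, T):
    """Isothermal growth rate at temperature T and isothermal time t."""
    a, k, tc = a_T(T), k_T(T), tc_T(T)
    e = np.exp(k * (tc - t))
    return a * k * e / (1 + e) ** 2

def t_star(Y, T):
    """Invert the isothermal model: time at which the curve at temperature T reaches level Y."""
    a, k, tc = a_T(T), k_T(T), tc_T(T)
    c = a / (1 + np.exp(k * tc))
    return tc - np.log(a / (Y + c) - 1.0) / k

def temperature(t):                           # example storage profile: slow warming 25 -> 33 deg C
    return 25.0 + 8.0 * min(t / 48.0, 1.0)

# Euler integration of dY/dt = isothermal rate at T(t), evaluated at t*(Y, T(t)).
dt, t_end = 0.1, 72.0
Y, t = 0.0, 0.0
while t < t_end:
    T = temperature(t)
    Y += dYdt_iso(t_star(Y, T), T) * dt
    t += dt

print(f"Predicted growth after {t_end:.0f} h of non-isothermal storage: {Y:.2f} log units")
```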

  13. Validation of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm

    PubMed Central

    Lawson, Sara Nicole; Zaluski, Neal; Petrie, Amanda; Arnold, Cathy; Basran, Jenny

    2013-01-01

    ABSTRACT Purpose: To investigate the concurrent validity of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm (FSRA). Method: A total of 29 older adults (mean age 77.7 [SD 4.0] y) residing in an independent-living seniors' complex who met inclusion criteria completed a demographic questionnaire and the components of the FSRA and Berg Balance Scale (BBS). The FSRA consists of the Elderly Fall Screening Test (EFST) and the Multi-factor Falls Questionnaire (MFQ); it is designed to categorize individuals into low, moderate, or high fall-risk categories to determine appropriate management pathways. A predictive model for probability of fall risk, based on previous research, was used to determine the concurrent validity of the FSRA. Results: The FSRA placed 79% of participants into the low-risk category, whereas the predictive model found the probability of fall risk to range from 0.04 to 0.74, with a mean of 0.35 (SD 0.25). No statistically significant correlation was found between the FSRA and the predictive model for probability of fall risk (Spearman's ρ=0.35, p=0.06). Conclusion: The FSRA lacks concurrent validity relative to a previously established model of fall risk and appears to over-categorize individuals into the low-risk group. Further research on the FSRA as an adequate tool to screen community-dwelling older adults for fall risk is recommended. PMID:24381379

  14. Increased sediment oxygen flux in lakes and reservoirs: The impact of hypolimnetic oxygenation

    NASA Astrophysics Data System (ADS)

    Bierlein, Kevin A.; Rezvani, Maryam; Socolofsky, Scott A.; Bryant, Lee D.; Wüest, Alfred; Little, John C.

    2017-06-01

    Hypolimnetic oxygenation is an increasingly common lake management strategy for mitigating hypoxia/anoxia and associated deleterious effects on water quality. A common effect of oxygenation is increased oxygen consumption in the hypolimnion and predicting the magnitude of this increase is the crux of effective oxygenation system design. Simultaneous measurements of sediment oxygen flux (JO2) and turbulence in the bottom boundary layer of two oxygenated lakes were used to investigate the impact of oxygenation on JO2. Oxygenation increased JO2 in both lakes by increasing the bulk oxygen concentration, which in turn steepens the diffusive gradient across the diffusive boundary layer. At high flow rates, the diffusive boundary layer thickness decreased as well. A transect along one of the lakes showed JO2 to be spatially quite variable, with near-field and far-field JO2 differing by a factor of 4. Using these in situ measurements, physical models of interfacial flux were compared to microprofile-derived JO2 to determine which models adequately predict JO2 in oxygenated lakes. Models based on friction velocity, turbulence dissipation rate, and the integral scale of turbulence agreed with microprofile-derived JO2 in both lakes. These models could potentially be used to predict oxygenation-induced oxygen flux and improve oxygenation system design methods for a broad range of reservoir systems.
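
    One of the turbulence-based flux models referred to above can be sketched as Fick's first law across a diffusive boundary layer whose thickness scales with the Batchelor length, δ ≈ (νD²/ε)^(1/4). The proportionality constant and the input values below are assumptions for illustration, not the calibrated relationships from the study.

```python
# Sketch of a dissipation-rate-based sediment oxygen flux estimate:
# J = D * (C_bulk - C_sed) / delta, with delta scaled on the Batchelor length.
# The scaling constant and all input values are illustrative assumptions.

D_O2 = 2.0e-9         # molecular diffusivity of O2 in water (m2/s)
NU   = 1.0e-6         # kinematic viscosity of water (m2/s)
BETA = 2.0            # assumed constant: delta = BETA * (nu * D^2 / eps)^(1/4)

def sediment_o2_flux(c_bulk, c_sed, epsilon):
    """Oxygen flux (mg m-2 d-1) into the sediment for a given dissipation rate epsilon (W/kg)."""
    delta = BETA * (NU * D_O2 ** 2 / epsilon) ** 0.25   # diffusive boundary layer thickness (m)
    flux = D_O2 * (c_bulk - c_sed) / delta              # mg/m2/s when concentrations are in mg/m3
    return flux * 86400.0, delta

# Example: oxygenation raises bulk O2 from 2 to 8 mg/L (i.e. 2000 -> 8000 mg/m3),
# steepening the diffusive gradient and increasing the flux, as described above.
for c_bulk_mg_l in (2.0, 8.0):
    j, delta = sediment_o2_flux(c_bulk=c_bulk_mg_l * 1000.0, c_sed=0.0, epsilon=1.0e-8)
    print(f"bulk O2 = {c_bulk_mg_l} mg/L -> DBL = {delta*1000:.2f} mm, J = {j:.0f} mg m-2 d-1")
```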

  15. Do Modern Spectacles Endanger Surgeons?

    PubMed Central

    Chong, Simon J.; Smith, Charlotte; Bialostocki, Adam; McEwan, Christopher N.

    2007-01-01

    Background: Despite documented cases of infectious disease transmission to medical staff via conjunctival contamination and widespread recommendation of protective eyewear use during surgical procedures, a large number of surgeons rely on their prescription spectacles as sole eye protection. Modern fashion spectacles, being of increasingly slim design, may no longer be adequate in this role. Methods: A survey was conducted among the surgeons at Waikato Hospital from December 7, 2004 to February 1, 2005, to assess current operating theater eyewear practices and attitudes. Those who wore prescription spectacles were asked to assume a standardized “operating position” from which anatomic measurements were obtained. These data were mathematically analyzed to determine the degree of palpebral fissure protection conferred by their spectacles. Results: Of 71 surgical practitioners surveyed, 45.1% required prescription lenses for operating, the mean spectacle age being 2.45 years; 84.5% had experienced prior periorbital blood splashes; 2.8% had previously contracted an illness attributed to such an event; 78.8% of participants routinely used eye protection, but of the 27 requiring spectacles, 68.0% used these as their sole eye protection. Chief complaints about safety glasses and facial shields were of fogging, poor comfort, inability to wear spectacles underneath, and unavailability. Our model predicted that 100%, 92.6%, 77.8%, and 0% of our population were protected by their spectacles laterally, medially, inferiorly, and superiorly, respectively. Conclusions: Prescription spectacles of contemporary styling do not provide adequate protection against conjunctival blood splash injuries. Our model predicts the design adequacy of currently available purpose-designed protective eyewear, which should be used routinely. PMID:17435558

  16. Do modern spectacles endanger surgeons? The Waikato Eye Protection Study.

    PubMed

    Chong, Simon J; Smith, Charlotte; Bialostocki, Adam; McEwan, Christopher N

    2007-03-01

    Despite documented cases of infectious disease transmission to medical staff via conjunctival contamination and widespread recommendation of protective eyewear use during surgical procedures, a large number of surgeons rely on their prescription spectacles as sole eye protection. Modern fashion spectacles, being of increasingly slim design, may no longer be adequate in this role. A survey was conducted among the surgeons at Waikato Hospital from December 7, 2004 to February 1, 2005, to assess current operating theater eyewear practices and attitudes. Those who wore prescription spectacles were asked to assume a standardized "operating position" from which anatomic measurements were obtained. These data were mathematically analyzed to determine the degree of palpebral fissure protection conferred by their spectacles. Of 71 surgical practitioners surveyed, 45.1% required prescription lenses for operating, the mean spectacle age being 2.45 years; 84.5% had experienced prior periorbital blood splashes; 2.8% had previously contracted an illness attributed to such an event; 78.8% of participants routinely used eye protection, but of the 27 requiring spectacles, 68.0% used these as their sole eye protection. Chief complaints about safety glasses and facial shields were of fogging, poor comfort, inability to wear spectacles underneath, and unavailability. Our model predicted that 100%, 92.6%, 77.8%, and 0% of our population were protected by their spectacles laterally, medially, inferiorly, and superiorly, respectively. Prescription spectacles of contemporary styling do not provide adequate protection against conjunctival blood splash injuries. Our model predicts the design adequacy of currently available purpose-designed protective eyewear, which should be used routinely.

  17. Advances in phakic intraocular lenses: indications, efficacy, safety, and new designs.

    PubMed

    Alio, Jorge L

    2004-08-01

    The recent evolution of phakic intraocular lenses (PIOLs) has made this refractive surgical technique safer, very predictable, and effective. For these reasons, PIOLs have been expanding the horizon of their indications. The aim of this review is to update the reader on the recent advances reported on the topic during the year 2003. The most recent progress has been made towards decreasing the incision size down to 3 mm or less for all PIOL models, avoiding pupil ovalling in angle-supported designs with new biomaterials or exchangeable haptics, and decreasing the incidence of cataract induction in posterior chamber models with modified designs and better sizing. High-order aberrations and the quality of vision are improved with PIOLs. The main limitation for the further development of PIOLs is the lack of adequate diagnostic imaging techniques to perform a precise preoperative study of the anterior segment anatomy. Emerging diagnostic technologies based on very high frequency (100 MHz) ultrasound and optical coherence tomography seem to have a most important role in the future development of PIOLs, defining preoperatively the most adequate anatomic conditions for each design. PIOLs today offer an excellent alternative for the correction of high and moderate myopia, hyperopia, and astigmatism. Emerging indications, still under investigation, include presbyopia and pediatric anisometropic amblyopia. Due to their advantages for quality of vision and the increased knowledge of their safety, as well as the evidence of their predictability, PIOLs are expected to see much wider clinical use as a refractive surgical technique in the coming years.

  18. Characterization of structural connections using free and forced response test data

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Huckelbridge, Arthur A.

    1989-01-01

    The accurate prediction of system dynamic response has often been limited by deficiencies in existing capabilities to adequately characterize connections. Connections between structural components are often mechanically complex and difficult to model analytically with accuracy. Improved analytical models for connections are needed to improve system dynamic predictions. A procedure for identifying physical connection properties from free and forced response test data is developed and then verified using a system having both a linear and a nonlinear connection. Connection properties are computed in terms of physical parameters so that the physical characteristics of the connections can be better understood, in addition to providing improved input for the system model. The identification procedure is applicable to multi-degree-of-freedom systems and does not require that the test data be measured directly at the connection locations.

  19. Cognitive task load in a naval ship control centre: from identification to prediction.

    PubMed

    Grootjen, M; Neerincx, M A; Veltman, J A

    Deployment of information and communication technology will lead to further automation of control centre tasks and an increasing amount of information to be processed. A method for establishing adequate levels of cognitive task load for the operators in such complex environments has been developed. It is based on a model distinguishing three load factors: time occupied, task-set switching, and level of information processing. Application of the method resulted in eight scenarios for eight extremes of task load (i.e. low and high values for each load factor). These scenarios were performed by 13 teams in a high-fidelity control centre simulator of the Royal Netherlands Navy. The results show that the method provides good prediction of the task load that will actually appear in the simulator. The model allowed identification of under- and overload situations showing negative effects on operator performance corresponding to controlled experiments in a less realistic task environment. Tools proposed to keep the operator at an optimum task load are (adaptive) task allocation and interface support.

  20. Herpes Zoster Risk Reduction through Exposure to Chickenpox Patients: A Systematic Multidisciplinary Review

    PubMed Central

    Ogunjimi, Benson; Van Damme, Pierre; Beutels, Philippe

    2013-01-01

    Varicella-zoster virus (VZV) causes chickenpox and may subsequently reactivate to cause herpes zoster later in life. The exogenous boosting hypothesis states that re-exposure to circulating VZV can inhibit VZV reactivation and consequently also herpes zoster in VZV-immune individuals. Using this hypothesis, mathematical models predicted widespread chickenpox vaccination to increase herpes zoster incidence over more than 30 years. Some countries have postponed universal chickenpox vaccination, at least partially based on this prediction. After a systematic search and selection procedure, we analyzed different types of exogenous boosting studies. We graded 13 observational studies on herpes zoster incidence after widespread chickenpox vaccination, 4 longitudinal studies on VZV immunity after re-exposure, 9 epidemiological risk factor studies, 7 mathematical modeling studies as well as 7 other studies. We conclude that exogenous boosting exists, although not for all persons, nor in all situations. Its magnitude is yet to be determined adequately in any study field. PMID:23805224
