Olivares-Morales, Andrés; Ghosh, Avijit; Aarons, Leon; Rostami-Hodjegan, Amin
2016-11-01
A minimal Segmented Transit and Absorption (mSAT) model has recently been proposed and combined with intrinsic intestinal effective permeability (P_eff,int) to predict the regional gastrointestinal (GI) absorption (f_abs) of several drugs. Herein, this model was extended and applied to the prediction of the oral bioavailability and pharmacokinetics of oxybutynin and its enantiomers, to provide a mechanistic explanation for the higher relative bioavailability observed for oxybutynin's modified-release OROS® formulation compared with its immediate-release (IR) counterpart. The extension of the model involved the incorporation of mechanistic equations for the prediction of release, transit, dissolution, permeation and first-pass metabolism. The predicted pharmacokinetics of the oxybutynin enantiomers after oral administration of both the IR and OROS® formulations were in close agreement with the observed data. The predicted absolute bioavailability for the IR formulation was within 5% of the observed value, and the model adequately predicted the higher relative bioavailability of the OROS® formulation vs. the IR counterpart. The model predictions indicate that the higher bioavailability of the OROS® formulation was mainly attributable to differences in intestinal availability (F_G) rather than to a higher colonic f_abs, confirming previous hypotheses. The predicted f_abs was almost 70% lower for the OROS® formulation than for the IR formulation, whereas F_G was almost eightfold higher. These results provide further support for the hypothesis that an increased F_G is the main factor responsible for the higher bioavailability of oxybutynin's OROS® formulation vs. the IR formulation.
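The decomposition the abstract relies on can be written as the standard product of the fraction absorbed, intestinal availability, and hepatic availability. A minimal Python sketch (all numerical values below are hypothetical, chosen only to mirror the reported ~70% lower f_abs and ~8-fold higher F_G of the OROS® formulation):

```python
def oral_bioavailability(f_abs, F_G, F_H):
    """Overall oral bioavailability as the product of the fraction
    absorbed (f_abs), intestinal availability (F_G), and hepatic
    availability (F_H)."""
    return f_abs * F_G * F_H

# Hypothetical illustration: the OROS formulation trades a lower
# fraction absorbed for a much higher intestinal availability.
F_ir   = oral_bioavailability(f_abs=0.9, F_G=0.05, F_H=0.7)
F_oros = oral_bioavailability(f_abs=0.3, F_G=0.40, F_H=0.7)
relative_bioavailability = F_oros / F_ir
```

With these stand-in numbers the OROS® formulation comes out roughly 2.7-fold more bioavailable despite the lower fraction absorbed, illustrating how a gain in F_G can outweigh a loss in f_abs.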
NASA Technical Reports Server (NTRS)
Dhanasekharan, M.; Huang, H.; Kokini, J. L.; Janes, H. W. (Principal Investigator)
1999-01-01
The measured rheological behavior of hard wheat flour dough was predicted using three nonlinear differential viscoelastic models. The Phan-Thien Tanner model gave a good zero-shear viscosity prediction, but overpredicted the shear viscosity at higher shear rates as well as the transient and extensional properties. The Giesekus-Leonov model gave predictions similar to the Phan-Thien Tanner model, but its extensional viscosity prediction showed extension thickening. Using high values of the mobility factor, extension-thinning behavior was observed, but the predictions were not satisfactory. The White-Metzner model gave good predictions of the steady shear viscosity and the first normal stress coefficient, but it was unable to predict the uniaxial extensional viscosity as it exhibited asymptotic behavior over the tested range of extension rates. Compared with the Phan-Thien Tanner and Giesekus-Leonov models, it also predicted the transient shear properties with moderate accuracy during the transient phase and very well at longer times. None of the models predicted all observed data consistently well. Overall, the White-Metzner model appeared to make the best predictions of all the observed data.
Uzun, Harun; Yıldız, Zeynep; Goldfarb, Jillian L; Ceylan, Selim
2017-06-01
As biomass becomes more integrated into our energy feedstocks, the ability to predict its combustion enthalpy from routine data such as carbon, ash, and moisture content enables rapid decisions about utilization. The present work constructs a novel artificial neural network model with a 3-3-1 tangent-sigmoid architecture to predict biomasses' higher heating values from their proximate analyses alone, requiring minimal specificity compared with models based on elemental composition. The model presented has a considerably higher correlation coefficient (0.963) and lower root mean square (0.375), mean absolute (0.328), and mean bias (0.010) errors than other models in the literature which, at least when applied to the present data set, tend to under-predict the combustion enthalpy.
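The 3-3-1 tangent-sigmoid architecture described above is small enough to write out directly. A hedged Python sketch of the forward pass (the weights shown are arbitrary placeholders, not the trained values from the paper):

```python
import math

def ann_hhv(proximate, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a 3-3-1 feed-forward network: three
    proximate-analysis inputs (e.g. volatile matter, fixed carbon,
    ash), three tangent-sigmoid hidden neurons, one linear output
    (the predicted higher heating value)."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, proximate)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Hypothetical (untrained) weights, purely to show the shapes involved:
w_hidden = [[0.1, -0.2, 0.05], [0.3, 0.1, -0.1], [-0.2, 0.2, 0.1]]
b_hidden = [0.0, 0.1, -0.1]
w_out, b_out = [1.5, -0.8, 0.6], 18.0
hhv = ann_hhv([45.0, 48.0, 7.0], w_hidden, b_hidden, w_out, b_out)
```

In practice the weights would be fitted by backpropagation against measured heating values; only the layer sizes here follow the abstract.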
NASA Astrophysics Data System (ADS)
Ajaz, M.; Ullah, S.; Ali, Y.; Younis, H.
2018-02-01
In this paper, comprehensive results on the double differential yields of π± and K± mesons, protons and antiprotons as a function of laboratory momentum are reported. These hadrons are produced in proton-carbon interactions at 60 GeV/c. The EPOS 1.99, EPOS-LHC and QGSJETII-04 models are used to perform the simulations. Comparing the predictions of these models shows that the QGSJETII-04 model predicts higher yields of all the hadrons in most cases at the peak of the distribution; in this interval, EPOS 1.99 and EPOS-LHC produce similar results. In most cases at higher hadron momenta, all three models are in good agreement. For protons, all models agree well. EPOS-LHC gives a higher yield of antiprotons at high momentum values compared with the other two models. EPOS-LHC gives a higher prediction at the peak value for π+ mesons and protons at the higher polar angle intervals of 100 < θ < 420 mrad and 100 < θ < 360 mrad, respectively, and EPOS 1.99 gives a higher prediction at the peak value for π- mesons for 140 < θ < 420 mrad. The model predictions, except for antiprotons, are compared with the data obtained by the NA61/SHINE experiment in proton-carbon collisions at 31 GeV/c, which clearly shows that the shapes of the distributions in the models are similar to those of the data, but the yield in the data is lower because of the lower beam energy.
Initial comparison of single cylinder Stirling engine computer model predictions with test results
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.; Thieme, L. G.; Miao, D.
1979-01-01
A NASA-developed digital computer code for a Stirling engine, modelling the performance of a single-cylinder rhombic-drive ground performance unit (GPU), is presented and its predictions are compared with test results. The GPU engine incorporates eight regenerator/cooler units, and the engine working space is modelled by thirteen control volumes. The model calculates indicated power and efficiency for a given engine speed, mean pressure, heater and expansion-space metal temperatures, and cooler water inlet temperature and flow rate. Comparison of predicted and observed powers implies that the reference pressure-drop calculations underestimate the actual pressure drop, possibly due to oil contamination in the regenerator/cooler units, methane contamination in the working gas, or underestimation of mechanical losses. For a working gas of hydrogen, the predicted values of brake power are 0 to 6% higher than the experimental values and brake efficiency is 6 to 16% higher, while for helium the predicted brake power and efficiency are 2 to 15% higher than the experimental values.
Soyiri, Ireneous N; Reidpath, Daniel D
2013-01-01
Forecasting higher than expected numbers of health events provides potentially valuable insights in its own right, and may contribute to health services management and syndromic surveillance. This study investigates the use of quantile regression to predict higher than expected respiratory deaths. Data from 70,830 deaths occurring in New York were used. Temporal, weather and air quality measures were fitted using quantile regression at the 90th percentile with half the data (in-sample). Four QR models were fitted: an unconditional model predicting the 90th percentile of deaths (Model 1), a seasonal/temporal model (Model 2), a seasonal, temporal model plus lags of weather and air quality (Model 3), and a seasonal, temporal model with 7-day moving averages of weather and air quality (Model 4). Models were cross-validated with the out-of-sample data. Performance was measured as the proportionate reduction in the weighted sum of absolute deviations achieved by a conditional model over the unconditional model, i.e., the coefficient of determination (R1). The coefficient of determination showed an improvement over the unconditional model of between 0.16 and 0.19. The greatest improvement in predictive and forecasting accuracy of daily mortality was associated with the inclusion of seasonal and temporal predictors (Model 2). No gains were made in the predictive models with the addition of weather and air quality predictors (Models 3 and 4). However, forecasting models that included weather and air quality predictors performed slightly better than the seasonal and temporal model alone (i.e., Model 3 > Model 4 > Model 2). This study provides a new approach to predicting higher than expected numbers of respiratory-related deaths. The approach, while promising, has limitations and should be treated at this stage as a proof of concept.
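Quantile regression at the 90th percentile minimizes an asymmetric check (pinball) loss rather than squared error, and the R1 statistic reported above compares conditional and unconditional models on exactly that loss. A minimal Python sketch (not the authors' code):

```python
def pinball(y, q, tau=0.9):
    """Asymmetric check (pinball) loss minimized by quantile
    regression: exceedances of the tau-quantile prediction cost tau,
    shortfalls cost (1 - tau)."""
    return (tau if y >= q else tau - 1.0) * (y - q)

def r1(y, cond, uncond, tau=0.9):
    """Koenker-Machado R1: proportionate reduction in the weighted sum
    of absolute deviations of a conditional model's predictions over
    the unconditional (constant-quantile) model."""
    num = sum(pinball(yi, qi, tau) for yi, qi in zip(y, cond))
    den = sum(pinball(yi, q0, tau) for yi, q0 in zip(y, uncond))
    return 1.0 - num / den
```

A perfect conditional model yields R1 = 1, while a conditional model no better than the unconditional 90th percentile yields R1 = 0; the study's reported values of 0.16-0.19 sit between these extremes.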
Human demography and reserve size predict wildlife extinction in West Africa.
Brashares, J S; Arcese, P; Sam, M K
2001-01-01
Species-area models have become the primary tool used to predict baseline extinction rates for species in isolated habitats, and have influenced conservation and land-use planning worldwide. In particular, these models have been used to predict extinction rates following the loss or fragmentation of natural habitats in the absence of direct human influence on species persistence. Thus, where direct human influences, such as hunting, put added pressure on species in remnant habitat patches, we should expect to observe extinction rates higher than those predicted by simple species-area models. Here, we show that extinction rates for 41 species of large mammals in six nature reserves in West Africa are 14-307 times higher than those predicted by models based on reserve size alone. Human population and reserve size accounted for 98% of the observed variation in extinction rates between reserves. Extinction occurred at higher rates than predicted by species-area models for carnivores, primates and ungulates, and at the highest rates overall near reserve borders. Our results indicate that, where the harvest of wildlife is common, conservation plans should focus on increasing the size of reserves and reducing the rate of hunting. PMID:11747566
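The species-area baseline the authors compare against follows from the relation S = cA^z. A small Python sketch (z = 0.25 is a commonly used exponent, assumed here only for illustration):

```python
def persisting_fraction(area_ratio, z=0.25):
    """Species-area relation S = c * A**z: shrinking habitat to a
    fraction `area_ratio` of the original area predicts that
    area_ratio**z of the species persist."""
    return area_ratio ** z

def predicted_extinctions(n_species, area_ratio, z=0.25):
    """Baseline number of extinctions the species-area model alone
    would predict for an isolated habitat patch."""
    return n_species * (1.0 - persisting_fraction(area_ratio, z))
```

Observed extinction rates 14-307 times this baseline, as reported above, are the signature of pressures (such as hunting) that the area-only model omits.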
Khazraee, S Hadi; Johnson, Valen; Lord, Dominique
2018-08-01
The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., whether an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in that order, indicating a clear link between the models' predictions and the shapes of their mixing distributions (i.e., gamma for PG, lognormal for PLN, and inverse gamma for PIGam). The thicker tails of the PIGam and PLN models (in that order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models: models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients).
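The link drawn above between a model's predictions and the shape of its mixing distribution can be illustrated by sampling unit-mean gamma, lognormal and inverse-gamma variates and comparing their upper tails. A Python sketch with arbitrary illustrative shape parameters (not values fitted to crash data):

```python
import random

def tail_quantile(samples, p=0.999):
    """Empirical upper-tail quantile of a list of samples."""
    s = sorted(samples)
    return s[int(p * (len(s) - 1))]

random.seed(42)
n = 20000
# Three unit-mean mixing distributions with illustrative shapes:
gamma_s    = [random.gammavariate(2.0, 0.5) for _ in range(n)]     # Gamma(2, scale 0.5)
lognorm_s  = [random.lognormvariate(-0.32, 0.8) for _ in range(n)]  # mean ~ exp(-0.32 + 0.32)
invgamma_s = [2.0 / random.gammavariate(3.0, 1.0) for _ in range(n)]  # InvGamma(3, 2)

tails = {name: tail_quantile(s) for name, s in
         [("gamma", gamma_s), ("lognormal", lognorm_s), ("inverse gamma", invgamma_s)]}
```

Despite identical means, the inverse-gamma samples show the heaviest upper tail, consistent with the PIGam model producing the highest predictions on over-dispersed data.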
CFD Modeling of Launch Vehicle Aerodynamic Heating
NASA Technical Reports Server (NTRS)
Tashakkor, Scott B.; Canabal, Francisco; Mishtawy, Jason E.
2011-01-01
The Loci-CHEM 3.2 Computational Fluid Dynamics (CFD) code is being used to predict Ares-I launch vehicle aerodynamic heating. CFD has been used to predict both ascent and stage-reentry environments and has been validated against wind tunnel tests and the Ares I-X developmental flight test. Most of the CFD predictions agreed with measurements. In regions where mismatches occurred, the CFD predictions tended to be higher than the measured data. These over-predictions usually occurred in complex regions, where the CFD models (mainly turbulence) contain less accurate approximations. In some instances, the errors causing the over-predictions affected locations downstream even though the physics were still being modeled properly by CHEM, as is easily seen when comparing to the 103-AH data. In areas where predictions were low, higher grid resolution often brought the results closer to the data. Other disagreements are attributed to Ares I-X hardware not being present in the grid, as a result of computational resource limitations. The satisfactory predictions from CHEM provide confidence that future designs and predictions from the CFD code will provide an accurate approximation of the correct values for use in design and other applications.
Kesmarky, Klara; Delhumeau, Cecile; Zenobi, Marie; Walder, Bernhard
2017-07-15
The Glasgow Coma Scale (GCS) and the Abbreviated Injury Score of the head region (HAIS) are validated prognostic factors in traumatic brain injury (TBI). The aim of this study was to compare, for short-term mortality, the prognostic performance of an alternative predictive model including motor GCS, pupillary reactivity, age, HAIS, and presence of multi-trauma with that of a reference predictive model including motor GCS, pupillary reactivity, and age (the IMPACT core model). A secondary analysis of a prospective epidemiological cohort study in Switzerland including patients after severe TBI (HAIS >3), with death at 14 days as the outcome, was performed. Performance of prediction, accuracy of discrimination (area under the receiver operating characteristic curve [AUROC]), calibration, and validity of the two predictive models were investigated. The cohort included 808 patients (median age, 56; interquartile range, 33-71), with a median GCS at hospital admission of 3 (3-14), abnormal pupillary reactivity in 29%, and a death rate of 29.7% at 14 days. The alternative predictive model had a higher accuracy of discrimination for death at 14 days than the reference predictive model (AUROC 0.852, 95% confidence interval [CI] 0.824-0.880 vs. AUROC 0.826, 95% CI 0.795-0.857; p < 0.0001). The alternative predictive model showed calibration equivalent to that of the reference model (Hosmer-Lemeshow χ² 8.52, p = 0.345 vs. χ² 8.66, p = 0.372). The optimism-corrected AUROC for the alternative predictive model was 0.845. After severe TBI, a higher performance of prediction for short-term mortality was observed with the alternative predictive model compared with the reference predictive model.
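The discrimination statistic compared above, the AUROC, can be computed directly via its Mann-Whitney interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case. A small Python sketch (illustrative only, not study data):

```python
def auroc(labels, scores):
    """AUROC as the probability that a randomly chosen positive
    (label 1) outranks a randomly chosen negative (label 0);
    ties count as half (Mann-Whitney formulation)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which frames the reported 0.826 vs. 0.852 comparison.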
Heil, Lieke; Kwisthout, Johan; van Pelt, Stan; van Rooij, Iris; Bekkering, Harold
2018-01-01
Evidence is accumulating that our brains process incoming information using top-down predictions. If lower level representations are correctly predicted by higher level representations, this enhances processing. However, if they are incorrectly predicted, additional processing is required at higher levels to "explain away" prediction errors. Here, we explored the potential nature of the models generating such predictions. More specifically, we investigated whether a predictive processing model with a hierarchical structure and causal relations between its levels is able to account for the processing of agent-caused events. In Experiment 1, participants watched animated movies of "experienced" and "novice" bowlers. The results are in line with the idea that prediction errors at a lower level of the hierarchy (i.e., the outcome of how many pins fell down) slow down reporting of information at a higher level (i.e., which agent was throwing the ball). Experiments 2 and 3 suggest that this effect is specific to situations in which the predictor is causally related to the outcome. Overall, the study supports the idea that a hierarchical predictive processing model can account for the processing of observed action outcomes and that the predictions involved are specific to cases where action outcomes can be predicted based on causal knowledge.
Comparisons between thermodynamic and one-dimensional combustion models of spark-ignition engines
NASA Technical Reports Server (NTRS)
Ramos, J. I.
1986-01-01
Results from a one-dimensional combustion model employing a constant eddy diffusivity and a one-step chemical reaction are compared with those of one-zone and two-zone thermodynamic models to study the flame propagation in a spark-ignition engine. One-dimensional model predictions are found to be very sensitive to the eddy diffusivity and reaction rate data. The average mixing temperature found using the one-zone thermodynamic model is higher than those of the two-zone and one-dimensional models during the compression stroke, and that of the one-dimensional model is higher than those predicted by both thermodynamic models during the expansion stroke. The one-dimensional model is shown to predict an accelerating flame even when the front approaches the cold cylinder wall.
Soehle, Martin; Wolf, Christina F; Priston, Melanie J; Neuloh, Georg; Bien, Christian G; Hoeft, Andreas; Ellerkmann, Richard K
2015-08-01
Anaesthesia for awake craniotomy aims for an unconscious patient at the beginning and end of surgery but a rapidly awakening and responsive patient during the awake period. An accurate pharmacokinetic/pharmacodynamic (PK/PD) model for propofol is therefore required to tailor the depth of anaesthesia. The objective was to compare the predictive performances of the Marsh and the Schnider PK/PD models during awake craniotomy, in a prospective observational study at a single university hospital from February 2009 to May 2010. Twelve patients underwent elective awake craniotomy for resection of brain tumour or epileptogenic areas. Arterial blood samples were drawn at intervals and the propofol plasma concentration was determined. The prediction error, bias [median prediction error (MDPE)] and inaccuracy [median absolute prediction error (MDAPE)] of the Marsh and the Schnider models were calculated. The secondary endpoint was the prediction probability (P_K), by which changes in the propofol effect-site concentration (as derived from simultaneous PK/PD modelling) predicted changes in anaesthetic depth (measured by the bispectral index). The Marsh model was associated with a significantly (P = 0.05) higher inaccuracy (MDAPE 28.9 ± 12.0%) than the Schnider model (MDAPE 21.5 ± 7.7%) and tended towards a larger bias (MDPE Marsh -11.7 ± 14.3%, MDPE Schnider -5.4 ± 20.7%, P = 0.09). MDAPE was outside accepted limits in six (Marsh model) and two (Schnider model) of the 12 patients. The prediction probability was comparable between the Marsh (P_K 0.798 ± 0.056) and the Schnider model (P_K 0.787 ± 0.055), but after adjusting the models to each individual patient, the Schnider model achieved significantly higher prediction probabilities (P_K 0.807 ± 0.056, P = 0.05). When using the 'asleep-awake-asleep' anaesthetic technique during awake craniotomy, we advocate using the PK/PD model proposed by Schnider. Due to considerable interindividual variation, additional monitoring of anaesthetic depth is recommended.
ClinicalTrials.gov identifier: NCT 01128465.
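The bias and inaccuracy statistics used above (MDPE and MDAPE) are medians of percentage performance errors between measured and model-predicted concentrations. A minimal Python sketch of their computation (the sample values in the usage line are invented, not study data):

```python
import statistics

def performance_errors(measured, predicted):
    """Varvel-style performance error metrics: PE is the percentage
    deviation of the measured from the model-predicted concentration;
    MDPE (bias) and MDAPE (inaccuracy) are the medians of the PEs
    and of their absolute values, respectively."""
    pe = [100.0 * (m - p) / p for m, p in zip(measured, predicted)]
    mdpe = statistics.median(pe)
    mdape = statistics.median(abs(x) for x in pe)
    return mdpe, mdape

# Invented example: three samples, model predicts 1.0 ug/ml each time.
mdpe, mdape = performance_errors([1.1, 0.9, 1.2], [1.0, 1.0, 1.0])
```

A negative MDPE, as reported for both models above, means the measured concentrations were on average below the model predictions (over-prediction by the model).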
Wang, Yubo; Tatinati, Sivanagaraja; Huang, Liyu; Kim, Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H
2017-07-01
Extracranial robotic radiotherapy employs external markers and a correlation model to trace tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. Existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict future values. Unlike these methods, pLRF-ELM performs prediction by modeling higher-level features obtained by mapping the raw respiratory motion into the random feature space of the ELM, instead of modeling the raw respiratory motion directly. The developed method is evaluated using a dataset acquired from 31 patients, for two horizons in line with the latencies of treatment systems such as CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods. Results further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
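The core ELM idea referenced above, fixing a random hidden layer and solving only for the linear output weights, can be sketched compactly. The following is a generic single-input ELM in Python, not the authors' pLRF-ELM; the synthetic trace and all settings are illustrative assumptions:

```python
import math
import random

def elm_fit(xs, ys, n_hidden=10, seed=0):
    """Minimal extreme-learning-machine sketch: inputs pass through a
    fixed random tanh hidden layer, and only the linear output weights
    are estimated, via ridge-stabilized normal equations solved by
    Gaussian elimination."""
    rng = random.Random(seed)
    params = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n_hidden)]
    H = [[math.tanh(a * x + b) for a, b in params] for x in xs]
    n = n_hidden
    A = [[sum(H[k][i] * H[k][j] for k in range(len(xs)))
          + (1e-8 if i == j else 0.0) for j in range(n)] for i in range(n)]
    rhs = [sum(H[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    for i in range(n):                      # forward elimination with pivoting
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        rhs[i], rhs[piv] = rhs[piv], rhs[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        beta[i] = (rhs[i] - sum(A[i][c] * beta[c]
                                for c in range(i + 1, n))) / A[i][i]
    return params, beta

def elm_predict(params, beta, x):
    return sum(w * math.tanh(a * x + b) for (a, b), w in zip(params, beta))

# Fit a synthetic quasi-periodic "breathing" trace (illustrative only):
xs = [0.2 * i for i in range(32)]
ys = [math.sin(x) for x in xs]
params, beta = elm_fit(xs, ys)
```

Because the hidden weights are never trained, fitting reduces to one linear solve, which is what makes ELM-based predictors fast enough for the latency budgets mentioned above.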
NASA Astrophysics Data System (ADS)
Li, Chengcheng; Ren, Hong-Li; Zhou, Fang; Li, Shuanglin; Fu, Joshua-Xiouhua; Li, Guoping
2018-06-01
Precipitation is highly variable in space and discontinuous in time, which makes it challenging for models to predict on subseasonal scales (10-30 days). We analyze multi-pentad predictions from the Beijing Climate Center Climate System Model version 1.2 (BCC_CSM1.2), based on hindcasts from 1997 to 2014. The analysis focuses on the skill of the model in predicting precipitation variability over Southeast Asia from May to September, as well as its connections with the intraseasonal oscillation (ISO). The effective precipitation prediction length is about two pentads (10 days), during which the skill measured by anomaly correlation is greater than 0.1. To further evaluate the precipitation prediction, the skills of two related circulation fields were diagnosed; the prediction skills for the circulation fields exceed that for precipitation. Moreover, the prediction skills tend to be higher when the amplitude of the ISO is large, especially for the boreal summer intraseasonal oscillation. The skills associated with phases 2 and 5 are higher, while that of phase 3 is relatively lower. Even so, different initial phases reflect the same spatial characteristics, showing higher skill of precipitation prediction over the northwest Pacific Ocean. Finally, filter analysis is applied to the prediction skills of total and subseasonal anomalies. The skills of the two anomaly sets are comparable during the first two lead pentads, but thereafter the skill of the total anomalies is significantly higher than that of the subseasonal anomalies. This paper should help advance research in subseasonal precipitation prediction.
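The anomaly correlation skill score used above is the Pearson correlation of forecast and observed anomalies about a common climatology. A minimal Python sketch:

```python
def anomaly_correlation(forecast, observed, climatology):
    """Anomaly correlation coefficient: correlation of forecast and
    observed anomalies relative to the climatological mean field."""
    fa = [f - c for f, c in zip(forecast, climatology)]
    oa = [o - c for o, c in zip(observed, climatology)]
    num = sum(f * o for f, o in zip(fa, oa))
    den = (sum(f * f for f in fa) * sum(o * o for o in oa)) ** 0.5
    return num / den
```

A value of 1 indicates a perfect anomaly forecast; the 0.1 threshold quoted above is a weak but non-zero level of skill.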
OPTIMIZING MODEL PERFORMANCE: VARIABLE SIZE RESOLUTION IN CLOUD CHEMISTRY MODELING. (R826371C005)
Under many conditions size-resolved aqueous-phase chemistry models predict higher sulfate production rates than comparable bulk aqueous-phase models. However, there are special circumstances under which bulk and size-resolved models offer similar predictions. These special con...
NASA Technical Reports Server (NTRS)
Jongen, T.; Machiels, L.; Gatski, T. B.
1997-01-01
Three types of turbulence models which account for rotational effects in noninertial frames of reference are evaluated for the case of incompressible, fully developed rotating turbulent channel flow. The different types of models are a Coriolis-modified eddy-viscosity model, a realizable algebraic stress model, and an algebraic stress model which accounts for dissipation-rate anisotropies. A direct numerical simulation of a rotating channel flow is used for turbulence model validation. This simulation differs from previous studies in that significantly higher rotation numbers are investigated. Flows at these higher rotation numbers are characterized by relaminarization on the cyclonic or suction side of the channel, and a linear velocity profile on the anticyclonic or pressure side of the channel. The predictive performance of the three types of models is examined in detail, and formulation deficiencies are identified which cause poor predictive performance for some of the models. Criteria are identified which allow for accurate prediction of such flows by algebraic stress models and their corresponding Reynolds stress formulations.
Myszkowska, Dorota
2013-03-01
The aim of the study was to construct models forecasting the birch pollen season characteristics in Cracow on the basis of an 18-year data series. The study was performed using the volumetric method (Lanzoni/Burkard trap). The 98/95% method was used to calculate the pollen season. Spearman's correlation test was applied to find the relationship between the meteorological parameters and the pollen season characteristics. To construct the predictive models, backward stepwise multiple regression analysis was used, accounting for the multi-collinearity of variables. The predictive models best fitted the pollen season start and end, especially models containing two independent variables. The peak concentration value was predicted with a higher prediction error. The accuracy of the models predicting the pollen season characteristics was also higher in 2009 than in 2010. Both the multi-variable model and the one-variable model for the beginning of the pollen season included air temperature during the last 10 days of February, while the multi-variable model also included humidity at the beginning of April. The models forecasting the end of the pollen season were based on temperature in March-April, while the peak day was predicted using the temperature during the last 10 days of March.
Evaluation of an Impedance Model for Perforates Including the Effect of Bias Flow
NASA Technical Reports Server (NTRS)
Betts, J. F.; Follet, J. I.; Kelly, J. J.; Thomas, R. H.
2000-01-01
A new bias flow impedance model is developed for perforated plates from basic principles, using as little empiricism as possible. A quality experimental database was used to determine the predictive validity of the model. Results show that the model performs better for higher (15%) than for lower (5%) percent open area (POA) samples. Based on the least-squares ratio of numerical vs. experimental results, model predictions were on average within 20% and 30% for the higher and lower POA, respectively. Based on the work of other investigators, it is hypothesized that at lower POAs the higher fluid velocities in the perforate's orifices begin to form unsteady vortices, an effect not accounted for in our model. The numerical model in general also underpredicts the experiments. It is theorized that the actual acoustic C_D is lower than the measured raylometer C_D used in the model; using a larger C_D makes the numerical model predict lower impedances. The frequency-domain model derived in this paper shows very good agreement with another model derived using a time-domain approach.
NASA Astrophysics Data System (ADS)
Roy, Swagata; Biswas, Srija; Babu, K. Arun; Mandal, Sumantra
2018-05-01
A novel constitutive model has been developed for predicting the flow response of super-austenitic stainless steel over a wide range of strains (0.05-0.6), temperatures (1173-1423 K) and strain rates (0.001-1 s⁻¹). The predictability of this new model has been compared with that of the existing Johnson-Cook (JC) and modified Zerilli-Armstrong (M-ZA) models. The JC model is not well suited to flow prediction, as it exhibits a very high (~36%) average absolute error (δ) and a low (~0.92) correlation coefficient (R). In contrast, the M-ZA model demonstrates a relatively lower δ (~13%) and higher R (~0.96) for flow prediction; its incorporation of couplings between processing parameters enables better prediction than the JC model. However, flow analyses of the studied alloy reveal additional synergistic influences of strain and strain rate, and of strain, temperature and strain rate, beyond those considered in the M-ZA model. Hence, the new phenomenological model has been formulated incorporating all the individual and synergistic effects of the processing parameters together with a 'strain-shifting' parameter. The proposed model predicts the flow behavior of the alloy with much better correlation and generalization than the M-ZA model, as substantiated by its lower δ (~7.9%) and higher R (~0.99).
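For reference, the Johnson-Cook model mentioned above expresses flow stress as a product of strain-hardening, strain-rate and thermal-softening factors, σ = (A + Bεⁿ)(1 + C ln(ε̇/ε̇₀))(1 − T*ᵐ) with homologous temperature T* = (T − T_ref)/(T_melt − T_ref). A Python sketch (the parameter values in the example are hypothetical, not the fitted constants for this alloy):

```python
import math

def johnson_cook(strain, strain_rate, T, A, B, n, C, m,
                 eps0=1.0, T_ref=1173.0, T_melt=1673.0):
    """Johnson-Cook flow stress (valid for T >= T_ref): multiplicative
    strain-hardening, strain-rate, and thermal-softening terms.
    A, B in stress units; n, C, m dimensionless; eps0 is the
    reference strain rate."""
    T_star = (T - T_ref) / (T_melt - T_ref)
    return (A + B * strain ** n) \
        * (1.0 + C * math.log(strain_rate / eps0)) \
        * (1.0 - T_star ** m)

# Hypothetical constants purely for illustration:
sigma = johnson_cook(0.2, 0.1, 1273.0, A=100.0, B=50.0, n=0.5, C=0.01, m=1.0)
```

The model's multiplicative form is exactly why it cannot capture the coupled strain-temperature-strain-rate effects the abstract describes: each factor depends on only one processing variable.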
Rahmati, Seyed Mohammadali; Rostami, Mostafa; Beigzadeh, Borhan
2018-04-27
Parametric optimization techniques have been widely employed to predict human gait trajectories; however, their applicability to other aspects of gait remains questionable. The aim of this study is to investigate whether the gait prediction model is able to reproduce movement trajectories at higher average velocities. A planar, seven-segment model with sixteen muscle groups was used to represent human neuro-musculoskeletal dynamics. First, the joint angles, ground reaction forces (GRFs) and muscle activations were predicted and validated for a normal average velocity (1.55 m/s) in the single support phase (SSP) by minimizing energy expenditure subject to the non-linear constraints of the gait. The unconstrained system dynamics of extended inverse dynamics (USDEID) approach was used to estimate muscle activations. Then, by scaling time and applying the same procedure, the movement trajectories were predicted for higher average velocities (from 2.07 m/s to 4.07 m/s) and compared with the pattern of movement at fast walking speed. The comparison indicated a high level of compatibility between the experimental and predicted results, except for the vertical position of the center of gravity (COG). It was concluded that the gait prediction model can be effectively used to predict gait trajectories at higher average velocities.
Application of GA-SVM method with parameter optimization for landslide development prediction
NASA Astrophysics Data System (ADS)
Li, X. Z.; Kong, J. M.
2013-10-01
Prediction of the landslide development process has long been a central issue in landslide research, and many methods for landslide displacement series prediction have been proposed. The support vector machine (SVM) has proved to be a novel algorithm with good performance; however, its performance strongly depends on the right selection of the SVM model parameters (C and γ). In this study, we present an application of the GA-SVM method with parameter optimization to landslide displacement rate prediction. We selected a typical large-scale landslide in a hydroelectric engineering area of Southwest China as a case. On the basis of analyzing the basic characteristics and monitoring data of the landslide, a single-factor GA-SVM model and a multi-factor GA-SVM model of the landslide were built. Moreover, the models were compared with single-factor and multi-factor SVM models of the landslide. The results show that all four models have high prediction accuracies, but the accuracies of the GA-SVM models are slightly higher than those of the SVM models, and the accuracies of the multi-factor models are slightly higher than those of the single-factor models. The multi-factor GA-SVM model achieved the highest accuracy, with the smallest RMSE of 0.0009 and the largest RI of 0.9992.
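The GA step of the GA-SVM method searches the (C, γ) space with evolutionary operators instead of a grid. A minimal sketch of such a search, assuming a toy surrogate objective in place of the SVM cross-validation error (the bounds, population size, and objective below are illustrative assumptions, not values from the study):

```python
import random

random.seed(42)

# Surrogate for the cross-validation error of an SVM as a function of
# log10(C) and log10(gamma). In a real GA-SVM this would train an SVM and
# return the validation error; here it is a toy bowl with a known minimum
# at C = 100, gamma = 1e-3 (purely illustrative).
def cv_error(log_c, log_g):
    return (log_c - 2.0) ** 2 + (log_g + 3.0) ** 2

BOUNDS = [(-2.0, 4.0), (-5.0, 1.0)]  # search ranges for log10(C), log10(gamma)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, sigma=0.3):
    # Gaussian mutation, clipped to the search bounds.
    return [min(max(g + random.gauss(0.0, sigma), lo), hi)
            for g, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    # Uniform crossover: each child gene comes from one of the parents.
    return [ga if random.random() < 0.5 else gb for ga, gb in zip(a, b)]

def tournament(pop, k=3):
    # Tournament selection: fittest of k random individuals.
    return min(random.sample(pop, k), key=lambda ind: cv_error(*ind))

pop = [random_individual() for _ in range(30)]
best = min(pop, key=lambda ind: cv_error(*ind))
for _ in range(40):  # generations, with elitism (best always survives)
    pop = [best] + [mutate(crossover(tournament(pop), tournament(pop)))
                    for _ in range(len(pop) - 1)]
    best = min(pop, key=lambda ind: cv_error(*ind))

best_c, best_gamma = 10 ** best[0], 10 ** best[1]
```

With the surrogate replaced by an actual cross-validated SVM error, the same loop yields the GA-optimized (C, γ) pair.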
Qian, Weiguo; Tang, Yunjia; Yan, Wenhua; Sun, Ling; Lv, Haitao
2018-03-09
Kawasaki disease (KD) is the most common pediatric vasculitis. Several models have been established to predict intravenous immunoglobulin (IVIG) resistance. The present study aimed to evaluate the efficacy of these prediction models using the medical data of KD patients. We collected the medical records of patients hospitalized in the Department of Cardiology of the Children's Hospital of Soochow University with a diagnosis of KD from Jan 2015 to Dec 2016. IVIG resistance was defined as recrudescent or persistent fever ≥36 h after the end of the IVIG infusion. Patients with IVIG resistance tended to be younger and to have a higher occurrence of rash and changes in the extremities. They had higher levels of C-reactive protein, aspartate aminotransferase, neutrophil proportion (N%), and total bilirubin, and a lower level of albumin. Our prediction model had a sensitivity of 0.72 and a specificity of 0.75. The sensitivities of the Kobayashi, Egami, Kawamura, Sano, and Formosa models were 0.72, 0.44, 0.48, 0.20, and 0.68, respectively; their specificities were 0.62, 0.82, 0.66, 0.91, and 0.48, respectively. Our prediction model had the most powerful predictive value in this population, followed by the Kobayashi model, while the other prediction models performed less well.
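Sensitivity and specificity of the kind reported for these IVIG-resistance models come directly from confusion-matrix counts; a minimal sketch (the counts below are invented for illustration, not the study's data):

```python
def sensitivity(tp, fn):
    # Fraction of truly IVIG-resistant patients the model flags.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of IVIG-responsive patients the model correctly clears.
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts, for illustration only.
tp, fn, fp, tn = 18, 7, 25, 75
sens = sensitivity(tp, fn)  # 18 / 25 = 0.72
spec = specificity(tn, fp)  # 75 / 100 = 0.75
```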
Fujiyoshi, Akira; Arima, Hisatomi; Tanaka-Mizuno, Sachiko; Hisamatsu, Takahashi; Kadowaki, Sayaka; Kadota, Aya; Zaid, Maryam; Sekikawa, Akira; Yamamoto, Takashi; Horie, Minoru; Miura, Katsuyuki; Ueshima, Hirotsugu
2017-12-05
The clinical significance of coronary artery calcification (CAC) is not fully determined in general East Asian populations, where background coronary heart disease (CHD) is less common than in the USA and other Western countries. We cross-sectionally assessed the association between CAC and estimated CHD risk, as well as each major risk factor, in general Japanese men. Participants were 996 randomly selected Japanese men aged 40-79 y, free of stroke, myocardial infarction, or revascularization. We examined an independent relationship between each risk factor used in prediction models and CAC score ≥100 by logistic regression. We then divided the participants into quintiles of estimated CHD risk per prediction model to calculate the odds ratio of having CAC score ≥100. Receiver operating characteristic curves and the c-index were used to examine the discriminative ability of each prediction model for prevalent CAC. Age, smoking status, and systolic blood pressure were significantly associated with CAC score ≥100 in the multivariable analysis. The odds of having CAC score ≥100 were higher for those in higher quintiles in all prediction models (p-values for trend across quintiles <0.0001 for all models). All prediction models showed fair, and mutually similar, discriminative ability to detect CAC score ≥100, with c-statistics around 0.70. In a community-based sample of Japanese men free of CHD and stroke, CAC score ≥100 was significantly associated with higher estimated CHD risk by prediction models. This finding supports the potential utility of CAC as a biomarker for CHD in a general Japanese male population.
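For a binary outcome such as CAC score ≥100, the c-index reduces to the probability that a randomly chosen case receives a higher estimated risk than a randomly chosen non-case (i.e., the ROC AUC). A small pure-Python sketch with invented scores:

```python
def c_index(scores, labels):
    """Concordance statistic for a binary outcome (equivalent to ROC AUC).

    Counts, over all (positive, negative) pairs, how often the positive
    case has the higher predicted risk; ties count as half.
    """
    pairs = concordant = 0.0
    for s_i, y_i in zip(scores, labels):
        for s_j, y_j in zip(scores, labels):
            if y_i == 1 and y_j == 0:
                pairs += 1
                if s_i > s_j:
                    concordant += 1
                elif s_i == s_j:
                    concordant += 0.5
    return concordant / pairs

# Invented risk scores: perfect separation gives c = 1.0; chance gives 0.5.
c_perfect = c_index([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
c_chance = c_index([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])
```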
NASA Technical Reports Server (NTRS)
Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris
2000-01-01
The normal impedance of perforated-plate acoustic liners, including the effect of bias flow, was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes, with end corrections included to handle finite-length effects. These models assumed incompressible and compressible flow, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%. The predicted resistance results showed better agreement with experiments for the higher percent open area samples, and the agreement tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used to predict impedance results. The model predictions were closer to the experimental resistance results for the 2% to 3% open area samples, and again tended to deteriorate as bias flow was increased. The reactance results were well predicted by the models for the higher percent open areas, but deteriorated as the percent open area was lowered (to 5%) and bias flow was increased. The incompressible model was then fitted to the experimental database using an optimization routine that found the optimal set of multiplicative coefficients on the non-dimensional groups, minimizing the least-squares slope error between predictions and experiments. The result of the fit indicated that terms not associated with bias flow required a greater degree of correction than those associated with bias flow. The fitted model improved agreement with experiments by nearly 15% for the low percent open area (5%) samples when compared with the unfitted model; the fitted and unfitted models performed equally well for the higher percent open areas (10% and 15%).
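For a single multiplicative coefficient, a least-squares fit of the kind described has a closed-form solution; a sketch with invented predictions and measurements (the actual fit optimized several coefficients over non-dimensional groups simultaneously, via an optimization routine):

```python
# Least-squares multiplicative correction: find c minimizing
#     sum_i (c * pred_i - obs_i)^2,
# which has the closed form c = sum(pred_i * obs_i) / sum(pred_i^2).
def best_scale(pred, obs):
    num = sum(p * o for p, o in zip(pred, obs))
    den = sum(p * p for p in pred)
    return num / den

# Invented model predictions and measurements, for illustration only.
pred = [1.0, 2.0, 3.0]
obs = [1.2, 2.4, 3.6]
c = best_scale(pred, obs)  # exactly 1.2 here, since obs = 1.2 * pred
```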
Smith, Jordan Ned; Hinderliter, Paul M; Timchalk, Charles; Bartels, Michael J; Poet, Torka S
2014-08-01
Sensitivity to some chemicals in animals and humans is known to vary with age. Age-related changes in sensitivity to chlorpyrifos have been reported in animal models. A life-stage physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model was developed to predict disposition of chlorpyrifos and its metabolites, chlorpyrifos-oxon (the ultimate toxicant) and 3,5,6-trichloro-2-pyridinol (TCPy), as well as B-esterase inhibition by chlorpyrifos-oxon in humans. In this model, previously measured age-dependent metabolism of chlorpyrifos and chlorpyrifos-oxon was integrated into age-related descriptions of human anatomy and physiology. The life-stage PBPK/PD model was calibrated and tested against controlled adult human exposure studies. Simulations suggest age-dependent pharmacokinetics and response may exist. At oral doses ≥0.6 mg/kg of chlorpyrifos (100- to 1000-fold higher than environmental exposure levels), 6-month-old children are predicted to have higher levels of chlorpyrifos-oxon in blood and higher levels of red blood cell cholinesterase inhibition compared to adults at equivalent doses. At lower doses more relevant to environmental exposures, simulations predict that adults will have slightly higher levels of chlorpyrifos-oxon in blood and greater cholinesterase inhibition. This model provides a computational framework for age-comparative simulations that can be utilized to predict chlorpyrifos disposition and biological response over various postnatal life stages.
Predictors of cultural capital on science academic achievement at the 8th grade level
NASA Astrophysics Data System (ADS)
Misner, Johnathan Scott
The purpose of the study was to determine whether students' cultural capital is a significant predictor of 8th grade science achievement test scores in urban locales. Cultural capital refers to the knowledge used and gained by the dominant class, which allows social and economic mobility. Cultural capital variables include magazines at home and parental education level. Other variables analyzed include socioeconomic status (SES), gender, and English language learner (ELL) status. This non-experimental study analyzed the results of the 2011 Eighth Grade Science National Assessment of Educational Progress (NAEP) using a multivariate stepwise regression analysis. The researcher concluded that the addition of cultural capital factors significantly increased the predictive power of the model in which magazines in the home, gender, ELL classification, parental education level, and SES were the independent variables and science achievement was the dependent variable. For alpha=0.05, the overall test for the model produced an R2 value of 0.232; the model therefore predicted 23.2% of the variance in science achievement results. Other major findings include: higher measures of home resources predicted higher 2011 NAEP eighth grade science achievement; males were predicted to have higher achievement; students classified as ELL were predicted to score lower; higher parental education predicted higher achievement; and lower measures of SES predicted lower achievement. This study contributed to research in this field by identifying cultural capital factors with statistically significant predictive power for eighth grade science achievement, which can inform strategies to improve science academic achievement among underserved populations.
Zhang, Yang; Pun, Betty; Wu, Shiang-Yuh; Vijayaraghavan, Krish; Seigneur, Christian
2004-12-01
The Models-3 Community Multiscale Air Quality (CMAQ) Modeling System and the Particulate Matter Comprehensive Air Quality Model with extensions (PMCAMx) were applied to simulate the period June 29-July 10, 1999, of the Southern Oxidants Study episode with two nested horizontal grid sizes: a coarse resolution of 32 km and a fine resolution of 8 km. The spatial variations of ozone (O3), particulate matter with an aerodynamic diameter ≤2.5 μm (PM2.5), and particulate matter with an aerodynamic diameter ≤10 μm (PM10) predicted by the two models are similar in rural areas but differ significantly over some urban/suburban areas in the eastern and southern United States, where PMCAMx tends to predict higher values of O3 and PM than CMAQ. Both models tend to predict O3 values higher than those observed. For observed O3 values above 60 ppb, O3 performance meets the U.S. Environmental Protection Agency's criteria for CMAQ with both grids and for PMCAMx with the fine grid only; it becomes unsatisfactory for PMCAMx and marginally satisfactory for CMAQ for observed O3 values above 40 ppb. Both models predict similar amounts of sulfate (SO4(2-)) and organic matter, and both predict SO4(2-) to be the largest contributor to PM2.5. PMCAMx generally predicts higher amounts of ammonium (NH4+), nitrate (NO3-), and black carbon (BC) than does CMAQ. PM performance for CMAQ is generally consistent with that of other PM models, whereas PMCAMx predicts higher concentrations of NO3-, NH4+, and BC than observed, which degrades its performance. For PM10 and PM2.5 predictions over the southeastern U.S. domain, the ranges of mean normalized gross errors (MNGEs) and mean normalized bias are 37-43% and -33% to 4% for CMAQ, and 50-59% and 7-30% for PMCAMx. Both models predict the largest MNGEs for NO3- (98-104% for CMAQ; 138-338% for PMCAMx).
The inaccurate NO3- predictions by both models may be caused by the inaccuracies in the ammonia emission inventory and the uncertainties in the gas/particle partitioning under some conditions. In addition to these uncertainties, the significant PM overpredictions by PMCAMx may be attributed to the lack of wet removal for PM and a likely underprediction in the vertical mixing during the daytime.
Fukuda, Haruhisa; Kuroki, Manabu
2016-03-01
To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
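The bootstrap check for overfitting resamples the development data with replacement and recomputes the performance metric on each resample. A minimal percentile-bootstrap sketch, using plain accuracy rather than the study's C-index, with invented predicted/observed SSI labels:

```python
import random

random.seed(0)

def accuracy(pred, actual):
    return sum(p == a for p, a in zip(pred, actual)) / len(actual)

# Invented predicted/observed SSI labels, for illustration only.
pred   = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0]
actual = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0]

point = accuracy(pred, actual)

# Percentile bootstrap: resample case indices with replacement and
# recompute the metric to gauge its sampling variability.
stats = []
n = len(actual)
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    stats.append(accuracy([pred[i] for i in idx], [actual[i] for i in idx]))
stats.sort()
lo, hi = stats[25], stats[974]  # approximate 95% interval
```

In validation of a fitted model, the same resampling loop is typically used to estimate the optimism of the apparent performance rather than a plain confidence interval.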
Lac, Andrew; Brack, Nathaniel
2018-02-01
Alcohol myopia theory posits that alcohol consumption attenuates information processing capacity, and that expectancy beliefs together with intake level are responsible for experiences in myopic effects (relief, self-inflation, and excess). Adults (N=413) averaging 36.39 (SD=13.02) years of age completed the Comprehensive Effects of Alcohol questionnaire at baseline, followed by alcohol use measures (frequency and quantity) and the Alcohol Myopia Scale one month later. Three structural equation models based on differing construct manifestations of alcohol expectancies served to longitudinally forecast alcohol use and myopia. In Model 1, overall expectancy predicted greater alcohol use and higher levels of all three myopic effects. In Model 2, specifying separate positive and negative expectancy factors, positive but not negative expectancy predicted greater use. Furthermore, positive expectancy and use explained higher myopic relief and higher self-inflation, whereas positive expectancy, negative expectancy, and use explained higher myopic excess. In Model 3, the seven specific expectancy subscales (sociability, tension reduction, liquid courage, sexuality, cognitive and behavioral impairment, risk and aggression, and self-perception) were simultaneously specified as predictors. Tension reduction expectancy, sexuality expectancy, and use contributed to higher myopic relief; sexuality expectancy and use explained higher myopic self-inflation; and risk and aggression expectancy and use accounted for higher myopic excess. Across all three predictive models, the total variance explained ranged from 12 to 19% for alcohol use, 50 to 51% for relief, 29 to 34% for self-inflation, and 32 to 35% for excess. Findings support that the type of alcohol myopia experienced is a concurrent function of self-fulfilling alcohol prophecies and drinking levels. The interpreted measurement manifestation of expectancy yielded different prevention implications. 
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Lulin, E-mail: lulin.yuan@duke.edu; Wu, Q. Jackie; Yin, Fang-Fang
2014-02-15
Purpose: Sparing of single-side parotid gland is a common practice in head-and-neck (HN) intensity modulated radiation therapy (IMRT) planning. It is a special case of dose sparing tradeoff between different organs-at-risk. The authors describe an improved mathematical model for predicting achievable dose sparing in parotid glands in HN IMRT planning that incorporates single-side sparing considerations based on patient anatomy and learning from prior plan data. Methods: Among 68 HN cases analyzed retrospectively, 35 cases had physician prescribed single-side parotid sparing preferences. The single-side sparing model was trained with cases which had single-side sparing preferences, while the standard model was trained with the remainder of cases. A receiver operating characteristics (ROC) analysis was performed to determine the best criterion that separates the two case groups using the physician's single-side sparing prescription as ground truth. The final predictive model (combined model) takes into account the single-side sparing by switching between the standard and single-side sparing models according to the single-side sparing criterion. The models were tested with 20 additional cases. The significance of the improvement of prediction accuracy by the combined model over the standard model was evaluated using the Wilcoxon rank-sum test. Results: Using the ROC analysis, the best single-side sparing criterion is (1) the predicted median dose of one parotid is higher than 24 Gy; and (2) that of the other is higher than 7 Gy. This criterion gives a true positive rate of 0.82 and a false positive rate of 0.19, respectively. For the bilateral sparing cases, the combined and the standard models performed equally well, with the median of the prediction errors for parotid median dose being 0.34 Gy by both models (p = 0.81).
For the single-side sparing cases, the standard model overestimates the median dose by 7.8 Gy on average, while the predictions by the combined model differ from actual values by only 2.2 Gy (p = 0.005). Similarly, the sum of residues between the modeled and the actual plan DVHs is the same for the bilateral sparing cases by both models (p = 0.67), while the standard model predicts significantly higher DVHs than the combined model for the single-side sparing cases (p = 0.01). Conclusions: The combined model for predicting parotid sparing that takes into account single-side sparing improves the prediction accuracy over the previous model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Jordan N.; Hinderliter, Paul M.; Timchalk, Charles
Sensitivity to chemicals in animals and humans is known to vary with age. Age-related changes in sensitivity to chlorpyrifos (CPF) have been reported in animal models. A life-stage physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model was developed to computationally predict disposition of CPF and its metabolites, chlorpyrifos-oxon (the ultimate toxicant) and 3,5,6-trichloro-2-pyridinol (TCPy), as well as B-esterase inhibition by chlorpyrifos-oxon in humans. In this model, age-dependent body weight was calculated from a generalized Gompertz function, and compartments (liver, brain, fat, blood, diaphragm, rapid, and slow) were scaled based on body weight from polynomial functions on a fractional body weight basis. Blood flows among compartments were calculated as a constant flow per compartment volume. The life-stage PBPK/PD model was calibrated and tested against controlled adult human exposure studies. Model simulations suggest age-dependent pharmacokinetics and response may exist. At oral doses ≥0.55 mg/kg of chlorpyrifos (significantly higher than environmental exposure levels), 6-month-old children are predicted to have higher levels of chlorpyrifos-oxon in blood and higher levels of red blood cell cholinesterase inhibition compared to adults given equivalent oral doses of chlorpyrifos. At lower doses that are more relevant to environmental exposures, the model predicts that adults will have slightly higher levels of chlorpyrifos-oxon in blood and greater cholinesterase inhibition. This model provides a computational framework for age-comparative simulations that can be utilized to predict CPF disposition and biological response over various postnatal life stages.
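An age-dependent body weight from a generalized Gompertz function, as mentioned above, can be sketched as follows; the parameter values are placeholders chosen for illustration, not the model's calibrated human growth parameters:

```python
import math

def gompertz_weight(age_years, w_inf=70.0, b=2.5, k=0.25):
    """Gompertz growth curve: body weight rises sigmoidally from a small
    initial value toward the adult asymptote w_inf (kg).

    All parameter values here are illustrative placeholders, not the
    calibrated values from the life-stage PBPK/PD model.
    """
    return w_inf * math.exp(-b * math.exp(-k * age_years))

# Body weight at a few postnatal ages (years); strictly increasing and
# bounded above by w_inf.
weights = [gompertz_weight(t) for t in (0.5, 5, 15, 40)]
```

Compartment volumes would then be derived from this weight via the fractional polynomial scaling the abstract describes.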
Evaluation of MEGAN predicted biogenic isoprene emissions at urban locations in Southeast Texas
NASA Astrophysics Data System (ADS)
Kota, Sri Harsha; Schade, Gunnar; Estes, Mark; Boyer, Doug; Ying, Qi
2015-06-01
Summertime isoprene emissions in the Houston area predicted by the Model of Emissions of Gases and Aerosols from Nature (MEGAN) version 2.1 during the 2006 TexAQS study were evaluated using a source-oriented Community Multiscale Air Quality (CMAQ) model. Predicted daytime isoprene concentrations at nine surface sites operated by the Texas Commission on Environmental Quality (TCEQ) were significantly higher than local observations when biogenic emissions dominate the total isoprene concentrations, with mean normalized bias (MNB) ranging from 2.0 to 7.7 and mean normalized error (MNE) ranging from 2.2 to 7.7. Predicted upper-air isoprene and its first-generation oxidation products methacrolein (MACR) and methyl vinyl ketone (MVK) were also significantly higher (MNB = 8.6, MNE = 9.1) than observations made on board NOAA's WP-3 airplane, which flew over the urban area. Over-prediction of isoprene and its oxidation products both at the surface and aloft strongly suggests that biogenic isoprene emissions in the Houston area are significantly overestimated. Reducing the emission rates by approximately 3/4 was necessary to reduce the error between predictions and observations. Comparison of the gridded leaf area index (LAI), plant functional type (PFT) and gridded isoprene emission factor (EF) used in MEGAN modeling with estimates of the same factors from a field survey north of downtown Houston showed that the isoprene over-prediction is likely caused by the combined effects of a large overestimation of the gridded EF in urban Houston and an underestimation of urban LAI. Nevertheless, predicted ozone concentrations in this region were not significantly affected by the isoprene over-predictions, while predicted isoprene SOA and total SOA concentrations can be higher by as much as 50% and 13%, respectively, using the higher isoprene emission rates.
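The MNB and MNE statistics used in this evaluation are simple relative-deviation averages; a sketch with invented concentrations:

```python
def mean_normalized_bias(pred, obs):
    # MNB: average signed relative deviation; positive means over-prediction.
    return sum((p - o) / o for p, o in zip(pred, obs)) / len(obs)

def mean_normalized_error(pred, obs):
    # MNE: average absolute relative deviation.
    return sum(abs(p - o) / o for p, o in zip(pred, obs)) / len(obs)

# Invented isoprene concentrations (ppb), for illustration only.
obs = [10.0, 20.0]
pred = [12.0, 18.0]
mnb = mean_normalized_bias(pred, obs)   # (0.2 - 0.1) / 2 = 0.05
mne = mean_normalized_error(pred, obs)  # (0.2 + 0.1) / 2 = 0.15
```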
Payne, Courtney E; Wolfrum, Edward J
2015-01-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. It is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
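The PLS calibration at the heart of these models can be illustrated with a one-component PLS-1 fit; real NIR models use many latent variables, spectral preprocessing, and cross-validated component selection, none of which are shown here. The toy "spectra" below are deliberately rank-one so that a single component recovers the response exactly:

```python
import numpy as np

rng = np.random.default_rng(7)

def pls1_fit(X, y):
    """Single-component PLS-1: center the data, project onto the weight
    vector X^T y, and regress y on the resulting score."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    w = Xc.T @ yc
    w = w / np.linalg.norm(w)  # unit weight vector
    t = Xc @ w                 # latent scores
    b = (yc @ t) / (t @ t)     # inner regression coefficient
    return x_mean, y_mean, w, b

def pls1_predict(model, X):
    x_mean, y_mean, w, b = model
    return y_mean + b * ((X - x_mean) @ w)

# Toy 'spectra': a rank-one X whose single spectral direction carries the
# analyte signal, so one component suffices (20 samples, 50 wavelengths).
scores = rng.normal(size=20)
loading = rng.normal(size=50)
X = np.outer(scores, loading)
y = 2.0 * scores + 1.0  # e.g. glucan wt% as a linear function of the signal

model = pls1_fit(X, y)
y_hat = pls1_predict(model, X)
```

A PLS-2 model, as used for composition, extends the same idea to several response variables fitted jointly.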
Elastic velocity models for gas-hydrate-bearing sediments-a comparison
NASA Astrophysics Data System (ADS)
Chand, Shyam; Minshull, Tim A.; Gei, Davide; Carcione, José M.
2004-11-01
The presence of gas hydrate in oceanic sediments is mostly identified by bottom-simulating reflectors (BSRs), reflection events with reversed polarity following the trend of the seafloor. Attempts to quantify the amount of gas hydrate present in oceanic sediments have been based mainly on the presence or absence of a BSR and its relative amplitude. Recent studies have shown that a BSR is not a necessary criterion for the presence of gas hydrates, but rather its presence depends on the type of sediments and the in situ conditions. The influence of hydrate on the physical properties of sediments overlying the BSR is determined by the elastic properties of their constituents and on sediment microstructure. In this context several approaches have been developed to predict the physical properties of sediments, and thereby quantify the amount of gas/gas hydrate present from observed deviations of these properties from those predicted for sediments without gas hydrate. We tested four models: the empirical weighted equation (WE); the three-phase effective-medium theory (TPEM); the three-phase Biot theory (TPB) and the differential effective-medium theory (DEM). We compared these models for a range of variables (porosity and clay content) using standard values for physical parameters. The comparison shows that all the models predict sediment properties comparable to field values except for the WE model at lower porosities and the TPB model at higher porosities. The models differ in the variation of velocity with porosity and clay content. The variation of velocity with hydrate saturation is also different, although the range is similar. We have used these models to predict velocities for field data sets from sediment sections with and without gas hydrates. The first is from the Mallik 2L-38 well, Mackenzie Delta, Canada, and the second is from Ocean Drilling Program (ODP) Leg 164 on Blake Ridge. 
Both data sets have Vp and Vs information along with the composition and porosity of the matrix. Models are considered successful if predictions from both Vp and Vs match hydrate saturations inferred from other data. Three of the models predict consistent hydrate saturations of 60-80 per cent from both Vp and Vs from log and vertical seismic profiling data for the Mallik 2L-38 well data set, but the TPEM model predicts 20 per cent higher saturations, as does the DEM model with a clay-water starting medium. For the clay-rich sediments of Blake Ridge, the DEM, TPEM and WE models predict 10-20 per cent hydrate saturation from Vp data, comparable to that inferred from resistivity data. The hydrate saturation predicted by the TPB model from Vp is higher. Using Vs data, the DEM and TPEM models predict very low or zero hydrate saturation while the TPB and WE models predict hydrate saturation very much higher than those predicted from Vp data. Low hydrate saturations are observed to have little effect on Vs. The hydrate phase appears to be connected within the sediment microstructure even at low saturations.
QSAR Modeling of Rat Acute Toxicity by Oral Exposure
Zhu, Hao; Martin, Todd M.; Ye, Lin; Sedykh, Alexander; Young, Douglas M.; Tropsha, Alexander
2009-01-01
Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. In this study, a comprehensive dataset of 7,385 compounds with their most conservative lethal dose (LD50) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire dataset was selected that included all 3,472 compounds used in the TOPKAT’s training set. The remaining 3,913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R2 of linear regression between actual and predicted LD50 values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R2 ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD50 for every compound using all 5 models. The consensus models afforded higher prediction accuracy for the external validation dataset with the higher coverage as compared to individual constituent models. The validated consensus LD50 models developed in this study can be used as reliable computational predictors of in vivo acute toxicity. PMID:19845371
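The consensus step described above, averaging each compound's predicted LD50 across the constituent models and scoring with the determination coefficient R2 of the actual-vs-predicted regression, can be sketched as follows (all numbers invented for illustration):

```python
def consensus(predictions):
    # Average each compound's predicted LD50 over all constituent models.
    return [sum(col) / len(col) for col in zip(*predictions)]

def r_squared(actual, predicted):
    # Determination coefficient of the linear regression between actual
    # and predicted values, i.e. the squared Pearson correlation.
    n = len(actual)
    ma, mp = sum(actual) / n, sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    va = sum((a - ma) ** 2 for a in actual)
    vp = sum((p - mp) ** 2 for p in predicted)
    return cov * cov / (va * vp)

# Invented log-LD50 predictions from three hypothetical constituent models
# for four compounds (rows = models, columns = compounds).
model_preds = [
    [2.1, 3.0, 1.2, 2.8],
    [1.9, 3.2, 1.0, 3.0],
    [2.0, 3.1, 1.1, 2.9],
]
actual = [2.0, 3.1, 1.1, 2.9]
avg = consensus(model_preds)
r2 = r_squared(actual, avg)
```

In the real workflow, each constituent model would also carry an applicability-domain check, and compounds outside every model's domain would be excluded before averaging.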
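The consensus step described above, averaging predicted LD50 across the individual models subject to each model's applicability domain, can be sketched as follows. Function and variable names are illustrative and not taken from the study:

```python
import statistics

# Hypothetical sketch of the consensus step: each compound's LD50 is
# predicted by averaging the outputs of several individual QSAR models,
# skipping models for which the compound falls outside their applicability
# domain (represented here by a per-model boolean flag).

def consensus_ld50(predictions):
    """predictions: list of (predicted_log_ld50, in_domain) pairs,
    one per constituent model. Returns the consensus value, or None
    if no model covers the compound."""
    covered = [p for p, in_domain in predictions if in_domain]
    if not covered:
        return None  # compound outside every model's applicability domain
    return statistics.mean(covered)

# Example: five models, one of which declines to predict this compound.
preds = [(2.10, True), (2.35, True), (1.95, True), (2.20, True), (3.40, False)]
print(consensus_ld50(preds))  # averages only the four in-domain predictions
```

Averaging only in-domain predictions is what trades coverage against accuracy: a stricter domain threshold excludes more models per compound (and eventually whole compounds), which matches the coverage/R2 trade-off reported in the abstract.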
Correlation of Wissler Human Thermal Model Blood Flow and Shiver Algorithms
NASA Technical Reports Server (NTRS)
Bue, Grant; Makinen, Janice; Cognata, Thomas
2010-01-01
The Wissler Human Thermal Model (WHTM) is a thermal math model of the human body that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. The model has been shown to predict core temperature and skin temperatures higher and lower, respectively, than in tests of subjects in a crew escape suit working in a controlled hot environment. Conversely, the model predicts core temperature and skin temperatures lower and higher, respectively, than in tests of lightly clad subjects immersed in cold water. The blood flow algorithms of the model have been investigated and adjusted to allow for more and less flow, respectively, in the cold and hot cases. These changes have yielded better correlation of skin and core temperatures in the cold and hot cases. The algorithm for onset of shiver did not need to be modified to achieve good agreement in cold immersion simulations.
He, Xiaoming; Bhowmick, Sankha; Bischof, John C
2009-07-01
The Arrhenius and thermal isoeffective dose (TID) models are the two most commonly used models for predicting hyperthermic injury. The TID model is essentially derived from the Arrhenius model, but due to a variety of assumptions and simplifications now leads to different predictions, particularly at temperatures higher than 50 degrees C. In the present study, the two models are compared and their appropriateness tested for predicting hyperthermic injury in both the traditional hyperthermia (usually, 43-50 degrees C) and thermal surgery (or thermal therapy/thermal ablation, usually, >50 degrees C) regime. The kinetic parameters of thermal injury in both models were obtained from the literature (or literature data), tabulated, and analyzed for various prostate and kidney systems. It was found that the kinetic parameters vary widely, and were particularly dependent on the cell or tissue type, injury assay used, and the time when the injury assessment was performed. In order to compare the capability of the two models for thermal injury prediction, thermal thresholds for complete killing (i.e., 99% cell or tissue injury) were predicted using the models in two important urologic systems, viz., the benign prostatic hyperplasia tissue and the normal porcine kidney tissue. The predictions of the two models matched well at temperatures below 50 degrees C. At higher temperatures, however, the thermal thresholds predicted using the TID model with a constant R value of 0.5, the value commonly used in the traditional hyperthermia literature, are much lower than those predicted using the Arrhenius model. This suggests that traditional use of the TID model (i.e., R=0.5) is inappropriate for predicting hyperthermic injury in the thermal surgery regime (>50 degrees C). Finally, the time-temperature relationships for complete killing (i.e., 99% injury) were calculated and analyzed using the Arrhenius model for the various prostate and kidney systems.
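The divergence between the two models can be sketched for a constant-temperature exposure. The Arrhenius parameters below are the commonly quoted Henriques skin-burn values, used purely for illustration; as the abstract notes, real kinetic parameters vary widely with cell or tissue type and injury assay:

```python
import math

# Sketch of the two hyperthermic-injury models compared above, for a
# constant-temperature exposure. A_FREQ and E_ACT are illustrative values
# (classic Henriques skin-burn parameters), not the prostate/kidney values
# tabulated in the study.

R_GAS = 8.314      # gas constant, J/(mol K)
A_FREQ = 3.1e98    # Arrhenius frequency factor, 1/s (illustrative)
E_ACT = 6.28e5     # activation energy, J/mol (illustrative)

def arrhenius_time_to_99(temp_c):
    """Constant-temperature exposure time (s) for 99% injury,
    i.e. damage integral Omega = ln(100)."""
    k = A_FREQ * math.exp(-E_ACT / (R_GAS * (temp_c + 273.15)))
    return math.log(100.0) / k

def tid_cem43(temp_c, minutes, r_value=0.5):
    """Thermal isoeffective dose: cumulative equivalent minutes at 43 C
    for a constant exposure, using a constant R value."""
    return minutes * r_value ** (43.0 - temp_c)

# One minute at 55 C counts as 2**12 = 4096 equivalent minutes at 43 C
# under the constant-R TID model. A single constant R cannot reproduce the
# exponential Arrhenius temperature dependence, which is why the two
# predictions increasingly disagree above ~50 C.
for t in (45.0, 50.0, 55.0):
    print(t, arrhenius_time_to_99(t), tid_cem43(t, 1.0))
```

With tissue-specific parameters fitted so the models agree in the 43-50 C range, the same calculation reproduces the qualitative finding above: the constant-R extrapolation departs from the Arrhenius prediction in the thermal surgery regime.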
Cao, Weidan; Qi, Xiaona; Cai, Deborah A; Han, Xuanye
2018-01-01
The purpose of the study was to build a model to explain the relationships between social support, uncontrollability appraisal, adaptive coping, and posttraumatic growth (PTG) among cancer patients in China. The participants who were cancer patients in a cancer hospital in China filled out a survey. The final sample size was 201. Structural equation modeling was used to build a model explaining PTG. Structural equation modeling results indicated that higher levels of social support predicted higher levels of adaptive coping, higher levels of uncontrollability appraisal predicted lower levels of adaptive coping, and higher levels of adaptive coping predicted higher levels of PTG. Moreover, adaptive coping was a mediator between social support and growth, as well as a mediator between uncontrollability and growth. The direct effects of social support and uncontrollability on PTG were insignificant. The model demonstrated the relationships between social support, uncontrollability appraisal, adaptive coping, and PTG. It could be concluded that uncontrollability appraisal was a required but not sufficient condition for PTG. Neither social support nor uncontrollability appraisal had direct influence on PTG. However, social support and uncontrollability might indirectly influence PTG, through adaptive coping. It implies that both internal factors (eg, cognitive appraisal and coping) and external factors (eg, social support) are required in order for growth to happen. Copyright © 2017 John Wiley & Sons, Ltd.
Relationship between Quantitative CT Metrics and Health Status and BODE in COPD
Martinez, Carlos H.; Chen, Ya-Hong; Westgate, Phillip M.; Liu, Lyrica X.; Murray, Susan; Curtis, Jeffrey L.; Make, Barry J.; Kazerooni, Ella A.; Lynch, David A.; Marchetti, Nathaniel; Washko, George R.; Martinez, Fernando J.; Han, MeiLan K.
2013-01-01
Background: The value of quantitative computed tomography (QCT) to identify chronic obstructive pulmonary disease (COPD) phenotypes is increasingly appreciated. We hypothesized that QCT-defined emphysema and airway abnormalities relate to St. George's Respiratory Questionnaire (SGRQ) and BODE. Methods: 1,200 COPDGene subjects meeting GOLD criteria for COPD with QCT analysis were included. Total lung emphysema was measured using density mask technique with a -950 HU threshold. An automated program measured mean wall thickness (WT), wall area percent (WA%) and pi10 in six segmental bronchi. Separate multivariate analyses examined the relative influence of airway measures and emphysema on SGRQ and BODE. Results: In separate models predicting SGRQ score, a one unit standard deviation (SD) increase in each airway measure predicted higher SGRQ scores (for WT, 1.90 points higher, p=0.002; for WA%, 1.52 points higher, p=0.02; for pi10, 2.83 points higher p<0.001). The comparable increase in SGRQ for a one unit SD increase in percent emphysema in these models was relatively weaker, significant only in the pi10 model (for percent emphysema, 1.45 points higher, p=0.01). In separate models predicting BODE, a one unit SD increase in each airway measure predicted higher BODE scores (for WT, 1.07 fold increase, p<0.001; for WA%, 1.20 fold increase, p<0.001; for pi10, 1.16 fold increase, p<0.001). In these models, emphysema more strongly influenced BODE (range 1.24-1.26 fold increase, p<0.001). Conclusion: Emphysema and airway disease both relate to clinically important parameters. The relative influence of airway disease is greater for SGRQ; the relative influence of emphysema is greater for BODE. PMID:22514236
Åkerstedt, Torbjörn; Garefelt, Johanna; Richter, Anne; Westerlund, Hugo; Magnusson Hanson, Linda L; Sverke, Magnus; Kecklund, Göran
2015-07-01
There is limited knowledge about the prospective relationship between major work characteristics (psychosocial, physical, scheduling) and disturbed sleep. The current study sought to provide such knowledge. Prospective cohort, with measurements on two occasions (T1 and T2) separated by two years. Naturalistic study, Sweden. There were 4,827 participants forming a representative sample of the working population. Questionnaire data on work factors obtained on two occasions were analyzed with structural equation modeling. Competing models were compared in order to investigate temporal relationships. A reciprocal model was found to fit the data best. Sleep disturbances at T2 were predicted by higher work demands at T1 and by lower perceived stress at T1. In addition, sleep disturbances at T1 predicted subsequent higher perception of stress, higher work demands, lower degree of control, and less social support at work at T2. A cross-sectional mediation analysis showed that (higher) perceived stress mediated the relationship between (higher) work demands and sleep disturbances; however, no such association was found longitudinally. Higher work demands predicted disturbed sleep, whereas physical work characteristics, shift work, and overtime did not. In addition, disturbed sleep predicted subsequent higher work demands, perceived stress, less social support, and lower degree of control. The results suggest that remedial interventions against sleep disturbances should focus on psychosocial factors, and that such remedial interventions may improve the psychosocial work situation in the long run. © 2015 Associated Professional Sleep Societies, LLC.
20170312 - In Silico Dynamics: computer simulation in a Virtual Embryo
Abstract: Utilizing cell biological information to predict higher order biological processes is a significant challenge in predictive toxicology. This is especially true for highly dynamical systems such as the embryo, where morphogenesis, growth and differentiation require precisely orchestrated interactions between diverse cell populations. In patterning the embryo, genetic signals set up spatial information that cells then translate into a coordinated biological response. This can be modeled as 'biowiring diagrams' representing genetic signals and responses. Because the hallmark of multicellular organization resides in the ability of cells to interact with one another via well-conserved signaling pathways, multiscale computational (in silico) models that enable these interactions provide a platform to translate cellular-molecular perturbations into higher order predictions. Just as 'the Cell' is the fundamental unit of biology, so too should it be the computational unit ('Agent') for modeling embryogenesis. As such, we constructed multicellular agent-based models (ABM) with 'CompuCell3D' (www.compucell3d.org) to simulate kinematics of complex cell signaling networks and enable critical tissue events for use in predictive toxicology. Seeding the ABMs with HTS/HCS data from ToxCast demonstrated the potential to predict, quantitatively, the higher order impacts of chemical disruption at the cellular or biochemical level.
Gupta, Shikha; Basant, Nikita; Rai, Premanjali; Singh, Kunwar P
2015-11-01
Binding affinity of chemicals to carbon is an important characteristic, as it has vast industrial applications. Experimental determination of the adsorption capacity of diverse chemicals onto carbon is both time and resource intensive, and development of computational approaches has widely been advocated. In this study, ten different artificial intelligence (AI)-based qualitative and quantitative structure-property relationship (QSPR) models (MLPN, RBFN, PNN/GRNN, CCN, SVM, GEP, GMDH, SDT, DTF, DTB) were established for the prediction of the adsorption capacity of structurally diverse chemicals to activated carbon following the OECD guidelines. Structural diversity of the chemicals and nonlinear dependence in the data were evaluated using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. The generalization and prediction abilities of the constructed models were established through rigorous internal and external validation procedures performed employing a wide series of statistical checks. Across the complete dataset, the qualitative models rendered classification accuracies between 97.04 and 99.93%, while the quantitative models yielded correlation (R(2)) values of 0.877-0.977 between the measured and the predicted endpoint values. The quantitative prediction accuracies for the higher molecular weight (MW) compounds (class 4) were relatively better than those for the low MW compounds. In both the qualitative and quantitative models, polarizability was the most influential descriptor. Structural alerts responsible for the extreme adsorption behavior of the compounds were identified. A higher number of carbon atoms and the presence of heavier halogens in a molecule conferred higher binding affinity. The proposed QSPR models performed well and outperformed previously reported models. A relatively better performance of the ensemble learning models (DTF, DTB) may be attributed to the strengths of the bagging and boosting algorithms, which enhance the predictive accuracies.
The proposed AI models can be useful tools in screening the chemicals for their binding affinities toward carbon for their safe management.
NASA Astrophysics Data System (ADS)
Chen, Dar-Hsin; Chou, Heng-Chih; Wang, David; Zaabar, Rim
2011-06-01
Most empirical research on path-dependent, exotic-option credit risk models focuses on developed markets. Taking Taiwan as an example, this study investigates the bankruptcy prediction performance of the path-dependent, barrier option model in the emerging market. We adopt Duan's (1994) [11], (2000) [12] transformed-data maximum likelihood estimation (MLE) method to directly estimate the unobserved model parameters, and compare the predictive ability of the barrier option model to the commonly adopted credit risk model, Merton's model. Our empirical findings show that the barrier option model is more powerful than Merton's model in predicting bankruptcy in the emerging market. Moreover, we find that the barrier option model predicts bankruptcy much better for highly-leveraged firms. Finally, our findings indicate that the prediction accuracy of the credit risk model can be improved by higher asset liquidity and greater financial transparency.
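For reference, the benchmark model named above (Merton's) can be sketched in a few lines: default probability over a horizon T is the probability that firm asset value falls below the face value of debt. The inputs below are hypothetical, and the study's barrier option model adds a default barrier that this sketch deliberately omits:

```python
import math

# Minimal Merton-style default-probability sketch (illustration only).
# Real applications must first estimate the unobserved asset value V and
# volatility sigma from equity data, e.g. by the transformed-data MLE
# approach cited in the abstract.

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_default_prob(V, D, mu, sigma, T=1.0):
    """P(asset value at T < debt face value D), lognormal asset dynamics.
    V: current asset value, mu: asset drift, sigma: asset volatility."""
    d2 = (math.log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(-d2)

# A highly leveraged firm (V/D close to 1) gets a much higher PD,
# consistent with leverage being the key driver in structural models.
print(merton_default_prob(V=120.0, D=100.0, mu=0.05, sigma=0.25))
print(merton_default_prob(V=105.0, D=100.0, mu=0.05, sigma=0.25))
```

The barrier option variant treats equity as a down-and-out call, so default can also be triggered by the asset path touching the barrier before maturity; that path dependence is what the study finds improves prediction for highly leveraged firms.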
Probabilistic Forecasting of Coastal Morphodynamic Storm Response at Fire Island, New York
NASA Astrophysics Data System (ADS)
Wilson, K.; Adams, P. N.; Hapke, C. J.; Lentz, E. E.; Brenner, O.
2013-12-01
Site-specific probabilistic models of shoreline change are useful because they are derived from direct observations so that local factors, which greatly influence coastal response, are inherently considered by the model. Fire Island, a 50-km barrier island off Long Island, New York, is periodically subject to large storms, whose waves and storm surge dramatically alter beach morphology. Nor'Ida, which impacted the Fire Island coast in 2009, was one of the larger storms to occur in the early 2000s. In this study, we improve upon a Bayesian Network (BN) model informed with historical data to predict shoreline change from Nor'Ida. We present two BN models, referred to as 'original' model (BNo) and 'revised' model (BNr), designed to predict the most probable magnitude of net shoreline movement (NSM), as measured at 934 cross-shore transects, spanning 46 km. Both are informed with observational data (wave impact hours, shoreline and dune toe change rates, pre-storm beach width, and measured NSM) organized within five nodes, but the revised model contains a sixth node to represent the distribution of material added during an April 2009 nourishment project. We evaluate model success by examining the percentage of transects on which the model chooses the correct (observed) bin value of NSM. Comparisons of observed to model-predicted NSM show BNr has slightly higher predictive success over the total study area and significantly higher success at nourished locations. The BNo, which neglects anthropogenic modification history, correctly predicted the most probable NSM in 66.6% of transects, with ambiguous prediction at 12.7% of the locations. BNr, which incorporates anthropogenic modification history, resulted in 69.4% predictive accuracy and 13.9% ambiguity. However, across nourished transects, BNr reported 72.9% predictive success, while BNo reported 61.5% success. Further, at nourished transects, BNr reported higher ambiguity of 23.5% compared to 9.9% in BNo. 
These results demonstrate that BNr recognizes that nourished transects may behave differently from the expectation derived from historical data and therefore is more 'cautious' in its predictions at these locations. In contrast, BNo is more confident, but less accurate, demonstrating the risk of ignoring the influences of anthropogenic modification in a probabilistic model. Over the entire study region, both models produced greatest predictive accuracy for low retreat observations (BNo: 77.6%; BNr: 76.0%) and least success at predicting low advance observations, although BNr shows considerable improvement over BNo (39.4% vs. 28.6%, respectively). BNr also was significantly more accurate at predicting observations of no shoreline change (BNo: 56.2%; BNr: 68.93%). Both models were accurate for 60% of high advance observations, and reported high predictive success for high retreat observations (BNo: 69.1%; BNr: 67.6%), the scenario of greatest concern to coastal managers.
Payne, Courtney E.; Wolfrum, Edward J.
2015-03-12
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. In conclusion, it is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
NASA Astrophysics Data System (ADS)
Indi Sriprisan, Sirikul; Townsend, Lawrence; Cucinotta, Francis A.; Miller, Thomas M.
Purpose: An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout or abrasion stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model has been extended to incorporate important coalescence effects into the formalism. Recently, alpha coalescence has been incorporated, and the ability to predict light ion spectra with the coalescence model added. The earlier versions were limited to nuclei with mass numbers less than 69. In this work, the UBERNSPEC code has been extended to make predictions of secondary neutrons and light ion production from the interactions of heavy charged particles with higher mass numbers (as large as 238). The predictions are compared with published measurements of neutron spectra and light ion energy for a variety of collision pairs. Furthermore, the predicted spectra from this work are compared with the predictions from the recently-developed heavy ion event generator incorporated in the Monte Carlo radiation transport code HETC-HEDS.
USDA-ARS?s Scientific Manuscript database
Validation of model predictions for independent variables not included in model development can save time and money by identifying conditions for which new models are not needed. A single strain of Salmonella Typhimurium DT104 was used to develop a general regression neural network model for growth...
An Overview of Numerical Weather Prediction on Various Scales
NASA Astrophysics Data System (ADS)
Bao, J.-W.
2009-04-01
The increasing public need for detailed weather forecasts, along with the advances in computer technology, has motivated many research institutes and national weather forecasting centers to develop and run global as well as regional numerical weather prediction (NWP) models at high resolutions (i.e., with horizontal resolutions of ~10 km or higher for global models and 1 km or higher for regional models, and with ~60 vertical levels or more). The need for running NWP models at high horizontal and vertical resolutions requires the implementation of a non-hydrostatic dynamic core with a choice of horizontal grid configurations and vertical coordinates that are appropriate for high resolutions. Development of advanced numerics will also be needed for high-resolution global and regional models, in particular when the models are applied to transport problems and air quality applications. In addition to the challenges in numerics, the NWP community is also facing the challenge of developing physics parameterizations that are well suited for high-resolution NWP models. For example, when NWP models are run at resolutions of ~5 km or higher, the use of much more detailed microphysics parameterizations than those currently used in NWP models will become important. Another example is that regional NWP models at ~1 km or higher only partially resolve the convective energy-containing eddies in the lower troposphere; parameterizations to account for the subgrid diffusion associated with unresolved turbulence still need to be developed. Further, physically sound parameterizations for air-sea interaction will be a critical component for tropical NWP models, particularly for hurricane prediction models. In this review presentation, the above issues will be elaborated on and the approaches to address them will be discussed.
Hierarchical models for informing general biomass equations with felled tree data
Brian J. Clough; Matthew B. Russell; Christopher W. Woodall; Grant M. Domke; Philip J. Radtke
2015-01-01
We present a hierarchical framework that uses a large multispecies felled tree database to inform a set of general models for predicting tree foliage biomass, with accompanying uncertainty, within the FIA database. Results suggest significant prediction uncertainty for individual trees and reveal higher errors when predicting foliage biomass for larger trees and for...
2011-01-01
Background: Genetic risk models could potentially be useful in identifying high-risk groups for the prevention of complex diseases. We investigated the performance of this risk stratification strategy by examining epidemiological parameters that impact the predictive ability of risk models. Methods: We assessed sensitivity, specificity, and positive and negative predictive value for all possible risk thresholds that can define high-risk groups and investigated how these measures depend on the frequency of disease in the population, the frequency of the high-risk group, and the discriminative accuracy of the risk model, as assessed by the area under the receiver-operating characteristic curve (AUC). In a simulation study, we modeled genetic risk scores of 50 genes with equal odds ratios and genotype frequencies, and varied the odds ratios and the disease frequency across scenarios. We also performed a simulation of age-related macular degeneration risk prediction based on published odds ratios and frequencies for six genetic risk variants. Results: We show that when the frequency of the high-risk group was lower than the disease frequency, positive predictive value increased with the AUC but sensitivity remained low. When the frequency of the high-risk group was higher than the disease frequency, sensitivity was high but positive predictive value remained low. When both frequencies were equal, both positive predictive value and sensitivity increased with increasing AUC, but higher AUC was needed to maximize both measures. Conclusions: The performance of risk stratification is strongly determined by the frequency of the high-risk group relative to the frequency of disease in the population. The identification of high-risk groups with appreciable combinations of sensitivity and positive predictive value requires higher AUC. PMID:21797996
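The simulation design described above can be sketched in miniature. The assumptions here are only loosely modeled on the abstract (50 biallelic risk genes with an equal odds ratio per risk allele, a logit-linear disease model); all parameter values are illustrative:

```python
import math
import random

# Toy risk-stratification simulation: genetic risk scores from 50 genes
# with equal per-allele odds ratios, disease assigned via a logistic model.
# Parameter values (OR, allele frequency, baseline logit) are illustrative.

random.seed(1)
N_GENES, OR_PER_ALLELE, FREQ = 50, 1.2, 0.3

def simulate(n, baseline_logit=-3.0):
    """Return (risk_allele_count, diseased) pairs for n individuals."""
    beta = math.log(OR_PER_ALLELE)
    mean_count = 2 * N_GENES * FREQ
    people = []
    for _ in range(n):
        count = sum(1 for _ in range(2 * N_GENES) if random.random() < FREQ)
        p = 1.0 / (1.0 + math.exp(-(baseline_logit + beta * (count - mean_count))))
        people.append((count, random.random() < p))
    return people

def stratify(people, threshold):
    """Sensitivity and positive predictive value when 'high risk' means
    a risk-allele count at or above the threshold."""
    tp = sum(1 for s, d in people if s >= threshold and d)
    fp = sum(1 for s, d in people if s >= threshold and not d)
    fn = sum(1 for s, d in people if s < threshold and d)
    sens = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sens, ppv

cohort = simulate(20000)
for thr in (28, 32, 36):  # raising the threshold shrinks the high-risk group
    print(thr, stratify(cohort, thr))
```

Sweeping the threshold reproduces the trade-off in the abstract: a small high-risk group (high threshold) buys positive predictive value at the cost of sensitivity, and vice versa.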
Exploring Higher Education Business Models ("If Such a Thing Exists")
ERIC Educational Resources Information Center
Harney, John O.
2013-01-01
The global economic recession has caused students, parents, and policymakers to reevaluate personal and societal investments in higher education--and has prompted the realization that traditional higher ed "business models" may be unsustainable. Predicting a shakeout, most presidents expressed confidence for their own school's ability to…
Next-Term Student Performance Prediction: A Recommender Systems Approach
ERIC Educational Resources Information Center
Sweeney, Mack; Rangwala, Huzefa; Lester, Jaime; Johri, Aditya
2016-01-01
An enduring issue in higher education is student retention to successful graduation. National statistics indicate that most higher education institutions have four-year degree completion rates around 50%, or just half of their student populations. While there are prediction models which illuminate what factors assist with college student success,…
Sociodemographic Predictors of Vaccination Exemptions on the Basis of Personal Belief in California.
Yang, Y Tony; Delamater, Paul L; Leslie, Timothy F; Mello, Michelle M
2016-01-01
We examined the variability in the percentage of students with personal belief exemptions (PBEs) from mandatory vaccinations in California schools and communities according to income, education, race, and school characteristics. We used spatial lag models to analyze 2007-2013 PBE data from the California Department of Public Health. The analyses included school- and regional-level models, and separately examined the percentage of students with exemptions in 2013 and the change in percentages over time. The percentage of students with PBEs doubled from 2007 to 2013, from 1.54% to 3.06%. Across all models, higher median household income and higher percentage of White race in the population, but not educational attainment, significantly predicted higher percentages of students with PBEs in 2013. Higher income, White population, and private school type significantly predicted greater increases in exemptions from 2007 to 2013, whereas higher educational attainment was associated with smaller increases. Personal belief exemptions are more common in areas with a higher percentage of White race and higher income.
Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers
Jiang, Yong; Schmidt, Renate H.; Reif, Jochen C.
2018-01-01
Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely-linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency the HGBLUP model promises to be an interesting tool for studies in which ultra-high-density SNP data sets are studied. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders: owing to the tight linkage, a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles that of additive effects. PMID:29549092
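The marker-versus-haplotype contrast described above can be illustrated with a minimal numpy sketch. All data below are synthetic and the ridge fit is a simplified stand-in for the GBLUP-type mixed models used in the paper: giving each distinct marker combination inside a block its own indicator column lets a purely additive haplotype model absorb a local epistatic (here XOR-like) effect that a marker-based linear model cannot represent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fully homozygous population: 200 individuals, 12 biallelic
# markers coded 0/1, grouped into 4 haplotype blocks of 3 adjacent markers.
n, m, block = 200, 12, 3
M = rng.integers(0, 2, size=(n, m))

# Haplotype design matrix H: one indicator column per distinct marker
# combination observed within each block, so local epistasis is implicit.
cols = []
for start in range(0, m, block):
    uniq, idx = np.unique(M[:, start:start + block], axis=0, return_inverse=True)
    cols.append(np.eye(len(uniq))[idx.reshape(-1)])
H = np.hstack(cols)

# Trait with a local epistatic (XOR-like) effect inside the first block.
y = M[:, 0] + M[:, 1] - 2.0 * (M[:, 0] * M[:, 1]) + 0.1 * rng.normal(size=n)

def ridge_fit(X, y, lam=1.0):
    """Shrinkage fit beta = (X'X + lam I)^-1 X'y, a crude GBLUP-like solver."""
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X @ beta

r_marker = np.corrcoef(ridge_fit(M, y), y)[0, 1]
r_haplo = np.corrcoef(ridge_fit(H, y), y)[0, 1]
print(f"marker-based r = {r_marker:.2f}, haplotype-based r = {r_haplo:.2f}")
```

On this construction the haplotype-based fit recovers the epistatic trait almost exactly, while the marker-based linear model cannot, mirroring the qualitative point of the derivation.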
NASA Astrophysics Data System (ADS)
Takagi, M.; Gyokusen, Koichiro; Saito, Akira
It was found that the atmospheric carbon dioxide (CO2) concentration in an urban canyon in Fukuoka city, Japan during August 1997 was about 30 µmol mol-1 higher than that in the suburbs. When fully exposed to sunlight, the in situ rate of photosynthesis in single leaves of Ilex rotunda planted in the urban canyon was higher when the atmospheric CO2 concentration was elevated. A biochemically based model was able to predict the in situ rate of photosynthesis well. The model also predicted an increase in the daily CO2 exchange rate for leaves in the urban canyon with an increase in atmospheric CO2 concentration. However, in situ, such an increase in the daily CO2 exchange rate may be offset by diminished sunlight, a higher air temperature and a lower relative humidity. Thus, the daily CO2 exchange rate predicted using the model based solely on the environmental conditions prevailing in the urban canyon was lower than that predicted based only on environmental factors found in the suburbs.
The Mt. Hood challenge: cross-testing two diabetes simulation models.
Brown, J B; Palmer, A J; Bisgaard, P; Chan, W; Pedula, K; Russell, A
2000-11-01
Starting from identical patients with type 2 diabetes, we compared the 20-year predictions of two computer simulation models, a 1998 version of the IMIB model and version 2.17 of the Global Diabetes Model (GDM). Primary measures of outcome were 20-year cumulative rates of: survival, first (incident) acute myocardial infarction (AMI), first stroke, proliferative diabetic retinopathy (PDR), macro-albuminuria (gross proteinuria, or GPR), and amputation. Standardized test patients were newly diagnosed males aged 45 or 75, with high and low levels of glycated hemoglobin (HbA(1c)), systolic blood pressure (SBP), and serum lipids. Both models generated realistic results and appropriate responses to changes in risk factors. Compared with the GDM, the IMIB model predicted much higher rates of mortality and AMI, and fewer strokes. These differences can be explained by differences in model architecture (Markov vs. microsimulation), different evidence bases for cardiovascular prediction (Framingham Heart Study cohort vs. Kaiser Permanente patients), and isolated versus interdependent prediction of cardiovascular events. Compared with IMIB, GDM predicted much higher lifetime costs, because of lower mortality and the use of a different costing method. It is feasible to cross-validate and explicate dissimilar diabetes simulation models using standardized patients. The wide differences in the model results that we observed demonstrate the need for cross-validation. We propose to hold a second 'Mt Hood Challenge' in 2001 and invite all diabetes modelers to attend.
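The architectural contrast the abstract attributes part of the discrepancy to (Markov cohort model vs. microsimulation) can be sketched with a toy example. The three states and all transition probabilities below are invented for illustration and are not taken from the IMIB or GDM models; the point is that both architectures estimate the same quantity, one deterministically and one by sampling individuals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual transition matrix for a 3-state diabetes-like model:
# 0 = alive, event-free; 1 = post-AMI; 2 = dead (absorbing). Invented numbers.
P = np.array([
    [0.93, 0.04, 0.03],
    [0.00, 0.88, 0.12],
    [0.00, 0.00, 1.00],
])
years = 20

# Markov cohort model: propagate the state distribution deterministically.
dist = np.array([1.0, 0.0, 0.0])
for _ in range(years):
    dist = dist @ P
cohort_survival = 1.0 - dist[2]

# Microsimulation: sample each individual's transitions year by year.
n = 50_000
state = np.zeros(n, dtype=int)
for _ in range(years):
    u = rng.random(n)
    cum = P[state].cumsum(axis=1)
    # inverse-CDF draw; minimum guards against floating-point rounding of cum
    state = np.minimum((u[:, None] > cum).sum(axis=1), 2)
micro_survival = np.mean(state != 2)

print(f"20-year survival: cohort {cohort_survival:.3f}, microsim {micro_survival:.3f}")
```

With identical inputs the two architectures agree up to Monte Carlo error; the differences reported in the study therefore stem from the models' differing evidence bases and event interdependencies, not from the cohort/microsimulation machinery itself.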
A method for grounding grid corrosion rate prediction
NASA Astrophysics Data System (ADS)
Han, Juan; Du, Jingyi
2017-06-01
Grounding grid corrosion prediction involves many factors, is complex, and is subject to uncertainty in the data acquisition process. We therefore propose a grounding grid corrosion rate prediction model that combines EAHP (extended AHP) with the fuzzy nearness degree. EAHP is used to establish the judgment matrix and calculate the weight of each corrosion factor of the grounding grid; since different sample classes contribute differently to the corrosion rate, the nearness principle is combined with these weights to predict the corrosion rate. Application results show that the model captures data variation better, improving its validity and yielding higher prediction precision.
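As an illustration of the AHP weighting step, a pairwise judgment matrix can be turned into factor weights via its principal eigenvector, with Saaty's consistency check. The four factors and all pairwise judgments below are invented for the sketch, not taken from the study:

```python
import numpy as np

# Hypothetical pairwise judgment matrix on Saaty's 1-9 scale for four
# invented corrosion factors (soil resistivity, moisture, pH, chlorides).
A = np.array([
    [1.0, 3.0, 5.0, 1.0],
    [1/3, 1.0, 3.0, 1/2],
    [1/5, 1/3, 1.0, 1/4],
    [1.0, 2.0, 4.0, 1.0],
])

# Factor weights = normalized principal eigenvector of the judgment matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio CR = CI / RI with CI = (lambda_max - n) / (n - 1);
# RI = 0.90 is the standard random index for n = 4; CR < 0.1 is acceptable.
n = A.shape[0]
cr = (eigvals.real[k] - n) / (n - 1) / 0.90
print("weights:", np.round(w, 3), " CR:", round(cr, 3))
```

The extended-AHP variant in the paper modifies how the judgment matrix is built, but the eigenvector-to-weights step sketched here is common to both.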
Development of a prognostic model for predicting spontaneous singleton preterm birth.
Schaaf, Jelle M; Ravelli, Anita C J; Mol, Ben Willem J; Abu-Hanna, Ameen
2012-10-01
To develop and validate a prognostic model for prediction of spontaneous preterm birth. Prospective cohort study using data of the nationwide perinatal registry in The Netherlands. We studied 1,524,058 singleton pregnancies between 1999 and 2007. We developed a multiple logistic regression model to estimate the risk of spontaneous preterm birth based on maternal and pregnancy characteristics. We used bootstrapping techniques to internally validate our model. Discrimination (AUC), accuracy (Brier score) and calibration (calibration graphs and Hosmer-Lemeshow C-statistic) were used to assess the model's predictive performance. Our primary outcome measure was spontaneous preterm birth at <37 completed weeks. Spontaneous preterm birth occurred in 57,796 (3.8%) pregnancies. The final model included 13 variables for predicting preterm birth. The predicted probabilities ranged from 0.01 to 0.71 (IQR 0.02-0.04). The model had an area under the receiver operator characteristic curve (AUC) of 0.63 (95% CI 0.63-0.63), the Brier score was 0.04 (95% CI 0.04-0.04) and the Hosmer-Lemeshow C-statistic was significant (p<0.0001). The calibration graph showed overprediction at higher values of predicted probability. The positive predictive value was 26% (95% CI 20-33%) for the 0.4 probability cut-off point. The model's discrimination was fair and it had modest calibration. Previous preterm birth, drug abuse and vaginal bleeding in the first half of pregnancy were the most important predictors for spontaneous preterm birth. Although not applicable in clinical practice yet, this model is a next step towards early prediction of spontaneous preterm birth that enables caregivers to start preventive therapy in women at higher risk. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
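The two headline performance measures used above, AUC (discrimination) and Brier score (accuracy), can be computed directly. The sketch below uses entirely synthetic outcome probabilities with an event rate and discrimination level loosely matched to those reported, not the registry data; rank ties are ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cohort: ~3.8% event rate and weakly informative predicted
# probabilities (all distributions invented for illustration).
n = 100_000
p_true = np.clip(rng.gamma(shape=2.0, scale=0.019, size=n), 0.0, 0.7)
y = (rng.random(n) < p_true).astype(int)
p_hat = np.clip(p_true + rng.normal(0.0, 0.03, size=n), 1e-4, 0.999)

def auc(y, p):
    """AUC via the rank-sum (Mann-Whitney) identity; ties are ignored."""
    ranks = np.empty(len(p))
    ranks[np.argsort(p)] = np.arange(1, len(p) + 1)
    n1 = y.sum()
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(p) - n1))

auc_val = auc(y, p_hat)
brier = np.mean((p_hat - y) ** 2)   # mean squared error of the probabilities
print(f"AUC = {auc_val:.3f}, Brier = {brier:.4f}")
```

Note that with a 3.8% event rate a Brier score near 0.04 is attainable even by an uninformative model, which is why the study pairs it with AUC and calibration graphs.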
Baquero, Oswaldo Santos; Santana, Lidia Maria Reis; Chiaravalloti-Neto, Francisco
2018-01-01
Globally, the number of dengue cases has been on the increase since 1990 and this trend has also been found in Brazil and its most populated city, São Paulo. Surveillance systems based on predictions allow for timely decision-making processes, and in turn, timely and efficient interventions to reduce the burden of the disease. We conducted a comparative study of dengue predictions in São Paulo city to test the performance of trained seasonal autoregressive integrated moving average models, generalized additive models and artificial neural networks. We also used a naïve model as a benchmark. A generalized additive model with lags of the number of cases and meteorological variables had the best performance; it predicted epidemics of unprecedented magnitude, and its performance was 3.16 times higher than the benchmark and 1.47 times higher than the next best performing model. The predictive models captured the seasonal patterns but differed in their capacity to anticipate large epidemics, and all outperformed the benchmark. In addition to being able to predict epidemics of unprecedented magnitude, the best model had computational advantages, since its training and tuning were straightforward and required seconds or at most a few minutes. These are desired characteristics for providing timely results to decision makers. However, it should be noted that predictions are made just one month ahead, a limitation that future studies could try to reduce.
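The benchmark-relative evaluation used above can be mimicked on a toy seasonal series: a naïve last-value forecast versus an ordinary-least-squares model on lags 1 and 12, a deliberately crude stand-in for the paper's generalized additive model with lagged cases. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly "case count" series: seasonal cycle plus noise.
t = np.arange(240)
y = 50 + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, size=t.size)

# One-month-ahead forecasts over the last 60 months, refit each month.
train_end, horizon = 180, 60
naive_err, model_err = [], []
for i in range(train_end, train_end + horizon):
    # Naive benchmark: next month equals the current month.
    naive_err.append(abs(y[i] - y[i - 1]))
    # OLS on lags 1 and 12, fit only on data available before month i.
    lags = np.column_stack([np.ones(i - 12), y[11:i - 1], y[:i - 12]])
    beta, *_ = np.linalg.lstsq(lags, y[12:i], rcond=None)
    pred = beta[0] + beta[1] * y[i - 1] + beta[2] * y[i - 12]
    model_err.append(abs(y[i] - pred))

ratio = np.mean(naive_err) / np.mean(model_err)
print(f"benchmark MAE / lagged-model MAE = {ratio:.2f}")
```

The ratio of benchmark error to model error is the kind of relative-performance figure quoted in the abstract; any model that exploits the lag structure beats the last-value benchmark on a seasonal series.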
Learning epistatic interactions from sequence-activity data to predict enantioselectivity
NASA Astrophysics Data System (ADS)
Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K.; Bodén, Mikael
2017-12-01
Enzymes with a high selectivity are desirable for improving economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity mutations are often introduced near the catalytic active site. In this compact environment epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.
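The linear versus degree-2 comparison can be sketched with cross-validated ridge regression in place of SVR (an assumption made here for brevity; the degree-2 kernel is realized as an explicit pairwise-product feature map). The sequence encoding and the single planted epistatic pair below are synthetic, not the AnEH data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic sequence-activity data: 136 variants, 8 mutated positions
# (0 = wild type, 1 = mutated), one planted epistatic pair of residues.
n, p = 136, 8
X = rng.integers(0, 2, size=(n, p)).astype(float)
y = X @ rng.normal(0, 1, p) + 2.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.2, n)

def pairwise_expand(X):
    """Explicit degree-2 feature map: append all pairwise products."""
    d = X.shape[1]
    pairs = [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
    return np.column_stack([X] + pairs)

def cv_pearson(X, y, lam=1.0, folds=5):
    """Out-of-fold Pearson r for a ridge fit (stand-in for SVR)."""
    idx = rng.permutation(len(y))
    preds = np.empty(len(y))
    for f in np.array_split(idx, folds):
        tr = np.setdiff1d(idx, f)
        b = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(X.shape[1]),
                            X[tr].T @ y[tr])
        preds[f] = X[f] @ b
    return np.corrcoef(preds, y)[0, 1]

r_lin = cv_pearson(X, y)
r_poly = cv_pearson(pairwise_expand(X), y)
print(f"linear r = {r_lin:.2f}, degree-2 r = {r_poly:.2f}")
```

As in the study, the degree-2 model gains over the linear one exactly because the data contain a non-additive pair; on the paper's interaction-free control data no such gain appears.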
Progress towards a more predictive model for hohlraum radiation drive and symmetry
NASA Astrophysics Data System (ADS)
Jones, O. S.; Suter, L. J.; Scott, H. A.; Barrios, M. A.; Farmer, W. A.; Hansen, S. B.; Liedahl, D. A.; Mauche, C. W.; Moore, A. S.; Rosen, M. D.; Salmonson, J. D.; Strozzi, D. J.; Thomas, C. A.; Turnbull, D. P.
2017-05-01
For several years, we have been calculating the radiation drive in laser-heated gold hohlraums using flux-limited heat transport with a limiter of 0.15, tabulated values of local thermodynamic equilibrium gold opacity, and an approximate model for non-local-thermodynamic-equilibrium (NLTE) gold emissivity (DCA_2010). This model has been successful in predicting the radiation drive in vacuum hohlraums, but for gas-filled hohlraums used to drive capsule implosions, the model consistently predicts too much drive and capsule bang times earlier than measured. In this work, we introduce a new model that brings the calculated bang time into better agreement with the measured bang time. The new model employs (1) a numerical grid that is fully converged in space, energy, and time, (2) a modified approximate NLTE model that includes more physics and is in better agreement with more detailed offline emissivity models, and (3) a reduced flux limiter value of 0.03. We applied this model to gas-filled hohlraum experiments using high density carbon and plastic ablator capsules that had hohlraum He fill gas densities ranging from 0.06 to 1.6 mg/cc and hohlraum diameters of 5.75 or 6.72 mm. The new model predicts bang times to within ±100 ps for most experiments with low to intermediate fill densities (up to 0.85 mg/cc). This model predicts higher temperatures in the plasma than the old model and also predicts that at higher gas fill densities, a significant amount of inner beam laser energy escapes the hohlraum through the opposite laser entrance hole.
Peak expiratory flow profiles delivered by pump systems. Limitations due to wave action.
Miller, M R; Jones, B; Xu, Y; Pedersen, O F; Quanjer, P H
2000-06-01
Pump systems are currently used to test the performance of both spirometers and peak expiratory flow (PEF) meters, but for certain flow profiles the input signal (i.e., requested profile) and the output profile can differ. We developed a mathematical model of wave action within a pump and compared the recorded flow profiles with both the input profiles and the output predicted by the model. Three American Thoracic Society (ATS) flow profiles and four artificial flow-versus-time profiles were delivered by a pump, first to a pneumotachograph (PT) on its own, then to the PT with a 32-cm upstream extension tube (which would favor wave action), and lastly with the PT in series with and immediately downstream to a mini-Wright peak flow meter. With the PT on its own, recorded flow for the seven profiles was 2.4 +/- 1.9% (mean +/- SD) higher than the pump's input flow, and similarly was 2.3 +/- 2.3% higher than the pump's output flow as predicted by the model. With the extension tube in place, the recorded flow was 6.6 +/- 6.4% higher than the input flow (range: 0.1 to 18.4%), but was only 1.2 +/- 2.5% higher than the output flow predicted by the model (range: -0.8 to 5.2%). With the mini-Wright meter in series, the flow recorded by the PT was on average 6.1 +/- 9.1% below the input flow (range: -23.8 to 2.5%), but was only 0.6 +/- 3.3% above the pump's output flow predicted by the model (range: -5.5 to 3.9%). The mini-Wright meter's reading (corrected for its nonlinearity) was on average 1.3 +/- 3.6% below the model's predicted output flow (range: -9.0 to 1.5%). The mini-Wright meter would be deemed outside ATS limits for accuracy for three of the seven profiles when compared with the pump's input PEF, but this would be true for only one profile when compared with the pump's output PEF as predicted by the model. Our study shows that the output flow from pump systems can differ from the input waveform depending on the operating configuration.
This effect can be predicted with reasonable accuracy using a model based on nonsteady flow analysis that takes account of pressure wave reflections within pump systems.
Eiden, Rina D.; Edwards, Ellen P.; Leonard, Kenneth E.
2009-01-01
The purpose of this study was to test a conceptual model predicting children's externalizing behavior problems in kindergarten in a sample of children with alcoholic (n = 130) and nonalcoholic (n = 97) parents. The model examined the role of parents' alcohol diagnoses, depression, and antisocial behavior at 12–18 months of child age in predicting parental warmth/sensitivity at 2 years of child age. Parental warmth/sensitivity at 2 years was hypothesized to predict children's self-regulation at 3 years (effortful control and internalization of rules), which in turn was expected to predict externalizing behavior problems in kindergarten. Structural equation modeling was largely supportive of this conceptual model. Fathers' alcohol diagnosis at 12–18 months was associated with lower maternal and paternal warmth/sensitivity at 2 years. Lower maternal warmth/sensitivity was longitudinally predictive of lower child self-regulation at 3 years, which in turn was longitudinally predictive of higher externalizing behavior problems in kindergarten, after controlling for prior behavior problems. There was a direct association between parents' depression and children's externalizing behavior problems. Results indicate that one pathway to higher externalizing behavior problems among children of alcoholics may be via parenting and self-regulation in the toddler to preschool years. PMID:17723044
Validation of Aircraft Noise Prediction Models at Low Levels of Exposure
NASA Technical Reports Server (NTRS)
Page, Juliet A.; Hobbs, Christopher M.; Plotkin, Kenneth J.; Stusnick, Eric; Shepherd, Kevin P. (Technical Monitor)
2000-01-01
Aircraft noise measurements were made at Denver International Airport for a period of four weeks. Detailed operational information was provided by airline operators which enabled noise levels to be predicted using the FAA's Integrated Noise Model. Several thrust prediction techniques were evaluated. Measured sound exposure levels for departure operations were found to be 4 to 10 dB higher than predicted, depending on the thrust prediction technique employed. Differences between measured and predicted levels are shown to be related to atmospheric conditions present at the aircraft altitude.
Study on the medical meteorological forecast of the number of hypertension inpatient based on SVR
NASA Astrophysics Data System (ADS)
Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng
2017-06-01
The purpose of this study is to build a hypertension prediction model by examining the meteorological factors associated with hypertension incidence. The standard data for relative humidity, air temperature, visibility, wind speed and air pressure in Lanzhou from 2010 to 2012 (the maximum, minimum and average values computed over 5-day units) were selected as input variables for Support Vector Regression (SVR), and the standard hypertension incidence data for the same period as the output variables; the optimal prediction parameters were obtained by a cross-validation algorithm, and a SVR forecast model for hypertension incidence was then built through SVR learning and training. The results show that the hypertension prediction model comprises 15 input variables, with a training accuracy of 0.005 and a final error of 0.0026389. The forecast accuracy of the SVR model is 97.1429%, higher than that of a statistical forecast equation and a neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction, offering simple calculation, small error, good historical-sample fitting and independent-sample forecast capability.
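A hedged sketch of this kind of workflow: kernel ridge regression is used below as a stand-in for SVR (both fit kernel expansions f(x) = Σᵢ aᵢ k(xᵢ, x), though SVR uses an ε-insensitive loss), with hyperparameters chosen by 5-fold cross-validation. The five weather-like inputs and the response are synthetic, not the Lanzhou data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for 5-day aggregated weather features (temperature,
# humidity, pressure, wind, visibility) and an admission-count response.
n = 200
X = rng.normal(size=(n, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

def rbf(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def cv_mse(gamma, lam, folds=5):
    """5-fold cross-validation error of kernel ridge regression."""
    idx = np.arange(n)
    errs = []
    for f in np.array_split(idx, folds):
        tr = np.setdiff1d(idx, f)
        a = np.linalg.solve(rbf(X[tr], X[tr], gamma) + lam * np.eye(len(tr)),
                            y[tr])
        errs.append(np.mean((y[f] - rbf(X[f], X[tr], gamma) @ a) ** 2))
    return float(np.mean(errs))

# Grid search over kernel width and regularization, as in CV-tuned SVR.
grid = [(g, lam) for g in (0.01, 0.1, 1.0) for lam in (0.1, 1.0)]
gamma_best, lam_best = min(grid, key=lambda gl: cv_mse(*gl))
print(f"best gamma={gamma_best}, lambda={lam_best}, "
      f"CV MSE={cv_mse(gamma_best, lam_best):.4f}")
```

Selecting the kernel and regularization parameters by cross-validation, rather than on the training fit, is the step that gives such models their independent-sample forecast capability.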
Anger and the ABC model underlying Rational-Emotive Behavior Therapy.
Ziegler, Daniel J; Smith, Phillip N
2004-06-01
The ABC model underlying Ellis's Rational-Emotive Behavior Therapy predicts that people who think more irrationally should display greater trait anger than do people who think less irrationally. This study tested this prediction regarding the ABC model. 186 college students were administered the Survey of Personal Beliefs and the State-Trait Anger Expression Inventory-Second Edition to measure irrational thinking and trait anger, respectively. Students who scored higher on Overall Irrational Thinking and Low Frustration Tolerance scored significantly higher on Trait Anger than did those who scored lower on Overall Irrational Thinking and Low Frustration Tolerance. This indicates support for the ABC model, especially Ellis's construct of irrational beliefs which is central to the model.
NASA Astrophysics Data System (ADS)
Ajaz, M.; Ali, Y.; Ullah, S.; Ali, Q.; Tabassam, U.
2018-05-01
In this research paper, comprehensive results on the double differential yield of π± and K± mesons, protons and antiprotons as a function of laboratory momentum are reported in several polar angle ranges: 0-420 mrad for pions and 0-360 mrad for kaons, protons and antiprotons. The EPOS 1.99, EPOS-LHC and QGSJETII-04 models are used to perform simulations. The predictions of these models at 90 GeV/c are plotted for comparison, which shows that QGSJETII-04 gives an overall higher yield for π+ mesons in the polar angle interval 0-40 mrad, whereas for π‑ mesons its yield is higher only up to 20 mrad. Above 40 mrad, EPOS-LHC predicts a higher π+ yield than EPOS 1.99 and QGSJETII-04, while for π‑ mesons EPOS-LHC and EPOS 1.99 show similar behavior in these intervals. For K± mesons, QGSJETII-04 gives higher predictions in all cases from 0-300 mrad, while EPOS 1.99 and EPOS-LHC show similar distributions. For protons, all models give similar distributions, but this is not true for antiprotons. All models are in good agreement for p > 20 GeV/c. EPOS 1.99 produces a lower yield than the other two models in the 60-360 mrad polar angle interval.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alves, Vinicius M.; Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599; Muratov, Eugene
Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use this data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79%, respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. - Highlights: • The largest publicly available skin sensitization dataset was compiled. • Predictive QSAR models were developed for skin sensitization. • The developed models have higher prediction accuracy than the OECD QSAR Toolbox. • Putative chemical hazards in the Scorecard database were found using our models.
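The class-wise rates quoted above combine into the CCR as their mean, which makes CCR robust to class imbalance. A small synthetic check, with labels and per-class error rates invented to mirror the reported 85%/79% figures:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic external-set evaluation: true sensitizer labels and predictions
# with different error rates per class (illustrative only, not study data).
y_true = rng.random(500) < 0.4                     # ~40% sensitizers
flip = np.where(y_true, rng.random(500) < 0.15,    # 15% missed sensitizers
                        rng.random(500) < 0.21)    # 21% false alarms
y_pred = y_true ^ flip

sens = np.mean(y_pred[y_true])     # positive predicted rate (sensitizers)
spec = np.mean(~y_pred[~y_true])   # negative predicted rate (non-sensitizers)
ccr = 0.5 * (sens + spec)          # CCR = mean of the two class-wise rates
print(f"PPR = {sens:.2f}, NPR = {spec:.2f}, CCR = {ccr:.2f}")
```

Because CCR averages the two rates rather than pooling all predictions, a model cannot inflate it by simply predicting the majority class.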
NASA Astrophysics Data System (ADS)
Osman, Marisol; Vera, C. S.
2017-10-01
This work presents an assessment of the predictability and skill of climate anomalies over South America. The study was made considering a multi-model ensemble of seasonal forecasts for surface air temperature, precipitation and regional circulation, from coupled global circulation models included in the Climate Historical Forecast Project. Predictability was evaluated through the estimation of the signal-to-total variance ratio while prediction skill was assessed computing anomaly correlation coefficients. Both indicators present over the continent higher values at the tropics than at the extratropics for both, surface air temperature and precipitation. Moreover, predictability and prediction skill for temperature are slightly higher in DJF than in JJA while for precipitation they exhibit similar levels in both seasons. The largest values of predictability and skill for both variables and seasons are found over northwestern South America while modest but still significant values for extratropical precipitation at southeastern South America and the extratropical Andes. The predictability levels in ENSO years of both variables are slightly higher, although with the same spatial distribution, than that obtained considering all years. Nevertheless, predictability at the tropics for both variables and seasons diminishes in both warm and cold ENSO years respect to that in all years. The latter can be attributed to changes in signal rather than in the noise. Predictability and prediction skill for low-level winds and upper-level zonal winds over South America were also assessed. Maximum levels of predictability for low-level winds were found where the maximum mean values are observed, i.e. the regions associated with the equatorial trade winds, the midlatitudes westerlies and the South American Low-Level Jet. Predictability maxima for upper-level zonal winds locate where the subtropical jet peaks.
Seasonal changes in wind predictability are observed that seem to be related to those associated with the signal, especially at the extratropics.
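The two diagnostics named above, the signal-to-total variance ratio and the anomaly correlation coefficient, can be computed as follows on synthetic hindcasts. All numbers are illustrative, not CHFP output; the "signal" is the predictable component shared by all ensemble members.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic hindcasts at one grid point: 30 years x 50 pooled ensemble
# members; the predictable signal and the member noise are invented.
years, members = 30, 50
signal = rng.normal(0.0, 1.0, years)
fcst = signal[:, None] + rng.normal(0.0, 1.5, (years, members))

# Signal-to-total variance ratio: variance of the ensemble mean across
# years over the total variance across all members and years.
ens_mean = fcst.mean(axis=1)
stv = ens_mean.var() / fcst.var()

# Anomaly correlation coefficient against pseudo-observations.
obs = signal + rng.normal(0.0, 0.5, years)
acc = np.corrcoef(ens_mean, obs)[0, 1]
print(f"signal-to-total ratio = {stv:.2f}, ACC = {acc:.2f}")
```

The ratio isolates the predictable fraction of variance (the abstract's "changes in signal rather than in the noise"), while the ACC measures how well that signal matches observations.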
Wang, Bo; Stanton, Bonita; Deveaux, Lynette; Li, Xiaoming; Lunn, Sonja
2015-01-01
CONTEXT Considerable research has examined reciprocal relationships between parenting, peers and adolescent problem behavior; however, such studies have largely considered the influence of peers and parents separately. It is important to examine simultaneously the relationships between parental monitoring, peer risk involvement and adolescent sexual risk behavior, and whether increases in peer risk involvement and changes in parental monitoring longitudinally predict adolescent sexual risk behavior. METHODS Four waves of sexual behavior data were collected between 2008/2009 and 2011 from high school students aged 13–17 in the Bahamas. Structural equation and latent growth curve modeling were used to examine reciprocal relationships between parental monitoring, perceived peer risk involvement and adolescent sexual risk behavior. RESULTS For both male and female youth, greater perceived peer risk involvement predicted higher sexual risk behavior index scores, and greater parental monitoring predicted lower scores. Reciprocal relationships were found between parental monitoring and sexual risk behavior for males and between perceived peer risk involvement and sexual risk behavior for females. For males, greater sexual risk behavior predicted lower parental monitoring; for females, greater sexual risk behavior predicted higher perceived peer risk involvement. According to latent growth curve models, a higher initial level of parental monitoring predicted decreases in sexual risk behavior, whereas both a higher initial level and a higher growth rate of peer risk involvement predicted increases in sexual risk behavior. CONCLUSION Results highlight the important influence of peer risk involvement on youths’ sexual behavior and gender differences in reciprocal relationships between parental monitoring, peer influence and adolescent sexual risk behavior. PMID:26308261
Development of estrogen receptor beta binding prediction model using large sets of chemicals.
Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao
2017-11-03
We developed an ERβ binding prediction model that, together with our previously developed ERα binding model, facilitates identification of chemicals that specifically bind ERβ or ERα. Decision Forest was used to train the ERβ binding prediction model on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validation. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of the chemical descriptors used in the models in the 5-fold cross-validations. One thousand permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14% with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the prediction. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrate that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model could be expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
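The validation scheme this abstract describes (repeated 5-fold cross-validation plus a label-permutation test for chance correlation) can be sketched as follows. This is a minimal illustration on synthetic data: Decision Forest is not assumed to be available, so a random forest is used purely as a stand-in classifier, and the iteration counts are reduced from the paper's 1000.

```python
# Repeated k-fold CV plus permutation test (synthetic data; random forest
# as a stand-in for Decision Forest; reduced iteration counts).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, KFold

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
clf = RandomForestClassifier(n_estimators=30, random_state=0)

# Repeated 5-fold CV (the paper uses 1000 iterations; 5 shown here).
accs = []
for seed in range(5):
    cv = KFold(n_splits=5, shuffle=True, random_state=seed)
    accs.extend(cross_val_score(clf, X, y, cv=cv))
mean_acc, sd_acc = np.mean(accs), np.std(accs)

# Permutation test: retrain on shuffled labels to estimate chance accuracy.
rng = np.random.default_rng(0)
perm_accs = [cross_val_score(clf, X, rng.permutation(y), cv=5).mean()
             for _ in range(10)]

print(f"CV accuracy {mean_acc:.3f} +/- {sd_acc:.3f}, "
      f"chance level ~{np.mean(perm_accs):.3f}")
```

A model whose mean CV accuracy sits well above the permutation distribution is, as in the abstract, unlikely to have been generated by chance.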
A multimodel approach to interannual and seasonal prediction of Danube discharge anomalies
NASA Astrophysics Data System (ADS)
Rimbu, Norel; Ionita, Monica; Patrut, Simona; Dima, Mihai
2010-05-01
Interannual and seasonal predictability of Danube river discharge is investigated using three model types: (1) time series models; (2) linear regression models of discharge with large-scale climate mode indices; and (3) models based on stable teleconnections. All models are calibrated using discharge and climatic data for the period 1901-1977 and validated for the period 1978-2008. Various time series models, such as autoregressive (AR), moving average (MA), autoregressive moving average (ARMA) and singular spectrum analysis combined with ARMA (SSA+ARMA) models, have been calibrated and their skills evaluated. The best results were obtained using SSA+ARMA models, which have also proved to have the highest forecast skill for other European rivers (Gamiz-Fortis et al. 2008). Multiple linear regression models using large-scale climatic mode indices as predictors have a higher forecast skill than the time series models. The best predictors for Danube discharge are the North Atlantic Oscillation (NAO) and the East Atlantic/Western Russia patterns during winter and spring. Other patterns, like the Polar/Eurasian or Tropical Northern Hemisphere (TNH) patterns, are good predictors for summer and autumn discharge. Based on the stable teleconnection approach (Ionita et al. 2008), we construct prediction models through a combination of sea surface temperature (SST), temperature (T) and precipitation (PP) from the regions where discharge and SST, T and PP variations are stably correlated. Forecast skills of these models are higher than those of the time series and multiple regression models. The models calibrated and validated in our study can be used for operational prediction of interannual and seasonal Danube discharge anomalies. References Gamiz-Fortis, S., D. Pozo-Vazquez, R.M. Trigo, and Y. Castro-Diez, Quantifying the predictability of winter river flow in Iberia. Part I: interannual predictability. J. Climate, 2484-2501, 2008. Gamiz-Fortis, S., D. Pozo-Vazquez, R.M. Trigo, and Y. Castro-Diez, Quantifying the predictability of winter river flow in Iberia. Part II: seasonal predictability. J. Climate, 2503-2518, 2008. Ionita, M., G. Lohmann, and N. Rimbu, Prediction of spring Elbe river discharge based on stable teleconnections with global temperature and precipitation. J. Climate, 6215-6226, 2008.
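The second model class in this record, multiple linear regression of discharge anomalies on large-scale climate indices, can be sketched in a few lines. The index series and coefficients below are synthetic stand-ins, not values from the study; only the structure (indices as predictors, a calibration-period fit) follows the abstract.

```python
# Multiple linear regression of discharge anomalies on climate indices
# (synthetic stand-ins for the NAO and East Atlantic/Western Russia indices).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_years = 77                                # e.g. a 1901-1977 calibration period
nao = rng.standard_normal(n_years)          # stand-in NAO index
eawr = rng.standard_normal(n_years)         # stand-in EA/WR index
discharge = -0.6 * nao + 0.3 * eawr + 0.2 * rng.standard_normal(n_years)

X = np.column_stack([nao, eawr])
model = LinearRegression().fit(X, discharge)
r2 = model.score(X, discharge)
print(f"calibration R^2 = {r2:.2f}, coefficients = {model.coef_.round(2)}")
```

In practice the fitted model would then be scored on the held-out 1978-2008 validation period rather than on the calibration data.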
Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model
NASA Astrophysics Data System (ADS)
Wang, Qijie
2015-08-01
The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence; in particular, it greatly improves the resolution of the signal's low-frequency components and can therefore improve prediction efficiency. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04, provided by the IERS, is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%; when the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum improvement of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
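The LS+AR baseline named in this abstract can be sketched as a two-stage fit: least squares for the deterministic terms (trend and periodic components), then an AR model on the residuals. The sketch below uses synthetic LOD-like data and a Yule-Walker AR(1) fit; the leap-step resampling that distinguishes LSAR is not reproduced.

```python
# LS+AR sketch: least-squares trend/harmonic fit plus AR(1) on residuals
# (synthetic data; illustrative of the baseline, not of LSAR itself).
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1000.0)
# Synthetic LOD-like series: trend + annual-like cycle + AR(1) noise.
noise = np.zeros(1000)
for i in range(1, 1000):
    noise[i] = 0.8 * noise[i - 1] + rng.standard_normal()
lod = 0.001 * t + 0.4 * np.sin(2 * np.pi * t / 365.25) + 0.1 * noise

# LS part: design matrix with constant, trend and one annual harmonic.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 365.25),
                     np.cos(2 * np.pi * t / 365.25)])
coef, *_ = np.linalg.lstsq(A, lod, rcond=None)
resid = lod - A @ coef

# AR(1) part on residuals via Yule-Walker: phi = r(1) / r(0).
r0 = np.mean(resid * resid)
r1 = np.mean(resid[1:] * resid[:-1])
phi = r1 / r0

# One-step prediction: extrapolated LS terms + AR-damped last residual.
t_next = 1000.0
ls_next = coef @ np.array([1.0, t_next,
                           np.sin(2 * np.pi * t_next / 365.25),
                           np.cos(2 * np.pi * t_next / 365.25)])
pred = ls_next + phi * resid[-1]
print(f"AR(1) coefficient ~{phi:.2f}, one-step LOD prediction {pred:.3f}")
```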
Predicting the stability of nanodevices
NASA Astrophysics Data System (ADS)
Lin, Z. Z.; Yu, W. F.; Wang, Y.; Ning, X. J.
2011-05-01
A simple model based on the statistics of single atoms is developed to predict the stability or lifetime of nanodevices without empirical parameters. Under certain conditions, the model reproduces the Arrhenius law and the Meyer-Neldel compensation rule. Compared with classical molecular-dynamics simulations for predicting the stability of a monatomic carbon chain at high temperature, the model proves to be much more accurate than transition state theory. Based on ab initio calculations of the static potential, the model can give a corrected lifetime for monatomic carbon and gold chains at higher temperatures, and it predicts that the monatomic chains are very stable at room temperature.
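The Arrhenius law the model reproduces can be stated directly: lifetime τ = τ₀ · exp(Ea / kB·T), so a modest barrier gives an astronomically long lifetime at room temperature but a very short one at high temperature. The attempt time and barrier below are illustrative values, not those of the chains in the study.

```python
# Arrhenius lifetime estimate (illustrative prefactor and barrier).
import math

KB = 8.617e-5          # Boltzmann constant, eV/K

def lifetime(ea_ev, temperature_k, tau0=1e-13):
    """Arrhenius estimate of mean lifetime (s) for a barrier ea_ev (eV)."""
    return tau0 * math.exp(ea_ev / (KB * temperature_k))

# A ~1 eV barrier: very stable at room temperature, short-lived when hot.
print(f"300 K:  {lifetime(1.0, 300):.2e} s")
print(f"1500 K: {lifetime(1.0, 1500):.2e} s")
```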
Breeman, L D; Wubbels, T; van Lier, P A C; Verhulst, F C; van der Ende, J; Maras, A; Hopman, J A B; Tick, N T
2015-02-01
The goal of this study was to explore relations between teacher characteristics (i.e., competence and wellbeing); social classroom relationships (i.e., teacher-child and peer interactions); and children's social, emotional, and behavioral classroom adjustment. These relations were explored at both the individual and classroom levels among 414 children with emotional and behavioral disorders placed in special education. Two models were specified. In the first model, children's classroom adjustment was regressed on social relationships and teacher characteristics. In the second model, reversed links were examined by regressing teacher characteristics on social relationships and children's adjustment. Results of model 1 showed that, at the individual level, better social and emotional adjustment of children was predicted by higher levels of teacher-child closeness and better behavioral adjustment was predicted by both positive teacher-child and peer interactions. At the classroom level, positive social relationships were predicted by higher levels of teacher competence, which in turn were associated with lower classroom levels of social problems. Higher levels of teacher wellbeing were directly associated with classroom adaptive and maladaptive child outcomes. Results of model 2 showed that, at the individual and classroom levels, only the emotional and behavioral problems of children predicted social classroom relationships. At the classroom level, teacher competence was best predicted by positive teacher-child relationships and teacher wellbeing was best predicted by classroom levels of prosocial behavior. We discuss the importance of positive teacher-child and peer interactions for children placed in special education and suggest ways of improving classroom processes by targeting teacher competence. Copyright © 2014 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Free Dendritic Growth of Succinonitrile-Acetone Alloys with Thermosolutal Melt Convection
NASA Technical Reports Server (NTRS)
Beckermann, Christoph; Li, Ben Q.
2003-01-01
A stagnant film model of the effects of thermosolutal convection on free dendritic growth of alloys is developed, and its predictions are compared to available earth-based experimental data for succinonitrile-acetone alloys. It is found that the convection model gives excellent agreement with the measured dendrite tip velocities and radii for low solute concentrations. However, at higher solute concentrations the present predictions show some deviations from the measured data, and the measured (thermal) Peclet numbers tend to fall even below the predictions from diffusion theory. Furthermore, the measured selection parameter (sigma*) is significantly above the expected value of 0.02 and exhibits strong scatter. It is shown that convection is not responsible for these discrepancies. Some of the deviations between the predicted and measured data at higher supercoolings could be caused by measurement difficulties. The systematic disagreement in the selection parameter for higher solute concentrations and all supercoolings examined indicates that the theory for the selection of the dendrite tip operating state in alloys may need to be reexamined.
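The diffusion-theory predictions referred to above rest on the Ivantsov transport solution for a paraboloidal dendrite tip, which relates the dimensionless supersaturation (or supercooling) to the tip Peclet number: Ω = Pe · exp(Pe) · E1(Pe). A minimal numerical sketch, assuming SciPy is available for the exponential integral:

```python
# Ivantsov solution for a paraboloid-of-revolution dendrite tip.
import numpy as np
from scipy.special import exp1

def ivantsov(peclet):
    """Supersaturation Omega for a paraboloidal tip at a given Peclet number."""
    return peclet * np.exp(peclet) * exp1(peclet)

for pe in (0.01, 0.1, 1.0):
    print(f"Pe = {pe}: Omega = {ivantsov(pe):.3f}")
```

Ω increases monotonically with Pe and stays below 1, which is why measured Peclet numbers falling below the diffusion prediction (as reported above) is a notable discrepancy.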
Detecting Bias in Selection for Higher Education: Three Different Methods
ERIC Educational Resources Information Center
Kennet-Cohen, Tamar; Turvall, Elliot; Oren, Carmel
2014-01-01
This study examined selection bias in Israeli university admissions with respect to test language and gender, using three approaches for the detection of such bias: Cleary's model of differential prediction, boundary conditions for differential prediction and difference between "d's" (the Constant Ratio Model). The university admissions…
Model predictions of higher-order normal alkane ignition from dilute shock-tube experiments
NASA Astrophysics Data System (ADS)
Rotavera, B.; Petersen, E. L.
2013-07-01
Shock-induced oxidation of two higher-order linear alkanes was measured using a heated shock tube facility. Experimental overlap in stoichiometric ignition delay times obtained under dilute (99% Ar) conditions near atmospheric pressure was observed in the temperature-dependent ignition trends of n-nonane (n-C9H20) and n-undecane (n-C11H24). Despite the overlap, model predictions of ignition using two different detailed chemical kinetics mechanisms show discrepancies relative to both the measured data as well as to one another. The present study therefore focuses on the differences observed in the modeled, high-temperature ignition delay times of higher-order n-alkanes, which are generally regarded to have identical ignition behavior for carbon numbers above C7. Comparisons are drawn using experimental data from the present study and from recent work by the authors relative to two existing chemical kinetics mechanisms. Time histories from the shock-tube OH* measurements are also compared to the model predictions; a double-peaked structure observed in the data shows that the time response of the detector electronics is crucial for properly capturing the first, incipient peak near time zero. Calculations using the two mechanisms were carried out at the dilution level employed in the shock-tube experiments for lean (φ = 0.5), stoichiometric, and rich (φ = 2.0) equivalence ratios, 1230-1620 K, and for both 1.5 and 10 atm. In general, the models show differing trends relative to both measured data and to one another, indicating that agreement among chemical kinetics models for higher-order n-alkanes is not consistent. For example, under certain conditions, one mechanism predicts the ignition delay times to be virtually identical between the n-nonane and n-undecane fuels (in fact, also for all alkanes between at least C8 and C12), which is in agreement with the experiment, while the other mechanism predicts the larger fuels to ignite progressively more slowly.
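High-temperature ignition delay times such as those measured here are commonly summarized by an Arrhenius-type correlation, τ = A · exp(Ea / R·T), so ln(τ) is linear in inverse temperature. A sketch of fitting that correlation, using illustrative data over the study's 1230-1620 K range (not the study's actual values):

```python
# Arrhenius-type fit of ignition delay times (illustrative data).
import numpy as np

T = np.array([1250.0, 1350.0, 1450.0, 1550.0, 1620.0])   # temperature, K
tau = np.array([1800.0, 700.0, 300.0, 150.0, 90.0])      # delay, microseconds

# ln(tau) vs 1000/T is linear; the slope gives the global activation energy.
slope, intercept = np.polyfit(1000.0 / T, np.log(tau), 1)
R = 1.987e-3                                  # gas constant, kcal/(mol K)
ea = slope * R * 1000.0                       # activation energy, kcal/mol
print(f"fitted global activation energy ~{ea:.0f} kcal/mol")
```

Comparing such fitted parameters between fuels (or between mechanisms and data) is one compact way to quantify the discrepancies the abstract describes.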
Hu, Xuefei; Waller, Lance A; Lyapustin, Alexei; Wang, Yujie; Liu, Yang
2014-10-16
Multiple studies have developed surface PM2.5 (particle size less than 2.5 µm in aerodynamic diameter) prediction models using satellite-derived aerosol optical depth as the primary predictor and meteorological and land use variables as secondary variables. To our knowledge, satellite-retrieved fire information has not been used for PM2.5 concentration prediction in statistical models. Fire data could be a useful predictor since fires are significant contributors of PM2.5. In this paper, we examined whether remotely sensed fire count data could improve PM2.5 prediction accuracy in the southeastern U.S. in a spatial statistical model setting. A sensitivity analysis showed that when the radius of the buffer zone centered at each PM2.5 monitoring site reached 75 km, fire count data generally have the greatest predictive power of PM2.5 across the models considered. Cross validation (CV) generated an R² of 0.69, a mean prediction error of 2.75 µg/m³, and root-mean-square prediction errors (RMSPEs) of 4.29 µg/m³, indicating a good fit between the dependent and predictor variables. A comparison showed that the prediction accuracy was improved more substantially from the nonfire model to the fire model at sites with higher fire counts. With increasing fire counts, CV RMSPE decreased by values up to 1.5 µg/m³, exhibiting a maximum improvement of 13.4% in prediction accuracy. Fire count data were shown to have better performance in southern Georgia and in the spring season due to higher fire occurrence. Our findings indicate that fire count data provide a measurable improvement in PM2.5 concentration estimation, especially in areas and seasons prone to fire events.
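The cross-validation metrics quoted above (R², mean prediction error, RMSPE) can be sketched as follows. Data and model are synthetic stand-ins for the paper's spatial statistical model; only the metric definitions and the CV workflow are illustrated.

```python
# Cross-validated R^2, mean prediction error and RMSPE (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 200
aod = rng.uniform(0.1, 1.0, n)           # stand-in aerosol optical depth
fire = rng.poisson(3.0, n)               # stand-in 75 km buffer fire counts
pm25 = 10.0 * aod + 0.8 * fire + rng.normal(0.0, 2.0, n)

X = np.column_stack([aod, fire])
pred = cross_val_predict(LinearRegression(), X, pm25, cv=10)

resid = pm25 - pred
r2 = 1.0 - np.sum(resid ** 2) / np.sum((pm25 - pm25.mean()) ** 2)
mpe = np.mean(np.abs(resid))
rmspe = np.sqrt(np.mean(resid ** 2))
print(f"CV R^2 {r2:.2f}, mean |error| {mpe:.2f}, RMSPE {rmspe:.2f} ug/m^3")
```

Dropping the `fire` column and refitting gives the "nonfire model" comparison the abstract describes.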
De Carli, Margherita M; Baccarelli, Andrea A; Trevisi, Letizia; Pantic, Ivan; Brennan, Kasey JM; Hacker, Michele R; Loudon, Holly; Brunst, Kelly J; Wright, Robert O; Wright, Rosalind J; Just, Allan C
2017-01-01
Aim: We compared predictive modeling approaches to estimate placental methylation using cord blood methylation. Materials & methods: We performed locus-specific methylation prediction using both linear regression and support vector machine models with 174 matched pairs of 450k arrays. Results: At most CpG sites, both approaches gave poor predictions in spite of a misleading improvement in array-wide correlation. CpG islands and gene promoters, but not enhancers, were the genomic contexts where the correlation between measured and predicted placental methylation levels achieved higher values. We provide a list of 714 sites where both models achieved an R² ≥ 0.75. Conclusion: The present study indicates the need for caution in interpreting cross-tissue predictions. Few methylation sites can be predicted between cord blood and placenta. PMID:28234020
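The caveat in this abstract, a high array-wide correlation despite poor locus-specific prediction, is easy to reproduce: if methylation levels differ strongly between CpG sites but predictions carry no within-site, sample-specific information, pooling all sites still yields a high correlation. A synthetic illustration:

```python
# Array-wide vs per-site correlation with predictions that only know
# each site's mean level (synthetic data).
import numpy as np

rng = np.random.default_rng(7)
n_sites, n_pairs = 500, 174
site_means = rng.uniform(0.0, 1.0, n_sites)          # per-CpG mean level
measured = np.clip(site_means[:, None]
                   + rng.normal(0, 0.05, (n_sites, n_pairs)), 0, 1)
# "Prediction" that knows each site's mean but nothing sample-specific:
predicted = np.clip(site_means[:, None]
                    + rng.normal(0, 0.05, (n_sites, n_pairs)), 0, 1)

array_wide_r = np.corrcoef(measured.ravel(), predicted.ravel())[0, 1]
per_site_r = np.array([np.corrcoef(measured[i], predicted[i])[0, 1]
                       for i in range(n_sites)])
print(f"array-wide r = {array_wide_r:.2f}, "
      f"median per-site r = {np.median(per_site_r):.2f}")
```

The array-wide correlation is near 1 while the median per-site correlation is near 0, which is exactly why the authors evaluate prediction locus by locus.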
Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS)
NASA Astrophysics Data System (ADS)
OConnor, A.; Kirtman, B. P.; Harrison, S.; Gorman, J.
2016-02-01
Current US Navy forecasting systems cannot easily incorporate extended-range forecasts that can improve mission readiness and effectiveness; ensure safety; and reduce cost, labor, and resource requirements. If Navy operational planners had systems that incorporated these forecasts, they could plan missions using more reliable and longer-term weather and climate predictions. Further, using multi-model forecast ensembles instead of single forecasts would produce higher predictive performance. Extended-range multi-model forecast ensembles, such as those available in the North American Multi-Model Ensemble (NMME), are ideal for system integration because of their high-skill predictions; however, even higher skill predictions can be produced if forecast model ensembles are combined correctly. While many methods for weighting models exist, the best method in a given environment requires expert knowledge of the models and combination methods. We present an innovative approach that uses machine learning to combine extended-range predictions from multi-model forecast ensembles and generate a probabilistic forecast for any region of the globe up to 12 months in advance. Our machine-learning approach uses 30 years of hindcast predictions to learn patterns of forecast model successes and failures. Each model is assigned a weight for each environmental condition, 100 km² region, and day given any expected environmental information. These weights are then applied to the respective predictions for the region and time of interest to effectively stitch together a single, coherent probabilistic forecast. Our experimental results demonstrate the benefits of our approach to produce extended-range probabilistic forecasts for regions and time periods of interest that are superior, in terms of skill, to individual NMME forecast models and commonly weighted models.
The probabilistic forecast leverages the strengths of three NMME forecast models to predict environmental conditions for an area spanning from San Diego, CA to Honolulu, HI, seven months in-advance. Key findings include: weighted combinations of models are strictly better than individual models; machine-learned combinations are especially better; and forecasts produced using our approach have the highest rank probability skill score most often.
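The core idea above, learning per-model weights from hindcasts and applying them to new member forecasts, can be sketched with a simple inverse-error weighting scheme. This is a stand-in for the machine-learned weights the abstract describes, on synthetic data.

```python
# Skill-weighted multi-model ensemble (synthetic hindcasts; inverse-MSE
# weights as a simple stand-in for learned weights).
import numpy as np

rng = np.random.default_rng(3)
truth_hindcast = rng.standard_normal(360)        # ~30 years of monthly truth
model_errors = np.array([0.3, 0.6, 1.2])         # three members of varying skill
hindcasts = truth_hindcast + model_errors[:, None] * rng.standard_normal((3, 360))

# Weight each model by its inverse mean-squared hindcast error.
mse = np.mean((hindcasts - truth_hindcast) ** 2, axis=1)
weights = (1.0 / mse) / np.sum(1.0 / mse)

# Apply the weights to new member forecasts for a target month.
truth_new = rng.standard_normal()
forecasts = truth_new + model_errors * rng.standard_normal(3)
combined = np.dot(weights, forecasts)
print(f"weights = {weights.round(2)}, combined forecast = {combined:.2f}")
```

In the full approach the weights additionally vary by environmental condition, region and lead time, rather than being a single number per model.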
NASA Astrophysics Data System (ADS)
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
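The "decomposition and ensemble" principle above can be sketched end to end. CEEMD and GWO are not reproduced here: a simple trend/residual split stands in for the decomposition, and an SVR with default hyperparameters stands in for the GWO-optimized SVR; the data are synthetic.

```python
# Decompose-predict-ensemble sketch (moving-average split standing in for
# CEEMD; default SVR standing in for GWO-optimized SVR; synthetic data).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
t = np.arange(300.0)
series = 50 + 10 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 2, 300)

# "Decompose": a smooth trend component plus its residual.
kernel = np.ones(7) / 7
trend = np.convolve(series, kernel, mode="valid")   # centered moving average
aligned = series[3:-3]                              # align with the valid window
resid = aligned - trend

def fit_predict(component, lags=5):
    """Fit SVR on lagged values of one component; return a 1-step forecast."""
    X = np.column_stack([component[i:len(component) - lags + i]
                         for i in range(lags)])
    y = component[lags:]
    model = SVR().fit(X, y)
    return model.predict(component[-lags:].reshape(1, -1))[0]

# Predict each component separately, then ensemble (sum) the forecasts.
forecast = fit_predict(trend) + fit_predict(resid)
print(f"1-step-ahead forecast: {forecast:.1f} (last observed {aligned[-1]:.1f})")
```

The paper's model follows the same pipeline but with CEEMD producing several IMFs, one GWO-tuned SVR per IMF, and a second GWO-tuned SVR performing the final recombination instead of a plain sum.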
Final Technical Report: Increasing Prediction Accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Bruce Hardison; Hansen, Clifford; Stein, Joshua
2015-12-01
PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
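A PV performance model in the sense described above maps irradiance and system parameters to predicted energy. The sketch below is deliberately minimal (nameplate rating scaled by plane-of-array irradiance and a lumped derate factor); real models such as Sandia's are far more detailed, and the derate value is an illustrative assumption.

```python
# Minimal PV energy prediction: rated power scaled by irradiance relative
# to the 1000 W/m2 standard test condition, times a lumped derate factor.
def pv_energy_kwh(p_rated_kw, poa_irradiance_w_m2, hours, derate=0.85):
    """Predicted energy (kWh) over a period of `hours` at given irradiance."""
    return p_rated_kw * (poa_irradiance_w_m2 / 1000.0) * derate * hours

# 100 kW array, 800 W/m2 average plane-of-array irradiance, 5 sun-hours:
print(f"{pv_energy_kwh(100, 800, 5):.0f} kWh")   # prints "340 kWh"
```

Uncertainty in any of these inputs (irradiance above all) propagates directly into the energy prediction, which is what the project above seeks to quantify and reduce.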
ERIC Educational Resources Information Center
Kerby, Molly B.
2015-01-01
Theoretical models designed to predict whether students will persist or not have been valuable tools for retention efforts relative to the creation of services in academic and student affairs. Some of the early models attempted to explain and measure factors in the "college dropout process." For example, in his seminal work, Tinto…
NASA Astrophysics Data System (ADS)
Branger, E.; Grape, S.; Jansson, P.; Jacobsson Svärd, S.
2018-02-01
The Digital Cherenkov Viewing Device (DCVD) is a tool used by nuclear safeguards inspectors to verify irradiated nuclear fuel assemblies in wet storage based on the recording of Cherenkov light produced by the assemblies. One type of verification involves comparing the measured light intensity from an assembly with a predicted intensity, based on assembly declarations. Crucial for such analyses is the performance of the prediction model used, and recently new modelling methods have been introduced to allow for enhanced prediction capabilities by taking the irradiation history into account, and by including the cross-talk radiation from neighbouring assemblies in the predictions. In this work, the performance of three models for Cherenkov-light intensity prediction is evaluated by applying them to a set of short-cooled PWR 17x17 assemblies for which experimental DCVD measurements and operator-declared irradiation data were available: (1) a two-parameter model, based on total burnup and cooling time, previously used by safeguards inspectors; (2) a newly introduced gamma-spectrum-based model, which incorporates cycle-wise burnup histories; and (3) the latter gamma-spectrum-based model extended to account for contributions from neighbouring assemblies. The results show that the two gamma-spectrum-based models provide significantly higher precision for the measured inventory compared to the two-parameter model, lowering the standard deviation of the relative difference between measured and predicted intensities from 15.2% to 8.1% and 7.8%, respectively. The results show some systematic differences between assemblies of different designs (produced by different manufacturers) in spite of their similar PWR 17x17 geometries, and possible ways are discussed to address such differences, which may allow for even higher prediction capabilities.
Still, it is concluded that the gamma-spectrum-based models enable confident verification of the fuel assembly inventory at the currently used detection limit for partial defects, being a 30 % discrepancy between measured and predicted intensities, while some false detection occurs with the two-parameter model. The results also indicate that the gamma-spectrum-based prediction methods are accurate enough that the 30 % discrepancy limit could potentially be lowered.
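The verification criterion described above can be sketched directly: an assembly is flagged when measured and predicted Cherenkov intensities disagree by more than the partial-defect detection limit of 30%. The intensity values below are illustrative relative values, not measurement data.

```python
# Partial-defect flagging at a relative-discrepancy limit (illustrative data).
import numpy as np

def flag_assemblies(measured, predicted, limit=0.30):
    """Return a boolean mask of assemblies exceeding the discrepancy limit."""
    discrepancy = np.abs(measured - predicted) / predicted
    return discrepancy > limit

measured = np.array([1.02, 0.95, 0.55, 1.10])    # relative intensities
predicted = np.array([1.00, 1.00, 1.00, 1.00])
print(flag_assemblies(measured, predicted))      # only the third is flagged
```

Lowering `limit`, as the abstract suggests the gamma-spectrum-based models may permit, tightens the detection threshold but increases the false-detection risk for a less precise prediction model.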
Krikke, M; Hoogeveen, R C; Hoepelman, A I M; Visseren, F L J; Arends, J E
2016-04-01
The aim of the study was to compare the predictions of five popular cardiovascular disease (CVD) risk prediction models, namely the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) model, the Framingham Heart Study (FHS) coronary heart disease (FHS-CHD) and general CVD (FHS-CVD) models, the American Heart Association (AHA) atherosclerotic cardiovascular disease risk score (ASCVD) model and the Systematic Coronary Risk Evaluation for the Netherlands (SCORE-NL) model. A cross-sectional design was used to compare the cumulative CVD risk predictions of the models. Furthermore, the predictions of the general CVD models were compared with those of the HIV-specific D:A:D model using three categories (< 10%, 10-20% and > 20%) to categorize the risk and to determine the degree to which patients were categorized similarly or in a higher/lower category. A total of 997 HIV-infected patients were included in the study: 81% were male and they had a median age of 46 [interquartile range (IQR) 40-52] years, a known duration of HIV infection of 6.8 (IQR 3.7-10.9) years, and a median time on ART of 6.4 (IQR 3.0-11.5) years. The D:A:D, ASCVD and SCORE-NL models gave a lower cumulative CVD risk, compared with that of the FHS-CVD and FHS-CHD models. Comparing the general CVD models with the D:A:D model, the FHS-CVD and FHS-CHD models only classified 65% and 79% of patients, respectively, in the same category as did the D:A:D model. However, for the ASCVD and SCORE-NL models, this percentage was 89% and 87%, respectively. Furthermore, FHS-CVD and FHS-CHD attributed a higher CVD risk to 33% and 16% of patients, respectively, while this percentage was < 6% for ASCVD and SCORE-NL. When using FHS-CVD and FHS-CHD, a higher overall CVD risk was attributed to the HIV-infected patients than when using the D:A:D, ASCVD and SCORE-NL models. This could have consequences regarding overtreatment, drug-related adverse events and drug-drug interactions. © 2015 British HIV Association.
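The category comparison used in this study can be sketched as follows: bin each model's risk prediction into <10%, 10-20% and >20%, then count how often two models place a patient in the same or a higher category. The risk values below are illustrative, not patient data.

```python
# Risk-category agreement between two CVD risk models (illustrative values).
import numpy as np

def categorize(risk_pct):
    """Map risk percentages to categories 0 (<10%), 1 (10-20%), 2 (>20%)."""
    return np.digitize(risk_pct, [10.0, 20.0])

dad = np.array([4.0, 12.0, 25.0, 8.0, 15.0])    # reference model (e.g. D:A:D)
fhs = np.array([9.0, 22.0, 30.0, 11.0, 16.0])   # comparison model

same = categorize(dad) == categorize(fhs)
higher = categorize(fhs) > categorize(dad)
print(f"same category: {same.mean():.0%}, higher category: {higher.mean():.0%}")
```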
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
To predict the output power of photovoltaic (PV) systems, which is characterized by nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets for the prediction date, time series data of output power on a similar day are built with 15-minute intervals. Second, the time series data of the output power are decomposed into a series of components, including intrinsic mode function components IMFn and a trend component Res, at different scales using EMD. A corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system are obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than either the single SVM prediction model or the EMD-SVM prediction model without optimization.
Cosmic ray antiprotons in closed galaxy model
NASA Technical Reports Server (NTRS)
Protheroe, R.
1981-01-01
The flux of secondary antiprotons expected for the leaky-box model was calculated, as well as that for the closed galaxy model of Peters and Westergard (1977). The antiproton/proton ratio observed at several GeV is a factor of 4 higher than the prediction for the leaky-box model but is consistent with that predicted for the closed galaxy model. New low-energy data are not consistent with either model. The possibility of a primary antiproton component is discussed.
Potential predictability and forecast skill in ensemble climate forecast: a skill-persistence rule
NASA Astrophysics Data System (ADS)
Jin, Yishuai; Rong, Xinyao; Liu, Zhengyu
2017-12-01
This study investigates the relationship between the forecast skill for the real world (actual skill) and that for the perfect model (perfect skill) in ensemble climate model forecasts, using a series of fully coupled general circulation model forecast experiments. It is found that the actual skill for sea surface temperature (SST) in seasonal forecasts is substantially higher than the perfect skill over a large part of the tropical oceans, especially the tropical Indian Ocean and the central-eastern Pacific Ocean. The higher actual skill is found to be related to the higher observational SST persistence, suggesting a skill-persistence rule: a higher SST persistence in the real world than in the model can overwhelm the model bias to produce a higher forecast skill for the real world than for the perfect model. The relation between forecast skill and persistence is further demonstrated using a first-order autoregressive (AR1) model, analytically for theoretical solutions and numerically for analogue experiments. The AR1 model study shows that the skill-persistence rule is strictly valid in the case of infinite ensemble size, but can be distorted by sampling errors and non-AR1 processes. This study suggests that the so-called "perfect skill" is model dependent and cannot serve as an accurate estimate of the true upper limit of real-world prediction skill, unless the model captures at least the persistence property of the observations.
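The AR1 argument above can be checked numerically: for an AR(1) process with lag-1 autocorrelation ρ, the correlation skill of the optimal forecast at lead τ is ρ^τ, so higher persistence means higher skill at every lead. A synthetic demonstration:

```python
# Forecast skill of an AR(1) process equals rho**lead (synthetic check).
import numpy as np

def ar1_skill(rho, lead, n=50_000, seed=0):
    """Empirical correlation between x(t) and x(t+lead) for an AR(1) process."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + eps[i]
    return np.corrcoef(x[:-lead], x[lead:])[0, 1]

# Lower "model-like" vs higher "observation-like" persistence:
for rho in (0.6, 0.9):
    print(f"rho = {rho}: skill at lead 3 ~ {ar1_skill(rho, 3):.2f} "
          f"(theory {rho ** 3:.2f})")
```

If the observed SST behaves like the ρ = 0.9 process while the model behaves like ρ = 0.6, the real-world forecast skill exceeds the "perfect-model" skill, which is the skill-persistence rule in miniature.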
The effects of climate change on instream nitrogen transport in the contiguous United States
NASA Astrophysics Data System (ADS)
Alam, M. J.; Goodall, J. L.
2011-12-01
Excessive nitrogen loading has caused significant environmental impacts such as eutrophication and hypoxia in waterbodies around the world. Nitrogen loading is largely driven by nonpoint source pollution, and nitrogen transport from nonpoint sources is greatly affected by climate conditions. For example, increased precipitation leads to more runoff and a higher nitrogen yield. Higher temperatures, however, increase denitrification and therefore reduce nitrogen yield. The purpose of this research is to quantify potential changes in nitrogen yield for the contiguous United States under predicted climate change scenarios, specifically changes in precipitation and air temperature. The analysis was performed for high (A2) and low (B1) emission scenarios and for the years 2030, 2050, and 2090. We used precipitation and temperature estimates predicted by 11 different IPCC (Intergovernmental Panel on Climate Change) models to capture uncertainty. The SPARROW model was calibrated using historical nitrogen loading data and used to predict nitrogen yields for future climate conditions. We held nitrogen source data constant in order to isolate the impact of predicted precipitation and temperature changes for each model scenario. Preliminary results suggest an overall decrease in nitrogen yield when climate change impacts are considered in isolation. For the A2 scenario, the model results indicated an overall incremental nitrogen yield decrease of 2-17% by the year 2030, 4-26% by the year 2050, and 11-45% by the year 2090. The B1 emission scenario also indicated an incremental yield decrease, but at lesser amounts of 2-18%, 5-21%, and 10-38% by the years 2030, 2050, and 2090, respectively. This decrease is mainly due to higher predicted temperatures that result in increased denitrification rates.
Wu, X; Lund, M S; Sun, D; Zhang, Q; Su, G
2015-10-01
One of the factors affecting the reliability of genomic prediction is the relationship among the animals of interest. This study investigated the reliability of genomic prediction in various scenarios with regard to the relationship between test and training animals, and among animals within the training data set. Different training data sets were generated from EuroGenomics data and a group of Nordic Holstein bulls (born in 2005 and afterwards) as a common test data set. Genomic breeding values were predicted using a genomic best linear unbiased prediction model and a Bayesian mixture model. The results showed that a closer relationship between test and training animals led to a higher reliability of genomic predictions for the test animals, while a closer relationship among training animals resulted in a lower reliability. In addition, the Bayesian mixture model in general led to a slightly higher reliability of genomic prediction, especially for the scenario of distant relationships between training and test animals. Therefore, to prevent a decrease in reliability, constant updates of the training population with animals from more recent generations are required. Moreover, a training population consisting of less-related animals is favourable for reliability of genomic prediction.
Roberts, James H.; Hitt, Nathaniel P.
2010-01-01
Five conceptual models of longitudinal fish community organization in streams were examined: (1) niche diversity model (NDM), (2) stream continuum model (SCM), (3) immigrant accessibility model (IAM), (4) environmental stability model (ESM), and (5) adventitious stream model (ASM). We used differences among models in their predictions about temporal species turnover, along with five spatiotemporal fish community data sets, to evaluate model applicability. Models were similar in predicting a positive species richness–stream size relationship and longitudinal species nestedness, but differed in predicting either similar temporal species turnover throughout the stream continuum (NDM, SCM), higher turnover upstream (IAM, ESM), or higher turnover downstream (ASM). We calculated measures of spatial and temporal variation from spatiotemporal fish data in five wadeable streams in central and eastern North America spanning 34–68 years (French Creek [New York], Piasa Creek [Illinois], Spruce Run [Virginia], Little Stony Creek [Virginia], and Sinking Creek [Virginia]). All streams exhibited substantial species turnover (i.e., at least 27% turnover in stream-scale species pools), in contrast to the predictions of the SCM. Furthermore, community change was greater in downstream than upstream reaches in four of five streams. This result is most consistent with the ASM and suggests that downstream communities are strongly influenced by migrants to and from species pools outside the focal stream. In Sinking Creek, which is isolated from external species pools, temporal species turnover (via increased richness) was higher upstream than downstream, which is a pattern most consistent with the IAM or ESM. These results corroborate the hypothesis that temperate stream habitats and fish communities are temporally dynamic and that fish migration and environmental disturbances play fundamental roles in stream fish community organization.
Bove, Edward L; Migliavacca, Francesco; de Leval, Marc R; Balossino, Rossella; Pennati, Giancarlo; Lloyd, Thomas R; Khambadkone, Sachin; Hsia, Tain-Yen; Dubini, Gabriele
2008-08-01
Stage one reconstruction (Norwood operation) for hypoplastic left heart syndrome can be performed with either a modified Blalock-Taussig shunt or a right ventricle-pulmonary artery shunt. Both methods have certain inherent characteristics. It is postulated that mathematical modeling could help elucidate these differences. Three-dimensional computer models of the Blalock-Taussig shunt and right ventricle-pulmonary artery shunt modifications of the Norwood operation were developed by using the finite volume method. Conduits of 3, 3.5, and 4 mm were used in the Blalock-Taussig shunt model, whereas conduits of 4, 5, and 6 mm were used in the right ventricle-pulmonary artery shunt model. The hydraulic nets (lumped resistances, compliances, inertances, and elastances) were identical in the 2 models. A multiscale approach was adopted to couple the 3-dimensional models with the circulation net. Computer simulations were compared with postoperative catheterization data. Good correlation was found between predicted and observed data. For the right ventricle-pulmonary artery shunt modification, there was higher aortic diastolic pressure, decreased pulmonary artery pressure, a lower Qp/Qs ratio, and higher coronary perfusion pressure. Mathematical modeling predicted minimal regurgitant flow in the right ventricle-pulmonary artery shunt model, which correlated with postoperative Doppler measurements. The right ventricle-pulmonary artery shunt demonstrated lower stroke work and a higher mechanical efficiency (stroke work/total mechanical energy). The close correlation between predicted and observed data supports the use of mathematical modeling in the design and assessment of surgical procedures. The potentially damaging effects of a systemic ventriculotomy in the right ventricle-pulmonary artery shunt modification of the Norwood operation have not been analyzed.
Yellowstone wolf (Canis lupus) density predicted by elk (Cervus elaphus) biomass
Mech, L. David; Barber-Meyer, Shannon
2015-01-01
The Northern Range (NR) of Yellowstone National Park (YNP) hosts a higher prey biomass density in the form of elk (Cervus elaphus L., 1758) than any other system of gray wolves (Canis lupus L., 1758) and prey reported. Therefore, it is important to determine whether that wolf–prey system fits a long-standing model relating wolf density to prey biomass. Using data from 2005 to 2012 after elk population fluctuations dampened 10 years subsequent to wolf reintroduction, we found that NR prey biomass predicted wolf density. This finding and the trajectory of the regression extend the validity of the model to prey densities 19% higher than previous data and suggest that the model would apply to wolf–prey systems of even higher prey biomass.
Eiden, Rina D; Molnar, Danielle S; Colder, Craig; Edwards, Ellen P; Leonard, Kenneth E
2009-09-01
The purpose of this study was to test a conceptual model predicting children's anxiety/depression in middle childhood in a community sample of children with parents who had alcohol problems (n = 112) and those without alcohol problems (n = 101). The conceptual model examined the role of parents' alcohol diagnoses, depression, and antisocial behavior among parents of children ages 12 months to kindergarten age in predicting marital aggression and parental aggravation. Higher levels of marital aggression and parental aggravation were hypothesized to predict children's depression/anxiety within time (18 months to kindergarten age) and, prospectively, at fourth grade. The sample was recruited from New York State birth records when the children were 12 months old. Assessments were conducted at 12, 18, 24, and 36 months; at kindergarten age; and during fourth grade. Children with alcoholic fathers had higher depression/anxiety scores according to parental reports but not self-reports. Structural equation modeling was largely supportive of the conceptual model. Fathers' alcoholism was associated with higher child anxiety via greater levels of marital aggression among families with alcohol problems. Results also indicated a significant indirect association between parents' depression symptoms and child anxiety via marital aggression. The results highlight the nested nature of risk characteristics in alcoholic families and the important role of marital aggression in predicting children's anxiety/depression. Interventions targeting both parents' alcohol problems and the associated marital aggression are likely to provide the dual benefits of improving family interactions and lowering the risk of children's internalizing behavior problems.
Caron, Melissa; Allard, Robert; Bédard, Lucie; Latreille, Jérôme; Buckeridge, David L
2016-11-01
The sexual transmission of enteric diseases poses an important public health challenge. We aimed to build a prediction model capable of identifying individuals with a reported enteric disease who could be at risk of acquiring future sexually transmitted infections (STIs). Passive surveillance data on Montreal residents with at least 1 enteric disease report were used to construct the prediction model. Cases were defined as all subjects with at least 1 STI report following their initial enteric disease episode. A final logistic regression prediction model was chosen using forward stepwise selection. The prediction model with the greatest validity included age, sex, residential location, number of STI episodes experienced prior to the first enteric disease episode, type of enteric disease acquired, and an interaction term between age and male sex. This model had an area under the curve of 0.77 and acceptable calibration. A coordinated public health response to the sexual transmission of enteric diseases requires distinguishing cases of enteric diseases transmitted through sexual activity from those transmitted through contaminated food or water. A prediction model can aid public health officials in identifying individuals who may have a higher risk of sexually acquiring a reportable disease. Once identified, these individuals could receive specialized intervention to prevent future infection. The information produced by a prediction model capable of identifying higher-risk individuals can be used to guide efforts in investigating and controlling reported cases of enteric diseases and STIs.
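As a sketch of the general approach (not the authors' fitted model: the single predictor, data, and coefficients below are hypothetical stand-ins for their multi-predictor model), a logistic regression risk score and its area under the ROC curve can be computed in plain Python:

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=300):
    """Fit logistic regression P(y=1|x) = sigmoid(w0 + w.x) by gradient descent."""
    n = len(X)
    w = [0.0] * (len(X[0]) + 1)  # w[0] is the intercept
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def predict_proba(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores, labels):
    """AUC = probability a random positive scores higher than a random negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Toy data: risk of a later STI report rises with one predictor
# (e.g. number of prior STI episodes; purely illustrative).
rng = random.Random(0)
X = [[rng.uniform(0, 4)] for _ in range(400)]
y = [1 if rng.random() < 1 / (1 + math.exp(-(xi[0] - 2.0))) else 0 for xi in X]
w = fit_logistic(X, y)
model_auc = auc([predict_proba(w, xi) for xi in X], y)
```

The AUC of 0.77 reported in the abstract is this same statistic computed on the authors' surveillance data; forward stepwise selection would wrap the fit in a loop that adds one candidate predictor at a time.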
Cant, Michael A; Llop, Justine B; Field, Jeremy
2006-06-01
Recent theory suggests that much of the wide variation in individual behavior that exists within cooperative animal societies can be explained by variation in the future direct component of fitness, or the probability of inheritance. Here we develop two models to explore the effect of variation in future fitness on social aggression. The models predict that rates of aggression will be highest toward the front of the queue to inherit and will be higher in larger, more productive groups. A third prediction is that, in seasonal animals, aggression will increase as the time available to inherit the breeding position runs out. We tested these predictions using a model social species, the paper wasp Polistes dominulus. We found that rates of both aggressive "displays" (aimed at individuals of lower rank) and aggressive "tests" (aimed at individuals of higher rank) decreased down the hierarchy, as predicted by our models. The only other significant factor affecting aggression rates was date, with more aggression observed later in the season, also as predicted. Variation in future fitness due to inheritance rank is the hidden factor accounting for much of the variation in aggressiveness among apparently equivalent individuals in this species.
Prediction of Airfoil Characteristics With Higher Order Turbulence Models
NASA Technical Reports Server (NTRS)
Gatski, Thomas B.
1996-01-01
This study focuses on the prediction of airfoil characteristics, including lift and drag over a range of Reynolds numbers. Two different turbulence models, which represent two different types of models, are tested. The first is a standard isotropic eddy-viscosity two-equation model, and the second is an explicit algebraic stress model (EASM). The turbulent flow field over a general-aviation airfoil (GA(W)-2) at three Reynolds numbers is studied. At each Reynolds number, predicted lift and drag values at different angles of attack are compared with experimental results, and predicted variations of stall locations with Reynolds number are compared with experimental data. Finally, the size of the separation zone predicted by each model is analyzed, and correlated with the behavior of the lift coefficient near stall. In summary, the EASM model is able to predict the lift and drag coefficients over a wider range of angles of attack than the two-equation model for the three Reynolds numbers studied. However, both models are unable to predict the correct lift and drag behavior near the stall angle, and for the lowest Reynolds number case, the two-equation model did not predict separation on the airfoil near stall.
Azadi, Sama; Karimi-Jashni, Ayoub
2016-02-01
Predicting the mass of solid waste generation plays an important role in integrated solid waste management plans. In this study, the performance of two predictive models, an Artificial Neural Network (ANN) and Multiple Linear Regression (MLR), was assessed for predicting the mean Seasonal Municipal Solid Waste Generation (SMSWG) rate. The accuracy of the proposed models is illustrated through a case study of 20 cities located in Fars Province, Iran. Four performance measures, MAE, MAPE, RMSE, and R, were used to evaluate the models. The MLR, as a conventional model, showed poor prediction performance. The ANN model, on the other hand, as a non-linear model, showed higher predictive accuracy for the mean SMSWG rate. As a result, in order to develop a more cost-effective strategy for waste management in the future, the ANN model could be used to predict the mean SMSWG rate.
Health risk factors as predictors of workers' compensation claim occurrence and cost
Schwatka, Natalie V; Atherly, Adam; Dally, Miranda J; Fang, Hai; vS Brockbank, Claire; Tenney, Liliana; Goetzel, Ron Z; Jinnett, Kimberly; Witter, Roxana; Reynolds, Stephen; McMillen, James; Newman, Lee S
2017-01-01
Objective The objective of this study was to examine the predictive relationships between employee health risk factors (HRFs) and workers' compensation (WC) claim occurrence and costs. Methods Logistic regression and generalised linear models were used to estimate the predictive association between HRFs and claim occurrence and cost among a cohort of 16 926 employees from 314 large, medium and small businesses across multiple industries. First, unadjusted (HRFs only) models were estimated, and second, adjusted (HRFs plus demographic and work organisation variables) were estimated. Results Unadjusted models demonstrated that several HRFs were predictive of WC claim occurrence and cost. After adjusting for demographic and work organisation differences between employees, many of the relationships previously established did not achieve statistical significance. Stress was the only HRF to display a consistent relationship with claim occurrence, though the type of stress mattered. Stress at work was marginally predictive of a higher odds of incurring a WC claim (p<0.10). Stress at home and stress over finances were predictive of higher and lower costs of claims, respectively (p<0.05). Conclusions The unadjusted model results indicate that HRFs are predictive of future WC claims. However, the disparate findings between unadjusted and adjusted models indicate that future research is needed to examine the multilevel relationship between employee demographics, organisational factors, HRFs and WC claims. PMID:27530688
Seasonal prediction of winter haze days in the north central North China Plain
NASA Astrophysics Data System (ADS)
Yin, Zhicong; Wang, Huijun
2016-11-01
Recently, the winter (December-February) haze pollution over the north central North China Plain (NCP) has become severe. Treating the year-to-year increment as the predictand, two new statistical schemes were established using multiple linear regression (MLR) and a generalized additive model (GAM). By analyzing the associated increment of the atmospheric circulation, seven leading predictors were selected to predict the upcoming winter haze days over the NCP (WHDNCP). After cross validation, the root mean square error and explained variance of the MLR (GAM) prediction model were 3.39 (3.38) and 53% (54%), respectively. For the final predicted WHDNCP, both models successfully captured the interannual and interdecadal trends and the extrema. Independent prediction tests for 2014 and 2015 also confirmed the good predictive skill of the new schemes. The prediction bias of the MLR (GAM) model in 2014 and 2015 was 0.09 (-0.07) and -3.33 (-1.01), respectively. Compared to the MLR model, the GAM model had higher predictive skill in reproducing the rapid and continuous increase of WHDNCP after 2010.
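The year-to-year-increment approach can be sketched as follows: regress the increment Δy_t = y_t − y_{t−1} on (likewise differenced) predictors, then add the predicted increment back to last year's observed value. The haze-day series and single circulation predictor below are hypothetical, and one-predictor OLS stands in for the paper's seven-predictor MLR/GAM schemes:

```python
def ols_fit(x, y):
    """Closed-form least-squares fit y = a + b*x for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical series: winter haze days y_t and one circulation predictor p_t.
y = [10, 14, 12, 17, 15, 21, 18, 25]
p = [0.1, 0.9, -0.2, 1.1, -0.3, 1.4, -0.5, 1.6]

# Predictand is the year-to-year increment dy_t = y_t - y_{t-1};
# the predictor is differenced the same way.
dy = [y[t] - y[t - 1] for t in range(1, len(y))]
dp = [p[t] - p[t - 1] for t in range(1, len(p))]
a, b = ols_fit(dp, dy)

# Forecast for a new year: predicted increment added back to last year's value.
dp_new = 0.8
y_hat = y[-1] + (a + b * dp_new)
```

Working in increments removes slowly varying (interdecadal) background signals from the regression, which is why the schemes can capture both interannual swings and the trend of the reconstructed series.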
NASA Astrophysics Data System (ADS)
Liu, L.; Du, L.; Liao, Y.
2017-12-01
Based on the ensemble hindcast dataset of CSM1.1m from the NCC, CMA, Bayesian merging models and a two-step statistical model are developed and employed to predict monthly grid/station precipitation over the Huaihe River basin, China, during summer at lead times of 1 to 3 months. The hindcast datasets span the period 1991 to 2014. The skill of the two models is evaluated using the area under the ROC curve (AUC) in a leave-one-out cross-validation framework and is compared to the skill of CSM1.1m. CSM1.1m has its highest skill for summer precipitation when initialized in April and its lowest when initialized in May, and has its highest skill for precipitation in June but its lowest for precipitation in July. Compared with the raw outputs of the climate model, some schemes of the two approaches have higher skill for predictions initialized in March and May, but almost all schemes have lower skill for predictions initialized in April. Compared to the two-step approach, one sampling scheme of the Bayesian merging approach has higher skill for predictions initialized in March, but lower skill for those initialized in May. The results suggest that there is potential to apply the two statistical models for monthly summer precipitation forecasts initialized in March and May over the Huaihe River basin, whereas the raw CSM1.1m forecast is preferable when initialized in April. Finally, the summer runoff during 1991 to 2014 is simulated with a hydrological model driven by the climate hindcasts of CSM1.1m and the two statistical models.
A two-component rain model for the prediction of attenuation and diversity improvement
NASA Technical Reports Server (NTRS)
Crane, R. K.
1982-01-01
A new model was developed to predict attenuation statistics for a single Earth-satellite or terrestrial propagation path. The model was extended to provide predictions of the joint occurrences of specified or higher attenuation values on two closely spaced Earth-satellite paths. The joint statistics provide the information required to obtain diversity gain or diversity advantage estimates. The new model is meteorologically based. It was tested against available Earth-satellite beacon observations and terrestrial path measurements. The model employs the rain climate region descriptions of the Global rain model. The rms deviation between the predicted and observed attenuation values for the terrestrial path data was 35 percent, a result consistent with the expectations of the Global model when the rain rate distribution for the path is not used in the calculation. Within the United States the rms deviation between measurement and prediction was 36 percent but worldwide it was 79 percent.
Bayesian decision support for coding occupational injury data.
Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R
2016-06-01
Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them the top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of the SW and TW models, and various prediction strength thresholds, for autocoding and filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes in SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed and at higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data are useful for surveillance as well as for prevention activities that aim to make workplaces safer.
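A minimal sketch of the Single-Word Naïve Bayes component with top-k output and a prediction-strength threshold for flagging manual review. The narratives, codes, and the 0.9 threshold are hypothetical; the actual system additionally uses a Two-Word-Sequence model and agreement between the two models:

```python
import math
from collections import defaultdict

def train_nb(docs):
    """Single-word Naive Bayes: P(code|words) is proportional to P(code)*prod P(word|code)."""
    word_counts = defaultdict(lambda: defaultdict(int))
    code_counts = defaultdict(int)
    vocab = set()
    for words, code in docs:
        code_counts[code] += 1
        for w in words:
            word_counts[code][w] += 1
            vocab.add(w)
    return word_counts, code_counts, vocab

def top_k(words, model, k=2):
    """Return the k most probable codes with strengths normalized over all codes."""
    word_counts, code_counts, vocab = model
    n_docs = sum(code_counts.values())
    scores = {}
    for code, cc in code_counts.items():
        logp = math.log(cc / n_docs)
        total = sum(word_counts[code].values())
        for w in words:
            # Laplace smoothing so an unseen word does not zero the probability.
            logp += math.log((word_counts[code][w] + 1) / (total + len(vocab)))
        scores[code] = logp
    z = sum(math.exp(s) for s in scores.values())
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    return [(c, math.exp(scores[c]) / z) for c in ranked]

# Toy training narratives -> injury codes (hypothetical labels).
docs = [
    (["fell", "ladder"], "FALL"), (["fell", "stairs"], "FALL"),
    (["cut", "knife"], "CUT"), (["cut", "saw"], "CUT"),
    (["lifting", "box"], "STRAIN"), (["lifting", "pallet"], "STRAIN"),
]
model = train_nb(docs)
preds = top_k(["fell", "ladder"], model, k=2)
best_code, strength = preds[0]
# Autocode only confident cases; flag the rest for manual review.
needs_review = strength < 0.9
```

Presenting `preds` (rather than only the single best code) to a human coder is the "top k prediction choices" assist described in the abstract.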
Underestimated AMOC Variability and Implications for AMV and Predictability in CMIP Models
NASA Astrophysics Data System (ADS)
Yan, Xiaoqin; Zhang, Rong; Knutson, Thomas R.
2018-05-01
The Atlantic Meridional Overturning Circulation (AMOC) has profound impacts on various climate phenomena. Using both observations and simulations from the Coupled Model Intercomparison Project Phases 3 and 5, here we show that most models underestimate the amplitude of low-frequency AMOC variability. We further show that stronger low-frequency AMOC variability leads to stronger linkages between the AMOC and key variables associated with the Atlantic multidecadal variability (AMV), and between the subpolar AMV signal and Northern Hemisphere surface air temperature. Low-frequency extratropical Northern Hemisphere surface air temperature variability might increase with the amplitude of low-frequency AMOC variability. Atlantic decadal predictability is much higher in models with stronger low-frequency AMOC variability and much lower in models with weaker or without AMOC variability. Our results suggest that simulating realistic low-frequency AMOC variability is very important, both for simulating realistic linkages between the AMOC and AMV-related variables and for achieving substantially higher Atlantic decadal predictability.
Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu
2012-02-01
In this paper, the main factors affecting soil quality, such as soil type, land use pattern, lithology type, topography, road, and industry type, were used to characterize the spatial distribution of regional soil quality precisely: mutual information theory was adopted to select the main environmental factors, and the decision tree algorithm See5.0 was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model with the variables selected by mutual information was clearly higher than that of the model with all variables, and for the former model, whether based on decision trees or decision rules, the prediction accuracy was above 80%. Applied to continuous and categorical data alike, mutual information theory integrated with a decision tree can not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
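The mutual-information screening step can be sketched in plain Python: compute I(X;Y) between each candidate factor and the soil-quality grade, and keep only informative factors as decision-tree inputs. The data and the 0.1 selection threshold below are hypothetical:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) = sum over (x,y) of p(x,y) * log(p(x,y) / (p(x)p(y))) for discrete variables."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy categorical data: soil-quality grade vs two candidate factors.
grade    = ["high", "high", "low", "low", "high", "low", "high", "low"]
land_use = ["paddy", "paddy", "urban", "urban", "paddy", "urban", "paddy", "urban"]  # informative
road     = ["near", "far", "near", "far", "near", "far", "far", "near"]              # uninformative

mi_land = mutual_information(land_use, grade)  # = ln 2: land_use determines grade here
mi_road = mutual_information(road, grade)      # = 0: road is independent of grade here
selected = [name for name, mi in [("land_use", mi_land), ("road", mi_road)] if mi > 0.1]
```

Only the selected factors would then be fed to the decision-tree learner, which is how the paper reduces the number of input parameters.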
Decadal climate predictions improved by ocean ensemble dispersion filtering
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls between these two time scales. In recent years, more precise initialization techniques for coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
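The core of the ensemble dispersion filter, shifting each member's ocean state toward the ensemble mean at fixed intervals, can be sketched on toy states. The relaxation weight `alpha` is an illustrative generalization (alpha = 1 would replace each member's state with the ensemble mean outright); the grid values are made up:

```python
def dispersion_filter(ensemble, alpha=0.5):
    """Nudge each ensemble member's state toward the ensemble mean.

    ensemble: list of member states (each a list of grid-point values).
    alpha: 0.0 leaves members unchanged; 1.0 collapses them onto the mean.
    """
    n = len(ensemble)
    npts = len(ensemble[0])
    mean = [sum(m[i] for m in ensemble) / n for i in range(npts)]
    return [[x + alpha * (mu - x) for x, mu in zip(m, mean)] for m in ensemble]

def spread(ensemble):
    """Mean absolute deviation of members from the ensemble mean."""
    n = len(ensemble)
    npts = len(ensemble[0])
    mean = [sum(m[i] for m in ensemble) / n for i in range(npts)]
    return sum(abs(x - mu) for m in ensemble for x, mu in zip(m, mean)) / (n * npts)

# Three toy "ocean state" members on a 4-point grid, filtered at one interval.
members = [[1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 4.0, 3.0], [3.0, 3.0, 2.0, 5.0]]
filtered = dispersion_filter(members, alpha=0.5)
```

Note that the filter reduces ensemble spread while leaving the ensemble mean at every grid point unchanged, which is why it damps member-to-member dispersion without biasing the forecast.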
Short time ahead wind power production forecast
NASA Astrophysics Data System (ADS)
Sapronova, Alla; Meissner, Catherine; Mana, Matteo
2016-09-01
An accurate prediction of wind power output is crucial for efficient coordination of cooperative energy production from different sources. Long-horizon prediction (6 to 24 hours ahead) of wind power for onshore parks can be achieved with a coupled model that bridges mesoscale weather prediction data and computational fluid dynamics. When a forecast for a shorter time horizon (less than one hour ahead) is needed, the accuracy of a predictive model that utilizes hourly weather data decreases, because the higher-frequency fluctuations of the wind speed are lost when data are averaged over an hour. Since the wind speed can vary by up to 50% in magnitude over a period of 5 minutes, these higher-frequency variations of wind speed and direction have to be taken into account for an accurate short-term energy production forecast. In this work a new model for wind power production forecasts 5 to 30 minutes ahead is presented. The model is based on machine learning techniques and a categorization approach, using the historical park production time series and hourly numerical weather forecasts.
Predictors of medication adherence in high risk youth of color living with HIV.
Macdonell, Karen E; Naar-King, Sylvie; Murphy, Debra A; Parsons, Jeffrey T; Harper, Gary W
2010-07-01
To test predictors of medication adherence in high-risk racial or ethnic minority youth living with HIV (YLH) using a conceptual model of social cognitive predictors, including a continuous measure of motivational readiness. Youth were participants in a multi-site clinical trial examining the efficacy of a motivational intervention. Racial-minority YLH (primarily African American) who were prescribed antiretroviral medication were included (N = 104). Data were collected via a computer-assisted personal interviewing method using an Internet-based application and questionnaires. In path analysis with bootstrapping, most youth reported suboptimal adherence, which predicted higher viral load. Higher motivational readiness predicted optimal adherence, and higher social support predicted readiness. Decisional balance was indirectly related to adherence. The model provided a plausible framework for understanding adherence in this population. Culturally competent interventions focused on readiness and social support may be helpful for improving adherence in YLH.
Iakova, Maria; Ballabeni, Pierluigi; Erhart, Peter; Seichert, Nikola; Luthi, François; Dériaz, Olivier
2012-12-01
This study aimed to identify self-perception variables that may predict return to work (RTW) in orthopedic trauma patients 2 years after rehabilitation. A prospective cohort study followed 1,207 orthopedic trauma inpatients, hospitalised in rehabilitation clinics, at admission, discharge, and 2 years after discharge. Information on potential predictors was obtained from self-administered questionnaires. Multiple logistic regression models were applied. In the final model, a higher likelihood of RTW was predicted by: better general health and lower pain at admission; health and pain improvements during hospitalisation; a lower Impact of Event Scale-Revised (IES-R) avoidance score; a higher IES-R hyperarousal score; a higher SF-36 mental score; and low perceived severity of the injury. RTW is thus predicted not only by perceived health, pain, and severity of the accident at the beginning of a rehabilitation program, but also by the changes in pain and health perceptions observed during hospitalisation.
Predictive information processing in music cognition. A critical review.
Rohrmeier, Martin A; Koelsch, Stefan
2012-02-01
Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straightforward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, and connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While these models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations at different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, combinations of neural and computational modelling methodologies are at an early stage and require further research.
A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks
NASA Astrophysics Data System (ADS)
Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan
2018-01-01
Breakdown occurrence on expressways is considered to be related to various factors. To investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 parameters observed before breakdowns were extracted, including dynamic environment conditions aggregated at 5-minute intervals and static geometry features. Data from different time periods were used to predict breakdown. Results indicate that models using data from 5-10 min prior to breakdown yield the best predictions, with prediction accuracies higher than 73%. Moreover, a single unified model for all bottlenecks was also built and shows reasonably good performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model's parameter input, a random forests (RF) model was adopted to identify the key variables. Modeling with the 7 selected parameters, the refined BN model can still predict breakdown with adequate accuracy.
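To make the idea concrete, a minimal probabilistic classifier over discretized pre-breakdown features can be sketched as follows. This is an illustrative naive-Bayes-style stand-in for the paper's Bayesian network, with hypothetical feature names and synthetic data, not the Shanghai dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic training data: binary discretized features (0 = low, 1 = high);
# the feature names (flow, occupancy, speed drop) are hypothetical.
X = rng.integers(0, 2, size=(400, 3))
y = (X.sum(axis=1) + rng.integers(0, 2, 400) >= 3).astype(int)  # breakdown?

def fit_naive_bayes(X, y):
    """Estimate P(y) and P(x_j = 1 | y) with Laplace smoothing."""
    priors = np.array([(y == c).mean() for c in (0, 1)])
    cond = np.zeros((2, X.shape[1]))
    for c in (0, 1):
        cond[c] = (X[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2)
    return priors, cond

def predict_proba(x, priors, cond):
    """Posterior P(breakdown | x), assuming conditionally independent features."""
    like = np.where(x == 1, cond, 1 - cond).prod(axis=1)
    post = priors * like
    return post[1] / post.sum()

priors, cond = fit_naive_bayes(X, y)
p_high = predict_proba(np.array([1, 1, 1]), priors, cond)  # congested state
p_low = predict_proba(np.array([0, 0, 0]), priors, cond)   # free flow
```

A full BN would relax the independence assumption with learned edges between variables, and the paper's RF step would prune the 23 inputs down to the most informative ones before fitting.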
Real-time reservoir operation considering non-stationary inflow prediction
NASA Astrophysics Data System (ADS)
Zhao, J.; Xu, W.; Cai, X.; Wang, Z.
2011-12-01
Stationarity of inflow has been a basic assumption in reservoir operation rule design, an assumption now challenged by climate change and human interference. This paper proposes a modeling framework that incorporates non-stationary inflow prediction for optimizing the hedging operation rule of large reservoirs with multi-year flow regulation capacity. A multi-stage optimization model is formulated, and a solution algorithm based on the optimality conditions is developed to incorporate non-stationary annual inflow prediction through a rolling, dynamic framework that updates the prediction from period to period and adopts the updated prediction in reservoir operation decisions. The prediction model is ARIMA(4,1,0), where 4 is the autoregressive order, 1 represents first differencing (a linear trend), and 0 is the moving-average order. The modeling framework and solution algorithm are applied to the Miyun reservoir in China to determine a yearly operating schedule for the period 1996 to 2009, during which reservoir inflow showed a significant declining trend. Several operation policy scenarios are modeled: standard operation policy (SOP, matching the current demand as far as possible), a hedging rule (i.e., leaving a certain amount of water for the future to avoid a large risk of water deficit) with the ARIMA forecast (HR-1), and hedging with a perfect forecast (HR-2). Comparing these scenarios to the actual reservoir operation (AO), the utility under HR-1 is 3.0% lower than under HR-2, but 3.7% higher than under AO and 14.4% higher than under SOP. Note that the utility under AO is 10.3% higher than under SOP, which suggests that some level of hedging, informed by inflow prediction or forecast, was used in the real-world operation. The impacts of the discount rate and of the forecast uncertainty level on the operation are also discussed.
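A minimal sketch of the ARIMA(4,1,0) idea follows: an AR(4) model fit by ordinary least squares on first-differenced inflow, then a one-step-ahead forecast. The inflow series is synthetic (a declining trend plus noise), not the Miyun data, and the real model sits inside the rolling update framework:

```python
import numpy as np

rng = np.random.default_rng(1)
trend = -2.0 * np.arange(40)                     # declining annual inflow
inflow = 500 + trend + rng.normal(0, 5, 40)      # synthetic, not Miyun data

d = np.diff(inflow)                              # the I(1) differencing step
p = 4                                            # autoregressive order
Y = d[p:]                                        # targets
X = np.column_stack([d[p - k - 1:-k - 1] for k in range(p)])  # lags 1..4
X = np.column_stack([np.ones(len(Y)), X])        # intercept column
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)     # AR(4) by least squares

# One-step-ahead forecast: last level plus the predicted next difference
x_next = np.concatenate(([1.0], d[-1:-p - 1:-1]))
forecast = inflow[-1] + x_next @ beta
```

In the rolling framework, this fit-and-forecast step would be repeated each period as new inflow observations arrive, and the updated forecast fed to the hedging rule.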
Job-related resources and the pressures of working life.
Schieman, Scott
2013-03-01
Data from a 2011 representative sample of Canadian workers are used to test the resource versus the stress of higher status hypotheses. Drawing on the Job Demands-Resources model (JD-R), the resource hypothesis predicts that job-related resources reduce job pressure. The stress of higher status hypothesis predicts that job-related resources increase job pressure. Findings tend to favor the resource hypothesis for job autonomy and schedule control, while supporting the stress of higher status for job authority and challenging work. These findings help elaborate on the "resource" concept in the JD-R model and identify unique ways that such resources might contribute to the pressures of working life. Copyright © 2012 Elsevier Inc. All rights reserved.
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6) . Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
Vacuum Stability in Split SUSY and Little Higgs Models
NASA Astrophysics Data System (ADS)
Datta, Alakabha; Zhang, Xinmin
We study the stability of the effective Higgs potential in the split supersymmetry and Little Higgs models. In particular, we study the effects of higher dimensional operators in the effective potential on the Higgs mass predictions. We find that the size and sign of the higher dimensional operators can significantly change the Higgs mass required to maintain vacuum stability in Split SUSY models. In the Little Higgs models the effects of higher dimensional operators can be large because of a relatively lower cutoff scale. Working with a specific model we find that a contribution from the higher dimensional operator with coefficient of O(1) can destabilize the vacuum.
NASA Astrophysics Data System (ADS)
Li, Wenhong; Fu, Rong; Dickinson, Robert E.
2006-01-01
The global climate models for the Intergovernmental Panel on Climate Change Fourth Assessment Report (IPCC AR4) predict very different changes of rainfall over the Amazon under the SRES A1B scenario for global climate change. Five of the eleven models predict an increase of annual rainfall, three models predict a decrease of rainfall, and the other three models predict no significant changes in the Amazon rainfall. We have further examined two models. The UKMO-HadCM3 model predicts an El Niño-like sea surface temperature (SST) change and warming in the northern tropical Atlantic which appear to enhance atmospheric subsidence and consequently reduce clouds over the Amazon. The resultant increase of surface solar absorption causes a stronger surface sensible heat flux and thus reduces relative humidity of the surface air. These changes decrease the rate and length of wet season rainfall and surface latent heat flux. This decreased wet season rainfall leads to drier soil during the subsequent dry season, which in turn can delay the transition from the dry to wet season. GISS-ER predicts a weaker SST warming in the western Pacific and the southern tropical Atlantic which increases moisture transport and hence rainfall in the Amazon. In the southern Amazon and Nordeste where the strongest rainfall increase occurs, the resultant higher soil moisture supports a higher surface latent heat flux during the dry and transition season and leads to an earlier wet season onset.
Predicting Student Success: A Naïve Bayesian Application to Community College Data
ERIC Educational Resources Information Center
Ornelas, Fermin; Ordonez, Carlos
2017-01-01
This research focuses on developing and implementing a continuous Naïve Bayesian classifier for GEAR courses at Rio Salado Community College. Previous implementation efforts of a discrete version did not predict as well (70%) and had deployment issues. The present predictive model has higher prediction accuracy, over 90%, for both at-risk and successful…
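A continuous (Gaussian) naive Bayes classifier of the kind the abstract describes can be sketched as follows. The features (prior GPA, weekly login hours) and data are hypothetical stand-ins, not Rio Salado's actual variables:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
passed = rng.integers(0, 2, n)                   # 1 = successful student
# Hypothetical continuous features, drawn differently per class
gpa = np.where(passed, rng.normal(3.2, 0.4, n), rng.normal(2.4, 0.5, n))
hours = np.where(passed, rng.normal(8, 2, n), rng.normal(4, 2, n))
X = np.column_stack([gpa, hours])

def fit(X, y):
    """Per-class prior, mean and std for each feature."""
    stats = {}
    for c in (0, 1):
        Xc = X[y == c]
        stats[c] = ((y == c).mean(), Xc.mean(axis=0), Xc.std(axis=0))
    return stats

def predict(x, stats):
    """Pick the class with the higher Gaussian log-posterior."""
    def log_post(c):
        prior, mu, sd = stats[c]
        return np.log(prior) - 0.5 * np.sum(
            np.log(2 * np.pi * sd**2) + ((x - mu) / sd) ** 2)
    return int(log_post(1) > log_post(0))

stats = fit(X, passed)
acc = np.mean([predict(x, stats) == c for x, c in zip(X, passed)])
```

The continuous version avoids the information loss of binning, which is one plausible reason the discrete predecessor predicted less well.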
A 30-day-ahead forecast model for grass pollen in north London, United Kingdom.
Smith, Matt; Emberlin, Jean
2006-03-01
A 30-day-ahead forecast method has been developed for grass pollen in north London. The total period of the grass pollen season is covered by eight multiple regression models, each covering a 10-day period running consecutively from 21 May to 8 August. This means that three models were used for each 30-day forecast. The forecast models were produced using grass pollen and environmental data from 1961 to 1999 and tested on data from 2000 and 2002. Model accuracy was judged in two ways: the number of times the forecast model was able to successfully predict the severity (relative to the 1961-1999 dataset as a whole) of grass pollen counts in each of the eight forecast periods on a scale of 1 to 4; the number of times the forecast model was able to predict whether grass pollen counts were higher or lower than the mean. The models achieved 62.5% accuracy in both assessment years when predicting the relative severity of grass pollen counts on a scale of 1 to 4, which equates to six of the eight 10-day periods being forecast correctly. The models attained 87.5% and 100% accuracy in 2000 and 2002, respectively, when predicting whether grass pollen counts would be higher or lower than the mean. Attempting to predict pollen counts during distinct 10-day periods throughout the grass pollen season is a novel approach. The models also employed original methodology in the use of winter averages of the North Atlantic Oscillation to forecast 10-day means of allergenic pollen counts.
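The per-period regression scheme can be illustrated with a toy sketch: one multiple-regression model per 10-day period, scored on the higher/lower-than-mean call. The predictors (temperature, rainfall) and all data are synthetic assumptions, not the study's 1961-1999 series:

```python
import numpy as np

rng = np.random.default_rng(3)
years, periods = 39, 8                           # 8 consecutive 10-day windows
temp = rng.normal(15, 2, (years, periods))       # hypothetical predictors
rain = rng.normal(60, 15, (years, periods))
pollen = 30 + 4 * temp - 0.2 * rain + rng.normal(0, 5, (years, periods))

# Fit a separate multiple regression for each 10-day period
models = []
for p in range(periods):
    A = np.column_stack([np.ones(years), temp[:, p], rain[:, p]])
    beta, *_ = np.linalg.lstsq(A, pollen[:, p], rcond=None)
    models.append(beta)

# Forecast a new season and score the higher/lower-than-mean prediction
t_new = rng.normal(15, 2, periods)
r_new = rng.normal(60, 15, periods)
truth = 30 + 4 * t_new - 0.2 * r_new             # noise-free "observed" level
hits = 0
for p in range(periods):
    pred = models[p] @ np.array([1.0, t_new[p], r_new[p]])
    m = pollen[:, p].mean()
    hits += (pred > m) == (truth[p] > m)
```

The real models also used winter North Atlantic Oscillation averages as predictors, and three consecutive period models together form each 30-day forecast.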
A Bayesian prediction model between a biomarker and the clinical endpoint for dichotomous variables.
Jiang, Zhiwei; Song, Yang; Shou, Qiong; Xia, Jielai; Wang, William
2014-12-20
Early biomarkers are helpful for predicting clinical endpoints and for evaluating efficacy in clinical trials even if the biomarker cannot replace the clinical outcome as a surrogate. The building and the evaluation of an association model between biomarkers and clinical outcomes are two equally important concerns in the prediction of clinical outcome. This paper addresses both issues in a Bayesian framework. A Bayesian meta-analytic approach is proposed to build a prediction model between the biomarker and the clinical endpoint for dichotomous variables. Compared with other Bayesian methods, the proposed model requires only trial-level summary data of historical trials for model building. Using extensive simulations, we evaluate the link function and the application conditions of the proposed Bayesian model under two scenarios: (i) equal positive predictive value (PPV) and negative predictive value (NPV), and (ii) higher NPV and lower PPV. In the simulations, patient-level data are generated to evaluate the meta-analytic model, with PPV and NPV describing the patient-level relationship between the biomarker and the clinical outcome. The minimum number of historical trials to be included in building the model is also considered. The simulations show that the logit link function performs better than the odds and cloglog functions under both scenarios. PPV/NPV ≥ 0.5 for equal PPV and NPV, and PPV + NPV ≥ 1 for higher NPV and lower PPV, are proposed in order to predict the clinical outcome accurately and precisely with the proposed model. Twenty historical trials are required for model building when PPV and NPV are equal; for unequal PPV and NPV, the minimum number of historical trials is proposed to be five. A hypothetical example shows an application of the proposed model in global drug development.
The proposed Bayesian model predicts the clinical endpoint well from the observed biomarker data for dichotomous variables as long as these conditions are satisfied, and it could be applied in drug development; the practical problems arising in applications, however, have to be studied in further research.
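The patient-level data generation underlying the simulations can be sketched as follows, for one illustrative choice of PPV and NPV (not the paper's settings; note the chosen pair satisfies PPV + NPV ≥ 1):

```python
import numpy as np

rng = np.random.default_rng(4)
n, ppv, npv = 100_000, 0.8, 0.7                  # illustrative values
biomarker = rng.integers(0, 2, n)                # 1 = biomarker positive
u = rng.random(n)
# Outcome is positive with probability PPV given a positive biomarker,
# and negative with probability NPV given a negative biomarker.
outcome = np.where(biomarker == 1, u < ppv, u >= npv).astype(int)

# Recover the predictive values from the simulated cohort
ppv_hat = outcome[biomarker == 1].mean()
npv_hat = 1 - outcome[biomarker == 0].mean()
```

Trial-level summaries (event rates per arm) aggregated from cohorts like this would then feed the meta-analytic link-function model.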
Psychopathy and Deviant Workplace Behavior: A Comparison of Two Psychopathy Models.
Carre, Jessica R; Mueller, Steven M; Schleicher, Karly M; Jones, Daniel N
2018-04-01
Although psychopathy is an interpersonally harmful construct, few studies have compared different psychopathy models in predicting different types of workplace deviance. We examined how the Triarchic Psychopathy Model (TRI-PM) and the Self-Report Psychopathy-Short Form (SRP-SF) predicted deviant workplace behaviors in two forms: sexual harassment and deviant work behaviors. Using structural equation modeling, the latent factor of psychopathy was predictive of both types of deviant workplace behavior. Specifically, the SRP-SF significantly predicted both measures of deviant workplace behavior. With respect to the TRI-PM, meanness and disinhibition significantly predicted higher scores on the workplace deviance and workplace sexual harassment measures. Future research needs to investigate the influence of psychopathy on deviant workplace behaviors, and consider the measures used when investigating these constructs.
Reum, J C P
2011-12-01
Three lipid correction models were evaluated for liver and white dorsal muscle from Squalus acanthias. For muscle, all three models performed well, based on the Akaike Information Criterion corrected for small sample sizes (AICc), and predicted similar lipid corrections to δ13C, which were up to 2.8‰ higher than those predicted using previously published models based on multispecies data. For liver, which had higher bulk C:N values than white muscle, all three models performed poorly, and lipid-corrected δ13C values were best approximated by simply adding 5.74‰ to bulk δ13C values. © 2011 The Author. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
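The AICc-based model comparison can be sketched like this; the two candidate correction forms and the synthetic C:N data are illustrative, not the three models the study actually evaluated:

```python
import numpy as np

rng = np.random.default_rng(5)
cn = rng.uniform(3, 7, 25)                       # bulk C:N of tissue samples
delta = 0.9 * (cn - 3.0) + rng.normal(0, 0.3, 25)  # correction to d13C (per mil)

def aicc(resid, k, n):
    """AICc from Gaussian residuals with k fitted parameters."""
    rss = np.sum(resid**2)
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

n = len(cn)
# Candidate A: constant correction (intercept only)
resid_a = delta - delta.mean()
# Candidate B: correction linear in C:N
A = np.column_stack([np.ones(n), cn])
beta, *_ = np.linalg.lstsq(A, delta, rcond=None)
resid_b = delta - A @ beta

aicc_a = aicc(resid_a, k=2, n=n)                 # mean + error variance
aicc_b = aicc(resid_b, k=3, n=n)                 # slope, intercept + variance
```

The lower AICc wins; the liver result in the abstract corresponds to the case where no C:N-dependent candidate beats a simple constant offset.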
Backović, Mihailo; Krämer, Michael; Maltoni, Fabio; Martini, Antony; Mawatari, Kentarou; Pellen, Mathieu
Weakly interacting dark matter particles can be pair-produced at colliders and detected through signatures featuring missing energy in association with either QCD/EW radiation or heavy quarks. In order to constrain the mass and the couplings to standard model particles, accurate and precise predictions for production cross sections and distributions are of prime importance. In this work, we consider various simplified models with s-channel mediators. We implement such models in the FeynRules/MadGraph5_aMC@NLO framework, which makes it possible to include higher-order QCD corrections in realistic simulations and to study their effects systematically. As a first phenomenological application, we present predictions for dark matter production in association with jets and with a top-quark pair at the LHC, at next-to-leading order accuracy in QCD, including matching/merging to parton showers. Our study shows that higher-order QCD corrections to dark matter production via s-channel mediators have a significant impact not only on total production rates, but also on the shapes of distributions. We also show that the inclusion of next-to-leading order effects results in a sizeable reduction of the theoretical uncertainties.
The Anatomy of a Likely Donor: Econometric Evidence on Philanthropy to Higher Education
ERIC Educational Resources Information Center
Lara, Christen; Johnson, Daniel
2014-01-01
In 2011, philanthropic giving to higher education institutions totaled $30.3 billion, an 8.2% increase over the previous year. Roughly, 26% of those funds came from alumni donations. This article builds upon existing economic models to create an econometric model to explain and predict the pattern of alumni giving. We test the model using data…
NASA Technical Reports Server (NTRS)
Bansal, P. N.; Arseneaux, P. J.; Smith, A. F.; Turnberg, J. E.; Brooks, B. M.
1985-01-01
Results of dynamic response and stability wind tunnel tests of three 62.2 cm (24.5 in) diameter models of the Prop-Fan, an advanced turboprop, are presented. Measurements of dynamic response were made with the rotors mounted on an isolated nacelle, with varying tilt for nonuniform inflow. One model was also tested using a semi-span wing and fuselage configuration for response to realistic aircraft inflow. Stability tests were performed using tunnel turbulence or a nitrogen jet for excitation. Measurements are compared with predictions made using beam analysis methods for the model with straight blades, and finite element analysis methods for the models with swept blades. Correlations between measured and predicted rotating blade natural frequencies are very good for all the models. The 1P (once-per-revolution) dynamic response of the straight-blade model is reasonably well predicted; the 1P response of the swept blades is underpredicted, and the wing-induced response of the straight blade is overpredicted. Two models did not flutter, as predicted. One swept-blade model encountered an instability at a higher RPM than predicted, showing the predictions to be conservative.
Potential predictability of Northern America surface temperature in AGCMs and CGCMs
NASA Astrophysics Data System (ADS)
Tang, Youmin; Chen, Dake; Yan, Xiaoqin
2015-07-01
In this study, the potential predictability of the Northern America (NA) surface air temperature (SAT) was explored using an information-based predictability framework and two multiple-model ensemble products: a one-tier prediction by coupled models (T1), and a two-tier prediction by atmospheric models only (T2). Furthermore, the potential predictability was optimally decomposed into different modes for both T1 and T2 by extracting the most predictable structures. Emphasis was placed on the comparison of the predictability between T1 and T2. It was found that the potential predictability of the NA SAT is seasonally and spatially dependent in both T1 and T2. Higher predictability occurs in spring and winter and over the southeastern US and northwestern Canada. There is no significant difference in potential predictability between T1 and T2 for most areas of NA, although T1 has higher potential predictability than T2 in the southeastern US. Both T1 and T2 display similar most predictable components (PrCs) for the NA SAT, characterized by an inter-annual variability mode and a long-term trend mode. The first is inherent to tropical Pacific sea surface temperature forcing, such as the El Niño-Southern Oscillation, whereas the second is closely associated with global warming. In general, the PrC modes better characterize the predictability in T1 than in T2, in particular for the inter-annual variability mode in the fall. The prediction skill against observations is better measured by predictable component analysis (PrCA) than by principal component analysis for all seasons, indicating the stronger capability of PrCA in extracting prediction targets.
External validation of the Garvan nomograms for predicting absolute fracture risk: the Tromsø study.
Ahmed, Luai A; Nguyen, Nguyen D; Bjørnerem, Åshild; Joakimsen, Ragnar M; Jørgensen, Lone; Størmer, Jan; Bliuc, Dana; Center, Jacqueline R; Eisman, John A; Nguyen, Tuan V; Emaus, Nina
2014-01-01
Absolute risk estimation is a preferred approach for assessing fracture risk and making treatment decisions. This study aimed to evaluate and validate the predictive performance of the Garvan Fracture Risk Calculator in a Norwegian cohort. The analysis included 1637 women and 1355 men aged 60+ years from the Tromsø study. All incident fragility fractures between 2001 and 2009 were registered. The predicted probabilities of non-vertebral osteoporotic and hip fractures were determined using models with and without BMD. The discrimination and calibration of the models were assessed, and reclassification analysis was used to compare the models' performance. The incidence of osteoporotic and hip fracture was 31.5 and 8.6 per 1000 population in women, respectively; in men the corresponding incidence was 12.2 and 5.1. The predicted 5-year and 10-year probabilities of fracture were consistently higher in the fracture group than in the non-fracture group for all models. The 10-year predicted probability of hip fracture in those with fracture was 2.8 (women) to 3.1 (men) times higher than in those without fracture. There was close agreement between predicted and observed risk in both sexes up to the fourth quintile of risk; among those in the highest quintile, the models over-estimated the risk of fracture. Models with BMD performed better than models with body weight in correctly classifying risk in individuals with and without fracture. The overall net decrease in reclassification of the model with weight compared to the model with BMD was 10.6% (p = 0.008) in women and 17.2% (p = 0.001) in men for osteoporotic fractures, and 13.3% (p = 0.07) in women and 17.5% (p = 0.09) in men for hip fracture. The Garvan Fracture Risk Calculator is valid and clinically useful in identifying individuals at high risk of fracture, and the models with BMD performed better than those with body weight in fracture risk prediction.
De Laet, Steven; Doumen, Sarah; Vervoort, Eleonora; Colpin, Hilde; Van Leeuwen, Karla; Goossens, Luc; Verschueren, Karine
2014-01-01
This study examined how peer relationships (i.e., sociometric and perceived popularity) and teacher-child relationships (i.e., support and conflict) impact one another throughout late childhood. The sample included 586 children (46% boys), followed annually from Grades 4 to 6 (M(age.wave1) = 9.26 years). Autoregressive cross-lagged modeling was applied. Results stress the importance of peer relationships in shaping teacher-child relationships and vice versa. Higher sociometric popularity predicted more teacher-child support, which in turn predicted higher sociometric popularity, beyond changes in children's prosocial behavior. Higher perceived popularity predicted more teacher-child conflict (driven by children's aggressive behavior), which, in turn and in itself, predicted higher perceived popularity. The influence of the "invisible hand" of both teachers and peers in classrooms has been made visible. © 2014 The Authors. Child Development © 2014 Society for Research in Child Development, Inc.
ERIC Educational Resources Information Center
Christal, Melodie E., Ed.
Practitioner papers and research papers on higher education planning and budgeting are presented. "Before the Roof Caves In: A Predictive Model for Physical Plant Renewal" by Frederick M. Biedenweg and Robert E. Hutson outlines a systematic approach that was used at Stanford University to predict the associated costs of physical plant…
Zhuang, Kai; Izallalen, Mounir; Mouser, Paula; Richter, Hanno; Risso, Carla; Mahadevan, Radhakrishnan; Lovley, Derek R
2011-02-01
The advent of rapid complete genome sequencing, and the potential to capture this information in genome-scale metabolic models, provide the possibility of comprehensively modeling microbial community interactions. For example, Rhodoferax and Geobacter species are acetate-oxidizing Fe(III)-reducers that compete in anoxic subsurface environments and this competition may have an influence on the in situ bioremediation of uranium-contaminated groundwater. Therefore, genome-scale models of Geobacter sulfurreducens and Rhodoferax ferrireducens were used to evaluate how Geobacter and Rhodoferax species might compete under diverse conditions found in a uranium-contaminated aquifer in Rifle, CO. The model predicted that at the low rates of acetate flux expected under natural conditions at the site, Rhodoferax will outcompete Geobacter as long as sufficient ammonium is available. The model also predicted that when high concentrations of acetate are added during in situ bioremediation, Geobacter species would predominate, consistent with field-scale observations. This can be attributed to the higher expected growth yields of Rhodoferax and the ability of Geobacter to fix nitrogen. The modeling predicted relative proportions of Geobacter and Rhodoferax in geochemically distinct zones of the Rifle site that were comparable to those that were previously documented with molecular techniques. The model also predicted that under nitrogen fixation, higher carbon and electron fluxes would be diverted toward respiration rather than biomass formation in Geobacter, providing a potential explanation for enhanced in situ U(VI) reduction in low-ammonium zones. These results show that genome-scale modeling can be a useful tool for predicting microbial interactions in subsurface environments and shows promise for designing bioremediation strategies.
ERIC Educational Resources Information Center
Herridge, Bart; Heil, Robert
2003-01-01
Predictive modeling has been a popular topic in higher education for the last few years. This case study shows an example of an effective use of modeling combined with market segmentation to strategically divide large, unmanageable prospect and inquiry pools and convert them into applicants, and eventually, enrolled students. (Contains 6 tables.)
ERIC Educational Resources Information Center
Redmond, M. William, Jr.
2011-01-01
The purpose of this study is to develop a preadmission predictive model of student success for prospective first-time African American college applicants at a predominately White four-year public institution within the Pennsylvania State System of Higher Education. This model will use two types of variables. They are (a) cognitive variables (i.e.,…
NASA Astrophysics Data System (ADS)
Bardant, Teuku Beuna; Dahnum, Deliana; Amaliyah, Nur
2017-11-01
Simultaneous saccharification and fermentation (SSF) of palm oil (Elaeis guineensis) empty fruit bunch (EFB) pulp was investigated as part of an ethanol production process. SSF was studied by observing the effect on ethanol yield of substrate loading in the range 10-20% w/w, cellulase loading of 5-30 FPU/g substrate, and yeast addition of 1-2% v/v. A mathematical model describing the effects of these three variables on ethanol yield was developed using Response Surface Methodology-Cheminformatics (RSM-CI). The model gave acceptable accuracy in predicting ethanol yield for SSF, with a coefficient of determination (R2) of 0.8899. Model validation against data from a previous study gave R2 = 0.7942, which was acceptable for using the model in trend prediction analysis. Trend prediction based on the model showed that SSF trends toward higher yield when the process is operated at high enzyme concentration and low substrate concentration. The SHF model, on the other hand, while also indicating that better yield is obtained at lower substrate concentration, shows that operation at higher substrate concentration remains possible with only slightly lower yield. This opportunity to operate at high substrate loading makes SHF the preferable option for application at commercial scale.
NASA Technical Reports Server (NTRS)
Bell, Jordan R.; Case, Jonathan L.; LaFontaine, Frank J.; Kumar, Sujay V.
2012-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed a Greenness Vegetation Fraction (GVF) dataset, which is updated daily using swaths of Normalized Difference Vegetation Index data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the NASA EOS Aqua and Terra satellites. NASA SPoRT began generating daily real-time GVF composites at 1-km resolution over the Continental United States (CONUS) on 1 June 2010. The purpose of this study is to compare the National Centers for Environmental Prediction (NCEP) climatology GVF product (currently used in operational weather models) to the SPoRT-MODIS GVF during June to October 2010. The NASA Land Information System (LIS) was employed to study the impacts of the SPoRT-MODIS GVF dataset on a land surface model (LSM) apart from a full numerical weather prediction (NWP) model. For the 2010 warm season, the SPoRT GVF in the western portion of the CONUS was generally higher than the NCEP climatology. The eastern CONUS GVF had variations both above and below the climatology during the period of study. These variations in GVF led to direct impacts on the rates of heating and evaporation from the land surface. In the West, higher latent heat fluxes prevailed, which enhanced the rates of evapotranspiration and soil moisture depletion in the LSM. By late summer and autumn, both the average sensible and latent heat fluxes increased in the West as a result of the more rapid soil drying and higher coverage of GVF. The impacts of the SPoRT GVF dataset on NWP were also examined for a single severe weather case study using the Weather Research and Forecasting (WRF) model. Two separate coupled LIS/WRF model simulations were made for the 17 July 2010 severe weather event in the Upper Midwest using the NCEP and SPoRT GVFs, with all other model parameters remaining the same.
Based on the sensitivity results, regions with higher GVF in the SPoRT model runs had higher evapotranspiration and lower direct surface heating, which typically resulted in lower (higher) predicted 2-m temperatures (2-m dewpoint temperatures). Portions of the Northern Plains states experienced substantial increases in convective available potential energy as a result of the higher SPoRT/MODIS GVFs. These differences produced subtle yet quantifiable differences in the simulated convective precipitation systems for this event.
Health risk factors as predictors of workers' compensation claim occurrence and cost.
Schwatka, Natalie V; Atherly, Adam; Dally, Miranda J; Fang, Hai; vS Brockbank, Claire; Tenney, Liliana; Goetzel, Ron Z; Jinnett, Kimberly; Witter, Roxana; Reynolds, Stephen; McMillen, James; Newman, Lee S
2017-01-01
The objective of this study was to examine the predictive relationships between employee health risk factors (HRFs) and workers' compensation (WC) claim occurrence and costs. Logistic regression and generalised linear models were used to estimate the predictive association between HRFs and claim occurrence and cost among a cohort of 16 926 employees from 314 large, medium and small businesses across multiple industries. First, unadjusted models (HRFs only) were estimated; second, adjusted models (HRFs plus demographic and work organisation variables) were estimated. The unadjusted models demonstrated that several HRFs were predictive of WC claim occurrence and cost. After adjusting for demographic and work organisation differences between employees, many of the relationships previously established did not achieve statistical significance. Stress was the only HRF to display a consistent relationship with claim occurrence, though the type of stress mattered: stress at work was marginally predictive of higher odds of incurring a WC claim (p<0.10), while stress at home and stress over finances were predictive of higher and lower costs of claims, respectively (p<0.05). The unadjusted model results indicate that HRFs are predictive of future WC claims. However, the disparate findings between the unadjusted and adjusted models indicate that future research is needed to examine the multilevel relationship between employee demographics, organisational factors, HRFs and WC claims. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
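As a sketch of the unadjusted modeling step, a logistic regression of claim occurrence on a single standardized HRF can be fit by Newton-Raphson. The data, the stress score, and the effect size are synthetic assumptions, not the study's cohort:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
stress = rng.normal(0, 1, n)                      # standardized HRF score
p_true = 1 / (1 + np.exp(-(-2.0 + 0.5 * stress))) # assumed true model
claim = (rng.random(n) < p_true).astype(int)      # 1 = WC claim filed

X = np.column_stack([np.ones(n), stress])
beta = np.zeros(2)
for _ in range(25):                               # Newton-Raphson (IRLS)
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (claim - p)                      # score vector
    H = (X * (p * (1 - p))[:, None]).T @ X        # observed information
    beta += np.linalg.solve(H, grad)

odds_ratio = np.exp(beta[1])                      # OR per 1 SD of stress
```

The adjusted models in the study add demographic and work-organisation columns to X, which is exactly what attenuated most HRF coefficients toward non-significance.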
Modeling charmonium-η decays of J^PC = 1^-- higher charmonia
NASA Astrophysics Data System (ADS)
Anwar, Muhammad Naeem; Lu, Yu; Zou, Bing-Song
2017-06-01
We propose a new model to create a light meson in heavy quarkonium transitions, inspired by the Nambu-Jona-Lasinio (NJL) model. Hadronic transitions of J^PC = 1^-- higher charmonia with the emission of an η meson are studied in the framework of the proposed model. The model shows its potential to reproduce the observed decay widths and make predictions for the unobserved channels. We present our predictions for the decay widths of Ψ → J/ψ η and Ψ → h_c(1P) η, where Ψ are higher S- and D-wave vector charmonia, which provide useful references to search for higher charmonia and determine their properties in forthcoming experiments. The predicted branching fraction B(ψ(4415) → h_c(1P) η) = 4.62 × 10^-4 is one order of magnitude smaller than that of the J/ψ η channel. Estimates of the partial decay width Γ(Y → J/ψ η) are given for Y(4360), Y(4390), and Y(4660) by assuming them to be c c-bar bound states with quantum numbers 3^3D_1, 3^3D_1, and 5^3S_1, respectively. Our results favor these assignments for Y(4360) and Y(4660). The corresponding experimental data for these Y states have large statistical errors which do not provide any constraint on the mixing angle if we introduce S-D mixing. To identify Y(4390), precise measurements of its hadronic branching fractions are required, which are eagerly awaited from BESIII.
NASA Technical Reports Server (NTRS)
Cronkhite, James D.
1993-01-01
Accurate vibration prediction for helicopter airframes is needed to 'fly from the drawing board' without costly development testing to solve vibration problems. The principal analytical tool for vibration prediction within the U.S. helicopter industry is the NASTRAN finite element analysis. Under the NASA DAMVIBS research program, Bell conducted NASTRAN modeling, ground vibration testing, and correlations of both metallic (AH-1G) and composite (ACAP) airframes. The objectives of the program were to assess NASTRAN airframe vibration correlations, to investigate contributors to poor agreement, and to improve modeling techniques. In the past, there has been low confidence in higher frequency vibration prediction for helicopters that have multibladed rotors (three or more blades) with predominant excitation frequencies typically above 15 Hz. Bell's findings under the DAMVIBS program, discussed in this paper, included the following: (1) accuracy of finite element models (FEM) for composite and metallic airframes generally were found to be comparable; (2) more detail is needed in the FEM to improve higher frequency prediction; (3) secondary structure not normally included in the FEM can provide significant stiffening; (4) damping can significantly affect phase response at higher frequencies; and (5) future work is needed in the areas of determination of rotor-induced vibratory loads and optimization.
Prediction model of dissolved oxygen in ponds based on ELM neural network
NASA Astrophysics Data System (ADS)
Li, Xinfei; Ai, Jiaoyan; Lin, Chunhuan; Guan, Haibin
2018-02-01
Dissolved oxygen in ponds is affected by many factors, and its distribution is unbalanced. In this paper, in order to mitigate the imbalance of the dissolved oxygen distribution more effectively, a dissolved oxygen prediction model based on the Extreme Learning Machine (ELM) algorithm is established, building on the method of improving the dissolved oxygen distribution by artificial push flow. Lake Jing at Guangxi University was selected as the experimental area. Using the model to predict the dissolved oxygen concentration for pumps at different voltages, the results show that the ELM prediction accuracy is higher than that of the BP algorithm, with a mean square error MSE_ELM = 0.0394 and a correlation coefficient R_ELM = 0.9823. The prediction results for the 24 V pump push flow show that the discrete prediction curve approximates the measured values well. The model can provide a basis for decisions on artificially improving the dissolved oxygen distribution.
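As an illustration of the ELM idea this abstract relies on (a random, untrained hidden layer; output weights solved in closed form by least squares), here is a minimal numpy sketch on toy data. The toy target, hidden-layer size, and tanh activation are assumptions for demonstration, not the study's model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (a stand-in for a dissolved-oxygen series).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

# ELM: random input weights and biases are fixed, never trained.
n_hidden = 50
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)        # hidden-layer activations

# Output weights come from a single least-squares solve, which is why
# ELM training is fast compared with backpropagation (the "BP algorithm").
H = hidden(X)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = hidden(X) @ beta
mse = np.mean((y - y_hat) ** 2)
print(f"training MSE: {mse:.4f}")
```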
Model Forecast Skill and Sensitivity to Initial Conditions in the Seasonal Sea Ice Outlook
NASA Technical Reports Server (NTRS)
Blanchard-Wrigglesworth, E.; Cullather, R. I.; Wang, W.; Zhang, J.; Bitz, C. M.
2015-01-01
We explore the skill of predictions of September Arctic sea ice extent from dynamical models participating in the Sea Ice Outlook (SIO). Forecasts submitted in August, at roughly 2 month lead times, are skillful. However, skill is lower in forecasts submitted to SIO, which began in 2008, than in hindcasts (retrospective forecasts) of the last few decades. The multimodel mean SIO predictions offer slightly higher skill than the single-model SIO predictions, but neither beats a damped persistence forecast at longer than 2 month lead times. The models are largely unsuccessful at predicting each other, indicating a large difference in model physics and/or initial conditions. Motivated by this, we perform an initial condition sensitivity experiment with four SIO models, applying a fixed -1 m perturbation to the initial sea ice thickness. The significant range of the response among the models suggests that different model physics make a significant contribution to forecast uncertainty.
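The damped persistence benchmark mentioned above (damp the latest observed anomaly by the lag-1 autocorrelation raised to the lead time) can be sketched as follows. The AR(1) series and its parameters are illustrative, not SIO sea ice data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic extent anomalies as a red-noise (AR1) series.
r_true = 0.7
x = np.zeros(500)
for t in range(1, 500):
    x[t] = r_true * x[t - 1] + rng.normal()

def damped_persistence(series, lead):
    """Forecast = latest anomaly damped by r**lead, r = lag-1 autocorrelation."""
    r = np.corrcoef(series[:-1], series[1:])[0, 1]
    return (r ** lead) * series[-1]

fcst = damped_persistence(x, lead=2)
print(f"2-lag damped persistence forecast: {fcst:.3f}")
```

A dynamical forecast has to beat this near-trivial baseline to demonstrate added value, which is the standard the SIO submissions failed to meet at leads beyond 2 months.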
Designing a Predictive Model of Student Satisfaction in Online Learning
ERIC Educational Resources Information Center
Parahoo, Sanjai K; Santally, Mohammad Issack; Rajabalee, Yousra; Harvey, Heather Lea
2016-01-01
Higher education institutions consider student satisfaction to be one of the major elements in determining the quality of their programs. The objective of the study was to develop a model of student satisfaction to identify the influencers that emerged in online higher education settings. The study adopted a mixed method approach to identify…
Prediction of protein-protein interactions based on PseAA composition and hybrid feature selection.
Liu, Liang; Cai, Yudong; Lu, Wencong; Feng, Kaiyan; Peng, Chunrong; Niu, Bing
2009-03-06
Based on pseudo amino acid (PseAA) composition and a novel hybrid feature selection framework, this paper presents a computational system to predict PPIs (protein-protein interactions) using 8796 protein pairs. These pairs are coded by PseAA composition, resulting in 114 features. A hybrid feature selection system, mRMR-KNNs-wrapper, is applied to obtain an optimized feature set by excluding poorly performing and/or redundant features, leaving 103 features. Using the optimized 103-feature subset, a prediction model is trained and tested in the k-nearest neighbors (KNNs) learning system. This prediction model achieves an overall prediction accuracy of 76.18%, evaluated by a 10-fold cross-validation test, which is 1.46% higher than with the initial 114 features and 6.51% higher than with the 20 features coded by amino acid composition. The PPI predictor developed for this research is available for public use at http://chemdata.shu.edu.cn/ppi.
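A minimal sketch of the evaluation step described above, KNN classification scored by 10-fold cross-validation, is shown below on synthetic two-class data. The data, k, and Euclidean distance are illustrative assumptions; this is not the mRMR-KNNs-wrapper system itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-class data (a stand-in for the 114-D PseAA feature vectors).
n = 200
X = np.vstack([rng.normal(0, 1, (n, 5)), rng.normal(2, 1, (n, 5))])
y = np.array([0] * n + [1] * n)

def knn_predict(X_tr, y_tr, X_te, k=5):
    """Majority vote among the k nearest training points (Euclidean)."""
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    return (y_tr[nn].mean(axis=1) > 0.5).astype(int)

# 10-fold cross-validation accuracy.
idx = rng.permutation(len(y))
folds = np.array_split(idx, 10)
accs = []
for f in folds:
    mask = np.ones(len(y), bool)
    mask[f] = False                      # hold out this fold
    pred = knn_predict(X[mask], y[mask], X[f])
    accs.append((pred == y[f]).mean())
print(f"10-fold CV accuracy: {np.mean(accs):.3f}")
```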
Piñero, Federico; Tisi Baña, Matías; de Ataide, Elaine Cristina; Hoyos Duque, Sergio; Marciano, Sebastian; Varón, Adriana; Anders, Margarita; Zerega, Alina; Menéndez, Josemaría; Zapata, Rodrigo; Muñoz, Linda; Padilla Machaca, Martín; Soza, Alejandro; McCormack, Lucas; Poniachik, Jaime; Podestá, Luis G; Gadano, Adrian; Boin, Ilka S F Fatima; Duvoux, Christophe; Silva, Marcelo
2016-11-01
The French alpha-fetoprotein (AFP) model has recently shown superior results compared to the Milan criteria (MC) for prediction of hepatocellular carcinoma (HCC) recurrence after liver transplantation (LT) in European populations. The aim of this study was to explore the predictive capacity of the AFP model for HCC recurrence in a Latin-American cohort. Three hundred twenty-seven patients with HCC were included from a total of 2018 patients transplanted at 15 centres. Serum AFP and imaging data were both recorded at listing. Predictability was assessed by the Net Reclassification Improvement (NRI) method. Overall, 82% and 79% of the patients were within MC and the AFP model, respectively. NRI showed superior predictability of the AFP model over MC. Patients with an AFP score >2 points had a higher risk of recurrence at 5 years (hazard ratio [HR] = 3.15; P = 0.0001) and lower patient survival (HR = 1.51; P = 0.03). Among patients exceeding MC, a score ≤2 points identified a subgroup with lower recurrence (5% vs 42%; P = 0.013) and higher survival rates (84% vs 45%; P = 0.038). In cases treated with bridging procedures, following restaging, a score >2 points identified higher recurrence (HR = 2.2; P = 0.12) and a lower survival rate (HR = 2.25; P = 0.03). A comparative analysis between HBV and non-HBV patients showed that the AFP model performed better in non-HBV patients. The AFP model could be useful in Latin-American countries to better select patients for LT in subgroups presenting with extended criteria. However, particular attention should be focused on patients with HBV.
Pathways between self-esteem and depression in couples.
Johnson, Matthew D; Galambos, Nancy L; Finn, Christine; Neyer, Franz J; Horne, Rebecca M
2017-04-01
Guided by concepts from a relational developmental perspective, this study examined intra- and interpersonal associations between self-esteem and depressive symptoms in a sample of 1,407 couples surveyed annually across 6 years in the Panel Analysis of Intimate Relations and Family Dynamics (pairfam) study. Autoregressive cross-lagged model results demonstrated that self-esteem predicted future depressive symptoms for male partners at all times, replicating the vulnerability model for men (low self-esteem is a risk factor for future depression). Additionally, a cross-partner association emerged between symptoms of depression: Higher depressive symptoms in one partner were associated with higher levels of depression in the other partner one year later. Finally, supportive dyadic coping, the support that partners reported providing to one another in times of stress, was tested as a potential interpersonal mediator of pathways between self-esteem and depression. Female partners' higher initial levels of self-esteem predicted male partners' subsequent reports of increased supportive dyadic coping, which, in turn, predicted higher self-esteem and fewer symptoms of depression among female partners in the future. Male partners' initially higher symptoms of depression predicted less frequent supportive dyadic coping subsequently reported by female partners, which was associated with increased feelings of depression in the future. Couple relations represent an important contextual factor that may be implicated in the developmental pathways connecting self-esteem and symptoms of depression.
Salgado, J Cristian; Andrews, Barbara A; Ortuzar, Maria Fernanda; Asenjo, Juan A
2008-01-18
The prediction of the partition behaviour of proteins in aqueous two-phase systems (ATPS) using mathematical models based on their amino acid composition was investigated. The predictive models are based on the average surface hydrophobicity (ASH). The ASH was estimated by means of models that use the three-dimensional structure of proteins and by models that use only the amino acid composition of proteins. These models were evaluated for a set of 11 proteins with known experimental partition coefficients in four ATPS: polyethylene glycol (PEG) 4000/phosphate, PEG/sulfate, PEG/citrate and PEG/dextran, considering three levels of NaCl concentration (0.0% w/w, 0.6% w/w and 8.8% w/w). The results indicate that such prediction is feasible, even though the quality of the prediction depends strongly on the ATPS and its operational conditions, such as the NaCl concentration. The ATPS 0 model, which uses the three-dimensional structure, obtains results similar to those given by previous models based on variables measured in the laboratory; in addition, it maintains the main characteristics of the hydrophobic resolution and intrinsic hydrophobicity reported before. Three mathematical models, ATPS I-III, based only on the amino acid composition were evaluated. The best results were obtained by the ATPS I model, which assumes that all of the amino acids are completely exposed. The performance of the ATPS I model follows the behaviour reported previously, i.e. its correlation coefficients improve as the NaCl concentration increases in the system and, therefore, the effect of protein hydrophobicity prevails over other effects such as charge or size. Its best predictive performance was obtained for the PEG/dextran system at high NaCl concentration, where an increase in predictive capacity of at least 54.4% with respect to the models that use the three-dimensional structure of the protein was obtained.
In addition, the ATPS I model exhibits high correlation coefficients in that system, higher than 0.88 on average, and correlation coefficients higher than 0.67 for the remaining ATPS at high NaCl concentration. Finally, we tested our best model, the ATPS I model, on the prediction of the partition coefficient of the protein invertase. We found that the predictive capacity of the ATPS I model is better in PEG/dextran systems, where the relative error of the prediction with respect to the experimental value is 15.6%.
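Under the ATPS I assumption (every residue fully exposed), the ASH reduces to a composition-weighted average of per-residue hydrophobicities. The toy sketch below illustrates only that arithmetic; the hydrophobicity values and the five-residue alphabet are placeholders, not the scale or model used in the study.

```python
# Illustrative per-residue hydrophobicity values (placeholders, NOT the
# scale from the paper).
scale = {"A": 0.31, "L": 1.70, "K": -0.99, "D": -0.77, "G": 0.00}

def ash_from_sequence(seq):
    """ASH under full exposure: the plain average of residue hydrophobicities,
    equivalent to weighting each scale value by its compositional fraction."""
    return sum(scale[aa] for aa in seq) / len(seq)

print(f"ASH = {ash_from_sequence('ALKGDLLA'):.3f}")
```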
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions are improved. A new dynamic approach (MM-1) is proposed to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with the optimal model combination scheme (MM-O) by employing them to predict streamflow generated from a known hydrologic model (the "abcd" model or the VIC model) with heteroscedastic error variance, as well as from a hydrologic model whose structure differs from that of the candidate models. Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than single-model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that, as measurement errors increase, MM-1 assigns weights equally across the models, whereas MM-O always assigns higher weights to the best-performing candidate model in the calibration period.
Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
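The optimal-combination (MM-O) idea described above, least-squares weights for the candidate models fitted over a calibration period, can be sketched as below. The synthetic "observations" and the two models' error structures are illustrative assumptions, not the abcd or VIC models.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Observed" monthly flows and two imperfect candidate-model predictions.
obs = rng.gamma(shape=2.0, scale=50.0, size=300)
m1 = obs + rng.normal(0, 20, 300)         # model 1: noisy but unbiased
m2 = 0.8 * obs + rng.normal(0, 10, 300)   # model 2: biased, less noisy

# Optimal (least-squares) combination weights over the calibration period.
A = np.column_stack([m1, m2])
w, *_ = np.linalg.lstsq(A, obs, rcond=None)
combo = A @ w

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"weights: {w.round(3)}")
print(f"RMSE  m1={rmse(m1, obs):.1f}  m2={rmse(m2, obs):.1f}  "
      f"combo={rmse(combo, obs):.1f}")
```

Because each single model is itself a feasible combination (weight 1 on that model, 0 on the other), the least-squares combination can never do worse than the best single model on the calibration data; the interesting question, which the study probes, is how this carries over to independent periods and misspecified models.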
Assessment of prediction skill in equatorial Pacific Ocean in high resolution model of CFS
NASA Astrophysics Data System (ADS)
Arora, Anika; Rao, Suryachandra A.; Pillai, Prasanth; Dhakate, Ashish; Salunke, Kiran; Srivastava, Ankur
2018-01-01
The effect of increasing atmospheric resolution on the prediction skill of the El Niño-Southern Oscillation phenomenon in the climate forecast system model is explored in this paper. Improvement in prediction skill for sea surface temperature (SST) and winds at all leads, compared to the low resolution model, is observed in the tropical Indo-Pacific basin. The high resolution model is able to capture extreme events reasonably well. As a result, the signal-to-noise ratio is improved in the high resolution model. However, the spring predictability barrier (SPB) for summer months in the Nino 3 and Nino 3.4 regions is stronger in the high resolution model, in spite of the improvement in overall prediction skill and dynamics everywhere else. The anomaly correlation coefficient of SST with observations in the Nino 3.4 region, targeting boreal summer months at lead times of 3-8 months, decreased in the high resolution model compared to its lower resolution counterpart. It is noted that a higher variance of winds predicted in the spring season over the central equatorial Pacific, compared to the observed variance, results in a stronger than normal response of the subsurface ocean and hence increases the SPB for boreal summer months in the high resolution model.
Van Onselen, Christina; Paul, Steven M; Lee, Kathryn; Dunn, Laura; Aouizerat, Bradley E; West, Claudia; Dodd, Marylin; Cooper, Bruce; Miaskowski, Christine
2013-02-01
Sleep disturbance is a problem for oncology patients. To evaluate how sleep disturbance and daytime sleepiness (DS) changed from before to six months following surgery and whether certain characteristics predicted initial levels and/or the trajectories of these parameters. Patients (n=396) were enrolled prior to surgery and completed monthly assessments for six months following surgery. The General Sleep Disturbance Scale was used to assess sleep disturbance and DS. Using hierarchical linear modeling, demographic, clinical, symptom, and psychosocial adjustment characteristics were evaluated as predictors of initial levels and trajectories of sleep disturbance and DS. All seven General Sleep Disturbance Scale scores were above the cutoff for clinically meaningful levels of sleep disturbance. Lower performance status; higher comorbidity, attentional fatigue, and physical fatigue; and more severe hot flashes predicted higher preoperative levels of sleep disturbance. Higher levels of education predicted higher sleep disturbance scores over time. Higher levels of depressive symptoms predicted higher preoperative levels of sleep disturbance, which declined over time. Lower performance status; higher body mass index; higher fear of future diagnostic tests; not having had sentinel lymph node biopsy; having had an axillary lymph node dissection; and higher depression, physical fatigue, and attentional fatigue predicted higher DS prior to surgery. Higher levels of education, not working for pay, and not having undergone neo-adjuvant chemotherapy predicted higher DS scores over time. Sleep disturbance is a persistent problem for patients with breast cancer. The effects of interventions that can address modifiable risk factors need to be evaluated.
Gupta, Punkaj; Rettiganti, Mallikarjuna; Gossett, Jeffrey M; Daufeldt, Jennifer; Rice, Tom B; Wetzel, Randall C
2018-01-01
To create a novel tool to predict favorable neurologic outcomes during ICU stay among children with critical illness. Logistic regression models using adaptive lasso methodology were used to identify independent factors associated with favorable neurologic outcomes. A mixed effects logistic regression model was used to create the final prediction model including all predictors selected from the lasso model. Model validation was performed using a 10-fold internal cross-validation approach. Virtual Pediatric Systems (VPS, LLC, Los Angeles, CA) database. Patients less than 18 years old admitted to one of the participating ICUs in the Virtual Pediatric Systems database were included (2009-2015). None. A total of 160,570 patients from 90 hospitals qualified for inclusion. Of these, 1,675 patients (1.04%) were associated with a decline in Pediatric Cerebral Performance Category scale by at least 2 between ICU admission and ICU discharge (unfavorable neurologic outcome). The independent factors associated with unfavorable neurologic outcome included higher weight at ICU admission, higher Pediatric Index of Mortality-2 score at ICU admission, cardiac arrest, stroke, seizures, head/nonhead trauma, use of conventional mechanical ventilation and high-frequency oscillatory ventilation, prolonged ICU length of stay, and prolonged use of mechanical ventilation. The presence of chromosomal anomaly, cardiac surgery, and utilization of nitric oxide were associated with favorable neurologic outcome. The final online prediction tool can be accessed at https://soipredictiontool.shinyapps.io/GNOScore/. Our model predicted 139,688 patients with favorable neurologic outcomes in an internal validation sample, when the observed number of patients with favorable neurologic outcomes was 139,591. The area under the receiver operating characteristic curve for the validation model was 0.90.
This proposed prediction tool combines 20 risk factors into a single probability to predict favorable neurologic outcome during ICU stay among children with critical illness. Future studies should seek external validation and improved discrimination of this prediction tool.
NASA Astrophysics Data System (ADS)
Bernardes, S.
2017-12-01
Outputs from coupled carbon-climate models show considerable variability in atmospheric and land fields over the 21st century, including changes in temperature and in the spatiotemporal distribution and quantity of precipitation over the planet. Reductions in water availability due to decreased precipitation and increased water demand by the atmosphere may reduce carbon uptake by critical ecosystems. Conversely, increases in atmospheric carbon dioxide have the potential to offset reductions in productivity. This work focuses on predicted responses of plants to environmental changes and on how plants will adjust their water use efficiency (WUE, plant production per water loss by evapotranspiration) in the 21st century. Predicted changes in WUE were investigated using an ensemble of Earth System Models from the Coupled Model Intercomparison Project Phase 5 (CMIP5), flux tower data and products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. Scenarios for climate futures used two representative concentration pathways, including carbon concentration peak in 2040 (RCP4.5) and rising emissions throughout the 21st century (RCP8.5). Model results covered the periods 2006-2099 (predicted) and 1850-2005 (reference). IPCC SREX regions were used to compare modeled, flux and satellite data and to address the significant intermodel variability observed for the CMIP5 ensemble (larger variability for RCP8.5, higher intermodel agreement in Southeast Asia, lower intermodel agreement in arid areas). An evaluation of model skill at the regional level supported model selection and the spatiotemporal analysis of changes in WUE. Departures of projected conditions from historical values are presented for both concentration pathways at global and regional levels, including latitudinal distributions.
High model sensitivity to the different concentration pathways and increases in GPP and WUE were observed for most of the planet (increases were consistently higher for RCP8.5). Higher latitudes in the northern hemisphere (boreal region) are predicted to experience larger increases in GPP and WUE, with changes in WUE generally tracking changes in GPP. Models point to decreases in productivity and WUE mostly in the tropics, affecting tropical forests in the Amazon and in Central America.
A Predictive Model of Anesthesia Depth Based on SVM in the Primary Visual Cortex
Shi, Li; Li, Xiaoyuan; Wan, Hong
2013-01-01
In this paper, a novel model for predicting anesthesia depth is put forward based on local field potentials (LFPs) in the primary visual cortex (V1 area) of rats. The model is constructed using a Support Vector Machine (SVM) to realize online prediction and classification of anesthesia depth. The raw LFP signal was first decomposed into special scaling components by wavelet transform; among these components, those containing higher frequency information were well suited to a more precise analysis of anesthetic depth. Secondly, the characteristics of the anesthetized states were extracted by complexity analysis, and two frequency domain parameters were also selected. The extracted features were used as the input vector of the prediction model. Finally, we collected anesthesia samples from LFP recordings during visual stimulus experiments on Long Evans rats. Our results indicate that the predictive model is accurate and computationally fast, and that it is well suited for online prediction. PMID:24044024
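As a hedged sketch of the kind of frequency-domain feature extraction such a model takes as input (band power computed from an LFP segment), the snippet below uses a synthetic signal; the sampling rate, band edges, and signal are assumptions, and this is not the paper's wavelet/complexity pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000  # sampling rate in Hz (assumed)

# Synthetic "LFP": a 10 Hz rhythm plus broadband noise.
t = np.arange(0, 2, 1 / fs)
lfp = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

def band_power(x, fs, lo, hi):
    """Sum of periodogram power in the [lo, hi) frequency band."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].sum()

alpha = band_power(lfp, fs, 8, 13)    # band around the injected 10 Hz rhythm
gamma = band_power(lfp, fs, 30, 80)
print(f"alpha/gamma power ratio: {alpha / gamma:.2f}")
```

Scalar features like these band powers, stacked with complexity measures, would form the input vector handed to a classifier such as an SVM.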
Mental health measures in predicting outcomes for the selection and training of navy divers.
van Wijk, Charles H
2011-03-01
Two models have previously been enlisted to predict success in training using psychological markers. Both the Mental Health Model and Trait Anxiety Model have shown some success in predicting behaviours associated with arousal among student divers. This study investigated the potential of these two models to predict outcome in naval diving selection and training. Navy diving candidates (n = 137) completed the Brunel Mood Scale and the State-Trait Personality Inventory (trait-anxiety scale) prior to selection. The mean scores of the candidates accepted for training were compared to those who were not accepted. The mean scores of the candidates who passed training were then compared to those who failed. A number of trainees withdrew from training due to injury, and their scores were also compared to those who completed the training. Candidates who were not accepted were more depressed, fatigued and confused than those who were accepted for training, and reported higher trait anxiety. There were no significant differences between the candidates who passed training and those who did not. However, injured trainees were tenser, more fatigued and reported higher trait anxiety than the rest. Age, gender, home language, geographical region of origin and race had no significant interaction with outcome results. While the models could partially discriminate between the mean scores of different outcome groups, none of them contributed meaningfully to predicting individual outcome in diving training. Both models may have potential in identifying proneness to injury, and this requires further study.
Aerodynamic-structural model of offwind yacht sails
NASA Astrophysics Data System (ADS)
Mairs, Christopher M.
An aerodynamic-structural model of offwind yacht sails was created that is useful in predicting sail forces. Two sails were examined experimentally and computationally at several wind angles to explore a variety of flow regimes. The accuracy of the numerical solutions was measured by comparing to experimental results. The two sails examined were a Code 0 and a reaching asymmetric spinnaker. During experiment, balance, wake, and sail shape data were recorded for both sails in various configurations. Two computational steps were used to evaluate the computational model. First, an aerodynamic flow model that includes viscosity effects was used to examine the experimental flying shapes that were recorded. Second, the aerodynamic model was combined with a nonlinear, structural, finite element analysis (FEA) model. The aerodynamic and structural models were used iteratively to predict final flying shapes of offwind sails, starting with the design shapes. The Code 0 has relatively low camber and is used at small angles of attack. It was examined experimentally and computationally at a single angle of attack in two trim configurations, a baseline and overtrimmed setting. Experimentally, the Code 0 was stable and maintained large flow attachment regions. The digitized flying shapes from experiment were examined in the aerodynamic model. Force area predictions matched experimental results well. When the aerodynamic-structural tool was employed, the predictive capability was slightly worse. The reaching asymmetric spinnaker has higher camber and operates at higher angles of attack than the Code 0. Experimentally and computationally, it was examined at two angles of attack. Like the Code 0, at each wind angle, baseline and overtrimmed settings were examined. Experimentally, sail oscillations and large flow detachment regions were encountered. The computational analysis began by examining the experimental flying shapes in the aerodynamic model. 
In the baseline setting, the computational force predictions were fair at both wind angles examined. Force predictions were much improved in the overtrimmed setting when the sail was highly stalled and more stable. The same trends in force prediction were seen when employing the aerodynamic-structural model. Predictions were good to fair in the baseline setting but improved in the overtrimmed configuration.
Influences of misprediction costs on solar flare prediction
NASA Astrophysics Data System (ADS)
Huang, Xin; Wang, HuaNing; Dai, XingHua
2012-10-01
The costs of mispredicting flaring and non-flaring samples differ across applications of solar flare prediction; hence, solar flare prediction is considered a cost sensitive problem. A cost sensitive solar flare prediction model is built by modifying the basic decision tree algorithm. The inconsistency rate, with an exhaustive search strategy, is used to determine the optimal combination of magnetic field parameters in an active region, and the selected parameters are used as inputs of the solar flare prediction model. The performance of the cost sensitive solar flare prediction model is evaluated for different thresholds of solar flares. It is found that, as the cost of wrongly predicting flaring samples as non-flaring increases, more flaring samples are correctly predicted and more non-flaring samples are wrongly predicted; a larger cost of wrongly predicting flaring samples as non-flaring is required for higher thresholds of solar flares. This can serve as a guideline for choosing a proper cost to meet the requirements of different applications.
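The cost-sensitive decision rule underlying such a model can be illustrated directly: with misclassification costs c_fn (missed flare) and c_fp (false alarm), minimizing expected cost means predicting a flare whenever the predicted probability exceeds c_fp / (c_fp + c_fn). The costs and probabilities below are illustrative, not values from the paper.

```python
import numpy as np

def cost_sensitive_predict(p_flare, c_fn, c_fp):
    """Predict "flare" iff p * c_fn > (1 - p) * c_fp,
    i.e. iff p exceeds the cost-derived threshold c_fp / (c_fp + c_fn)."""
    return p_flare > c_fp / (c_fp + c_fn)

p = np.array([0.1, 0.3, 0.5, 0.7])
print(cost_sensitive_predict(p, c_fn=1, c_fp=1))   # symmetric costs
print(cost_sensitive_predict(p, c_fn=5, c_fp=1))   # missing a flare costs 5x
```

Raising c_fn lowers the threshold, so more flaring samples are caught at the price of more false alarms, exactly the trade-off the abstract reports.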
NASA Astrophysics Data System (ADS)
Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander
2017-06-01
Both basic science and marine spatial planning are in a need of high resolution spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill the scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings is essential for moving towards unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms were tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible and with predictive accuracy similar to previous studies of sonar-based mapping.
Powell, S M; Ratkowsky, D A; Tamplin, M L
2015-05-01
Most existing models for the spoilage of modified-atmosphere-packed Atlantic salmon are based on the growth of the spoilage organism Photobacterium phosphoreum. However, there is evidence that this organism is not the specific spoilage organism on salmon produced and packaged in Australia. We developed a predictive model for the growth of bacteria in Australian-produced Atlantic salmon stored under modified atmosphere conditions (30-98% carbon dioxide in nitrogen) at refrigeration temperatures (0-10 °C). As expected, both higher levels of carbon dioxide and lower temperatures decreased the observed growth rates of the total population. A Bělehrádek-type model for growth rate fitted the data best, with an acceptably low root mean square error. At low temperatures (∼0 °C) the growth rates in this study were similar to those predicted by other models, but at higher temperatures (∼10 °C) the growth rates were significantly lower in the current study.
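A Bělehrádek-type (square-root) model is commonly fitted as a straight line of the square root of growth rate against temperature, sqrt(rate) = b(T - Tmin). The sketch below fits synthetic data; the parameter values are illustrative, not the paper's estimates.

```python
import numpy as np

# Synthetic growth rates following sqrt(rate) = b * (T - Tmin)
# with illustrative parameters b = 0.03 and Tmin = -8 degC.
T = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
rate = (0.03 * (T - (-8.0))) ** 2

# Fit b and Tmin by regressing sqrt(rate) on T: slope = b,
# x-intercept = Tmin (the notional minimum temperature for growth).
slope, intercept = np.polyfit(T, np.sqrt(rate), 1)
b_hat = slope
Tmin_hat = -intercept / slope
print(f"b = {b_hat:.3f}, Tmin = {Tmin_hat:.1f} degC")
```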
MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models
NASA Astrophysics Data System (ADS)
Son, S. W.; Lim, Y.; Kim, D.
2017-12-01
The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient threshold of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the models' mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with smaller biases in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
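MJO skill of this kind is conventionally measured with the bivariate correlation between observed and forecast RMM index pairs, with skill defined as the lead time at which the correlation drops below 0.5. A rough Python sketch of that metric; the RMM pairs and correlation series below are invented for illustration:

```python
import math

def bivariate_correlation(obs, fcst):
    """obs, fcst: sequences of (RMM1, RMM2) pairs over verification times."""
    num = sum(o1 * f1 + o2 * f2 for (o1, o2), (f1, f2) in zip(obs, fcst))
    den = (math.sqrt(sum(o1 ** 2 + o2 ** 2 for o1, o2 in obs))
           * math.sqrt(sum(f1 ** 2 + f2 ** 2 for f1, f2 in fcst)))
    return num / den

def prediction_skill(cor_by_lead, threshold=0.5):
    """Skill: last lead day (1-based) before the correlation falls below
    the threshold; returns the full range if it never does."""
    for lead, cor in enumerate(cor_by_lead, start=1):
        if cor < threshold:
            return lead - 1
    return len(cor_by_lead)

# a perfectly phased, perfectly scaled forecast has bivariate correlation 1.0
obs = [(1.0, 0.5), (0.8, 0.9), (-0.3, 1.1)]
perfect = bivariate_correlation(obs, obs)
```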
Multicomponent model of deformation and detachment of a biofilm under fluid flow
Tierra, Giordano; Pavissich, Juan P.; Nerenberg, Robert; Xu, Zhiliang; Alber, Mark S.
2015-01-01
A novel biofilm model is described which systemically couples bacteria, extracellular polymeric substances (EPS) and solvent phases in biofilm. This enables the study of contributions of rheology of individual phases to deformation of biofilm in response to fluid flow as well as interactions between different phases. The model, which is based on first and second laws of thermodynamics, is derived using an energetic variational approach and phase-field method. Phase-field coupling is used to model structural changes of a biofilm. A newly developed unconditionally energy-stable numerical splitting scheme is implemented for computing the numerical solution of the model efficiently. Model simulations predict biofilm cohesive failure for the flow velocity between and m s−1 which is consistent with experiments. Simulations predict biofilm deformation resulting in the formation of streamers for EPS exhibiting a viscous-dominated mechanical response and the viscosity of EPS being less than . Higher EPS viscosity provides biofilm with greater resistance to deformation and to removal by the flow. Moreover, simulations show that higher EPS elasticity yields the formation of streamers with complex geometries that are more prone to detachment. These model predictions are shown to be in qualitative agreement with experimental observations. PMID:25808342
Assessing Predictive Properties of Genome-Wide Selection in Soybeans
Xavier, Alencar; Muir, William M.; Rainey, Katy Martin
2016-01-01
Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios to implement genomic selection for yield components in soybean (Glycine max (L.) Merr.). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with the greatest improvement observed in training sets up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. PMID:27317786
Numerical Modeling of STARx for Ex Situ Soil Remediation
NASA Astrophysics Data System (ADS)
Gerhard, J.; Solinger, R. L.; Grant, G.; Scholes, G.
2016-12-01
Growing stockpiles of soils contaminated with petroleum hydrocarbons are an outstanding problem worldwide. Self-sustaining Treatment for Active Remediation (STAR) is an emerging technology based on smouldering combustion that has been successfully deployed for in situ remediation. STAR has also been developed for ex situ applications (STARx). This work used a two-dimensional numerical model to systematically explore the sensitivity of ex situ remedial performance to key design and operational parameters. First, the model was calibrated and validated against pilot-scale experiments, providing confidence that the rate and extent of treatment were correctly predicted. Simulations then investigated the sensitivity of remedial performance to injected air flux, contaminant saturation, system configuration, heterogeneity of intrinsic permeability, heterogeneity of contaminant saturation, and system scale. Remedial performance was predicted to be most sensitive to the injected air flux, with higher air fluxes achieving higher treatment rates and remediating larger fractions of the initial contaminant mass. The uniformity of the advancing smouldering front was predicted to be highly dependent on effective permeability contrasts between treated and untreated sections of the contaminant pack. As a result, increased heterogeneity (of intrinsic permeability in particular) is predicted to lower remedial performance. Full-scale systems were predicted to achieve treatment rates an order of magnitude higher than the pilot scale for similar contaminant saturation and injected air flux. This work contributed to the large-scale STARx treatment system being tested at a field site in Fall 2016.
Dong, Jian-Jun; Li, Qing-Liang; Yin, Hua; Zhong, Cheng; Hao, Jun-Guang; Yang, Pan-Fei; Tian, Yu-Hong; Jia, Shi-Ru
2014-10-15
Sensory evaluation is regarded as a necessary procedure to ensure a reproducible quality of beer. Meanwhile, high-throughput analytical methods provide a powerful tool to analyse various flavour compounds, such as higher alcohols and esters. In this study, the relationship between flavour compounds and sensory evaluation was established by non-linear models such as partial least squares (PLS), genetic algorithm back-propagation neural network (GA-BP), and support vector machine (SVM). It was shown that SVM with a radial basis function (RBF) kernel achieved better prediction accuracy for both the calibration set (94.3%) and the validation set (96.2%) than the other models. Relatively lower prediction abilities were observed for GA-BP (52.1%) and PLS (31.7%). In addition, the kernel function played an essential role in SVM model training, as the prediction accuracy of SVM with a polynomial kernel function was only 32.9%. As a powerful multivariate statistical method, SVM holds great potential to assess beer quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
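The two kernels contrasted in the abstract (RBF vs. polynomial) are standard functions; a minimal Python sketch of both, where the hyperparameters `gamma`, `degree`, and `c` are assumptions for illustration, not the study's tuned values:

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian RBF kernel: K(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def poly_kernel(x, z, degree=3, c=1.0):
    """Polynomial kernel: K(x, z) = (x . z + c) ** degree."""
    return (sum(a * b for a, b in zip(x, z)) + c) ** degree

# the RBF kernel equals 1 at zero distance and decays with separation,
# which is what makes it a flexible local similarity measure for SVMs
same = rbf_kernel([1.0, 2.0], [1.0, 2.0])
far = rbf_kernel([0.0, 0.0], [3.0, 4.0])
```

The RBF kernel's locality is one common explanation for its robustness on chemometric data relative to a global polynomial kernel, consistent with the accuracy gap reported above.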
NASA Astrophysics Data System (ADS)
Hazza, Muataz Hazza F. Al; Adesta, Erry Y. T.; Riza, Muhammad
2013-12-01
High-speed milling has many advantages, such as a higher removal rate and high productivity. However, higher cutting speeds increase the flank wear rate, thus reducing cutting tool life. Therefore, estimating and predicting the flank wear length at early stages reduces the risk of unacceptable tooling costs. This research presents a neural network model for predicting and simulating flank wear in the CNC end milling process. A set of sparse experimental data for finish end milling on AISI H13 at a hardness of 48 HRC was collected to measure the flank wear length. The measured data were then used to train the developed neural network model, and an artificial neural network (ANN) was applied to predict the flank wear length. The network uses a feed-forward back-propagation architecture with twenty hidden layers and was designed with the MATLAB Neural Network Toolbox. The results show a high correlation between the predicted and the observed flank wear, which indicates the validity of the model.
Research on Fault Rate Prediction Method of T/R Component
NASA Astrophysics Data System (ADS)
Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu
2017-07-01
T/R components are an important part of large phased-array radar antennas; because of their large numbers and high fault rate, fault prediction for them is of considerable significance. To address the problems of the traditional grey model GM(1,1) in practical operation, this paper establishes a discrete grey model based on the original model, introduces an optimization factor to optimize the background value, and adds a linear term to the prediction model, yielding an improved discrete grey model with linear regression. Finally, an example is simulated and compared with other models. The results show that the proposed method has higher accuracy, is simple to solve, and has a wider scope of application.
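For reference, the classic GM(1,1) grey model that the paper improves upon can be sketched compactly. The background-value weight `alpha` below is the quantity an optimization factor would adjust (0.5 reproduces the traditional model); the input series is invented for illustration, and the paper's discrete-grey and linear-regression extensions are not reproduced:

```python
import math

def gm11(x0, alpha=0.5, n_forecast=2):
    """Classic GM(1,1) grey model. `alpha` weights the background value
    z(k) = alpha*x1(k) + (1-alpha)*x1(k-1); 0.5 gives the traditional
    model, and tuning it is one common optimization."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                 # 1-AGO series
    z = [alpha * x1[k] + (1 - alpha) * x1[k - 1] for k in range(1, n)]
    y = x0[1:]
    # least squares for the grey equation x0(k) = -a * z(k) + b
    m, sz, sy = len(z), sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    det = m * szz - sz * sz
    a = -(m * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):                                           # k is 0-based
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    vals = [x0[0]] + [x1_hat(k) - x1_hat(k - 1)
                      for k in range(1, n + n_forecast)]     # inverse AGO
    return vals[:n], vals[n:]

# invented near-geometric fault-count series, growth ratio ~1.2
fitted, forecast = gm11([10, 12, 14.4, 17.28])
```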
Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun
2007-09-01
Near-infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques are regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of different sample roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that when the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise, the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness model.
Cogswell, Rebecca; Kobashigawa, Erin; McGlothlin, Dana; Shaw, Robin; De Marco, Teresa
2012-11-01
The Registry to Evaluate Early and Long-Term Pulmonary Arterial Hypertension (PAH) Disease Management (REVEAL) model was designed to predict 1-year survival in patients with PAH. Multivariate prediction models need to be evaluated in cohorts distinct from the derivation set to determine external validity. In addition, limited data exist on the utility of this model in the prediction of long-term survival. REVEAL model performance was assessed to predict 1-year and 5-year outcomes, defined as survival or composite survival or freedom from lung transplant, in 140 patients with PAH. The validation cohort had a higher proportion of human immunodeficiency virus infection (7.9% vs 1.9%, p < 0.0001), methamphetamine use (19.3% vs 4.9%, p < 0.0001), and portal hypertension-associated PAH (16.4% vs 5.1%, p < 0.0001) compared with the development cohort. The C-index of the model to predict survival was 0.765 at 1 year and 0.712 at 5 years of follow-up. The C-index of the model to predict composite survival or freedom from lung transplant was 0.805 and 0.724 at 1 and 5 years of follow-up, respectively. Prediction by the model, however, was weakest among patients with intermediate-risk predicted survival. The REVEAL model had adequate discrimination to predict 1-year survival in this small but clinically distinct validation cohort. Although the model also had predictive ability out to 5 years, prediction was limited among patients of intermediate risk, suggesting our prediction methods can still be improved. Copyright © 2012. Published by Elsevier Inc.
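The C-index reported above is Harrell's concordance statistic for censored survival data. A simplified Python sketch of it; pairs with tied event times are skipped here for brevity, which a production implementation would handle more carefully:

```python
from itertools import combinations

def c_index(risk, time, event):
    """Harrell's concordance index for right-censored survival data.
    A pair is comparable when the shorter follow-up ends in an observed
    event; it is concordant when the model gives that subject the
    higher predicted risk. (Tied times are skipped for brevity.)"""
    concordant = ties = comparable = 0
    for i, j in combinations(range(len(time)), 2):
        if time[j] < time[i]:
            i, j = j, i                  # make i the shorter follow-up
        if time[i] == time[j] or not event[i]:
            continue                     # censored-first pairs are not comparable
        comparable += 1
        if risk[i] > risk[j]:
            concordant += 1
        elif risk[i] == risk[j]:
            ties += 1
    return (concordant + 0.5 * ties) / comparable

# a model that ranks risk perfectly scores 1.0; random ranking hovers near 0.5
perfect = c_index([3, 2, 1], [1, 2, 3], [1, 1, 1])
```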
Poverty, hunger, education, and residential status impact survival in HIV.
McMahon, James; Wanke, Christine; Terrin, Norma; Skinner, Sally; Knox, Tamsin
2011-10-01
Despite combination antiretroviral therapy (ART), HIV-infected people have higher mortality than non-infected people. Lower socioeconomic status (SES) predicts higher mortality in many chronic illnesses, but data in people with HIV are limited. We evaluated 878 HIV-infected individuals followed from 1995 to 2005. Cox proportional hazards for all-cause mortality were estimated for SES measures and other factors. Mixed effects analyses examined how SES impacts factors predicting death. The 200 participants who died were older, had lower CD4 counts, and had higher viral loads (VL). Age, transmission category, education, albumin, CD4 counts, VL, hunger, and poverty predicted death in univariate analyses; age, CD4 counts, albumin, VL, and poverty did so in the multivariable model. Mixed models showed associations between (1) CD4 counts with education and hunger; (2) albumin with education, homelessness, and poverty; and (3) VL with education and hunger. SES contributes to mortality in HIV-infected persons directly and indirectly, and should be a target of health policy in this population.
A range-based predictive localization algorithm for WSID networks
NASA Astrophysics Data System (ADS)
Liu, Yuan; Chen, Junjie; Li, Gang
2017-11-01
Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks integrated with RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for networks with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.
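The approximate point-in-triangulation test used to shrink the target's residence area rests on a point-in-triangle predicate. A minimal Python sketch of that geometric core using cross-product signs; the full APIT logic (iterating over triples of anchor nodes and intersecting the resulting regions) is omitted:

```python
def edge_sign(p, a, b):
    """Sign of the 2-D cross product (a - b) x (p - b): which side of
    directed edge a->b the point p lies on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def point_in_triangle(p, a, b, c):
    """True when p lies inside triangle abc or on its boundary: p must
    not be on strictly opposite sides of any two edges."""
    d1 = edge_sign(p, a, b)
    d2 = edge_sign(p, b, c)
    d3 = edge_sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)
```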
Turnell, Adrienne; Rasmussen, Victoria; Butow, Phyllis; Juraskova, Ilona; Kirsten, Laura; Wiener, Lori; Patenaude, Andrea; Hoekstra-Weebers, Josette; Grassi, Luigi
2016-02-01
Burnout is reportedly high among oncology healthcare workers. Psychosocial oncologists may be particularly vulnerable to burnout. However, their work engagement may also be high, counteracting stress in the workplace. This study aimed to document the prevalence of both burnout and work engagement, and the predictors of both, utilizing the job demands-resources (JD-R) model, within a sample of psychosocial oncologists. Psychosocial-oncology clinicians (N = 417), recruited through 10 international and national psychosocial-oncology societies, completed an online questionnaire. Measures included demographic and work characteristics, burnout (the MBI-HSS Emotional Exhaustion (EE) and Depersonalization (DP) subscales), the Utrecht Work Engagement Scale, and measures of job demands and resources. High EE and DP were reported by 20.2 and 6.6% of participants, respectively, while 95.3% reported average to high work engagement. Lower levels of job resources and higher levels of job demands predicted greater burnout, as predicted by the JD-R model, but the predicted interaction between these characteristics and burnout was not significant. Higher levels of job resources predicted higher levels of work engagement. Burnout was surprisingly low and work engagement high in this sample. Nonetheless, one in five psychosocial oncologists has high EE. Our results suggest that both the positive (resources) and negative (demands) aspects of this work environment have an impact on burnout and engagement, offering opportunities for intervention. Theories such as the JD-R model can be useful in guiding research in this area.
Punnen, Sanoj; Freedland, Stephen J; Polascik, Thomas J; Loeb, Stacy; Risk, Michael C; Savage, Stephen; Mathur, Sharad C; Uchio, Edward; Dong, Yan; Silberstein, Jonathan L
2018-06-01
The 4Kscore® test accurately detects aggressive prostate cancer and reduces unnecessary biopsies. However, its performance in African American men has been unknown. We assessed test performance in a cohort of men with a large African American representation. Men referred for prostate biopsy at 8 Veterans Affairs medical centers were prospectively enrolled in the study. All men underwent phlebotomy for 4Kscore test assessment prior to prostate biopsy. The primary outcome was the detection of Grade Group 2 or higher cancer on biopsy. We assessed the discrimination, calibration and clinical usefulness of the 4Kscore to predict Grade Group 2 or higher prostate cancer and compared it to a base model consisting of age, digital rectal examination and prostate specific antigen. Additionally, we compared test performance in African American and non-African American men. Of the 366 enrolled men 205 (56%) were African American and 131 (36%) had Grade Group 2 or higher prostate cancer. The 4Kscore test showed better discrimination (AUC 0.81 vs 0.74, p <0.01) and higher clinical usefulness on decision curve analysis than the base model. Test prediction closely approximated the observed risk of Grade Group 2 or higher prostate cancer. There was no difference in test performance between African American and non-African American men (AUC 0.80 vs 0.84, p = 0.32), and the test outperformed the base model in each group. The 4Kscore test accurately predicts aggressive prostate cancer for biopsy decision making in African American and non-African American men. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
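The AUC figures quoted above are areas under the ROC curve, which can be computed compactly through the rank-sum (Mann-Whitney) identity: the probability that a randomly chosen case outscores a randomly chosen control. A Python sketch with toy scores (not study data):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    fraction of (positive, negative) pairs the positive wins,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```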
ERIC Educational Resources Information Center
Dodd, Bucky J.
2013-01-01
Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…
Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.
Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh
2014-07-01
This study develops a procedure related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty associated with model structures of varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performs two-stage Monte-Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then estimates the CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model; WWQM) were compared based on data collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure, because in this case the more simplistic representation of reality (the first-order K-C model) results in higher uncertainty in the model's predictions. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management. Copyright © 2014 Elsevier Ltd. All rights reserved.
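One plausible reading of the characteristic CV is the coefficient of variation of the behavioral-set model outputs, averaged over time steps; the paper's exact aggregation may differ. A toy Python sketch under that assumption:

```python
import statistics

def characteristic_cv(outputs):
    """Mean coefficient of variation across time steps, where `outputs`
    holds one model-output series per behavioral parameter set
    (rows = parameter sets, columns = time steps)."""
    cvs = []
    for t in range(len(outputs[0])):
        column = [series[t] for series in outputs]
        cvs.append(statistics.stdev(column) / statistics.mean(column))
    return sum(cvs) / len(cvs)

# identical behavioral runs imply zero predictive uncertainty
no_spread = characteristic_cv([[10.0, 20.0], [10.0, 20.0]])
```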
Pronk, Marieke; Deeg, Dorly J H; Kramer, Sophia E
2018-04-17
The purpose of this study is to determine which demographic, health-related, mood, personality, or social factors predict discrepancies between older adults' functional speech-in-noise test result and their self-reported hearing problems. Data of 1,061 respondents from the Longitudinal Aging Study Amsterdam were used (ages ranged from 57 to 95 years). Functional hearing problems were measured using a digit triplet speech-in-noise test. Five questions were used to assess self-reported hearing problems. Scores of both hearing measures were dichotomized. Two discrepancy outcomes were created: (a) being unaware: those with functional but without self-reported problems (reference is aware: those with functional and self-reported problems); (b) reporting false complaints: those without functional but with self-reported problems (reference is well: those without functional and self-reported hearing problems). Two multivariable prediction models (logistic regression) were built with 19 candidate predictors. The speech reception threshold in noise was kept (forced) as a predictor in both models. Persons with higher self-efficacy (to initiate behavior) and higher self-esteem had a higher odds to being unaware than persons with lower self-efficacy scores (odds ratio [OR] = 1.13 and 1.11, respectively). Women had a higher odds than men (OR = 1.47). Persons with more chronic diseases and persons with worse (i.e., higher) speech-in-noise reception thresholds in noise had a lower odds to being unaware (OR = 0.85 and 0.91, respectively) than persons with less diseases and better thresholds, respectively. A higher odds to reporting false complaints was predicted by more depressive symptoms (OR = 1.06), more chronic diseases (OR = 1.21), and a larger social network (OR = 1.02). Persons with higher self-efficacy (to complete behavior) had a lower odds (OR = 0.86), whereas persons with higher self-esteem had a higher odds to report false complaints (OR = 1.21). 
The explained variance of both prediction models was small (Nagelkerke R2 = .11 for the unaware model, and .10 for the false complaints model). The findings suggest that a small proportion of the discrepancies between older individuals' results on a speech-in-noise screening test and their self-reports of hearing problems can be explained by the unique context of these individuals. The likelihood of discrepancies partly depends on a person's health (chronic diseases), demographics (gender), personality (self-efficacy to initiate behavior and to persist in adversity, self-esteem), mood (depressive symptoms), and social situation (social network size). Implications are discussed.
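The odds ratios reported above come from logistic regression coefficients via OR = exp(beta). A tiny Python reminder of that transformation, and of converting odds back to a probability; the numbers used are generic, not the study's coefficients:

```python
import math

def odds_ratio(beta, delta=1.0):
    """Odds ratio for a delta-unit increase in a predictor whose
    logistic-regression coefficient is beta: OR = exp(beta * delta)."""
    return math.exp(beta * delta)

def odds_to_probability(odds):
    """Convert odds back to a probability: p = odds / (1 + odds)."""
    return odds / (1.0 + odds)
```

A coefficient of zero maps to OR = 1 (no effect), which is why ORs above 1, such as the 1.13 for self-efficacy, indicate increased odds of the outcome per unit increase in the predictor.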
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, M. C.; Truster, T. J.; Cochran, K. B.
Advanced reactors designed to operate at higher temperatures than current light water reactors require structural materials with high creep strength and creep-fatigue resistance to achieve long design lives. Grade 91 is a ferritic/martensitic steel designed for long creep life at elevated temperatures. It has been selected as a candidate material for sodium fast reactor intermediate heat exchangers and other advanced reactor structural components. This report focuses on the creep deformation and rupture life of Grade 91 steel. The time required to complete an experiment limits the availability of long-life creep data for Grade 91 and other structural materials. Design methods often extrapolate the available shorter-term experimental data to longer design lives. However, extrapolation methods tacitly assume the underlying material mechanisms causing creep for long-life/low-stress conditions are the same as the mechanisms controlling creep in the short-life/high-stress experiments. A change in mechanism for long-term creep could cause design methods based on extrapolation to be non-conservative. The goal for physically-based microstructural models is to accurately predict material response in experimentally-inaccessible regions of design space. An accurate physically-based model for creep represents all the material mechanisms that contribute to creep deformation and damage and predicts the relative influence of each mechanism, which changes with loading conditions. Ideally, the individual mechanism models adhere to the material physics and not an empirical calibration to experimental data and so the model remains predictive for a wider range of loading conditions. This report describes such a physically-based microstructural model for Grade 91 at 600 °C. The model explicitly represents competing dislocation and diffusional mechanisms in both the grain bulk and grain boundaries.
The model accurately recovers the available experimental creep curves at higher stresses and the limited experimental data at lower stresses, predominately primary creep rates. The current model considers only one temperature. However, because the model parameters are, for the most part, directly related to the physics of fundamental material processes, the temperature dependence of the properties is known. Therefore, temperature dependence can be included in the model with limited additional effort. The model predicts a mechanism shift for 600 °C at approximately 100 MPa from a dislocation-dominated regime at higher stress to a diffusion-dominated regime at lower stress. This mechanism shift impacts the creep life, notch-sensitivity, and, likely, creep ductility of Grade 91. In particular, the model predicts existing extrapolation methods for creep life may be non-conservative when attempting to extrapolate data for higher stress creep tests to low stress, long-life conditions. Furthermore, the model predicts a transition from notch-strengthening behavior at high stress to notch-weakening behavior at lower stresses. Both behaviors may affect the conservatism of existing design methods.
Zhuang, Kai; Izallalen, Mounir; Mouser, Paula; Richter, Hanno; Risso, Carla; Mahadevan, Radhakrishnan; Lovley, Derek R
2011-01-01
The advent of rapid complete genome sequencing, and the potential to capture this information in genome-scale metabolic models, provide the possibility of comprehensively modeling microbial community interactions. For example, Rhodoferax and Geobacter species are acetate-oxidizing Fe(III)-reducers that compete in anoxic subsurface environments and this competition may have an influence on the in situ bioremediation of uranium-contaminated groundwater. Therefore, genome-scale models of Geobacter sulfurreducens and Rhodoferax ferrireducens were used to evaluate how Geobacter and Rhodoferax species might compete under diverse conditions found in a uranium-contaminated aquifer in Rifle, CO. The model predicted that at the low rates of acetate flux expected under natural conditions at the site, Rhodoferax will outcompete Geobacter as long as sufficient ammonium is available. The model also predicted that when high concentrations of acetate are added during in situ bioremediation, Geobacter species would predominate, consistent with field-scale observations. This can be attributed to the higher expected growth yields of Rhodoferax and the ability of Geobacter to fix nitrogen. The modeling predicted relative proportions of Geobacter and Rhodoferax in geochemically distinct zones of the Rifle site that were comparable to those that were previously documented with molecular techniques. The model also predicted that under nitrogen fixation, higher carbon and electron fluxes would be diverted toward respiration rather than biomass formation in Geobacter, providing a potential explanation for enhanced in situ U(VI) reduction in low-ammonium zones. These results show that genome-scale modeling can be a useful tool for predicting microbial interactions in subsurface environments and shows promise for designing bioremediation strategies. PMID:20668487
Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad
2016-02-01
Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
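Bayesian Personalized Ranking optimizes a pairwise objective: observed links should score higher than unobserved ones under the latent-feature model. A minimal stochastic-gradient sketch in Python with made-up entities and hyperparameters; the paper's per-predicate models and embedding details are not reproduced:

```python
import math

def bpr_step(U, V, s, p, n, lr=0.05, reg=0.01):
    """One SGD step of Bayesian Personalized Ranking: increase the score
    of the observed pair (s, p) relative to the unobserved pair (s, n),
    with L2 regularization on the latent factors."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    x = dot(U[s], V[p]) - dot(U[s], V[n])   # score margin
    g = 1.0 / (1.0 + math.exp(x))           # sigmoid(-x): gradient weight
    for k in range(len(U[s])):
        us, vp, vn = U[s][k], V[p][k], V[n][k]
        U[s][k] += lr * (g * (vp - vn) - reg * us)
        V[p][k] += lr * (g * us - reg * vp)
        V[n][k] += lr * (-g * us - reg * vn)

# tiny demo: subject entity 0, observed object 1, unobserved object 2
U = {0: [0.1, 0.1]}
V = {1: [0.1, 0.1], 2: [0.1, 0.1]}
for _ in range(200):
    bpr_step(U, V, 0, 1, 2)
```

After training, the model scores the observed link above the unobserved one, which is exactly the ranking property BPR is built to enforce.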
Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. 
Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
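The accuracy metrics this study reports (Pearson correlation, RMSE, bias) have simple closed forms; a small self-contained Python sketch for reference:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

def bias(pred, obs):
    """Mean signed error: positive when the model over-predicts."""
    return sum(p - o for p, o in zip(pred, obs)) / len(pred)
```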
Cucinotta, Francis A.; Cacao, Eliedonna
2017-05-12
Cancer risk is an important concern for galactic cosmic ray (GCR) exposures, which consist of a wide-energy range of protons, heavy ions and secondary radiation produced in shielding and tissues. Relative biological effectiveness (RBE) factors for surrogate cancer endpoints in cell culture models and tumor induction in mice vary considerably, including significant variations for different tissues and mouse strains. Many studies suggest non-targeted effects (NTE) occur for low doses of high linear energy transfer (LET) radiation, leading to deviation from the linear dose response model used in radiation protection. Using the mouse Harderian gland tumor experiment, the only extensive data-set for dose response modelling with a variety of particle types (>4), for the first time a particle track structure model of tumor prevalence is used to investigate the effects of NTEs in predictions of chronic GCR exposure risk. The NTE model led to a predicted risk 2-fold higher compared to a targeted effects model. The scarcity of data with animal models for tissues that dominate human radiation cancer risk, including lung, colon, breast, liver, and stomach, suggests that studies of NTEs in other tissues are urgently needed prior to long-term space missions outside the protection of the Earth's geomagnetic sphere.
Dimethylsulfide Chemistry: Annual, Seasonal, and Spatial Impacts on Sulfate
We incorporated oceanic emissions and atmospheric chemistry of dimethylsulfide (DMS) into the hemispheric Community Multiscale Air Quality model and performed annual model simulations without and with DMS chemistry. The model without DMS chemistry predicts higher concentrations o...
Figueroa, Isabel; Leipold, Doug; Leong, Steve; Zheng, Bing; Triguero-Carrasco, Montserrat; Fourie-O'Donohue, Aimee; Kozak, Katherine R; Xu, Keyang; Schutten, Melissa; Wang, Hong; Polson, Andrew G; Kamath, Amrita V
2018-05-14
For antibody-drug conjugates (ADCs) that carry a cytotoxic drug, doses that can be administered in preclinical studies are typically limited by tolerability, leading to a narrow dose range that can be tested. For molecules with non-linear pharmacokinetics (PK), this limited dose range may be insufficient to fully characterize the PK of the ADC and limits translation to humans. Mathematical PK models are frequently used for molecule selection during preclinical drug development and for translational predictions to guide clinical study design. Here, we present a practical approach that uses limited PK and receptor occupancy (RO) data of the corresponding unconjugated antibody to predict ADC PK when conjugation does not alter the non-specific clearance or the antibody-target interaction. We used a 2-compartment model incorporating non-specific and specific (target mediated) clearances, where the latter is a function of RO, to describe the PK of anti-CD33 ADC with dose-limiting neutropenia in cynomolgus monkeys. We tested our model by comparing PK predictions based on the unconjugated antibody to observed ADC PK data that was not utilized for model development. Prospective prediction of human PK was performed by incorporating in vitro binding affinity differences between species for varying levels of CD33 target expression. Additionally, this approach was used to predict human PK of other previously tested anti-CD33 molecules with published clinical data. The findings showed that, for a cytotoxic ADC with non-linear PK and limited preclinical PK data, incorporating RO in the PK model and using data from the corresponding unconjugated antibody at higher doses allowed the identification of parameters to characterize monkey PK and enabled human PK predictions.
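The core idea above, a specific (target-mediated) clearance that shuts off as receptor occupancy saturates, can be sketched as a minimal one-compartment simulation. Parameters and units below are invented for illustration and are not the paper's fitted values; the published model is a 2-compartment formulation.

```python
# Hypothetical parameters for illustration (not the paper's fitted values)
CL_NS = 0.25   # non-specific clearance, L/day
CL_SP = 2.0    # maximal specific (target-mediated) clearance, L/day
KD = 1.0       # concentration giving 50% receptor occupancy, ug/mL
V = 3.0        # volume of distribution, L

def simulate_conc(c0, days, dt=0.01):
    """Euler integration of a one-compartment model whose specific
    clearance falls off as receptor occupancy (RO) saturates."""
    c = c0
    t = 0.0
    while t < days:
        ro = c / (c + KD)                 # receptor occupancy
        cl = CL_NS + CL_SP * (1.0 - ro)   # total clearance shrinks with RO
        c += -(cl / V) * c * dt
        t += dt
    return c

# Higher doses saturate the target, so a larger fraction remains after 1 day:
frac_high = simulate_conc(100.0, 1.0) / 100.0
frac_low = simulate_conc(0.1, 1.0) / 0.1
```

This nonlinearity is why a narrow tolerable dose range is insufficient to characterize ADC PK, and why RO data from the unconjugated antibody at higher doses helps identify the parameters.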
Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics.
Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2017-03-04
Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model in Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)₄ model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model has a higher performance ability, not only for model fitting, but also for forecasting. Furthermore, considering the stability and the modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of identifying the future tendency. The number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before eventually entering a slow decline until it finally ends.
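A minimal implementation of the traditional grey GM(1,1) model mentioned above (the simplest of the three grey variants) might look like the sketch below; the test series is synthetic, not the Xinjiang case data.

```python
import math

def gm11_forecast(x0, steps=1):
    """Fit a textbook grey GM(1,1) model to the series x0 and
    forecast `steps` values ahead."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [x0[0]]
    for v in x0[1:]:
        x1.append(x1[-1] + v)
    # Background values z and least squares for the development
    # coefficient a and grey input b in x0(k) = -a*z(k) + b
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    m = n - 1
    sz = sum(z)
    szz = sum(v * v for v in z)
    sy = sum(y)
    szy = sum(zv * yv for zv, yv in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Time-response function of the whitened equation (0-based index)
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# Synthetic exponential series; GM(1,1) should recover the trend closely
series = [100.0 * 1.05 ** k for k in range(6)]
next_val = gm11_forecast(series, steps=1)[0]
```

The PECGM(1,1) and FGM(1,1) variants extend this base model with periodic and Fourier-series residual corrections, respectively.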
NASA Astrophysics Data System (ADS)
Zhu, Linqi; Zhang, Chong; Zhang, Chaomo; Wei, Yang; Zhou, Xueqing; Cheng, Yuan; Huang, Yuyang; Zhang, Le
2018-06-01
There is increasing interest in shale gas reservoirs due to their abundant reserves. As a key evaluation criterion, the total organic carbon content (TOC) of a reservoir can reflect its hydrocarbon generation potential. Existing TOC calculation models are not very accurate, and there is still room for improvement. In this paper, an integrated hybrid neural network (IHNN) model is proposed for predicting the TOC. This addresses an inherent limitation of existing algorithms, namely that the TOC information for low-TOC reservoirs, where TOC is easy to evaluate, itself comes from a prediction problem. By comparing the prediction models established on 132 rock samples from the shale gas reservoir within the Jiaoshiba area, it can be seen that the accuracy of the proposed IHNN model is much higher than that of the other prediction models. The mean square error on samples that were not used to establish the models was reduced from 0.586 to 0.442. The results show that TOC prediction is easier after the logging-based prediction has been improved. Furthermore, this paper puts forward the next research direction for the prediction model. The IHNN algorithm can help evaluate the TOC of a shale gas reservoir.
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1978-01-01
Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.
NASA Astrophysics Data System (ADS)
Sadi, Maryam
2018-01-01
In this study a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids by considering reduced temperature, acentric factor and molecular weight of ionic liquids, and nanoparticle concentration as input parameters. In order to accomplish modeling, 528 experimental data points extracted from the literature have been divided into training and testing subsets. The training set has been used to predict model coefficients and the testing set has been applied for model validation. The ability and accuracy of the developed model have been evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error, and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicate excellent agreement between model predictions and experimental data. Also, the results estimated by the developed GMDH model exhibit a higher accuracy when compared to the available theoretical correlations.
Modelling of the 10-micrometer natural laser emission from the mesospheres of Mars and Venus
NASA Technical Reports Server (NTRS)
Deming, D.; Mumma, M. J.
1983-01-01
The NLTE radiative transfer problem is solved to obtain the 00 deg 1 vibrational state population. This model successfully reproduces the existing center-to-limb observations, although higher spatial resolution observations are needed for a definitive test. The model also predicts total fluxes which are close to the observed values. The strength of the emission is predicted to be closely related to the instantaneous near-IR solar heating rate.
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into top soil can be described by alternative models with different degrees of fidelity: Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: Given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
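The model-error versus sampling-error trade-off described above can be sketched with a crude error budget: under a fixed computational budget, a cheaper model buys more Monte Carlo realizations, shrinking the sampling error as 1/sqrt(N) at the price of a fixed model bias. All costs and error magnitudes below are illustrative assumptions, not values from the study.

```python
import math

def total_error(model_bias, sigma, cost_per_run, budget):
    """Deterministic model error combined with Monte Carlo sampling
    error for however many realizations the budget affords."""
    n = max(1, int(budget / cost_per_run))
    sampling = sigma / math.sqrt(n)
    return math.sqrt(model_bias ** 2 + sampling ** 2)

BUDGET = 1000.0   # total CPU time available (arbitrary units)
SIGMA = 2.0       # spread of the uncertain prediction

# High fidelity (Richards-like): unbiased but expensive.
# Low fidelity (Green-Ampt-like): biased but two orders cheaper.
err_richards = total_error(0.0, SIGMA, cost_per_run=100.0, budget=BUDGET)
err_green_ampt = total_error(0.3, SIGMA, cost_per_run=1.0, budget=BUDGET)
best = "Green-Ampt" if err_green_ampt < err_richards else "Richards"
```

With these (assumed) numbers the cheap model wins; shrinking its bias further, e.g. through data assimilation as in the study, only strengthens that outcome.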
Zhao, Yue; Liu, Zhiyong; Liu, Chenfeng; Hu, Zhipeng
2017-01-01
Microalgae are considered to be a potential major biomass feedstock for biofuel due to their high lipid content. However, no correlation equations as a function of initial nitrogen concentration for lipid accumulation have been developed for simplicity to predict lipid production and optimize the lipid production process. In this study, a lipid accumulation model was developed with simple parameters based on the assumption that protein synthesis shifts to lipid synthesis as a linear function of nitrogen quota. The model predictions fitted well the growth, lipid content, and nitrogen consumption of Coelastrum sp. HA-1 under various initial nitrogen concentrations. The model was then applied successfully in Chlorella sorokiniana to predict the lipid content under different light intensities. The quantitative relationship between initial nitrogen concentrations and the final lipid content, along with a sensitivity analysis of the model, are also discussed. Based on the model results, the conversion efficiency from protein synthesis to lipid synthesis increases progressively in the microalgal metabolic process as nitrogen decreases; however, the carbohydrate content remains basically unchanged in both HA-1 and C. sorokiniana. PMID:28194424
GEOS S2S-2_1: GMAO's New High Resolution Seasonal Prediction System
NASA Technical Reports Server (NTRS)
Molod, Andrea; Akella, Santha; Andrews, Lauren; Barahona, Donifan; Borovikov, Anna; Chang, Yehui; Cullather, Richard; Hackert, Eric; Kovach, Robin; Koster, Randal;
2017-01-01
A new version of the modeling and analysis system used to produce sub-seasonal to seasonal forecasts has just been released by the NASA Goddard Global Modeling and Assimilation Office. The new version runs at higher atmospheric resolution (approximately 1/2 degree globally), contains a substantially improved model description of the cryosphere, and includes additional interactive earth system model components (aerosol model). In addition, the ocean data assimilation system has been replaced with a Local Ensemble Transform Kalman Filter. Here we describe the new system, along with the plans for the future (GEOS S2S-3_0), which will include a higher resolution ocean model and more interactive earth system model components (interactive vegetation, biomass burning from fires). We will also present results from a free-running coupled simulation with the new system and results from a series of retrospective seasonal forecasts. Results from retrospective forecasts show significant improvements in surface temperatures over much of the northern hemisphere and a much improved prediction of sea ice extent in both hemispheres. The precipitation forecast skill is comparable to previous S2S systems, and the only trade-off is an increased double ITCZ, which is expected as we go to higher atmospheric resolution.
What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models
NASA Astrophysics Data System (ADS)
Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman
2018-01-01
Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken from Mount Wilson Observatory in California, University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data using their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.
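One widely used refraction component that such a calculator could interchange is Bennett's standard formula. The sketch below assumes the standard atmosphere (1010 hPa, 10 °C); departures of the real temperature and pressure profile from that assumption are exactly the error source the observational campaign is meant to quantify.

```python
import math

def bennett_refraction_arcmin(apparent_alt_deg):
    """Bennett's standard refraction formula, in arcminutes, for an
    apparent altitude in degrees (standard atmosphere assumed)."""
    h = apparent_alt_deg
    return 1.0 / math.tan(math.radians(h + 7.31 / (h + 4.4)))

def scale_for_conditions(r_arcmin, pressure_hpa, temp_c):
    """Common first-order correction for local pressure/temperature."""
    return r_arcmin * (pressure_hpa / 1010.0) * (283.0 / (273.0 + temp_c))

# At the horizon the standard model gives roughly the canonical ~34'
r0 = bennett_refraction_arcmin(0.0)
```

Since the Sun moves about one diameter (~32') in two minutes at mid-latitudes, a ~10% error in horizon refraction already shifts the predicted rise/set time by tens of seconds.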
Wu, Chun-Sheng; Huang, Ju-Sheng; Chou, Hsin-Hsien
2006-01-01
Predictive models for describing the hydrodynamic behavior (bed-expansion and bed-pressure gradient) of a three-phase anaerobic fluidized bed reactor (AFBR) were developed according to wake theory together with more realistic dynamic bed-expansion experiments (with and without internal biogas production). A reliable correlation equation for the parameter k (mean volume ratio of wakes to bubbles) was also established, which is of help in estimating the liquid holdup of fluidized beds. The experimental expansion ratio of three-phase fluidized beds (E(GLS)) was approximately 18% higher than that of two-phase fluidized beds (E(LS)); whereas the experimental bed-pressure gradient of the former [(-DeltaP/H)(GLS)] was approximately 9.3% lower than that of the latter [(-DeltaP/H)(LS)]. Both the experimental and modeling results indicated that a higher superficial gas velocity (u(g)) gave a higher E(GLS) and a higher E(GLS) to E(LS) ratio as well as a lower (-DeltaP/H)(GLS) and a lower (-DeltaP/H)(GLS) to (-DeltaP/H)(LS) ratio. As for the operation stability of the AFBR, the sensitivity of u(g) to expansion height (H(GLS)) and (-DeltaP/H)(GLS) is between the sensitivity of superficial liquid velocity and biofilm thickness. The model predictions of E(GLS), (-DeltaP)(GLS), and (-DeltaP/H)(GLS) agreed well with the experimental measurements. Accordingly, the predictive models accounting for internal biogas production described fairly well the hydrodynamic behavior of the AFBR.
Using a knowledge-based planning solution to select patients for proton therapy.
Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R
2017-08-01
Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based-planning solution, uses plan libraries to model and predict organ-at-risk (OAR) dose-volume histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH predictions and whether these could correctly identify patients for proton therapy. Model PROT and Model PHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based plans (KBPs) were made for ten evaluation patients. DVH prediction accuracy was analyzed by comparing predicted-vs-achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if predicted Model PHOT mean dose minus predicted Model PROT mean dose (ΔPrediction) for combined OARs was ≥6 Gy, and benchmarked using achieved KBP doses. Achieved and predicted Model PROT/Model PHOT mean dose R² was 0.95/0.98. Generally, achieved mean dose for Model PHOT/Model PROT KBPs was respectively lower/higher than predicted. Comparing Model PROT/Model PHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by <2 Gy, on average. ΔPrediction ≥6 Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan solution could improve results. Copyright © 2017 Elsevier B.V. All rights reserved.
Pfeiffer, Ruth M.; Miglioretti, Diana L.; Kerlikowske, Karla; Tice, Jeffery; Vacek, Pamela M.; Gierach, Gretchen L.
2016-01-01
Purpose: Breast cancer risk prediction models are used to plan clinical trials and counsel women; however, relationships of predicted risks of breast cancer incidence and prognosis after breast cancer diagnosis are unknown. Methods: Using largely pre-diagnostic information from the Breast Cancer Surveillance Consortium (BCSC) for 37,939 invasive breast cancers (1996–2007), we estimated 5-year breast cancer risk (<1%; 1–1.66%; ≥1.67%) with three models: BCSC 1-year risk model (BCSC-1; adapted to 5-year predictions); Breast Cancer Risk Assessment Tool (BCRAT); and BCSC 5-year risk model (BCSC-5). Breast cancer-specific mortality post-diagnosis (range: 1–13 years; median: 5.4–5.6 years) was related to predicted risk of developing breast cancer using unadjusted Cox proportional hazards models, and in age-stratified (35–44; 45–54; 55–69; 70–89 years) models adjusted for continuous age, BCSC registry, calendar period, income, mode of presentation, stage and treatment. Mean age at diagnosis was 60 years. Results: Of 6,021 deaths, 2,993 (49.7%) were ascribed to breast cancer. In unadjusted case-only analyses, predicted breast cancer risk ≥1.67% versus <1.0% was associated with lower risk of breast cancer death; BCSC-1: hazard ratio (HR) = 0.82 (95% CI = 0.75–0.90); BCRAT: HR = 0.72 (95% CI = 0.65–0.81) and BCSC-5: HR = 0.84 (95% CI = 0.75–0.94). Age-stratified, adjusted models showed similar, although mostly non-significant HRs. Among women ages 55–69 years, HRs approximated 1.0. Generally, higher predicted risk was inversely related to percentages of cancers with unfavorable prognostic characteristics, especially among women 35–44 years. Conclusions: Among cases assessed with three models, higher predicted risk of developing breast cancer was not associated with greater risk of breast cancer death; thus, these models would have limited utility in planning studies to evaluate breast cancer mortality reduction strategies. 
Further, when offering women counseling, it may be useful to note that high predicted risk of developing breast cancer does not imply that if cancer develops it will behave aggressively. PMID:27560501
Job Preferences in the Anticipatory Socialization Phase: A Comparison of Two Matching Models.
ERIC Educational Resources Information Center
Moss, Mira K.; Frieze, Irene Hanson
1993-01-01
Responses from 86 business administration graduate students tested (1) a model matching self-concept to development of job preferences and (2) an expectancy-value model. Both models significantly predicted job preferences; a higher proportion of variance was explained by the expectancy-value model. (SK)
Park, Seungman
2017-09-01
Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis, cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of ECM or shear stress distribution on the cells, but less is known about the prediction of shear stress on the individual fibers or fiber networks despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for different structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving equations for mass and momentum conservation for all models. Through the flow solutions, I estimated permeability using Darcy's law. Average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. In terms of permeability, the present computational models matched well with previous models, which justifies our computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability was almost the same. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). 
The present computational models will provide new tools for predicting accurate functional properties and designing fibrous porous materials, thereby significantly advancing tissue engineering. Copyright © 2017 Elsevier B.V. All rights reserved.
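The permeability estimate via Darcy's law described above reduces to a one-line calculation once the flow solution supplies a flow rate and pressure drop across the network. The numbers below are illustrative SI values for a hypothetical 100 µm cube of matrix, not results from the study.

```python
def darcy_permeability(flow_rate, viscosity, length, area, delta_p):
    """Estimate permeability k from a computed flow solution via
    Darcy's law, Q = k*A*dP / (mu*L). Units must be consistent (SI here):
    flow_rate [m^3/s], viscosity [Pa*s], length [m], area [m^2], delta_p [Pa].
    """
    return flow_rate * viscosity * length / (area * delta_p)

# Illustrative: 1e-12 m^3/s of water (mu = 1e-3 Pa*s) through a
# 100 um cube of fibrous matrix under a 10 Pa pressure drop
k = darcy_permeability(flow_rate=1e-12, viscosity=1e-3,
                       length=100e-6, area=(100e-6) ** 2, delta_p=10.0)
```

The resulting k (~1e-12 m²) is in the range typical of loose fibrous media; denser networks (lower porosity) would yield smaller values, consistent with the porosity-permeability trend the study reports.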
NASA Astrophysics Data System (ADS)
Yu, H.; Gu, H.
2017-12-01
A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion, and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation prediction methods, the proposed methodology is based on a multivariate pressure prediction model and utilizes a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters in order to make accurate predictions with higher resolution in both vertical and lateral directions. With prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain higher-frequency, high-resolution seismic velocity to be used as the velocity input for seismic pressure prediction, and the density dataset to calculate accurate Overburden Pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation. Both structural variability and similarity of seismic waveform are used to incorporate well log data to characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs, and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and then the coefficients in the multivariate prediction model are determined in a trace-by-trace multivariate regression analysis on the petrophysical data. 
The coefficients are used to convert the velocity, porosity and shale volume datasets to effective stress and then to calculate formation pressure with OBP. Application of the proposed methodology to a research area in the East China Sea has shown that the method can bridge the gap between seismic and well-log pressure prediction and give predicted pressure values close to pressure measurements from well testing.
NASA Astrophysics Data System (ADS)
Matthaios, Vasileios N.; Triantafyllou, Athanasios G.; Albanis, Triantafyllos A.; Sakkas, Vasileios; Garas, Stelios
2018-05-01
Atmospheric modeling is considered an important tool with several applications, such as prediction of air pollution levels, air quality management, and environmental impact assessment studies. Therefore, evaluation studies must be made continuously in order to improve the accuracy and the approaches of air quality models. In the present work, an attempt is made to examine the efficiency of the air pollution model (TAPM) in simulating the surface meteorology, as well as the SO2 concentrations, in a mountainous complex terrain industrial area. Three configurations, firstly with default datasets, secondly with data assimilation, and thirdly with updated land use, were run in order to investigate the surface meteorology for a 3-year period (2009-2011), and one configuration was applied to predict SO2 concentration levels for the year 2011. The modeled hourly averaged meteorological and SO2 concentration values were statistically compared with those from five monitoring stations across the domain to evaluate the model's performance. Statistical measures showed that the surface temperature and relative humidity are predicted well in all three simulations, with index of agreement (IOA) higher than 0.94 and 0.70, respectively, at all monitoring sites, while an overprediction of extreme low temperature values is noted, with mountain altitudes playing an important role. However, the results also showed that the model's performance regarding the wind is related to the configuration. The TAPM default dataset predicted the wind variables better in the center of the simulation domain than at the boundaries, while the run with updated land use improved the horizontal winds at the boundaries. TAPM with assimilation predicted the wind variables fairly well over the whole domain, with IOA higher than 0.83 for the wind speed and higher than 0.85 for the horizontal wind components. 
Finally, the SO2 concentrations were assessed by the model with IOA varying from 0.37 to 0.57, mostly dependent on the grid/monitoring station of the simulated domain. The present study can be used, with relevant adaptations, as a user guideline for conducting future simulations in mountainous complex terrain.
NASA Astrophysics Data System (ADS)
Day, Jonathan J.; Tietsche, Steffen; Collins, Mat; Goessling, Helge F.; Guemas, Virginie; Guillory, Anabelle; Hurlin, William J.; Ishii, Masayoshi; Keeley, Sarah P. E.; Matei, Daniela; Msadek, Rym; Sigmond, Michael; Tatebe, Hiroaki; Hawkins, Ed
2016-06-01
Recent decades have seen significant developments in climate prediction capabilities at seasonal-to-interannual timescales. However, until recently the potential of such systems to predict Arctic climate had rarely been assessed. This paper describes a multi-model predictability experiment which was run as part of the Arctic Predictability and Prediction On Seasonal to Interannual Timescales (APPOSITE) project. The main goal of APPOSITE was to quantify the timescales on which Arctic climate is predictable. In order to achieve this, a coordinated set of idealised initial-value predictability experiments, with seven general circulation models, was conducted. This was the first model intercomparison project designed to quantify the predictability of Arctic climate on seasonal to interannual timescales. Here we present a description of the archived data set (which is available at the British Atmospheric Data Centre), an assessment of Arctic sea ice extent and volume predictability estimates in these models, and an investigation into the extent to which predictability depends on the initial state. The inclusion of additional models expands the range of sea ice volume and extent predictability estimates, demonstrating that there is model diversity in the potential to make seasonal-to-interannual timescale predictions. We also investigate whether sea ice forecasts started from extreme high and low sea ice initial states exhibit higher levels of potential predictability than forecasts started from close to the models' mean state, and find that the result depends on the metric. Although designed to address Arctic predictability, we describe the archived data here so that others can use this data set to assess the predictability of other regions and modes of climate variability on these timescales, such as the El Niño-Southern Oscillation.
Zhang, Ji-Li; Liu, Bo-Fei; Chu, Teng-Fei; Di, Xue-Ying; Jin, Sen
2012-06-01
A laboratory burning experiment was conducted to measure the fire spread speed, residual time, reaction intensity, fireline intensity, and flame length of the ground surface fuels collected from a Korean pine (Pinus koraiensis) and Mongolian oak (Quercus mongolica) mixed stand in the Maoer Mountains of Northeast China, under conditions of no wind, zero slope, and different moisture contents, loads, and mixture ratios of the fuels. The measured results were compared with those predicted by the extended Rothermel model to test the performance of the model, especially the effects of two different weighting methods on the fire behavior modeling of the mixed fuels. Against the model predictions, the mean absolute errors of the fire spread speed and reaction intensity of the fuels were 0.04 m·min⁻¹ and 77 kW·m⁻², and their mean relative errors were 16% and 22%, while the mean absolute errors of residual time, fireline intensity, and flame length were 15.5 s, 17.3 kW·m⁻¹, and 9.7 cm, and their mean relative errors were 55.5%, 48.7%, and 24%, respectively, indicating that the predicted values of residual time, fireline intensity, and flame length were lower than the observed ones. These errors could be regarded as the lower limits for the application of the extended Rothermel model in predicting the fire behavior of similar fuel types, and they provide valuable information for using the model to predict fire behavior under similar field conditions. As a whole, the two weighting methods did not show a significant difference in predicting the fire behavior of the mixed fuels with the extended Rothermel model. When the proportion of Korean pine fuels was lower, the predicted values of spread speed and reaction intensity obtained by the surface-area weighting method and those of fireline intensity and flame length obtained by the load weighting method were higher; when the proportion of Korean pine needles was higher, the contrary results were obtained.
Using Analog Ensemble to generate spatially downscaled probabilistic wind power forecasts
NASA Astrophysics Data System (ADS)
Delle Monache, L.; Shahriari, M.; Cervone, G.
2017-12-01
We use the Analog Ensemble (AnEn) method to generate probabilistic 80-m wind power forecasts. We use data from the NCEP GFS (~28 km resolution) and NCEP NAM (12 km resolution). We use forecast data from NAM and GFS, and analysis data from NAM, which enables us to: 1) use a lower-resolution model to create higher-resolution forecasts, and 2) use a higher-resolution model to create higher-resolution forecasts. The former essentially increases computing speed and the latter increases forecast accuracy. An aggregated model of the former can be compared against the latter to measure the accuracy of the AnEn spatial downscaling. The AnEn works by taking a deterministic future forecast and comparing it with past forecasts. The model searches for the best matching estimates within the past forecasts and selects the predictand values corresponding to these past forecasts as the ensemble prediction for the future forecast. Our study is based on predicting wind speed and air density at more than 13,000 grid points in the continental US. We run the AnEn model twice: 1) estimating 80-m wind speed by using predictor variables such as temperature, pressure, geopotential height, and the U- and V-components of wind, and 2) estimating air density by using predictors such as temperature, pressure, and relative humidity. We use the air density values to correct the standard wind power curves for different values of air density. The standard deviation of the ensemble members (i.e., the ensemble spread) is used as the degree of difficulty to predict wind power at different locations. The value of the correlation coefficient between the ensemble spread and the forecast error determines the appropriateness of this measure. This measure is important for wind farm developers, as building wind farms in regions with higher predictability will reduce the real-time risks of operating in the electricity markets.
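The analog search described in this abstract — match a new deterministic forecast against an archive of past forecasts and take the observations paired with the best matches as ensemble members — can be sketched as follows. This is a minimal illustration: the toy archive, the feature choice, and the plain Euclidean metric are assumptions for the sketch, not the operational AnEn configuration.

```python
import numpy as np

def analog_ensemble(new_fcst, past_fcsts, past_obs, n_members=5):
    """Return the n_members past observations whose forecasts best
    match new_fcst (Euclidean distance in predictor space), plus the
    ensemble spread (standard deviation of the members)."""
    d = np.linalg.norm(past_fcsts - new_fcst, axis=1)
    idx = np.argsort(d)[:n_members]          # best-matching past forecasts
    members = past_obs[idx]                  # their paired observations
    return members, float(members.std())

# Toy archive: predictors = two forecast features, predictand = 80-m wind speed
rng = np.random.default_rng(0)
past_fcsts = rng.normal(size=(200, 2))
past_obs = 8.0 + 2.0 * past_fcsts[:, 0] + rng.normal(scale=0.3, size=200)

members, spread = analog_ensemble(np.array([0.5, -0.2]), past_fcsts, past_obs)
print(members.shape, spread >= 0.0)
```

The spread of `members` is the quantity the abstract proposes as a site-by-site measure of predictive difficulty.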
Giri, Veda N.; Egleston, Brian; Ruth, Karen; Uzzo, Robert G.; Chen, David Y.T.; Buyyounouski, Mark; Raysor, Susan; Hooker, Stanley; Torres, Jada Benn; Ramike, Teniel; Mastalski, Kathleen; Kim, Taylor Y.; Kittles, Rick
2008-01-01
Introduction “Race-specific” PSA needs evaluation in men at high-risk for prostate cancer (PCA) for optimizing early detection. Baseline PSA and longitudinal prediction for PCA was examined by self-reported race and genetic West African (WA) ancestry in the Prostate Cancer Risk Assessment Program, a prospective high-risk cohort. Materials and Methods Eligibility criteria are age 35–69 years, FH of PCA, African American (AA) race, or BRCA1/2 mutations. Biopsies have been performed at low PSA values (<4.0 ng/mL). WA ancestry was discerned by genotyping 100 ancestry informative markers. Cox proportional hazards models evaluated baseline PSA, self-reported race, and genetic WA ancestry. Cox models were used for 3-year predictions for PCA. Results 646 men (63% AA) were analyzed. Individual WA ancestry estimates varied widely among self-reported AA men. “Race-specific” differences in baseline PSA were not found by self-reported race or genetic WA ancestry. Among men with ≥ 1 follow-up visit (405 total, 54% AA), three-year prediction for PCA with a PSA of 1.5–4.0 ng/mL was higher in AA men with age in the model (p=0.025) compared to EA men. Hazard ratios of PSA for PCA were also higher by self-reported race (1.59 for AA vs. 1.32 for EA, p=0.04). There was a trend for increasing prediction for PCA with increasing genetic WA ancestry. Conclusions “Race-specific” PSA may need to be redefined as higher prediction for PCA at any given PSA in AA men. Large-scale studies are needed to confirm if genetic WA ancestry explains these findings to make progress in personalizing PCA early detection. PMID:19240249
Kuo, Ben Ch; Kwantes, Catherine T
2014-01-01
Despite the prevalence and popularity of research on positive and negative affect within the field of psychology, there is currently little research on affect involving the examination of cultural variables and with participants of diverse cultural and ethnic backgrounds. To the authors' knowledge, currently no empirical studies have comprehensively examined predictive models of positive and negative affect based specifically on multiple psychosocial, acculturation, and coping variables as predictors with any sample populations. Therefore, the purpose of the present study was to test the predictive power of perceived stress, social support, bidirectional acculturation (i.e., Canadian acculturation and heritage acculturation), religious coping and cultural coping (i.e., collective, avoidance, and engagement coping) in explaining positive and negative affect in a multiethnic sample of 301 undergraduate students in Canada. Two hierarchical multiple regressions were conducted, one for each affect as the dependent variable, with the above described predictors. The results supported the hypotheses and showed the two overall models to be significant in predicting affect of both kinds. Specifically, a higher level of positive affect was predicted by a lower level of perceived stress, less use of religious coping, and more use of engagement coping in dealing with stress by the participants. A higher level of negative affect, however, was predicted by a higher level of perceived stress and more use of avoidance coping in responding to stress. The current findings highlight the value and relevance of empirically examining the stress-coping-adaptation experiences of diverse populations from an affective conceptual framework, particularly with the inclusion of positive affect. Implications and recommendations for advancing future research and theoretical works in this area are considered and presented.
External Validation of the Garvan Nomograms for Predicting Absolute Fracture Risk: The Tromsø Study
Ahmed, Luai A.; Nguyen, Nguyen D.; Bjørnerem, Åshild; Joakimsen, Ragnar M.; Jørgensen, Lone; Størmer, Jan; Bliuc, Dana; Center, Jacqueline R.; Eisman, John A.; Nguyen, Tuan V.; Emaus, Nina
2014-01-01
Background Absolute risk estimation is a preferred approach for assessing fracture risk and treatment decision making. This study aimed to evaluate and validate the predictive performance of the Garvan Fracture Risk Calculator in a Norwegian cohort. Methods The analysis included 1637 women and 1355 men aged 60+ years from the Tromsø Study. All incident fragility fractures between 2001 and 2009 were registered. The predicted probabilities of non-vertebral osteoporotic and hip fractures were determined using models with and without BMD. The discrimination and calibration of the models were assessed. Reclassification analysis was used to compare the models' performance. Results The incidence of osteoporotic and hip fracture was 31.5 and 8.6 per 1000 population in women, respectively; in men the corresponding incidence was 12.2 and 5.1. The predicted 5-year and 10-year probability of fractures was consistently higher in the fracture group than the non-fracture group for all models. The 10-year predicted probabilities of hip fracture in those with fracture were 2.8 (women) to 3.1 times (men) higher than in those without fracture. There was a close agreement between predicted and observed risk in both sexes and up to the fifth quintile. Among those in the highest quintile of risk, the models over-estimated the risk of fracture. Models with BMD performed better than models with body weight in correct classification of risk in individuals with and without fracture. The overall net decrease in reclassification of the model with weight compared to the model with BMD was 10.6% (p = 0.008) in women and 17.2% (p = 0.001) in men for osteoporotic fractures, and 13.3% (p = 0.07) in women and 17.5% (p = 0.09) in men for hip fracture. Conclusions The Garvan Fracture Risk Calculator is valid and clinically useful in identifying individuals at high risk of fracture. The models with BMD performed better than those with body weight in fracture risk prediction. PMID:25255221
Macrocell path loss prediction using artificial intelligence techniques
NASA Astrophysics Data System (ADS)
Usman, Abraham U.; Okereke, Okpo U.; Omizegba, Elijah E.
2014-04-01
The prediction of propagation loss is a practical non-linear function approximation problem which linear regression or auto-regression models are limited in their ability to handle. However, some computational intelligence techniques such as artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFISs) have been shown to have great ability to handle non-linear function approximation and prediction problems. In this study, the multiple layer perceptron neural network (MLP-NN), radial basis function neural network (RBF-NN) and an ANFIS network were trained using actual signal strength measurements taken at certain suburban areas of Bauchi metropolis, Nigeria. The trained networks were then used to predict propagation losses at the stated areas under differing conditions. The predictions were compared with the prediction accuracy of the popular Hata model. It was observed that the ANFIS model gave a better fit in all cases, having higher R² values in each case, and on average is more robust than the MLP and RBF models as it generalises better to different data.
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-01-01
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430
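The region-wise idea in the abstract above — segment the study area, then fit a separately tuned SVM classifier in each prediction region — can be sketched as below. Everything here is a labeled simplification: the data are synthetic, a two-way split stands in for the GWR segmentation, and a small grid search stands in for the PSO parameter search.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic environmental factors (columns) and landslide / non-landslide labels
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)
# Crude stand-in for the GWR-derived segmentation: two prediction regions
region = (X[:, 2] > 0).astype(int)

models = {}
for r in (0, 1):
    # Grid search stands in for PSO optimization of the SVM parameters (C, gamma)
    m = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=3)
    m.fit(X[region == r], y[region == r])
    models[r] = m

# Overall accuracy across the two regions (training accuracy, for illustration)
acc = float(np.mean([models[r].score(X[region == r], y[region == r]) for r in (0, 1)]))
print(acc > 0.6)
```

The design point being illustrated is that each region gets its own decision boundary and its own tuned parameters, rather than one global SVM for the whole study area.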
Liu, Yang; Paciorek, Christopher J; Koutrakis, Petros
2009-06-01
Studies of chronic health effects due to exposures to particulate matter with aerodynamic diameters
NASA Astrophysics Data System (ADS)
Huang, Bing; von Lilienfeld, O. Anatole
2016-10-01
The predictive accuracy of Machine Learning (ML) models of molecular properties depends on the choice of the molecular representation. Inspired by the postulates of quantum mechanics, we introduce a hierarchy of representations which meet uniqueness and target similarity criteria. To systematically control target similarity, we simply rely on interatomic many-body expansions, as implemented in universal force-fields, including Bonding, Angular (BA), and higher order terms. Addition of higher order contributions systematically increases similarity to the true potential energy and predictive accuracy of the resulting ML models. We report numerical evidence for the performance of BAML models trained on molecular properties pre-calculated at electron-correlated and density functional levels of theory for thousands of small organic molecules. Properties studied include enthalpies and free energies of atomization, heat capacity, zero-point vibrational energies, dipole moment, polarizability, HOMO/LUMO energies and gap, ionization potential, electron affinity, and electronic excitations. After training, BAML predicts energies or electronic properties of out-of-sample molecules with unprecedented accuracy and speed.
The Glacial BuzzSaw, Isostasy, and Global Crustal Models
NASA Astrophysics Data System (ADS)
Levander, A.; Oncken, O.; Niu, F.
2015-12-01
The glacial buzzsaw hypothesis predicts that maximum elevations in orogens at high latitudes are depressed relative to temperate latitudes, as maximum elevation and hypsography of glaciated orogens are functions of the glacial equilibrium line altitude (ELA) and the modern and last glacial maximum (LGM) snowlines. As a consequence, crustal thickness, density, or both must change with increasing latitude to maintain isostatic balance. For Airy compensation crustal thickness should decrease toward polar latitudes, whereas for Pratt compensation crustal densities should increase. For similar convergence rates, higher-latitude orogens should have higher-grade, and presumably higher-density, rocks in the crustal column due to more efficient glacial erosion. We have examined a number of global and regional crustal models to see if these predictions appear in the models. Crustal thickness is straightforward to examine, crustal density less so. The different crustal models generally agree with one another, but do show some major differences. We used a standard tectonic classification scheme of the crust for data selection. The globally averaged orogens show crustal thicknesses that decrease toward high latitudes, almost reflecting topography, in both the individual crustal models and the models averaged together. The most convincing is the western hemisphere cordillera, where elevations and crustal thicknesses decrease toward the poles, and also toward lower latitudes (the equatorial minimum is at ~12°N). The elevation differences and Airy prediction of crustal thickness changes are in reasonable agreement in the North American Cordillera, but in South America the observed crustal thickness change is larger than the Airy prediction. The Alpine-Himalayan chain shows similar trends; however, the strike of the chain makes interpretation ambiguous.
We also examined cratons with ice sheets during the last glacial period to see if continental glaciation also thins the crust toward higher latitudes. The glaciated North American and European cratons show a trend of modest thinning (~3km), and glaciated western Asia minor thinning (~1.5 km). These values are at the level of model uncertainties, but we note that cratons without ice sheets during the last glacial period show substantially different patterns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruess, K.; Oldenburg, C.; Moridis, G.
1997-12-31
This paper summarizes recent advances in methods for simulating water and tracer injection, and presents illustrative applications to liquid- and vapor-dominated geothermal reservoirs. High-resolution simulations of water injection into heterogeneous, vertical fractures in superheated vapor zones were performed. Injected water was found to move in dendritic patterns, and to experience stronger lateral flow effects than predicted from homogeneous medium models. Higher-order differencing methods were applied to modeling water and tracer injection into liquid-dominated systems. Conventional upstream weighting techniques were shown to be adequate for predicting the migration of thermal fronts, while higher-order methods give far better accuracy for tracer transport. A new fluid property module for the TOUGH2 simulator is described which allows a more accurate description of geofluids, and includes mineral dissolution and precipitation effects with associated porosity and permeability change. Comparisons between numerical simulation predictions and data for laboratory and field injection experiments are summarized. Enhanced simulation capabilities include a new linear solver package for TOUGH2, and inverse modeling techniques for automatic history matching and optimization.
Study on SOC wavelet analysis for LiFePO4 battery
NASA Astrophysics Data System (ADS)
Liu, Xuepeng; Zhao, Dongmei
2017-08-01
Improving the accuracy of SOC prediction can reduce the conservatism and complexity of control strategies such as the scheduling, optimization and planning of LiFePO4 battery systems. Based on an analysis of the relationship between historical SOC data and external stress factors, an SOC estimation-correction prediction model based on wavelet analysis is established. A high-precision wavelet neural network prediction model performs the forecast step, while measured external stress data are used to update the parameter estimates in the model, implementing the correction step; this allows the forecast model to adapt to the operating point of the LiFePO4 battery across its rated charge and discharge range. The test results show that the method yields a higher-precision prediction model when the input and output of the LiFePO4 battery change frequently.
Titah, Harmin Sulistiyaning; Halmi, Mohd Izuan Effendi Bin; Abdullah, Siti Rozaimah Sheikh; Hasan, Hassimi Abu; Idris, Mushrifah; Anuar, Nurina
2018-06-07
In this study, the removal of arsenic (As) by plant, Ludwigia octovalvis, in a pilot reed bed was optimized. A Box-Behnken design was employed including a comparative analysis of both Response Surface Methodology (RSM) and an Artificial Neural Network (ANN) for the prediction of maximum arsenic removal. The predicted optimum condition using the desirability function of both models was 39 mg kg⁻¹ for the arsenic concentration in soil, an elapsed time of 42 days (the sampling day) and an aeration rate of 0.22 L/min, with the predicted values of arsenic removal by RSM and ANN being 72.6% and 71.4%, respectively. The validation of the predicted optimum point showed an actual arsenic removal of 70.6%. This was achieved with the deviation between the validation value and the predicted values being within 3.49% (RSM) and 1.87% (ANN). The performance evaluation of the RSM and ANN models showed that ANN performs better than RSM with a higher R² (0.97) close to 1.0 and very small Average Absolute Deviation (AAD) (0.02) and Root Mean Square Error (RMSE) (0.004) values close to zero. Both models were appropriate for the optimization of arsenic removal with ANN demonstrating significantly higher predictive and fitting ability than RSM.
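The three goodness-of-fit statistics used above to compare RSM and ANN (R², AAD, RMSE) can be computed as below. The observed/predicted values are hypothetical stand-ins, and the AAD is written in the relative form commonly used in RSM/ANN comparisons; the paper's exact formula may differ.

```python
import numpy as np

def fit_metrics(observed, predicted):
    """R^2, Average Absolute Deviation (AAD, relative form), and RMSE."""
    resid = observed - predicted
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    aad = float(np.mean(np.abs(resid / observed)))  # relative deviation per point
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    return float(r2), aad, rmse

# Hypothetical % arsenic-removal values (not the study's data)
obs = np.array([70.6, 65.2, 58.4, 72.1, 61.0])
pred = np.array([70.1, 66.0, 57.8, 71.5, 61.9])
r2, aad, rmse = fit_metrics(obs, pred)
print(round(r2, 3), round(aad, 3), round(rmse, 3))
```

R² close to 1 together with AAD and RMSE close to 0 is exactly the pattern the abstract cites as evidence that the ANN fits better than the RSM model.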
Composite Stress Rupture: A New Reliability Model Based on Strength Decay
NASA Technical Reports Server (NTRS)
Reeder, James R.
2012-01-01
A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model will be shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
Duan, Liwei; Zhang, Sheng; Lin, Zhaofen
2017-02-01
To explore the method and performance of using multiple indices to diagnose sepsis and to predict the prognosis of severely ill patients. Critically ill patients at first admission to the intensive care unit (ICU) of Changzheng Hospital, Second Military Medical University, from January 2014 to September 2015 were enrolled if the following conditions were satisfied: (1) patients were 18-75 years old; (2) the length of ICU stay was more than 24 hours; (3) all records of the patients were available. Data for the patients were collected by searching the electronic medical record system. A logistic regression model was formulated to create the new combined predictive indicator, and the receiver operating characteristic (ROC) curve for the new indicator was built. The areas under the ROC curve (AUC) for the new indicator and the original ones were compared. The optimal cut-off point was obtained where the Youden index reached its maximum value. Diagnostic parameters such as sensitivity, specificity and predictive accuracy were also calculated for comparison. Finally, individual values were substituted into the equation to test the performance in predicting clinical outcomes. A total of 362 patients (218 males and 144 females) were enrolled in our study and 66 patients died. The average age was (48.3±19.3) years old. (1) For the predictive model only containing categorical covariants [including procalcitonin (PCT), lipopolysaccharide (LPS), infection, white blood cells count (WBC) and fever], increased PCT, increased WBC and fever were demonstrated to be independent risk factors for sepsis in the logistic equation. The AUC for the new combined predictive indicator was higher than that of any other indicator, including PCT, LPS, infection, WBC and fever (0.930 vs. 0.661, 0.503, 0.570, 0.837, 0.800). The optimal cut-off value for the new combined predictive indicator was 0.518. 
Using the new indicator to diagnose sepsis, the sensitivity, specificity and diagnostic accuracy rate were 78.00%, 93.36% and 87.47%, respectively. One patient was randomly selected, and the clinical data were substituted into the probability equation for prediction. The calculated value was 0.015, which was less than the cut-off value (0.518), indicating a prediction of non-sepsis, with a diagnostic accuracy of 87.47%. (2) For the predictive model only containing continuous covariants, the logistic model combining the acute physiology and chronic health evaluation II (APACHE II) score and the sequential organ failure assessment (SOFA) score was used to predict in-hospital death events; both the APACHE II score and the SOFA score were independent risk factors for death. The AUC for the new predictive indicator was higher than that of the APACHE II score and the SOFA score (0.834 vs. 0.812, 0.813). The optimal cut-off value for the new combined predictive indicator in predicting in-hospital death events was 0.236, and the corresponding sensitivity, specificity and diagnostic accuracy for the combined predictive indicator were 73.12%, 76.51% and 75.70%, respectively. One patient was randomly selected, and the APACHE II score and SOFA score were substituted into the probability equation for prediction. The calculated value was 0.570, which was higher than the cut-off value (0.236), indicating a predicted in-hospital death, with an accuracy of 75.70%. The combined predictive indicator, which is formulated by logistic regression models, is superior to any single indicator in predicting sepsis or in-hospital death events.
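The cut-off selection described above — scan the ROC operating points of the combined indicator and pick the threshold maximizing Youden's J = sensitivity + specificity − 1 — can be sketched in a few lines. The score distributions below are synthetic, not the study's data.

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return the cut-off where Youden's J (sensitivity + specificity - 1)
    is maximal, together with the maximal J."""
    best_j, best_cut = -1.0, None
    for c in np.unique(scores):
        pred = scores >= c                      # classify positive at/above cut
        sens = float(np.mean(pred[labels == 1]))
        spec = float(np.mean(~pred[labels == 0]))
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, float(c)
    return best_cut, best_j

# Synthetic combined-indicator scores: higher in the positive (e.g. sepsis) group
rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(0.3, 0.15, 200), rng.normal(0.7, 0.15, 100)])
labels = np.concatenate([np.zeros(200, int), np.ones(100, int)])
cut, j = youden_cutoff(scores, labels)
print(round(cut, 2), round(j, 2))
```

With well-separated groups the optimal cut lands between the two score means, which is the same role the 0.518 and 0.236 cut-offs play in the abstract.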
Accuracies of univariate and multivariate genomic prediction models in African cassava.
Okeke, Uche Godfrey; Akdemir, Deniz; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2017-12-04
Genomic selection (GS) promises to accelerate genetic gain in plant breeding programs especially for crop species such as cassava that have long breeding cycles. Practically, to implement GS in cassava breeding, it is necessary to evaluate different GS models and to develop suitable models for an optimized breeding pipeline. In this paper, we compared (1) prediction accuracies from a single-trait (uT) and a multi-trait (MT) mixed model for a single-environment genetic evaluation (Scenario 1), and (2) accuracies from a compound symmetric multi-environment model (uE) parameterized as a univariate multi-kernel model to a multivariate (ME) multi-environment mixed model that accounts for genotype-by-environment interaction for multi-environment genetic evaluation (Scenario 2). For these analyses, we used 16 years of public cassava breeding data for six target cassava traits and a fivefold cross-validation scheme with 10-repeat cycles to assess model prediction accuracies. In Scenario 1, the MT models had higher prediction accuracies than the uT models for all traits and locations analyzed, which amounted to on average a 40% improved prediction accuracy. For Scenario 2, we observed that the ME model had on average (across all locations and traits) a 12% improved prediction accuracy compared to the uE model. We recommend the use of multivariate mixed models (MT and ME) for cassava genetic evaluation. These models may be useful for other plant species.
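The fivefold cross-validation scheme with 10 repeat cycles used above to estimate prediction accuracy can be sketched as follows. Ordinary least squares stands in for the genomic mixed models, and the marker matrix and phenotypes are synthetic; accuracy is measured, as is common in GS, by the correlation between predicted and observed values in the held-out fold.

```python
import numpy as np

def repeated_cv_accuracy(X, y, k=5, repeats=10, seed=0):
    """Mean predictive correlation over a k-fold scheme repeated with
    fresh shuffles; OLS is a stand-in for the genomic prediction model."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        for test in np.array_split(idx, k):
            train = np.setdiff1d(idx, test)
            beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
            accs.append(np.corrcoef(X[test] @ beta, y[test])[0, 1])
    return float(np.mean(accs))

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))                       # toy marker matrix
y = X @ rng.normal(size=10) + rng.normal(size=200)   # toy phenotype
acc = repeated_cv_accuracy(X, y)
print(0.5 < acc < 1.0)
```

Comparing such accuracies across model families (uT vs. MT, uE vs. ME) is exactly the comparison the abstract reports.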
GEOS S2S-2_1: The GMAO new high resolution Seasonal Prediction System
NASA Astrophysics Data System (ADS)
Molod, A.; Vikhliaev, Y. V.; Hackert, E. C.; Kovach, R. M.; Zhao, B.; Cullather, R. I.; Marshak, J.; Borovikov, A.; Li, Z.; Barahona, D.; Andrews, L. C.; Chang, Y.; Schubert, S. D.; Koster, R. D.; Suarez, M.; Akella, S.
2017-12-01
A new version of the modeling and analysis system used to produce subseasonal to seasonal forecasts has just been released by the NASA/Goddard Global Modeling and Assimilation Office. The new version runs at higher atmospheric resolution (approximately 1/2 degree globally), contains a substantially improved model description of the cryosphere, and includes additional interactive earth system model components (aerosol model). In addition, the ocean data assimilation system has been replaced with a Local Ensemble Transform Kalman Filter. Here we describe the new system, along with the plans for the future version (GEOS S2S-3_0), which will include a higher resolution ocean model and more interactive earth system model components (interactive vegetation, biomass burning from fires). We will also present results from a free-running coupled simulation with the new system and results from a series of retrospective seasonal forecasts. Results from retrospective forecasts show significant improvements in surface temperatures over much of the northern hemisphere and a much improved prediction of sea ice extent in both hemispheres. The precipitation forecast skill is comparable to previous S2S systems, and the only tradeoff is an increased "double ITCZ", which is expected as we go to higher atmospheric resolution.
Open-Source Learning Management Systems: A Predictive Model for Higher Education
ERIC Educational Resources Information Center
van Rooij, S. Williams
2012-01-01
The present study investigated the role of pedagogical, technical, and institutional profile factors in an institution of higher education's decision to select an open-source learning management system (LMS). Drawing on the results of previous research that measured patterns of deployment of open-source software (OSS) in US higher education and…
Turusheva, Anna; Frolova, Elena; Bert, Vaes; Hegendoerfer, Eralda; Degryse, Jean-Marie
2017-07-01
Prediction models help to make decisions about further management in clinical practice. This study aims to develop a mortality risk score based on previously identified risk predictors and to perform internal and external validations. In a population-based prospective cohort study of 611 community-dwelling individuals aged 65+ in St. Petersburg (Russia), all-cause mortality risks over a 2.5-year follow-up were determined based on the results obtained from anthropometry, medical history, physical performance tests, spirometry and laboratory tests. C-statistic, risk reclassification analysis, integrated discrimination improvement analysis, decision curve analysis, and internal and external validation were performed. Older adults were at higher risk for mortality [HR (95%CI)=4.54 (3.73-5.52)] when two or more of the following components were present: poor physical performance, low muscle mass, poor lung function, and anemia. When anemia was combined with high C-reactive protein (CRP) and high B-type natriuretic peptide (BNP) was added, the HR (95%CI) was slightly higher [5.81 (4.73-7.14)], even after adjusting for age, sex and comorbidities. Our models were validated in an external population of adults 80+. The extended model had a better predictive capacity for cardiovascular mortality [HR (95%CI)=5.05 (2.23-11.44)] compared to the baseline model [HR (95%CI)=2.17 (1.18-4.00)] in the external population. We developed and validated a new risk prediction score that may be used to identify older adults at higher risk for mortality in Russia. Additional studies need to determine which targeted interventions improve the outcomes of these at-risk individuals. Copyright © 2017 Elsevier B.V. All rights reserved.
Structural features that predict real-value fluctuations of globular proteins
Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke
2012-01-01
It is crucial to consider dynamics for understanding the biological function of proteins. We used a large number of molecular dynamics trajectories of non-homologous proteins as references and examined static structural features of proteins that are most relevant to fluctuations. We examined the correlation of individual structural features with fluctuations and further investigated effective combinations of features for predicting the real-value of residue fluctuations using support vector regression. It was found that some structural features have higher correlation than crystallographic B-factors with fluctuations observed in molecular dynamics trajectories. Moreover, support vector regression that uses combinations of static structural features showed accurate prediction of fluctuations with an average Pearson’s correlation coefficient of 0.669 and a root mean square error of 1.04 Å. This correlation coefficient is higher than the one observed for the prediction by the Gaussian network model. An advantage of the developed method over the Gaussian network models is that the former predicts the real-value of fluctuation. The results help improve our understanding of relationships between protein structure and fluctuation. Furthermore, the developed method provides a convenient, practical way to predict fluctuations of proteins using easily computed static structural features of proteins. PMID:22328193
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atienzar, Franck A., E-mail: franck.atienzar@ucb.com; Novik, Eric I.; Gerets, Helga H.
Drug Induced Liver Injury (DILI) is a major cause of attrition during early and late stage drug development. Consequently, there is a need to develop better in vitro primary hepatocyte models from different species for predicting hepatotoxicity in both animals and humans early in drug development. Dog is often chosen as the non-rodent species for toxicology studies. Unfortunately, dog in vitro models allowing long term cultures are not available. The objective of the present manuscript is to describe the development of a co-culture dog model for predicting hepatotoxic drugs in humans and to compare the predictivity of the canine model along with primary human hepatocytes and HepG2 cells. After rigorous optimization, the dog co-culture model displayed metabolic capacities that were maintained for up to 2 weeks, indicating that such a model could also be used for long term metabolism studies. Most of the human hepatotoxic drugs were detected with a sensitivity of approximately 80% (n = 40) for the three cellular models. Nevertheless, specificity was low (approximately 40%) for the HepG2 cells and hepatocytes, compared to 72.7% for the canine model (n = 11). Furthermore, the dog co-culture model was superior to both human cellular models in classifying 5 pairs of close structural analogs with different DILI concerns. Finally, the reproducibility of the canine system was also satisfactory, with a coefficient of correlation of 75.2% (n = 14). Overall, the present manuscript indicates that the dog co-culture model may represent a relevant tool to perform chronic hepatotoxicity and metabolism studies. - Highlights: • Importance of species differences in drug development. • Relevance of dog co-culture model for metabolism and toxicology studies. • Hepatotoxicity: higher predictivity of dog co-culture vs HepG2 and human hepatocytes.
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2016-10-01
The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay in, or even the impossibility of, flowering or setting new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date, because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for model parameterization results in much more accurate predictions of that date, albeit with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios, compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results point to the urgent need for large-scale measurement of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. © 2016 John Wiley & Sons Ltd.
Genomic prediction in a nuclear population of layers using single-step models.
Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning
2018-02-01
The single-step genomic prediction method has been proposed to improve the accuracy of genomic prediction by incorporating information from both genotyped and ungenotyped animals. The objective of this study is to compare the prediction performance of single-step models with that of 2-step and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped by a 600 K SNP chip. Four traits were analyzed, i.e., body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 were used as the validation set. The accuracies of breeding values predicted by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), SSGBLUP and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than that of the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements in predictive ability were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate prediction of genetic merits and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.
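The distinguishing ingredient of SSGBLUP is the H matrix, which blends the pedigree relationship matrix A (covering all animals) with the genomic relationship matrix G (genotyped animals only), so that a single mixed-model equation uses both information sources. A minimal numpy sketch of the standard H⁻¹ construction; the function name and toy matrices are ours, not from the study:

```python
import numpy as np

def ssgblup_hinv(A, G, genotyped):
    """Inverse of the single-step H matrix:
        H^-1 = A^-1 + [[0, 0], [0, G^-1 - A22^-1]]
    where A22 is the pedigree-relationship block for the genotyped
    animals. `genotyped` holds the indices of that block in A.
    """
    Ainv = np.linalg.inv(A)
    A22 = A[np.ix_(genotyped, genotyped)]
    Hinv = Ainv.copy()
    # Genomic correction applies only to the genotyped block.
    Hinv[np.ix_(genotyped, genotyped)] += np.linalg.inv(G) - np.linalg.inv(A22)
    return Hinv
```

A convenient sanity check: when G equals the pedigree submatrix A22, the genomic correction vanishes and H⁻¹ reduces to A⁻¹.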
Predicting institutionalization after traumatic brain injury inpatient rehabilitation.
Eum, Regina S; Seel, Ronald T; Goldstein, Richard; Brown, Allen W; Watanabe, Thomas K; Zasler, Nathan D; Roth, Elliot J; Zafonte, Ross D; Glenn, Mel B
2015-02-15
Risk factors contributing to institutionalization after inpatient rehabilitation for people with traumatic brain injury (TBI) have not been well studied and need to be better understood to guide clinicians during rehabilitation. We aimed to develop a prognostic model that could be used at admission to inpatient rehabilitation facilities to predict discharge disposition. The model could be used to provide the interdisciplinary team with information regarding aspects of patients' functioning and/or their living situation that need particular attention during inpatient rehabilitation if institutionalization is to be avoided. The study population included 7219 patients with moderate-severe TBI in the Traumatic Brain Injury Model Systems (TBIMS) National Database enrolled from 2002-2012 who had not been institutionalized prior to injury. Based on institutionalization predictors in other populations, we hypothesized that among people who had lived at a private residence prior to injury, greater dependence in locomotion, bed-chair-wheelchair transfers, bladder and bowel continence, feeding, and comprehension at admission to inpatient rehabilitation programs would predict institutionalization at discharge. Logistic regression was used, with adjustment for demographic factors, proxy measures for TBI severity, and acute-care length-of-stay. C-statistic and predictiveness curves validated a five-variable model. Higher levels of independence in bladder management (adjusted odds ratio [OR], 0.88; 95% CI 0.83, 0.93), bed-chair-wheelchair transfers (OR, 0.81 [95% CI, 0.83-0.93]), and comprehension (OR, 0.78 [95% CI, 0.68, 0.89]) at admission were associated with lower risks of institutionalization at discharge. Every 10-year increment in age was associated with a 1.38 times higher risk of institutionalization (95% CI, 1.29, 1.48), and living alone was associated with a 2.34 times higher risk (95% CI, 1.86, 2.94). The c-statistic was 0.780.
We conclude that this simple model can predict risk of institutionalization after inpatient rehabilitation for patients with TBI.
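A logistic model of this kind turns the reported odds ratios into a predicted probability by summing log-odds contributions and applying the inverse-logit. The three ORs below are quoted from the abstract; everything else (the intercept, the level coding of bladder independence, the reduced predictor set) is a hypothetical placeholder for illustration, since the abstract does not publish the full equation:

```python
import math

# Adjusted odds ratios reported in the abstract.
OR_AGE_PER_10Y = 1.38   # per 10-year age increment
OR_LIVES_ALONE = 2.34   # living alone before injury
OR_BLADDER = 0.88       # per level of bladder-management independence

INTERCEPT = -3.0        # assumed value, for illustration only

def institutionalization_risk(age_years, lives_alone, bladder_level):
    """Predicted probability from a logistic model: accumulate the
    log-odds-ratio contributions into a linear predictor, then apply
    the logistic (inverse-logit) function."""
    lp = (INTERCEPT
          + math.log(OR_AGE_PER_10Y) * (age_years / 10.0)
          + math.log(OR_LIVES_ALONE) * (1.0 if lives_alone else 0.0)
          + math.log(OR_BLADDER) * bladder_level)
    return 1.0 / (1.0 + math.exp(-lp))
```

Because OR > 1 predictors raise the linear predictor and OR < 1 predictors lower it, the sketch reproduces the qualitative findings: risk rises with age and living alone, and falls with greater bladder independence.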
Leffel, G Michael; Oakes Mueller, Ross A; Ham, Sandra A; Karches, Kyle E; Curlin, Farr A; Yoon, John D
2018-01-19
In the Project on the Good Physician, the authors propose a moral intuitionist model of virtuous caring that places the virtues of Mindfulness, Empathic Compassion, and Generosity at the heart of medical character education. Hypothesis 1a: The virtues of Mindfulness, Empathic Compassion, and Generosity will be positively associated with one another (convergent validity). Hypothesis 1b: The virtues of Mindfulness and Empathic Compassion will explain variance in the action-related virtue of Generosity beyond that predicted by Big Five personality traits alone (discriminant validity). Hypothesis 1c: Virtuous students will experience greater well-being ("flourishing"), as measured by four indices of well-being: life meaning, life satisfaction, vocational identity, and vocational calling (predictive validity). Hypothesis 1d: Students who self-report higher levels of the virtues will be nominated by their peers for the Gold Humanism Award (predictive validity). Hypothesis 2a-2c: Neuroticism and Burnout will be positively associated with each other and inversely associated with measures of virtue and well-being. The authors used data from a 2011 nationally representative sample of U.S. medical students (n = 499) in which medical virtues (Mindfulness, Empathic Compassion, and Generosity) were measured using scales adapted from existing instruments with validity evidence. Supporting the predictive validity of the model, virtuous students were recognized by their peers to be exemplary doctors, and they were more likely to have higher ratings on measures of student well-being. Supporting the discriminant validity of the model, virtues predicted prosocial behavior (Generosity) more than personality traits alone, and students higher in the virtue of Mindfulness were less likely to be high in Neuroticism and Burnout. Data from this descriptive-correlational study offered additional support for the validity of the moral intuitionist model of virtuous caring. 
Applied to medical character education, medical school programs should consider designing educational experiences that intentionally emphasize the cultivation of virtue.
The c-index is not proper for the evaluation of $t$-year predicted risks.
Blanche, Paul; Kattan, Michael W; Gerds, Thomas A
2018-02-16
We show that the widely used concordance index for time-to-event outcomes is not proper when interest is in predicting a $t$-year risk of an event, for example 10-year mortality. In the situation with a fixed prediction horizon, the concordance index can be higher for a misspecified model than for a correctly specified model. Impropriety happens because the concordance index assesses the order of the event times and not the order of the event status at the prediction horizon. The time-dependent area under the receiver operating characteristic curve does not have this problem and is proper in this context.
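The distinction can be made concrete with a small uncensored toy example: Harrell-type concordance scores the ordering of all event times, whereas the t-year AUC scores only the event status at the horizon, so a model that misranks subjects beyond the horizon is penalized by the former but not the latter. A pure-Python sketch (function names and toy risks are ours):

```python
def harrell_c(times, risks):
    """Concordance index without censoring: the fraction of comparable
    pairs in which the subject who fails earlier has the higher risk."""
    conc = comp = 0.0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue
            comp += 1
            early, late = (i, j) if times[i] < times[j] else (j, i)
            if risks[early] > risks[late]:
                conc += 1
            elif risks[early] == risks[late]:
                conc += 0.5
    return conc / comp

def t_year_auc(times, risks, t):
    """AUC for the binary status 1{T <= t}: the probability that a
    random case (event by t) is ranked above a random control."""
    cases = [r for T, r in zip(times, risks) if T <= t]
    ctrls = [r for T, r in zip(times, risks) if T > t]
    num = sum((c > u) + 0.5 * (c == u) for c in cases for u in ctrls)
    return num / (len(cases) * len(ctrls))

# Two risk models for event times [2, 4, 12, 20] and horizon t = 10.
# Model B swaps the ranking of the two events beyond the horizon,
# which is irrelevant for 10-year risk, yet only the c-index sees it.
times = [2, 4, 12, 20]
risks_a = [0.9, 0.8, 0.7, 0.6]   # orders all event times correctly
risks_b = [0.9, 0.8, 0.6, 0.7]   # misorders the post-horizon pair
```

Here `harrell_c` returns 1.0 for model A but 5/6 for model B, while `t_year_auc(..., 10)` is 1.0 for both: the two models are indistinguishable as 10-year risk predictions, yet the c-index ranks them differently.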
Zhang, Banglin; Tallapragada, Vijay; Weng, Fuzhong; Liu, Qingfu; Sippel, Jason A.; Ma, Zaizhong; Bender, Morris A.
2016-01-01
The atmosphere−ocean coupled Hurricane Weather Research and Forecast model (HWRF) developed at the National Centers for Environmental Prediction (NCEP) is used as an example to illustrate the impact of model vertical resolution on track forecasts of tropical cyclones. A number of HWRF forecasting experiments were carried out at different vertical resolutions for Hurricane Joaquin, which occurred from September 27 to October 8, 2015, in the Atlantic Basin. The results show that the track prediction for Hurricane Joaquin is much more accurate with higher vertical resolution. The positive impacts of higher vertical resolution on hurricane track forecasts suggest that National Oceanic and Atmospheric Administration/NCEP should upgrade both HWRF and the Global Forecast System to have more vertical levels. PMID:27698121
NASA Astrophysics Data System (ADS)
Mimasu, Ken; Sanz, Verónica; Williams, Ciaran
2016-08-01
We present predictions for the associated production of a Higgs boson at NLO+PS accuracy, including the effect of anomalous interactions between the Higgs and gauge bosons. We present our results in different frameworks, one in which the interaction vertex between the Higgs boson and Standard Model W and Z bosons is parameterized in terms of general Lorentz structures, and one in which Electroweak symmetry breaking is manifestly linear and the resulting operators arise in an effective field theory framework through dimension-six operators. We present analytic calculations of the Standard Model and Beyond the Standard Model contributions, and discuss the phenomenological impact of the higher order pieces. Our results are implemented in the NLO Monte Carlo program MCFM, and interfaced to shower Monte Carlos through the Powheg box framework.
Turnell, Adrienne; Rasmussen, Victoria; Butow, Phyllis; Juraskova, Ilona; Kirsten, Laura; Wiener, Lori; Patenaude, Andrea; Hoekstra-Weebers, Josette; Grassi, Luigi
2016-01-01
Objective Burnout is reportedly high among oncology healthcare workers. Psychosocial oncologists may be particularly vulnerable to burnout. However, their work engagement may also be high, counteracting stress in the workplace. This study aimed to document the prevalence of both burnout and work engagement, and the predictors of both, utilizing the job demands–resources (JD–R) model, within a sample of psychosocial oncologists. Method Psychosocial-oncologist (N = 417) clinicians, recruited through 10 international and national psychosocial-oncology societies, completed an online questionnaire. Measures included demographic and work characteristics, burnout (the MBI–HSS Emotional Exhaustion (EE) and Depersonalization (DP) subscales), the Utrecht Work Engagement Scale, and measures of job demands and resources. Results High EE and DP were reported by 20.2 and 6.6% of participants, respectively, while 95.3% reported average to high work engagement. Lower levels of job resources and higher levels of job demands predicted greater burnout, as predicted by the JD–R model, but the predicted interaction between these characteristics and burnout was not significant. Higher levels of job resources predicted higher levels of work engagement. Significance of results Burnout was surprisingly low and work engagement high in this sample. Nonetheless, one in five psychosocial oncologists has high EE. Our results suggest that both the positive (resources) and negative (demands) aspects of this work environment have an impact on burnout and engagement, offering opportunities for intervention. Theories such as the JD–R model can be useful in guiding research in this area. PMID:26653250
Representing, Running, and Revising Mental Models: A Computational Model
ERIC Educational Resources Information Center
Friedman, Scott; Forbus, Kenneth; Sherin, Bruce
2018-01-01
People use commonsense science knowledge to flexibly explain, predict, and manipulate the world around them, yet we lack computational models of how this commonsense science knowledge is represented, acquired, utilized, and revised. This is an important challenge for cognitive science: Building higher order computational models in this area will…
Numerical Study Comparing RANS and LES Approaches on a Circulation Control Airfoil
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Nishino, Takafumi
2011-01-01
A numerical study over a nominally two-dimensional circulation control airfoil is performed using a large-eddy simulation code and two Reynolds-averaged Navier-Stokes codes. Different Coanda jet blowing conditions are investigated. In addition to investigating the influence of grid density, a comparison is made between incompressible and compressible flow solvers. The incompressible equations are found to yield negligible differences from the compressible equations up to at least a jet exit Mach number of 0.64. The effects of different turbulence models are also studied. Models that do not account for streamline curvature effects tend to predict jet separation from the Coanda surface too late, and can produce non-physical solutions at high blowing rates. Three different turbulence models that account for streamline curvature are compared with each other and with large eddy simulation solutions. All three models are found to predict the Coanda jet separation location reasonably well, but one of the models predicts specific flow field details near the Coanda surface prior to separation much better than the other two. All Reynolds-averaged Navier-Stokes computations produce higher circulation than large eddy simulation computations, with different stagnation point location and greater flow acceleration around the nose onto the upper surface. The precise reasons for the higher circulation are not clear, although it is not solely a function of predicting the jet separation location correctly.
Moghtadaei, Motahareh; Hashemi Golpayegani, Mohammad Reza; Malekzadeh, Reza
2013-02-07
Identification of squamous dysplasia and esophageal squamous cell carcinoma (ESCC) is of great importance in the prevention of cancer incidence. Computer-aided algorithms can be very useful for identifying people at higher risk of squamous dysplasia and ESCC, and can limit clinical screening to those people. Different regression methods have been used to predict ESCC and dysplasia. In this paper, a Fuzzy Neural Network (FNN) model is selected for ESCC and dysplasia prediction. The inputs to the classifier are the risk factors. Because the relations among risk factors in the tumor system are strongly nonlinear compared with most ordinary data, the model's cost function can have more local optima, which highlights the need for global optimization methods. The method proposed in this paper is a Chaotic Optimization Algorithm (COA) followed by the common Error Back Propagation (EBP) local method. Since the model has many parameters, we use a strategy to reduce the dependency among parameters caused by the chaotic series generator; this dependency was not considered in previous COA methods. The algorithm is compared with the logistic regression model, the most recent successful method for ESCC and dysplasia prediction. The results represent a more precise prediction with a lower mean and variance of error. Copyright © 2012 Elsevier Ltd. All rights reserved.
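The two-stage idea, a chaotic global search followed by a local method, can be illustrated on a 1-D multimodal function. This is a hedged sketch of the generic COA scheme only, not the paper's algorithm, which operates on FNN weights and uses a modified chaos generator; the test function and all names are ours:

```python
import math

def chaotic_search(f, lo, hi, n_iter=2000, x0=0.7):
    """COA global stage: a logistic-map chaotic sequence scatters
    candidate solutions over [lo, hi]; keep the best one seen."""
    x, best_x, best_f = x0, lo, f(lo)
    for _ in range(n_iter):
        x = 4.0 * x * (1.0 - x)        # logistic map, fully chaotic at r = 4
        cand = lo + (hi - lo) * x      # map the chaos variable into the range
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x

def local_refine(f, x, step=1e-3, n_iter=300):
    """Local stage (a simple descent standing in for EBP): nudge the
    solution by +/- step whenever the objective improves."""
    for _ in range(n_iter):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

# Multimodal 1-D test function; its global minimum lies near x = 2.16.
f = lambda x: (x - 2.0) ** 2 + math.sin(8.0 * x)
xg = chaotic_search(f, -5.0, 5.0)
xr = local_refine(f, xg)
```

On this function the chaotic stage reliably lands in one of the deep basins and the local stage settles near its bottom, which is exactly the division of labor the paper describes for the FNN weight space.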
New higher-order Godunov code for modelling performance of two-stage light gas guns
NASA Technical Reports Server (NTRS)
Bogdanoff, D. W.; Miller, R. J.
1995-01-01
A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases makes it possible to act before the symptoms occur, for example by taking drugs to avoid the symptoms or by activating medical alarms. The prediction horizon is an important parameter here, as it must accommodate the pharmacokinetics of the medication or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to balance conservative but robust predictive models against less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario in which input data were acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. Copyright © 2016 Elsevier Inc. All rights reserved.
Moisen, Gretchen G.; Freeman, E.A.; Blackard, J.A.; Frescino, T.S.; Zimmermann, N.E.; Edwards, T.C.
2006-01-01
Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in RuleQuest's See5 and Cubist (for binary and continuous responses, respectively) are the tools of choice in many of these applications. These tools are widely used in large remote sensing applications, but are not easily interpretable, do not have ties with survey estimation methods, and use proprietary unpublished algorithms. Consequently, three alternative modelling techniques were compared for mapping presence and basal area of 13 species located in the mountain ranges of Utah, USA. The modelling techniques compared included the widely used See5/Cubist, generalized additive models (GAMs), and stochastic gradient boosting (SGB). Model performance was evaluated using independent test data sets. Evaluation criteria for mapping species presence included specificity, sensitivity, Kappa, and area under the curve (AUC). Evaluation criteria for the continuous basal area variables included correlation and relative mean squared error. For predicting species presence (setting thresholds to maximize Kappa), SGB had higher values for the majority of the species for specificity and Kappa, while GAMs had higher values for the majority of the species for sensitivity. In evaluating resultant AUC values, GAM and/or SGB models had significantly better results than the See5 models where significant differences could be detected between models. For nine out of 13 species, basal area prediction results for all modelling techniques were poor (correlations less than 0.5 and relative mean squared errors greater than 0.8), but SGB provided the most stable predictions in these instances. 
SGB and Cubist performed equally well for modelling basal area for three species with moderate prediction success, while all three modelling tools produced comparably good predictions (correlation of 0.68 and relative mean squared error of 0.56) for one species. © 2006 Elsevier B.V. All rights reserved.
A trade-off between model resolution and variance with selected Rayleigh-wave data
Xia, J.; Miller, R.D.; Xu, Y.
2008-01-01
Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (≥ 2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. First, we employed a data-resolution matrix to select data that would be well predicted and to explain the advantages of incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher mode data to estimate S-wave velocity structure. The discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and higher mode data are normally more accurately predicted than fundamental mode data because of restrictions on the data kernel for the inversion system. Second, we obtained an optimal damping vector in the vicinity of an inverted model by the singular value decomposition of a trade-off function of model resolution and variance. At the end of the paper, we used a real-world example to demonstrate that data selected with the data-resolution matrix can provide better inversion results and to explain with the data-resolution matrix why incorporating higher mode data in inversion can provide better results. We also calculated model-resolution matrices of these examples to show the potential of increasing model resolution with selected surface-wave data. 
With the optimal damping vector, we can improve and assess an inverted model obtained by a damped least-square method.
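The data-resolution matrix at the heart of this approach follows directly from the damped least-squares inverse: with data kernel G, the predicted data are d_pred = N d_obs, where N = G (GᵀG + ε²I)⁻¹ Gᵀ, so diagonal entries of N near 1 mark data the inversion can reproduce well. A numpy sketch (the function name is ours):

```python
import numpy as np

def data_resolution_matrix(G, damping=0.0):
    """Data-resolution matrix N = G (G^T G + eps^2 I)^-1 G^T of the
    (damped) least-squares inverse. Since d_pred = N d_obs, diagonal
    entries close to 1 flag data that are well predicted; small ones
    flag data the model cannot resolve."""
    n = G.shape[1]
    # Generalized inverse applied to G^T via a linear solve (more
    # stable than forming an explicit matrix inverse).
    generalized_inv = np.linalg.solve(G.T @ G + damping ** 2 * np.eye(n), G.T)
    return G @ generalized_inv
```

With no damping, N is the orthogonal projector onto the column space of G, so it is symmetric and idempotent and its trace equals the number of independent model parameters; adding damping shrinks the diagonal, reflecting the resolution-variance trade-off the abstract discusses.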
Comparison of Primary Models to Predict Microbial Growth by the Plate Count and Absorbance Methods.
Pla, María-Leonor; Oltra, Sandra; Esteban, María-Dolores; Andreu, Santiago; Palop, Alfredo
2015-01-01
The selection of a primary model to describe microbial growth in predictive food microbiology often appears to be subjective. The objective of this research was to check the performance of different mathematical models in predicting growth parameters, both by absorbance and plate count methods. For this purpose, growth curves of three different microorganisms (Bacillus cereus, Listeria monocytogenes, and Escherichia coli) grown under the same conditions, but each with different initial concentrations, were analysed. When measuring the microbial growth of each microorganism by optical density, almost all models provided quite high goodness of fit (r² > 0.93) for all growth curves. For a given growth model, the growth rate remained approximately constant across the growth curves of each microorganism, but differences were found among models. The three-phase linear model provided the lowest variation in growth rate values for all three microorganisms. The Baranyi model gave marginally higher variation, despite a much better overall fit. When measuring microbial growth by plate count, similar results were obtained. These results provide insight into predictive microbiology and will help food microbiologists and researchers to choose the proper primary growth predictive model.
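Of the primary models compared, the three-phase linear model is simple enough to state in a few lines: a flat lag phase, a phase that is linear in log counts with slope equal to the maximum growth rate, and a flat stationary phase. A sketch in log10 counts (parameter names are ours):

```python
def three_phase_linear(t, log_n0, mu_max, lag, log_nmax):
    """Three-phase linear primary growth model (log10 counts):
    constant lag phase until `lag`, linear exponential phase with
    slope `mu_max`, then a constant stationary phase at `log_nmax`."""
    if t <= lag:
        return log_n0
    t_stat = lag + (log_nmax - log_n0) / mu_max  # entry to stationary phase
    if t >= t_stat:
        return log_nmax
    return log_n0 + mu_max * (t - lag)
```

For example, with log_n0 = 3, mu_max = 0.5 log10 units/h, lag = 2 h and log_nmax = 9, the curve stays at 3 until t = 2 h, rises linearly, and saturates at 9 from t = 14 h onward.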
NASA Technical Reports Server (NTRS)
Bell, Jordan R.; Case, Jonathan L.; Molthan, Andrew L.
2011-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center develops new products and techniques that can be used in operational meteorology. The majority of these products are derived from NASA polar-orbiting satellite imagery from the Earth Observing System (EOS) platforms. One such product is a Greenness Vegetation Fraction (GVF) dataset, which is produced from Moderate Resolution Imaging Spectroradiometer (MODIS) data aboard the NASA EOS Aqua and Terra satellites. NASA SPoRT began generating daily real-time GVF composites at 1-km resolution over the Continental United States (CONUS) on 1 June 2010. The purpose of this study is to compare the National Centers for Environmental Prediction (NCEP) climatology GVF product (currently used in operational weather models) to the SPoRT-MODIS GVF during June to October 2010. The NASA Land Information System (LIS) was employed to study the impacts of the new SPoRT-MODIS GVF dataset on land surface models apart from a full numerical weather prediction (NWP) model. For the 2010 warm season, the SPoRT GVF in the western portion of the CONUS was generally higher than the NCEP climatology. The eastern CONUS GVF had variations both above and below the climatology during the period of study. These variations in GVF led to direct impacts on the rates of heating and evaporation from the land surface. The second phase of the project is to examine the impacts of the SPoRT GVF dataset on NWP using the Weather Research and Forecasting (WRF) model. Two separate WRF model simulations were made for individual severe weather case days using the NCEP GVF (control) and SPoRT GVF (experimental), with all other model parameters remaining the same. Based on the sensitivity results in these case studies, regions with higher GVF in the SPoRT model runs had higher evapotranspiration and lower direct surface heating, which typically resulted in lower (higher) predicted 2-m temperatures (2-m dewpoint temperatures). 
The opposite was true for areas with lower GVF in the SPoRT model runs. These differences in heating and evaporation rates produced subtle yet quantifiable differences in the simulated convective precipitation systems for the severe weather cases examined.
Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics
Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2017-01-01
Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model for Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)₄ model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model performs better, both in model fitting and in forecasting. Furthermore, considering long-run stability and modeling precision, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of capturing the future tendency: the number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before declining slowly and eventually dying out. PMID:28273856
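The core of the grey GM(1,1) family above can be sketched in a few lines. The following is an illustrative implementation under stated assumptions (a synthetic, hypothetical case-count series, not the Xinjiang data); the accumulated generating operation, background values, and least-squares parameter estimation follow the standard GM(1,1) construction:

```python
# Minimal GM(1,1) grey-model sketch (illustrative, not the paper's code).
import math

def gm11(x0, steps=1):
    """Fit GM(1,1) to series x0 and forecast `steps` values ahead."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # Background values: consecutive-neighbour means of x1
    z1 = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]
    # Least squares for the grey equation x0(k) = -a*z1(k) + b
    # solved via the 2x2 normal equations
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    m = n - 1
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Time-response function, then IAGO back to the original series
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
    return a, b, fitted

# Example: a slowly growing annual series, one-step-ahead forecast
a, b, fit = gm11([120, 130, 142, 155, 168], steps=1)
```

A negative development coefficient `a` corresponds to a growing series; the FGM(1,1) variant in the study additionally corrects the residuals of this basic model with a Fourier series.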
Prediction of insufficient serum vitamin D status in older women: a validated model.
Merlijn, T; Swart, K M A; Lips, P; Heymans, M W; Sohl, E; Van Schoor, N M; Netelenbos, C J; Elders, P J M
2018-05-28
We developed an externally validated simple prediction model to predict serum 25(OH)D levels < 30, < 40, < 50 and < 60 nmol/L in older women with risk factors for fractures. The benefit of the model decreases when a higher 25(OH)D threshold is chosen. Vitamin D deficiency is associated with increased fracture risk in older persons. General supplementation of all older women with vitamin D could cause medicalization and costs. We developed a clinical model to identify insufficient serum 25-hydroxyvitamin D (25(OH)D) status in older women at risk for fractures. In a sample of 2689 women ≥ 65 years selected from general practices, with at least one risk factor for fractures, a questionnaire was administered and serum 25(OH)D was measured. Multivariable logistic regression models with backward selection were developed to select predictors of insufficient serum 25(OH)D status, using the separate thresholds 30, 40, 50 and 60 nmol/L. Internal and external model validations were performed. The predictors in the models were as follows: age, BMI, vitamin D supplementation, multivitamin supplementation, calcium supplementation, daily use of margarine, fatty fish ≥ 2×/week, ≥ 1 hour/day outdoors in summer, season of blood sampling, the use of a walking aid and smoking. The AUC was 0.77 for the model using a 30 nmol/L threshold and decreased in the models with higher thresholds, to 0.72 for 60 nmol/L. We demonstrate that the model can help to distinguish patients with or without insufficient serum 25(OH)D levels at thresholds of 30 and 40 nmol/L, but not when a threshold of 50 nmol/L is required. This externally validated model can predict the presence of vitamin D insufficiency in women at risk for fractures. The potential clinical benefit of this tool depends strongly on the chosen 25(OH)D threshold and decreases when a higher threshold is used.
Meso-Scale Modelling of Deformation, Damage and Failure in Dual Phase Steels
NASA Astrophysics Data System (ADS)
Sari Sarraf, Iman
Advanced high strength steels (AHSS), such as dual phase (DP) and transformation induced plasticity (TRIP) steels, offer high ductility, formability, and strength, as well as a high strength-to-weight ratio and improved crash resistance. Dual phase steels belong to a family of high strength grades which consist of martensite, responsible for strengthening, distributed in a ductile ferrite matrix which accommodates the deformation throughout the forming process. It has been shown that the predominant damage mechanism and failure in DP steels depend on the ferrite and martensite grain sizes and their morphology, and can range from a mixture of brittle and ductile rupture to completely ductile rupture in a quasi-static uniaxial tension test. In this study, a hybrid finite element cellular automata model, initially proposed by Anton Shterenlikht (2003), was developed to evaluate the forming behaviour and predict the onset of instability and damage evolution in a dual phase steel. In this model, the finite element constitutive model is used to represent macro-level strain gradients and a damage variable, and two different cell arrays are designed to represent the ductile and brittle fracture modes at the meso-scale. In the FE part of the model, a modified Rousselier ductile damage model is developed to account for nucleation, growth and coalescence of voids. Also, several rate-dependent hardening models were developed and evaluated to describe the work hardening flow curve of DP600. Based on statistical analysis and simulation results, a modified Johnson-Cook (JC) model and a multiplicative combination of the Voce-modified JC functions were found to be the most accurate hardening models. The developed models were then implemented in a user-defined material subroutine (VUMAT) for the ABAQUS/Explicit finite element simulation software to simulate uniaxial tension tests at strain rates ranging from 0.001 s⁻¹ to 1000 s⁻¹, Marciniak tests, and electrohydraulic free-forming (EHFF).
The modified Rousselier model successfully predicted the dynamic behaviour, the onset of instability and the damage progression in DP600 tensile test specimens. The forming limit curve (FLC) and the final damage geometry of DP600 Marciniak specimens were also successfully predicted and compared with experiments. A hybrid FE+CA model was used to predict the major fracture mode of DP600 and DP780 sheet specimens under different deformation conditions. This hybrid model is able to predict quasi-cleavage fracture in ultra-fine and coarse-grained DP600 and DP780 at low and high strain rates. The numerical results showed that the proposed model correctly predicts that higher martensite volume fractions, larger ferrite grain sizes and higher strain rates promote the brittle fracture mechanism, whereas finer grain sizes and higher temperatures shift the dominant fracture mechanism to the ductile mode.
Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates
Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; ...
2013-03-07
In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
Xia, J.; Miller, R.D.; Xu, Y.
2008-01-01
Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (>2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. We employed a data-resolution matrix to select data that would be well predicted and found that there are advantages to incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. The discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and that higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. We used synthetic and real-world examples to demonstrate that data selected with the data-resolution matrix can provide better inversion results and to explain, with the data-resolution matrix, why incorporating higher-mode data in inversion can provide better results. We also calculated model-resolution matrices in these examples to show the potential of increasing model resolution with selected surface-wave data. © Birkhäuser 2008.
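For a linearized inversion d = Gm, the data-resolution matrix is N = GG⁻ᵍ, where G⁻ᵍ is the generalized inverse of the data kernel; diagonal elements of N close to 1 mark observations the inversion can reproduce well. A minimal NumPy sketch, assuming a small random kernel rather than actual Rayleigh-wave sensitivity kernels:

```python
import numpy as np

# Data-resolution matrix N = G @ G_g for a damped least-squares inverse
# G_g = (G^T G + eps*I)^-1 G^T. Illustrative random kernel, not the
# paper's Rayleigh-wave data kernel.
rng = np.random.default_rng(0)
G = rng.standard_normal((6, 3))        # 6 data, 3 model parameters
eps = 1e-3                             # damping
G_g = np.linalg.solve(G.T @ G + eps * np.eye(3), G.T)  # generalized inverse
N = G @ G_g                            # data-resolution matrix (6 x 6)

# Diagonal elements near 1 flag observations the layered-earth model can
# predict well; rows with small diagonals are poorly predicted and are
# candidates for exclusion when selecting data.
importance = np.diag(N)
```

With light damping the trace of N approaches the number of model parameters, which is one way to read how much independent information the selected data carry.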
Landscape modeling for Everglades ecosystem restoration
DeAngelis, D.L.; Gross, L.J.; Huston, M.A.; Wolff, W.F.; Fleming, D.M.; Comiskey, E.J.; Sylvester, S.M.
1998-01-01
A major environmental restoration effort is under way that will affect the Everglades and its neighboring ecosystems in southern Florida. Ecosystem and population-level modeling is being used to help in the planning and evaluation of this restoration. The specific objective of one of these modeling approaches, the Across Trophic Level System Simulation (ATLSS), is to predict the responses of a suite of higher trophic level species to several proposed alterations in Everglades hydrology. These include several species of wading birds, the snail kite, Cape Sable seaside sparrow, Florida panther, white-tailed deer, American alligator, and American crocodile. ATLSS is an ecosystem landscape-modeling approach and uses Geographic Information System (GIS) vegetation data and existing hydrology models for South Florida to provide the basic landscape for these species. A method of pseudotopography provides estimates of water depths through time at 28 × 28-m resolution across the landscape of southern Florida. Hydrologic model output drives models of habitat and prey availability for the higher trophic level species. Spatially explicit, individual-based computer models simulate these species. ATLSS simulations can compare the landscape dynamic spatial pattern of the species resulting from different proposed water management strategies. Here we compare the predicted effects of one possible change in water management in South Florida with the base case of no change. Preliminary model results predict substantial differences between these alternatives in some biotic spatial patterns. © 1998 Springer-Verlag.
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing a Non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the input of the NMSPN model and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between a static model and our approach based on experimental data is presented, showing that our approach achieves higher prediction accuracy.
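The fitting-and-forecasting step can be roughed out as follows. The snippet fits an AR(1) model, a simplified stand-in for the full ARMA model, to a hypothetical response-time series of one service component and converts the one-step forecast into a firing rate for the Petri-net stage:

```python
# Simplified stand-in for the ARMA step: fit AR(1) to a historical
# response-time series, then derive a firing rate (1 / response time)
# for the NMSPN transition. Data are hypothetical.
def ar1_forecast(series):
    """Least-squares AR(1) fit y[t] = c + phi*y[t-1]; one-step forecast."""
    x = series[:-1]
    y = series[1:]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var                  # AR coefficient
    c = my - phi * mx                # intercept
    return c + phi * series[-1]

# Response times (ms) of one service component, drifting upward
rt = [100, 104, 103, 108, 110, 113, 115, 118]
next_rt = ar1_forecast(rt)           # forecast next response time (ms)
firing_rate = 1000.0 / next_rt       # transitions per second
```

A production version would use a full ARMA(p, q) fit (e.g. via a statistics library) and re-estimate the model as new response times arrive, which is what makes the framework dynamic.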
Greek, Ray; Hansen, Lawrence A
2013-11-01
We surveyed the scientific literature regarding amyotrophic lateral sclerosis, the SOD1 mouse model, complex adaptive systems, evolution, drug development, animal models, and philosophy of science in an attempt to analyze the SOD1 mouse model of amyotrophic lateral sclerosis in the context of evolved complex adaptive systems. Humans and animals are examples of evolved complex adaptive systems. It is difficult to predict the outcome from perturbations to such systems because of the characteristics of complex systems. Modeling even one complex adaptive system in order to predict outcomes from perturbations is difficult. Predicting outcomes in one evolved complex adaptive system based on outcomes from a second, especially when the perturbation occurs at higher levels of organization, is even more problematic. Using animal models to predict human outcomes to perturbations such as disease and drugs should therefore have a very low predictive value. We present empirical evidence confirming this and suggest a theory to explain this phenomenon. We analyze the SOD1 mouse model of amyotrophic lateral sclerosis in order to illustrate this position. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
Pompili, Cecilia; Shargall, Yaron; Decaluwe, Herbert; Moons, Johnny; Chari, Madhu; Brunelli, Alessandro
2018-01-03
The objective of this study was to evaluate the performance of 3 thoracic surgery centres using the Eurolung risk models for morbidity and mortality. This was a retrospective analysis performed on data collected from 3 academic centres (2014-2016). Seven hundred and twenty-one patients in Centre 1, 857 patients in Centre 2 and 433 patients in Centre 3 who underwent anatomical lung resections were analysed. The Eurolung1 and Eurolung2 models were used to predict risk-adjusted cardiopulmonary morbidity and 30-day mortality rates. Observed and risk-adjusted outcomes were compared within each centre. The observed morbidity of Centre 1 was in line with the predicted morbidity (observed 21.1% vs predicted 22.7%, P = 0.31). Centre 2 performed better than expected (observed morbidity 20.2% vs predicted 26.7%, P < 0.001), whereas the observed morbidity of Centre 3 was higher than the predicted morbidity (observed 41.1% vs predicted 24.3%, P < 0.001). Centre 1 had higher observed mortality when compared with the predicted mortality (3.6% vs 2.1%, P = 0.005), whereas Centre 2 had an observed mortality rate significantly lower than the predicted mortality rate (1.2% vs 2.5%, P = 0.013). Centre 3 had an observed mortality rate in line with the predicted mortality rate (observed 1.4% vs predicted 2.4%, P = 0.17). The observed mortality rates in the patients with major complications were 30.8% in Centre 1 (versus predicted mortality rate 3.8%, P < 0.001), 8.2% in Centre 2 (versus predicted mortality rate 4.1%, P = 0.030) and 9.0% in Centre 3 (versus predicted mortality rate 3.5%, P = 0.014). The Eurolung models were successfully used as risk-adjusting instruments to internally audit the outcomes of 3 different centres, showing their applicability for future quality improvement initiatives. © The Author(s) 2018. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Extending medium-range predictability of extreme hydrological events in Europe
Lavers, David A.; Pappenberger, Florian; Zsoter, Ervin
2014-01-01
Widespread flooding occurred across northwest Europe during the winter of 2013/14, resulting in large socioeconomic damages. In the historical record, extreme hydrological events have been connected with intense water vapour transport. Here we show that water vapour transport has higher medium-range predictability compared with precipitation in the winter 2013/14 forecasts from the European Centre for Medium-Range Weather Forecasts. Applying the concept of potential predictability, the transport is found to extend the forecast horizon by 3 days in some European regions. Our results suggest that the breakdown in precipitation predictability is due to uncertainty in the horizontal mass convergence location, an essential mechanism for precipitation generation. Furthermore, the predictability increases with larger spatial averages. Given the strong association between precipitation and water vapour transport, especially for extreme events, we conclude that the higher transport predictability could be used as a model diagnostic to increase preparedness for extreme hydrological events. PMID:25387309
Hernández, Maciel M.; Eisenberg, Nancy; Valiente, Carlos; Diaz, Anjolii; VanSchyndel, Sarah K.; Berger, Rebecca H.; Terrell, Nathan; Silva, Kassondra M.; Spinrad, Tracy L.; Southworth, Jody
2015-01-01
The purpose of the study was to evaluate bidirectional associations between peer acceptance and both emotion and effortful control during kindergarten (N = 301). In both the fall and spring semesters, we obtained peer nominations of acceptance, measures of positive and negative emotion based on naturalistic observations in school (i.e., classroom, lunch/recess), and observers’ reports of effortful control (i.e., inhibitory control, attention focusing) and emotions (i.e., positive, negative). In structural equation panel models, peer acceptance in fall predicted higher effortful control in spring. Effortful control in fall did not predict peer acceptance in spring. Negative emotion predicted lower peer acceptance across time for girls but not for boys. Peer acceptance did not predict negative or positive emotion over time. In addition, we tested interactions between positive or negative emotion and effortful control predicting peer acceptance. Positive emotion predicted higher peer acceptance for children low in effortful control. PMID:28348445
Optimization of multi-environment trials for genomic selection based on crop models.
Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J
2017-08-01
We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling via crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this purpose and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. The MET defined with OptiMET was, on average, more efficient than a random MET composed of twice as many environments, in terms of the quality of the parameter estimates. OptiMET is thus a valuable tool for determining the optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.
Validating spatiotemporal predictions of an important pest of small grains.
Merrill, Scott C; Holtzer, Thomas O; Peairs, Frank B; Lester, Philip J
2015-01-01
Arthropod pests are typically managed using tactics applied uniformly to the whole field. Precision pest management applies tactics under the assumption that within-field pest pressure differences exist. This approach allows for more precise and judicious use of scouting resources and management tactics. For example, a portion of a field delineated as attractive to pests may be selected to receive extra monitoring attention. Likely because of the high variability in pest dynamics, little attention has been given to developing precision pest prediction models. Here, multimodel synthesis was used to develop a spatiotemporal model predicting the density of a key pest of wheat, the Russian wheat aphid, Diuraphis noxia (Kurdjumov). Spatially implicit and spatially explicit models were synthesized to generate spatiotemporal pest pressure predictions. Cross-validation and field validation were used to confirm model efficacy. A strong within-field signal depicting aphid density was confirmed with low prediction errors. Results show that the within-field model predictions will provide higher-quality information than would be provided by traditional field scouting. With improvements to the broad-scale model component, the model synthesis approach and resulting tool could improve pest management strategy and provide a template for the development of spatially explicit pest pressure models. © 2014 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Bernardes, S.
2016-12-01
Global coupled carbon-climate simulations show considerable variability in outputs for atmospheric and land fields over the 21st century. This variability includes changes in temperature and in the quantity and spatiotemporal distribution of precipitation for large regions on the planet. Studies have considered that reductions in water availability due to decreased precipitation and increased water demand by the atmosphere may negatively affect plant metabolism and reduce carbon uptake. Future increases in carbon dioxide concentrations are expected to affect those interactions and potentially offset reductions in productivity. It is uncertain how plants will adjust their water use efficiency (WUE, plant production per water loss by evapotranspiration) in response to changing environmental conditions. This work investigates predicted changes in WUE in the 21st century by analyzing an ensemble of Earth System Models from the Coupled Model Intercomparison Project Phase 5 (CMIP5), together with flux tower data and products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. Two representative concentration pathways were selected to describe possible climate futures (RCP4.5 and RCP8.5). Periods of analysis included 2006-2099 (predicted) and 1850-2005 (reference). Comparisons between modeled, flux and satellite data for IPCC SREX regions were used to address the significant intermodel variability observed for the CMIP5 ensemble (larger variability for RCP8.5, higher intermodel agreement in Southeast Asia, lower intermodel agreement in arid areas). Model skill was evaluated in support of model selection and the spatiotemporal analysis of changes in WUE. Global, regional and latitudinal distributions of departures of projected conditions in relation to historical values are presented for both concentration pathways.
Results showed high model sensitivity to the different concentration pathways, with increases in GPP and WUE over most of the planet (increases consistently higher under RCP8.5). The largest increases in GPP and WUE are predicted to occur over the higher latitudes of the northern hemisphere (boreal region), with changes in WUE generally tracking those in GPP. Decreases in productivity and WUE occur mostly in the tropics, affecting tropical forests in Central America and the Amazon.
Pre-operative prediction of surgical morbidity in children: comparison of five statistical models.
Cooper, Jennifer N; Wei, Lai; Fernandez, Soledad A; Minneci, Peter C; Deans, Katherine J
2015-02-01
The accurate prediction of surgical risk is important to patients and physicians. Logistic regression (LR) models are typically used to estimate these risks. However, in the fields of data mining and machine-learning, many alternative classification and prediction algorithms have been developed. This study aimed to compare the performance of LR to several data mining algorithms for predicting 30-day surgical morbidity in children. We used the 2012 National Surgical Quality Improvement Program-Pediatric dataset to compare the performance of (1) a LR model that assumed linearity and additivity (simple LR model) (2) a LR model incorporating restricted cubic splines and interactions (flexible LR model) (3) a support vector machine, (4) a random forest and (5) boosted classification trees for predicting surgical morbidity. The ensemble-based methods showed significantly higher accuracy, sensitivity, specificity, PPV, and NPV than the simple LR model. However, none of the models performed better than the flexible LR model in terms of the aforementioned measures or in model calibration or discrimination. Support vector machines, random forests, and boosted classification trees do not show better performance than LR for predicting pediatric surgical morbidity. After further validation, the flexible LR model derived in this study could be used to assist with clinical decision-making based on patient-specific surgical risks. Copyright © 2014 Elsevier Ltd. All rights reserved.
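Model comparisons like the one above hinge on discrimination, usually summarized by the AUC, which has a simple rank-based (Mann-Whitney) form: the probability that a randomly chosen positive case receives a higher predicted risk than a randomly chosen negative case. A self-contained sketch with hypothetical labels and predicted risks:

```python
# Rank-based (Mann-Whitney) AUC; pure Python, hypothetical data.
def auc(labels, scores):
    """P(score of a random positive > score of a random negative)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count pairwise wins; ties count half
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [0, 0, 1, 0, 1, 1, 0, 1]                       # 1 = morbidity event
risk = [0.10, 0.40, 0.35, 0.80, 0.55, 0.90, 0.05, 0.70]  # predicted risks
auc_val = auc(y, risk)                              # 0.75 here
```

An AUC of 0.5 is chance-level discrimination and 1.0 is perfect separation; note that AUC says nothing about calibration, which is why the study assessed calibration separately.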
Kuo, Pao-Jen; Wu, Shao-Chun; Chien, Peng-Chen; Chang, Shu-Shya; Rau, Cheng-Shyuan; Tai, Hsueh-Ling; Peng, Shu-Hui; Lin, Yi-Chun; Chen, Yi-Chun; Hsieh, Hsiao-Yun; Hsieh, Ching-Hua
2018-03-02
The aim of this study was to develop an effective surgical site infection (SSI) prediction model in patients receiving free-flap reconstruction after surgery for head and neck cancer using an artificial neural network (ANN), and to compare its predictive power with that of conventional logistic regression (LR). There were 1,836 patients with 1,854 free-flap reconstructions and 438 postoperative SSIs in the dataset for analysis. They were randomly assigned in a 7:3 ratio to a training set and a test set. Based on comprehensive characteristics of patients and diseases in the absence or presence of operative data, prediction of SSI was performed at two time points (pre-operatively and post-operatively) with a feed-forward ANN and the LR models. In addition to the calculated accuracy, sensitivity, and specificity, the predictive performance of the ANN and LR models was assessed based on area under the curve (AUC) measures of receiver operating characteristic curves and the Brier score. The ANN had a significantly higher AUC for post-operative prediction (0.892) and for pre-operative prediction (0.808) than LR (both P < 0.0001). In addition, the post-operative prediction by ANN had a significantly higher AUC than the pre-operative prediction (P < 0.0001). With the highest AUC and the lowest Brier score (0.090), the post-operative prediction by ANN had the best overall predictive performance for SSI after free-flap reconstruction in patients receiving surgery for head and neck cancer.
[Spatial distribution prediction of surface soil Pb in a battery contaminated site].
Liu, Geng; Niu, Jun-Jie; Zhang, Chao; Zhao, Xin; Guo, Guan-Lin
2014-12-01
In order to enhance the reliability of risk estimation and to improve the accuracy of pollution-scope determination in a battery-contaminated site whose characteristic soil pollutant is Pb, four spatial interpolation models, including a Combination Prediction Model (OK(LG) + TIN), a kriging model (OK(BC)), an Inverse Distance Weighting (IDW) model, and a Spline model, were compared with respect to their effects on the spatial distribution and pollution assessment of soil Pb. The results showed that the Pb concentration varied significantly and the data were severely skewed, with a high coefficient of variation in parts of the site. OK(LG) + TIN predicted the actual pollution situation of the contaminated site more accurately than the other three models. The prediction accuracy of the other models was lower, owing to differences in the models' underlying principles and the characteristics of the data. The interpolation results of OK(BC), IDW and Spline could not reflect the detailed characteristics of the seriously contaminated areas and were not suitable for mapping and spatial distribution prediction of soil Pb in this site. This study provides useful guidance for defining the remediation boundary and making remediation decisions for contaminated sites.
Prediction of mode of death in heart failure: the Seattle Heart Failure Model.
Mozaffarian, Dariush; Anker, Stefan D; Anand, Inder; Linker, David T; Sullivan, Mark D; Cleland, John G F; Carson, Peter E; Maggioni, Aldo P; Mann, Douglas L; Pitt, Bertram; Poole-Wilson, Philip A; Levy, Wayne C
2007-07-24
Prognosis and mode of death in heart failure patients are highly variable in that some patients die suddenly (often from ventricular arrhythmia) and others die of progressive failure of cardiac function (pump failure). Prediction of mode of death may facilitate decisions about specific medications or devices. We used the Seattle Heart Failure Model (SHFM), a validated prediction model for total mortality in heart failure, to assess the mode of death in 10,538 ambulatory patients with New York Heart Association class II to IV heart failure and predominantly systolic dysfunction enrolled in 6 randomized trials or registries. During 16,735 person-years of follow-up, 2014 deaths occurred, which included 1014 sudden deaths and 684 pump-failure deaths. Compared with a SHFM score of 0, patients with a score of 1 had a 50% higher risk of sudden death, patients with a score of 2 had a nearly 3-fold higher risk, and patients with a score of 3 or 4 had a nearly 7-fold higher risk (P<0.001 for all comparisons; 1-year area under the receiver operating curve, 0.68). Stratification of risk of pump-failure death was even more pronounced, with a 4-fold higher risk with a score of 1, a 15-fold higher risk with a score of 2, a 38-fold higher risk with a score of 3, and an 88-fold higher risk with a score of 4 (P<0.001 for all comparisons; 1-year area under the receiver operating curve, 0.85). The proportion of deaths caused by sudden death versus pump-failure death decreased from a ratio of 7:1 with a SHFM score of 0 to a ratio of 1:2 with a SHFM score of 4 (P trend <0.001). The SHFM score provides information about the likely mode of death among ambulatory heart failure patients. Investigation is warranted to determine whether such information might predict responses to or cost-effectiveness of specific medications or devices in heart failure patients.
Subseasonal-to-Seasonal Science and Prediction Initiatives of the NOAA MAPP Program
NASA Astrophysics Data System (ADS)
Archambault, H. M.; Barrie, D.; Mariotti, A.
2016-12-01
There is great practical interest in developing predictions beyond the 2-week weather timescale. Scientific communities have historically organized themselves around the weather and climate problems, but the subseasonal-to-seasonal (S2S) timescale range overall is recognized as new territory for which a concerted shared effort is needed. For instance, the climate community, as part of programs like CLIVAR, has historically tackled coupled phenomena and modeling, keys to harnessing predictability on longer timescales. In contrast, the weather community has focused on synoptic dynamics, higher-resolution modeling, and enhanced model initialization, of importance at the shorter timescales and especially for the prediction of extremes. The processes and phenomena specific to timescales between weather and climate require a unified approach to science, modeling, and predictions. Internationally, the WWRP/WCRP S2S Prediction Project is a promising catalyzer for these types of activities. Among the various contributing U.S. research programs, the Modeling, Analysis, Predictions and Projections (MAPP) program, as part of the NOAA Climate Program Office, has launched coordinated research and transition activities that help to meet the agency's goals to fill the weather-to-climate prediction gap and will contribute to advancing international goals. This presentation will describe ongoing MAPP program S2S science and prediction initiatives, specifically the MAPP S2S Task Force and the SubX prediction experiment.
Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas
2014-03-01
Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
Improving orbit prediction accuracy through supervised machine learning
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli
2018-05-01
Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions based solely on physics-based models may fail to achieve the accuracy required for collision avoidance and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of current methods. Inspired by machine learning (ML) theory, through which models are learned from large amounts of observed data and prediction is conducted without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model trained on one RSO can be applied to other RSOs that share some common features.
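The hybrid physics-plus-learning idea can be sketched in a few lines. Everything below is synthetic and illustrative, not the authors' implementation: a quadratic drag-like term stands in for unmodeled dynamics, and a polynomial fit stands in for the ML model that learns the prediction error.

```python
import numpy as np

# Sketch: a physics-based propagator with a systematic error, plus a learned
# correction fitted to observed residuals and applied to future predictions.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)               # time since epoch (arbitrary units)
truth = 7000.0 + 2.0 * t + 0.05 * t**2        # "true" along-track position (km)
physics = 7000.0 + 2.0 * t                    # physics model misses the drag-like term
obs = truth + rng.normal(0.0, 0.01, t.size)   # tracking observations with noise

# Learn the prediction error as a function of time on the first half of the
# data, then correct the physics model on the (extrapolated) second half.
train = t < 5.0
coef = np.polyfit(t[train], (obs - physics)[train], deg=2)
corrected = physics + np.polyval(coef, t)

raw_rmse = np.sqrt(np.mean((physics[~train] - truth[~train])**2))
ml_rmse = np.sqrt(np.mean((corrected[~train] - truth[~train])**2))
print(raw_rmse, ml_rmse)
```

The correction shrinks the extrapolation error without requiring an explicit model of the missing dynamics, which is the essence of the learning-based process described in the abstract.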
Baba, Hiromi; Takahara, Jun-ichi; Yamashita, Fumiyoshi; Hashida, Mitsuru
2015-11-01
The solvent effect on skin permeability is important for assessing the effectiveness and toxicological risk of new dermatological formulations in pharmaceuticals and cosmetics development. The solvent effect occurs by diverse mechanisms, which could be elucidated by efficient and reliable prediction models. However, such prediction models have been hampered by the small variety of permeants and mixture components archived in databases and by low predictive performance. Here, we propose a solution to both problems. We first compiled a novel large database of 412 samples from 261 structurally diverse permeants and 31 solvents reported in the literature. The data were carefully screened to ensure their collection under consistent experimental conditions. To construct a high-performance predictive model, we then applied support vector regression (SVR) and random forest (RF) with greedy stepwise descriptor selection to our database. The models were internally and externally validated. The SVR achieved higher performance statistics than RF. The (externally validated) determination coefficient, root mean square error, and mean absolute error of SVR were 0.899, 0.351, and 0.268, respectively. Moreover, because all descriptors are fully computational, our method can predict as-yet unsynthesized compounds. Our high-performance prediction model offers an attractive alternative to permeability experiments for pharmaceutical and cosmetic candidate screening and optimizing skin-permeable topical formulations.
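A minimal sketch of greedy stepwise (forward) descriptor selection wrapped around support vector regression, the combination the abstract describes, on synthetic data. The stopping tolerance, kernel settings, and descriptors here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))                       # 6 candidate descriptors
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0.0, 0.1, 120)  # only 0 and 3 matter

selected, remaining = [], list(range(6))
best_score = -np.inf
while remaining:
    # Score each candidate descriptor added to the current set by CV R^2
    scores = {j: cross_val_score(SVR(kernel="rbf", C=10.0),
                                 X[:, selected + [j]], y, cv=5).mean()
              for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score + 1e-3:         # stop when no real gain
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print(sorted(selected), round(best_score, 3))
```

Because selection is driven by cross-validated performance, the informative descriptors are retained while uninformative ones are dropped, mirroring the paper's greedy stepwise procedure.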
Cissen, M; Meijerink, A M; D'Hauwers, K W; Meissner, A; van der Weide, N; Mochtar, M H; de Melker, A A; Ramos, L; Repping, S; Braat, D D M; Fleischer, K; van Wely, M
2016-09-01
Can an externally validated model, based on biological variables, be developed to predict successful sperm retrieval with testicular sperm extraction (TESE) in men with non-obstructive azoospermia (NOA) using a large nationwide cohort? Our prediction model including six variables was able to make a good distinction between men with a good chance and men with a poor chance of obtaining spermatozoa with TESE. Using ICSI in combination with TESE, even men suffering from NOA are able to father their own biological child. Only in approximately half of the patients with NOA can testicular sperm be retrieved successfully. The few models that have been developed to predict the chance of obtaining spermatozoa with TESE were based on small datasets and none of them have been validated externally. We performed a retrospective nationwide cohort study. Data from 1371 TESE procedures were collected between June 2007 and June 2015 in two fertility centres. All men with NOA undergoing their first TESE procedure as part of a fertility treatment were included. The primary end-point was the presence of one or more spermatozoa (regardless of their motility) in the testicular biopsies. We constructed a model for the prediction of successful sperm retrieval, using univariable and multivariable binary logistic regression analysis and the dataset from one centre. This model was then validated using the dataset from the other centre. The area under the receiver-operating characteristic curve (AUC) was calculated and model calibration was assessed. There were 599 (43.7%) successful sperm retrievals after a first TESE procedure. The prediction model, built after multivariable logistic regression analysis, demonstrated that higher male age, higher levels of serum testosterone and lower levels of FSH and LH were predictive for successful sperm retrieval. Diagnosis of idiopathic NOA and the presence of an azoospermia factor c gene deletion were predictive for unsuccessful sperm retrieval.
The AUC was 0.69 (95% confidence interval (CI): 0.66-0.72). The difference between the mean observed chance and the mean predicted chance was <2.0% in all groups, indicating good calibration. In validation, the model had moderate discriminative capacity (AUC 0.65, 95% CI: 0.62-0.72) and moderate calibration: the predicted probability never differed by more than 9.2% of the mean observed probability. The percentage of men with Klinefelter syndrome among men diagnosed with NOA is expected to be higher than in our study population, which is a potential selection bias. The ability of the sperm retrieved to fertilize an oocyte and produce a live birth was not tested. This model can help in clinical decision-making in men with NOA by reliably predicting the chance of obtaining spermatozoa with TESE. This study was partly supported by an unconditional grant from Merck Serono (to D.D.M.B. and K.F.) and by the Department of Obstetrics and Gynaecology of Radboud University Medical Center, Nijmegen, The Netherlands, the Department of Obstetrics and Gynaecology, Jeroen Bosch Hospital, Den Bosch, The Netherlands, and the Department of Obstetrics and Gynaecology, Academic Medical Center, Amsterdam, The Netherlands. Merck Serono had no influence in concept, design nor elaboration of this study. Not applicable. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
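The develop-and-discriminate workflow (fit a logistic model, summarise discrimination with the AUC) can be illustrated with a toy example. The predictor names below echo the abstract, but the data, coefficients, and fitting loop are invented for illustration and are not the study's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
age = rng.normal(35.0, 8.0, n)            # hypothetical predictors
fsh = rng.normal(15.0, 6.0, n)
lin = -0.5 + 0.06 * (age - 35.0) - 0.09 * (fsh - 15.0)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)  # retrieval outcome

# Fit logistic regression by plain gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), age - 35.0, fsh - 15.0])
beta = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.05 * X.T @ (y - p) / n

# AUC in its rank (Mann-Whitney) form: the probability that a random
# positive case receives a higher predicted probability than a random negative
p = 1.0 / (1.0 + np.exp(-X @ beta))
pos, neg = p[y == 1], p[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(round(auc, 2))
```

External validation, as in the study, would then apply the frozen coefficients to a second cohort and recompute the AUC and calibration there.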
Predicting Metabolic Cost of Running with and without Backpack Loads
1987-01-01
[Fragmentary OCR of a scanned report; most of this abstract is unrecoverable.] ... would be higher than generated by our prediction model ... more demanding for the cardiorespiratory system. These differences, however, could be accounted for ... the prediction of energy cost in women walking and running in shoes and boots ... Ergonomics 29:439.
ERIC Educational Resources Information Center
Hambrick, D.Z.; Oswald, F.L.
2005-01-01
Research suggests that both working memory capacity and domain knowledge contribute to individual differences in higher-level cognition. This study evaluated three hypotheses concerning the interplay between these factors. The compensation hypothesis predicts that domain knowledge attenuates the influence of working memory capacity on higher-level…
At the Crossroads of Nanotoxicology: Past Achievements and Current Challenges
2015-01-01
rates of ionic dissolution, improving in vitro to in vivo predictive efficiencies, and establishing safety exposure limits. This Review will discuss...Oberdörster et al., 2005a), which drove the focus of in vitro and in vivo model selection to accommodate these areas of higher NM exposure. Most...Accordingly, a current challenge is the design of simple, in vitro models that reliably predict in vivo effects following a NM challenge. In order
Wu, Johnny; Witkiewitz, Katie; McMahon, Robert J; Dodge, Kenneth A
2010-10-01
Conduct problems, substance use, and risky sexual behavior have been shown to coexist among adolescents, which may lead to significant health problems. The current study was designed to examine relations among these problem behaviors in a community sample of children at high risk for conduct disorder. A latent growth model of childhood conduct problems showed a decreasing trend from grades K to 5. During adolescence, four concurrent conduct problem and substance use trajectory classes were identified (high conduct problems and high substance use, increasing conduct problems and increasing substance use, minimal conduct problems and increasing substance use, and minimal conduct problems and minimal substance use) using a parallel process growth mixture model. Across all substances (tobacco, binge drinking, and marijuana use), higher levels of childhood conduct problems during kindergarten predicted a greater probability of classification into more problematic adolescent trajectory classes relative to less problematic classes. For tobacco and binge drinking models, increases in childhood conduct problems over time also predicted a greater probability of classification into more problematic classes. For all models, individuals classified into more problematic classes showed higher proportions of early sexual intercourse, infrequent condom use, receiving money for sexual services, and ever contracting an STD. Specifically, tobacco use and binge drinking during early adolescence predicted higher levels of sexual risk taking into late adolescence. Results highlight the importance of studying the conjoint relations among conduct problems, substance use, and risky sexual behavior in a unified model. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Roberts, G; Bellinger, D; McCormick, M C
2007-03-01
Premature and low birth weight (LBW) children have a high prevalence of academic difficulties. This study examines a model composed of cumulative risk factors that allows early identification of these difficulties. This is a secondary analysis of data from a large cohort of premature (<37 weeks gestation) and LBW (<2500 g) children. The study subjects were 8 years of age; 494 had data available for reading achievement and 469 for mathematics. Potential predictor variables were categorized into 4 domains: sociodemographic, neonatal, maternal mental health and early childhood (ages 3 and 5). Regression analysis was used to create a model to predict reading and mathematics scores. Variables from all domains were significant in the model, predicting low achievement scores in reading (R(2) of 0.49, model p-value < .0001) and mathematics (R(2) of 0.44, model p-value < .0001). Significant risk factors for lower reading scores were: lower maternal education and income, and Black or Hispanic race (sociodemographic); lower birth weight and male gender (neonatal); lower maternal responsivity (maternal mental health); lower intelligence, visual-motor skill and higher behavioral disturbance scores (early childhood). Lower mathematics scores were predicted by lower maternal education, income and age and Black or Hispanic race (sociodemographic); lower birth weight and higher head circumference (neonatal); lower maternal responsivity (maternal mental health); lower intelligence, visual-motor skill and higher behavioral disturbance scores (early childhood). Sequential early childhood risk factors in premature and LBW children lead to a cumulative risk for academic difficulties and can be used for early identification.
Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.
Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan
2016-11-01
In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat ( L.) and maize ( L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60-68% higher accuracy than the corresponding single-environment models for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
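The linear vs Gaussian kernel comparison can be sketched with plain kernel ridge regression on synthetic data. The marker matrix, median-distance bandwidth rule, and ridge parameter below are illustrative assumptions, not the GBLUP/RKHS implementations used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 4))                   # 150 lines x 4 synthetic "markers"
# Phenotype depends nonlinearly on the markers, which favours the Gaussian kernel
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]**2 + rng.normal(0.0, 0.1, 150)

tr, te = np.arange(100), np.arange(100, 150)    # training vs prediction sets
Klin = X @ X.T                                  # linear (GBLUP-like) kernel
D = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
Kgau = np.exp(-D / (2.0 * np.median(D)))        # Gaussian kernel, median-distance bandwidth

def kernel_ridge_pred(K, y_tr, lam=0.1):
    # alpha solves (K_tr,tr + lam I) alpha = y_tr; predictions are K_te,tr @ alpha
    alpha = np.linalg.solve(K[np.ix_(tr, tr)] + lam * np.eye(len(tr)), y_tr)
    return K[np.ix_(te, tr)] @ alpha

results = {}
for name, K in [("linear", Klin), ("gaussian", Kgau)]:
    pred = kernel_ridge_pred(K, y[tr])
    results[name] = np.corrcoef(pred, y[te])[0, 1]  # accuracy as predictive correlation
print(results)
```

When marker effects are nonlinear, the Gaussian kernel's extra flexibility yields a higher predictive correlation, which is the mechanism the abstract credits for the RKHS models' advantage.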
NASA Astrophysics Data System (ADS)
Galve, J. P.; Gutiérrez, F.; Remondo, J.; Bonachea, J.; Lucha, P.; Cendrero, A.
2009-10-01
Multiple sinkhole susceptibility models have been generated in three study areas of the Ebro Valley evaporite karst (NE Spain) applying different methods (nearest neighbour distance, sinkhole density, heuristic scoring system and probabilistic analysis) for each sinkhole type separately (cover collapse sinkholes, cover and bedrock collapse sinkholes and cover and bedrock sagging sinkholes). The quantitative and independent evaluation of the predictive capability of the models reveals that: (1) The most reliable susceptibility models are those derived from the nearest neighbour distance and sinkhole density. These models can be generated in a simple and rapid way from detailed geomorphological maps. (2) The reliability of the nearest neighbour distance and density models is conditioned by the degree of clustering of the sinkholes. Consequently, the karst areas in which sinkholes show a higher clustering are a priori more favourable for predicting new occurrences. (3) The predictive capability of the best models obtained in this research is significantly higher (12.5-82.5%) than that of the heuristic sinkhole susceptibility model incorporated into the General Urban Plan for the municipality of Zaragoza. Although the probabilistic approach provides lower quality results than the methods based on sinkhole proximity and density, it helps to identify the most significant factors and select the most effective mitigation strategies and may be applied to model susceptibility in different future scenarios.
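The nearest-neighbour-distance method, and why sinkhole clustering makes it work, can be sketched on synthetic coordinates; the cluster geometry and hold-out split below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
centers = rng.uniform(0.0, 10.0, size=(5, 2))              # 5 karst clusters (km)
sinks = np.vstack([c + rng.normal(0.0, 0.3, (30, 2)) for c in centers])
rng.shuffle(sinks)
inventory, new_events = sinks[:100], sinks[100:]           # mapped vs "future" sinkholes
random_sites = rng.uniform(0.0, 10.0, size=(50, 2))        # background comparison sites

def nn_dist(points, inv):
    # Distance from each point to its nearest mapped sinkhole
    d = np.linalg.norm(points[:, None, :] - inv[None, :, :], axis=-1)
    return d.min(axis=1)

d_new = nn_dist(new_events, inventory).mean()
d_rand = nn_dist(random_sites, inventory).mean()
print(round(d_new, 2), round(d_rand, 2))
```

Because new events lie much closer to the mapped inventory than random sites do, scoring susceptibility by nearest-neighbour distance concentrates predicted occurrences correctly; when sinkholes are weakly clustered this gap shrinks and the method loses reliability, as the abstract notes.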
Initialization and Predictability of a Coupled ENSO Forecast Model
NASA Technical Reports Server (NTRS)
Chen, Dake; Zebiak, Stephen E.; Cane, Mark A.; Busalacchi, Antonio J.
1997-01-01
The skill of a coupled ocean-atmosphere model in predicting ENSO has recently been improved using a new initialization procedure in which initial conditions are obtained from the coupled model, nudged toward observations of wind stress. The previous procedure involved direct insertion of wind stress observations, ignoring model feedback from ocean to atmosphere. The success of the new scheme is attributed to its explicit consideration of ocean-atmosphere coupling and the associated reduction of "initialization shock" and random noise. The so-called spring predictability barrier is eliminated, suggesting that such a barrier is not intrinsic to the real climate system. Initial attempts to generalize the nudging procedure to include SST were not successful; possible explanations are offered. In all experiments forecast skill is found to be much higher for the 1980s than for the 1970s and 1990s, suggesting decadal variations in predictability.
Reheating predictions in gravity theories with derivative coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalianis, Ioannis; Koutsoumbas, George; Ntrekis, Konstantinos
2017-02-01
We investigate the inflationary predictions of a simple Horndeski theory where the inflaton scalar field has a non-minimal derivative coupling (NMDC) to the Einstein tensor. The NMDC is well motivated for the construction of successful models for inflation, nevertheless its inflationary predictions are not observationally distinct. We show that it is possible to probe the effects of the NMDC on the CMB observables by taking into account both the dynamics of the inflationary slow-roll phase and the subsequent reheating. We perform a comparative study between representative inflationary models with canonical fields minimally coupled to gravity and models with NMDC. We find that the inflation models with dominant NMDC generically predict a higher reheating temperature and a different range for the tilt of the scalar perturbation spectrum n_s and scalar-to-tensor ratio r, potentially testable by current and future CMB experiments.
Ryan, Seamus; McGuire, Brian
2016-05-01
Rheumatoid arthritis is a chronic and progressive autoimmune disorder with symptoms sometimes including chronic pain and depression. The current study aimed to explore some of the psychological variables which predict both pain-related outcomes (pain severity and pain interference) and psychological outcomes (depression and anxiety) amongst patients with rheumatoid arthritis experiencing chronic pain. In particular, this study aimed to establish whether either self-concealment, or the satisfaction of basic psychological needs (autonomy, relatedness, and competence), could explain a significant portion of the variance in pain outcomes and psychological outcomes amongst this patient group. Online questionnaires were completed by 317 rheumatoid arthritis patients with chronic pain, providing data across a number of predictor and outcome variables. Hierarchical multiple linear regressions indicated that the predictive models for each of the four outcome variables were significant, and had good levels of fit with the data. In terms of individual predictor variables, higher relatedness significantly predicted lower depression, and higher autonomy significantly predicted lower anxiety. The model generated by this study may identify factors to be targeted by future interventions with the goal of reducing depression and anxiety amongst patients with rheumatoid arthritis experiencing chronic pain. The findings of this study have shown that the autonomy and the relatedness of patients with rheumatoid arthritis play important roles in promoting psychological well-being. Targeted interventions could help to enhance the lives of patients despite the presence of chronic pain. What is already known about the subject? 
Amongst a sample of chronic pain patients who primarily had a diagnosis of fibromyalgia, it was found that higher levels of self-concealment were associated with higher self-reported pain levels and reduced well-being (as measured by anxiety/depression), and these associations were mediated by patients' needs for autonomy not being met (Uysal & Lu, Health Psychology, 2011, 30, 606). What does this study add? For the first time amongst a rheumatoid arthritis population experiencing chronic pain, we found that higher levels of relatedness significantly predicted lower depression. For the first time amongst the same population, we found that higher levels of autonomy significantly predicted lower anxiety. © 2015 The British Psychological Society.
Melanoma Risk Prediction Models
Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling on behavioral changes to decrease risk.
Posa, Mihalj; Pilipović, Ana; Lalić, Mladena; Popović, Jovan
2011-02-15
A linear dependence between temperature (t) and the retention coefficient (k, reversed-phase HPLC) of bile acids is obtained. The parameters of the linear function k=f(t) (a, intercept; b, slope) correlate highly with bile acid structure. The investigated bile acids form linear congeneric groups on a principal component score plot (calculated from k=f(t)) that are in accordance with the conformations of the hydroxyl and oxo groups on the bile acid steroid skeleton. The partition coefficient (K(p)) of nitrazepam in bile acid micelles is investigated. Nitrazepam molecules incorporated in micelles show modified bioavailability (depot effect, higher permeability, etc.). Using the multiple linear regression (MLR) method, QSAR models of the nitrazepam partition coefficient K(p) are derived at 25°C and 37°C. The linear regression models at both temperatures include the experimentally obtained lipophilicity parameter (PC1 from the k=f(t) data) and in silico descriptors of molecular shape, while at the higher temperature molecular polarisation is also introduced. This indicates that the mechanism of incorporation of nitrazepam into bile acid micelles changes at higher temperatures. QSAR models are also derived using the partial least squares (PLS) method. The experimental parameters from k=f(t) are shown to be significant predictive variables. Both QSAR models are validated using cross-validation and internal validation. The PLS models have slightly higher predictive capability than the MLR models. Copyright © 2010 Elsevier B.V. All rights reserved.
A Plasticity Model to Predict the Effects of Confinement on Concrete
NASA Astrophysics Data System (ADS)
Wolf, Julie
A plasticity model to predict the behavior of confined concrete is developed. The model is designed to implicitly account for the increase in strength and ductility due to confining a concrete member. The concrete model is implemented into a finite element (FE) model. By implicitly including the change in the strength and ductility in the material model, the confining material can be explicitly included in the FE model. Any confining material can be considered, and the effects on the concrete of failure in the confinement material can be modeled. Test data from a wide variety of different concretes utilizing different confinement methods are used to estimate the model parameters. This allows the FE model to capture the generalized behavior of concrete under multiaxial loading. The FE model is used to predict the results of tests on reinforced concrete members confined by steel hoops and fiber reinforced polymer (FRP) jackets. Loading includes pure axial load and axial load-moment combinations. Variability in the test data makes the model predictions difficult to compare but, overall, the FE model is able to capture the effects of confinement on concrete. Finally, the FE model is used to compare the performance of steel hoop to FRP confined sections, and of square to circular cross sections. As expected, circular sections are better able to engage the confining material, leading to higher strengths. However, higher strains are seen in the confining material for the circular sections. This leads to failure at lower axial strain levels in the case of the FRP confined sections. Significant differences are seen in the behavior of FRP confined members and steel hoop confined members. Failure in the FRP members is always determined by rupture in the composite jacket. As a result, the FRP members continue to take load up to failure. In contrast, the steel hoop confined sections exhibit extensive strain softening before failure. 
This comparison illustrates the usefulness of the concrete model as a tool for designers. Overall, the concrete model provides a flexible and powerful method to predict the performance of confined concrete.
Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang
2017-01-01
Purpose: We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Materials and Methods: Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. Results: PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. Conclusions: KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed performance similar to that of ERSPCRC-HG.
This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings. PMID:28046017
Park, Jae Young; Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang; Byun, Seok-Soo
2017-01-01
We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed similar performance with ERSPCRC-HG in a Korean population. 
This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
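The two verification measures used in the abstract are standard 2x2 contingency-table scores and can be written directly. The counts below are invented to show why the two measures can rank the same forecast differently for a rare event like persistent contrails.

```python
# Contingency-table entries: a = hits, b = false alarms,
# c = misses, d = correct negatives.
def percent_correct(a, b, c, d):
    # Fraction of all forecasts (yes or no) that were correct
    return (a + d) / (a + b + c + d)

def hanssen_kuipers(a, b, c, d):
    pod = a / (a + c)          # probability of detection
    pofd = b / (b + d)         # probability of false detection
    return pod - pofd          # HKD: 1 is perfect, 0 is no skill

# Illustrative counts: a forecast that never predicts contrails scores a high
# PC on a rare event but zero HKD; a skilled forecast trades a little PC for
# a large HKD gain.
always_no = (0, 0, 50, 950)
skilled = (40, 60, 10, 890)
print(percent_correct(*always_no), hanssen_kuipers(*always_no))
print(percent_correct(*skilled), hanssen_kuipers(*skilled))
```

This asymmetry is why the abstract's HKD scores favour the climatological-frequency threshold while the PC scores favour the 0.5 threshold: the two measures penalise misses and false alarms differently.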
Cometabolic degradation kinetics of TCE and phenol by Pseudomonas putida.
Chen, Yan-Min; Lin, Tsair-Fuh; Huang, Chih; Lin, Jui-Che
2008-08-01
Modeling of cometabolic kinetics is important for better understanding of the degradation reaction and for in situ application of bioremediation. In this study, a model incorporating cell growth and decay, loss of transformation activity, competitive inhibition between the growth and non-growth substrates, and self-inhibition of the non-growth substrate was proposed to simulate the degradation kinetics of phenol and trichloroethylene (TCE) by Pseudomonas putida. All the intrinsic parameters employed in this study were measured independently, and were then used for predicting the batch experimental data. The model predictions conformed well to the observed data at different phenol and TCE concentrations. At low TCE concentrations (<2 mg l(-1)), the models with or without self-inhibition of the non-growth substrate both simulated the experimental data well. However, at higher TCE concentrations (>6 mg l(-1)), only the model considering self-inhibition could describe the experimental data, suggesting that self-inhibition of TCE was present in the system. The proposed model was also employed in predicting the experimental data from a repeated batch reactor, and good agreement was observed between model predictions and experimental data. The results also indicated that the biomass lost in the degradation of TCE below 2 mg l(-1) could be fully recovered in the absence of TCE and used for the next batch degradation of phenol and TCE. However, at higher TCE concentrations (>6 mg l(-1)), the recovery of biomass was not as good as at lower TCE concentrations.
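A generic sketch of the model structure named in the abstract: Monod growth on phenol (S), competitive inhibition between phenol and TCE (C), Haldane-type TCE self-inhibition, and first-order biomass decay. The rate forms and parameter values are illustrative assumptions, not the authors' fitted parameters, and the loss-of-transformation-activity term is omitted for brevity.

```python
# Illustrative parameters: growth on phenol (1/h, mg/l, mg cells/mg phenol, 1/h)
mu_max, Ks, Y, b = 0.4, 5.0, 0.6, 0.02
# TCE cometabolism: max specific rate, half-saturation, self-inhibition constant
kc, Kc, Ki = 0.1, 1.0, 8.0

def r_phenol(S, C):
    # Monod growth rate on phenol, competitively inhibited by TCE
    return mu_max * S / (Ks * (1.0 + C / Kc) + S)

def r_tce(C, S):
    # Specific TCE transformation rate: competitively inhibited by phenol,
    # self-inhibited (Haldane term C^2/Ki) at high TCE
    return kc * C / (Kc * (1.0 + S / Ks) + C + C**2 / Ki)

def simulate(S0, C0, X0=10.0, dt=0.01, T=48.0):
    # Explicit Euler integration of a batch reactor
    S, C, X = S0, C0, X0
    for _ in range(int(T / dt)):
        dS = -(r_phenol(S, C) / Y) * X
        dC = -r_tce(C, S) * X
        dX = (r_phenol(S, C) - b) * X
        S = max(S + dS * dt, 0.0)
        C = max(C + dC * dt, 0.0)
        X = max(X + dX * dt, 0.0)
    return S, C, X

# The Haldane term makes the specific TCE rate peak and then fall at high C,
# which is the behaviour the data demanded above 6 mg/l
print(r_tce(2.0, 0.0), r_tce(20.0, 0.0))
print(simulate(S0=50.0, C0=1.0))
```

With the self-inhibition term removed (drop `C**2 / Ki`), the specific rate rises monotonically with C, which is why only the full model could describe the high-TCE experiments.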
Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.
Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret
2005-01-01
Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
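The c-statistic used throughout the comparison above is the area under the ROC curve: the probability that a randomly chosen patient with the outcome received a higher predicted risk than a randomly chosen patient without it. A minimal sketch with invented risks and outcomes (not the VA data):

```python
# Hedged sketch: c-statistic as pairwise concordance between predicted risks
# and observed outcomes (1 = death/LTC hospitalization, 0 = neither).
def c_statistic(risks, outcomes):
    pos = [r for r, y in zip(risks, outcomes) if y]
    neg = [r for r, y in zip(risks, outcomes) if not y]
    concordant = ties = 0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1
            elif p == n:
                ties += 1
    # tied pairs count half, per the usual concordance definition
    return (concordant + 0.5 * ties) / (len(pos) * len(neg))
```

A value of 0.5 is chance-level discrimination; the DCG/age model's 0.890 for LTC hospitalization indicates strong separation of high- and low-risk patients.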
Schmitz, Kathryn H; Lytle, Leslie A; Phillips, Glenn A; Murray, David M; Birnbaum, Amanda S; Kubik, Martha Y
2002-02-01
Low levels of physical activity (PA) and highly sedentary leisure habits (SLH) in youth may establish behavioral patterns that will predispose youth to increased chronic disease risk in adulthood. The purpose of this paper was to examine associations of demographic and psychosocial factors with self-reported PA and SLH in young adolescents. A general linear mixed model predicted self-reported PA and SLH in the spring from demographic and psychosocial variables measured the previous fall in 3798 seventh grade students. PA and SLH differed by race, with Caucasian students reporting among the highest PA and lowest SLH. Perceptions of higher academic rank or expectations predicted higher PA and lower SLH. Depressive symptomatology predicted higher SLH scores but not PA. Higher self-reported value of health, appearance, and achievement predicted higher PA and lower SLH in girls. Girls who reported that their mothers had an authoritative parenting style also reported higher PA and lower SLH. Determinants of PA and SLH appear to differ from each other, particularly in boys. Development of effective programs to increase PA and/or decrease SLH in young adolescents should be based on a clear understanding of the determinants of these behaviors. Copyright 2002 American Health Foundation and Elsevier Science (USA).
NEXT Ion Thruster Thermal Model
NASA Technical Reports Server (NTRS)
VanNoord, Jonathan L.
2010-01-01
As the NEXT ion thruster progresses towards higher technology readiness, it is necessary to develop the tools that will support its implementation into flight programs. An ion thruster thermal model has been developed for the latest prototype model design to aid in predicting thruster temperatures for various missions. This model comprises two parts. The first part predicts the heating from the discharge plasma for various throttling points based on a discharge chamber plasma model. This model shows, as expected, that the internal heating is strongly correlated with the discharge power. Typically, the internal plasma heating increases with beam current and decreases slightly with beam voltage. The second part is a finite-difference thermal model used to predict the thruster temperatures. Both parts of the model will be described in this paper. This model has been correlated with a thermal development test on the NEXT Prototype Model 1 thruster, with most predicted component temperatures within 5 to 10 C of test temperatures. The model indicates that heating, and hence current collection, is not based purely on the footprint of the magnet rings, but follows a 0.1:1:2:1 ratio for the cathode-to-conical-to-cylindrical-to-front magnet rings. This thermal model has also been used to predict the temperatures during the worst case mission profile that is anticipated for the thruster. The model predicts ample thermal margin for all of its components except the external cable harness under the hottest anticipated mission scenario. The external cable harness will be re-rated or replaced to meet the predicted environment.
Prediction of frozen food properties during freezing using product composition.
Boonsupthip, W; Heldman, D R
2007-06-01
Frozen water fraction (FWF), as a function of temperature, is an important parameter for use in the design of food freezing processes. An FWF-prediction model, based on concentrations and molecular weights of specific product components, has been developed. Published food composition data were used to determine the identity and composition of key components. The model proposed in this investigation was verified using published experimental FWF data and initial freezing temperature data, and by comparison to outputs from previously published models. It was found that the specific food components with significant influence on freezing temperature depression of food products were low molecular weight water-soluble compounds with a molality of 50 micromol per 100 g food or higher. Based on an analysis of 200 high-moisture food products, nearly 45% of the experimental initial freezing temperature data were within an absolute difference (AD) of +/- 0.15 degrees C and standard error (SE) of +/- 0.65 degrees C when compared to values predicted by the proposed model. The predicted relationship between temperature and FWF for all analyzed food products provided close agreement with experimental data (+/- 0.06 SE). The proposed model provided similar prediction capability for high- and intermediate-moisture food products. In addition, the proposed model provided statistically better prediction of initial freezing temperature and FWF than previously published models.
Lee, Tsair-Fwu; Liou, Ming-Hsiang; Huang, Yu-Jie; Chao, Pei-Ju; Ting, Hui-Min; Lee, Hsiao-Yi
2014-01-01
To predict the incidence of moderate-to-severe patient-reported xerostomia among head and neck squamous cell carcinoma (HNSCC) and nasopharyngeal carcinoma (NPC) patients treated with intensity-modulated radiotherapy (IMRT). Multivariable normal tissue complication probability (NTCP) models were developed using quality of life questionnaire datasets from 152 patients with HNSCC and 84 patients with NPC. The primary endpoint was defined as moderate-to-severe xerostomia after IMRT. The number of predictive factors for a multivariable logistic regression model was determined using the least absolute shrinkage and selection operator (LASSO) with a bootstrapping technique. Four predictive models were achieved by LASSO with the smallest number of factors while preserving predictive value, with higher AUC performance. For all models, the mean doses to the contralateral and ipsilateral parotid glands were selected as the most significant dosimetric predictors. In addition, different clinical and socio-economic factors, namely age, financial status, T stage, and education, were selected for the different models. The prediction of the incidence of xerostomia for HNSCC and NPC patients can be improved by using multivariable logistic regression models with the LASSO technique. The predictive model developed in HNSCC cannot be generalized to an NPC cohort treated with IMRT without validation, and vice versa. PMID:25163814
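The variable-selection step described above, an L1-penalized (LASSO) logistic regression that shrinks weak predictors exactly to zero, can be sketched with a small proximal-gradient fit on synthetic data. The data, penalty strength, and step size are illustrative assumptions, not the paper's cohort or tuning:

```python
import numpy as np

# Hedged sketch: LASSO logistic regression by proximal gradient descent (ISTA).
def lasso_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (1.0 / (1.0 + np.exp(-(X @ w))) - y) / n
        w = w - lr * grad
        # soft-thresholding = proximal operator of the L1 penalty;
        # this is what zeroes out weak predictors
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
# only the first two predictors truly drive the outcome
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.uniform(size=400) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
w = lasso_logistic(X, y)
```

The fitted weight vector keeps the two informative predictors and drives the four noise predictors to (near) zero, which is the sparsity property the LASSO-with-bootstrapping procedure exploits.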
Yang, Qinghua; Chen, Yixin; Wendorf Muhamad, Jessica
2017-09-01
We proposed a conceptual model to predict health information-seeking behaviors (HISBs) from three different sources (family, the Internet, doctors). To test the model, a structural equation modeling (SEM) analysis was conducted using data from the 2012 Annenberg National Health Communication Survey (ANHCS) (N = 3,285). Findings suggest higher social support from family predicts higher trust in health information from family members (abbreviated as trust in this article). Trust is positively related to HISBs from all three sources, with the path linking trust to HISB from family being the strongest. The effect of social support on HISB from family is partially mediated by trust, while effect of social support on HISBs from the Internet/doctors is fully mediated by trust. Implications of the study are discussed.
Prediction of Very High Reynolds Number Compressible Skin Friction
NASA Technical Reports Server (NTRS)
Carlson, John R.
1998-01-01
Flat plate skin friction calculations over a range of Mach numbers from 0.4 to 3.5 at Reynolds numbers from 16 million to 492 million using a Navier Stokes method with advanced turbulence modeling are compared with incompressible skin friction coefficient correlations. The semi-empirical correlation theories of van Driest; Cope; Winkler and Cha; and Sommer and Short T' are used to transform the predicted skin friction coefficients of solutions using two algebraic Reynolds stress turbulence models in the Navier-Stokes method PAB3D. In general, the predicted skin friction coefficients scaled well with each reference temperature theory though, overall the theory by Sommer and Short appeared to best collapse the predicted coefficients. At the lower Reynolds number 3 to 30 million, both the Girimaji and Shih, Zhu and Lumley turbulence models predicted skin-friction coefficients within 2% of the semi-empirical correlation skin friction coefficients. At the higher Reynolds numbers of 100 to 500 million, the turbulence models by Shih, Zhu and Lumley and Girimaji predicted coefficients that were 6% less and 10% greater, respectively, than the semi-empirical coefficients.
Wassink, Annemarie M; van der Graaf, Yolanda; Janssen, Kristel J; Cook, Nancy R; Visseren, Frank L
2012-12-01
Although the overall average 10-year cardiovascular risk for patients with manifest atherosclerosis is considered to be more than 20%, actual risk for individual patients ranges from much lower to much higher. We investigated whether information on metabolic syndrome (MetS) or its individual components improves cardiovascular risk stratification in these patients. We conducted a prospective cohort study in 3679 patients with clinically manifest atherosclerosis from the Secondary Manifestations of ARTerial disease (SMART) study. The primary outcome was defined as any cardiovascular event (cardiovascular death, ischemic stroke or myocardial infarction). Three pre-specified prediction models were derived, all including information on established MetS components. The association between outcome and predictors was quantified using a Cox proportional hazard analysis. Model performance was assessed using global goodness-of-fit (χ(2)), discrimination (C-index) and ability to improve risk stratification. A total of 417 cardiovascular events occurred among 3679 patients with 15,102 person-years of follow-up (median follow-up 3.7 years, range 1.6-6.4 years). Compared to a model with age and gender only, all MetS-based models performed slightly better in terms of global model fit (χ(2)) but not the C-index. The Net Reclassification Index associated with the addition of MetS (yes/no), the dichotomous MetS components or the continuous MetS components on top of age and gender was 2.1% (p = 0.29), 2.3% (p = 0.31) and 7.5% (p = 0.01), respectively. Prediction models incorporating age, gender and MetS can discriminate between patients with clinically manifest atherosclerosis at the highest vascular risk and those at lower risk. The addition of MetS components to a model with age and gender correctly reclassifies only a small proportion of patients into higher- and lower-risk categories. The clinical utility of a prediction model with MetS is therefore limited.
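The Net Reclassification Index reported above measures whether a new model moves patients into more appropriate risk categories than an old one: events should be reclassified upward, non-events downward. A minimal two-category sketch with an invented cutoff and example data (not the SMART cohort):

```python
# Hedged sketch: two-category NRI. risk_old / risk_new are predicted risks
# from the baseline and extended models; events marks who had the outcome.
def net_reclassification_index(risk_old, risk_new, events, cutoff=0.2):
    up_e = down_e = up_ne = down_ne = 0
    n_e = sum(events)
    n_ne = len(events) - n_e
    for old, new, ev in zip(risk_old, risk_new, events):
        moved_up = old < cutoff <= new    # reclassified into the high-risk group
        moved_down = new < cutoff <= old  # reclassified into the low-risk group
        if ev:
            up_e += moved_up
            down_e += moved_down
        else:
            up_ne += moved_up
            down_ne += moved_down
    # events should move up, non-events should move down, for a positive NRI
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne
```

An NRI near zero, as for the dichotomous MetS additions above, means the new predictors barely change which risk category patients land in.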
NASA Astrophysics Data System (ADS)
Sheshukov, Aleksey Y.; Sekaluvu, Lawrence; Hutchinson, Stacy L.
2018-04-01
Topographic index (TI) models have been widely used to predict trajectories and initiation points of ephemeral gullies (EGs) in agricultural landscapes. Prediction of EGs strongly relies on the selected value of critical TI threshold, and the accuracy depends on topographic features, agricultural management, and datasets of observed EGs. This study statistically evaluated the predictions by TI models in two paired watersheds in Central Kansas that had different levels of structural disturbances due to implemented conservation practices. Four TI models with sole dependency on topographic factors of slope, contributing area, and planform curvature were used in this study. The observed EGs were obtained by field reconnaissance and through the process of hydrological reconditioning of digital elevation models (DEMs). The Kernel Density Estimation analysis was used to evaluate TI distribution within a 10-m buffer of the observed EG trajectories. The EG occurrence within catchments was analyzed using kappa statistics of the error matrix approach, while the lengths of predicted EGs were compared with the observed dataset using the Nash-Sutcliffe Efficiency (NSE) statistics. The TI frequency analysis produced a bi-modal distribution of topographic indexes, with the pixels within the EG trajectory having a higher peak. The graphs of kappa and NSE versus critical TI threshold showed a similar profile for all four TI models and both watersheds, with the maximum value representing the best comparison with the observed data. The Compound Topographic Index (CTI) model presented the overall best accuracy with NSE of 0.55 and kappa of 0.32. The statistics for the disturbed watershed showed higher best critical TI threshold values than for the undisturbed watershed. Structural conservation practices implemented in the disturbed watershed reduced ephemeral channels in headwater catchments, thus producing less variability in catchments with EGs. The variation in critical thresholds for all TI models suggested that TI models tend to predict EG occurrence and length over a range of thresholds rather than find a single best value.
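The threshold-sweep procedure described above, scoring predicted EG occurrence against observations with Cohen's kappa at each candidate critical TI value and keeping the maximum, can be sketched as follows. The TI values and observed labels are invented stand-ins for the watershed data:

```python
import numpy as np

# Hedged sketch: Cohen's kappa from the error-matrix (contingency) approach.
def cohens_kappa(pred, obs):
    pred = np.asarray(pred, bool)
    obs = np.asarray(obs, bool)
    po = np.mean(pred == obs)                       # observed agreement
    pe = pred.mean() * obs.mean() + (1 - pred.mean()) * (1 - obs.mean())
    return (po - pe) / (1 - pe) if pe < 1 else 0.0  # chance-corrected agreement

ti = np.array([2.0, 3.5, 5.0, 6.5, 8.0, 9.5])       # TI per catchment (invented)
observed_eg = np.array([0, 0, 0, 1, 1, 1], bool)    # observed EG occurrence

# sweep the critical threshold and keep the best-scoring one
best = max(
    ((t, cohens_kappa(ti >= t, observed_eg)) for t in np.arange(2.0, 10.0, 0.5)),
    key=lambda pair: pair[1],
)
```

Plotting kappa against the swept threshold reproduces the kappa-versus-threshold curves the study used to locate the best critical TI value; the same loop with NSE on predicted lengths covers the second statistic.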
Prediction of microstructure, residual stress, and deformation in laser powder bed fusion process
NASA Astrophysics Data System (ADS)
Yang, Y. P.; Jamshidinia, M.; Boulware, P.; Kelly, S. M.
2018-05-01
Laser powder bed fusion (L-PBF) process has been investigated significantly to build production parts with a complex shape. Modeling tools, which can be used in a part level, are essential to allow engineers to fine tune the shape design and process parameters for additive manufacturing. This study focuses on developing modeling methods to predict microstructure, hardness, residual stress, and deformation in large L-PBF built parts. A transient sequentially coupled thermal and metallurgical analysis method was developed to predict microstructure and hardness on L-PBF built high-strength, low-alloy steel parts. A moving heat-source model was used in this analysis to accurately predict the temperature history. A kinetics based model which was developed to predict microstructure in the heat-affected zone of a welded joint was extended to predict the microstructure and hardness in an L-PBF build by inputting the predicted temperature history. The tempering effect resulting from the following built layers on the current-layer microstructural phases were modeled, which is the key to predict the final hardness correctly. It was also found that the top layers of a build part have higher hardness because of the lack of the tempering effect. A sequentially coupled thermal and mechanical analysis method was developed to predict residual stress and deformation for an L-PBF build part. It was found that a line-heating model is not suitable for analyzing a large L-PBF built part. The layer heating method is a potential method for analyzing a large L-PBF built part. The experiment was conducted to validate the model predictions.
Improving effectiveness of systematic conservation planning with density data.
Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant
2015-08-01
Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.
Seitzinger, S.P.; Styles, R.V.; Boyer, E.W.; Alexander, R.B.; Billen, G.; Howarth, R.W.; Mayer, B.; Van Breemen, N.
2002-01-01
A regression model (RivR-N) was developed that predicts the proportion of N removed from streams and reservoirs as an inverse function of the water displacement time of the water body (ratio of water body depth to water time of travel). When applied to 16 drainage networks in the eastern U.S., the RivR-N model predicted that 37% to 76% of N input to these rivers is removed during transport through the river networks. Approximately half of that is removed in 1st through 4th order streams, which account for 90% of the total stream length. The other half is removed in 5th order and higher rivers, which account for only about 10% of the total stream length. Most N removed in these higher orders is predicted to originate from watershed loading to small and intermediate sized streams. The proportion of N removed from all streams in the watersheds (37-76%) is considerably higher than the proportion of N input to an individual reach that is removed in that reach (generally <20%) because of the cumulative effect of continued nitrogen removal along the entire flow path in downstream reaches. This generally has not been recognized in previous studies, but is critical to an evaluation of the total amount of N removed within a river network. At the river network scale, reservoirs were predicted to have a minimal effect on N removal. A fairly modest decrease (<10 percentage points) in the N removed at the river network scale was predicted when a third of the direct watershed loading was to the two highest orders, compared to a uniform loading.
Hsu, Jeremy Ming; Hitos, Kerry; Fletcher, John P
2013-09-01
Military and civilian data would suggest that hemostatic resuscitation results in improved outcomes for exsanguinating patients. However, identification of those patients who are at risk of significant hemorrhage is not clearly defined. We attempted to identify factors that would predict the need for massive transfusion (MT) in an Australasian trauma population, by comparing those trauma patients who did receive massive transfusion with those who did not. Between 1985 and 2010, 1,686 trauma patients receiving at least 1 U of packed red blood cells were identified from our prospectively maintained trauma registry. Demographic, physiologic, laboratory, injury, and outcome variables were reviewed. Univariate analysis determined significant factors between those who received MT and those who did not. A predictive multivariate logistic regression model with backward conditional stepwise elimination was used for MT risk. Statistical analysis was performed using SPSS PASW. MT patients had a higher pulse rate, lower Glasgow Coma Scale (GCS) score, lower systolic blood pressure, lower hemoglobin level, higher Injury Severity Score (ISS), higher international normalized ratio (INR), and longer stay. Initial logistic regression identified base deficit (BD), INR, and hemoperitoneum at laparotomy as independent predictive variables. After assigning cutoff points of BD being greater than 5 and an INR of 1.5 or greater, a further model was created. A BD greater than 5 and either INR of 1.5 or greater or hemoperitoneum was associated with 51 times increase in MT risk (odds ratio, 51.6; 95% confidence interval, 24.9-95.8). The area under the receiver operating characteristic curve for the model was 0.859. From this study, a combination of BD, INR, and hemoperitoneum has demonstrated good predictability for MT. This tool may assist in the determination of those patients who might benefit from hemostatic resuscitation. Prognostic study, level III.
LDEF satellite radiation study
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
1994-01-01
Some early results are summarized from a program under way to utilize LDEF satellite data for evaluating and improving current models of the space radiation environment in low earth orbit. Reported here are predictions and comparisons with some of the LDEF dose and induced radioactivity data, which are used to check the accuracy of current models describing the magnitude and directionality of the trapped proton environment. Preliminary findings are that the environment models underestimate both dose and activation from trapped protons by a factor of about two, and the observed anisotropy is higher than predicted.
Romero-Moreno, R; Losada, A; Márquez-González, M; Mausbach, B T
2016-11-01
Despite the robust associations between stressors and anxiety in dementia caregiving, there is a lack of research examining which factors contribute to explaining this relationship. This study was designed to test a multiple mediation model of behavioral and psychological symptoms of dementia (BPSD) and anxiety that proposes higher levels of rumination and experiential avoidance and lower levels of leisure satisfaction as potential mediating variables. The sample consisted of 256 family caregivers. In order to test a simultaneously parallel multiple mediation model of the BPSD to anxiety pathway, a PROCESS approach was employed, with the bias-corrected and accelerated bootstrapping method used to test confidence intervals. Higher levels of stressors significantly predicted anxiety. Greater stressors significantly predicted higher levels of rumination and experiential avoidance, and lower levels of leisure satisfaction. These three coping variables significantly predicted anxiety. Finally, rumination, experiential avoidance, and leisure satisfaction significantly mediated the link between stressors and anxiety. The explained variance for the final model was 47.09%. Significant contrasts were found between rumination and leisure satisfaction, with rumination being a significantly stronger mediator. The results suggest that caregivers' experiential avoidance, rumination, and leisure satisfaction may function as mechanisms through which BPSD influence caregivers' anxiety. Training caregivers in reducing their levels of experiential avoidance and rumination through techniques that foster acceptance of their negative internal experiences, and increasing their level of leisure satisfaction, may help reduce the anxiety symptoms arising from stressors.
Pourhoseingholi, Mohamad Amin; Kheirian, Sedigheh; Zali, Mohammad Reza
2017-12-01
Colorectal cancer (CRC) is one of the most common malignancies and causes of cancer mortality worldwide. Given the importance of predicting the survival of CRC patients and the growing use of data mining methods, this study aims to compare the performance of models for predicting 5-year survival of CRC patients using a variety of basic and ensemble data mining methods. The CRC dataset from The Shahid Beheshti University of Medical Sciences Research Center for Gastroenterology and Liver Diseases was used for prediction and comparative study of the base and ensemble data mining techniques. Feature selection methods were used to select predictor attributes for classification. The WEKA toolkit and MedCalc software were respectively utilized for creating and comparing the models. The obtained results showed that the predictive performance of the developed models was altogether high (all greater than 90%). Overall, the performance of ensemble models was higher than that of basic classifiers, and the best result was achieved by the ensemble voting model in terms of area under the ROC curve (AUC = 0.96). Comparison of the models' AUCs showed that the ensemble voting method significantly outperformed all models except Random Forest (RF) and Bayesian Network (BN), given their overlapping 95% confidence intervals. This result may indicate high predictive power of these two methods, along with ensemble voting, for predicting 5-year survival of CRC patients.
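The ensemble voting idea above, in its soft-voting form, averages the class probabilities predicted by several base classifiers and thresholds the mean. A minimal sketch in which the base "models" are stand-in probability arrays rather than the study's actual WEKA classifiers:

```python
import numpy as np

# Hedged sketch: soft voting over base-model probabilities.
def soft_vote(prob_lists):
    # mean predicted probability across base models, one column per patient
    return np.mean(prob_lists, axis=0)

# each row: one base model's predicted probability of 5-year survival (invented)
base_probs = np.array([
    [0.9, 0.8, 0.4, 0.2],
    [0.7, 0.6, 0.3, 0.4],
    [0.8, 0.9, 0.2, 0.1],
])
ensemble = soft_vote(base_probs)      # averaged probabilities per patient
pred = (ensemble >= 0.5).astype(int)  # final survival prediction
```

Averaging tends to cancel the uncorrelated errors of the base models, which is why the ensemble's AUC can exceed that of most individual classifiers.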
Respectful Modeling: Addressing Uncertainty in Dynamic System Models for Molecular Biology.
Tsigkinopoulou, Areti; Baker, Syed Murtuza; Breitling, Rainer
2017-06-01
Although there is still some skepticism in the biological community regarding the value and significance of quantitative computational modeling, important steps are continually being taken to enhance its accessibility and predictive power. We view these developments as essential components of an emerging 'respectful modeling' framework which has two key aims: (i) respecting the models themselves and facilitating the reproduction and update of modeling results by other scientists, and (ii) respecting the predictions of the models and rigorously quantifying the confidence associated with the modeling results. This respectful attitude will guide the design of higher-quality models and facilitate the use of models in modern applications such as engineering and manipulating microbial metabolism by synthetic biology. Copyright © 2016 Elsevier Ltd. All rights reserved.
[Spectral reflectance characteristics and modeling of typical Takyr Solonetzs water content].
Zhang, Jun-hua; Jia, Ke-li
2015-03-01
Based on the analysis of the spectral reflectance of typical Takyr Solonetzs soil in Ningxia, the relationship between soil water content and spectral reflectance was determined, and a quantitative model for the prediction of soil water content was constructed. The results showed that soil spectral reflectance decreased with increasing soil water content when it was below the water holding capacity, but increased with increasing soil water content when it was higher than the water holding capacity. Soil water content presented a significantly negative correlation with the original reflectance (r), smoothed reflectance (R) and logarithm of reflectance (lgR), and a positive correlation with the reciprocal of R (1/R) and the logarithm of the reciprocal [lg(1/R)]. The correlation coefficient of soil water content with R over the whole wavelength range was 0.0013 and 0.0397 higher than with r and lgR, respectively. The average correlation coefficient of soil water content with 1/R and lg(1/R) at wavelengths of 950-1000 nm was 0.2350 higher than that at 400-950 nm. The relationships of soil water content with the first derivative (R'), the first derivative of the logarithm [(lgR)'] and the first derivative of the logarithm of the reciprocal {[lg(1/R)]'} were unstable. Based on the coefficients of r, lg(1/R), R' and (lgR)', different regression models were established to predict soil water content, with coefficients of determination of 0.7610, 0.8184, 0.8524 and 0.8255, respectively. The determination coefficient for the power function model of R' reached 0.9447, while the fitting degree between the values predicted by this model and the on-site measured values was 0.8279. The model of R' had the highest fitting accuracy, while that of r had the lowest. The results could provide a scientific basis for soil water content prediction and field irrigation in the Takyr Solonetzs region.
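The power-function model that gave the best determination coefficient above, y = a * x**b, is linear in log-log space and can be fitted by ordinary least squares. A minimal sketch in which the predictor values (standing in for R') and the water-content response are invented:

```python
import numpy as np

# Hedged sketch: fit y = a * x**b by linear regression on log(x), log(y).
x = np.array([0.02, 0.05, 0.08, 0.12, 0.20])  # stand-in first-derivative values
y = 3.0 * x ** 0.7                            # synthetic soil water content

b, log_a = np.polyfit(np.log(x), np.log(y), 1)  # slope = exponent b
a = np.exp(log_a)                               # intercept recovers multiplier a
```

On noiseless synthetic data the fit recovers the generating parameters exactly; with real spectra the residuals in log space determine the reported coefficient of determination.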
NASA Technical Reports Server (NTRS)
Dittmar, J. H.
1985-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from its peak level as the helical tip Mach number increases further. This noise reduction points to the use of higher propeller speeds as a possible method to reduce airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement, as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.
Kempe, P T; van Oppen, P; de Haan, E; Twisk, J W R; Sluis, A; Smit, J H; van Dyck, R; van Balkom, A J L M
2007-09-01
Two methods for predicting remissions in obsessive-compulsive disorder (OCD) treatment are evaluated. Y-BOCS measurements of 88 patients with a primary OCD (DSM-III-R) diagnosis were performed over a 16-week treatment period, and during three follow-ups. Remission at any measurement was defined as a Y-BOCS score lower than thirteen combined with a reduction of seven points when compared with baseline. Logistic regression models were compared with a Cox regression for recurrent events model. Logistic regression yielded different models at different evaluation times. The recurrent events model remained stable when fewer measurements were used. Higher baseline levels of neuroticism and more severe OCD symptoms were associated with a lower chance of remission, early age of onset and more depressive symptoms with a higher chance. Choice of outcome time affects logistic regression prediction models. Recurrent events analysis uses all information on remissions and relapses. Short- and long-term predictors for OCD remission show overlap.
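The remission criterion described above (a Y-BOCS score below 13 combined with a seven-point drop from baseline, read here as at least seven points) is straightforward to encode:

```python
def in_remission(ybocs, baseline_ybocs):
    """Remission: current Y-BOCS below 13 AND a reduction of at least
    7 points compared with the baseline score."""
    return ybocs < 13 and (baseline_ybocs - ybocs) >= 7
```

Both conditions must hold: a low absolute score alone, or a large drop alone, does not qualify.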
Climate warming causes life-history evolution in a model for Atlantic cod (Gadus morhua).
Holt, Rebecca E; Jørgensen, Christian
2014-01-01
Climate change influences the marine environment, with ocean warming being the foremost driving factor governing changes in the physiology and ecology of fish. At the individual level, increasing temperature influences bioenergetics and numerous physiological and life-history processes, which have consequences for the population level and beyond. We provide a state-dependent energy allocation model that predicts temperature-induced adaptations for life histories and behaviour for the North-East Arctic stock (NEA) of Atlantic cod (Gadus morhua) in response to climate warming. The key constraint is temperature-dependent respiratory physiology, and the model includes a number of trade-offs that reflect key physiological and ecological processes. Dynamic programming is used to find an evolutionarily optimal strategy of foraging and energy allocation that maximizes expected lifetime reproductive output given constraints from physiology and ecology. The optimal strategy is then simulated in a population, where survival, foraging behaviour, growth, maturation and reproduction emerge. Using current forcing, the model reproduces patterns of growth, size-at-age, maturation, gonad production and natural mortality for NEA cod. The predicted climate responses are positive for this stock; under a 2°C warming, the model predicted increased growth rates and a larger asymptotic size. Maturation age was unaffected, but gonad weight was predicted to more than double. Predictions for a wider range of temperatures, from 2 to 7°C, show that temperature responses were gradual; fish were predicted to grow faster and increase reproductive investment at higher temperatures. An emergent pattern of higher risk acceptance and increased foraging behaviour was also predicted. Our results provide important insight into the effects of climate warming on NEA cod by revealing the underlying mechanisms and drivers of change. 
We show how temperature-induced adaptations of behaviour and several life-history traits are not only mediated by physiology but also by trade-offs with survival, which has consequences for conservation physiology.
A computational model for simulating solute transport and oxygen consumption along the nephrons
Vallon, Volker; Edwards, Aurélie
2016-01-01
The goal of this study was to investigate water and solute transport, with a focus on sodium transport (TNa) and metabolism along individual nephron segments under differing physiological and pathophysiological conditions. To accomplish this goal, we developed a computational model of solute transport and oxygen consumption (QO2) along different nephron populations of a rat kidney. The model represents detailed epithelial and paracellular transport processes along both the superficial and juxtamedullary nephrons, with the loop of Henle of each model nephron extending to differing depths of the inner medulla. We used the model to assess how changes in TNa may alter QO2 in different nephron segments and how shifting the TNa sites alters overall kidney QO2. Under baseline conditions, the model predicted a whole-kidney TNa/QO2, which denotes the number of moles of Na+ reabsorbed per mole of O2 consumed, of ∼15, with TNa efficiency predicted to be significantly greater in cortical nephron segments than in medullary segments. The TNa/QO2 ratio was generally similar among the superficial and juxtamedullary nephron segments, except for the proximal tubule, where TNa/QO2 was ∼20% higher in superficial nephrons, due to the larger luminal flow along the juxtamedullary proximal tubules and the resulting higher, flow-induced transcellular transport. Moreover, the model predicted that an increase in single-nephron glomerular filtration rate does not significantly affect TNa/QO2 in the proximal tubules but generally increases TNa/QO2 along downstream segments. The latter result can be attributed to the generally higher luminal [Na+], which raises paracellular TNa. Consequently, vulnerable medullary segments, such as the S3 segment and medullary thick ascending limb, may be relatively protected from flow-induced increases in QO2 under pathophysiological conditions. PMID:27707705
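The TNa/QO2 efficiency metric above is a simple ratio; a whole-kidney value can be sketched by pooling per-segment transport and consumption (the segment values below are arbitrary illustrative units, not model outputs):

```python
def kidney_tna_qo2(segments):
    """Whole-kidney TNa/QO2: total moles of Na+ reabsorbed divided by
    total moles of O2 consumed, pooled over (TNa, QO2) segment pairs."""
    total_tna = sum(tna for tna, qo2 in segments)
    total_qo2 = sum(qo2 for tna, qo2 in segments)
    return total_tna / total_qo2
```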
NASA Technical Reports Server (NTRS)
Glasser, M. E.
1981-01-01
The Multilevel Diffusion Model (MDM) Version 5 was modified to include features of more recent versions. The MDM was used to predict in-cloud HCl concentrations for the April 12 launch of the Space Shuttle (STS-1). The maximum centerline predictions were compared with measurements of maximum gaseous HCl obtained from aircraft passes through two segments of the fragmented shuttle ground cloud. The model over-predicted the maximum values for gaseous HCl in the lower cloud segment and portrayed the same rate of decay with time as the observed values. However, the decay with time of the HCl maximum predicted by the MDM was more rapid than the observed decay for the higher cloud segment, causing the model to under-predict concentrations measured late in the life of the cloud. Possible causes of the MDM's tendency to conservatively over-estimate HCl concentrations in one case while under-predicting them in the other are discussed.
Denton, Brian T.; Hayward, Rodney A.
2017-01-01
Background: Intensive blood pressure (BP) treatment can avert cardiovascular disease (CVD) events but can cause some serious adverse events. We sought to develop and validate risk models for predicting absolute risk difference (increased risk or decreased risk) for CVD events and serious adverse events from intensive BP therapy. A secondary aim was to test if the statistical method of elastic net regularization would improve the estimation of risk models for predicting absolute risk difference, as compared to a traditional backwards variable selection approach. Methods and findings: Cox models were derived from SPRINT trial data and validated on ACCORD-BP trial data to estimate risk of CVD events and serious adverse events; the models included terms for intensive BP treatment and heterogeneous response to intensive treatment. The Cox models were then used to estimate the absolute reduction in probability of CVD events (benefit) and absolute increase in probability of serious adverse events (harm) for each individual from intensive treatment. We compared the method of elastic net regularization, which uses repeated internal cross-validation to select variables and estimate coefficients in the presence of collinearity, to a traditional backwards variable selection approach. Data from 9,069 SPRINT participants with complete data on covariates were utilized for model development, and data from 4,498 ACCORD-BP participants with complete data were utilized for model validation. Participants were exposed to intensive (goal systolic pressure < 120 mm Hg) versus standard (<140 mm Hg) treatment. Two composite primary outcome measures were evaluated: (i) CVD events/deaths (myocardial infarction, acute coronary syndrome, stroke, congestive heart failure, or CVD death), and (ii) serious adverse events (hypotension, syncope, electrolyte abnormalities, bradycardia, or acute kidney injury/failure). 
The model for CVD chosen through elastic net regularization included interaction terms suggesting that older age, black race, higher diastolic BP, and higher lipids were associated with greater CVD risk reduction benefits from intensive treatment, while current smoking was associated with fewer benefits. The model for serious adverse events chosen through elastic net regularization suggested that male sex, current smoking, statin use, elevated creatinine, and higher lipids were associated with greater risk of serious adverse events from intensive treatment. SPRINT participants in the highest predicted benefit subgroup had a number needed to treat (NNT) of 24 to prevent 1 CVD event/death over 5 years (absolute risk reduction [ARR] = 0.042, 95% CI: 0.018, 0.066; P = 0.001), those in the middle predicted benefit subgroup had a NNT of 76 (ARR = 0.013, 95% CI: −0.0001, 0.026; P = 0.053), and those in the lowest subgroup had no significant risk reduction (ARR = 0.006, 95% CI: −0.007, 0.018; P = 0.71). Those in the highest predicted harm subgroup had a number needed to harm (NNH) of 27 to induce 1 serious adverse event (absolute risk increase [ARI] = 0.038, 95% CI: 0.014, 0.061; P = 0.002), those in the middle predicted harm subgroup had a NNH of 41 (ARI = 0.025, 95% CI: 0.012, 0.038; P < 0.001), and those in the lowest subgroup had no significant risk increase (ARI = −0.007, 95% CI: −0.043, 0.030; P = 0.72). In ACCORD-BP, participants in the highest subgroup of predicted benefit had significant absolute CVD risk reduction, but the overall ACCORD-BP participant sample was skewed towards participants with less predicted benefit and more predicted risk than in SPRINT. The models chosen through traditional backwards selection had similar ability to identify absolute risk difference for CVD as the elastic net models, but poorer ability to correctly identify absolute risk difference for serious adverse events. 
A key limitation of the analysis is the limited sample size of the ACCORD-BP trial, which widened confidence intervals for ARI among persons with type 2 diabetes. Additionally, it is not possible to mechanistically explain the physiological relationships underlying the heterogeneous treatment effects captured by the models, since the study was an observational secondary data analysis. Conclusions: We found that predictive models could help identify subgroups of participants in both SPRINT and ACCORD-BP who had lower versus higher ARRs in CVD events/deaths with intensive BP treatment, and participants who had lower versus higher ARIs in serious adverse events. PMID:29040268
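The NNT and NNH figures above are reciprocals of the absolute risk reduction or increase; a minimal sketch follows (note the published NNTs derive from unrounded risk differences, so recomputing from the rounded ARRs/ARIs quoted above can differ by one):

```python
import math

def number_needed(risk_difference):
    """NNT (for an absolute risk reduction) or NNH (for an absolute risk
    increase): the reciprocal of the absolute risk difference, rounded up."""
    return math.ceil(1.0 / abs(risk_difference))
```

For example, an ARR of 0.042 over 5 years gives an NNT of 24, matching the highest-benefit subgroup above.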
Pretreatment data is highly predictive of liver chemistry signals in clinical trials.
Cai, Zhaohui; Bresell, Anders; Steinberg, Mark H; Silberg, Debra G; Furlong, Stephen T
2012-01-01
The goal of this retrospective analysis was to assess how well predictive models could determine which patients would develop liver chemistry signals during clinical trials based on their pretreatment (baseline) information. Based on data from 24 late-stage clinical trials, classification models were developed to predict liver chemistry outcomes using baseline information, which included demographics, medical history, concomitant medications, and baseline laboratory results. Predictive models using baseline data predicted which patients would develop liver signals during the trials with average validation accuracy around 80%. Baseline levels of individual liver chemistry tests were most important for predicting their own elevations during the trials. High bilirubin levels at baseline were not uncommon and were associated with a high risk of developing biochemical Hy's law cases. Baseline γ-glutamyltransferase (GGT) level appeared to have some predictive value, but did not increase predictability beyond using established liver chemistry tests. It is possible to predict which patients are at a higher risk of developing liver chemistry signals using pretreatment (baseline) data. Derived knowledge from such predictions may allow proactive and targeted risk management, and the type of analysis described here could help determine whether new biomarkers offer improved performance over established ones.
Kattou, Panayiotis; Lian, Guoping; Glavin, Stephen; Sorrell, Ian; Chen, Tao
2017-10-01
The development of a new two-dimensional (2D) model to predict follicular permeation, with integration into a recently reported multi-scale model of transdermal permeation is presented. The follicular pathway is modelled by diffusion in sebum. The mass transfer and partition properties of solutes in lipid, corneocytes, viable dermis, dermis and systemic circulation are calculated as reported previously [Pharm Res 33 (2016) 1602]. The mass transfer and partition properties in sebum are collected from existing literature. None of the model input parameters was fit to the clinical data with which the model prediction is compared. The integrated model has been applied to predict the published clinical data of transdermal permeation of caffeine. The relative importance of the follicular pathway is analysed. Good agreement of the model prediction with the clinical data has been obtained. The simulation confirms that for caffeine the follicular route is important; the maximum bioavailable concentration of caffeine in systemic circulation with open hair follicles is predicted to be 20% higher than that when hair follicles are blocked. The follicular pathway contributes to not only short time fast penetration, but also the overall systemic bioavailability. With such in silico model, useful information can be obtained for caffeine disposition and localised delivery in lipid, corneocytes, viable dermis, dermis and the hair follicle. Such detailed information is difficult to obtain experimentally.
Personality and mental health treatment: Traits as predictors of presentation, usage, and outcome.
Thalmayer, Amber Gayle
2018-03-08
Self-report scores on personality inventories predict important life outcomes, including health and longevity, marital outcomes, career success, and mental health problems, but the ways they predict mental health treatment have not been widely explored. Psychotherapy is sought for diverse problems, but about half of those who begin therapy drop out, and only about half who complete therapy experience lasting improvements. Several authors have argued that understanding how personality traits relate to treatment could lead to better targeted, more successful services. Here self-report scores on Big Five and Big Six personality dimensions are explored as predictors of therapy presentation, usage, and outcomes in a sample of community clinic clients (N = 306). Participants received evidence-based treatments in the context of individual-, couples-, or family-therapy sessions. One measure of initial functioning and three indicators of outcome were used. All personality trait scores except Openness were associated with initial psychological functioning. Higher Conscientiousness scores predicted more sessions attended for family therapy but fewer for couples-therapy clients. Higher Honesty-Propriety and Extraversion scores predicted fewer sessions attended for family-therapy clients. Better termination outcome was predicted by higher Conscientiousness scores for family- and higher Extraversion scores for individual-therapy clients. Higher Honesty-Propriety and Neuroticism scores predicted more improvement in psychological functioning in terms of successive Outcome Questionnaire-45 administrations. Taken together, the results provide some support for the role of personality traits in predicting treatment usage and outcome and for the utility of a 6-factor model in this context. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Bosch, Xavier; Théroux, Pierre
2005-08-01
Improvement in risk stratification of patients with non-ST-segment elevation acute coronary syndrome (ACS) is a gateway to more judicious treatment. This study examines whether routine determination of left ventricular ejection fraction (EF) adds significant prognostic information to currently recommended stratifiers. Several predictors of in-hospital mortality were prospectively characterized in a registry study of 1104 consecutive patients, for whom an EF was determined, who were admitted for an ACS. Multiple regression models were constructed using currently recommended clinical, electrocardiographic, and blood marker stratifiers, and values of EF were incorporated into the models. Age, ST-segment shifts, elevation of cardiac markers, and the Thrombolysis in Myocardial Infarction (TIMI) risk score all predicted mortality (P < .0001). Adding EF into the model improved the prediction of mortality (C statistic 0.73 vs 0.67). The odds of death increased by a factor of 1.042 for each 1% decrement in EF. By receiver operating characteristic curves, an EF cutoff of 48% provided the best predictive value. Mortality rates were 3.3 times higher within each TIMI risk score stratum in patients with an EF of 48% or lower compared with those with a higher EF. The TIMI risk score predicts in-hospital mortality in a broad population of patients with ACS. The further consideration of EF adds significant prognostic information.
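The reported odds factor of 1.042 per 1% EF decrement compounds multiplicatively over larger decrements; a small sketch (function name and default are illustrative):

```python
def mortality_odds_multiplier(ef_decrement, per_point_factor=1.042):
    """Multiplicative change in the odds of death for a given decrement in
    ejection fraction (percentage points), compounding 1.042 per point."""
    return per_point_factor ** ef_decrement
```

A 10-point drop in EF thus corresponds to roughly a 1.5-fold increase in the odds of death.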
Modelling the growth of Leuconostoc mesenteroides by Artificial Neural Networks.
García-Gimeno, R M; Hervás-Martínez, C; Rodríguez-Pérez, R; Zurera-Cosano, G
2005-12-15
The combined effect of temperature (10.5 to 24.5 degrees C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the predicted specific growth rate (Gr), lag-time (Lag) and maximum population density (yEnd) of Leuconostoc mesenteroides under aerobic and anaerobic conditions was studied using an Artificial Neural Network-based model (ANN) in comparison with Response Surface Methodology (RS). For both aerobic and anaerobic conditions, two types of ANN model were elaborated: unidimensional, for each of the growth parameters, and multidimensional, in which the three parameters Gr, Lag and yEnd are combined. Although in general no significant statistical differences were observed between the two types of model, we opted for the unidimensional model because it obtained the lowest mean value for the standard error of prediction (SEP) for generalisation. The ANN models developed provided reliable estimates for the three kinetic parameters studied; the SEP values under aerobic conditions were 2.82% for Gr, 6.05% for Lag and 10% for yEnd, a higher degree of accuracy than that of the RS model (Gr: 9.54%; Lag: 8.89%; yEnd: 10.27%). Similar results were observed for anaerobic conditions. During external validation, a higher degree of accuracy (Af) and bias (Bf) was observed for the ANN model compared with the RS model. ANN predictive growth models are a valuable tool, enabling swift determination of L. mesenteroides growth parameters.
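The SEP statistic used to compare the ANN and RS models can be computed as below, assuming the common definition of SEP as the root-mean-square prediction error expressed as a percentage of the observed mean (the paper's exact formula may differ):

```python
import math

def sep_percent(observed, predicted):
    """Standard error of prediction as a percentage of the observed mean
    (RMSE / mean * 100; one common definition of SEP)."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)
```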
Prostate Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Bladder Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Ovarian Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Pancreatic Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Testicular Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Breast Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Esophageal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Cervical Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Liver Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Lung Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Colorectal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Iturriaga, H; Hirsch, S; Bunout, D; Díaz, M; Kelly, M; Silva, G; de la Maza, M P; Petermann, M; Ugarte, G
1993-04-01
Looking for a noninvasive method to predict liver histologic alterations in alcoholic patients without clinical signs of liver failure, we studied 187 recently abstinent chronic alcoholics, divided into two series. In the model series (n = 94), several clinical variables and results of common laboratory tests were compared with the findings of liver biopsies. These were classified into three groups: 1. normal liver; 2. moderate alterations; 3. marked alterations, including alcoholic hepatitis and cirrhosis. The multivariate methods used were logistic regression analysis and a classification and regression tree (CART). Both methods entered gamma-glutamyltransferase (GGT), aspartate-aminotransferase (AST), weight and age as significant and independent variables. Univariate analyses with GGT and AST at different cutoffs were also performed. To predict the presence of any kind of damage (groups 2 and 3), CART and AST > 30 IU showed the highest sensitivity, specificity and correct prediction, both in the model and validation series. For prediction of marked liver damage, a score based on logistic regression and GGT > 110 IU had the highest efficiency. It is concluded that GGT and AST are good markers of alcoholic liver damage and that, using simple cutoffs, the histologic diagnosis can be correctly predicted in 80% of recently abstinent asymptomatic alcoholics.
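The univariate cutoff rules reported above (AST > 30 IU for any damage; GGT > 110 IU for marked damage) amount to a simple rule-based classifier; the encoding below is illustrative:

```python
def liver_damage_flags(ast_iu, ggt_iu):
    """Cutoff rules from the abstract: AST > 30 IU flags any histologic
    damage, GGT > 110 IU flags marked damage."""
    any_damage = ast_iu > 30
    marked_damage = ggt_iu > 110
    return any_damage, marked_damage
```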
A neighborhood statistics model for predicting stream pathogen indicator levels.
Pandey, Pramod K; Pasternack, Gregory B; Majumder, Mahbubul; Soupir, Michelle L; Kaiser, Mark S
2015-03-01
Because elevated levels of water-borne Escherichia coli in streams are a leading cause of water quality impairments in the U.S., water-quality managers need tools for predicting aqueous E. coli levels. Presently, E. coli levels may be predicted using complex mechanistic models that have a high degree of unchecked uncertainty, or simpler statistical models. To assess spatio-temporal patterns of instream E. coli levels, we measured E. coli, a pathogen indicator, at 16 sites (at four different times) within the Squaw Creek watershed, Iowa, and then exploited the Markov Random Field framework to develop a neighborhood statistics model for predicting instream E. coli levels. Two observed covariates, local water temperature (degrees Celsius) and mean cross-sectional depth (meters), were used as inputs to the model. Predictions of E. coli levels in the water column were compared with independent observational data collected from 16 in-stream locations. The results revealed that spatio-temporal averages of predicted and observed E. coli levels were extremely close. Approximately 66% of individual predicted E. coli concentrations were within a factor of 2 of the observed values. In only one case was the difference between prediction and observation beyond one order of magnitude. The mean of all predicted values at the 16 locations was approximately 1% higher than the mean of the observed values. The approach presented here will be useful for assessing instream contamination, such as pathogen/pathogen indicator levels, at the watershed scale.
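The "within a factor of 2" agreement statistic reported above can be computed directly:

```python
def fraction_within_factor_of_two(observed, predicted):
    """Fraction of predictions lying within a factor of 2 of the observation."""
    hits = sum(1 for o, p in zip(observed, predicted) if o / 2 <= p <= o * 2)
    return hits / len(observed)
```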
Pettit, Jeremy W.; Roberts, Robert E.; Lewinsohn, Peter M.; Seeley, John R.; Yaroslavsky, Ilya
2010-01-01
Longitudinal trajectories of depressive symptoms, perceived support from family, and perceived support from friends were examined among 816 emerging adults (480 women; 59%). In the context of a larger longitudinal investigation on the predictors and course of depression, data were drawn from eight self-report questionnaire assessments that roughly spanned the third decade of life. An age-based scaling approach was used to model trajectories of depressive symptoms and perceived social support between the ages of 21 and 30. Associative models of the relations between depressive symptoms and perceived social support from family and friends were tested. Results indicated that depressive symptoms decreased and perceived social support increased during the study period. Associative models suggested that among women, higher initial levels of perceived support from family predicted slower decreases in depressive symptoms (b = .34, p < .01). Among men, higher initial levels of depressive symptoms predicted slower increases in perceived family support (b = −.23, p < .05). Cross-domain predictive effects were not observed for perceived support from friends and depressive symptoms. Implications of the findings are discussed. PMID:21355652
Ando, Tatsuya; Suguro, Miyuki; Kobayashi, Takeshi; Seto, Masao; Honda, Hiroyuki
2003-10-01
A fuzzy neural network (FNN) using gene expression profile data can select combinations of genes from thousands of genes, and is applicable to predict outcome for cancer patients after chemotherapy. However, wide clinical heterogeneity reduces the accuracy of prediction. To overcome this problem, we have proposed an FNN system based on majoritarian decision using multiple noninferior models. We used transcriptional profiling data, which were obtained from "Lymphochip" DNA microarrays (http://llmpp.nih.gov/DLBCL), reported by Rosenwald (N Engl J Med 2002; 346: 1937-47). When the data were analyzed by our FNN system, accuracy (73.4%) of outcome prediction using only 1 FNN model with 4 genes was higher than that (68.5%) of the Cox model using 17 genes. Higher accuracy (91%) was obtained when an FNN system with 9 noninferior models, consisting of 35 independent genes, was used. The genes selected by the system included genes that are informative in the prognosis of Diffuse large B-cell lymphoma (DLBCL), such as genes showing an expression pattern similar to that of CD10 and BCL-6 or similar to that of IRF-4 and BCL-4. We classified 220 DLBCL patients into 5 groups using the prediction results of 9 FNN models. These groups may correspond to DLBCL subtypes. In group A containing half of the 220 patients, patients with poor outcome were found to satisfy 2 rules, i.e., high expression of MAX dimerization with high expression of unknown A (LC_26146), or high expression of MAX dimerization with low expression of unknown B (LC_33144). The present paper is the first to describe the multiple noninferior FNN modeling system. This system is a powerful tool for predicting outcome and classifying patients, and is applicable to other heterogeneous diseases.
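The majoritarian decision over multiple noninferior FNN models can be sketched as a simple majority vote over per-model binary outcome predictions; the voting rule below is an assumption, and the paper's combination scheme may be more elaborate:

```python
def majority_vote(predictions):
    """Combine binary outcome predictions (0/1) from several models:
    predict 1 when more than half of the models vote 1."""
    return 1 if 2 * sum(predictions) > len(predictions) else 0
```

With an odd number of models, such as the 9 noninferior models above, a tie cannot occur.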
Ren, Jingzheng
2018-01-01
The anaerobic digestion process has been recognized as a promising way to treat waste and recover energy sustainably. Modelling of the anaerobic digestion system is important for effectively and accurately controlling, adjusting and predicting the system for higher methane yield. The GM(1,N) approach, which requires neither a mechanistic description nor a large number of samples, was employed to model the anaerobic digestion system and predict methane yield. To illustrate the proposed model, a case study of anaerobic digestion of municipal solid waste for methane yield was examined, and the results demonstrate that the GM(1,N) model can effectively simulate the anaerobic digestion system in cases of poor information, with little computational expense. Copyright © 2017 Elsevier Ltd. All rights reserved.
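The grey-model machinery behind GM(1,N) can be illustrated with its single-series special case, GM(1,1); this is a generic textbook sketch, not the paper's multi-input model:

```python
import math

def gm11(x0, steps=1):
    """GM(1,1) grey model (single-series special case of GM(1,N)): fit the
    grey differential equation on the accumulated series, then return the
    fitted values plus `steps` forecast values."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # 1-AGO accumulation
    z1 = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    m = n - 1
    y = x0[1:]
    sz, sy = sum(z1), sum(y)
    szz = sum(z * z for z in z1)
    szy = sum(z * v for z, v in zip(z1, y))
    det = m * szz - sz * sz
    # least-squares solution of x0[k] = -a * z1[k] + b
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
```

On a short, roughly exponential series (the data-poor setting grey models target), the fitted and forecast values track the data closely.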
NASA Astrophysics Data System (ADS)
Torki-Harchegani, Mehdi; Ghanbarian, Davoud; Sadeghi, Morteza
2015-08-01
To design new dryers or improve existing drying equipment, accurate values of mass transfer parameters are of great importance. In this study, an experimental and theoretical investigation of drying whole lemons was carried out. The whole lemons were dried in a convective hot-air dryer at different air temperatures (50, 60 and 75 °C) and a constant air velocity (1 m s-1). In the theoretical analysis, three moisture transfer models, namely the Dincer and Dost model, the Bi-G correlation approach and the conventional solution of Fick's second law of diffusion, were used to determine moisture transfer parameters and predict dimensionless moisture content curves. The predicted results were then compared with the experimental data, and the highest degree of prediction accuracy was achieved by the Dincer and Dost model.
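The "conventional solution of Fick's second law" route can be sketched for a sphere (a whole lemon is roughly spherical). This is a hedged illustration: the diffusivity and radius below are assumed values for demonstration, not the paper's fitted estimates.

```python
import math

# Series solution of Fick's second law for a sphere:
# MR(t) = (6/pi^2) * sum_{n>=1} (1/n^2) exp(-n^2 pi^2 D t / r^2)

def moisture_ratio_sphere(t, D, r, terms=50):
    """Dimensionless moisture ratio MR(t), truncated series solution."""
    s = sum((1.0 / n ** 2) * math.exp(-(n ** 2) * math.pi ** 2 * D * t / r ** 2)
            for n in range(1, terms + 1))
    return (6.0 / math.pi ** 2) * s

D = 5e-10   # assumed effective moisture diffusivity, m^2/s (illustrative)
r = 0.03    # assumed lemon radius, m (illustrative)

mr_start = moisture_ratio_sphere(0, D, r)        # close to 1 at t = 0
mr_day = moisture_ratio_sphere(24 * 3600, D, r)  # lower after a day of drying
```

In practice D is estimated from the slope of ln(MR) versus time in the falling-rate period, which is how such curves are fitted to drying data.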
Facultative Stabilization Pond: Measuring Biological Oxygen Demand using Mathematical Approaches
NASA Astrophysics Data System (ADS)
Wira S, Ihsan; Sunarsih, Sunarsih
2018-02-01
Pollution is a man-made phenomenon. Some pollutants discharged directly to the environment can create serious pollution problems, and untreated wastewater will contaminate, and even pollute, the receiving water body. Biological Oxygen Demand (BOD) is the amount of oxygen required for oxidation by bacteria: the higher the BOD concentration, the greater the organic matter content. The purpose of this study was to predict the BOD value of wastewater. Mathematical modeling was chosen to describe and predict the BOD values in facultative wastewater stabilization ponds, and sampling measurements were carried out to validate the model. The results indicate that a mathematical approach can be applied to predict the BOD in facultative wastewater stabilization ponds. The model was validated using the Absolute Mean Error (AME) with a 10% tolerance limit; the AME for the model was 7.38% (< 10%), so the model is valid. A mathematical approach can thus also be applied to illustrate and predict the contents of wastewater.
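The Absolute Mean Error validation step can be sketched directly. The observed and predicted BOD values below are illustrative toy numbers, not the pond measurements from the study.

```python
# AME validation against a 10% tolerance, as in the abstract above.

def ame_percent(observed, predicted):
    """Mean absolute relative error between observations and predictions, in percent."""
    errs = [abs(o - p) / o for o, p in zip(observed, predicted)]
    return 100.0 * sum(errs) / len(errs)

observed  = [120.0, 135.0, 150.0, 160.0]   # toy BOD samples, mg/L
predicted = [112.0, 140.0, 158.0, 150.0]   # toy model output, mg/L

ame = ame_percent(observed, predicted)
print(f"AME = {ame:.2f}% -> model {'valid' if ame < 10 else 'not valid'}")
```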
Spectroscopic Detection of COClF in the Tropical and Mid-Latitude Lower Stratosphere
NASA Technical Reports Server (NTRS)
Rinsland, Curtis P.; Nassar, Ray; Boone, Chris D.; Bernath, Peter; Chiou, Linda; Weisenstein, Debra K.; Mahieu, Emmanuel; Zander, Rodolphe
2007-01-01
We report retrievals of COClF (carbonyl chlorofluoride) based on Atmospheric Chemistry Experiment (ACE) solar occultation spectra recorded at tropical and mid-latitudes during 2004-2005. The COClF molecule is a temporary reservoir of both chlorine and fluorine and has not been measured previously by remote sensing. A maximum COClF mixing ratio of 99.7+/-48.0 pptv (parts in 10^12 by volume, 1 sigma) is measured at 28 km for tropical and subtropical occultations (latitudes below 20° in both hemispheres), with lower mixing ratios at both higher and lower altitudes. Northern hemisphere mid-latitude occultations (30-50°N) yielded an average profile with a peak mixing ratio of 51.7+/-32.1 pptv, 1 sigma, at 27 km, also decreasing above and below that altitude. We compare the measured average profiles with the one reported set of in situ lower stratospheric mid-latitude measurements from 1986 and 1987, a previous two-dimensional (2-D) model calculation for 1987 and 1993, and a 2-D model prediction for 2004. The measured average tropical profile is in close agreement with the model prediction; the northern mid-latitude profile is also consistent, although the peak in the measured profile occurs at a higher altitude (2.5-4.5 km offset) than in the model prediction. Seasonal average 2-D model predictions of the COClF stratospheric distribution for 2004 are also reported.
Jiang, Haihe; Yin, Yixin; Xiao, Wendong; Zhao, Baoyong
2018-01-01
Gas utilization ratio (GUR) is an important indicator used to evaluate the energy consumption of blast furnaces (BFs), and existing methods cannot predict the GUR accurately. In this paper, we present a novel data-driven model for predicting the GUR. The proposed approach combines the TS fuzzy neural network (TS-FNN) with particle swarm optimization (PSO): PSO is applied to optimize the parameters of the TS-FNN in order to decrease the error caused by inaccurate initial parameters. We also applied the box-plot method to eliminate abnormal values from the raw data during preprocessing; this method can handle data that do not follow a normal distribution, as occurs in complex industrial environments. The prediction results demonstrate that the PSO-optimized TS-FNN achieves higher prediction accuracy than the plain TS-FNN and SVM models and can accurately predict the GUR of the blast furnace, providing an effective basis for on-line blast furnace distribution control. PMID:29461469
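The PSO step can be illustrated with a minimal sketch. The objective below is a toy quadratic standing in for the TS-FNN training error; the swarm hyperparameters (inertia and acceleration constants) are common textbook defaults, not the paper's settings.

```python
import random

# Minimal particle swarm optimization: particles move through parameter
# space pulled toward their personal best and the global best position.

def pso(objective, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    random.seed(0)  # deterministic for the demo
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "training error" with its minimum at (1, -2).
best, err = pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, dim=2)
```

In the paper's setting the objective would be the TS-FNN prediction error on the preprocessed furnace data, and the particle positions would encode the network's initial parameters.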
Epileptic Seizures Prediction Using Machine Learning Methods
Usman, Syed Muhammad
2017-01-01
Epileptic seizures occur due to a disorder in brain function that can affect the patient's health. Predicting epileptic seizures before onset is quite useful for preventing them by medication. Machine learning techniques and computational methods are used to predict epileptic seizures from electroencephalogram (EEG) signals. However, preprocessing of EEG signals for noise removal and feature extraction are two major issues that adversely affect both anticipation time and the true-positive prediction rate. We therefore propose a model that provides reliable methods for both preprocessing and feature extraction. Our model predicts epileptic seizures sufficiently long before onset and provides a better true-positive rate. We applied empirical mode decomposition (EMD) for preprocessing and extracted time- and frequency-domain features to train a prediction model. The proposed model detects the start of the preictal state, the state that begins a few minutes before seizure onset, with a higher true-positive rate than traditional methods (92.23%), a maximum anticipation time of 33 minutes, and an average prediction time of 23.6 minutes on the scalp-EEG CHB-MIT dataset of 22 subjects. PMID:29410700
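The feature-extraction stage can be sketched for one signal window. This is a hedged illustration: the EMD preprocessing step is omitted, the signal is a synthetic 10 Hz sinusoid plus an offset (not CHB-MIT data), and the sampling rate is an assumed value.

```python
import math
import cmath

# Simple time- and frequency-domain features from one signal window.

def features(signal, fs):
    n = len(signal)
    mean = sum(signal) / n
    var = sum((s - mean) ** 2 for s in signal) / n
    # naive DFT magnitude spectrum; fine for a short window
    mags = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]
    # dominant non-DC frequency, in Hz
    peak_hz = max(range(1, n // 2), key=lambda k: mags[k]) * fs / n
    return {"mean": mean, "variance": var, "peak_hz": peak_hz}

fs = 64  # assumed sampling rate, Hz
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.1 for t in range(fs)]
f = features(sig, fs)  # peak_hz recovers the 10 Hz component
```

Feature vectors like this (typically with many more entries, e.g. band powers per EMD mode) are what get fed to the downstream classifier.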
Development of a recursion RNG-based turbulence model
NASA Technical Reports Server (NTRS)
Zhou, YE; Vahala, George; Thangam, S.
1993-01-01
Reynolds stress closure models based on recursion renormalization group theory are developed for the prediction of turbulent separated flows. The proposed model uses a finite-wavenumber truncation scheme to account for the spectral distribution of energy. In particular, the model incorporates the effects of both local and nonlocal interactions. The nonlocal interactions are shown to yield a contribution identical to that from the epsilon-renormalization group (RNG), while the local interactions introduce higher-order dispersive effects. A formal analysis of the model is presented, and its ability to accurately predict separated flows is analyzed from a combined theoretical and computational standpoint. Turbulent flow past a backward-facing step is chosen as a test case, and results from detailed computations demonstrate that the proposed recursion-RNG model with a finite cut-off wavenumber yields very good predictions for the backstep problem.
Predicting synchrony in heterogeneous pulse coupled oscillators
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Miliotis, Abraham; Carney, Paul R.; Ditto, William L.
2009-08-01
Pulse coupled oscillators (PCOs) represent a ubiquitous model for a number of physical and biological systems. Phase response curves (PRCs) provide a general mathematical framework to analyze patterns of synchrony generated within these models. A general theoretical approach to account for the nonlinear contributions from higher-order PRCs in the generation of synchronous patterns by PCOs is still lacking. Here, by considering a prototypical example of a PCO network, i.e., two synaptically coupled neurons, we present a general theory that extends beyond the weak-coupling approximation to account for higher-order PRC corrections in the derivation of an approximate discrete map, the stable fixed point of which predicts the domain of 1:1 phase-locked synchronous states generated by the PCO network.
Using Faculty Characteristics to Predict Attitudes toward Developmental Education
ERIC Educational Resources Information Center
Sides, Meredith Louise Carr
2017-01-01
The study adapted Astin's I-E-O model and utilized multiple regression analyses to predict faculty attitudes toward developmental education. The study utilized a cross-sectional survey design to survey faculty members at 27 different higher education institutions in the state of Alabama. The survey instrument was a self-designed questionnaire that…
Volunteering for Job Enrichment: A Test of Expectancy Theory Predictions
ERIC Educational Resources Information Center
Giles, William F.
1977-01-01
In order to test predictions derived from an expectancy theory model developed by E. E. Lawler, measures of higher-order need satisfaction, locus of control, and intrinsic motivation were obtained from 252 female assembly line workers. Implications of the results for placement of individuals in enriched jobs are discussed. (Editor/RK)
Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models
NASA Astrophysics Data System (ADS)
Zang, Tianwu
Predicting the three-dimensional structure of proteins has been a major interest in modern computational biology. While many successful methods can generate models within 3-5 Å root-mean-square deviation (RMSD) of the solution, progress in refining these models has been slow. Effective methods are therefore urgently needed to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, the refinement test of two CASP10 targets using the PCST-EBM method indicates that EBM may bring the initial model to even higher-quality levels. Furthermore, a multi-round refinement protocol with PCST-SBM improves the model quality of a protein to a level sufficient for molecular replacement in X-ray crystallography. Our results confirm the crucial role of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.
Baskar, Gurunathan; Sathya, Shree Rajesh K Lakshmi Jai; Jinnah, Riswana Begum; Sahadevan, Renganathan
2011-01-01
Response surface methodology was employed to optimize the concentrations of four important cultivation medium components (cottonseed oil cake, glucose, NH4Cl, and MgSO4) for maximum medicinal polysaccharide yield by the Lingzhi or Reishi medicinal mushroom, Ganoderma lucidum MTCC 1039, in submerged culture. A second-order polynomial model describing the relationship between medium components and polysaccharide yield was fitted in coded units of the variables. The high coefficient of determination (R2 = 0.953) indicated an excellent correlation between medium components and polysaccharide yield, and the model fitted well with high statistical reliability and significance. The predicted optimum concentrations of the medium components were 3.0% cottonseed oil cake, 3.0% glucose, 0.15% NH4Cl, and 0.045% MgSO4, with a maximum predicted polysaccharide yield of 819.76 mg/L. The experimental polysaccharide yield at the predicted optimum was 854.29 mg/L, which was 4.22% higher than the predicted yield.
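The RSM step fits a second-order polynomial in coded variables and reads the optimum off the fitted surface. A one-factor sketch with made-up coefficients (not the paper's fitted model): the stationary point of y = b0 + b1*x + b2*x^2 is x* = -b1 / (2*b2), the standard vertex formula for a quadratic.

```python
# Stationary point of a fitted one-factor second-order response model.

def stationary_point(b1, b2):
    """x* where dy/dx = b1 + 2*b2*x = 0."""
    return -b1 / (2.0 * b2)

# Illustrative coded-unit coefficients (b2 < 0, so x* is a maximum).
b0, b1, b2 = 700.0, 80.0, -40.0

x_opt = stationary_point(b1, b2)            # 1.0 in coded units
y_max = b0 + b1 * x_opt + b2 * x_opt ** 2   # predicted maximum yield, 740.0
```

With four factors, as in the study, the same idea applies to the full quadratic surface: the optimum solves the linear system obtained by setting all partial derivatives to zero.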
Van Zalk, Nejra; Tillfors, Maria; Trost, Kari
2018-05-05
This study investigated the links between parental worry, parental over-control and adolescent social anxiety in parent-adolescent dyads. Using a longitudinal sample of adolescents (M age = 14.28) and their parents (224 mother-daughter, 234 mother-son, 51 father-daughter, and 47 father-son dyads), comparisons were conducted using cross-lagged path models across two time points. We used adolescent reports of social anxiety and feelings of being overly controlled by parents, and mother and father self-reports of worries. Our results show that boys' social anxiety predicted higher perceived parental overcontrol, whereas girls' social anxiety predicted higher paternal worry over time. In addition, girls' reports of feeling overly controlled by parents predicted higher maternal worry but lower paternal worry over time. For boys, feeling overly controlled predicted less social anxiety instead. The study illustrates how mothers and fathers might differ in their behaviors and concerns regarding their children's social anxiety and feelings of overcontrol.
Reduction of initial shock in decadal predictions using a new initialization strategy
NASA Astrophysics Data System (ADS)
He, Yujun; Wang, Bin; Liu, Mimi; Liu, Li; Yu, Yongqiang; Liu, Juanjuan; Li, Ruizhe; Zhang, Cheng; Xu, Shiming; Huang, Wenyu; Liu, Qun; Wang, Yong; Li, Feifei
2017-08-01
A novel full-field initialization strategy based on the dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar) is proposed to alleviate the well-known initial shock occurring in the early years of decadal predictions. It generates consistent initial conditions, which best fit the monthly mean oceanic analysis data along the coupled model trajectory in 1-month windows. Three indices to measure the initial-shock intensity are also proposed. Results indicate that this method does reduce the initial shock in decadal predictions by the Flexible Global Ocean-Atmosphere-Land System model, Grid-point version 2 (FGOALS-g2) compared with the three-dimensional variational data assimilation-based nudging full-field initialization for the same model, and is comparable to or even better than the initialization strategies used by other models in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Better hindcasts of global mean surface air temperature anomalies are obtained than in other FGOALS-g2 experiments. Owing to the good model response to external forcing and the reduction of the initial shock, higher decadal prediction skill is achieved than in other CMIP5 models.
Xu, Dong; Zhang, Yang
2013-01-01
Genome-wide protein structure prediction and structure-based function annotation have been a long-term goal in molecular biology but have not yet become possible due to difficulties in modeling distant-homology targets. We developed a hybrid pipeline combining ab initio folding and template-based modeling for genome-wide structure prediction, applied to the Escherichia coli genome. The pipeline was tested on 43 known sequences, where QUARK-based ab initio folding simulation generated models with TM-scores 17% higher than those of traditional comparative modeling methods. For 495 unknown hard sequences, 72 are predicted to have a correct fold (TM-score > 0.5) and 321 have a substantial portion of structure correctly modeled (TM-score > 0.35). 317 sequences can be reliably assigned to a SCOP fold family based on structural analogy to existing proteins in the PDB. The presented results, as a case study of E. coli, represent promising progress towards genome-wide structure modeling and fold family assignment using state-of-the-art ab initio folding algorithms. PMID:23719418
Long-Term Post-CABG Survival: Performance of Clinical Risk Models Versus Actuarial Predictions.
Carr, Brendan M; Romeiser, Jamie; Ruan, Joyce; Gupta, Sandeep; Seifert, Frank C; Zhu, Wei; Shroyer, A Laurie
2016-01-01
Clinical risk models are commonly used to predict short-term coronary artery bypass grafting (CABG) mortality but are less commonly used to predict long-term mortality. The added value of long-term mortality clinical risk models over traditional actuarial models has not been evaluated. To address this, the predictive performance of a long-term clinical risk model was compared with that of an actuarial model to identify the clinical variable(s) most responsible for any differences observed. Long-term mortality for 1028 CABG patients was estimated using the Hannan New York State clinical risk model and an actuarial model (based on age, gender, and race/ethnicity). Vital status was assessed using the Social Security Death Index. Observed/expected (O/E) ratios were calculated, and the models' predictive performances were compared using a nested c-index approach. Linear regression analyses identified the subgroup of risk factors driving the differences observed. Mortality rates were 3%, 9%, and 17% at one-, three-, and five years, respectively (median follow-up: five years). The clinical risk model provided more accurate predictions. Greater divergence between model estimates occurred with increasing long-term mortality risk, with baseline renal dysfunction identified as a particularly important driver of these differences. Long-term mortality clinical risk models provide enhanced predictive power compared to actuarial models. Using the Hannan risk model, a patient's long-term mortality risk can be accurately assessed and subgroups of higher-risk patients can be identified for enhanced follow-up care. More research appears warranted to refine long-term CABG clinical risk models. © 2015 The Authors. Journal of Cardiac Surgery Published by Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Collins, Jarrod A.; Brown, Daniel; Kingham, T. Peter; Jarnagin, William R.; Miga, Michael I.; Clements, Logan W.
2015-03-01
Development of a clinically accurate predictive model of microwave ablation (MWA) procedures would represent a significant advancement and facilitate an implementation of patient-specific treatment planning to achieve optimal probe placement and ablation outcomes. While studies have been performed to evaluate predictive models of MWA, the ability to quantify the performance of predictive models via clinical data has been limited to comparing geometric measurements of the predicted and actual ablation zones. The accuracy of placement, as determined by the degree of spatial overlap between ablation zones, has not been achieved. In order to overcome this limitation, a method of evaluation is proposed where the actual location of the MWA antenna is tracked and recorded during the procedure via a surgical navigation system. Predictive models of the MWA are then computed using the known position of the antenna within the preoperative image space. Two different predictive MWA models were used for the preliminary evaluation of the proposed method: (1) a geometric model based on the labeling associated with the ablation antenna and (2) a 3-D finite element method based computational model of MWA using COMSOL. Given the follow-up tomographic images acquired approximately 30 days after the procedure, a 3-D surface model of the necrotic zone was generated to represent the true ablation zone. A quantification of the overlap between the predicted ablation zones and the true ablation zone was performed after a rigid registration was computed between the pre- and post-procedural tomograms. While both models show significant overlap with the true ablation zone, these preliminary results suggest a slightly higher degree of overlap with the geometric model.
Prediction of aircraft handling qualities using analytical models of the human pilot
NASA Technical Reports Server (NTRS)
Hess, R. A.
1982-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Kuo, Pao-Jen; Wu, Shao-Chun; Chien, Peng-Chen; Chang, Shu-Shya; Rau, Cheng-Shyuan; Tai, Hsueh-Ling; Peng, Shu-Hui; Lin, Yi-Chun; Chen, Yi-Chun; Hsieh, Hsiao-Yun; Hsieh, Ching-Hua
2018-01-01
Background The aim of this study was to develop an effective surgical site infection (SSI) prediction model in patients receiving free-flap reconstruction after surgery for head and neck cancer using an artificial neural network (ANN), and to compare its predictive power with that of conventional logistic regression (LR). Materials and methods There were 1,836 patients with 1,854 free-flap reconstructions and 438 postoperative SSIs in the dataset for analysis. They were randomly assigned in a 7:3 ratio to a training set and a test set. Based on comprehensive characteristics of patients and diseases in the absence or presence of operative data, prediction of SSI was performed at two time points (pre-operatively and post-operatively) with a feed-forward ANN and the LR models. In addition to the calculated accuracy, sensitivity, and specificity, the predictive performance of the ANN and LR was assessed based on area under the curve (AUC) measures of receiver operating characteristic curves and the Brier score. Results The ANN had a significantly higher AUC for post-operative prediction (0.892) and for pre-operative prediction (0.808) than LR (both P < 0.0001). In addition, for the ANN, the AUC of post-operative prediction was significantly higher than that of pre-operative prediction (P < 0.0001). With the highest AUC and the lowest Brier score (0.090), post-operative prediction by the ANN had the highest overall predictive performance. Conclusion Post-operative prediction by the ANN had the highest overall performance in predicting SSI after free-flap reconstruction in patients receiving surgery for head and neck cancer. PMID:29568393
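AUC is the headline metric in comparisons like the one above. It can be computed dependency-free as the Mann-Whitney probability that a randomly chosen positive (SSI) case receives a higher score than a randomly chosen negative case; the labels and scores below are toy values, not the study's model outputs.

```python
# ROC AUC via pairwise comparisons (ties count one half).

def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0]                 # 1 = SSI occurred
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]   # toy model risk outputs

roc_auc = auc(labels, scores)  # 11 of 12 pairs ranked correctly
```

For large datasets a rank-based implementation (or a library routine) is preferred over this O(n²) loop, but the value computed is the same.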
Predicted effect of dynamic load on pitting fatigue life for low-contact-ratio spur gears
NASA Technical Reports Server (NTRS)
Lewicki, David G.
1986-01-01
How dynamic load affects the surface pitting fatigue life of external spur gears was predicted by using the NASA computer program TELSGE. Parametric studies were performed over a range of various gear parameters modeling low-contact-ratio involute spur gears. In general, gear life predictions based on dynamic loads differed significantly from those based on static loads, with the predictions being strongly influenced by the maximum dynamic load during contact. Gear mesh operating speed strongly affected predicted dynamic load and life. Meshes operating at a resonant speed or one-half the resonant speed had significantly shorter lives. Dynamic life factors for gear surface pitting fatigue were developed on the basis of the parametric studies. In general, meshes with higher contact ratios had higher dynamic life factors than meshes with lower contact ratios. A design chart was developed for hand calculations of dynamic life factors.
Biases in affective forecasting and recall in individuals with depression and anxiety symptoms.
Wenze, Susan J; Gunthert, Kathleen C; German, Ramaris E
2012-07-01
The authors used experience sampling to investigate biases in affective forecasting and recall in individuals with varying levels of depression and anxiety symptoms. Participants who were higher in depression symptoms demonstrated stronger (more pessimistic) negative mood prediction biases, marginally stronger negative mood recall biases, and weaker (less optimistic) positive mood prediction and recall biases. Participants who were higher in anxiety symptoms demonstrated stronger negative mood prediction biases, but positive mood prediction biases that were on par with those who were lower in anxiety. Anxiety symptoms were not associated with mood recall biases. Neither depression symptoms nor anxiety symptoms were associated with bias in event prediction. Their findings fit well with the tripartite model of depression and anxiety. Results are also consistent with the conceptualization of anxiety as a "forward-looking" disorder, and with theories that emphasize the importance of pessimism and general negative information processing in depressive functioning.
Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap
2016-01-01
The objective of the study was to provide a general procedure for mapping species abundance when the data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli process (species prevalence) and a Poisson process (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive.
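The "unconditional intensity" is simply the Bernoulli prevalence multiplied by the Poisson intensity. A small sketch of the zero-inflated Poisson pieces, with illustrative parameter values (not the fitted Wadden Sea models):

```python
import math

# Zero-inflated Poisson: with probability (1 - prevalence) the count is a
# structural zero; otherwise it is Poisson(intensity).

def zip_mean(prevalence, intensity):
    """Unconditional expected count under the ZIP model."""
    return prevalence * intensity

def zip_prob_zero(prevalence, intensity):
    """P(count = 0): structural zeros plus Poisson zeros."""
    return (1 - prevalence) + prevalence * math.exp(-intensity)

# Toy analogue of the paper's finding: lower prevalence with higher
# intensity can give the same unconditional intensity as the reverse.
print(zip_mean(0.3, 10.0))  # 3.0
print(zip_mean(0.5, 6.0))   # 3.0
```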
What can 35 years and over 700,000 measurements tell us about noise exposure in the mining industry?
Roberts, Benjamin; Sun, Kan; Neitzel, Richard L
2017-01-01
To analyse over 700,000 cross-sectional measurements from the Mine Safety and Health Administration (MSHA) and develop statistical models to predict noise exposure for a worker. Descriptive statistics were used to summarise the data. Two linear regression models were used to predict noise exposure based on the MSHA permissible exposure limit (PEL) and action level (AL), respectively. Twofold cross-validation was used to compare the exposure estimates from the models to actual measurements. The mean difference and t-statistic were calculated for each job title to determine whether the model predictions were significantly different from the actual data. Measurements were acquired from MSHA through a Freedom of Information Act request. From 1979 to 2014, noise exposure has decreased; measurements taken before the implementation of MSHA's revised noise regulation in 2000 were on average 4.5 dBA higher than those taken after the law was implemented. Both models produced exposure predictions that differed from the holdout data by less than 1 dBA. Overall noise levels in mines have been decreasing; however, this decrease has not been uniform across all mining sectors. The exposure predictions from the models will be useful for predicting hearing loss in workers in the mining industry.
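The validation idea, fit a linear model on one fold, predict the held-out fold, then test whether the mean prediction error differs from zero, can be sketched as follows. The year/dBA data below are synthetic, not MSHA measurements.

```python
import math

def fit_ols(xs, ys):
    """Simple one-predictor least squares; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def mean_diff_t(observed, predicted):
    """Mean difference (observed - predicted) and its one-sample t-statistic."""
    d = [o - p for o, p in zip(observed, predicted)]
    n = len(d)
    md = sum(d) / n
    sd = math.sqrt(sum((x - md) ** 2 for x in d) / (n - 1))
    return md, md / (sd / math.sqrt(n))

years = [0, 1, 2, 3, 4, 5, 6, 7]                       # toy time index
dba   = [92.0, 91.0, 90.5, 89.0, 88.5, 87.0, 86.5, 85.0]  # toy noise levels

train, test = [0, 2, 4, 6], [1, 3, 5, 7]  # the two folds
a, b = fit_ols([years[i] for i in train], [dba[i] for i in train])
pred = [a + b * years[i] for i in test]
md, t = mean_diff_t([dba[i] for i in test], pred)
```

In this toy split the mean difference is under 1 dBA in magnitude, the same kind of closeness the study reports between model estimates and holdout measurements.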
Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets the quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the input of the NMSPN model and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
Belay, T K; Dagnachew, B S; Boison, S A; Ådnøy, T
2018-03-28
Milk infrared spectra are routinely used for phenotyping traits of interest through links developed between the traits and spectra. Predicted individual traits are then used in genetic analyses for estimated breeding value (EBV) or phenotypic predictions using a single-trait mixed model; this approach is referred to as indirect prediction (IP). An alternative approach [direct prediction (DP)] is a direct genetic analysis of (a reduced dimension of) the spectra using a multitrait model to predict multivariate EBV of the spectral components and, ultimately, also to predict the univariate EBV or phenotype for the traits of interest. We simulated 3 traits under different genetic (low: 0.10 to high: 0.90) and residual (zero to high: ±0.90) correlation scenarios between the 3 traits and assumed the first trait is a linear combination of the other 2 traits. The aim was to compare the IP and DP approaches for predictions of EBV and phenotypes under the different correlation scenarios. We also evaluated relationships between the performance of the 2 approaches and the accuracy of the calibration equations. Moreover, the effect of using different regression coefficients estimated from simulated phenotypes (βp), true breeding values (βg), and residuals (βr) on the performance of the 2 approaches was evaluated. The simulated data contained 2,100 parents (100 sires and 2,000 cows) and 8,000 offspring (4 offspring per cow). Of the 8,000 observations, 2,000 were randomly selected and used to develop links between the first and the other 2 traits using partial least squares (PLS) regression analysis. The different PLS regression coefficients (βp, βg, and βr) were used in subsequent predictions following the IP and DP approaches. We used BLUP analyses for the remaining 6,000 observations using the true (co)variance components that had been used for the simulation. 
Accuracy of prediction (of EBV and phenotype) was calculated as the correlation between predicted and true values from the simulations. The results showed that accuracies of EBV prediction were higher in the DP than in the IP approach. The reverse was true for accuracy of phenotypic prediction when using βp, but not when using βg and βr, where accuracy of phenotypic prediction in the DP was slightly higher than in the IP approach. Within the DP approach, accuracies of EBV when using βg were higher than when using βp only in the low genetic correlation scenario. However, we found no differences in EBV prediction accuracy between βp and βg in the IP approach. Accuracy of the calibration models increased with an increase in the genetic and residual correlations between the traits. Performance of both approaches increased with an increase in accuracy of the calibration models. In conclusion, the DP approach is a good strategy for EBV prediction but not for phenotypic prediction, where the classical PLS regression-based equations or the IP approach provided better results. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
NASA Astrophysics Data System (ADS)
Sohn, Soo-Jin; Min, Young-Mi; Lee, June-Yi; Tam, Chi-Yung; Kang, In-Sik; Wang, Bin; Ahn, Joong-Bae; Yamagata, Toshio
2012-02-01
The performance of the probabilistic multimodel prediction (PMMP) system of the APEC Climate Center (APCC) in predicting the Asian summer monsoon (ASM) precipitation at a four-month lead (with February initial conditions) was compared with that of a statistical model using hindcast data for 1983-2005 and real-time forecasts for 2006-2011. Particular attention was paid to probabilistic precipitation forecasts for the boreal summer after the mature phase of the El Niño-Southern Oscillation (ENSO). Given that coupled models' skill for boreal spring and summer precipitation mainly comes from their ability to capture the ENSO teleconnection, we developed the statistical model using linear regression with the preceding winter ENSO condition as the predictor. Our results reveal several advantages and disadvantages of both forecast systems. First, the PMMP appears to have higher skill for both above- and below-normal categories in the six-year real-time forecast period, whereas the cross-validated statistical model has higher skill during the 23-year hindcast period. This implies that the cross-validated statistical skill may be overestimated. Second, the PMMP is the better tool for capturing atypical ENSO (or non-canonical ENSO related) teleconnections, which affected the ASM precipitation during the early 1990s and in the recent decade. Third, the statistical model is more sensitive to the ENSO phase and has an advantage in predicting the ASM precipitation after the mature phase of La Niña.
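The statistical benchmark described above can be sketched as a linear regression of summer monsoon precipitation anomalies on the preceding winter ENSO index, cross-validated by leaving one year out. The 23 "years" of data below are synthetic, and the slope is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
years = 23
nino34 = rng.normal(0.0, 1.0, years)                  # DJF ENSO index (synthetic)
precip = -0.6 * nino34 + rng.normal(0.0, 0.5, years)  # JJA precipitation anomaly

# Leave-one-year-out cross-validation of the regression
cv_pred = np.empty(years)
for i in range(years):
    keep = np.arange(years) != i
    X = np.column_stack([np.ones(years - 1), nino34[keep]])
    b0, b1 = np.linalg.lstsq(X, precip[keep], rcond=None)[0]
    cv_pred[i] = b0 + b1 * nino34[i]

# Cross-validated skill as anomaly correlation
skill = float(np.corrcoef(cv_pred, precip)[0, 1])
```

Skill computed this way on the same short record the model was fitted to is exactly the quantity the abstract warns may be overestimated relative to true real-time skill.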
Initial Integration of Noise Prediction Tools for Acoustic Scattering Effects
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Burley, Casey L.; Tinetti, Ana; Rawls, John W.
2008-01-01
This effort provides an initial glimpse at NASA capabilities available for predicting the scattering of fan noise from a non-conventional aircraft configuration. The Aircraft NOise Prediction Program, the Fast Scattering Code, and the Rotorcraft Noise Model were coupled to provide increased-fidelity models of scattering effects on engine fan noise sources. The integration of these codes led to the identification of several key issues entailed in applying such multi-fidelity approaches. In particular, for prediction at noise certification points, the inclusion of distributed sources leads to complications with the source semi-sphere approach. Computational resource requirements limit the use of the higher-fidelity scattering code to predict radiated sound pressure levels for full-scale configurations at relevant frequencies. Also, the ability to more accurately represent complex shielding surfaces in current lower-fidelity models is necessary for general application to scattering predictions. This initial step in determining the potential benefits/costs of these new methods over the existing capabilities illustrates a number of the issues that must be addressed in the development of next-generation aircraft system noise prediction tools.
Slat Noise Predictions Using Higher-Order Finite-Difference Methods on Overset Grids
NASA Technical Reports Server (NTRS)
Housman, Jeffrey A.; Kiris, Cetin
2016-01-01
Computational aeroacoustic simulations using the structured overset grid approach and higher-order finite difference methods within the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for slat noise predictions. The simulations are part of a collaborative study comparing noise generation mechanisms between a conventional slat and a Krueger leading edge flap. Simulation results are compared with experimental data acquired during an aeroacoustic test in the NASA Langley Quiet Flow Facility. Details of the structured overset grid, numerical discretization, and turbulence model are provided.
Decision-making, sensitivity to reward, and attrition in weight-management
Koritzky, Gilly; Dieterle, Camille; Rice, Chantelle; Jordan, Katie; Bechara, Antoine
2014-01-01
Objective Attrition is a common problem in weight-management. Understanding the risk factors for attrition should enhance professionals’ ability to increase completion rates and improve health outcomes for more individuals. We propose a model that draws upon neuropsychological knowledge on reward-sensitivity in obesity and overeating to predict attrition. Design & Methods 52 participants in a weight-management program completed a complex decision-making task. Decision-making characteristics – including sensitivity to reward – were further estimated using a quantitative model. Impulsivity and risk-taking measures were also administered. Results Consistent with the hypothesis that sensitivity to reward predicted attrition, program dropouts had higher sensitivity to reward than completers (p < 0.03). No differences were observed between completers and dropouts in initial BMI, age, employment status, or the number of prior weight-loss attempts (p ≥ 0.07). Completers had a slightly higher education level than dropouts, but its inclusion in the model did not increase predictive power. Impulsivity, delay of gratification, and risk-taking did not predict attrition, either. Conclusions Findings link attrition in weight-management to the neural mechanisms associated with reward-seeking and related influences on decision-making. Individual differences in the magnitude of response elicited by rewards may account for the relative difficulty experienced by dieters in adhering to treatment. PMID:24771588
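The abstract does not specify the quantitative model; the sketch below assumes an expectancy-valence style learning rule of the kind commonly fitted to such decision tasks, in which the loss-attention weight w inversely indexes sensitivity to reward and phi is a learning rate. Both the rule and the parameter values are illustrative assumptions, not the authors' published specification.

```python
def valence(win, loss, w):
    """Subjective valence of an outcome; w in [0, 1] weights losses.
    Low w means wins dominate, i.e. high sensitivity to reward."""
    return (1.0 - w) * win - w * loss

def update_expectancy(E, v, phi):
    """Delta-rule update of a deck's expectancy toward the latest valence."""
    return E + phi * (v - E)

# Three hypothetical trials on one deck: (win, loss) amounts
E = 0.0
for win, loss in [(100, 0), (100, 250), (50, 0)]:
    E = update_expectancy(E, valence(win, loss, w=0.3), phi=0.2)
```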
Segev, G; Langston, C; Takada, K; Kass, P H; Cowgill, L D
2016-05-01
A scoring system for outcome prediction in dogs with acute kidney injury (AKI) has recently been developed but has not been validated. The hypothesis was that the scoring system previously developed for outcome prediction would accurately predict outcome in a validation cohort of dogs with AKI managed with hemodialysis. One hundred fifteen client-owned dogs with AKI were studied. Medical records of dogs with AKI treated by hemodialysis between 2011 and 2015 were reviewed. Dogs were included only if all variables required to calculate the final predictive score were available and the 30-day outcome was known. A predictive score for each of 3 models was calculated for each dog. Logistic regression was used to evaluate the association of the final predictive score with each model's outcome. Receiver operating characteristic (ROC) curve analyses were performed to determine sensitivity and specificity for each model based on previously established cut-off values. Higher scores for each model were associated with decreased survival probability (P < .001). Based on previously established cut-off values, the 3 models (models A, B, C) were associated with sensitivities/specificities of 73/75%, 71/80%, and 75/86%, respectively, and correctly classified 74-80% of the dogs. All models were simple to apply and allowed outcome prediction that closely corresponded with actual outcome in an independent cohort. As expected, accuracies were slightly lower than those from the previously reported cohort used initially to develop the models. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
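A hypothetical sketch of how such a predictive score maps to survival probability through a logistic model, and how sensitivity and specificity follow from a fixed cut-off. The coefficients, cut-off, and simulated outcomes below are illustrative, not the published model A/B/C values.

```python
import numpy as np

def survival_prob(score, intercept=2.0, slope=-0.8):
    # Higher score -> lower survival probability, as reported above
    return 1.0 / (1.0 + np.exp(-(intercept + slope * score)))

rng = np.random.default_rng(3)
scores = rng.uniform(0.0, 6.0, 115)
# Simulate 30-day outcomes consistent with the score-probability link
died = rng.random(115) < (1.0 - survival_prob(scores))

cutoff = 3.0  # classify score >= cutoff as predicted non-survivor
pred_died = scores >= cutoff
sensitivity = float(np.mean(pred_died[died]))    # detected deaths
specificity = float(np.mean(~pred_died[~died]))  # correctly cleared survivors
```

Sweeping `cutoff` over the score range and plotting sensitivity against 1 - specificity is exactly the ROC analysis the record describes.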
2014-01-01
Background It is important to predict the quality of a protein structural model before its native structure is known. Methods that can predict the absolute local quality of individual residues in a single protein model are rare, yet particularly needed for using, ranking and refining protein models. Results We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. the basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts, to make predictions. We also trained an SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them only improves performance when the real deviations between the native structure and the model are larger than 5 Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637 Å. The global quality prediction accuracy of the tool is comparable to that of other good tools on the same benchmark. Conclusion SMOQ is a useful tool for protein single-model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:24776231
Cao, Renzhi; Wang, Zheng; Wang, Yiheng; Cheng, Jianlin
2014-04-28
It is important to predict the quality of a protein structural model before its native structure is known. Methods that can predict the absolute local quality of individual residues in a single protein model are rare, yet particularly needed for using, ranking and refining protein models. We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. the basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts, to make predictions. We also trained an SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them only improves performance when the real deviations between the native structure and the model are larger than 5 Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637 Å. The global quality prediction accuracy of the tool is comparable to that of other good tools on the same benchmark. SMOQ is a useful tool for protein single-model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/.
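One common way to convert per-residue distance deviations into a single global quality score is an S-score style transform; the sketch below uses that transform with an assumed constant d0 = 3.8 Å, which is a conventional choice and not necessarily SMOQ's published formula.

```python
import numpy as np

def global_quality(deviations_A, d0=3.8):
    """Map per-residue deviations (in Angstroms) to a [0, 1] global score.

    Each residue contributes 1 / (1 + (d_i / d0)^2), so a residue with zero
    deviation contributes 1 and large deviations contribute near 0; the
    global score is the mean over residues.
    """
    s = 1.0 / (1.0 + (np.asarray(deviations_A, dtype=float) / d0) ** 2)
    return float(np.mean(s))

perfect = global_quality([0.0, 0.0, 0.0])  # all residues exact -> 1.0
poor = global_quality([10.0, 12.0, 15.0])  # large deviations -> near 0
```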
Abraham, Gad; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael
2013-02-01
A central goal of medical genetics is to accurately predict complex disease from genotypes. Here, we present a comprehensive analysis of simulated and real data using lasso and elastic-net penalized support-vector machine models, a mixed-effects linear model, a polygenic score, and unpenalized logistic regression. In simulation, the sparse penalized models achieved lower false-positive rates and higher precision than the other methods for detecting causal SNPs. The common practice of prefiltering SNP lists for subsequent penalized modeling was examined and shown to substantially reduce the ability to recover the causal SNPs. Using genome-wide SNP profiles across eight complex diseases within cross-validation, lasso and elastic-net models achieved substantially better predictive ability in celiac disease, type 1 diabetes, and Crohn's disease, and equivalent predictive ability in the rest, with the results in celiac disease strongly replicating between independent datasets. We investigated the effect of linkage disequilibrium on the predictive models, showing that the penalized methods leverage this information to their advantage, compared with methods that assume SNP independence. Our findings show that sparse penalized approaches are robust across different disease architectures, producing phenotype predictions and variance explained as good as or better than those of the other methods. This has fundamental ramifications for the selection and future development of methods to genetically predict human disease. © 2012 WILEY PERIODICALS, INC.
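The lasso step behind such sparse SNP selection can be sketched in pure NumPy as cyclic coordinate descent with soft-thresholding. This is a didactic implementation on simulated "SNP" data, not the software the authors used; the penalty value and data dimensions are illustrative.

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return beta

rng = np.random.default_rng(4)
n, p = 200, 50
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:3] = [2.0, -1.5, 1.0]  # three causal "SNPs"
y = X @ true_beta + rng.normal(0.0, 0.5, n)

beta_hat = lasso_cd(X, y, lam=0.15)
support = np.flatnonzero(np.abs(beta_hat) > 0.05)  # selected predictors
```

At convergence the null coordinates are thresholded to zero, which is the sparsity property that gives the lower false-positive rates reported above.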
California's Snow Gun and its implications for mass balance predictions under greenhouse warming
NASA Astrophysics Data System (ADS)
Howat, I.; Snyder, M.; Tulaczyk, S.; Sloan, L.
2003-12-01
Precipitation has received limited treatment in glacier and snowpack mass balance models, largely due to the poor resolution and confidence of precipitation predictions relative to the temperature predictions derived from atmospheric models. Most snow and glacier mass balance models rely on statistical or lapse-rate-based downscaling of general or regional circulation models (GCMs and RCMs), essentially decoupling the sub-grid-scale, orographically driven evolution of atmospheric heat and moisture. Such models invariably predict large losses in snow and ice volume under greenhouse warming. However, positive trends in the mass balance of glaciers in some warming maritime climates, as well as at high elevations of the Greenland Ice Sheet, suggest that increased precipitation may play an important role in snow- and glacier-climate interactions. Here, we present a half century of April snowpack data from the Sierra Nevada and Cascade mountains of California, USA. This high-density network of snow-course data indicates that a gain in winter snow accumulation at higher elevations has compensated for over 50% of the loss in snow volume at lower elevations and has led to glacier expansion on Mt. Shasta. These trends are concurrent with a region-wide increase in winter temperatures of up to 2 °C. They result from the orographic lifting and saturation of warmer, more humid air, leading to increased precipitation at higher elevations. Previous studies have invoked such a "Snow Gun" effect to explain contemporaneous records of Tertiary ocean warming and rapid glacial expansion. A climatological context for California's "Snow Gun" effect is elucidated by correlation between the elevation distribution of April SWE observations and the phases of the Pacific Decadal Oscillation and the El Niño-Southern Oscillation, both of which control the heat and moisture delivered to the U.S. Pacific coast. 
The existence of a significant "Snow Gun" effect presents two challenges to snow and glacier mass balance modeling. Firstly, the link between amplification of orographic precipitation and the temporal evolution of ocean-climate oscillations indicates that prediction of future mass balance trends requires consideration of the timing and amplitude of such oscillations. Only recently have ocean-atmosphere models begun to realistically produce such temporal variability. Secondly, the steepening snow mass-balance elevation gradient associated with the "Snow Gun" implies greater spatial variability in balance with warming. In a warming climate, orographic processes at a scale finer than the highest-resolution RCM (>20 km grid) become increasingly important, and predictions based on lower elevations become increasingly inaccurate for higher elevations. Therefore, thermodynamic interaction between atmospheric heat, moisture and topography must be included in downscaling techniques. In order to demonstrate the importance of thermodynamic downscaling in mass balance predictions, we nest a high-resolution (100 m grid), coupled Orographic Precipitation and Surface Energy balance Model (OPSEM) into the RegCM2.5 RCM (40 km grid) and compare results. We apply this nesting technique to Mt. Shasta, California, an area of high topography (~4000 m) relative to its RegCM2.5 grid elevation (1289 m). These models compute average April snow volume under present and doubled present atmospheric CO2 concentrations. While the RegCM2.5 regional model predicts an 83% decrease in April SWE, OPSEM predicts a 16% increase. These results indicate that thermodynamic interactions between the atmosphere and topography at sub-RCM grid resolution must be considered in mass balance models.
Impact of predictive scoring model and e-mail messages on African American blood donors.
Bachegowda, Lohith S; Timm, Brad; Dasgupta, Pinaki; Hillyer, Christopher D; Kessler, Debra; Rebosa, Mark; France, Christopher R; Shaz, Beth H
2017-06-01
Expanding the African American (AA) donor pool is critical to sustain transfusion support for sickle cell disease patients. The aims were to: 1) apply cognitive computing on donation-related metrics to develop a predictive model that effectively identifies repeat AA donors, 2) determine whether a single e-mail communication could improve AA donor retention and compare retention results for higher versus lower predictive score donors, and 3) evaluate the effect of e-mail marketing on AA donor retention with a culturally versus nonculturally tailored message. Between 2011 and 2012, 30,786 AA donors donated blood at least once, and predictive repeat donor scores (PRDSs) were generated from donor-related metrics (frequency of donations, duration between donations, age, blood type, and sex). In 2013, 28% (8657/30,786) of 2011 to 2012 donors returned to donate, and the PRDS was validated on this group. Returning blood donors had a higher mean PRDS than nonreturning donors (0.649 vs. 0.268; p < 0.001). In the e-mail pilot, a high PRDS (≥0.6) compared to a low PRDS (<0.6) was associated with an 89% higher donor presentation rate (p < 0.001), a 20% higher e-mail opening rate (p < 0.001), and, specifically among those who opened the e-mail, a 159% higher presentation rate (p < 0.001). Finally, blood donation rate did not differ (p = 0.79) as a function of generic (n = 9312, 1.4%) versus culturally tailored (n = 9326, 1.3%) messaging. Computational algorithms utilizing readily available donor metrics can identify highly committed AA donors and, in conjunction with targeted e-mail communication, have the potential to increase the efficiency of donor marketing. © 2017 AABB.
NASA Astrophysics Data System (ADS)
Nield, Grace A.; Whitehouse, Pippa L.; van der Wal, Wouter; Blank, Bas; O'Donnell, John Paul; Stuart, Graham W.
2018-04-01
Differences in predictions of Glacial Isostatic Adjustment (GIA) for Antarctica persist due to uncertainties in deglacial history and Earth rheology. The Earth models adopted in many GIA studies are defined by parameters that vary in the radial direction only and represent a global average Earth structure (referred to as 1D Earth models). Over-simplifying actual Earth structure leads to bias in model predictions in regions where Earth parameters differ significantly from the global average, such as West Antarctica. We investigate the impact of lateral variations in lithospheric thickness on GIA in Antarctica by carrying out two experiments that use different rheological approaches to define 3D Earth models that include spatial variations in lithospheric thickness. The first experiment defines an elastic lithosphere with spatial variations in thickness inferred from seismic studies. We compare the results from this 3D model with results derived from a 1D Earth model that has a uniform lithospheric thickness defined as the average of the 3D lithospheric thickness. Irrespective of deglacial history and sub-lithospheric mantle viscosity, we find higher gradients of present-day uplift rates (i.e. higher amplitude and shorter wavelength) in West Antarctica when using the 3D models, due to the thinner-than-1D-average lithosphere prevalent in this region. The second experiment uses seismically-inferred temperature as input to a power-law rheology thereby allowing the lithosphere to have a viscosity structure. Modelling the lithosphere with a power-law rheology results in behaviour that is equivalent to a thinner-lithosphere model, and it leads to higher amplitude and shorter wavelength deformation compared with the first experiment. We conclude that neglecting spatial variations in lithospheric thickness in GIA models will result in predictions of peak uplift and subsidence that are biased low in West Antarctica. 
This has important implications for ice-sheet modelling studies as the steeper gradients of uplift predicted from the more realistic 3D model may promote stability in marine-grounded regions of West Antarctica. Including lateral variations in lithospheric thickness, at least to the level of considering West and East Antarctica separately, is important for capturing short wavelength deformation and it has the potential to provide a better fit to GPS observations as well as an improved GIA correction for GRACE data.
Liu, Yang; Paciorek, Christopher J.; Koutrakis, Petros
2009-01-01
Background Studies of chronic health effects due to exposures to particulate matter with aerodynamic diameters ≤ 2.5 μm (PM2.5) are often limited by sparse measurements. Satellite aerosol remote sensing data may be used to extend PM2.5 ground networks to cover a much larger area. Objectives In this study we examined the benefits of using aerosol optical depth (AOD) retrieved by the Geostationary Operational Environmental Satellite (GOES) in conjunction with land use and meteorologic information to estimate ground-level PM2.5 concentrations. Methods We developed a two-stage generalized additive model (GAM) for U.S. Environmental Protection Agency PM2.5 concentrations in a domain centered in Massachusetts. The AOD model represents conditions when AOD retrieval is successful; the non-AOD model represents conditions when AOD is missing in the domain. Results The AOD model has higher predictive power, as judged by adjusted R2 (0.79), than the non-AOD model (0.48). The PM2.5 concentrations predicted by the AOD model are, on average, 0.8–0.9 μg/m3 higher than the non-AOD model predictions, with a smoother spatial distribution, higher concentrations in rural areas, and the highest concentrations in areas other than major urban centers. Although AOD is a highly significant predictor of PM2.5, meteorologic parameters are major contributors to the better performance of the AOD model. Conclusions GOES aerosol/smoke product (GASP) AOD is able to summarize a set of weather and land use conditions that stratify PM2.5 concentrations into two different spatial patterns. Even if land use regression models do not include AOD as a predictor variable, two separate models should be fitted to account for the different PM2.5 spatial patterns related to AOD availability. PMID:19590678
Miyamoto, Maki; Iwasaki, Shinji; Chisaki, Ikumi; Nakagawa, Sayaka; Amano, Nobuyuki; Hirabayashi, Hideki
2017-12-01
1. The aim of the present study was to evaluate the usefulness of chimeric mice with humanised liver (PXB mice) for the prediction of clearance (CLt) and volume of distribution at steady state (Vdss), in comparison with monkeys, which have been reported as a reliable model for human pharmacokinetics (PK) prediction, and with rats, as a conventional PK model. 2. CLt and Vdss values in PXB mice, monkeys and rats were determined following intravenous administration of 30 compounds known to be eliminated in humans mainly via hepatic metabolism by various drug-metabolising enzymes. Using single-species allometric scaling, human CLt and Vdss values were predicted from the three animal models. 3. Predicted CLt values from PXB mice exhibited the highest predictability: of the 30 compounds, 25 were predicted within a three-fold range of actual values for PXB mice, 21 for monkeys and 14 for rats. For predicted human Vdss values, the number of compounds falling within a three-fold range was 23 for PXB mice, 24 for monkeys, and 16 for rats among 29 compounds. PXB mice showed higher predictability for CLt and Vdss values than the other animal models. 4. These results demonstrate the utility of PXB mice in predicting human PK parameters.
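Single-species allometric scaling as used above can be sketched as follows. The 0.75 exponent for clearance and 1.0 for Vdss are the conventional choices rather than values reported by this study, and the example animal and observed human numbers are hypothetical.

```python
def scale_cl(cl_animal_mL_min, bw_animal_kg, bw_human_kg=70.0):
    """Scale animal clearance to human by body weight with exponent 0.75."""
    return cl_animal_mL_min * (bw_human_kg / bw_animal_kg) ** 0.75

def scale_vdss(vd_animal_L, bw_animal_kg, bw_human_kg=70.0):
    """Scale animal Vdss to human by body weight with exponent 1.0."""
    return vd_animal_L * (bw_human_kg / bw_animal_kg) ** 1.0

def within_threefold(predicted, observed):
    """The success criterion used above: predicted/observed in [1/3, 3]."""
    ratio = predicted / observed
    return (1.0 / 3.0) <= ratio <= 3.0

# Hypothetical PXB-mouse example: 25 g mouse with CLt of 2 mL/min
cl_human = scale_cl(2.0, 0.025)          # ~770 mL/min for a 70 kg human
ok = within_threefold(cl_human, 800.0)   # observed value is hypothetical
```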
Prediction of fatigue-related driver performance from EEG data by deep Riemannian model.
Hajinoroozi, Mehdi; Jianqiu Zhang; Yufei Huang
2017-07-01
Prediction of drivers' drowsy and alert states is important for safety purposes. The prediction of drivers' drowsy and alert states from electroencephalography (EEG) using shallow and deep Riemannian methods is presented. For the shallow Riemannian methods, the minimum distance to Riemannian mean (MDM) algorithm and the Log-Euclidean metric are investigated, and it is shown that the Log-Euclidean metric outperforms the MDM algorithm. In addition, SPDNet, a deep Riemannian model that takes the EEG covariance matrix as input, is investigated. It is shown that SPDNet outperforms all tested shallow and deep classification methods: its performance is 6.02% and 2.86% higher than the best performance of the conventional Euclidean classifiers and the shallow Riemannian models, respectively.
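A minimal sketch of the shallow Riemannian baseline in the Log-Euclidean metric: map each SPD covariance matrix through the matrix logarithm (via eigendecomposition), average per class in log-space, and classify by nearest class mean in Frobenius distance. The covariance matrices here are random, not real EEG, and the two "states" are separated only by an artificial scale difference.

```python
import numpy as np

def spd_logm(C):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def make_spd(rng, d, scale):
    """Random symmetric positive-definite matrix (synthetic 'covariance')."""
    A = rng.normal(size=(d, d))
    return A @ A.T + scale * np.eye(d)

rng = np.random.default_rng(5)
d = 4
# Log-Euclidean class means: average the log-mapped training matrices
alert = [spd_logm(make_spd(rng, d, 1.0)) for _ in range(20)]
drowsy = [spd_logm(make_spd(rng, d, 5.0)) for _ in range(20)]
means = [np.mean(alert, axis=0), np.mean(drowsy, axis=0)]

def classify(C):
    L = spd_logm(C)
    dists = [np.linalg.norm(L - m) for m in means]
    return int(np.argmin(dists))  # 0 = alert-like, 1 = drowsy-like

pred = classify(make_spd(rng, d, 5.0))  # a drowsy-like test matrix
```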
NASA Astrophysics Data System (ADS)
Crosby, S. C.; O'Reilly, W. C.; Guza, R. T.
2016-02-01
Accurate, unbiased, high-resolution (in space and time) nearshore wave predictions are needed to drive models of beach erosion, coastal flooding, and alongshore transport of sediment, biota and pollutants. On highly sheltered shorelines, wave predictions are sensitive to the directions of onshore propagating waves, and nearshore model prediction error is often dominated by uncertainty in offshore boundary conditions. Offshore islands and shoals, and coastline curvature, create complex sheltering patterns over the 250 km span of the southern California (SC) shoreline. Here, regional wave model skill in SC was compared for different offshore boundary conditions created using offshore buoy observations and global wave model hindcasts (National Oceanic and Atmospheric Administration Wave Watch 3, WW3). Spectral ray-tracing methods were used to transform incident offshore swell (0.04-0.09 Hz) energy at high directional resolution (1 deg). Model skill is assessed for predictions (wave height, direction, and alongshore radiation stress) at 16 nearshore buoy sites between 2000 and 2009. Model skill using buoy-derived boundary conditions is higher than with WW3-derived boundary conditions. Buoy-driven nearshore model results are similar under various assumptions about the true offshore directional distribution (maximum entropy, Bayesian direct, and 2nd-derivative smoothness). Two methods combining offshore buoy observations with WW3 predictions in the offshore boundary condition did not improve nearshore skill above buoy-only methods. A case example at Oceanside harbor shows strong sensitivity of alongshore sediment transport predictions to the different offshore boundary conditions. Despite this uncertainty in alongshore transport magnitude, alongshore gradients in transport (e.g. the location of model accretion and erosion zones) are determined by the local bathymetry, and are similar for all predictions.
Modeling lead concentration in drinking water of residential plumbing pipes and hot water tanks.
Chowdhury, Shakhawat; Kabir, Fayzul; Mazumder, Mohammad Abu Jafar; Zahir, Md Hasan
2018-09-01
Drinking water is a potential source of exposure to lead (Pb), which can pose risks to humans. Regulatory agencies often monitor Pb in water treatment plants (WTP) and/or water distribution systems (WDS). However, people are exposed to tap water inside the house, and water may stay in the plumbing premise for several hours prior to reaching the tap. Depending on the stagnation period and the plumbing premise, concentrations of Pb in tap water can be significantly higher than in the WDS, leading to higher intakes of Pb than the values from the WDS or WTP would indicate. In this study, concentrations of Pb and water quality parameters were investigated in the WDS, plumbing pipes (PP) and hot water tanks (HWT) for 7 months. The samples were collected and analyzed on a bi-weekly basis, 7 times per day. Several linear, non-linear and neural network models were developed for predicting Pb in PP and HWT. The models were validated using additional data, which were not used for model development. The concentrations of Pb in PP and HWT were 1-1.17 and 1-1.21 times the Pb in the WDS, respectively. Concentrations of Pb were higher in summer than in winter. The models showed moderate to excellent performance (R2 = 0.85-0.99) in predicting Pb in PP and HWT. The correlation coefficients (r) with the validation data were in the ranges of 0.76-0.90 and 0.97-0.99 for PP and HWT, respectively. The models can be used for predicting Pb in tap water, which can assist in better protecting human health. Copyright © 2018. Published by Elsevier B.V.
Zhang, Xueying; Chu, Yiyi; Wang, Yuxuan; Zhang, Kai
2018-08-01
The regulatory monitoring data of particulate matter with an aerodynamic diameter <2.5 μm (PM2.5) in Texas have limited spatial and temporal coverage. The purpose of this study is to estimate the ground-level PM2.5 concentrations on a daily basis using satellite-retrieved Aerosol Optical Depth (AOD) in the state of Texas. We obtained the AOD values at 1-km resolution generated through the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm based on the images retrieved from the Moderate Resolution Imaging Spectroradiometer (MODIS) satellites. We then developed mixed-effects models based on AODs, land use features, geographic characteristics, and weather conditions, with day-specific as well as site-specific random effects, to estimate the PM2.5 concentrations (μg/m3) in the state of Texas during the period 2008-2013. The mixed-effects models' performance was evaluated using the coefficient of determination (R2) and the square root of the mean squared prediction error (RMSPE) from ten-fold cross-validation, which randomly selected 90% of the observations for training and 10% of the observations for assessing the models' true prediction ability. Mixed-effects regression models showed good prediction performance (R2 values from 10-fold cross-validation: 0.63-0.69). The model performance varied by region and study year; the East region of Texas and the year 2009 presented relatively higher prediction precision (R2: 0.62 for the East region; R2: 0.69 for the year 2009). The PM2.5 concentrations generated through our developed models at 1-km grid cells in the state of Texas showed a decreasing trend from 2008 to 2013 and a higher reduction of predicted PM2.5 in more polluted areas. Our findings suggest that mixed-effects regression models developed based on MAIAC AOD are a feasible approach to predict ground-level PM2.5 in Texas. 
Predicted PM 2.5 concentrations at the 1-km resolution on a daily basis can be used for epidemiological studies to investigate short- and long-term health impact of PM 2.5 in Texas. Copyright © 2017 Elsevier B.V. All rights reserved.
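The ten-fold cross-validation scheme described above (train on a random 90%, predict the held-out 10%, summarise with R² and RMSPE) can be sketched as follows. The plain least-squares fit here stands in for the paper's mixed-effects model, so this is an illustrative skeleton under that simplifying assumption, not the authors' implementation:

```python
import numpy as np

def ten_fold_cv(X, y, seed=0):
    """Ten-fold cross-validation of a simple linear model: each fold
    trains on ~90% of the observations and predicts the held-out ~10%.
    Returns out-of-sample R^2 and RMSPE over all held-out predictions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, 10)
    preds = np.empty(len(y), dtype=float)
    for k in range(10):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(10) if j != k])
        # ordinary least squares with an intercept column
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        preds[test] = Xte @ beta
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmspe = np.sqrt(np.mean((y - preds) ** 2))
    return r2, rmspe
```

Because every observation is predicted exactly once by a model that never saw it, the resulting R² reflects true out-of-sample prediction ability rather than in-sample fit.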
NASA Astrophysics Data System (ADS)
Ironside, K. E.; Cole, K. L.; Eischeid, J. K.; Garfin, G. M.; Shaw, J. D.; Cobb, N. S.
2008-12-01
Ponderosa pine (Pinus ponderosa var. scopulorum) is the dominant conifer in higher elevation regions of the southwestern United States. Because this species is so prominent, southwestern montane ecosystems will be significantly altered if this species is strongly affected by future climate changes. These changes could be highly challenging for land management agencies. In order to model the consequences of future climates, 20th Century recruitment events and mortality for ponderosa pine were characterized using measures of seasonal water balance (precipitation - potential evapotranspiration). These relationships, assuming they will remain unchanged, were then used to predict 21st Century changes in ponderosa pine occurrence in the southwest. Twenty-one AR4 IPCC General Circulation Model (GCM) A1B simulation results were ranked on their ability to simulate the later 20th Century (1950-2000 AD) precipitation seasonality, spatial patterns, and quantity in the western United States. Among the top ranked GCMs, five were selected for downscaling to a 4 km grid that represented a range in predictions in terms of changes in water balance. Predicted decadal changes in southwestern ponderosa pine for the 21st Century for these five climate change scenarios were calculated using a multiple quadratic logistic regression model. Similar models of other western tree species (Pinus edulis, Yucca brevifolia) predicted severe contractions, especially in the southern half of their ranges. However, the results for Ponderosa pine suggested future expansions throughout its range to both higher and lower elevations, as well as very significant expansions northward.
Using exposure prediction tools to link exposure and ...
A few different exposure prediction tools were evaluated for use in the new in vitro-based safety assessment paradigm using di-2-ethylhexyl phthalate (DEHP) and dibutyl phthalate (DnBP) as case compounds. Daily intake of each phthalate was estimated using both high-throughput (HT) prediction models such as the HT Stochastic Human Exposure and Dose Simulation model (SHEDS-HT) and the ExpoCast heuristic model and non-HT approaches based on chemical specific exposure estimations in the environment in conjunction with human exposure factors. Reverse dosimetry was performed using a published physiologically based pharmacokinetic (PBPK) model for phthalates and their metabolites to provide a comparison point. Daily intakes of DEHP and DnBP were estimated based on the urinary concentrations of their respective monoesters, mono-2-ethylhexyl phthalate (MEHP) and monobutyl phthalate (MnBP), reported in NHANES (2011–2012). The PBPK-reverse dosimetry estimated daily intakes at the 50th and 95th percentiles were 0.68 and 9.58 μg/kg/d and 0.089 and 0.68 μg/kg/d for DEHP and DnBP, respectively. For DEHP, the estimated median from PBPK-reverse dosimetry was about 3.6-fold higher than the ExpoCast estimate (0.68 and 0.18 μg/kg/d, respectively). For DnBP, the estimated median was similar to that predicted by ExpoCast (0.089 and 0.094 μg/kg/d, respectively). The SHEDS-HT prediction of DnBP intake from consumer product pathways alone was higher at 0.67 μg/kg/d. The PBPK-reve
Impacts of Daily Bag Limit Reductions on Angler Effort in Wisconsin Walleye Lakes
Beard, T.D.; Cox, S.P.; Carpenter, S.R.
2003-01-01
Angler effort is an important factor affecting recreational fisheries. However, angler responses are rarely incorporated into recreational fisheries regulations or predictions. Few have attempted to examine how daily bag limit regulations affect total angling pressure and subsequent stock densities. Our paper develops a theoretical basis for predicting angler effort and harvest rate based on stock densities and bag limit regulations. We examined data from a management system that controls the total exploitation of walleyes Sander vitreus (formerly Stizostedion vitreum) in northern Wisconsin lakes and compared these empirical results with the predictions from a theoretical effort and harvest rate response model. The data indicated that higher general angler effort occurs on lakes regulated with a 5-walleye daily limit than on lakes regulated with either a 2- or 3-walleye daily limit. General walleye catch rates were lower on lakes with a 5-walleye limit than on lakes with either a 2- or 3-walleye daily limit. An effort response model predicted a logarithmic relationship between angler effort and adult walleye density and that an index of attractiveness would be greater on lakes with high bag limits. Predictions from the harvest rate model with constant walleye catchability indicated that harvest rates increased nonlinearly with increasing density. When the effort model was fitted to data from northern Wisconsin, we found higher lake attractiveness at 5-walleye-limit lakes. We conclude that different groups of anglers respond differently to bag limit changes and that reliance on daily bag limits may not be sufficient to maintain high walleye densities in some lakes in this region.
Størset, Elisabet; Holford, Nick; Hennig, Stefanie; Bergmann, Troels K; Bergan, Stein; Bremer, Sara; Åsberg, Anders; Midtvedt, Karsten; Staatz, Christine E
2014-09-01
The aim was to develop a theory-based population pharmacokinetic model of tacrolimus in adult kidney transplant recipients and to externally evaluate this model and two previous empirical models. Data were obtained from 242 patients with 3100 tacrolimus whole blood concentrations. External evaluation was performed by examining model predictive performance using Bayesian forecasting. Pharmacokinetic disposition parameters were estimated based on tacrolimus plasma concentrations, predicted from whole blood concentrations, haematocrit and literature values for tacrolimus binding to red blood cells. Disposition parameters were allometrically scaled to fat free mass. Tacrolimus whole blood clearance/bioavailability standardized to haematocrit of 45% and fat free mass of 60 kg was estimated to be 16.1 l h⁻¹ [95% CI 12.6, 18.0 l h⁻¹]. Tacrolimus clearance was 30% higher (95% CI 13, 46%) and bioavailability 18% lower (95% CI 2, 29%) in CYP3A5 expressers compared with non-expressers. An Emax model described decreasing tacrolimus bioavailability with increasing prednisolone dose. The theory-based model was superior to the empirical models during external evaluation, displaying a median prediction error of −1.2% (95% CI −3.0, 0.1%). Based on simulation, Bayesian forecasting led to 65% (95% CI 62, 68%) of patients achieving a tacrolimus average steady-state concentration within a suggested acceptable range. A theory-based population pharmacokinetic model was superior to two empirical models for prediction of tacrolimus concentrations and seemed suitable for Bayesian prediction of tacrolimus doses early after kidney transplantation.
Dyjas, Oliver; Ulrich, Rolf
2014-01-01
In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.
Multi-model ensemble hydrologic prediction using Bayesian model averaging
NASA Astrophysics Data System (ADS)
Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh
2007-05-01
A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split-sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble.
The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
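The core weighting idea above can be sketched numerically. This is a deliberately simplified one-shot version, weighting each ensemble member by its Gaussian likelihood over a training period, not the full expectation-maximisation algorithm of the BMA literature, which also estimates the predictive variance jointly with the weights:

```python
import numpy as np

def bma_weights(member_preds, obs):
    """Weight each ensemble member by its Gaussian log-likelihood on a
    training period; better-performing members receive higher weights."""
    resid = np.asarray(member_preds) - np.asarray(obs)   # (n_members, n_times)
    sigma2 = resid.var(axis=1) + 1e-12                   # per-member error variance
    loglik = -0.5 * np.sum(
        np.log(2 * np.pi * sigma2[:, None]) + resid ** 2 / sigma2[:, None],
        axis=1,
    )
    w = np.exp(loglik - loglik.max())                    # stabilise before normalising
    return w / w.sum()

def bma_predict(member_preds, weights):
    """Consensus prediction: likelihood-weighted average of member predictions."""
    return weights @ np.asarray(member_preds)
```

With a long enough training record, a clearly superior member dominates the consensus, which is the behaviour the abstract describes for skillful predictions receiving higher weights.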
We developed a spatially-explicit, flexible 3-parameter habitat suitability model that can be used to identify and predict areas at higher risk for non-native dwarf eelgrass (Zostera japonica) invasion. The model uses simple environmental parameters (depth, nearshore slope, and s...
Guede Rojas, Francisco; Chirosa Ríos, Luis Javier; Fuentealba Urra, Sergio; Vergara Ríos, César; Ulloa Díaz, David; Campos Jara, Christian; Barbosa González, Paola; Cuevas Aburto, Jesualdo
2017-01-01
There is no conclusive evidence about the association between physical fitness (PF) and health-related quality of life (HRQOL) in older adults. The aim was to assess the association between PF and HRQOL in non-disabled community-dwelling Chilean older adults. One hundred and sixteen subjects participated in the study. PF was assessed using the Senior Fitness Test (SFT) and hand grip strength (HGS). HRQOL was assessed using the eight dimensions provided by the SF-12v2 questionnaire. Binary multivariate logistic regression models were fitted considering the potential influence of confounding variables. Non-adjusted models indicated that subjects with better performance in the arm curl test (ACT) were more likely to score higher on the vitality dimension (OR > 1), and those with higher HGS were more likely to score higher on physical functioning, bodily pain, vitality and mental health (OR > 1). The adjusted models consistently showed that ACT and HGS predicted a favorable perception of the vitality and mental health dimensions, respectively (OR > 1). HGS and ACT have predictive value for certain dimensions of HRQOL.
[Application of ARIMA model to predict number of malaria cases in China].
Hui-Yu, H; Hua-Qin, S; Shun-Xian, Z; Lin, A I; Yan, L U; Yu-Chun, C; Shi-Zhu, L I; Xue-Jiao, T; Chun-Li, Y; Wei, H U; Jia-Xu, C
2017-08-15
Objective To study the application of the autoregressive integrated moving average (ARIMA) model to predict the monthly reported malaria cases in China, so as to provide a reference for the prevention and control of malaria. Methods SPSS 24.0 software was used to construct ARIMA models based on the monthly reported malaria cases for the time series 2006-2015 and 2011-2015, respectively. The data on malaria cases from January to December 2016 were used as validation data to compare the accuracy of the two ARIMA models. Results The models of the monthly reported cases of malaria in China were ARIMA(2,1,1)(1,1,0)₁₂ and ARIMA(1,0,0)(1,1,0)₁₂, respectively. The comparison between the predictions of the two models and the actual malaria case counts showed that the ARIMA model based on the data of 2011-2015 had a higher forecasting accuracy than the model based on the data of 2006-2015. Conclusion The establishment and prediction of an ARIMA model is a dynamic process that needs to be adjusted continually according to the accumulated data; in addition, major changes in the epidemic characteristics of infectious diseases must be considered.
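A full seasonal ARIMA fit would normally use a dedicated maximum-likelihood routine (e.g. the SARIMAX implementation in statsmodels). The numpy-only sketch below illustrates just the mechanics the ARIMA(1,0,0)(1,1,0)₁₂ notation implies for a monthly series: seasonally difference at lag 12, regress on the lag-1 and lag-12 terms by least squares, and forecast one step ahead on the original scale:

```python
import numpy as np

def fit_seasonal_ar(y, s=12):
    """Least-squares stand-in for ARIMA(1,0,0)(1,1,0)_12: seasonal
    differencing at lag s, then AR terms at lags 1 and s."""
    y = np.asarray(y, dtype=float)
    d = y[s:] - y[:-s]                       # seasonal differencing, lag 12
    t = np.arange(s, len(d))                 # rows where both lags exist
    X = np.column_stack([np.ones(len(t)), d[t - 1], d[t - s]])
    beta, *_ = np.linalg.lstsq(X, d[t], rcond=None)
    return beta, d

def forecast_next(y, beta, d, s=12):
    """One-step-ahead forecast, undoing the seasonal difference."""
    d_next = beta[0] + beta[1] * d[-1] + beta[2] * d[-s]
    return y[-s] + d_next
```

On a series with a stable annual cycle plus trend, the seasonal difference removes the cycle and the AR regression captures what remains, which is the same division of labour the full model performs with proper likelihood estimation.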
Rubio-Álvarez, Ana; Molina-Alarcón, Milagros; Arias-Arias, Ángel; Hernández-Martínez, Antonio
2018-03-01
Postpartum haemorrhage is one of the leading causes of maternal morbidity and mortality worldwide. Despite the use of uterotonic agents as a preventive measure, it remains a challenge to identify those women who are at increased risk of postpartum bleeding. The aim was to develop and validate a predictive model to assess the risk of excessive bleeding in women with vaginal birth, in a retrospective cohort study at "Mancha-Centro Hospital" (Spain). The predictive model was built on a derivation cohort consisting of 2336 women between 2009 and 2011. For validation purposes, a prospective cohort of 953 women between 2013 and 2014 was employed. Women with antenatal fetal demise, multiple pregnancies and gestations under 35 weeks were excluded. We used a multivariate analysis with binary logistic regression, Ridge regression and areas under the Receiver Operating Characteristic curves to determine the predictive ability of the proposed model. There were 197 (8.43%) women with excessive bleeding in the derivation cohort and 63 (6.61%) women in the validation cohort. Predictive factors in the final model were: maternal age, primiparity, duration of the first and second stages of labour, neonatal birth weight and antepartum haemoglobin levels. The predictive ability of this model in the derivation cohort was 0.90 (95% CI: 0.85-0.93), while it remained 0.83 (95% CI: 0.74-0.92) in the validation cohort. This predictive model proved to have an excellent predictive ability in the derivation cohort, and its validation in a later population equally showed a good ability for prediction. The model can be employed to identify women with a higher risk of postpartum haemorrhage. Copyright © 2017 Elsevier Ltd. All rights reserved.
Prediction of Nursing Workload in Hospital.
Fiebig, Madlen; Hunstein, Dirk; Bartholomeyczik, Sabine
2018-01-01
A dissertation project at Witten/Herdecke University [1] is investigating which (nursing-sensitive) patient characteristics are suitable for predicting a higher or lower degree of nursing workload. For this research project four predictive modelling methods were selected. In a first step, support vector machines, random forests, and gradient boosting were used to identify potential predictors from the nursing-sensitive patient characteristics. The results were compared via feature importance. To predict nursing workload, the predictors identified in step 1 were modelled using multinomial logistic regression. First results from the data mining process will be presented. A prognostic determination of nursing workload can be used not only as a basis for human resource planning in hospitals, but also to respond to health policy issues.
Empirical Research of Micro-blog Information Transmission Range by Guard nodes
NASA Astrophysics Data System (ADS)
Chen, Shan; Ji, Ling; Li, Guang
2018-03-01
The prediction and evaluation of information transmission in online social networks is a challenge. Solving this issue is important for monitoring public opinion and advertising communication. First, the prediction process is described using a set language. Then, with the Sina Microblog system as the case object, the relationship between node influence and coverage rate is analyzed using the topology structure of information nodes. A nonlinear model is built by a statistical method in a specific, bounded and controlled Microblog network. It can predict the message coverage rate from guard nodes. The experimental results show that the prediction model has higher accuracy for source nodes that have lower influence in the social network, and has practical applicability.
van Klaveren, David; Steyerberg, Ewout W; Serruys, Patrick W; Kent, David M
2018-02-01
Clinical prediction models that support treatment decisions are usually evaluated for their ability to predict the risk of an outcome rather than treatment benefit: the difference between outcome risk with vs. without therapy. We aimed to define performance metrics for a model's ability to predict treatment benefit. We analyzed data of the Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery (SYNTAX) trial and of three recombinant tissue plasminogen activator trials. We assessed alternative prediction models with a conventional risk concordance statistic (c-statistic) and a novel c-statistic for benefit. We defined observed treatment benefit by the outcomes in pairs of patients matched on predicted benefit but discordant for treatment assignment. The 'c-for-benefit' represents the probability that, of two randomly chosen matched patient pairs with unequal observed benefit, the pair with greater observed benefit also has a higher predicted benefit. Compared to a model without treatment interactions, the SYNTAX score II had improved ability to discriminate treatment benefit (c-for-benefit 0.590 vs. 0.552), despite having similar risk discrimination (c-statistic 0.725 vs. 0.719). However, for the simplified stroke-thrombolytic predictive instrument (TPI) vs. the original stroke-TPI, the c-for-benefit (0.584 vs. 0.578) was similar. The proposed methodology has the potential to measure a model's ability to predict treatment benefit not captured with conventional performance metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
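Assuming patients have already been matched into treated/control pairs on predicted benefit, as the abstract describes, the 'c-for-benefit' reduces to an ordinary concordance statistic computed over pairs of pairs. The sketch below is a schematic reading of the definition, not the authors' reference code:

```python
import itertools
import numpy as np

def c_for_benefit(pred_benefit, y_treated, y_control):
    """Concordance for benefit over matched pairs. Each pair contributes
    an observed benefit = control outcome minus treated outcome
    (1 = benefit, 0 = none, -1 = harm). Among pairs of pairs with unequal
    observed benefit, count how often the pair with greater observed
    benefit also has the higher predicted benefit (ties count half)."""
    obs = np.asarray(y_control) - np.asarray(y_treated)
    pred = np.asarray(pred_benefit)
    concordant = ties = total = 0
    for i, j in itertools.combinations(range(len(obs)), 2):
        if obs[i] == obs[j]:
            continue                     # only pairs with unequal observed benefit
        total += 1
        hi, lo = (i, j) if obs[i] > obs[j] else (j, i)
        if pred[hi] > pred[lo]:
            concordant += 1
        elif pred[hi] == pred[lo]:
            ties += 1
    return (concordant + 0.5 * ties) / total
```

A value of 0.5 means predicted benefit carries no information about observed benefit; 1.0 means the ordering is perfect, mirroring the interpretation of the conventional c-statistic for risk.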
Kerckhoffs, Jules; Hoek, Gerard; Vlaanderen, Jelle; van Nunen, Erik; Messier, Kyle; Brunekreef, Bert; Gulliver, John; Vermeulen, Roel
2017-11-01
Land-use regression (LUR) models for ultrafine particles (UFP) and Black Carbon (BC) in urban areas have been developed using short-term stationary monitoring or mobile platforms in order to capture the high variability of these pollutants. However, little is known about the comparability of predictions of mobile and short-term stationary models, and especially about the validity of these models for assessing residential exposures and the robustness of model predictions developed in different campaigns. We used an electric car to collect mobile measurements (n = 5236 unique road segments) and short-term stationary measurements (3 × 30 min, n = 240) of UFP and BC in three Dutch cities (Amsterdam, Utrecht, Maastricht) in 2014-2015. Predictions of LUR models based on mobile measurements were compared to (i) measured concentrations at the short-term stationary sites, (ii) LUR model predictions based on short-term stationary measurements at 1500 random addresses in the three cities, (iii) externally obtained home outdoor measurements (3 × 24 h samples; n = 42) and (iv) predictions of a LUR model developed from a 2013 mobile campaign in two cities (Amsterdam, Rotterdam). Despite the poor model R² of 15%, the ability of mobile UFP models to predict measurements with longer averaging times increased substantially, from 36% for short-term stationary measurements to 57% for home outdoor measurements. In contrast, the mobile BC model predicted only 14% of the variation at the short-term stationary sites and also 14% at the home outdoor sites. Models based upon mobile and short-term stationary monitoring provided fairly highly correlated predictions of UFP concentrations at 1500 randomly selected addresses in the three Dutch cities (R² = 0.64). We found higher UFP predictions (by about 30%) based on mobile models as opposed to short-term model predictions and home outdoor measurements, with no clear geospatial patterns.
The mobile model for UFP was stable over different settings, as it predicted concentration levels highly correlated with predictions made by a previously developed LUR model with another spatial extent and in a different year at the 1500 random addresses (R² = 0.80). In conclusion, mobile monitoring provided robust LUR models for UFP, valid for use in epidemiological studies. Copyright © 2017 Elsevier Inc. All rights reserved.
Wang, Zichen; Li, Li; Glicksberg, Benjamin S; Israel, Ariel; Dudley, Joel T; Ma'ayan, Avi
2017-12-01
Determining the discrepancy between the chronological and physiological age of patients is central to preventative and personalized care. Electronic medical records (EMR) provide rich information about the patient's physiological state, but it is unclear whether such information can be predictive of chronological age. Here we present a deep learning model that uses vital signs and lab tests contained within the EMR of the Mount Sinai Health System (MSHS) to predict chronological age. The model is trained on 377,686 EMR from patients of ages 18-85 years old. The discrepancy between the predicted and real chronological age is then used as a proxy to estimate physiological age. Overall, the model can predict the chronological age of patients with a standard deviation error of ∼7 years. The ages of the youngest and oldest patients were more accurately predicted, while patients of ages ranging between 40 and 60 years were the least accurately predicted. Patients with the largest discrepancy between their physiological and chronological age were further inspected. The patients predicted to be significantly older than their chronological age have higher systolic blood pressure, higher cholesterol, damaged liver, and anemia. In contrast, patients predicted to be younger than their chronological age have lower blood pressure and shorter stature among other indicators; both groups display lower weight than the population average. Using information from ∼10,000 patients from the entire cohort who have also been profiled with SNP arrays, a genome-wide association study (GWAS) uncovers several novel genetic variants associated with aging. In particular, significant variants were mapped to genes known to be associated with inflammation, hypertension, lipid metabolism, height, and increased lifespan in mice. Several genes with missense mutations were identified as novel candidate aging genes.
In conclusion, we demonstrate how EMR data can be used to assess overall health via a scale that is based on deviation from the patient's predicted chronological age. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Xu, Wenbo; Jing, Shaocai; Yu, Wenjuan; Wang, Zhaoxian; Zhang, Guoping; Huang, Jianxi
2013-11-01
In this study, two high-risk debris flow areas of Sichuan Province, Panzhihua and the Liangshan Yi Autonomous Prefecture, were taken as the study areas. Using rainfall and environmental factors as predictors, and based on different prior probability combinations of debris flows, the predictions of debris flows in these areas were compared for two statistical methods: logistic regression (LR) and Bayes discriminant analysis (BDA). The results of the comprehensive analysis show that (a) with mid-range prior probabilities, the overall predicting accuracy of BDA is higher than that of LR; (b) with equal and extreme prior probabilities, the overall predicting accuracy of LR is higher than that of BDA; and (c) regional debris flow prediction models using rainfall factors alone perform worse than those that also incorporate environmental factors, and with the added information the predicting accuracies for the occurrence and non-occurrence of debris flows changed in opposite directions.
Sekiguchi, Masau; Kakugawa, Yasuo; Matsumoto, Minori; Matsuda, Takahisa
2018-01-22
Risk stratification of screened populations could help improve colorectal cancer (CRC) screening. Use of the modified Asia-Pacific Colorectal Screening (APCS) score has been proposed in the Asia-Pacific region. This study was performed to build a new, useful scoring model for CRC screening. Data were reviewed from 5218 asymptomatic Japanese individuals who underwent their first screening colonoscopy. Multivariate logistic regression was used to investigate risk factors for advanced colorectal neoplasia (ACN), and a new scoring model for the prediction of ACN was developed based on the results. The discriminatory capability of the new model and the modified APCS score were assessed and compared. Internal validation was also performed. ACN was detected in 225 participants. An 8-point scoring model for the prediction of ACN was developed using five independent risk factors for ACN (male sex, older age, presence of two or more first-degree relatives with CRC, body mass index of > 22.5 kg/m², and smoking history of > 18.5 pack-years). The prevalence of ACN was 1.6% (34/2172), 5.3% (127/2419), and 10.2% (64/627) in participants with scores of < 3, ≥ 3 to < 5, and ≥ 5, respectively. The c-statistic of the scoring model was 0.70 (95% confidence interval, 0.67-0.73) in both the development and internal validation sets, and this value was higher than that of the modified APCS score [0.68 (95% confidence interval, 0.65-0.71), P = 0.03]. We built a new simple scoring model for prediction of ACN in a Japanese population that could stratify the screened population into low-, moderate-, and high-risk groups.
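The abstract gives the risk-group cut-offs (< 3, ≥ 3 to < 5, ≥ 5 points) but not the per-factor point allocation, so the point values in this sketch are placeholders chosen only to sum to the reported 8-point maximum; the stratification logic itself follows the abstract:

```python
# Placeholder points for the five risk factors: the published allocation is
# not stated in the abstract, only that the total score ranges up to 8.
POINTS = {
    "male_sex": 1,
    "older_age": 2,
    "family_history_crc": 2,   # >= 2 first-degree relatives with CRC
    "bmi_over_22_5": 1,        # BMI > 22.5 kg/m^2
    "smoking_over_18_5": 2,    # > 18.5 pack-years
}

def risk_group(factors):
    """Map a set of present risk factors to the low/moderate/high strata
    reported in the abstract (< 3, 3 to < 5, >= 5 points)."""
    score = sum(POINTS[f] for f in factors)
    if score < 3:
        return "low"
    if score < 5:
        return "moderate"
    return "high"
```

Such additive point scores trade a little discriminatory power (here, c-statistic 0.70) for something clinicians can compute at the bedside without software.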
NASA Technical Reports Server (NTRS)
Siemiginowska, Aneta
2001-01-01
The predicted counts for the ASCA observation were much higher than the actually observed counts for the quasar. However, there are three weak hard X-ray sources in the GIS field. We are adding them to the source counts in the modeling of the hard X-ray background. The work is in progress. We have published a paper in Ap.J. on the luminosity function and quasar evolution. Based on the theory described in this paper, we are predicting the number of sources and their contribution to the X-ray background at different redshifts. These model predictions will be compared to the observed data in the final paper.
Predictors of relational continuity in primary care: patient, provider and practice factors
2013-01-01
Background Continuity is a fundamental tenet of primary care, and highly valued by patients; it may also improve patient outcomes and lower cost of health care. It is thus important to investigate factors that predict higher continuity. However, to date, little is known about the factors that contribute to continuity. The purpose of this study was to analyse practice, provider and patient predictors of continuity of care in a large sample of primary care practices in Ontario, Canada. Another goal was to assess whether there was a difference in the continuity of care provided by different models of primary care. Methods This study is part of a larger cross-sectional study of 137 primary care practices, their providers and patients. Several performance measures were evaluated; this paper focuses on relational continuity. Four items from the Primary Care Assessment Tool were used to assess relational continuity from the patient’s perspective. Results Multilevel modeling revealed several patient factors that predicted continuity. Older patients and those with chronic disease reported higher continuity, while those who lived in rural areas, had higher education, poorer mental health status, no regular provider, and who were employed reported lower continuity. Providers with more years since graduation had higher patient-reported continuity. Several practice factors predicted lower continuity: number of MDs, nurses, opening on weekends, and having 24 hours a week or less on-call. Analyses that compared continuity across models showed that, in general, Health Service Organizations had better continuity than other models, even when adjusting for patient demographics. Conclusions Some patients with greater health needs experience greater continuity of care. However, the lower continuity reported by those with mental health issues and those who live in rural areas is concerning.
Furthermore, our finding that smaller practices have higher continuity suggests that physicians and policy makers need to consider the fact that ‘bigger is not always necessarily better’. PMID:23725212
Predictors of relational continuity in primary care: patient, provider and practice factors.
Kristjansson, Elizabeth; Hogg, William; Dahrouge, Simone; Tuna, Meltem; Mayo-Bruinsma, Liesha; Gebremichael, Goshu
2013-05-31
Continuity is a fundamental tenet of primary care, and highly valued by patients; it may also improve patient outcomes and lower cost of health care. It is thus important to investigate factors that predict higher continuity. However, to date, little is known about the factors that contribute to continuity. The purpose of this study was to analyse practice, provider and patient predictors of continuity of care in a large sample of primary care practices in Ontario, Canada. Another goal was to assess whether there was a difference in the continuity of care provided by different models of primary care. This study is part of the larger a cross-sectional study of 137 primary care practices, their providers and patients. Several performance measures were evaluated; this paper focuses on relational continuity. Four items from the Primary Care Assessment Tool were used to assess relational continuity from the patient's perspective. Multilevel modeling revealed several patient factors that predicted continuity. Older patients and those with chronic disease reported higher continuity, while those who lived in rural areas, had higher education, poorer mental health status, no regular provider, and who were employed reported lower continuity. Providers with more years since graduation had higher patient-reported continuity. Several practice factors predicted lower continuity: number of MDs, nurses, opening on weekends, and having 24 hours a week or less on-call. Analyses that compared continuity across models showed that, in general, Health Service Organizations had better continuity than other models, even when adjusting for patient demographics. Some patients with greater health needs experience greater continuity of care. However, the lower continuity reported by those with mental health issues and those who live in rural areas is concerning. 
Furthermore, our finding that smaller practices have higher continuity suggests that physicians and policy makers need to consider the fact that 'bigger is not necessarily better'.
Boelaert, Marleen; Matlashewski, Greg; Mondal, Dinesh; Arana, Byron; Kroeger, Axel; Olliaro, Piero
2016-01-01
Background: As Bangladesh, India and Nepal progress towards visceral leishmaniasis (VL) elimination, it is important to understand the role of asymptomatic Leishmania infection (ALI), VL treatment relapse and post-kala-azar dermal leishmaniasis (PKDL) in transmission. Methodology/Principal Findings: We reviewed the evidence on ALI, relapse and PKDL systematically. We searched multiple databases for studies on the burden, risk factors, biomarkers, natural history, and infectiveness of ALI, PKDL and relapse. After screening 292 papers, 98 were included, covering the years 1942 through 2016. Studies of ALI, PKDL and relapse lacked a reference standard and an appropriate biomarker. The prevalence of ALI was 4–17-fold that of VL. The risk of ALI was higher in VL case contacts. Most infections remained asymptomatic or resolved spontaneously. The proportion of ALI that progressed to VL disease within a year was 1.5–23%, and was higher amongst those with high antibody titres. The natural history of PKDL showed variability; 3.8–28.6% of cases had no past history of VL treatment. The infectiveness of PKDL was 32–53%. The risk of VL relapse was higher with HIV co-infection. Modelling studies predicted a range of scenarios. One model predicted that VL elimination was unlikely in the long term with early diagnosis. Another model estimated that ALI contributed 82% of overall transmission, VL 10% and PKDL 8%. Yet another model predicted that VL cases were the main driver of transmission. Different models predicted VL elimination if sandfly density was reduced by 67% through killing sandflies or by 79% through reducing their breeding sites, or with 4–6 years of optimal IRS or 10 years of sub-optimal IRS, and only in low-endemic settings. Conclusion/Significance: There is a need for xenodiagnostic and longitudinal studies to understand the potential of ALI and PKDL as reservoirs of infection. PMID:27490264
Statistical Prediction of Sea Ice Concentration over Arctic
NASA Astrophysics Data System (ADS)
Kim, Jongho; Jeong, Jee-Hoon; Kim, Baek-Min
2017-04-01
In this study, a statistical method that predicts sea ice concentration (SIC) over the Arctic is developed. We first calculate the Season-reliant Empirical Orthogonal Functions (S-EOFs) of monthly Arctic SIC from Nimbus-7 SMMR and DMSP SSM/I-SSMIS Passive Microwave Data, which contain the seasonal cycles (12 months long) of dominant SIC anomaly patterns. Then, the current SIC state index is determined by projecting the observed SIC anomalies for the latest 12 months onto the S-EOFs. Assuming the current SIC anomalies follow the spatio-temporal evolution in the S-EOFs, we project the future (up to 12 months) SIC anomalies by multiplying the state index by the corresponding S-EOF and summing over modes. The predictive skill is assessed by hindcast experiments initialized at all months for 1980-2010. When the predictive skills of the statistical model and NCEP CFS v2 are compared, the statistical model shows higher skill in predicting sea ice concentration and extent.
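The projection-and-reconstruction step described in this abstract can be sketched as follows. This is a minimal illustration with synthetic arrays standing in for the S-EOFs and the observed anomalies; the array shapes, the least-squares normalisation of the projection, and all variable names are our assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_grid = 4, 500
# seof: (mode, month-of-cycle, grid point) seasonal EOF patterns (12-month cycles)
seof = rng.standard_normal((n_modes, 12, n_grid))
# anom: observed SIC anomalies for the latest 12 months
anom = rng.standard_normal((12, n_grid))

# State index: project the 12-month anomaly history onto each S-EOF,
# normalised by each mode's energy (a least-squares fit per mode).
state_index = (np.einsum('mtg,tg->m', seof, anom)
               / np.einsum('mtg,mtg->m', seof, seof))

# Forecast: assume anomalies follow the S-EOF evolution and reconstruct the
# next 12-month cycle as the index-weighted sum of the seasonal patterns.
forecast = np.einsum('m,mtg->tg', state_index, seof)  # (12, n_grid)
print(forecast.shape)
```

In practice the forecast for lead month k would be read off the appropriate phase of the reconstructed cycle; here the whole 12-month reconstruction is returned.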
Mattei, Lorenza; Di Puccio, Francesca; Joyce, Thomas J; Ciulli, Enrico
2015-03-01
In the present study, numerical and experimental wear investigations of reverse total shoulder arthroplasties (RTSAs) were combined in order to estimate specific wear coefficients, currently not available in the literature. A wear model previously developed by the authors for metal-on-plastic hip implants was adapted to RTSAs and applied in two directions: first, to evaluate specific wear coefficients for RTSAs from experimental results and, second, to predict wear distribution. In both cases, the Archard wear law (AR) and the wear law of UHMWPE (PE) were considered, assuming four different k functions. The results indicated that both wear laws predict higher wear coefficients for RTSAs than for hip implants, particularly the AR law, whose k values are more than twofold the hip values. Such differences can significantly affect the results of predictive wear models for RTSAs when non-specific wear coefficients are used. Moreover, the wear maps simulated with the two laws are markedly different, although they provide the same wear volume. A higher wear depth (+51%) is obtained with the AR law, located at the dome of the cup, while with the PE law the most worn region is close to the edge. Taking advantage of the linear trend of experimental volume losses, the wear coefficients obtained with the AR law should be valid despite the geometry update having been neglected in the model. Copyright © 2015 Elsevier Ltd. All rights reserved.
Valuing river characteristics using combined site choice and participation travel cost models.
Johnstone, C; Markandya, A
2006-08-01
This paper presents new welfare measures for marginal changes in river quality in selected English rivers. The river quality indicators used include chemical, biological and habitat-level attributes. Economic values for recreational use of three types of river (upland, lowland and chalk) are presented. A survey of anglers was carried out and, using these data, two travel cost models were estimated, one to predict the number of trips and the other to predict angling site choice. These models were then linked to estimate the welfare associated with marginal changes in river quality using the participation levels as estimated in the trip prediction model. The model results showed that higher flow rates, biological quality and nutrient pollution levels affect site choice and influence the likelihood of a fishing trip. Consumer surplus values per trip for a 10% change in river attributes range from £0.04 to £3.93 (2001 prices) depending on the attribute.
A quantitative model of honey bee colony population dynamics.
Khoury, David S; Myerscough, Mary R; Barron, Andrew B
2011-04-18
Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained above this threshold, rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem.
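The threshold behaviour described above can be illustrated with a simplified two-compartment (hive bee / forager) model. The parameter values and the social-inhibition recruitment function below are illustrative assumptions in the spirit of the abstract, not the authors' published equations; only the qualitative stable-vs-collapse contrast is the point.

```python
# Hedged sketch: hive bees H are recruited to foraging at a rate that falls as the
# forager fraction rises (social inhibition); foragers F die at rate m per day.
def simulate(m, days=500, dt=0.1):
    L, w = 2000.0, 27000.0          # max eclosion rate; brood saturation constant (assumed)
    alpha, sigma = 0.25, 0.75       # recruitment parameters (assumed)
    H, F = 16000.0, 8000.0          # initial hive bees and foragers
    for _ in range(int(days / dt)):
        N = H + F
        R = max(alpha - sigma * F / N, 0.0)   # recruitment rate, reduced by forager share
        dH = L * N / (N + w) - H * R          # eclosion minus recruitment
        dF = H * R - m * F                    # recruitment minus forager deaths
        H, F = H + dt * dH, F + dt * dF       # forward Euler step
    return H + F                               # final colony size

# Below some critical death rate the colony settles to a stable size;
# above it, no positive equilibrium exists and the population collapses.
print(simulate(m=0.15), simulate(m=0.60))
```

With these (assumed) parameters the low death rate yields a stable colony of roughly the starting size, while the high death rate drives the population toward zero, matching the abstract's threshold prediction qualitatively.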
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms can take clustered or nested dataset structures into account, although such structures are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. GLMM trees also show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
Beran, Gregory J O; Hartman, Joshua D; Heit, Yonaton N
2016-11-15
Molecular crystals occur widely in pharmaceuticals, foods, explosives, organic semiconductors, and many other applications. Thanks to substantial progress in electronic structure modeling of molecular crystals, attention is now shifting from basic crystal structure prediction and lattice energy modeling toward the accurate prediction of experimentally observable properties at finite temperatures and pressures. This Account discusses how fragment-based electronic structure methods can be used to model a variety of experimentally relevant molecular crystal properties. First, it describes the coupling of fragment electronic structure models with quasi-harmonic techniques for modeling the thermal expansion of molecular crystals, and what effects this expansion has on thermochemical and mechanical properties. Excellent agreement with experiment is demonstrated for the molar volume, sublimation enthalpy, entropy, and free energy, and the bulk modulus of phase I carbon dioxide when large basis second-order Møller-Plesset perturbation theory (MP2) or coupled cluster theories (CCSD(T)) are used. In addition, physical insight is offered into how neglect of thermal expansion affects these properties. Zero-point vibrational motion leads to an appreciable expansion in the molar volume; in carbon dioxide, it accounts for around 30% of the overall volume expansion between the electronic structure energy minimum and the molar volume at the sublimation point. In addition, because thermal expansion typically weakens the intermolecular interactions, neglecting thermal expansion artificially stabilizes the solid and causes the sublimation enthalpy to be too large at higher temperatures. Thermal expansion also frequently weakens the lower-frequency lattice phonon modes; neglecting thermal expansion causes the entropy of sublimation to be overestimated. 
Interestingly, the sublimation free energy is less significantly affected by neglecting thermal expansion because the systematic errors in the enthalpy and entropy cancel somewhat. Second, because solid state nuclear magnetic resonance (NMR) plays an increasingly important role in molecular crystal studies, this Account discusses how fragment methods can be used to achieve higher-accuracy chemical shifts in molecular crystals. Whereas widely used plane wave density functional theory models are largely restricted to generalized gradient approximation (GGA) functionals like PBE in practice, fragment methods allow the routine use of hybrid density functionals with only modest increases in computational cost. In extensive molecular crystal benchmarks, hybrid functionals like PBE0 predict chemical shifts with 20-30% higher accuracy than GGAs, particularly for ¹H, ¹³C, and ¹⁵N nuclei. Due to their higher sensitivity to polarization effects, ¹⁷O chemical shifts prove slightly harder to predict with fragment methods. Nevertheless, the fragment model results are still competitive with those from GIPAW. The improved accuracy achievable with fragment approaches and hybrid density functionals increases discrimination between different potential assignments of individual shifts or crystal structures, which is critical in NMR crystallography applications. This higher accuracy and greater discrimination are highlighted in application to the solid state NMR of different acetaminophen and testosterone crystal forms.
Risk Prediction Models for Other Cancers or Multiple Sites
Developing statistical models that estimate the probability of developing multiple other cancers over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling on behavioral changes to decrease risk.
van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre
2017-09-01
Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including an assessment of whether this validation targets the transferability or the reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it suggests a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between the training and validation cohorts for one of the three tested models (area under the receiver operating curve (AUC) of the cohort differences model: 0.85), signaling that this validation leans towards transferability. Two of the three models had a lower AUC at validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training and validation cohorts) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
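The core idea of a cohort differences model can be sketched generically: score how well any classifier separates training-cohort from validation-cohort patients, and read the AUC as a measure of cohort difference. The data and the single-feature "classifier" below are invented for illustration; the paper's actual model and features are not restated in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
train       = rng.normal(0.0, 1.0, 300)  # a patient feature in the training cohort
similar_val = rng.normal(0.0, 1.0, 300)  # validation cohort with the same case mix
shifted_val = rng.normal(1.5, 1.0, 300)  # validation cohort with a different case mix

def auc(pos, neg):
    """Rank-based AUC: probability a random 'validation' score exceeds a
    random 'training' score (ties counted half)."""
    pos, neg = np.asarray(pos), np.asarray(neg)
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

# AUC near 0.5: cohorts are alike, so validation tests reproducibility.
# AUC well above 0.5: cohorts differ, so validation tests transferability.
print(round(auc(similar_val, train), 2), round(auc(shifted_val, train), 2))
```

In the study a full multivariable model plays the role of this toy one-feature score, but the interpretation of its AUC is the same.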
Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research
Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi
2016-01-01
The effect of traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods; however, all of them have shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probability of the upstream roads of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton Interior-Point Method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model obtains the optimal parameter values faster and has higher prediction accuracy, so it can be used for real-time traffic flow prediction. PMID:27872637
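A minimal numeric illustration of the transfer-probability idea (all flows and probabilities below are invented, and the paper's full flow equation and parameter estimation are not reproduced): the flow entering the target road at the next time step is the sum of upstream flows, each weighted by the probability that a vehicle transfers to the target road.

```python
import numpy as np

# Flows on three upstream links at time t (vehicles per interval) - illustrative.
upstream_flow = np.array([120.0, 80.0, 45.0])
# Estimated probability that a vehicle on each upstream link turns onto the target road.
transfer_prob = np.array([0.50, 0.30, 0.20])

# Predicted inflow to the target road at time t+1.
predicted_flow = float(upstream_flow @ transfer_prob)
print(predicted_flow)  # 120*0.5 + 80*0.3 + 45*0.2 = 93.0
```

In the paper the transfer probabilities themselves are fitted (via the Newton Interior-Point Method) rather than fixed as here.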
Cruise noise of the 2/9th scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A
NASA Technical Reports Server (NTRS)
Dittmar, James H.; Stang, David B.
1987-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction or, leveling out at high helical tip Mach numbers, points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.
A strategy to apply machine learning to small datasets in materials science
NASA Astrophysics Data System (ADS)
Zhang, Ying; Ling, Chen
2018-12-01
There is growing interest in applying machine learning techniques in the research of materials science. However, although it is recognized that materials datasets are typically smaller and sometimes more diverse than those in other fields, the influence of the availability of materials data on training machine learning models has not yet been studied, which prevents the establishment of accurate predictive rules using small materials datasets. Here we analyzed the fundamental interplay between the availability of materials data and the predictive capability of machine learning models. Instead of affecting model precision directly, the effect of data size is mediated by the degrees of freedom (DoF) of the model, resulting in an association between precision and DoF. The appearance of this precision-DoF association signals underfitting and is characterized by a large bias of prediction, which consequently restricts accurate prediction in unknown domains. We proposed incorporating a crude estimation of the property in the feature space to establish ML models using small materials datasets, which increases the accuracy of prediction without the cost of higher DoF. In three case studies of predicting the band gap of binary semiconductors, lattice thermal conductivity, and elastic properties of zeolites, the integration of crude estimation effectively boosted the predictive capability of machine learning models to state-of-the-art levels, demonstrating the generality of the proposed strategy for constructing accurate machine learning models using small materials datasets.
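The strategy can be sketched with synthetic data: a crude, biased estimate of the target property is appended as an extra feature, so a low-DoF linear model trained on few samples only has to learn a correction rather than the full property. The data-generating functions and sizes below are our own toy example, not the paper's materials data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test = 20, 200                  # deliberately small training set
x = rng.uniform(0, 1, (n_train + n_test, 2))
true = 3.0 * x[:, 0] ** 2 + x[:, 1]        # "true" property (nonlinear in x0)
crude = 2.5 * x[:, 0] ** 2                 # crude estimate: right physics, wrong scale

def fit_predict(features):
    """Ordinary least squares on the training rows, predictions on the test rows."""
    A = np.column_stack([features[:n_train], np.ones(n_train)])
    coef, *_ = np.linalg.lstsq(A, true[:n_train], rcond=None)
    B = np.column_stack([features[n_train:], np.ones(n_test)])
    return B @ coef

err_plain = np.abs(fit_predict(x) - true[n_train:]).mean()
err_crude = np.abs(fit_predict(np.column_stack([x, crude])) - true[n_train:]).mean()
print(err_plain > err_crude)  # the crude-estimate feature lowers the test error
```

Because the crude estimate already carries the nonlinearity, the augmented linear model fits the residual almost exactly here; real materials properties would leave a larger, but still reduced, error.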
Zhou, Kun; Gao, Chun-Fang; Zhao, Yun-Peng; Liu, Hai-Lin; Zheng, Rui-Dan; Xian, Jian-Chun; Xu, Hong-Tao; Mao, Yi-Min; Zeng, Min-De; Lu, Lun-Gen
2010-09-01
In recent years, great interest has been devoted to the development of noninvasive predictive models to substitute for liver biopsy in fibrosis assessment and follow-up. Our aim was to provide a simpler model consisting of routine laboratory markers for predicting liver fibrosis in patients chronically infected with hepatitis B virus (HBV) in order to optimize their clinical management. Liver fibrosis was staged in 386 chronic HBV carriers who underwent liver biopsy and routine laboratory testing. Correlations between routine laboratory markers and fibrosis stage were statistically assessed. After logistic regression analysis, a novel predictive model was constructed. This S index was validated in an independent cohort of 146 chronic HBV carriers in comparison to the SLFG model, Fibrometer, Hepascore, the Hui model, the Forns score and APRI using receiver operating characteristic (ROC) curves. The diagnostic value of each marker panel was better than that of single routine laboratory markers. The S index, consisting of gamma-glutamyltransferase (GGT), platelets (PLT) and albumin (ALB) (S index = 1000 × GGT / (PLT × ALB²)), had a higher diagnostic accuracy in predicting degree of fibrosis than any other mathematical model tested. The areas under the ROC curves (AUROC) were 0.812 and 0.890 for predicting significant fibrosis and cirrhosis in the validation cohort, respectively. The S index, a simpler mathematical model consisting of routine laboratory markers, predicts significant fibrosis and cirrhosis in patients with chronic HBV infection with a high degree of accuracy, potentially decreasing the need for liver biopsy.
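The S index from the abstract can be written out directly. The measurement units and diagnostic cut-offs are not restated in the abstract, so the input values below are purely illustrative.

```python
def s_index(ggt, plt, alb):
    """S index = 1000 * GGT / (PLT * ALB^2), as given in the abstract.
    Units and cut-offs follow the original report and are not assumed here."""
    return 1000.0 * ggt / (plt * alb ** 2)

# Illustrative inputs only (not reference values):
print(round(s_index(ggt=50.0, plt=150.0, alb=4.0), 3))  # 50000 / (150 * 16) = 20.833
```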
An analytical approach for predicting pilot induced oscillations
NASA Technical Reports Server (NTRS)
Hess, R. A.
1981-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
ERIC Educational Resources Information Center
Zimmermann, Judith; Brodersen, Kay H.; Heinimann, Hans R.; Buhmann, Joachim M.
2015-01-01
The graduate admissions process is crucial for controlling the quality of higher education, yet, rules-of-thumb and domain-specific experiences often dominate evidence-based approaches. The goal of the present study is to dissect the predictive power of undergraduate performance indicators and their aggregates. We analyze 81 variables in 171…
Regional and global modeling estimates of policy relevant background ozone over the United States
NASA Astrophysics Data System (ADS)
Emery, Christopher; Jung, Jaegun; Downey, Nicole; Johnson, Jeremiah; Jimenez, Michele; Yarwood, Greg; Morris, Ralph
2012-02-01
Policy Relevant Background (PRB) ozone, as defined by the US Environmental Protection Agency (EPA), refers to ozone concentrations that would occur in the absence of all North American anthropogenic emissions. PRB enters into the calculation of health risk benefits, and as the US ozone standard approaches background levels, PRB is increasingly important in determining the feasibility and cost of compliance. As PRB is a hypothetical construct, modeling is a necessary tool. Since 2006 EPA has relied on global modeling to establish PRB for their regulatory analyses. Recent assessments with higher resolution global models exhibit improved agreement with remote observations and modest upward shifts in PRB estimates. This paper shifts the paradigm to a regional model (CAMx) run at 12 km resolution, for which North American boundary conditions were provided by a low-resolution version of the GEOS-Chem global model. We conducted a comprehensive model inter-comparison, from which we elucidate differences in predictive performance against ozone observations and differences in temporal and spatial background variability over the US. In general, CAMx performed better in replicating observations at remote monitoring sites, and performance remained better at higher concentrations. While spring and summer mean PRB predicted by GEOS-Chem ranged from 20 to 45 ppb, CAMx-predicted PRB ranged from 25 to 50 ppb and reached well over 60 ppb in the west due to event-oriented phenomena such as stratospheric intrusion and wildfires. CAMx showed a higher correlation between modeled PRB and total observed ozone, which is significant for health risk assessments. A case study during April 2006 suggests that stratospheric exchange of ozone is underestimated in both models on an event basis. We conclude that wildfires, lightning NOx and stratospheric intrusions contribute a significant level of uncertainty in estimating PRB, and that PRB will require careful consideration in the ozone standard-setting process.
Prediction of LDEF exposure to the ionizing radiation environment
NASA Technical Reports Server (NTRS)
Watts, J. W.; Armstrong, T. W.; Colborn, B. L.
1996-01-01
Predictions of the LDEF mission's trapped proton, trapped electron and galactic cosmic ray proton exposures have been made using the currently accepted models, with improved resolution near mission end and better modeling of solar cycle effects. Extending previous calculations to provide a more definitive description of the LDEF exposure to ionizing radiation, trapped proton and electron fluxes are presented as functions of mission time, accounting for altitude and solar activity variation during the mission, together with the change in galactic cosmic ray proton flux over the mission. Modifications of the AP8MAX and AP8MIN fluences led to a reduction of fluence by 20%. A modified interpolation model developed by Daly and Evans resulted in 30% higher dose and activation levels, which agreed better with measured values than results predicted using the Vette model.
Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril
2017-01-01
The benefits of cardiac surgery are sometimes difficult to predict and the decision to operate on a given individual is complex. Machine learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models. We conducted a retrospective cohort study using a prospectively collected database, from December 2005 to December 2012, from a cardiac surgical center at a university hospital. Different models for predicting in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC analysis and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years (standard deviation 14.4), and mean EuroSCORE II was 3.7% (standard deviation 4.8%). The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755-0.834)) was significantly higher than that of EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691-0.783) and 0.742 (0.698-0.785), p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this single-center study, has a greater benefit whatever the probability threshold. According to ROC analysis and DCA, the machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results support the use of machine learning methods in the field of medical prediction.
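The decision-curve comparison rests on the standard net-benefit quantity: at a probability threshold pt, net benefit = TP/n - (FP/n) * pt/(1 - pt), and the model with the higher curve over the clinically relevant thresholds is preferred. Below is a hedged sketch with invented risk scores (not the study's models); only the mortality rate (6.3%) is taken from the abstract.

```python
import numpy as np

def net_benefit(risk, died, pt):
    """Net benefit of treating patients whose predicted risk is >= pt."""
    treat = risk >= pt
    tp = np.sum(treat & died) / risk.size   # true positives per patient
    fp = np.sum(treat & ~died) / risk.size  # false positives per patient
    return tp - fp * pt / (1.0 - pt)

rng = np.random.default_rng(3)
died = rng.random(1000) < 0.063                  # ~6.3% in-hospital mortality
# An informative model separates outcomes; an uninformative one assigns random risks.
good = np.clip(0.063 + 0.3 * died + rng.normal(0, 0.05, 1000), 0, 1)
poor = rng.random(1000)

for pt in (0.05, 0.10, 0.20):
    print(pt, net_benefit(good, died, pt) > net_benefit(poor, died, pt))
```

A full DCA would also plot the treat-all and treat-none strategies across a continuous range of thresholds; the comparison logic is the same.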
[Comparison of predictive models for the selection of high-complexity patients].
Estupiñán-Ramírez, Marcos; Tristancho-Ajamil, Rita; Company-Sancho, María Consuelo; Sánchez-Janáriz, Hilda
2017-08-18
To compare the concordance of complexity weights between Clinical Risk Groups (CRG) and Adjusted Morbidity Groups (AMG); to determine which is the better predictor of patient admission; and to optimise the method used to select the 0.5% of patients of highest complexity to be included in an intervention protocol. Cross-sectional analytical study in 18 Canary Islands health areas; 385,049 citizens were enrolled, using sociodemographic variables from health cards; diagnoses and use of healthcare resources obtained from primary health care electronic records (PCHR) and the basic minimum set of hospital data; the functional status recorded in the PCHR; and the drugs prescribed through the electronic prescription system. The correlation between stratifiers was estimated from these data. The ability of each stratifier to predict patient admission was evaluated and prediction optimisation models were constructed. Concordance between the stratifiers' complexity weights was strong (rho = 0.735) and the correlation between categories of complexity was moderate (weighted kappa = 0.515). The AMG complexity weight predicts patient admission better than the CRG weight (AUC: 0.696 [0.695-0.697] versus 0.692 [0.691-0.693]). Other predictive variables were added to the AMG weight, the best AUC (0.708 [0.707-0.708]) being obtained by the model composed of AMG, sex, age, Pfeiffer and Barthel scales, re-admissions and number of prescribed therapeutic groups. Strong concordance was found between stratifiers, with higher predictive capacity for admission from AMG, which can be increased by adding other dimensions. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
The EXCITE Trial: Predicting a Clinically Meaningful Motor Activity Log Outcome
Park, Si-Woon; Wolf, Steven L.; Blanton, Sarah; Winstein, Carolee; Nichols-Larsen, Deborah S.
2013-01-01
Background and Objective This study determined which baseline clinical measurements best predicted a predefined clinically meaningful outcome on the Motor Activity Log (MAL) and developed a predictive multivariate model to determine outcome after 2 weeks of constraint-induced movement therapy (CIMT) and 12 months later using the database from participants in the Extremity Constraint Induced Therapy Evaluation (EXCITE) Trial. Methods A clinically meaningful CIMT outcome was defined as achieving higher than 3 on the MAL Quality of Movement (QOM) scale. Predictive variables included baseline MAL, Wolf Motor Function Test (WMFT), the sensory and motor portion of the Fugl-Meyer Assessment (FMA), spasticity, visual perception, age, gender, type of stroke, concordance, and time after stroke. Significant predictors identified by univariate analysis were used to develop the multivariate model. Predictive equations were generated and odds ratios for predictors were calculated from the multivariate model. Results Pretreatment motor function measured by MAL QOM, WMFT, and FMA were significantly associated with outcome immediately after CIMT. Pretreatment MAL QOM, WMFT, proprioception, and age were significantly associated with outcome after 12 months. Each unit of higher pretreatment MAL QOM score and each unit of faster pretreatment WMFT log mean time improved the probability of achieving a clinically meaningful outcome by 7 and 3 times at posttreatment, and 5 and 2 times after 12 months, respectively. Patients with impaired proprioception had a 20% probability of achieving a clinically meaningful outcome compared with those with intact proprioception. Conclusions Baseline clinical measures of motor and sensory function can be used to predict a clinically meaningful outcome after CIMT. PMID:18780883
Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan
2017-01-01
Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only within confined ranges of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods and the effectiveness of the models was assessed and compared. Results: The results show that the computational method is usable for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density, meaning that it cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is eliminated at higher densities, giving rise to a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as an ANN is a valuable method for this purpose.
The EXCITE Trial: Predicting a clinically meaningful motor activity log outcome.
Park, Si-Woon; Wolf, Steven L; Blanton, Sarah; Winstein, Carolee; Nichols-Larsen, Deborah S
2008-01-01
This study determined which baseline clinical measurements best predicted a predefined clinically meaningful outcome on the Motor Activity Log (MAL) and developed a predictive multivariate model to determine outcome after 2 weeks of constraint-induced movement therapy (CIMT) and 12 months later using the database from participants in the Extremity Constraint Induced Therapy Evaluation (EXCITE) Trial. A clinically meaningful CIMT outcome was defined as achieving higher than 3 on the MAL Quality of Movement (QOM) scale. Predictive variables included baseline MAL, Wolf Motor Function Test (WMFT), the sensory and motor portion of the Fugl-Meyer Assessment (FMA), spasticity, visual perception, age, gender, type of stroke, concordance, and time after stroke. Significant predictors identified by univariate analysis were used to develop the multivariate model. Predictive equations were generated and odds ratios for predictors were calculated from the multivariate model. Pretreatment motor function, measured by MAL QOM, WMFT, and FMA, was significantly associated with outcome immediately after CIMT. Pretreatment MAL QOM, WMFT, proprioception, and age were significantly associated with outcome after 12 months. Each unit of higher pretreatment MAL QOM score and each unit of faster pretreatment WMFT log mean time improved the probability of achieving a clinically meaningful outcome by 7 and 3 times at posttreatment, and 5 and 2 times after 12 months, respectively. Patients with impaired proprioception had a 20% probability of achieving a clinically meaningful outcome compared with those with intact proprioception. Baseline clinical measures of motor and sensory function can be used to predict a clinically meaningful outcome after CIMT.
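The predictive equations described here are logistic models, so each predictor's odds ratio is the exponential of its log-odds coefficient. A minimal sketch of that relationship (the intercept and coefficient values below are illustrative back-calculations from the reported odds ratios, not the trial's fitted equation):

```python
import math

def outcome_probability(intercept, coefs, baseline):
    """Probability of a clinically meaningful outcome (MAL QOM > 3)
    from a logistic predictive equation: p = 1 / (1 + exp(-z))."""
    z = intercept + sum(coefs[k] * baseline[k] for k in coefs)
    return 1.0 / (1.0 + math.exp(-z))

# An odds ratio of ~7 per unit of baseline MAL QOM corresponds to a
# coefficient of ln(7); faster WMFT log time (OR ~3) enters negatively.
# The intercept is hypothetical.
coefs = {"mal_qom": math.log(7.0), "wmft_log_time": -math.log(3.0)}
p = outcome_probability(-4.0, coefs, {"mal_qom": 2.5, "wmft_log_time": 0.5})
assert 0.0 < p < 1.0
```

Exponentiating a fitted coefficient recovers the per-unit odds ratio, which is how the values quoted in the abstract are usually derived.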
Ghaderi, Forouzan; Ghaderi, Amir H.; Ghaderi, Noushin; Najafi, Bijan
2017-01-01
Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at confined levels of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants (R12, R14, R32, R115, R143, and R152) was predicted by these methods, and the effectiveness of the models was evaluated and compared. Results: The results show that the computational method is usable for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density, meaning that it cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is eliminated at higher densities, giving rise to a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as an ANN is a valuable method for this purpose. PMID:29188217
Passini, Elisa; Britton, Oliver J; Lu, Hua Rong; Rohrbacher, Jutta; Hermans, An N; Gallacher, David J; Greig, Robert J H; Bueno-Orovio, Alfonso; Rodriguez, Blanca
2017-01-01
Early prediction of cardiotoxicity is critical for drug development. Current animal models raise ethical and translational questions, and have limited accuracy in clinical risk prediction. Human-based computer models constitute a fast, cheap and potentially effective alternative to experimental assays, also facilitating translation to humans. Key challenges include consideration of inter-cellular variability in drug responses and integration of computational and experimental methods in safety pharmacology. Our aim is to evaluate the ability of in silico drug trials in populations of human action potential (AP) models to predict clinical risk of drug-induced arrhythmias based on ion channel information, and to compare simulation results against experimental assays commonly used for drug testing. A control population of 1,213 human ventricular AP models in agreement with experimental recordings was constructed. In silico drug trials were performed for 62 reference compounds at multiple concentrations, using pore-block drug models (IC50/Hill coefficient). Drug-induced changes in AP biomarkers were quantified, together with occurrence of repolarization/depolarization abnormalities. Simulation results were used to predict clinical risk based on reports of Torsade de Pointes arrhythmias, and further evaluated in a subset of compounds through comparison with electrocardiograms from rabbit wedge preparations and Ca2+-transient recordings in human induced pluripotent stem cell-derived cardiomyocytes (hiPS-CMs). Drug-induced changes in silico vary in magnitude depending on the specific ionic profile of each model in the population, thus allowing identification of cell sub-populations at higher risk of developing abnormal AP phenotypes. Models with low repolarization reserve (increased Ca2+/late Na+ currents and Na+/Ca2+-exchanger, reduced Na+/K+-pump) are highly vulnerable to drug-induced repolarization abnormalities, while those with reduced inward current density (fast/late Na+ and Ca2+ currents) exhibit high susceptibility to depolarization abnormalities. Repolarization abnormalities in silico predict clinical risk for all compounds with 89% accuracy. Drug-induced changes in biomarkers are in overall agreement across different assays: in silico AP duration changes reflect those observed in rabbit QT interval and hiPS-CM Ca2+-transients, and simulated upstroke velocity captures variations in the rabbit QRS complex. Our results demonstrate that human in silico drug trials constitute a powerful methodology for prediction of clinical pro-arrhythmic cardiotoxicity, ready for integration in existing drug safety assessment pipelines.
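A pore-block drug model of the kind used in these trials reduces an ionic current by a Hill-type factor built from the compound's IC50 and Hill coefficient. A minimal generic sketch (not tied to any specific channel or compound from the study):

```python
def fraction_unblocked(drug_conc, ic50, hill=1.0):
    """Simple pore-block model: fraction of an ionic current remaining
    at a given drug concentration, 1 / (1 + (D / IC50)^hill)."""
    return 1.0 / (1.0 + (drug_conc / ic50) ** hill)

# At the IC50, exactly half of the current is blocked.
assert fraction_unblocked(1.0, 1.0) == 0.5
```

Each channel conductance in an AP model is multiplied by this factor at the simulated concentration, which is how a single (IC50, Hill) pair per channel parameterizes a whole drug trial.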
Udachina, Alisa; Thewissen, Viviane; Myin-Germeys, Inez; Fitzpatrick, Sam; O'kane, Aisling; Bentall, Richard P
2009-09-01
Hypothesized relationships between experiential avoidance (EA), self-esteem, and paranoia were tested using structural equation modeling in a sample of student participants (N = 427). EA in everyday life was also investigated using the Experience Sampling Method in a subsample of students scoring high (N = 17) and low (N = 15) on paranoia. Results showed that paranoid students had lower self-esteem and reported higher levels of EA than nonparanoid participants. The interactive influence of EA and stress predicted negative self-esteem: EA was particularly damaging at high levels of stress. Greater EA and higher social stress independently predicted lower positive self-esteem. Low positive self-esteem predicted engagement in EA. A direct association between EA and paranoia was also found. These results suggest that similar mechanisms may underlie EA and thought suppression. Although people may employ EA to regulate self-esteem, this strategy is maladaptive as it damages self-esteem, incurs cognitive costs, and fosters paranoid thinking.
Willette, Auriel A.; Modanlo, Nina
2015-01-01
Alzheimer disease (AD) is characterized by progressive hypometabolism on [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) scans. Peripheral insulin resistance (IR) increases AD risk. No studies have examined associations between FDG metabolism and IR in mild cognitive impairment (MCI) and AD, as well as MCI conversion to AD. We studied 26 cognitively normal (CN), 194 MCI (39 MCI-progressors, 148 MCI-stable, 2 years after baseline), and 60 AD subjects with baseline FDG-PET from the Alzheimer’s Disease Neuroimaging Initiative. Mean FDG metabolism was derived for AD-vulnerable regions of interest (ROIs), including lateral parietal and posteromedial cortices, medial temporal lobe (MTL), hippocampus, and ventral prefrontal cortices (vPFC), as well as postcentral gyrus and global cerebrum control regions. The homeostasis model assessment of IR (HOMA-IR) was used to measure IR. For AD, higher HOMA-IR predicted lower FDG in all ROIs. For MCI-progressors, higher HOMA-IR predicted higher FDG in the MTL and hippocampus. Control regions showed no associations. Higher HOMA-IR predicted hypermetabolism in MCI-progressors and hypometabolism in AD in medial temporal regions. Future longitudinal studies should examine the pathophysiologic significance of the shift from MTL hyper- to hypometabolism associated with IR. PMID:25576061
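The HOMA-IR measure used above is a closed-form function of fasting glucose and insulin. A minimal sketch using the standard formula in conventional US units:

```python
def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance:
    HOMA-IR = glucose (mg/dL) * insulin (uU/mL) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# A fasting glucose of 90 mg/dL with insulin of 4.5 uU/mL gives HOMA-IR = 1.0.
assert homa_ir(90.0, 4.5) == 1.0
```

With glucose in mmol/L the divisor becomes 22.5; higher values indicate greater peripheral insulin resistance.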
Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction
NASA Astrophysics Data System (ADS)
Wilson, Teresa; Bartlett, Jennifer L.
2016-01-01
Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining even whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect these data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
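The standard prediction the authors refer to folds refraction into a fixed altitude offset (about 34 arcminutes of refraction plus the solar semidiameter, giving h0 ≈ -0.833° for the upper limb). A minimal day-length sketch under that conventional assumption:

```python
import math

def daylight_hours(latitude_deg, solar_decl_deg, h0_deg=-0.833):
    """Approximate hours of daylight using the conventional fixed
    refraction + semidiameter altitude h0 for the Sun's upper limb."""
    lat = math.radians(latitude_deg)
    dec = math.radians(solar_decl_deg)
    h0 = math.radians(h0_deg)
    cos_h = (math.sin(h0) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    cos_h = max(-1.0, min(1.0, cos_h))  # clamped: polar day or polar night
    return 2.0 * math.degrees(math.acos(cos_h)) / 15.0  # hour angle -> hours
```

At the equator on an equinox this gives roughly 12.1 hours rather than exactly 12, the excess coming entirely from the refraction term; near the poles the clamping shows why small refraction changes can flip whether the Sun rises at all.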
Experimental Evaluation of Tuned Chamber Core Panels for Payload Fairing Noise Control
NASA Technical Reports Server (NTRS)
Schiller, Noah H.; Allen, Albert R.; Herlan, Jonathan W.; Rosenthal, Bruce N.
2015-01-01
Analytical models have been developed to predict the sound absorption and sound transmission loss of tuned chamber core panels. The panels are constructed of two facesheets sandwiching a corrugated core. When ports are introduced through one facesheet, the long chambers within the core can be used as an array of low-frequency acoustic resonators. To evaluate the accuracy of the analytical models, absorption and sound transmission loss tests were performed on flat panels. Measurements show that the acoustic resonators embedded in the panels improve both the absorption and transmission loss of the sandwich structure at frequencies near the natural frequency of the resonators. Analytical predictions for absorption closely match measured data. However, transmission loss predictions miss important features observed in the measurements. This suggests that higher-fidelity analytical or numerical models will be needed to supplement transmission loss predictions in the future.
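Each ported chamber in such a panel behaves approximately as a quarter-wave acoustic resonator, so its natural frequency follows from the chamber length. A minimal sketch (the 0.61 x port-radius end correction is a textbook approximation, an assumption here, not the authors' analytical model):

```python
def quarter_wave_frequency(chamber_length_m, port_radius_m=0.0, c=343.0):
    """Natural frequency of a closed-open chamber: f = c / (4 * L_eff),
    with a simple open-end correction added to the physical length."""
    effective_length = chamber_length_m + 0.61 * port_radius_m
    return c / (4.0 * effective_length)

# A 0.343 m chamber with no port correction resonates near 250 Hz.
assert abs(quarter_wave_frequency(0.343) - 250.0) < 1e-6
```

This is why absorption and transmission-loss gains concentrate near one design frequency: the chamber length fixes where the resonator array is effective.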
A two-layer composite model of the vocal fold lamina propria for fundamental frequency regulation.
Zhang, Kai; Siegmund, Thomas; Chan, Roger W
2007-08-01
The mechanical properties of the vocal fold lamina propria, including the vocal fold cover and the vocal ligament, play an important role in regulating the fundamental frequency of human phonation. This study examines the equilibrium hyperelastic tensile deformation behavior of cover and ligament specimens isolated from excised human larynges. Ogden's hyperelastic model is used to characterize the tensile stress-stretch behaviors at equilibrium. Several statistically significant differences in mechanical response are found between cover and ligament, as well as between males and females. Fundamental frequencies are predicted from a string model and a beam model, both accounting for the cover and the ligament. The beam model predicts nonzero F(0) for the unstretched state of the vocal fold. It is demonstrated that bending stiffness contributes significantly to the predicted F(0), with the ligament contributing to a higher F(0), especially in females. Despite the availability of only a small data set, the model predicts an age dependence of F(0) in males in agreement with experimental findings. Accounting for two mechanisms of fundamental frequency regulation, vocal fold posturing (stretching) and extended clamping, brings predicted F(0) close to the lower bound of the human phonatory range. Advantages and limitations of the current model are discussed.
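The string model referenced above predicts F(0) from vocal fold length, tensile stress and tissue density; the beam model adds a bending-stiffness term that keeps F(0) nonzero at zero stress. A sketch of the string part only (parameter values below are illustrative, not measurements from the study):

```python
import math

def string_model_f0(length_m, stress_pa, density_kg_m3):
    """Ideal-string fundamental: F0 = (1 / 2L) * sqrt(stress / density).
    Unlike the beam model, this predicts F0 = 0 for an unstressed fold."""
    return math.sqrt(stress_pa / density_kg_m3) / (2.0 * length_m)

# Illustrative values: a 16 mm fold at 10 kPa tensile stress and a
# tissue density of ~1040 kg/m^3 lands near 100 Hz.
f0 = string_model_f0(0.016, 10_000.0, 1040.0)
assert 90.0 < f0 < 105.0
```

Because F0 scales with the square root of stress, doubling the equilibrium stress raises pitch by only a factor of sqrt(2), which is why posturing (stretching) alone struggles to cover the phonatory range.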
Study of angular momentum variation due to entrance channel effect in heavy ion fusion reactions
NASA Astrophysics Data System (ADS)
Kumar, Ajay
2014-05-01
A systematic investigation of the properties of hot nuclei may be studied by detecting the evaporated particles. These emissions reflect the behavior of the nucleus at various stages of the deexcitation cascade. When the nucleus is formed by the collision of a heavy nucleus with a light particle, the statistical model has done a good job of predicting the distribution of evaporated particles when reasonable choices were made for the level densities and yrast lines. Comparison to more specific measurements could, of course, provide a more severe test of the model and enable one to identify the deviations from the statistical model as the signature of other effects not included in the model. Some papers have claimed that experimental evaporation spectra from heavy-ion fusion reactions at higher excitation energies and angular momenta are no longer consistent with the predictions of the standard statistical model. In order to confirm this prediction we have employed two systems, a mass-symmetric (^31P+^45Sc) and a mass-asymmetric channel (^12C+^64Zn), leading to the same compound nucleus ^76Kr* at the excitation energy of 75 MeV. Neutron energy spectra of the asymmetric system (^12C+^64Zn) at different angles are well described by the statistical model predictions using the normal value of the level density parameter a = A/8 MeV^-1. However, in the case of the symmetric system (^31P+^45Sc), the statistical model interpretation of the data requires the change in the value of a = A/10 MeV^-1. The delayed evolution of the compound system in case of the symmetric ^31P+^45Sc system may lead to the formation of a temperature equilibrated dinuclear complex, which may be responsible for the neutron emission at higher temperature, while the protons and alpha particles are evaporated after neutron emission when the system is sufficiently cooled down and the higher g-values do not contribute in the formation of the compound nucleus for the symmetric entrance channel in case of charged particle emission.
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models
Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo
2016-01-01
The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970
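The two kernel methods compared above can be sketched directly: the GBLUP kernel is a scaled marker cross-product, while the Gaussian kernel transforms squared Euclidean distances between genotypes. A minimal pure-Python sketch (scaling the distance by marker count is one common normalization, an assumption here, and the bandwidth is illustrative):

```python
import math

def gblup_kernel(markers):
    """Linear GBLUP kernel K = X X' / p for an n x p marker matrix."""
    p = len(markers[0])
    return [[sum(a * b for a, b in zip(xi, xj)) / p for xj in markers]
            for xi in markers]

def gaussian_kernel(markers, bandwidth=1.0):
    """Gaussian kernel K_ij = exp(-h * d_ij^2 / p), d = Euclidean distance."""
    p = len(markers[0])

    def sq_dist(xi, xj):
        return sum((a - b) ** 2 for a, b in zip(xi, xj))

    return [[math.exp(-bandwidth * sq_dist(xi, xj) / p) for xj in markers]
            for xi in markers]

X = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]  # toy 0/1 marker matrix
gk = gaussian_kernel(X)
assert all(gk[i][i] == 1.0 for i in range(3))  # zero distance to itself
```

Either kernel then plays the role of the genomic covariance among lines inside the Bayesian multi-environment model; the Gaussian kernel can capture non-additive similarity that the linear kernel misses.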
Learning receptive fields using predictive feedback.
Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H
2006-01-01
Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
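The matching-pursuit step at the heart of the algorithm greedily picks the dictionary atom most correlated with the current residual and subtracts its projection. A minimal sketch of generic matching pursuit (not the paper's full predictive-feedback network, and with a hypothetical toy dictionary):

```python
def matching_pursuit(signal, atoms, n_iter=10):
    """Greedy sparse coding: encode `signal` with unit-norm `atoms`.
    Returns (coefficients, residual error vector)."""
    residual = list(signal)
    coefs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # pick the atom most correlated with the current residual
        dots = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        k = max(range(len(dots)), key=lambda i: abs(dots[i]))
        coefs[k] += dots[k]
        residual = [r - dots[k] * a for r, a in zip(residual, atoms[k])]
    return coefs, residual

# With an orthonormal dictionary the encoding is exact after two picks.
coefs, residual = matching_pursuit([3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]], n_iter=2)
assert coefs == [3.0, 4.0]
```

In the predictive-feedback reading, the coefficients are the higher-level prediction and the residual is what the feedforward pathway would carry onward.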
Moreau, Marjory; Leonard, Jeremy; Phillips, Katherine A; Campbell, Jerry; Pendse, Salil N; Nicolas, Chantel; Phillips, Martin; Yoon, Miyoung; Tan, Yu-Mei; Smith, Sherrie; Pudukodu, Harish; Isaacs, Kristin; Clewell, Harvey
2017-10-01
A few different exposure prediction tools were evaluated for use in the new in vitro-based safety assessment paradigm using di-2-ethylhexyl phthalate (DEHP) and dibutyl phthalate (DnBP) as case compounds. Daily intake of each phthalate was estimated using both high-throughput (HT) prediction models such as the HT Stochastic Human Exposure and Dose Simulation model (SHEDS-HT) and the ExpoCast heuristic model and non-HT approaches based on chemical specific exposure estimations in the environment in conjunction with human exposure factors. Reverse dosimetry was performed using a published physiologically based pharmacokinetic (PBPK) model for phthalates and their metabolites to provide a comparison point. Daily intakes of DEHP and DnBP were estimated based on the urinary concentrations of their respective monoesters, mono-2-ethylhexyl phthalate (MEHP) and monobutyl phthalate (MnBP), reported in NHANES (2011-2012). The PBPK-reverse dosimetry estimated daily intakes at the 50th and 95th percentiles were 0.68 and 9.58 μg/kg/d and 0.089 and 0.68 μg/kg/d for DEHP and DnBP, respectively. For DEHP, the estimated median from PBPK-reverse dosimetry was about 3.6-fold higher than the ExpoCast estimate (0.68 and 0.18 μg/kg/d, respectively). For DnBP, the estimated median was similar to that predicted by ExpoCast (0.089 and 0.094 μg/kg/d, respectively). The SHEDS-HT prediction of DnBP intake from consumer product pathways alone was higher at 0.67 μg/kg/d. The PBPK-reverse dosimetry-estimated median intake of DEHP and DnBP was comparable to values previously reported for US populations. These comparisons provide insights into establishing criteria for selecting appropriate exposure prediction tools for use in an integrated modeling platform to link exposure to health effects. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
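Reverse dosimetry in its simplest steady-state form back-calculates daily intake from a urinary metabolite concentration. The study used a full PBPK model rather than this shortcut, but the bookkeeping can be sketched as follows (all parameter names and values here are hypothetical illustrations):

```python
def daily_intake_ug_per_kg(urine_conc_ug_l, urine_output_l_day,
                           urinary_excretion_fraction,
                           mw_parent, mw_metabolite, body_weight_kg):
    """Steady-state reverse dosimetry: scale the excreted metabolite mass
    back to parent-compound intake via molar mass ratio and the fraction
    of the dose excreted in urine as that metabolite."""
    metabolite_ug_day = urine_conc_ug_l * urine_output_l_day
    parent_ug_day = (metabolite_ug_day
                     * (mw_parent / mw_metabolite)
                     / urinary_excretion_fraction)
    return parent_ug_day / body_weight_kg
```

With a hypothetical 10 μg/L metabolite concentration, 1.5 L/day urine output, 50% urinary excretion, a 2:1 parent-to-metabolite molar mass ratio and 60 kg body weight, this returns an intake of 1.0 μg/kg/d; a PBPK model refines the same logic with kinetics and inter-individual variability.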
NASA Astrophysics Data System (ADS)
Huang, Ding Wei; Yen, Edward
1989-08-01
We propose a detailed model, combining the concepts from a partition temperature model and wounded nucleon model, to describe high-energy nucleus-nucleus collisions. One partition temperature is associated with collisions at a fixed wounded nucleon number. The (pseudo-) rapidity distributions are calculated and compared with experimental data. Predictions at higher energy are also presented.
Decadal prediction of European soil moisture from 1961 to 2010 using a regional climate model
NASA Astrophysics Data System (ADS)
Mieruch-Schnuelle, S.; Schädler, G.; Feldmann, H.
2014-12-01
The German national research program on decadal climate prediction (MiKlip) aims at the development of an operational decadal prediction system. To explore the potential of decadal predictions, a hindcast ensemble from 1961 to 2010 has been generated by the MPI-ESM, the new Earth system model of the Max Planck Institute for Meteorology. To improve the decadal predictions on higher spatial resolutions we downscaled the MPI-ESM simulations by the regional model COSMO-CLM (CCLM) for Europe. In this study we will characterize and validate the predictability of extreme states of soil moisture in Europe simulated by the MPI-ESM and the value added by the CCLM. The water amount stored in the soil is a crucial component of the climate system and especially important for agriculture, and has an influence on evaporation, groundwater and runoff. Thus, skillful prediction of soil moisture in the order of years up to a decade could be used to mitigate risk and benefit society. Since soil moisture observations are rare and validation of model output is difficult, we will rather investigate the effective drought index (EDI), which can be retrieved solely from precipitation data. Therefore we show that the EDI is a good estimator of the soil water content.
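The effective drought index is built from "effective precipitation", a running sum that weights recent rainfall more heavily, standardized against its climatology. A minimal sketch following the usual Byun-Wilhite form (the window length and the standardization against a fixed mean/standard deviation are assumptions, not necessarily this study's exact configuration):

```python
def effective_precip(daily_precip, window=365):
    """Effective precipitation: EP = sum over n of (rainfall of last n days) / n,
    so yesterday's rain counts more than rain from months ago."""
    n_max = min(window, len(daily_precip))
    return sum(sum(daily_precip[-n:]) / n for n in range(1, n_max + 1))

def effective_drought_index(ep_today, ep_climatology_mean, ep_climatology_std):
    """EDI: the standardized anomaly of effective precipitation."""
    return (ep_today - ep_climatology_mean) / ep_climatology_std

# Three days of uniform 1 mm rain: EP = 1/1 + 2/2 + 3/3 = 3.0
assert effective_precip([1.0, 1.0, 1.0], window=3) == 3.0
```

Negative EDI values flag drier-than-normal conditions, which is what makes a precipitation-only index usable as a proxy where soil moisture observations are missing.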
Comparison of Models for Ball Bearing Dynamic Capacity and Life
NASA Technical Reports Server (NTRS)
Gupta, Pradeep K.; Oswald, Fred B.; Zaretsky, Erwin V.
2015-01-01
Generalized formulations for dynamic capacity and life of ball bearings, based on the models introduced by Lundberg and Palmgren and Zaretsky, have been developed and implemented in the bearing dynamics computer code, ADORE. Unlike the original Lundberg-Palmgren dynamic capacity equation, where the elastic properties are part of the life constant, the generalized formulations permit variation of elastic properties of the interacting materials. The newly updated Lundberg-Palmgren model allows prediction of life as a function of elastic properties. For elastic properties similar to those of AISI 52100 bearing steel, both the original and updated Lundberg-Palmgren models provide identical results. A comparison between the Lundberg-Palmgren and the Zaretsky models shows that at relatively light loads the Zaretsky model predicts a much higher life than the Lundberg-Palmgren model. As the load increases, the Zaretsky model provides a much faster drop off in life. This is because the Zaretsky model is much more sensitive to load than the Lundberg-Palmgren model. The generalized implementation where all model parameters can be varied provides an effective tool for future model validation and enhancement in bearing life prediction capabilities.
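The load-sensitivity difference described above comes down to the load-life exponent in L10 = (C/P)^p. A minimal sketch (p = 3 is the classical Lundberg-Palmgren value for ball bearings; p = 4 is assumed here as a representative higher Zaretsky-type exponent, an illustration rather than the ADORE implementation):

```python
def l10_life(dynamic_capacity, equivalent_load, exponent):
    """Bearing L10 fatigue life (millions of revolutions): L10 = (C / P)^p."""
    return (dynamic_capacity / equivalent_load) ** exponent

# At light load the higher exponent predicts much longer life; the two
# models converge as the load approaches the dynamic capacity.
light = l10_life(10.0, 2.5, 4) / l10_life(10.0, 2.5, 3)
heavy = l10_life(10.0, 9.0, 4) / l10_life(10.0, 9.0, 3)
assert light > heavy > 1.0
```

This reproduces the qualitative trend reported: a higher exponent inflates predicted life at light loads and drops off faster as load increases.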
Carbone, Chris; Codron, Daryl; Scofield, Conrad; Clauss, Marcus; Bielby, Jon; Enquist, Brian
2014-01-01
Predator–prey relationships are vital to ecosystem function and there is a need for greater predictive understanding of these interactions. We develop a geometric foraging model predicting minimum prey size scaling in marine and terrestrial vertebrate predators taking into account habitat dimensionality and biological traits. Our model predicts positive predator–prey size relationships on land but negative relationships in the sea. To test the model, we compiled data on diets of 794 predators (mammals, snakes, sharks and rays). Consistent with predictions, both terrestrial endotherm and ectotherm predators have significantly positive predator–prey size relationships. Marine predators, however, exhibit greater variation. Some of the largest predators specialise on small invertebrates while others are large vertebrate specialists. Prey–predator mass ratios were generally higher for ectothermic than endothermic predators, although dietary patterns were similar. Model-based simulations of predator–prey relationships were consistent with observed relationships, suggesting that our approach provides insights into both trends and diversity in predator–prey interactions. PMID:25265992
Prediction of Winter Storm Tracks and Intensities Using the GFDL fvGFS Model
NASA Astrophysics Data System (ADS)
Rees, S.; Boaggio, K.; Marchok, T.; Morin, M.; Lin, S. J.
2017-12-01
The GFDL Finite-Volume Cubed-Sphere Dynamical core (FV3) is coupled to a modified version of the Global Forecast System (GFS) physics and initial conditions, to form the fvGFS model. This model is similar to the one being implemented as the next-generation operational weather model for the NWS, which is also FV3-powered. Much work has been done to verify fvGFS tropical cyclone prediction, but little has been done to verify winter storm prediction. These costly and dangerous storms impact parts of the U.S. every year. To verify winter storms we ran the NCEP operational cyclone tracker, developed at GFDL, on semi-real-time 13 km horizontal resolution fvGFS forecasts. We have found that fvGFS compares well to the operational GFS in storm track and intensity, though often predicts slightly higher intensities. This presentation will show the track and intensity verification from the past two winter seasons and explore possible reasons for bias.
Pretreatment data is highly predictive of liver chemistry signals in clinical trials
Cai, Zhaohui; Bresell, Anders; Steinberg, Mark H; Silberg, Debra G; Furlong, Stephen T
2012-01-01
Purpose: The goal of this retrospective analysis was to assess how well predictive models could determine which patients would develop liver chemistry signals during clinical trials based on their pretreatment (baseline) information. Patients and methods: Based on data from 24 late-stage clinical trials, classification models were developed to predict liver chemistry outcomes using baseline information, which included demographics, medical history, concomitant medications, and baseline laboratory results. Results: Predictive models using baseline data predicted which patients would develop liver signals during the trials with average validation accuracy around 80%. Baseline levels of individual liver chemistry tests were most important for predicting their own elevations during the trials. High bilirubin levels at baseline were not uncommon and were associated with a high risk of developing biochemical Hy's law cases. Baseline γ-glutamyltransferase (GGT) level appeared to have some predictive value, but did not increase predictability beyond using established liver chemistry tests. Conclusion: It is possible to predict which patients are at a higher risk of developing liver chemistry signals using pretreatment (baseline) data. Derived knowledge from such predictions may allow proactive and targeted risk management, and the type of analysis described here could help determine whether new biomarkers offer improved performance over established ones. PMID:23226004
Riddlesworth, Tonya D.; Kollman, Craig; Lass, Jonathan H.; Patel, Sanjay V.; Stulting, R. Doyle; Benetz, Beth Ann; Gal, Robin L.; Beck, Roy W.
2014-01-01
Purpose. We constructed several mathematical models that predict endothelial cell density (ECD) for patients after penetrating keratoplasty (PK) for a moderate-risk condition (principally Fuchs' dystrophy or pseudophakic/aphakic corneal edema). Methods. In a subset (n = 591) of Cornea Donor Study participants, postoperative ECD was determined by a central reading center. Various statistical models were considered to estimate the ECD trend longitudinally over 10 years of follow-up. A biexponential model with and without a logarithm transformation was fit using the Gauss-Newton nonlinear least squares algorithm. To account for correlated data, a log-polynomial model was fit using the restricted maximum likelihood method. A sensitivity analysis for the potential bias due to selective dropout was performed using Bayesian analysis techniques. Results. The three models using a logarithm transformation yield similar trends, whereas the model without the transform predicts higher ECD values. The adjustment for selective dropout turns out to be negligible. However, this is possibly due to the relatively low rate of graft failure in this cohort (19% at 10 years). Fuchs' dystrophy and pseudophakic/aphakic corneal edema (PACE) patients had similar ECD decay curves, with the PACE group having slightly higher cell densities by 10 years. Conclusions. Endothelial cell loss after PK can be modeled via a log-polynomial model, which accounts for the correlated data from repeated measures on the same subject. This model is not significantly affected by the selective dropout due to graft failure. Our findings warrant further study on how this may extend to ECD following endothelial keratoplasty. PMID:25425307
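The biexponential model considered above expresses cell density as the sum of a fast early decay and a slow late decay. A minimal sketch of the model function itself (the amplitude and time-constant values below are illustrative, not the fitted study estimates):

```python
import math

def ecd_biexponential(t_years, fast_amp, fast_tau, slow_amp, slow_tau):
    """Endothelial cell density over time:
    ECD(t) = A1 * exp(-t / tau1) + A2 * exp(-t / tau2)."""
    return (fast_amp * math.exp(-t_years / fast_tau)
            + slow_amp * math.exp(-t_years / slow_tau))

# Hypothetical parameters: density starts at A1 + A2 and declines monotonically.
ecd0 = ecd_biexponential(0.0, 1000.0, 1.5, 1600.0, 25.0)
assert ecd0 == 2600.0
```

Fitting on a logarithmic scale, as three of the study's models do, down-weights the large early densities, which is consistent with the observation that the untransformed fit predicts higher ECD values.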
NASA Astrophysics Data System (ADS)
Beecham, Jonathan; Bruggeman, Jorn; Aldridge, John; Mackinson, Steven
2016-03-01
End-to-end modelling is a rapidly developing strategy for modelling in marine systems science and management. However, problems remain in the area of data matching and sub-model compatibility. A mechanism and novel interfacing system (Couplerlib) is presented whereby a physical-biogeochemical model (General Ocean Turbulence Model-European Regional Seas Ecosystem Model, GOTM-ERSEM) that predicts dynamics of the lower trophic level (LTL) organisms in marine ecosystems is coupled to a dynamic ecosystem model (Ecosim), which predicts food-web interactions among higher trophic level (HTL) organisms. Coupling is achieved by means of a bespoke interface, which handles the system incompatibilities between the models and a more generic Couplerlib library, which uses metadata descriptions in extensible mark-up language (XML) to marshal data between groups, paying attention to functional group mappings and compatibility of units between models. In addition, within Couplerlib, models can be coupled across networks by means of socket mechanisms. As a demonstration of this approach, a food-web model (Ecopath with Ecosim, EwE) and a physical-biogeochemical model (GOTM-ERSEM) representing the North Sea ecosystem were joined with Couplerlib. The output from GOTM-ERSEM varies between years, depending on oceanographic and meteorological conditions. Although inter-annual variability was clearly present, there was always the tendency for an annual cycle consisting of a peak of diatoms in spring, followed by (less nutritious) flagellates and dinoflagellates through the summer, resulting in an early summer peak in the mesozooplankton biomass. Pelagic productivity, predicted by the LTL model, was highly seasonal with little winter food for the higher trophic levels. The Ecosim model was originally based on the assumption of constant annual inputs of energy and, consequently, when coupled, pelagic species suffered population losses over the winter months. 
By contrast, benthic populations were more stable (although the benthic linkage modelled was purely at the detritus level, so this stability reflects the stability of the Ecosim model). The coupled model was used to examine long-term effects of environmental change, and showed the system to be nutrient limited and relatively unaffected by forecast climate change, especially in the benthos. The stability of an Ecosim formulation for large higher trophic level food webs is discussed, and it is concluded that this kind of coupled model formulation is better suited to examining the effects of long-term environmental change than short-term perturbations.
The rotary subwoofer: a controllable infrasound source.
Park, Joseph; Garcés, Milton; Thigpen, Bruce
2009-04-01
The rotary subwoofer is a novel acoustic transducer capable of projecting infrasonic signals at high sound pressure levels. The projector produces higher acoustic particle velocities than conventional transducers which translate into higher radiated sound pressure levels. This paper characterizes measured performance of a rotary subwoofer and presents a model to predict sound pressure levels.
Ensemble method for dengue prediction.
Buczak, Anna L; Baugher, Benjamin; Moniz, Linda J; Bagley, Thomas; Babin, Steven M; Guven, Erhan
2018-01-01
In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for the transmission season and peak height for Iquitos, Peru.
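As a sketch of one component-model type named above, the following implements a minimal additive seasonal Holt-Winters forecaster (without the wavelet smoothing). The smoothing constants and the toy series are illustrative assumptions, not values from the Dengue Challenge.

```python
# Minimal additive seasonal Holt-Winters forecaster (illustrative sketch).
# alpha, beta, gamma are hypothetical smoothing constants, not fitted values.

def holt_winters_additive(y, season_len, alpha=0.3, beta=0.1, gamma=0.2, horizon=4):
    # Initialise level, trend, and seasonal terms from the first two seasons.
    level = sum(y[:season_len]) / season_len
    trend = (sum(y[season_len:2 * season_len]) - sum(y[:season_len])) / season_len ** 2
    season = [y[i] - level for i in range(season_len)]

    for t in range(len(y)):
        s = season[t % season_len]
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % season_len] = gamma * (y[t] - level) + (1 - gamma) * s

    # Forecast `horizon` steps ahead: level + h * trend + matching seasonal term.
    return [level + (h + 1) * trend + season[(len(y) + h) % season_len]
            for h in range(horizon)]
```

On a perfectly repeating weekly-case pattern, the forecaster reproduces the seasonal cycle exactly, which is the behaviour the component model is meant to capture.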
Alvarez, Karina; Loehr, Laura; Folsom, Aaron R.; Newman, Anne B.; Weissfeld, Lisa A.; Wunderink, Richard G.; Kritchevsky, Stephen B.; Mukamal, Kenneth J.; London, Stephanie J.; Harris, Tamara B.; Bauer, Doug C.; Angus, Derek C.
2013-01-01
Background: Preventing pneumonia requires better understanding of incidence, mortality, and long-term clinical and biologic risk factors, particularly in younger individuals. Methods: This was a cohort study in three population-based cohorts of community-dwelling individuals. A derivation cohort (n = 16,260) was used to determine incidence and survival and develop a risk prediction model. The prediction model was validated in two cohorts (n = 8,495). The primary outcome was 10-year risk of pneumonia hospitalization. Results: The crude and age-adjusted incidences of pneumonia were 6.71 and 9.43 cases/1,000 person-years (10-year risk was 6.15%). The 30-day and 1-year mortality were 16.5% and 31.5%. Although age was the most important risk factor (range of crude incidence rates, 1.69-39.13 cases/1,000 person-years for each 5-year increment from 45-85 years), 38% of pneumonia cases occurred in adults < 65 years of age. The 30-day and 1-year mortality were 12.5% and 25.7% in those < 65 years of age. Although most comorbidities were associated with higher risk of pneumonia, reduced lung function was the most important risk factor (relative risk = 6.61 for severe reduction based on FEV1 by spirometry). A clinical risk prediction model based on age, smoking, and lung function predicted 10-year risk (area under curve [AUC] = 0.77 and Hosmer-Lemeshow [HL] C statistic = 0.12). Model discrimination and calibration were similar in the internal validation cohort (AUC = 0.77; HL C statistic, 0.65) but lower in the external validation cohort (AUC = 0.62; HL C statistic, 0.45). The model also calibrated well in blacks and younger adults. C-reactive protein and IL-6 were associated with higher pneumonia risk but did not improve model performance. Conclusions: Pneumonia hospitalization is common and associated with high mortality, even in younger healthy adults. Long-term risk of pneumonia can be predicted in community-dwelling adults with a simple clinical risk prediction model. 
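The discrimination statistic reported above (AUC) can be computed directly as a concordance probability: the chance that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal sketch, with the risk scores below purely illustrative:

```python
def auc_concordance(scores_pos, scores_neg):
    """AUC as the probability that a case outranks a non-case (ties count 0.5).

    scores_pos: predicted risks for subjects with the outcome (pneumonia).
    scores_neg: predicted risks for subjects without it.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfect separation gives 1.0, random ranking gives 0.5; the study's clinical model sits at 0.77 on this scale.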
PMID:23744106
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paret, Paul P; DeVoto, Douglas J; Narumanchi, Sreekant V
Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (above 200 degrees Celsius). Accurate predictive lifetime models of sintered silver need to be developed and its failure mechanisms thoroughly characterized before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. We present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed. In this paper, we outline the procedures for obtaining the J-integral/thermal cycle values in a computational model and report on the possible advantage of using these values as modeling parameters in a predictive lifetime model.
NASA Astrophysics Data System (ADS)
Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt
2017-01-01
Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
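The constant property model's maximum conversion efficiency follows the standard closed form eta_max = (dT/T_h) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + T_c/T_h), with ZT evaluated at the average temperature. A minimal sketch (the temperatures in the checks below are illustrative, not from the 18 materials studied):

```python
import math

def max_efficiency(t_hot, t_cold, zt):
    """Constant-property maximum thermoelectric conversion efficiency.

    The Carnot factor times a ZT-dependent reduction, with zt taken as the
    dimensionless figure of merit at the average operating temperature.
    """
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)
```

At ZT = 0 the efficiency vanishes, and it rises monotonically (but well below Carnot) with ZT, which is why ZT at the average operating temperature remains a useful screening metric.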
Towards malaria risk prediction in Afghanistan using remote sensing.
Adimi, Farida; Soebiyanto, Radina P; Safi, Najibullah; Kiang, Richard
2010-05-13
Malaria is a significant public health concern in Afghanistan. Currently, approximately 60% of the population, or nearly 14 million people, live in a malaria-endemic area. Afghanistan's diverse landscape and terrain contribute to the heterogeneous malaria prevalence across the country. Understanding the role of environmental variables in malaria transmission can further the efforts of the malaria control programme. Provincial malaria epidemiological data (2004-2007) collected by the health posts in 23 provinces were used in conjunction with space-borne observations from NASA satellites. Specifically, the environmental variables, including precipitation, temperature and vegetation index measured by the Tropical Rainfall Measuring Mission and the Moderate Resolution Imaging Spectroradiometer, were used. Regression techniques were employed to model malaria cases as a function of environmental predictors. The resulting model was used for predicting malaria risks in Afghanistan. The entire time series except the last 6 months was used for training, and the last 6 months of data were used for prediction and validation. Vegetation index, in general, is the strongest predictor, reflecting the fact that irrigation is the main factor that promotes malaria transmission in Afghanistan. Surface temperature is the second strongest predictor. Precipitation is not a significant predictor, as it may not directly lead to a higher larval population. The autoregressiveness of the malaria epidemiological data is apparent from the analysis. The malaria time series are modelled well, with a provincial average R2 of 0.845. Although the R2 for prediction has larger variation, the total 6-month case prediction is only 8.9% higher than the actual cases. The provincial monthly malaria cases can thus be modelled and predicted using satellite-measured environmental parameters with reasonable accuracy.
The Third Strategic Approach of the WHO EMRO Malaria Control and Elimination Plan aims to develop a cost-effective surveillance system that includes forecasting, early warning and detection. The predictive and early warning capabilities shown in this paper support this strategy.
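A minimal sketch of the regression step described above: ordinary least squares of monthly cases on environmental predictors, summarised by R2. The predictor columns and values are hypothetical stand-ins for the vegetation index and surface temperature series, not the Afghan data.

```python
import numpy as np

def fit_and_r2(X, y):
    """OLS fit of cases on environmental predictors; returns (coef, R^2).

    X: (n_months, n_predictors) array, e.g. columns for NDVI and temperature.
    The first returned coefficient is the intercept.
    """
    Xb = np.column_stack([np.ones(len(y)), X])      # add intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    resid = y - Xb @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return coef, 1.0 - ss_res / ss_tot
```

In the study this kind of fit, extended with an autoregressive term, achieved a provincial average R2 of 0.845.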
Nabavi-Pelesaraei, Ashkan; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinzadeh-Bandbafha, Homa; Chau, Kwok-Wing
2018-08-01
Prediction of agricultural energy output and environmental impacts plays an important role in energy management and environmental conservation, as it can help us to evaluate agricultural energy efficiency, conduct crop production system commissioning, and detect and diagnose faults in crop production systems. Agricultural energy output and environmental impacts can be readily predicted by artificial intelligence (AI), owing to its ease of use and adaptability in seeking optimal solutions rapidly, as well as its use of historical data to predict future agricultural energy use patterns under constraints. This paper conducts energy output and environmental impact prediction of paddy production in Guilan province, Iran, based on two AI methods: artificial neural networks (ANNs) and the adaptive neuro-fuzzy inference system (ANFIS). The amounts of energy input and output are 51,585.61 MJ kg-1 and 66,112.94 MJ kg-1, respectively, in paddy production. Life Cycle Assessment (LCA) is used to evaluate environmental impacts of paddy production. Results show that, in paddy production, in-farm emission is a hotspot in the global warming, acidification and eutrophication impact categories. An ANN model with a 12-6-8-1 structure is selected as the best one for predicting energy output. The correlation coefficient (R) varies from 0.524 to 0.999 in training for energy input and environmental impacts in the ANN models. The ANFIS model is developed based on a hybrid learning algorithm, with R for predicting output energy being 0.860 and, for environmental impacts, varying from 0.944 to 0.997. Results indicate that the multi-level ANFIS is a useful tool for managers in large-scale planning for forecasting energy output and environmental indices of agricultural production systems, owing to its higher speed of computation compared to the ANN model, despite the ANN's higher accuracy.
NASA Technical Reports Server (NTRS)
Liu, Jiping; Song, Mirong; Horton, Radley M.; Hu, Yongyun
2015-01-01
The rapid change in Arctic sea ice in recent decades has led to a rising demand for seasonal sea ice prediction. A recent modeling study that employed a prognostic melt pond model in a stand-alone sea ice model found that September Arctic sea ice extent can be accurately predicted from the melt pond fraction in May. Here we show that satellite observations provide no evidence of predictive skill in May. However, we find that a significantly strong relationship (high predictability) first emerges as the melt pond fraction is integrated from early May to late June, with a persistent strong relationship only occurring after late July. Our results highlight that late spring to midsummer melt pond information is required to improve the prediction skill of the seasonal sea ice minimum. Furthermore, satellite observations indicate a much higher percentage of melt pond formation in May than does the aforementioned model simulation, which points to the need to reconcile model simulations and observations in order to better understand key mechanisms of melt pond formation and evolution and their influence on sea ice state.
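The skill test described above, correlating the time-integrated melt pond fraction with the September minimum across years, can be sketched as follows. The weekly fractions and extents in the checks are toy values, not satellite data.

```python
def pearson(x, y):
    """Pearson correlation coefficient, the measure of predictive skill used here."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def skill_of_integrated_pond_fraction(weekly_fraction_by_year, september_extent, end_week):
    """Correlate pond fraction integrated from early May up to `end_week`
    with the September sea ice extent across years."""
    integrated = [sum(year[:end_week]) for year in weekly_fraction_by_year]
    return pearson(integrated, september_extent)
```

Extending `end_week` from May into late June is the operation that, in the satellite record, first produces a strong (negative) correlation with the September extent.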
NASA Astrophysics Data System (ADS)
Wayand, N. E.; Stimberis, J.; Zagrodnik, J.; Mass, C.; Lundquist, J. D.
2016-12-01
Low-level cold air from eastern Washington state often flows westward through mountain passes in the Washington Cascades, creating localized inversions and locally reducing climatological temperatures. The persistence of this inversion during a frontal passage can result in complex patterns of snow and rain that are difficult to predict. Yet, these predictions are critical to support highway avalanche control, ski resort operations, and modeling of headwater snowpack storage. In this study we used observations of precipitation phase from a disdrometer and snow depth sensors across Snoqualmie Pass, WA, to evaluate surface-air-temperature-based and mesoscale-model-based predictions of precipitation phase during the anomalously warm 2014-2015 winter. The skill of surface-based methods was greatly improved by using air temperature from a nearby higher-elevation station, which was less impacted by low-level inversions. Alternatively, we found a hybrid method that combines surface-based predictions with output from the Weather Research and Forecasting mesoscale model to have improved skill over both parent models. These results suggest that prediction of precipitation phase in mountain passes can be improved by incorporating observations or models from above the surface layer.
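A minimal sketch of the surface-air-temperature-based phase prediction evaluated above. The 1.0 C rain/snow threshold is an illustrative default, not the calibrated value from the Snoqualmie study.

```python
def precip_phase(air_temp_c, threshold_c=1.0):
    """Surface-air-temperature phase rule: snow at or below the threshold,
    rain above it.  Using air temperature from a nearby higher-elevation
    station (less affected by pass-level inversions) is simply a matter of
    which temperature is passed in."""
    return "snow" if air_temp_c <= threshold_c else "rain"
```

The study's improvement came from feeding this kind of rule a temperature less contaminated by the low-level inversion, or blending it with mesoscale-model output.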
Network model for thermal conductivities of unidirectional fiber-reinforced composites
NASA Astrophysics Data System (ADS)
Wang, Yang; Peng, Chaoyi; Zhang, Weihua
2014-12-01
An empirical network model has been developed to predict the in-plane thermal conductivities along arbitrary directions for a unidirectional fiber-reinforced composite lamina. Measurements of thermal conductivities along different orientations were carried out. Good agreement was observed between values predicted by the network model and the experimental data; compared with the established analytical models, the newly proposed network model gave values with higher precision. This network model therefore supports a wider and more comprehensive understanding of the heat transmission characteristics of fiber-reinforced composites and can serve as guidance for designing and fabricating laminated composites with specific directional or location-specific thermal conductivities for structures that simultaneously perform mechanical and thermal functions, i.e. multifunctional structures (MFS).
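For context, the established analytical baseline that such a network model is benchmarked against can be sketched from the second-order tensor rotation k(theta) = k_L cos^2(theta) + k_T sin^2(theta); this is the standard result, not the paper's empirical network model, and the conductivity values below are illustrative.

```python
import math

def k_inplane(k_long, k_trans, theta_deg):
    """In-plane conductivity of a unidirectional lamina at angle theta to
    the fibres, from rotating the (k_long, k_trans) conductivity tensor."""
    t = math.radians(theta_deg)
    return k_long * math.cos(t) ** 2 + k_trans * math.sin(t) ** 2
```

The prediction interpolates smoothly between the longitudinal value along the fibres (theta = 0) and the transverse value across them (theta = 90).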
Navarro, Rafael; Palos, Fernando; Lanchares, Elena; Calvo, Begoña; Cristóbal, José A
2009-01-01
To develop a realistic model of the optomechanical behavior of the cornea after curved relaxing incisions to simulate the induced astigmatic change and predict the optical aberrations produced by the incisions. ICMA Consejo Superior de Investigaciones Científicas and Universidad de Zaragoza, Zaragoza, Spain. A 3-dimensional finite element model of the anterior hemisphere of the ocular surface was used. The corneal tissue was modeled with a quasi-incompressible, anisotropic hyperelastic constitutive law strongly dependent on the physiological collagen fibril distribution. Similar behaviors were assigned to the limbus and sclera. With this model, corneal incisions were computer simulated following the Lindstrom nomogram. The resulting geometry of the biomechanical simulation was analyzed in the optical zone, and finite ray tracing was performed to compute refractive power and higher-order aberrations (HOAs). The finite-element simulation provided the new geometry of the corneal surfaces, from which elevation topographies were obtained. The surgically induced astigmatism (SIA) of the simulated incisions according to the Lindstrom nomogram was computed by finite ray tracing. However, paraxial computations would yield slightly different results (undercorrection of astigmatism). In addition, arcuate incisions would induce significant amounts of HOAs. Finite-element models, together with finite ray-tracing computations, yielded realistic simulations of the biomechanical and optical changes induced by relaxing incisions. The model reproduced the SIA indicated by the Lindstrom nomogram for the simulated incisions and predicted a significant increase in optical aberrations induced by arcuate keratotomy.
Miyata, Hiroaki; Hashimoto, Hideki; Horiguchi, Hiromasa; Fushimi, Kiyohide; Matsuda, Shinya
2010-05-19
Few studies have examined whether risk adjustment is evenly applicable to hospitals with various characteristics and case-mix. In this study, we applied a generic prediction model to nationwide discharge data from hospitals with various characteristics. We used standardized data of 1,878,767 discharged patients provided by 469 hospitals from July 1 to October 31, 2006. We generated and validated a case-mix in-hospital mortality prediction model using 50/50 split-sample validation. We classified hospitals into two groups based on c-index value (hospitals with c-index ≥ 0.8; hospitals with c-index < 0.8) and examined differences in their characteristics. The model demonstrated excellent discrimination as indicated by the high average c-index and small standard deviation (c-index = 0.88 ± 0.04). The expected mortality rate of each hospital was highly correlated with the observed mortality rate (r = 0.693, p < 0.001). Among the studied hospitals, 446 (95%) had a c-index of ≥0.8 and were classified into the higher c-index group. A significantly higher proportion of hospitals in the lower c-index group were specialized hospitals and hospitals with convalescent wards. The model fits well to a group of hospitals with a wide variety of acute care events, though model fit is less satisfactory for specialized hospitals and those with convalescent wards. Further refinement of the generic prediction model is recommended to obtain indices optimal for region-specific conditions.
Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans
2011-11-01
We describe an empirical model for exposure to respirable crystalline silica (RCS) to create a quantitative job-exposure matrix (JEM) for community-based studies. Personal measurements of exposure to RCS from Europe and Canada were obtained for exposure modelling. A mixed-effects model was elaborated, with region/country and job titles as random effect terms. The fixed effect terms included year of measurement, measurement strategy (representative or worst-case), sampling duration (minutes) and a priori exposure intensity rating for each job from an independently developed JEM (none, low, high). 23,640 personal RCS exposure measurements, covering a time period from 1976 to 2009, were available for modelling. The model indicated an overall downward time trend in RCS exposure levels of -6% per year. Exposure levels were higher in the UK and Canada, and lower in Northern Europe and Germany. Worst-case sampling was associated with higher reported exposure levels and an increase in sampling duration was associated with lower reported exposure levels. Highest predicted RCS exposure levels in the reference year (1998) were for chimney bricklayers (geometric mean 0.11 mg m(-3)), monument carvers and other stone cutters and carvers (0.10 mg m(-3)). The resulting model enables us to predict time-, job-, and region/country-specific exposure levels of RCS. These predictions will be used in the SYNERGY study, an ongoing pooled multinational community-based case-control study on lung cancer.
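The fixed-effect time trend reported above implies a simple multiplicative extrapolation from the 1998 reference level. A sketch: the -6%/year figure and the 0.11 mg m(-3) chimney-bricklayer geometric mean come from the abstract; treating the trend as exactly multiplicative on the original scale is an assumption of this illustration (the underlying mixed-effects model works on the log scale).

```python
def predicted_exposure(gm_1998, year, trend_per_year=-0.06):
    """Extrapolate a job's geometric-mean RCS exposure (mg m^-3) from the
    1998 reference level using the modelled annual time trend."""
    return gm_1998 * (1.0 + trend_per_year) ** (year - 1998)
```

This is the operation that lets the JEM produce time-, job-, and region-specific exposure estimates for epidemiological follow-up such as the SYNERGY study.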
NASA Astrophysics Data System (ADS)
Li, Danfeng; Gao, Guangyao; Shao, Ming'an; Fu, Bojie
2016-07-01
A detailed understanding of soil hydraulic properties, particularly the available water content of soil (AW, cm3 cm-3), is required for optimal water management. Direct measurement of soil hydraulic properties is impractical for large-scale application, but routinely available soil particle-size distribution (PSD) and bulk density can be used as proxies to develop various prediction functions. In this study, we compared the performance of the Arya and Paris (AP) model, the Mohammadi and Vanclooster (MV) model, the Arya and Heitman (AH) model, and the Rosetta program in predicting the soil water characteristic curve (SWCC) at 34 points with experimental SWCC data in an oasis-desert transect (20 × 5 km) in the middle reaches of the Heihe River basin, northwestern China. The idea behind the three models emerges from the similarity of the shapes of the PSD and SWCC. The AP model, MV model, and Rosetta program performed better in predicting the SWCC than the AH model. The AW determined from the SWCCs predicted by the MV model agreed better with the experimental values than those derived from the AP model and Rosetta program. The fine-textured soils were characterized by higher AW values, while the sandy soils had lower AW values. The MV model has the advantages of a robust physical basis, independence from database-related parameters, and use of subclasses of texture data. These features make it promising for predicting soil water retention at regional scales, serving the application of hydrological models and the optimization of soil water management.
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.; Tang, Chun Y.; Palmer, Grant E.; Hyatt, Andrew J.; Wise, Adam J.; McCloud, Peter L.
2010-01-01
Surface temperature measurements from the STS-119 boundary-layer transition experiment on the space shuttle orbiter Discovery provide a rare opportunity to assess turbulent CFD models at hypersonic flight conditions. These flight data were acquired by on-board thermocouples and by infrared images taken off-board by the Hypersonic Thermodynamic Infrared Measurements (HYTHIRM) team, and are suitable for hypersonic CFD turbulence assessment between Mach 6 and 14. The primary assessment is for the Baldwin-Lomax and Cebeci-Smith algebraic turbulence models in the DPLR and LAURA CFD codes, respectively. A secondary assessment is made of the Shear-Stress Transport (SST) two-equation turbulence model in the DPLR code. Based upon surface temperature comparisons at eleven thermocouple locations, the algebraic-model turbulent CFD results average 4% lower than the measurements for Mach numbers less than 11. For Mach numbers greater than 11, the algebraic-model turbulent CFD results average 5% higher than the three available thermocouple measurements. Surface temperature predictions from the two SST cases were consistently 3-4% higher than the algebraic-model results. The thermocouple temperatures exhibit a change in trend with Mach number at about Mach 11; this trend is not reflected in the CFD results. Because the temperature trends from the turbulent CFD simulations and the flight data diverge above Mach 11, extrapolation of the turbulent CFD accuracy to higher Mach numbers is not recommended.
Predicting Increased Blood Pressure Using Machine Learning
Golino, Hudson Fernandes; Amaral, Liliany Souza de Brito; Duarte, Stenio Fernando Pimentel; Soares, Telma de Jesus; dos Reis, Luciana Araujo
2014-01-01
The present study investigates the prediction of increased blood pressure by body mass index (BMI), waist (WC) and hip circumference (HC), and waist-hip ratio (WHR) using a machine learning technique named classification tree. Data were collected from 400 college students (56.3% women) from 16 to 63 years old. Fifteen trees were calculated in the training group for each sex, using different numbers and combinations of predictors. The results show that for women BMI, WC, and WHR are the combination that produces the best prediction, since it has the lowest deviance (87.42) and misclassification (.19), and the highest pseudo R2 (.43). This model presented a sensitivity of 80.86% and specificity of 81.22% in the training set and, respectively, 45.65% and 65.15% in the test sample. For men BMI, WC, HC, and WHR showed the best prediction, with the lowest deviance (57.25) and misclassification (.16), and the highest pseudo R2 (.46). This model had a sensitivity of 72% and specificity of 86.25% in the training set and, respectively, 58.38% and 69.70% in the test set. Finally, the result from the classification tree analysis was compared with traditional logistic regression, indicating that the former outperformed the latter in terms of predictive power. PMID:24669313
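The validation metrics quoted above (sensitivity and specificity) follow directly from the confusion-matrix counts of a fitted classifier. A minimal sketch with illustrative labels (1 = increased blood pressure), not the study's actual predictions:

```python
def sens_spec(pred, actual):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary predictions, the metrics reported for the classification trees."""
    tp = sum(p == 1 and a == 1 for p, a in zip(pred, actual))
    tn = sum(p == 0 and a == 0 for p, a in zip(pred, actual))
    fp = sum(p == 1 and a == 0 for p, a in zip(pred, actual))
    fn = sum(p == 0 and a == 1 for p, a in zip(pred, actual))
    return tp / (tp + fn), tn / (tn + fp)
```

The drop from training (80.86%/81.22%) to test (45.65%/65.15%) for the women's model is exactly the kind of gap these two numbers expose.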
Qiu, Mingyue; Song, Yu
2016-01-01
In the business sector, it has always been a difficult task to predict the exact daily price of the stock market index; hence, there is a great deal of research being conducted regarding the prediction of the direction of stock price index movement. Many factors such as political events, general economic conditions, and traders' expectations may have an influence on the stock market index. There are numerous research studies that use similar indicators to forecast the direction of the stock market index. In this study, we compare two basic types of input variables to predict the direction of the daily stock market index. The main contribution of this study is the ability to predict the direction of the next day's price of the Japanese stock market index by using an optimized artificial neural network (ANN) model. To improve the prediction accuracy of the trend of the stock market index in the future, we optimize the ANN model using genetic algorithms (GA). We demonstrate and verify the predictability of stock price direction by using the hybrid GA-ANN model and then compare the performance with prior studies. Empirical results show that the Type 2 input variables can generate a higher forecast accuracy and that it is possible to enhance the performance of the optimized ANN model by selecting input variables appropriately.
Predicting arsenic in drinking water wells of the Central Valley, California
Ayotte, Joseph; Nolan, Bernard T.; Gronberg, JoAnn M.
2016-01-01
Probabilities of arsenic in groundwater at depths used for domestic and public supply in the Central Valley of California are predicted using weak-learner ensemble models (boosted regression trees, BRT) and more traditional linear models (logistic regression, LR). Both methods captured major processes that affect arsenic concentrations, such as the chemical evolution of groundwater, redox differences, and the influence of aquifer geochemistry. Inferred flow-path length was the most important variable, but near-surface aquifer geochemical data also were significant. A unique feature of this study was that previously predicted nitrate concentrations in three dimensions were themselves predictive of arsenic and indicated an important redox effect at >10 μg/L: arsenic was low where nitrate was high. Additionally, a variable representing three-dimensional aquifer texture from the Central Valley Hydrologic Model was an important predictor, indicating high arsenic associated with fine-grained aquifer sediment. BRT outperformed LR at the 5 μg/L threshold in all five predictive performance measures and at 10 μg/L in four out of five measures. BRT yielded higher prediction sensitivity (39%) than LR (18%) at the 10 μg/L threshold, a useful outcome because a major objective of the modeling was to improve our ability to predict high-arsenic areas.
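The sensitivity figures quoted above follow from an ordinary confusion-matrix calculation once predicted probabilities are cut at a classification threshold. A small self-contained sketch (the toy concentrations and probabilities are invented, not the study's data):

```python
def threshold_metrics(measured, predicted_prob, conc_threshold, prob_cut=0.5):
    """Sensitivity and specificity for 'high arsenic' defined by a concentration threshold."""
    tp = fp = tn = fn = 0
    for conc, p in zip(measured, predicted_prob):
        high = conc > conc_threshold          # observed exceedance
        flagged = p >= prob_cut               # model calls it high
        if high and flagged:
            tp += 1
        elif high:
            fn += 1
        elif flagged:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Invented example: four wells, arsenic in ug/L, with model probabilities of exceedance.
sens, spec = threshold_metrics([12, 3, 15, 4], [0.8, 0.2, 0.4, 0.6], conc_threshold=10)
```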
Establishment of a mathematic model for predicting malignancy in solitary pulmonary nodules.
Zhang, Man; Zhuo, Na; Guo, Zhanlin; Zhang, Xingguang; Liang, Wenhua; Zhao, Sheng; He, Jianxing
2015-10-01
The aim of this study was to establish a model for predicting the probability of malignancy in solitary pulmonary nodules (SPNs) and provide guidance for the diagnosis and follow-up intervention of SPNs. We retrospectively analyzed the clinical data and computed tomography (CT) images of 294 patients with a clear pathological diagnosis of SPN. Multivariate logistic regression analysis was used to screen independent predictors of the probability of malignancy in the SPN and to establish a model for predicting malignancy in SPNs. Then, another 120 SPN patients who did not participate in the model establishment were chosen as group B and used to verify the accuracy of the prediction model. Multivariate logistic regression analysis showed that there were significant differences in age, smoking history, maximum diameter of nodules, spiculation, clear borders, and Cyfra21-1 levels between subgroups with benign and malignant SPNs (P<0.05). These factors were identified as independent predictors of malignancy in SPNs. The area under the curve (AUC) was 0.910 [95% confidence interval (CI), 0.857-0.963] for the model with Cyfra21-1, significantly better than 0.812 (95% CI, 0.763-0.861) for the model without Cyfra21-1 (P=0.008). The area under the receiver operating characteristic (ROC) curve of our model was also significantly higher than those of the Mayo, VA, and Peking University People's Hospital (PKUPH) models. When our model (AUC=0.910) was compared with the Brock model (AUC=0.878), the difference was not statistically significant (P=0.350). Thus, adding Cyfra21-1 improved the model's predictive ability. The prediction model established in this study can be used to assess the probability of malignancy in SPNs, thereby providing help for the diagnosis of SPNs and the selection of follow-up interventions.
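The AUC comparisons above rest on the standard rank interpretation of the ROC area: the probability that a randomly chosen malignant nodule scores higher than a randomly chosen benign one. A minimal sketch (toy labels and scores, not the study's data):

```python
def auc(labels, scores):
    """ROC AUC via the rank statistic: P(score_malignant > score_benign), ties counting half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]   # malignant cases
    neg = [s for l, s in zip(labels, scores) if l == 0]   # benign cases
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: 1 = malignant, 0 = benign; scores are model probabilities.
value = auc([1, 1, 0, 0], [0.9, 0.4, 0.4, 0.1])
```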
Predicting watershed acidification under alternate rainfall conditions
Huntington, T.G.
1996-01-01
The effect of alternate rainfall scenarios on acidification of a forested watershed subjected to chronic acidic deposition was assessed using the model of acidification of groundwater in catchments (MAGIC). The model was calibrated at the Panola Mountain Research Watershed, near Atlanta, Georgia, U.S.A. using measured soil properties, wet and dry deposition, and modeled hydrologic routing. Model forecast simulations were evaluated to compare alternate temporal averaging of rainfall inputs and variations in rainfall amount and seasonal distribution. Soil water alkalinity was predicted to decrease to substantially lower concentrations under lower rainfall compared with current or higher rainfall conditions. Soil water alkalinity was also predicted to decrease to lower levels when the majority of rainfall occurred during the growing season compared with other rainfall distributions. Changes in rainfall distribution that result in decreases in net soil water flux will temporarily delay acidification. Ultimately, however, decreased soil water flux will result in larger increases in soil-adsorbed sulfur and soil-water sulfate concentrations and decreases in alkalinity when compared to higher water flux conditions. Potential climate change resulting in significant changes in rainfall amounts, seasonal distribution of rainfall, or evapotranspiration will change net soil water flux and, consequently, will affect the dynamics of the acidification response to continued sulfate loading.
NASA Astrophysics Data System (ADS)
Ghani, A. H. A.; Lihan, T.; Rahim, S. A.; Musthapha, M. A.; Idris, W. M. R.; Rahman, Z. A.
2013-11-01
Soil erosion and sediment yield are strongly affected by land use change. Spatially distributed erosion models are of great interest to predict soil erosion loss and sediment yield. Hence, the objective of this study was to determine sediment yield (SY) using the Revised Universal Soil Loss Equation (RUSLE) model in a Geographical Information System (GIS) environment at Cameron Highlands, Pahang, Malaysia. The RUSLE factors were computed by utilizing information on rainfall erosivity (R) using interpolation of rainfall data, soil erodibility (K) using a soil map and field measurement, vegetation cover (C) using satellite images, slope length and steepness (LS) using a contour map, and conservation practices (P) using satellite images based on land use/land cover. Field observations were also done to verify the predicted sediment yield. The results indicated that the rate of sediment yield in the study area ranged from very low to extremely high. Higher SY values were found at the middle and lower catchments of Cameron Highlands, whereas lower SY values were found in the northern part of the study area. Sediment yield turned out to be higher close to the river due to the topographic characteristics, vegetation type and density, climate, and land use within the drainage basin.
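The per-cell calculation behind a RUSLE map is simply the product of the five factors, A = R × K × LS × C × P. A minimal sketch with invented factor values (the units follow common RUSLE practice; the numbers are illustrative only):

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE average annual soil loss: A = R * K * LS * C * P (e.g. t/ha/yr)."""
    return R * K * LS * C * P

# Hypothetical factor values for two raster cells (R and K in their usual
# RUSLE units; LS, C, and P are dimensionless).
cells = [
    {"R": 9000, "K": 0.05, "LS": 1.2, "C": 0.1, "P": 1.0},   # gentle slope, dense cover
    {"R": 9000, "K": 0.05, "LS": 4.8, "C": 0.3, "P": 0.8},   # steep slope, sparse cover
]
losses = [rusle_soil_loss(**c) for c in cells]
```

In a GIS workflow each factor would be a raster layer and the multiplication would be carried out cell by cell.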
Anger: cause or consequence of posttraumatic stress? A prospective study of Dutch soldiers.
Lommen, Miriam J J; Engelhard, Iris M; van de Schoot, Rens; van den Hout, Marcel A
2014-04-01
Many studies have shown that individuals with posttraumatic stress disorder (PTSD) experience more anger over time and across situations (i.e., trait anger) than trauma-exposed individuals without PTSD. There is a lack of prospective research, however, that considers anger levels before trauma exposure. The aim of this study was to prospectively assess the relationship between trait anger and PTSD symptoms, with several known risk factors, including baseline symptoms, neuroticism, and stressor severity in the model. Participants were 249 Dutch soldiers tested approximately 2 months before and approximately 2 months and 9 months after their deployment to Afghanistan. Trait anger and PTSD symptom severity were measured at all assessments. Structural equation modeling including cross-lagged effects showed that higher trait anger before deployment predicted higher PTSD symptoms 2 months after deployment (β = .36), with stressor severity and baseline symptoms in the model, but not with neuroticism in the model. Trait anger at 2 months postdeployment did not predict PTSD symptom severity at 9 months, and PTSD symptom severity 2 months postdeployment did not predict subsequent trait anger scores. Findings suggest that trait anger may be a pretrauma vulnerability factor for PTSD symptoms, but does not add variance beyond the effect of neuroticism. Copyright © 2014 International Society for Traumatic Stress Studies.
Pothula, Venu M.; Yuan, Stanley C.; Maerz, David A.; Montes, Lucresia; Oleszkiewicz, Stephen M.; Yusupov, Albert; Perline, Richard
2015-01-01
Background Advanced predictive analytical techniques are being increasingly applied to clinical risk assessment. This study compared a neural network model to several other models in predicting the length of stay (LOS) in the cardiac surgical intensive care unit (ICU) based on pre-incision patient characteristics. Methods Thirty-six variables collected from 185 cardiac surgical patients were analyzed for contribution to ICU LOS. The Automatic Linear Modeling (ALM) module of IBM-SPSS software identified 8 factors with statistically significant associations with ICU LOS; these factors were also analyzed with the Artificial Neural Network (ANN) module of the same software. The weighted contributions of each factor (“trained” data) were then applied to data for a “new” patient to predict ICU LOS for that individual. Results Factors identified in the ALM model were: use of an intra-aortic balloon pump; O2 delivery index; age; use of positive cardiac inotropic agents; hematocrit; serum creatinine ≥ 1.3 mg/deciliter; gender; and arterial pCO2. The r2 value for ALM prediction of ICU LOS in the initial (training) model was 0.356, p < 0.0001. Cross validation in prediction of a “new” patient yielded r2 = 0.200, p < 0.0001. The same 8 factors analyzed with ANN yielded a training prediction r2 of 0.535 (p < 0.0001) and a cross-validation prediction r2 of 0.410, p < 0.0001. Two additional predictive algorithms were studied, but they had lower prediction accuracies. Our validated neural network model identified the upper quartile of ICU LOS with an odds ratio of 9.8 (p < 0.0001). Conclusions ANN demonstrated a 2-fold greater accuracy than ALM in prediction of observed ICU LOS. This greater accuracy would be presumed to result from the capacity of ANN to capture nonlinear effects and higher order interactions. Predictive modeling may be of value in early anticipation of risks of post-operative morbidity and utilization of ICU facilities. PMID:26710254
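The r² values reported above compare predicted and observed ICU LOS; the computation itself is the usual coefficient of determination. A small sketch with invented numbers (not the study's data):

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))  # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)              # total sum of squares
    return 1 - ss_res / ss_tot

# Invented observed vs. predicted LOS values (days), for illustration only.
r2 = r_squared([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
```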
NASA Astrophysics Data System (ADS)
Kurtulus, Bedri; Razack, Moumtaz
2010-02-01
This paper compares two methods for modeling karst aquifers, which are heterogeneous, highly non-linear, hierarchical systems. There is a clear need to model these systems given the crucial role they play in water supply in many countries. In recent years, the main components of soft computing (fuzzy logic (FL) and artificial neural networks (ANNs)) have come to prevail in the modeling of complex non-linear systems in different scientific and technological disciplines. In this study, Artificial Neural Network (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS) methods were used for the prediction of the daily discharge of karstic aquifers, and their capabilities were compared. The approach was applied to 7 years of daily data from the La Rochefoucauld karst system in south-western France. In order to predict the karst daily discharges, single-input (rainfall or piezometric level) vs. multiple-input (rainfall and piezometric level) series were used. In addition to these inputs, all models used measured or simulated discharges from the previous days with a specified delay. The models were designed in a Matlab™ environment. An automatic procedure was used to select the best calibrated models. Daily discharge predictions were then performed using the calibrated models. Comparing predicted and observed hydrographs indicates that both models (ANN and ANFIS) provide close predictions of the karst daily discharges. The summary statistics of both series (observed and predicted daily discharges) are comparable. The performance of both models improves when the number of inputs is increased from one to two. The root mean square error between the observed and predicted series reaches a minimum for two-input models. However, the ANFIS model demonstrates a better performance than the ANN model in predicting peak flow, with a better generalization capability and slightly higher performance than the ANN, especially for peak discharges.
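The model comparison above hinges on the root mean square error between observed and predicted discharge series; for reference, a minimal sketch (the toy discharge values are invented, not the karst data):

```python
import math

def rmse(observed, predicted):
    """Root mean square error between two equal-length series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed))

# Invented daily discharges: a two-input model tracking observations more closely
# than a single-input model, mirroring the paper's qualitative finding.
obs = [5.0, 6.0, 9.0, 7.0]
one_input = [5.5, 5.0, 7.5, 7.5]
two_input = [5.2, 5.8, 8.6, 7.1]
```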
Sarwar, Golam; Gantt, Brett; Schwede, Donna; Foley, Kristen; Mathur, Rohit; Saiz-Lopez, Alfonso
2015-08-04
The fate of ozone in marine environments has been receiving increased attention due to the tightening of ambient air quality standards. The roles of deposition and halogen chemistry are examined through the incorporation of an enhanced ozone deposition algorithm and the inclusion of halogen chemistry in a comprehensive atmospheric modeling system. The enhanced ozone deposition treatment accounts for the interaction of iodide in seawater with ozone and increases deposition velocities by an order of magnitude. Halogen chemistry includes detailed chemical reactions of organic and inorganic bromine and iodine species. Two different simulations are completed with the halogen chemistry: without and with the photochemical reactions of higher iodine oxides. Enhanced deposition reduces mean summertime surface ozone by ∼3% over marine regions in the Northern Hemisphere. Halogen chemistry without the photochemical reactions of higher iodine oxides reduces surface ozone by ∼15%, whereas simulations with these reactions indicate ozone reductions of ∼48%. The model without these processes overpredicts ozone compared to observations, whereas their inclusion improves predictions. The inclusion of photochemical reactions for higher iodine oxides leads to ozone predictions that are lower than observations, underscoring the need for further refinement of the halogen emissions and chemistry scheme in the model.
An analysis of the productivity of a CELSS continuous algal culture system
NASA Technical Reports Server (NTRS)
Radmer, R.; Behrens, P.; Fernandez, E.; Arnett, K.
1986-01-01
One of the most attractive aspects of using algal cultures as plant components for a Closed Ecological Life Support System (CELSS) is the efficiency with which they can be grown. Although algae are not necessarily intrinsically more efficient than higher plants, the ease with which they can be handled and manipulated (more like chemical reagents than plants), and the culturing techniques available, result in much higher growth rates than are usually attainable with higher plants. Furthermore, preliminary experiments have demonstrated that algal growth and physiology are not detectably altered in a microgravity environment (1), whereas the response of higher plants to zero gravity is unknown. In order to rationally design and operate culture systems, it is necessary to understand how the macroparameters of a culture system, e.g., productivity, are related to the physiological aspects of the algal culture. A first-principles analysis of the culture system is discussed, and a mathematical model that describes the relationship of culture productivity to the cell concentration of a light-limited culture is derived. The predicted productivity vs. cell concentration curve agrees well with the experimental data obtained to test this model, indicating that this model permits an accurate prediction of culture productivity given the growth parameters of the system.
Safiuddin, Md.; Raman, Sudharshan N.; Abdus Salam, Md.; Jumaat, Mohd. Zamin
2016-01-01
Modeling is a very useful method for the performance prediction of concrete. Most of the models available in the literature are related to the compressive strength because it is a major mechanical property used in concrete design. Many attempts have been made to develop suitable mathematical models for the prediction of compressive strength of different concretes, but not for self-consolidating high-strength concrete (SCHSC) containing palm oil fuel ash (POFA). The present study has used artificial neural networks (ANN) to predict the compressive strength of SCHSC incorporating POFA. The ANN model has been developed and validated in this research using the mix proportioning and experimental strength data of 20 different SCHSC mixes. Seventy percent (70%) of the data were used to carry out the training of the ANN model. The remaining 30% of the data were used for testing the model. The training of the ANN model was stopped when the root mean square error (RMSE) and the percentage of good patterns were 0.001 and ≈100%, respectively. The predicted compressive strength values obtained from the trained ANN model were much closer to the experimental values of compressive strength. The coefficient of determination (R²) for the relationship between the predicted and experimental compressive strengths was 0.9486, which shows the high degree of accuracy of the network pattern. Furthermore, the predicted compressive strength was found to be very close to the experimental compressive strength during the testing process of the ANN model. The absolute and percentage relative errors in the testing process were significantly low, with mean values of 1.74 MPa and 3.13%, respectively, which indicated that the compressive strength of SCHSC including POFA can be efficiently predicted by the ANN. PMID:28773520
Theran, Sally A
2009-09-01
The current study empirically examined predictors of level of voice (ethnicity, attachment, and gender role socialization) in a diverse sample of 108 14-year-old girls. Structural equation modeling results indicated that parental attachment predicted level of voice with authority figures, and gender role socialization predicted level of voice with authority figures and peers. Both masculinity and femininity were salient for higher levels of voice with authority figures whereas higher scores on masculinity contributed to higher levels of voice with peers. These findings suggest that, contrary to previous theoretical work, femininity itself is not a risk factor for low levels of voice. In addition, African-American girls had higher levels of voice with teachers and classmates than did Caucasian girls, and girls who were in a school with a greater concentration of ethnic minorities had higher levels of voice with peers than did girls at a school with fewer minority students.
A multivariable model for predicting the frictional behaviour and hydration of the human skin.
Veijgen, N K; van der Heide, E; Masen, M A
2013-08-01
The frictional characteristics of skin-object interactions are important when handling objects, in the assessment of perception and comfort of products and materials, and in the origins and prevention of skin injuries. In this study, based on statistical methods, a quantitative model is developed that describes the friction behaviour of human skin as a function of the subject characteristics, contact conditions, the properties of the counter material, and environmental conditions. Although the frictional behaviour of human skin is a multivariable problem, in the literature the variables that are associated with skin friction have been studied using univariable methods. In this work, multivariable models for the static and dynamic coefficients of friction as well as for the hydration of the skin are presented. A total of 634 skin-friction measurements were performed using a recently developed tribometer. Using a statistical analysis, previously defined potential influential variables were linked to the static and dynamic coefficients of friction and to the hydration of the skin, resulting in three quantitative predictive models that describe the friction behaviour and the hydration of human skin, respectively. Increased dynamic coefficients of friction were obtained from older subjects, on the index finger, with materials with a higher surface energy, and at higher room temperatures, whereas lower dynamic coefficients of friction were obtained at lower skin temperatures, on the temple, and with rougher contact materials. The static coefficient of friction increased with higher skin hydration, increasing age, on the index finger, with materials with a higher surface energy, and at higher ambient temperatures. The hydration of the skin was associated with the skin temperature, anatomical location, presence of hair on the skin, and the relative air humidity. Predictive models have been derived for the static and dynamic coefficients of friction using a multivariable approach.
These two coefficients of friction show a strong correlation; consequently, the two multivariable models resemble each other, with the static coefficient of friction being on average 18% lower than the dynamic coefficient of friction. The multivariable models in this study can be used to describe the data set on which the study was based; care should be taken when generalising these results. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Asher, Anthony L; Devin, Clinton J; Archer, Kristin R; Chotai, Silky; Parker, Scott L; Bydon, Mohamad; Nian, Hui; Harrell, Frank E; Speroff, Theodore; Dittus, Robert S; Philips, Sharon E; Shaffrey, Christopher I; Foley, Kevin T; McGirt, Matthew J
2017-10-01
OBJECTIVE Current costs associated with spine care are unsustainable. Productivity loss and time away from work for patients who were once gainfully employed contribute greatly to the financial burden experienced by individuals and, more broadly, society. Therefore, it is vital to identify the factors associated with return to work (RTW) after lumbar spine surgery. In this analysis, the authors used data from a national prospective outcomes registry to create a predictive model of patients' ability to RTW after undergoing lumbar spine surgery for degenerative spine disease. METHODS Data from 4694 patients who underwent elective spine surgery for degenerative lumbar disease, who had been employed preoperatively, and who had completed a 3-month follow-up evaluation, were entered into a prospective, multicenter registry. Patient-reported outcomes (Oswestry Disability Index (ODI), numeric rating scale (NRS) scores for back pain (BP) and leg pain (LP), and EQ-5D) were recorded at baseline and at 3 months postoperatively. The time to RTW was defined as the period between operation and date of returning to work. A multivariable Cox proportional hazards regression model, including an array of preoperative factors, was fitted for RTW. The model performance was measured using the concordance index (c-index). RESULTS Eighty-two percent of patients (n = 3855) returned to work within 3 months postoperatively. The risk-adjusted predictors of a lower likelihood of RTW were being preoperatively employed but not working at the time of presentation, manual labor as an occupation, worker's compensation, liability insurance for disability, higher preoperative ODI score, higher preoperative NRS-BP score, and demographic factors such as female sex, African American race, history of diabetes, and higher American Society of Anesthesiologists score.
The likelihood of RTW within 3 months was higher in patients with a higher education level than in those with less than a high-school education. The c-index of the model's performance was 0.71. CONCLUSIONS This study presents a novel predictive model for the probability of returning to work after lumbar spine surgery. Spine care providers can use this model to educate patients and encourage them in shared decision-making regarding the RTW outcome. This evidence-based decision support will result in better communication between patients and clinicians and improve postoperative recovery expectations, which will ultimately increase the likelihood of a positive RTW trajectory.
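The concordance index reported above (0.71) measures how often, across comparable patient pairs, the model assigns the higher risk of the event to the patient who experienced it sooner. A simplified sketch that ignores censoring (a real Cox-model c-index must handle censored follow-up; the toy times and risk scores are invented):

```python
def concordance_index(times, risks):
    """Fraction of comparable pairs in which the higher-risk subject has the shorter
    event time. Simplified: assumes no censoring; ties in risk count as half."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue                      # tied times are not comparable here
            comparable += 1
            shorter, longer = (i, j) if times[i] < times[j] else (j, i)
            if risks[shorter] > risks[longer]:
                concordant += 1
            elif risks[shorter] == risks[longer]:
                concordant += 0.5
    return concordant / comparable
```

A c-index of 1.0 means risk ordering matches event ordering perfectly; 0.5 is no better than chance.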
Numerical simulation of experiments in the Giant Planet Facility
NASA Technical Reports Server (NTRS)
Green, M. J.; Davy, W. C.
1979-01-01
Utilizing a series of existing computer codes, ablation experiments in the Giant Planet Facility are numerically simulated. Of primary importance is the simulation of the low Mach number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time codes used for the Jupiter probe design have been compared with experiments.
Zhuang, Chengxu; Wang, Yulong; Yamins, Daniel; Hu, Xiaolin
2017-01-01
Visual information in the visual cortex is processed in a hierarchical manner. Recent studies show that higher visual areas, such as V2, V3, and V4, respond more vigorously to images with naturalistic higher-order statistics than to images lacking them. This property is a functional signature of higher areas, as it is much weaker or even absent in the primary visual cortex (V1). However, the mechanism underlying this signature remains elusive. We studied this problem using computational models. In several typical hierarchical visual models including the AlexNet, VggNet, and SHMAX, this signature was found to be prominent in higher layers but much weaker in lower layers. By changing both the model structure and experimental settings, we found that the signature strongly correlated with sparse firing of units in higher layers but not with any other factors, including model structure, training algorithm (supervised or unsupervised), receptive field size, and property of training stimuli. The results suggest an important role of sparse neuronal activity underlying this special feature of higher visual areas.
NASA Astrophysics Data System (ADS)
Jia, Song; Xu, Tian-he; Sun, Zhang-zhen; Li, Jia-jing
2017-02-01
UT1-UTC is an important part of the Earth Orientation Parameters (EOP). High-precision predictions of UT1-UTC play a key role in practical applications such as deep space exploration, spacecraft tracking, and satellite navigation and positioning. In this paper, a new prediction method combining the Grey Model (GM(1,1)) and the Autoregressive Integrated Moving Average (ARIMA) model is developed. The main idea is as follows. First, the UT1-UTC data are preprocessed by removing the leap seconds and the Earth's zonal harmonic tidal terms to obtain UT1R-TAI data. Periodic terms are estimated and removed by least squares to obtain UT2R-TAI. Then the linear terms of the UT2R-TAI data are modeled by GM(1,1), and the residual terms are modeled by ARIMA. Finally, the UT2R-TAI prediction is performed based on the combined GM(1,1) and ARIMA model, and the UT1-UTC predictions are obtained by adding back the corresponding periodic terms, the leap second correction, and the Earth's zonal harmonic tidal correction. The results show that the proposed model can predict UT1-UTC effectively, with higher medium- and long-term (from 32 to 360 days) accuracy than the LS+AR, LS+MAR, and WLS+MAR models.
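Of the two components, GM(1,1) is compact enough to sketch in full: accumulate the series, fit the whitened equation dx1/dt + a·x1 = b by least squares on trapezoidal background values, and difference the fitted exponential to forecast. This is a generic GM(1,1), not the authors' implementation, and the test series is a toy geometric sequence rather than UT2R-TAI data:

```python
import math

def gm11_forecast(x0, steps=1):
    """Grey model GM(1,1): fit on the positive series x0, forecast `steps` values ahead."""
    n = len(x0)
    x1 = [sum(x0[: i + 1]) for i in range(n)]                 # accumulated (AGO) series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]      # trapezoidal background values
    # Least squares for a, b in x0[k] = -a * z[k] + b
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy = sum(x0[1:])
    szy = sum(v * y for v, y in zip(z, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):                                            # fitted accumulated series
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return [x1_hat(n + s - 1) - x1_hat(n + s - 2) for s in range(1, steps + 1)]

# Toy geometric series (ratio 1.5); the true next value would be 5.0625.
pred = gm11_forecast([1, 1.5, 2.25, 3.375], steps=1)[0]
```

The residuals of such a fit would then be handed to the ARIMA stage, which is omitted here for brevity.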
Forecasting peak asthma admissions in London: an application of quantile regression models.
Soyiri, Ireneous N; Reidpath, Daniel D; Sarran, Christophe
2013-07-01
Asthma is a chronic condition of great public health concern globally. The associated morbidity, mortality and healthcare utilisation place an enormous burden on healthcare infrastructure and services. This study demonstrates a multistage quantile regression approach to predicting excess demand for health care services in the form of asthma daily admissions in London, using retrospective data from the Hospital Episode Statistics, weather and air quality. Trivariate quantile regression models (QRM) of asthma daily admissions were fitted to a 14-day range of lags of environmental factors, accounting for seasonality in a hold-in sample of the data. Representative lags were pooled to form multivariate predictive models, selected through a systematic backward stepwise reduction approach. Models were cross-validated using a hold-out sample of the data, and their respective root mean square error measures, sensitivity, specificity and predictive values were compared. Two of the predictive models were able to detect extreme numbers of daily asthma admissions at sensitivity levels of 76% and 62%, as well as specificities of 66% and 76%. Their positive predictive values were slightly higher for the hold-out sample (29% and 28%) than for the hold-in model development sample (16% and 18%). QRMs can thus be used in multiple stages to select suitable variables to forecast extreme asthma events. The associations between asthma and environmental factors, including temperature, ozone and carbon monoxide, can be exploited in predicting future events using QRMs.
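Quantile regression fits each conditional quantile by minimising the asymmetric "pinball" loss rather than squared error, which is what lets models like those above target peak (upper-quantile) admissions. A minimal sketch of the loss with invented admission counts:

```python
def pinball_loss(y_true, y_pred, tau):
    """Mean quantile (pinball) loss at quantile level tau (0 < tau < 1).
    Under-prediction is weighted by tau, over-prediction by (1 - tau)."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        diff = y - q
        total += tau * diff if diff >= 0 else (tau - 1) * diff
    return total / len(y_true)

# At tau = 0.9, under-predicting admissions costs 9x more than over-predicting
# by the same amount, pushing the fitted curve toward the upper tail.
under = pinball_loss([30], [20], 0.9)   # model predicted too few admissions
over = pinball_loss([20], [30], 0.9)    # model predicted too many
```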
What can 35 years and over 700,000 measurements tell us about noise exposure in the mining industry?
Roberts, Benjamin; Sun, Kan; Neitzel, Richard L.
2017-01-01
Objective To analyze over 700,000 cross-sectional measurements from the Mine Safety and Health Administration (MSHA) and develop statistical models to predict noise exposure for a worker. Design Descriptive statistics were used to summarize the data. Two linear regression models were used to predict noise exposure based on the MSHA permissible exposure limit (PEL) and action level (AL), respectively. Two-fold cross-validation was used to compare the exposure estimates from the models with actual measurements in the holdout data. The mean difference and t-statistic were calculated for each job title to determine whether the model exposure predictions differed significantly from the actual data. Study Sample Measurements were acquired from MSHA through a Freedom of Information Act request. Results From 1979 to 2014 the average noise measurement decreased. Measurements taken before the implementation of MSHA's revised noise regulation in 2000 were on average 4.5 dBA higher than after the law came into effect. Both models produced mean exposure predictions that differed by less than 1 dBA from the holdout data. Conclusion Overall noise levels in mines have been decreasing. However, this decrease has not been uniform across all mining sectors. The exposure predictions from the model will be useful in predicting hearing loss in workers in the mining industry. PMID:27871188
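The two-fold cross-validation used above can be sketched with a toy univariate regression (an assumption for illustration; the study's actual models had more predictors): fit on one half, predict the held-out half, and report the mean difference per fold:

```python
def fit_line(xs, ys):
    # ordinary least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def twofold_cv_mean_diff(xs, ys):
    """Two-fold CV: fit on one half, predict the held-out half, and
    return the mean (predicted - actual) difference for each fold."""
    half = len(xs) // 2
    splits = [(slice(0, half), slice(half, None)),
              (slice(half, None), slice(0, half))]
    diffs = []
    for train, test in splits:
        a, b = fit_line(xs[train], ys[train])
        preds = [a + b * x for x in xs[test]]
        diffs.append(sum(p - y for p, y in zip(preds, ys[test])) / len(preds))
    return diffs
```

A mean difference near zero in both folds, as the study reports (< 1 dBA), indicates the model is not systematically over- or under-predicting the held-out exposures.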
Cheng, G.; Hu, X. H.; Choi, K. S.; ...
2017-07-08
Ductile fracture is a local phenomenon, and it is well established that fracture strain levels depend on both stress triaxiality and the resolution (grid size) of strain measurements. Two-dimensional plane strain post-necking models with different model sizes are used in this paper to predict the grid-size-dependent fracture strain of a commercial dual-phase steel, DP980. The models are generated from the actual microstructures, and the individual phase flow properties and literature-based individual phase damage parameters for the Johnson–Cook model are used for ferrite and martensite. A monotonic relationship is predicted: the smaller the model size, the higher the fracture strain. Thus, a general framework is developed to quantify the grid-size-dependent fracture strains for multiphase materials. In addition to the grid-size dependency, the influences of intrinsic microstructure features, i.e., the flow curve and fracture strains of the two constituent phases, on the predicted fracture strains also are examined. Finally, application of the derived fracture strain versus model size relationship is demonstrated with large clearance trimming simulations with different element sizes.
PySeqLab: an open source Python package for sequence labeling and segmentation.
Allam, Ahmed; Krauthammer, Michael
2017-11-01
Text and genomic data are composed of sequential tokens, such as words and nucleotides, that give rise to higher order syntactic constructs. In this work, we aim at providing a comprehensive Python library implementing conditional random fields (CRFs), a class of probabilistic graphical models, for robust prediction of these constructs from sequential data. Python Sequence Labeling (PySeqLab) is an open source package for performing supervised learning in structured prediction tasks. It implements CRF models, that is, discriminative models ranging (i) from first-order to higher-order linear-chain CRFs, and (ii) from first-order to higher-order semi-Markov CRFs (semi-CRFs). Moreover, it provides multiple learning algorithms for estimating model parameters, such as (i) stochastic gradient descent (SGD) and its multiple variations, (ii) structured perceptron with multiple averaging schemes supporting exact and inexact search using the 'violation-fixing' framework, (iii) the search-based probabilistic online learning algorithm (SAPO) and (iv) an interface for the Broyden-Fletcher-Goldfarb-Shanno (BFGS) and limited-memory BFGS algorithms. Viterbi and Viterbi A* are used for inference and decoding of sequences. Using PySeqLab, we built models (classifiers) and evaluated their performance in three different domains: (i) biomedical natural language processing (NLP), (ii) predictive DNA sequence analysis and (iii) human activity recognition (HAR). State-of-the-art performance comparable to machine-learning based systems was achieved in the three domains without feature engineering or the use of knowledge sources. PySeqLab is available through https://bitbucket.org/A_2/pyseqlab with tutorials and documentation. ahmed.allam@yale.edu or michael.krauthammer@yale.edu. Supplementary data are available at Bioinformatics online.
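Viterbi decoding, the inference step named above, can be sketched in a few lines; this toy uses HMM-style probabilities rather than PySeqLab's feature-based CRF scores, but the dynamic program is the same:

```python
import math

def viterbi(obs, states, start, trans, emit):
    """Most-likely state sequence by Viterbi decoding over log-probabilities.
    Toy HMM-style parameterization, not PySeqLab's API."""
    # V[t][s]: best log-probability of any path ending in state s at step t
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for t in range(1, len(obs)):
        back.append({})
        V.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans[p][s]))
            back[-1][s] = prev
            V[t][s] = V[t - 1][prev] + math.log(trans[prev][s]) \
                + math.log(emit[s][obs[t]])
    # trace the best path backwards from the best final state
    path = [max(states, key=lambda s: V[-1][s])]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return list(reversed(path))
```

On the classic healthy/fever example, observing normal, cold, dizzy decodes to Healthy, Healthy, Fever.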
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, G.; Hu, X. H.; Choi, K. S.
Ductile fracture is a local phenomenon, and it is well established that fracture strain levels depend on both stress triaxiality and the resolution (grid size) of strain measurements. Two-dimensional plane strain post-necking models with different representative volume element (RVE) sizes are used to predict the size-dependent fracture strain of a commercial dual-phase steel, DP980. The models are generated from the actual microstructures, and the individual phase flow properties and literature-based individual phase damage parameters for the Johnson-Cook model are used for ferrite and martensite. A monotonic relationship is predicted: the smaller the model size, the higher the fracture strain. Thus, a general framework is developed to quantify the size-dependent fracture strains for multiphase materials. In addition to the RVE sizes, the influences of intrinsic microstructure features, i.e., the flow curve and fracture strains of the two constituent phases, on the predicted fracture strains also are examined. Application of the derived fracture strain versus RVE size relationship is demonstrated with large clearance trimming simulations with different element sizes.
NASA Astrophysics Data System (ADS)
de Campos, Luana Janaína; de Melo, Eduardo Borges
2017-08-01
In the present study, 199 compounds derived from pyrimidine, pyrimidone and pyridopyrazine carboxamides with inhibitory activity against HIV-1 integrase were modeled. Subsequently, a multivariate QSAR study was conducted on 54 molecules, employing Ordered Predictors Selection (OPS) and Partial Least Squares (PLS) for variable selection and model construction, respectively. Topological, electrotopological, geometric, and molecular descriptors were used. The selected real model was robust and free from chance correlation; in addition, it demonstrated favorable internal and external statistical quality. Once statistically validated, the training model was used to predict the activity of a second data set (n = 145). The root mean square deviation (RMSD) between observed and predicted values was 0.698. Although this value is larger than usually accepted, only 15 (10.34%) of the samples exhibited residuals greater than 1 log unit, a result considered acceptable. Results of the Williams and Euclidean applicability domains for the predictions showed that the predictions did not occur by extrapolation and that the model is representative of the chemical space of the test compounds.
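The two external-validation statistics quoted above — RMSD and the count of residuals beyond 1 log unit — are straightforward to compute; a minimal sketch:

```python
import math

def rmsd_and_outliers(observed, predicted, cutoff=1.0):
    """RMSD between observed and predicted activities, plus the number
    of residuals larger than `cutoff` log units."""
    resid = [o - p for o, p in zip(observed, predicted)]
    rmsd = math.sqrt(sum(r * r for r in resid) / len(resid))
    n_out = sum(1 for r in resid if abs(r) > cutoff)
    return rmsd, n_out
```

This is why an RMSD near 0.7 can still be acceptable: the bulk of the residuals may sit well inside the 1 log unit band while a few large errors dominate the squared mean.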
PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.
Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude
2015-04-27
Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the pocket estimation methods' influence on druggability predictions by comparing statistical models constructed from pockets estimated using different pocket estimation methods: a proximity of either 4 or 5.5 Å to a cocrystallized ligand or DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors combined with a selection of the most stable and efficient models using different pocket estimation methods. PockDrug retains the best combinations of three pocket properties which impact druggability: geometry, hydrophobicity, and aromaticity. It results in an average accuracy of 87.9% ± 4.7% using a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.
Crossa, José; Campos, Gustavo de Los; Pérez, Paulino; Gianola, Daniel; Burgueño, Juan; Araus, José Luis; Makumbi, Dan; Singh, Ravi P; Dreisigacker, Susanne; Yan, Jianbing; Arief, Vivi; Banziger, Marianne; Braun, Hans-Joachim
2010-10-01
The availability of dense molecular markers has made possible the use of genomic selection (GS) for plant breeding. However, the evaluation of models for GS in real plant populations is very limited. This article evaluates the performance of parametric and semiparametric models for GS using wheat (Triticum aestivum L.) and maize (Zea mays) data in which different traits were measured in several environmental conditions. The findings, based on extensive cross-validations, indicate that models including marker information had higher predictive ability than pedigree-based models. In the wheat data set, and relative to a pedigree model, gains in predictive ability due to inclusion of markers ranged from 7.7 to 35.7%. Correlation between observed and predictive values in the maize data set achieved values up to 0.79. Estimates of marker effects were different across environmental conditions, indicating that genotype × environment interaction is an important component of genetic variability. These results indicate that GS in plant breeding can be an effective strategy for selecting among lines whose phenotypes have yet to be observed.
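The predictive-ability measure reported above for the maize data set is the correlation between observed and predicted values; as a hedged sketch (the study's cross-validation machinery is not reproduced), Pearson's r in plain Python:

```python
def pearson_r(xs, ys):
    """Pearson correlation between observed and predicted values,
    the predictive-ability measure quoted in the abstract."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

A value of 0.79, as reported, means the marker-based predictions explain roughly 62% (r squared) of the variance in the observed phenotypes.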
Challoner, Avril; Pilla, Francesco; Gill, Laurence
2015-12-01
NO₂ and particulate matter are the air pollutants of most concern in Ireland, with possible links to the higher respiratory and cardiovascular mortality and morbidity rates found in the country compared to the rest of Europe. Currently, air quality limits in Europe only cover outdoor environments, yet the quality of indoor air is an essential determinant of a person's well-being, especially since the average person spends more than 90% of their time indoors. The modelling conducted in this research aims to provide a framework for epidemiological studies by using publicly available data from fixed outdoor monitoring stations to predict indoor air quality more accurately. Predictions are made using two modelling techniques: the Personal-exposure Activity Location Model (PALM), to predict outdoor air quality at a particular building, and Artificial Neural Networks, to model the indoor/outdoor relationship of the building. This joint approach has been used to predict indoor air concentrations for three inner city commercial buildings in Dublin, where parallel indoor and outdoor diurnal monitoring had been carried out on site. This modelling methodology has been shown to provide reasonable predictions of average NO₂ indoor air quality compared to the monitored data, but did not perform well in the prediction of indoor PM2.5 concentrations. Hence, this approach could be used to determine more rigorously the NO₂ exposures of those who work and/or live in the city centre, which can then be linked to potential health impacts.
Higher-than-predicted saltation threshold wind speeds on Titan.
Burr, Devon M; Bridges, Nathan T; Marshall, John R; Smith, James K; White, Bruce R; Emery, Joshua P
2015-01-01
Titan, the largest satellite of Saturn, exhibits extensive aeolian, that is, wind-formed, dunes, features previously identified exclusively on Earth, Mars and Venus. Wind tunnel data collected under ambient and planetary-analogue conditions inform our models of aeolian processes on the terrestrial planets. However, the accuracy of these widely used formulations in predicting the threshold wind speeds required to move sand by saltation, or by short bounces, has not been tested under conditions relevant for non-terrestrial planets. Here we derive saltation threshold wind speeds under the thick-atmosphere, low-gravity and low-sediment-density conditions on Titan, using a high-pressure wind tunnel refurbished to simulate the appropriate kinematic viscosity for the near-surface atmosphere of Titan. The experimentally derived saltation threshold wind speeds are higher than those predicted by models based on terrestrial-analogue experiments, indicating the limitations of these models for such extreme conditions. The models can be reconciled with the experimental results by inclusion of the extremely low ratio of particle density to fluid density on Titan. Whereas the density ratio term enables accurate modelling of aeolian entrainment in thick atmospheres, such as those inferred for some extrasolar planets, our results also indicate that for environments with high density ratios, such as in jets on icy satellites or in tenuous atmospheres or exospheres, the correction for low-density-ratio conditions is not required.
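The terrestrial-analogue formulations discussed above descend from the classic Bagnold-style threshold relation; a hedged sketch (a textbook form with an empirical coefficient, not the specific models tested in the study) illustrates the density dependence at issue:

```python
import math

def saltation_threshold(rho_p, rho_f, d, g, A=0.1):
    """Bagnold-style threshold friction speed (m/s) for particle density
    rho_p, fluid density rho_f (kg/m^3), grain diameter d (m) and gravity
    g (m/s^2). A ~ 0.1 is the empirical coefficient for Earth air; the
    study shows such terrestrial calibrations need a density-ratio
    correction under Titan's thick-atmosphere, low-density-ratio regime."""
    return A * math.sqrt((rho_p - rho_f) / rho_f * g * d)
```

For quartz sand in Earth air (rho_p = 2650, rho_f = 1.2, d = 250 µm) this gives a threshold friction speed of roughly 0.2-0.3 m/s; on Titan the much smaller particle-to-fluid density ratio is exactly the regime where the uncorrected formula underpredicts the measured thresholds.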
NASA Astrophysics Data System (ADS)
Wan, Xiaodong; Wang, Yuanxun; Zhao, Dawei; Huang, YongAn
2017-09-01
Our study aims at developing an effective quality monitoring system in small scale resistance spot welding of titanium alloy. The measured electrical signals were interpreted in combination with the nugget development. Features were extracted from the dynamic resistance and electrode voltage curves. A higher welding current generally indicated a lower overall dynamic resistance level. A larger electrode voltage peak and higher rate of change of electrode voltage could be detected under a smaller electrode force or higher welding current condition. Variation of the extracted features and weld quality was found to be more sensitive to changes in welding current than in electrode force. Different neural network models were proposed for weld quality prediction. The back propagation neural network was more appropriate for failure load estimation. The probabilistic neural network model was more appropriate for quality level classification. A real-time and on-line weld quality monitoring system may be developed by taking advantage of both methods.
Pu, Jie; Fang, Di; Wilson, Jeffrey R
2017-02-03
The analysis of correlated binary data is commonly addressed through the use of conditional models with random effects included in the systematic component, as opposed to generalized estimating equations (GEE) models that address the random component. Since the joint distribution of the observations is usually unknown, the conditional distribution is a natural approach. Our objective was to compare the fit of different binary models for correlated data on tobacco use. We advocate that joint modeling of the mean and dispersion may at times be just as adequate. We assessed the ability of these models to account for the intraclass correlation. In so doing, we concentrated on fitting logistic regression models to address smoking behaviors. Frequentist and Bayesian hierarchical models were used to predict conditional probabilities, and the joint modeling (GLM and GAM) models were used to predict marginal probabilities. These models were fitted to National Longitudinal Study of Adolescent to Adult Health (Add Health) data on tobacco use. We found that people were less likely to smoke if they had higher income, a high school or higher education, and were religious. Individuals were more likely to smoke if they had abused drugs or alcohol, spent more time on TV and video games, or had been arrested. Moreover, individuals who drank alcohol early in life were more likely to be regular smokers. Children who experienced mistreatment from their parents were more likely to use tobacco regularly. The joint modeling of the mean and dispersion models offered a flexible and meaningful method of addressing the intraclass correlation. They do not require one to identify random effects nor distinguish one level of the hierarchy from the other. Moreover, once one can identify the significant random effects, one can obtain similar results to the random coefficient models.
We found that the set of marginal models accounting for extravariation through the additional dispersion submodel produced similar results with regards to inferences and predictions. Moreover, both marginal and conditional models demonstrated similar predictive power.
On the effect of acoustic coupling on random and harmonic plate vibrations
NASA Technical Reports Server (NTRS)
Frendi, A.; Robinson, J. H.
1993-01-01
The effect of acoustic coupling on random and harmonic plate vibrations is studied using two numerical models. In the coupled model, the plate response is obtained by integration of the nonlinear plate equation coupled with the nonlinear Euler equations for the surrounding acoustic fluid. In the uncoupled model, the nonlinear plate equation with an equivalent linear viscous damping term is integrated to obtain the response of the plate subject to the same excitation field. For a low-level, narrow-band excitation, the two models predict the same plate response spectra. As the excitation level is increased, the response power spectrum predicted by the uncoupled model becomes broader and more shifted towards the high frequencies than that obtained by the coupled model. In addition, the difference in response between the coupled and uncoupled models at high frequencies becomes larger. When a high intensity harmonic excitation is used, causing a nonlinear plate response, both models predict the same frequency content of the response. However, the levels of the harmonics and subharmonics are higher for the uncoupled model. Comparisons to earlier experimental and numerical results show that acoustic coupling has a significant effect on the plate response at high excitation levels. Its absence in previous models may explain the discrepancy between predicted and measured responses.
NASA Astrophysics Data System (ADS)
Huang, Haijun; Shu, Da; Fu, Yanan; Zhu, Guoliang; Wang, Donghong; Dong, Anping; Sun, Baode
2018-06-01
The size of the cavitation region is a key parameter for estimating the metallurgical effect of ultrasonic melt treatment (UST) on preferential structure refinement. We present a simple numerical model to predict the characteristic length of the cavitation region, termed cavitation depth, in a metal melt. The model is based on wave propagation with acoustic attenuation caused by cavitation bubbles, which depends on bubble characteristics and ultrasonic intensity. In situ synchrotron X-ray imaging of cavitation bubbles has been used to quantitatively measure the size of the cavitation region and the volume fraction and size distribution of cavitation bubbles in an Al-Cu melt. The results show that cavitation bubbles maintain a log-normal size distribution, and the volume fraction of cavitation bubbles obeys a tanh function of the applied ultrasonic intensity. Using the experimental values of bubble characteristics as input, the predicted cavitation depth agrees well with observations except for a slight deviation at higher acoustic intensities. Further analysis shows that increases in bubble volume and bubble size both lead to higher attenuation by cavitation bubbles and, hence, a smaller cavitation depth. The current model offers a guideline for implementing UST, especially for structural refinement.
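The tanh dependence of bubble volume fraction on applied intensity, reported above, can be written down directly; the constants `f_max` and `k` below are hypothetical fitted parameters, not values from the paper:

```python
import math

def bubble_volume_fraction(intensity, f_max, k):
    """Cavitation-bubble volume fraction as a tanh function of applied
    ultrasonic intensity, as the abstract reports. f_max (saturation
    fraction) and k (rise rate) are hypothetical fitted constants."""
    return f_max * math.tanh(k * intensity)
```

The functional form captures the two regimes the imaging shows: a near-linear rise at low intensity and saturation of the bubble population at high intensity, which in turn bounds the acoustic attenuation and hence the cavitation depth.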
Intensity level for exercise training in fibromyalgia by using mathematical models.
Lemos, Maria Carolina D; Valim, Valéria; Zandonade, Eliana; Natour, Jamil
2010-03-22
It has not been assessed before whether the mathematical models described in the literature for exercise prescription can be used for fibromyalgia syndrome patients. The objective of this paper was to determine how age-predicted heart rate formulas can be used with fibromyalgia syndrome populations, as well as to find out which mathematical models are more accurate for controlling exercise intensity. A total of 60 women aged 18-65 years with fibromyalgia syndrome were included; 32 were randomized to walking training at the anaerobic threshold. Age-predicted formulas for maximum heart rate ("220 minus age" and "208 minus 0.7 × age") were correlated with achieved maximum heart rate (HRMax) obtained by spiroergometry. Subsequently, six mathematical models using heart rate reserve (HRR) and age-predicted HRMax formulas were studied to estimate the intensity level of exercise training corresponding to the heart rate at anaerobic threshold (HRAT) obtained by spiroergometry. Linear and nonlinear regression models were used for correlations, and residual analysis for the adequacy of the models. Age-predicted HRMax and HRAT formulas had a good correlation with the achieved heart rate obtained in spiroergometry (r = 0.642; p < 0.05). For exercise prescription at the anaerobic threshold intensity, the percentages were 52.2-60.6% HRR and 75.5-80.9% HRMax. Formulas using HRR and the achieved HRMax showed better correlation. Furthermore, the percentages of HRMax and HRR were significantly higher for the trained individuals (p < 0.05). Age-predicted formulas can be used for estimating HRMax and for exercise prescriptions in women with fibromyalgia syndrome. Karvonen's formula using the heart rate achieved in the ergometric test showed a better correlation. For the prescription of exercise at the threshold intensity, 52% to 60% HRR or 75% to 80% HRMax must be used in sedentary women with fibromyalgia syndrome; these values are higher and must be corrected for trained patients.
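The Karvonen (heart rate reserve) calculation referred to above combines resting and maximum heart rate; a minimal sketch, with the age-predicted "220 minus age" formula as the fallback when no measured HRMax is available:

```python
def karvonen_target(age, resting_hr, pct_hrr, max_hr=None):
    """Karvonen target heart rate: resting HR plus a fraction of the
    heart rate reserve (HRMax - resting HR). Falls back to the
    age-predicted '220 - age' estimate when HRMax is not measured."""
    if max_hr is None:
        max_hr = 220 - age
    return resting_hr + pct_hrr * (max_hr - resting_hr)
```

For example, a 40-year-old with a resting heart rate of 70 bpm training at 60% HRR — the lower end of the range the study recommends for sedentary patients — would target 70 + 0.6 × (180 − 70) = 136 bpm.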
Alves, Vinicius M.; Muratov, Eugene; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander
2015-01-01
Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) developed using a large group of chemically diverse compounds. In this study, we aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use these data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow, using the random forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79%, respectively. When compared to the skin sensitization module included in the OECD QSAR toolbox as well as to the skin sensitization model in the publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the ScoreCard database of possible skin or sense organ toxicants as primary candidates for experimental validation. PMID:25560674