DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Lulin, E-mail: lulin.yuan@duke.edu; Wu, Q. Jackie; Yin, Fang-Fang
2014-02-15
Purpose: Sparing of a single-side parotid gland is a common practice in head-and-neck (HN) intensity modulated radiation therapy (IMRT) planning. It is a special case of the dose sparing tradeoff between different organs-at-risk. The authors describe an improved mathematical model for predicting achievable dose sparing in the parotid glands in HN IMRT planning that incorporates single-side sparing considerations based on patient anatomy and learning from prior plan data. Methods: Among 68 HN cases analyzed retrospectively, 35 cases had physician-prescribed single-side parotid sparing preferences. The single-side sparing model was trained with the cases that had single-side sparing preferences, while the standard model was trained with the remainder of the cases. A receiver operating characteristic (ROC) analysis was performed to determine the best criterion separating the two case groups, using the physician's single-side sparing prescription as ground truth. The final predictive model (combined model) takes single-side sparing into account by switching between the standard and single-side sparing models according to the single-side sparing criterion. The models were tested with 20 additional cases. The significance of the improvement in prediction accuracy of the combined model over the standard model was evaluated using the Wilcoxon rank-sum test. Results: By the ROC analysis, the best single-side sparing criterion is (1) the predicted median dose of one parotid is higher than 24 Gy; and (2) that of the other is higher than 7 Gy. This criterion gives a true positive rate of 0.82 and a false positive rate of 0.19. For the bilateral sparing cases, the combined and standard models performed equally well, with the median prediction error for parotid median dose being 0.34 Gy for both models (p = 0.81).
For the single-side sparing cases, the standard model overestimates the median dose by 7.8 Gy on average, while the predictions by the combined model differ from the actual values by only 2.2 Gy (p = 0.005). Similarly, the sum of residuals between the modeled and actual plan DVHs is the same for the bilateral sparing cases under both models (p = 0.67), while the standard model predicts significantly higher DVHs than the combined model for the single-side sparing cases (p = 0.01). Conclusions: The combined model for predicting parotid sparing, which takes single-side sparing into account, improves prediction accuracy over the previous model.
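The switching rule at the heart of the combined model is simple to state in code. The sketch below is illustrative only; the function name and interface are assumptions, and only the two thresholds (24 Gy and 7 Gy) come from the abstract:

```python
def choose_sparing_model(pred_median_doses_gy):
    """Pick which trained model to apply to a new case.

    pred_median_doses_gy: (dose_a, dose_b), the predicted parotid median
    doses in Gy for the two glands.  Per the ROC-derived criterion, the
    single-side sparing model is used when one parotid's predicted median
    dose exceeds 24 Gy and the other's exceeds 7 Gy.
    """
    hi, lo = max(pred_median_doses_gy), min(pred_median_doses_gy)
    return "single-side" if hi > 24.0 and lo > 7.0 else "standard"
```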
Genomic prediction in a nuclear population of layers using single-step models.
Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning
2018-02-01
The single-step genomic prediction method has been proposed to improve the accuracy of genomic prediction by incorporating information on both genotyped and ungenotyped animals. The objective of this study is to compare the prediction performance of single-step models with 2-step and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped with a 600 K SNP chip. Four traits were analyzed, i.e., body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 were used as the validation set. The accuracies of breeding values predicted by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), single-step GBLUP (SSGBLUP), and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than that of the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements in predictive ability were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate prediction of genetic merit and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE), and the Weighted Average Method (WAM). These techniques were evaluated using results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill of the multi-model predictions. This study revealed that multi-model predictions obtained from uncalibrated single-model predictions are generally better than any single member model's predictions, even the best calibrated ones. Furthermore, more sophisticated combination techniques that incorporate bias-correction steps work better than the simple multi-model average or multi-model predictions without bias correction.
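The two simplest of these schemes can be written down directly. The sketch below is a hedged illustration with function names of my own; MMSE and M3SE additionally regress member outputs against observations, which is not shown:

```python
def simple_multimodel_average(predictions):
    """SMA: unweighted mean of the member-model predictions at each time step.

    predictions: list of per-model series, all the same length.
    """
    return [sum(step) / len(step) for step in zip(*predictions)]

def weighted_average(predictions, weights):
    """WAM: convex combination of member predictions; in practice the
    weights would be derived from each model's skill over a calibration
    period (the derivation is not shown here)."""
    return [sum(w * p for w, p in zip(weights, step))
            for step in zip(*predictions)]
```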
Image Discrimination Predictions of a Single Channel Model with Contrast Gain Control
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Null, Cynthia H.
1995-01-01
Image discrimination models predict the number of just-noticeable-differences between two images. We report the predictions of a single channel model with contrast masking for a range of standard discrimination experiments. Despite its computational simplicity, this model has performed as well as a multiple channel model in an object detection task.
A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling
Tian, Shulin; Yang, Chenglin
2014-01-01
Few studies have addressed prognostics for analog circuits, and existing methods do not tie feature extraction and calculation to circuit analysis, so the fault indicator (FI) calculation often lacks a rational basis, degrading prognostic performance. To address this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Because single-component faults are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the circuit's transfer function, and builds a complex field model. A parameter scanning model in the complex field is then used to analyze the relationship between parameter variation and the degeneration of single components, yielding a more reasonable FI feature set. From this feature set, a novel model of the degeneration trend of the circuit's single components is established. Finally, a particle filter (PF) updates the model parameters and predicts the remaining useful performance (RUP) of the single components. Because the FI feature set is calculated more rationally, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853
NASA Technical Reports Server (NTRS)
Stouffer, D. C.; Sheh, M. Y.
1988-01-01
A micromechanical model based on crystallographic slip theory was formulated for nickel-base single crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the effect of back stress in single crystals. The results showed that (1) the back stress is orientation dependent; and (2) the back stress state variable in the inelastic flow equation is necessary for predicting anelastic behavior of the material. The model also demonstrated improved fatigue predictive capability. Model predictions and experimental data are presented for single crystal superalloy Rene N4 at 982 °C.
Improved prediction of biochemical recurrence after radical prostatectomy by genetic polymorphisms.
Morote, Juan; Del Amo, Jokin; Borque, Angel; Ars, Elisabet; Hernández, Carlos; Herranz, Felipe; Arruza, Antonio; Llarena, Roberto; Planas, Jacques; Viso, María J; Palou, Joan; Raventós, Carles X; Tejedor, Diego; Artieda, Marta; Simón, Laureano; Martínez, Antonio; Rioja, Luis A
2010-08-01
Single nucleotide polymorphisms are inherited genetic variations that can predispose or protect individuals against clinical events. We hypothesized that single nucleotide polymorphism profiling may improve the prediction of biochemical recurrence after radical prostatectomy. We performed a retrospective, multi-institutional study of 703 patients treated with radical prostatectomy for clinically localized prostate cancer who had at least 5 years of follow-up after surgery. All patients were genotyped for 83 prostate cancer-related single nucleotide polymorphisms using a low density oligonucleotide microarray. Baseline clinicopathological variables and single nucleotide polymorphisms were analyzed to predict biochemical recurrence within 5 years using stepwise logistic regression. Discrimination was measured by ROC curve AUC, specificity, sensitivity, predictive values, net reclassification improvement and integrated discrimination index. The overall biochemical recurrence rate was 35%. The model with the best fit combined 8 covariates, including the 5 clinicopathological variables prostate specific antigen, Gleason score, pathological stage, lymph node involvement and margin status, and 3 single nucleotide polymorphisms at the KLK2, SULT1A1 and TLR4 genes. Model predictive power was defined by 80% positive predictive value, 74% negative predictive value and an AUC of 0.78. The model based on clinicopathological variables plus single nucleotide polymorphisms showed significant improvement over the model without single nucleotide polymorphisms, as indicated by 23.3% net reclassification improvement (p = 0.003), integrated discrimination index (p < 0.001) and likelihood ratio test (p < 0.001). Internal validation proved model robustness (bootstrap corrected AUC 0.78, range 0.74 to 0.82). The calibration plot showed close agreement between observed and predicted biochemical recurrence probabilities.
Predicting biochemical recurrence after radical prostatectomy based on clinicopathological data can be significantly improved by including patient genetic information. Copyright (c) 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Cao, Renzhi; Wang, Zheng; Wang, Yiheng; Cheng, Jianlin
2014-04-28
It is important to predict the quality of a protein structural model before its native structure is known. Methods that can predict the absolute local quality of individual residues in a single protein model are rare, yet they are particularly needed for using, ranking, and refining protein models. We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e., the basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts, to make predictions. We also trained an SVM model with two additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them improves performance only when the real deviations between native structure and model are higher than 5 Å. The released SMOQ tool uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implements a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637 Å. The tool's global quality prediction accuracy is comparable to other good tools on the same benchmark. SMOQ is a useful tool for protein single-model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/.
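The local-to-global conversion mentioned above can be illustrated with a short sketch. The exact formula SMOQ uses is not stated in this abstract; the 1/(1 + (d/d0)^2) transform below is a common choice in model quality assessment and is an assumption here, as is the d0 value:

```python
def local_to_global(deviations, d0=3.8):
    """Convert predicted per-residue distance deviations (in Angstroms)
    into one global quality score in (0, 1]; larger is better.

    Each residue contributes 1/(1 + (d/d0)^2), so small deviations score
    near 1 and large deviations near 0; the global score is the mean.
    """
    return sum(1.0 / (1.0 + (d / d0) ** 2) for d in deviations) / len(deviations)
```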
USDA-ARS?s Scientific Manuscript database
Near-Infrared reflectance spectroscopic prediction models were developed for common constituents of corn and soybeans using bulk reference values and mean spectra from single-seeds. The bulk reference model and a true single-seed model for soybean protein were compared to determine how well the bul...
Initial Comparison of Single Cylinder Stirling Engine Computer Model Predictions with Test Results
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.; Thieme, L. G.; Miao, D.
1979-01-01
A Stirling engine digital computer model developed at NASA Lewis Research Center was configured to predict the performance of the GPU-3 single-cylinder rhombic drive engine. Revisions to the basic equations and assumptions are discussed. Model predictions with the early results of the Lewis Research Center GPU-3 tests are compared.
Cuyabano, B C D; Su, G; Rosa, G J M; Lund, M S; Gianola, D
2015-10-01
This study compared the accuracy of genome-enabled prediction models using individual single nucleotide polymorphisms (SNP) or haplotype blocks as covariates when using either a single breed or a combined population of Nordic Red cattle. The main objective was to compare predictions of breeding values of complex traits using a combined training population with haplotype blocks, with predictions using a single breed as training population and individual SNP as predictors. To compare the prediction reliabilities, bootstrap samples were taken from the test data set. With the bootstrapped samples of prediction reliabilities, we built and graphed confidence ellipses to allow comparisons. Finally, measures of statistical distances were used to calculate the gain in predictive ability. Our analyses are innovative in the context of assessment of predictive models, allowing a better understanding of prediction reliabilities and providing a statistical basis to effectively calibrate whether one prediction scenario is indeed more accurate than another. An ANOVA indicated that use of haplotype blocks produced significant gains mainly when Bayesian mixture models were used but not when Bayesian BLUP was fitted to the data. Furthermore, when haplotype blocks were used to train prediction models in a combined Nordic Red cattle population, we obtained up to a statistically significant 5.5% average gain in prediction accuracy, over predictions using individual SNP and training the model with a single breed. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Rapid prediction of single green coffee bean moisture and lipid content by hyperspectral imaging.
Caporaso, Nicola; Whitworth, Martin B; Grebby, Stephen; Fisk, Ian D
2018-06-01
Hyperspectral imaging (1000-2500 nm) was used for rapid prediction of moisture and total lipid content in intact green coffee beans on a single bean basis. Arabica and Robusta samples from several growing locations were scanned using a "push-broom" system. Hypercubes were segmented to select single beans, and average spectra were measured for each bean. Partial Least Squares regression was used to build quantitative prediction models on single beans (n = 320-350). The models exhibited good performance and acceptable prediction errors of ∼0.28% for moisture and ∼0.89% for lipids. This study represents the first time that HSI-based quantitative prediction models have been developed for coffee, and specifically green coffee beans. In addition, this is the first attempt to build such models using single intact coffee beans. The composition variability between beans was studied, and fat and moisture distribution were visualized within individual coffee beans. This rapid, non-destructive approach could have important applications for research laboratories, breeding programmes, and for rapid screening for industry.
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate constant (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model, and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of the methane generation parameters from waste composition data allowed the single-phase models to predict cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction, because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models.
Translating multiphase FOD model input parameters by weighted averaging shows that single-phase models can predict cumulative methane generation within the uncertainty of many of the input parameters as defined by the IPCC. This indicates that reducing the uncertainty of the input parameters will make the model more accurate, rather than adding phases or input parameters.
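The parameter-translation idea above can be sketched in a few lines. This is an illustrative single-phase FOD calculation under assumed units and interfaces, not the exact LandGEM or IPCC implementation (those tools, for example, subdivide the acceptance year):

```python
import math

def weighted_params(streams):
    """Collapse per-waste-stream FOD parameters into single-phase values
    by mass-weighted averaging, as in the translation exercise above.

    streams: list of (mass_fraction, L0, k) with L0 in m^3 CH4/Mg waste
    and k in 1/yr; fractions are assumed to sum to 1.
    """
    L0 = sum(f * L0_i for f, L0_i, _ in streams)
    k = sum(f * k_i for f, _, k_i in streams)
    return L0, k

def annual_methane(mass_by_year, L0, k, year):
    """Basic first-order decay estimate of methane generated in `year`
    (m^3/yr) from waste masses (Mg) accepted in that year and earlier:
    each cohort contributes k * L0 * M * exp(-k * age)."""
    return sum(k * L0 * m * math.exp(-k * (year - y))
               for y, m in mass_by_year.items() if y <= year)
```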
Cho, In-Jeong; Sung, Ji Min; Chang, Hyuk-Jae; Chung, Namsik; Kim, Hyeon Chang
2017-11-01
Increasing evidence suggests that repeatedly measured cardiovascular disease (CVD) risk factors may have an additive predictive value compared with single measured levels. Thus, we evaluated the incremental predictive value of incorporating periodic health screening data for CVD prediction in a large nationwide cohort with periodic health screening tests. A total of 467 708 persons aged 40 to 79 years and free from CVD were randomly divided into development (70%) and validation subcohorts (30%). We developed 3 different CVD prediction models: a single measure model using single time point screening data; a longitudinal average model using average risk factor values from periodic screening data; and a longitudinal summary model using average values and the variability of risk factors. The development subcohort included 327 396 persons who had 3.2 health screenings on average and 25 765 cases of CVD over 12 years. The C statistics (95% confidence interval [CI]) for the single measure, longitudinal average, and longitudinal summary models were 0.690 (95% CI, 0.682-0.698), 0.695 (95% CI, 0.687-0.703), and 0.752 (95% CI, 0.744-0.760) in men and 0.732 (95% CI, 0.722-0.742), 0.735 (95% CI, 0.725-0.745), and 0.790 (95% CI, 0.780-0.800) in women, respectively. The net reclassification index from the single measure model to the longitudinal average model was 1.78% in men and 1.33% in women, and the index from the longitudinal average model to the longitudinal summary model was 32.71% in men and 34.98% in women. Using averages of repeatedly measured risk factor values modestly improves CVD predictability compared with single measurement values. Incorporating the average and variability information of repeated measurements can lead to great improvements in disease prediction. URL: https://www.clinicaltrials.gov. Unique identifier: NCT02931500. © 2017 American Heart Association, Inc.
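A minimal sketch of the feature construction behind the three models above: the single measure model uses one visit, the longitudinal average model uses the mean across visits, and the longitudinal summary model adds a variability term. Which variability measure the authors used is not stated in this excerpt; standard deviation is assumed here for illustration:

```python
from statistics import mean, stdev

def longitudinal_features(visits):
    """Summarize repeated screening values of one risk factor for one person."""
    return {
        "single": visits[-1],                              # most recent measurement
        "mean": mean(visits),                              # longitudinal average
        "sd": stdev(visits) if len(visits) > 1 else 0.0,   # variability (assumed SD)
    }
```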
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach is that optimal weights can be derived for each model so that the resulting multimodel predictions are more accurate. A new dynamic approach (MM-1) is proposed that combines multiple hydrological models by evaluating their performance/skill contingent on the predictor state. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with the optimal model combination scheme (MM-O) by employing them to predict streamflow generated from a known hydrologic model (the abcd or VIC model) with heteroscedastic error variance, as well as from a hydrologic model whose structure differs from that of the candidate models. Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement error and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than single-model prediction. Overall, MM-1 performs better than MM-O in predicting monthly flow values as well as extreme monthly flows. Comparison of the weights obtained for each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across models, whereas MM-O always assigns higher weights to the candidate model that performed best during the calibration period.
Applying the multimodel algorithms to predict streamflows over four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting monthly flows as well as flows during wetter months.
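The static, MM-O-style side of the comparison has a closed form when there are two candidate models. The sketch below is an illustration under assumed interfaces; MM-1's predictor-state-contingent weighting is more involved and is not reproduced here:

```python
def optimal_two_model_weight(obs, pred_a, pred_b):
    """Find the weight w for model A (model B gets 1 - w) minimizing the
    squared error of the combined prediction w*a + (1-w)*b over a
    calibration period.  This is the closed form of the sum-to-one
    constrained least-squares problem for two candidates."""
    num = sum((o - b) * (a - b) for o, a, b in zip(obs, pred_a, pred_b))
    den = sum((a - b) ** 2 for a, b in zip(pred_a, pred_b))
    return num / den

def combine(pred_a, pred_b, w):
    """Combined multimodel prediction at each time step."""
    return [w * a + (1 - w) * b for a, b in zip(pred_a, pred_b)]
```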
Aircraft measurements made downwind from specific coal fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting...
Yamazaki, Shinji; Johnson, Theodore R; Smith, Bill J
2015-10-01
An orally available multiple tyrosine kinase inhibitor, crizotinib (Xalkori), is a CYP3A substrate, moderate time-dependent inhibitor, and weak inducer. The main objectives of the present study were to: 1) develop and refine a physiologically based pharmacokinetic (PBPK) model of crizotinib on the basis of clinical single- and multiple-dose results, 2) verify the crizotinib PBPK model from crizotinib single-dose drug-drug interaction (DDI) results with multiple-dose coadministration of ketoconazole or rifampin, and 3) apply the crizotinib PBPK model to predict crizotinib multiple-dose DDI outcomes. We also focused on gaining insights into the underlying mechanisms mediating crizotinib DDIs using a dynamic PBPK model, the Simcyp population-based simulator. First, PBPK model-predicted crizotinib exposures adequately matched clinically observed results in the single- and multiple-dose studies. Second, the model-predicted crizotinib exposures sufficiently matched clinically observed results in the crizotinib single-dose DDI studies with ketoconazole or rifampin, resulting in the reasonably predicted fold-increases in crizotinib exposures. Finally, the predicted fold-increases in crizotinib exposures in the multiple-dose DDI studies were roughly comparable to those in the single-dose DDI studies, suggesting that the effects of crizotinib CYP3A time-dependent inhibition (net inhibition) on the multiple-dose DDI outcomes would be negligible. Therefore, crizotinib dose-adjustment in the multiple-dose DDI studies could be made on the basis of currently available single-dose results. Overall, we believe that the crizotinib PBPK model developed, refined, and verified in the present study would adequately predict crizotinib oral exposures in other clinical studies, such as DDIs with weak/moderate CYP3A inhibitors/inducers and drug-disease interactions in patients with hepatic or renal impairment. 
Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie
2013-01-01
Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. 
In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906
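The CA and IA predictions discussed above can be sketched for the idealized case of full-efficacy Hill curves with unit slope — a simplification; the abstract notes that chemicals with limited maximal effects break these formulas, which is where GCA comes in. Function names and parameters here are illustrative, not from the study:

```python
import math

def hill_effect(conc, ec50):
    """Fractional effect of a single chemical (Hill curve, slope 1, full efficacy)."""
    if conc <= 0:
        return 0.0
    return conc / (ec50 + conc)

def ia_mixture_effect(concs, ec50s):
    """Independent action: chemicals act via independent mechanisms, so the
    probability of 'no effect' multiplies across components."""
    p_no_effect = 1.0
    for c, ec50 in zip(concs, ec50s):
        p_no_effect *= 1.0 - hill_effect(c, ec50)
    return 1.0 - p_no_effect

def ca_mixture_ec(fracs, ec50s, effect=0.5):
    """Concentration addition: total mixture concentration producing `effect`
    for mixture fractions `fracs`, via EC_x,mix = (sum_i p_i / EC_x,i)^-1,
    with EC_x,i = ec50_i * x / (1 - x) for a slope-1 Hill curve."""
    denom = sum(f / (ec50 * effect / (1.0 - effect))
                for f, ec50 in zip(fracs, ec50s))
    return 1.0 / denom
```

For two chemicals each at their EC50, IA predicts a combined effect of 1 − 0.5 × 0.5 = 0.75, while CA predicts the mixture EC50 equals the components' when their potencies match.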
Application of GA-SVM method with parameter optimization for landslide development prediction
NASA Astrophysics Data System (ADS)
Li, X. Z.; Kong, J. M.
2013-10-01
Prediction of the landslide development process is a perennial issue in landslide research. So far, many methods for landslide displacement series prediction have been proposed. The support vector machine (SVM) has been proved to be a novel algorithm with good performance. However, the performance strongly depends on the right selection of the parameters (C and γ) of the SVM model. In this study, we present an application of the GA-SVM method with parameter optimization to landslide displacement rate prediction. We selected a typical large-scale landslide in a hydroelectric engineering area of Southwest China as a case. On the basis of analyzing the basic characteristics and monitoring data of the landslide, a single-factor GA-SVM model and a multi-factor GA-SVM model of the landslide were built. Moreover, the models were compared with single-factor and multi-factor SVM models of the landslide. The results show that all four models have high prediction accuracy, but the accuracies of the GA-SVM models are slightly higher than those of the SVM models, and the accuracies of the multi-factor models are slightly higher than those of the single-factor models. The multi-factor GA-SVM model has the highest accuracy, with the smallest RMSE of 0.0009 and the largest RI of 0.9992.
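A genetic search over (C, γ) of the kind GA-SVM performs can be sketched in a few lines. The fitness function below is a stand-in surface with a known optimum near C = 10, γ = 0.1; a real GA-SVM would instead evaluate the cross-validation error of an SVM trained with each candidate pair:

```python
import math
import random

def fitness(C, gamma):
    # Stand-in for SVM cross-validation error; minimum near C=10, gamma=0.1.
    return (math.log10(C) - 1.0) ** 2 + (math.log10(gamma) + 1.0) ** 2

def ga_search(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    # Individuals encoded as (log10 C, log10 gamma) over typical SVM ranges.
    pop = [(rng.uniform(-2, 4), rng.uniform(-4, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(10 ** p[0], 10 ** p[1]))
        parents = pop[:pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()                   # arithmetic crossover
            child = (w * a[0] + (1 - w) * b[0] + rng.gauss(0, 0.1),  # + mutation
                     w * a[1] + (1 - w) * b[1] + rng.gauss(0, 0.1))
            children.append(child)
        pop = parents + children
    best = min(pop, key=lambda p: fitness(10 ** p[0], 10 ** p[1]))
    return 10 ** best[0], 10 ** best[1]
```

Searching in log space reflects the usual practice for SVM hyperparameters, and keeping the parents each generation makes the best fitness monotone non-increasing.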
Sorting protein decoys by machine-learning-to-rank
Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen
2016-01-01
Much progress has been made in protein structure prediction during the last few decades. As predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods and quasi single-model methods. In this study, we first develop a single-model method, MQAPRank, based on a learning-to-rank algorithm, and then implement a quasi single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms other methods whose outputs are taken as features of the proposed method, and that the quasi single-model method can further enhance performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967
Gaussian mixture models as flux prediction method for central receivers
NASA Astrophysics Data System (ADS)
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
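The mixture idea can be sketched directly: a heliostat's flux map on the receiver is represented as a weighted sum of bivariate Gaussians rather than a single circular or elliptical one. The component parameters below are illustrative, not fitted values from the paper:

```python
import math

def gaussian2d(x, y, mx, my, sx, sy, rho=0.0):
    """Bivariate Gaussian density with means (mx, my), standard deviations
    (sx, sy) and correlation rho."""
    zx, zy = (x - mx) / sx, (y - my) / sy
    q = (zx * zx - 2 * rho * zx * zy + zy * zy) / (1 - rho * rho)
    norm = 1.0 / (2 * math.pi * sx * sy * math.sqrt(1 - rho * rho))
    return norm * math.exp(-0.5 * q)

def mixture_flux(x, y, components):
    """Flux at receiver point (x, y) as a weighted sum of Gaussian components.
    components: list of (weight, mx, my, sx, sy, rho) tuples."""
    return sum(w * gaussian2d(x, y, mx, my, sx, sy, rho)
               for w, mx, my, sx, sy, rho in components)
```

A multi-component mixture can reproduce the skewed, multi-lobed flux profiles of small heliostats that a single circular or elliptical Gaussian cannot.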
Liu, Yushan; Ge, Baoming; Abu-Rub, Haitham; ...
2016-06-14
In this study, an active power filter (APF) consisting of a half-bridge leg and an ac capacitor is integrated into the single-phase quasi-Z-source inverter (qZSI) to prevent the second harmonic power from flowing into the dc side. The capacitor of the APF buffers the second harmonic power of the load, and the ac capacitor allows a highly pulsating ac voltage, so that the capacitances on both the dc and ac sides can be small. A model predictive direct power control (DPC) is further proposed to achieve the purpose of this new topology by predicting the capacitor voltage of the APF at each sampling period and ensuring that the APF power tracks the second harmonic power of the single-phase qZSI. Simulation and experimental results verify the model predictive DPC for the APF-integrated single-phase qZSI.
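The per-sample logic of a finite-set model predictive control step can be sketched as follows. The forward-Euler capacitor model, the candidate currents (one per converter switch state), and all numeric values are illustrative stand-ins, not the paper's converter model:

```python
def predict_vc(v_c, i_c, Ts, C):
    """One-step forward-Euler prediction of the APF capacitor voltage:
    v[k+1] = v[k] + i_c * Ts / C."""
    return v_c + i_c * Ts / C

def choose_state(v_c, p_ref, Ts, C, candidate_currents):
    """Finite-set predictive step: for each candidate capacitor current
    (one per switch state), predict voltage and instantaneous power, and
    keep the state whose power best tracks the reference p_ref."""
    best = None
    for k, i_c in enumerate(candidate_currents):
        v_next = predict_vc(v_c, i_c, Ts, C)
        p_next = v_next * i_c              # predicted instantaneous APF power
        cost = (p_next - p_ref) ** 2
        if best is None or cost < best[0]:
            best = (cost, k, v_next)
    return best[1], best[2]
```

In the paper's scheme p_ref would be the second harmonic power of the load, recomputed every sampling period.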
Foundations for computer simulation of a low pressure oil flooded single screw air compressor
NASA Astrophysics Data System (ADS)
Bein, T. W.
1981-12-01
The necessary logic to construct a computer model to predict the performance of an oil flooded, single screw air compressor is developed. The geometric variables and relationships used to describe the general single screw mechanism are developed. The governing equations to describe the processes are developed from their primary relationships. The assumptions used in the development are also defined and justified. The computer model predicts the internal pressure, temperature, and flowrates through the leakage paths throughout the compression cycle of the single screw compressor. The model uses empirical external values as the basis for the internal predictions. The computer values are compared to the empirical values, and conclusions are drawn based on the results. Recommendations are made for future efforts to improve the computer model and to verify some of the conclusions that are drawn.
Smith, Morgan E; Singh, Brajendra K; Irvine, Michael A; Stolk, Wilma A; Subramanian, Swaminathan; Hollingsworth, T Déirdre; Michael, Edwin
2017-03-01
Mathematical models of parasite transmission provide powerful tools for assessing the impacts of interventions. Owing to complexity and uncertainty, no single model may capture all features of transmission and elimination dynamics. Multi-model ensemble modelling offers a framework to help overcome the biases of single models. We report on the development of a first multi-model ensemble of three lymphatic filariasis (LF) models (EPIFIL, LYMFASIM, and TRANSFIL), and evaluate its predictive performance in comparison with that of its constituents using calibration and validation data from three case study sites, one each from the three major LF endemic regions: Africa, Southeast Asia and Papua New Guinea (PNG). We assessed the performance of the respective models for predicting the outcomes of annual mass drug administration (MDA) strategies for various baseline scenarios thought to exemplify the current endemic conditions in the three regions. The results show that the constructed multi-model ensemble outperformed the single models when evaluated across all sites. Single models that best fitted calibration data tended to do less well in simulating the out-of-sample, or validation, intervention data. Scenario modelling results demonstrate that the multi-model ensemble is able to compensate for variance between single models in order to produce more plausible predictions of intervention impacts. Our results highlight the value of an ensemble approach to modelling parasite control dynamics. However, its optimal use will require further methodological improvements as well as consideration of the organizational mechanisms required to ensure that modelling results and data are shared effectively between all stakeholders. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Genomewide predictions from maize single-cross data.
Massman, Jon M; Gordillo, Andres; Lorenzana, Robenzon E; Bernardo, Rex
2013-01-01
Maize (Zea mays L.) breeders evaluate many single-cross hybrids each year in multiple environments. Our objective was to determine the usefulness of genomewide predictions, based on marker effects from maize single-cross data, for identifying the best untested single crosses and the best inbreds within a biparental cross. We considered 479 experimental maize single crosses between 59 Iowa Stiff Stalk Synthetic (BSSS) inbreds and 44 non-BSSS inbreds. The single crosses were evaluated in multilocation experiments from 2001 to 2009, and the BSSS and non-BSSS inbreds had genotypic data for 669 single nucleotide polymorphism (SNP) markers. Single-cross performance was predicted by a previous best linear unbiased prediction (BLUP) approach that utilized marker-based relatedness and information on relatives, and from genomewide marker effects calculated by ridge-regression BLUP (RR-BLUP). With BLUP, the mean prediction accuracy (r(MG)) of single-cross performance was 0.87 for grain yield, 0.90 for grain moisture, 0.69 for stalk lodging, and 0.84 for root lodging. The BLUP and RR-BLUP models did not lead to r(MG) values that differed significantly. We then used the RR-BLUP model, developed from single-cross data, to predict the performance of testcrosses within 14 biparental populations. The r(MG) values within each testcross population were generally low and were often negative. These results were obtained despite the above-average level of linkage disequilibrium, i.e., r(2) between adjacent markers of 0.35 in the BSSS inbreds and 0.26 in the non-BSSS inbreds. Overall, our results suggested that genomewide marker effects estimated from maize single crosses are not advantageous (compared with BLUP) for predicting single-cross performance and have erratic usefulness for predicting testcross performance within a biparental cross.
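The RR-BLUP step — estimating marker effects u from a genotype matrix Z and phenotypes y by solving (Z'Z + λI)u = Z'y — can be sketched in pure Python on a toy problem. In practice λ comes from variance-component estimates; here it is just a small illustrative constant:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rr_blup(Z, y, lam):
    """Ridge-regression BLUP marker effects: u = (Z'Z + lam*I)^-1 Z'y."""
    n, m = len(Z), len(Z[0])
    ZtZ = [[sum(Z[i][a] * Z[i][b] for i in range(n)) + (lam if a == b else 0.0)
            for b in range(m)] for a in range(m)]
    Zty = [sum(Z[i][a] * y[i] for i in range(n)) for a in range(m)]
    return solve(ZtZ, Zty)
```

A new single cross is then predicted by summing the estimated effects of the marker alleles it carries.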
NIR spectroscopic measurement of moisture content in Scots pine seeds.
Lestander, Torbjörn A; Geladi, Paul
2003-04-01
When tree seeds are used for seedling production it is important that they are of high quality in order to be viable. One of the factors influencing viability is moisture content, and an ideal quality control system should be able to measure this factor quickly for each seed. Seed moisture content within the range 3-34% was determined by near-infrared (NIR) spectroscopy on Scots pine (Pinus sylvestris L.) single seeds and on bulk seed samples consisting of 40-50 seeds. The models for predicting water content from the spectra were made by partial least squares (PLS) and ordinary least squares (OLS) regression. Different conditions were simulated, involving both the use of fewer wavelengths and the move from bulk samples to single seeds. Reflectance and transmission measurements were used, and different spectral pretreatment methods were tested on the spectra. Including bias, the lowest prediction errors for PLS models based on reflectance within 780-2280 nm were 0.8% for bulk samples and 1.9% for single seeds. Reduction of the single-seed reflectance spectrum to 850-1048 nm gave higher biases and prediction errors in the test set. In transmission (850-1048 nm) the prediction error was 2.7% for single seeds. OLS models based on a simulated 4-sensor single-seed system consisting of optical filters with Gaussian transmission indicated more than 3.4% error in prediction. A practical F-test based on test sets to differentiate models is introduced.
Population activity statistics dissect subthreshold and spiking variability in V1.
Bányai, Mihály; Koman, Zsombor; Orbán, Gergő
2017-07-01
Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpreting high-dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a matter of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model, which assumes stochasticity at spike generation, and the rectified Gaussian (RG) model, which traces variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, the DSP and RG models make contradicting predictions about the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with the joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations.
For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise, but not in single-cell, statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data yields hints about the noise sources in spiking and constraints on statistical models of population activity. Copyright © 2017 the American Physiological Society.
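The two competing models can be sketched as single-neuron simulators. With the illustrative parameters below both produce similar mean responses, which echoes the authors' point that single-cell statistics alone cannot tell the models apart:

```python
import math
import random

def dsp_count(rate, rng):
    """Doubly stochastic Poisson: a Poisson spike count whose rate
    fluctuates from trial to trial (stochasticity at spike generation)."""
    lam = max(0.0, rng.gauss(rate, 0.3 * rate))   # trial-to-trial rate noise
    # Knuth's Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def rg_response(mu, sigma, gain, rng):
    """Rectified Gaussian: response driven by a thresholded (rectified)
    membrane potential sample (stochasticity at the subthreshold level)."""
    return gain * max(0.0, rng.gauss(mu, sigma))
```

Averaged over many trials, both simulators can be tuned to the same mean; the models diverge only once correlations between neurons are examined.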
Cao, Renzhi; Wang, Zheng; Cheng, Jianlin
2014-04-15
Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
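The core of the clustering (consensus) approach is simple enough to sketch: score each model in the pool by its average pairwise structural similarity to all the others, on the premise that good models resemble one another more than bad models resemble anything. The similarity matrix here stands in for pairwise scores such as GDT-TS:

```python
def global_quality_scores(similarity):
    """Clustering-based global quality assessment: score model i by its
    average pairwise similarity to every other model in the pool.
    similarity: symmetric n x n matrix of pairwise structural similarities."""
    n = len(similarity)
    return [sum(similarity[i][j] for j in range(n) if j != i) / (n - 1)
            for i in range(n)]
```

This is also why the approach degrades when only a few models in the pool are good: the consensus is then dominated by mutually similar bad models, which is where single-model methods take over.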
A Single-System Model Predicts Recognition Memory and Repetition Priming in Amnesia
Kessels, Roy P.C.; Wester, Arie J.; Shanks, David R.
2014-01-01
We challenge the claim that there are distinct neural systems for explicit and implicit memory by demonstrating that a formal single-system model predicts the pattern of recognition memory (explicit) and repetition priming (implicit) in amnesia. In the current investigation, human participants with amnesia categorized pictures of objects at study and then, at test, identified fragmented versions of studied (old) and nonstudied (new) objects (providing a measure of priming), and made a recognition memory judgment (old vs new) for each object. Numerous results in the amnesic patients were predicted in advance by the single-system model, as follows: (1) deficits in recognition memory and priming were evident relative to a control group; (2) items judged as old were identified at greater levels of fragmentation than items judged new, regardless of whether the items were actually old or new; and (3) the magnitude of the priming effect (the identification advantage for old vs new items) overall was greater than that of items judged new. Model evidence measures also favored the single-system model over two formal multiple-systems models. The findings support the single-system model, which explains the pattern of recognition and priming in amnesia primarily as a reduction in the strength of a single dimension of memory strength, rather than a selective explicit memory system deficit. PMID:25122896
The Role of Multimodel Combination in Improving Streamflow Prediction
NASA Astrophysics Data System (ADS)
Arumugam, S.; Li, W.
2008-12-01
Model errors are an inevitable part of any prediction exercise. One approach currently gaining attention for reducing model errors is to optimally combine multiple models into improved predictions. The rationale behind this approach lies in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions achieve improved predictability. In this study, we present a new approach to combining multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated by two different objective functions, to develop multimodel streamflow predictions. The performance of the multimodel predictions is compared with individual model predictions using correlation, root mean square error and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions multimodel predictions improve on single models, we evaluate the proposed algorithm by testing it against streamflow generated from a known model (the 'abcd' model or the VIC model) with homoscedastic or heteroscedastic errors. Results from the study show that streamflow simulated from individual models performed better than the multimodel when model error was nearly absent. Under increased model error, the multimodel consistently performed better than the single-model predictions in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins in North Carolina as well as in two arid basins in Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach better predicts the observed streamflow than the single-model predictions.
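The central idea — an optimally weighted combination can beat either member when both carry error — has a closed form for two models under squared loss. A sketch with illustrative data, in which model a is biased high and model b biased low so the weighted combination cancels the bias:

```python
def optimal_weight(pred_a, pred_b, obs):
    """Least-squares weight w for the combination w*a + (1-w)*b:
    w = sum((a-b)*(y-b)) / sum((a-b)^2)."""
    num = sum((a - b) * (y - b) for a, b, y in zip(pred_a, pred_b, obs))
    den = sum((a - b) ** 2 for a, b in zip(pred_a, pred_b))
    return num / den

def combine(pred_a, pred_b, w):
    """Weighted multimodel prediction."""
    return [w * a + (1 - w) * b for a, b in zip(pred_a, pred_b)]

def rmse(pred, obs):
    """Root mean square error."""
    return (sum((p - y) ** 2 for p, y in zip(pred, obs)) / len(obs)) ** 0.5
```

The study's predictor-state-contingent scheme is richer than this global weighting — it lets the weights vary with the flow regime — but the bias-cancellation mechanism is the same.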
MQAPRank: improved global protein model quality assessment by learning-to-rank.
Jing, Xiaoyang; Dong, Qiwen
2017-05-25
Protein structure prediction has achieved much progress during the last few decades, and an increasing number of models can be predicted for a given sequence. Consequently, assessing the quality of predicted protein models is one of the key components of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, which can be roughly divided into three categories: single methods, quasi-single methods and clustering (or consensus) methods. Although these methods achieve success at different levels, accurate protein model quality assessment is still an open problem. Here, we present MQAPRank, a global protein model quality assessment program based on learning-to-rank. MQAPRank first sorts the decoy models with a single-model method based on a learning-to-rank algorithm to indicate their relative qualities for the target protein. It then takes the first five models as references and predicts the qualities of the other models from the average GDT_TS scores between the reference models and the others. Benchmarked on the CASP11 and 3DRobot datasets, MQAPRank achieved better performance than other leading protein model quality assessment methods. Recently, MQAPRank participated in CASP12 under the group name FDUBio and achieved state-of-the-art performance. MQAPRank provides a convenient and powerful tool for protein model quality assessment with state-of-the-art performance, and it is useful for protein structure prediction and model quality assessment.
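The reference-based rescoring step of such quasi-single methods can be sketched as follows, assuming a precomputed single-model ranking and a pairwise similarity lookup (e.g. GDT_TS); the names and data layout are illustrative:

```python
def quasi_single_scores(ranked_ids, similarity, n_ref=5):
    """Quasi single-model scoring: take the top-n_ref models from a
    single-model ranking as references, then score every model by its
    average similarity (e.g. GDT_TS) to those references, skipping a
    model's similarity to itself."""
    refs = ranked_ids[:n_ref]
    scores = {}
    for m in similarity:
        others = [r for r in refs if r != m]
        scores[m] = sum(similarity[m][r] for r in others) / len(others)
    return scores
```

The scheme inherits the robustness of consensus scoring while needing only a handful of trusted references, which is what lets it work when most of the pool is poor.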
NASA Astrophysics Data System (ADS)
Wu, Yenan; Zhong, Ping-an; Xu, Bin; Zhu, Feilin; Fu, Jisi
2017-06-01
Using high-performing climate models to predict future climate changes can increase the reliability of the results. In this paper, six global climate models selected from the Coupled Model Intercomparison Project Phase 5 (CMIP5) under the Representative Concentration Pathway (RCP) 4.5 scenario were compared to measured data during the baseline period (1960-2000) to evaluate their simulation performance for precipitation. Since the results of single climate models are often biased and highly uncertain, we examined the back propagation (BP) neural network and the arithmetic mean method for assembling the precipitation of multiple models. The delta method was used to calibrate the results of the single models and of the multimodel ensemble by arithmetic mean (MME-AM) during the validation period (2001-2010) and the prediction period (2011-2100). We then used the single models and multimodel ensembles to predict the future precipitation process and its spatial distribution. The results show that the BNU-ESM model has the best simulation performance among the single models. The multimodel ensemble assembled by the BP neural network (MME-BP) simulates the annual average precipitation process well, with a deterministic coefficient of 0.814 during the validation period. The simulation capability for the spatial distribution of precipitation ranks as: calibrated MME-AM > MME-BP > calibrated BNU-ESM. The future precipitation predicted by all models tends to increase over time. The average increase amplitude by season ranks as: winter > spring > summer > autumn. These findings can provide useful information for decision makers preparing climate-related disaster mitigation plans.
Application of First Principles Model to Spacecraft Operations
NASA Technical Reports Server (NTRS)
Timmerman, Paul; Bugga, Ratnakumar; DiStefano, Salvidor
1996-01-01
Previous models use a single-phase reaction; the behavior of cycled cells cannot be predicted with a single phase; interphase conversion provides a means for modeling film aging; the aging-cell predictions display typical behaviors: pressure changes in NiH2 cells, voltage fading upon cycling, a second plateau on discharge of cycled cells, and negative-limited behavior for Ni-Cd cells.
Dissimilarity based Partial Least Squares (DPLS) for genomic prediction from SNPs.
Singh, Priyanka; Engel, Jasper; Jansen, Jeroen; de Haan, Jorn; Buydens, Lutgarde Maria Celina
2016-05-04
Genomic prediction (GP) allows breeders to select plants and animals based on their breeding potential for desirable traits, without lengthy and expensive field trials or progeny testing. We have proposed to use Dissimilarity-based Partial Least Squares (DPLS) for GP. As a case study, we use the DPLS approach to predict bacterial wilt (BW) in tomatoes using SNPs as predictors. The DPLS approach was compared with Genomic Best Linear Unbiased Prediction (GBLUP) and single-SNP regression with SNP as a fixed effect to assess the performance of DPLS. Eight genomic distance measures were used to quantify relationships between the tomato accessions from the SNPs. Subsequently, each of these distance measures was used to predict BW with the DPLS prediction model. The DPLS model was found to be robust to the choice of distance measure; similar prediction performances were obtained for each. DPLS greatly outperformed the single-SNP regression approach, showing that BW is a complex trait dependent on several loci. Next, the performance of the DPLS model was compared to that of GBLUP. Although GBLUP and DPLS are conceptually very different, the prediction quality (PQ) of the DPLS models was similar to the prediction statistics obtained from GBLUP. A considerable advantage of DPLS is that the genotype-phenotype relationship can easily be visualized in a 2-D scatter plot. This so-called score plot gives breeders insight for selecting candidates for their future breeding programs. DPLS is a highly appropriate method for GP. The model prediction performance was similar to GBLUP and far better than the single-SNP approach. The proposed method can be used in combination with a wide range of genomic dissimilarity measures and genotype representations such as allele counts, haplotypes or allele-intensity values.
Additionally, the data can be insightfully visualized by the DPLS model, allowing for selection of desirable candidates from the breeding experiments. In this study, we have assessed the DPLS performance on a single trait.
Ye, Jiang-Feng; Zhao, Yu-Xin; Ju, Jian; Wang, Wei
2017-10-01
To discuss the value of the Bedside Index for Severity in Acute Pancreatitis (BISAP), the Modified Early Warning Score (MEWS), serum Ca2+ and red cell distribution width (RDW) for predicting the severity grade of acute pancreatitis (AP), and to develop and verify a more accurate scoring system to predict the severity of AP. In 302 patients with AP, we calculated BISAP and MEWS scores and conducted single-factor logistic regression analyses of the relationships of BISAP, RDW, MEWS, and serum Ca2+ with the severity of AP. The variables with statistical significance in the single-factor logistic regression were entered into a multi-factor logistic regression model; forward stepwise regression was used to screen variables and build a multi-factor prediction model. A receiver operating characteristic (ROC) curve was constructed, and the significance of the multi- and single-factor prediction models in predicting the severity of AP was evaluated using the area under the ROC curve (AUC). The internal validity of the model was verified through bootstrapping. Among the 302 patients with AP, 209 had mild acute pancreatitis (MAP) and 93 had severe acute pancreatitis (SAP). Single-factor logistic regression analysis showed that BISAP, MEWS and serum Ca2+ are predictors of the severity of AP (P<0.001), whereas RDW is not (P>0.05). Multi-factor logistic regression analysis showed that BISAP and serum Ca2+ are independent predictors of AP severity (P<0.001), and MEWS is not (P>0.05); BISAP is negatively correlated with serum Ca2+ (r=-0.330, P<0.001). The constructed model is: ln(p/(1-p)) = 7.306 + 1.151*BISAP - 4.516*serum Ca2+. The predictive ability of each model for SAP follows the order: combined BISAP and serum Ca2+ prediction model > serum Ca2+ > BISAP.
The difference in predictive ability between BISAP and serum Ca2+ alone is not statistically significant (P>0.05); however, the newly built prediction model differs significantly in predictive ability from both BISAP and serum Ca2+ individually (P<0.01). Verification of the internal validity of the models by bootstrapping is favorable. BISAP and serum Ca2+ have high predictive value for the severity of AP; however, the model combining BISAP and serum Ca2+ is markedly superior to either alone. Furthermore, this model is simple, practical and appropriate for clinical use. Copyright © 2016. Published by Elsevier Masson SAS.
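The fitted model above can be turned into a probability calculator directly; a minimal sketch, assuming serum Ca2+ is expressed in mmol/L (the units are not stated in the abstract):

```python
import math

def sap_probability(bisap, serum_ca):
    """Predicted probability of severe acute pancreatitis from the fitted
    logistic model ln(p/(1-p)) = 7.306 + 1.151*BISAP - 4.516*Ca."""
    logit = 7.306 + 1.151 * bisap - 4.516 * serum_ca
    return 1.0 / (1.0 + math.exp(-logit))
```

As the signs of the coefficients indicate, the predicted risk rises with BISAP and falls with serum Ca2+.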
Airport Noise Prediction Model -- MOD 7
DOT National Transportation Integrated Search
1978-07-01
The MOD 7 Airport Noise Prediction Model is fully operational. The language used is Fortran, and it has been run on several different computer systems. Its capabilities include prediction of noise levels for single parameter changes, for multiple cha...
Criteria for predicting the formation of single-phase high-entropy alloys
Troparevsky, M Claudia; Morris, James R.; Kent, Paul R.; ...
2015-03-15
High entropy alloys constitute a new class of materials whose very existence poses fundamental questions. Originally thought to be stabilized by the large entropy of mixing, these alloys have attracted attention due to their potential applications, yet no model capable of robustly predicting which combinations of elements will form a single phase currently exists. Here we propose a model that, through the use of high-throughput computation of the enthalpies of formation of binary compounds, is able to confirm all known high-entropy alloys while rejecting similar alloys that are known to form multiple phases. Despite the increasing entropy, our model predicts that the number of potential single-phase multicomponent alloys decreases with an increasing number of components: out of more than two million possible 7-component alloys considered, fewer than twenty single-phase alloys are likely.
Prediction of Human Cytochrome P450 Inhibition Using a Multitask Deep Autoencoder Neural Network.
Li, Xiang; Xu, Youjun; Lai, Luhua; Pei, Jianfeng
2018-05-30
Adverse side effects of drug-drug interactions induced by human cytochrome P450 (CYP450) inhibition are an important consideration in drug discovery. It is highly desirable to develop computational models that can predict the inhibitory effect of a compound against a specific CYP450 isoform. In this study, we developed a multitask model for concurrent inhibition prediction of five major CYP450 isoforms, namely, 1A2, 2C9, 2C19, 2D6, and 3A4. The model was built by training a multitask autoencoder deep neural network (DNN) on a large dataset containing more than 13 000 compounds extracted from the PubChem BioAssay Database. We demonstrate that the multitask model gave better prediction results than single-task models, previously reported classifiers, and traditional machine learning methods, averaged over the five prediction tasks. Our multitask DNN model gave average prediction accuracies of 86.4% for the 10-fold cross-validation and 88.7% for the external test datasets. In addition, we built linear regression models to quantify how the other tasks contributed to the prediction difference of a given task between single-task and multitask models, and we explained under what conditions the multitask model will outperform the single-task model, which suggests how to use multitask DNN models more effectively. We applied sensitivity analysis to extract useful knowledge about CYP450 inhibition, which may shed light on the structural features of these isoforms and give hints about how to avoid side effects during drug development. Our models are freely available at http://repharma.pku.edu.cn/deepcyp/home.php or http://www.pkumdl.cn/deepcyp/home.php .
APOLLO: a quality assessment service for single and multiple protein models.
Wang, Zheng; Eickholt, Jesse; Cheng, Jianlin
2011-06-15
We built a web server named APOLLO, which can evaluate the absolute global and local qualities of a single protein model using machine learning methods or the global and local qualities of a pool of models using a pair-wise comparison approach. Based on our evaluations on 107 CASP9 (Critical Assessment of Techniques for Protein Structure Prediction) targets, the predicted quality scores generated from our machine learning and pair-wise methods have an average per-target correlation of 0.671 and 0.917, respectively, with the true model quality scores. Based on our test on 92 CASP9 targets, our predicted absolute local qualities have an average difference of 2.60 Å with the actual distances to native structure. http://sysbio.rnet.missouri.edu/apollo/. Single and pair-wise global quality assessment software is also available at the site.
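The pair-wise approach described above scores each model in a pool by its average similarity to all the other models. A generic sketch of that idea (not APOLLO's actual code), assuming a precomputed symmetric similarity matrix such as pairwise GDT-TS values:

```python
def pairwise_consensus_scores(similarity):
    """Given a symmetric matrix similarity[i][j] of pairwise model
    similarities (e.g. GDT-TS between superposed models), score each
    model as its mean similarity to every other model in the pool.
    Models near the consensus receive the highest scores."""
    n = len(similarity)
    return [sum(similarity[i][j] for j in range(n) if j != i) / (n - 1)
            for i in range(n)]
```

The intuition is that a model close to many other models is likely close to the native structure, which is why pair-wise methods correlate so strongly (0.917 per target here) when the pool is diverse.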
1994-01-01
Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085
Bernardo, R
1996-11-01
Best linear unbiased prediction (BLUP) has been found to be useful in maize (Zea mays L.) breeding. The advantage of including both testcross additive and dominance effects (Intralocus Model) in BLUP, rather than only testcross additive effects (Additive Model), has not been clearly demonstrated. The objective of this study was to compare the usefulness of Intralocus and Additive Models for BLUP of maize single-cross performance. Multilocation data from 1990 to 1995 were obtained from the hybrid testing program of Limagrain Genetics. Grain yield, moisture, stalk lodging, and root lodging of untested single crosses were predicted from (1) the performance of tested single crosses and (2) known genetic relationships among the parental inbreds. Correlations between predicted and observed performance were obtained with a delete-one cross-validation procedure. For the Intralocus Model, the correlations ranged from 0.50 to 0.66 for yield, 0.88 to 0.94 for moisture, 0.47 to 0.69 for stalk lodging, and 0.31 to 0.45 for root lodging. The BLUP procedure was consistently more effective with the Intralocus Model than with the Additive Model. When the Additive Model was used instead of the Intralocus Model, the reductions in the correlation were largest for root lodging (0.06-0.35), smallest for moisture (0.00-0.02), and intermediate for yield (0.02-0.06) and stalk lodging (0.02-0.08). The ratio of dominance variance (VD) to total genetic variance (VG) was highest for root lodging (0.47) and lowest for moisture (0.10). The Additive Model may be used if prior information indicates that VD for a given trait has little contribution to VG. Otherwise, the continued use of the Intralocus Model for BLUP of single-cross performance is recommended.
Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.
2009-01-01
This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows) generally yield little improvement over the weighted mean ensemble. However a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. 
Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in multi-model ensembles. The reasons behind these observations may relate to the effects of the weighting schemes, non-stationarity of the climate series and possible cross-correlations between models. Crown Copyright © 2008.
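The two best-performing combination methods named above, the trimmed mean and the weighted mean, can be sketched generically. This is an illustration only; the paper's actual weights are derived from calibration performance and are assumed here to be supplied:

```python
def trimmed_mean_ensemble(predictions, keep=4):
    """Trimmed-mean ensemble: sort the member predictions for a given
    day and average only the central `keep` values (the study uses the
    central four or six of its ten single-model ensembles)."""
    preds = sorted(predictions)
    drop = (len(preds) - keep) // 2
    central = preds[drop:len(preds) - drop]
    return sum(central) / len(central)

def weighted_mean_ensemble(predictions, weights):
    """Weighted-mean ensemble: weights are assumed to come from each
    model's calibration performance, so better models count for more."""
    return sum(w * p for w, p in zip(weights, predictions)) / sum(weights)
```

The trimmed mean is robust to a single badly wrong member, which helps explain why even the weakest models can be included without degrading the ensemble.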
Janneck, Robby; Vercesi, Federico; Heremans, Paul; Genoe, Jan; Rolin, Cedric
2016-09-01
A model that describes solvent evaporation dynamics in meniscus-guided coating techniques is developed. In combination with a single fitting parameter, it is shown that this formula can accurately predict a processing window for various coating conditions. Organic thin-film transistors (OTFTs), fabricated by a zone-casting setup, indeed show the best performance at the predicted coating speeds with mobilities reaching 7 cm² V⁻¹ s⁻¹. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A statistical method for predicting seizure onset zones from human single-neuron recordings
NASA Astrophysics Data System (ADS)
Valdez, André B.; Hickman, Erin N.; Treiman, David M.; Smith, Kris A.; Steinmetz, Peter N.
2013-02-01
Objective. Clinicians often use depth-electrode recordings to localize human epileptogenic foci. To advance the diagnostic value of these recordings, we applied logistic regression models to single-neuron recordings from depth-electrode microwires to predict seizure onset zones (SOZs). Approach. We collected data from 17 epilepsy patients at the Barrow Neurological Institute and developed logistic regression models to calculate the odds of observing SOZs in the hippocampus, amygdala and ventromedial prefrontal cortex, based on statistics such as the burst interspike interval (ISI). Main results. Analysis of these models showed that, for a single-unit increase in burst ISI ratio, the left hippocampus was approximately 12 times more likely to contain a SOZ; and the right amygdala, 14.5 times more likely. Our models were most accurate for the hippocampus bilaterally (at 85% average sensitivity), and performance was comparable with current diagnostics such as electroencephalography. Significance. Logistic regression models can be combined with single-neuron recording to predict likely SOZs in epilepsy patients being evaluated for resective surgery, providing an automated source of clinically useful information.
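The "times more likely" figures above follow directly from the logistic regression coefficients: the odds ratio for a unit increase in a predictor is the exponential of its coefficient. A small sketch of that relationship (the coefficients themselves are not given in the abstract, so ln(12) is inferred from the reported odds ratio):

```python
import math

def odds_ratio(beta, delta=1.0):
    """Multiplicative change in odds for a `delta` increase in a
    predictor whose logistic-regression coefficient is `beta`.
    E.g. the reported ~12x odds for the left hippocampus per unit
    increase in burst ISI ratio implies beta ~= ln(12) ~= 2.48."""
    return math.exp(beta * delta)
```

This is why logistic models are convenient for clinical reporting: each coefficient maps to an interpretable odds multiplier.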
NASA Technical Reports Server (NTRS)
Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily;
2013-01-01
The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the need for the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and has proven to produce better prediction quality (on average) than any single model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how this multi-model ensemble approach yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011) a collaborative and coordinated implementation strategy for a NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance for operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and the complementary skill associated with individual models.
DeShaw, Jonathan; Rahmatalla, Salam
2014-08-01
The aim of this study was to develop a predictive discomfort model in single-axis, 3-D, and 6-D combined-axis whole-body vibrations of seated occupants considering different postures. Non-neutral postures in seated whole-body vibration play a significant role in the resulting level of perceived discomfort and potential long-term injury. The current international standards address contact points but not postures. The proposed model computes discomfort on the basis of static deviation of human joints from their neutral positions and how fast humans rotate their joints under vibration. Four seated postures were investigated. For practical implications, the coefficients of the predictive discomfort model were converted to the Borg scale with psychophysical data from 12 volunteers in different vibration conditions (single-axis random fore-aft, lateral, and vertical, and two magnitudes of 3-D). The model was tested under two magnitudes of 6-D vibration. Significant correlations (R = .93) were found between the predictive discomfort model and the reported discomfort with different postures and vibrations. The ISO 2631-1 correlated very well with discomfort (R² = .89) but was not able to predict the effect of posture. Human discomfort in seated whole-body vibration with different non-neutral postures can be closely predicted by a combination of static posture and the angular velocities of the joints. The predictive discomfort model can assist ergonomists and human factors researchers in designing safer environments for seated operators under vibration. The model can be integrated with advanced computer biomechanical models to investigate the complex interaction between posture and vibration.
Anisotropic constitutive modeling for nickel-base single crystal superalloys. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Sheh, Michael Y.
1988-01-01
An anisotropic constitutive model was developed based on crystallographic slip theory for nickel-base single crystal superalloys. The constitutive equations developed utilize drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments were conducted to evaluate the existence of back stress in single crystal superalloy Rene N4 at 982 C. The results suggest that: (1) the back stress is orientation dependent; and (2) the back stress state variable is required for the current model to predict material anelastic recovery behavior. The model was evaluated for its predictive capability on single crystal material behavior including orientation dependent stress-strain response, tension/compression asymmetry, strain rate sensitivity, anelastic recovery behavior, cyclic hardening and softening, stress relaxation, creep and associated crystal lattice rotation. Limitations and future development needs are discussed.
Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models
Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin
2017-01-01
In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
Genomic Selection in Multi-environment Crop Trials.
Oakey, Helena; Cullis, Brian; Thompson, Robin; Comadran, Jordi; Halpin, Claire; Waugh, Robbie
2016-05-03
Genomic selection in crop breeding introduces modeling challenges not found in animal studies. These include the need to accommodate replicate plants for each line, consider spatial variation in field trials, address line by environment interactions, and capture nonadditive effects. Here, we propose a flexible single-stage genomic selection approach that resolves these issues. Our linear mixed model incorporates spatial variation through environment-specific terms, and also randomization-based design terms. It considers marker and marker-by-environment interactions using ridge-regression best linear unbiased prediction to extend genomic selection to multiple environments. Since the approach uses the raw data from line replicates, the line genetic variation is partitioned into marker and nonmarker residual genetic variation (i.e., additive and nonadditive effects). This results in a more precise estimate of marker genetic effects. Using barley height data from trials, in 2 different years, of up to 477 cultivars, we demonstrate that our new genomic selection model improves predictions compared to current models. Analyzing single trials revealed improvements in predictive ability of up to 5.7%. For the multiple environment trial (MET) model, combining both year trials improved predictive ability up to 11.4% compared to a single environment analysis. Benefits were significant even when fewer markers were used. Compared to a single-year standard model run with 3490 markers, our partitioned MET model achieved the same predictive ability using between 500 and 1000 markers depending on the trial. Our approach can be used to increase accuracy and confidence in the selection of the best lines for breeding and/or, to reduce costs by using fewer markers. Copyright © 2016 Oakey et al.
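Ridge-regression BLUP, the core of the marker terms above, shrinks all marker effects toward zero with a common penalty. A generic NumPy sketch of RR-BLUP marker-effect estimation (not the paper's full mixed model, which adds spatial and randomization-based design terms):

```python
import numpy as np

def rr_blup_effects(M, y, lam):
    """Ridge-regression BLUP sketch: estimate marker effects u from a
    marker matrix M (lines x markers) and phenotypes y by solving
    (M'M + lam*I) u = M'y. Predictions for untested lines with marker
    profiles M_new are then M_new @ u. lam is the shrinkage parameter,
    in practice tied to the ratio of residual to marker variance."""
    p = M.shape[1]
    u = np.linalg.solve(M.T @ M + lam * np.eye(p), M.T @ y)
    return u
```

With lam near zero and more lines than markers this reduces to ordinary least squares; realistic genomic settings (many more markers than lines) rely on the shrinkage to keep the system well posed.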
Improved model quality assessment using ProQ2.
Ray, Arjun; Lindahl, Erik; Wallner, Björn
2012-09-10
Employing methods to assess the quality of modeled protein structures is now standard practice in bioinformatics. In a broad sense, the techniques can be divided into methods relying on consensus prediction on the one hand, and single-model methods on the other. Consensus methods frequently perform very well when there is a clear consensus, but this is not always the case. In particular, they frequently fail in selecting the best possible model in the hard cases (lacking consensus) or in the easy cases where models are very similar. In contrast, single-model methods do not suffer from these drawbacks and could potentially be applied on any protein of interest to assess quality or as a scoring function for sampling-based refinement. Here, we present a new single-model method, ProQ2, based on ideas from its predecessor, ProQ. ProQ2 is a model quality assessment algorithm that uses support vector machines to predict local as well as global quality of protein models. Improved performance is obtained by combining previously used features with updated structural and predicted features. The most important contribution can be attributed to the use of profile weighting of the residue-specific features and the use of features averaged over the whole model, even though the prediction is still local. ProQ2 is significantly better than its predecessors at detecting high quality models, improving the sum of Z-scores for the selected first-ranked models by 20% and 32% compared to the second-best single-model method in CASP8 and CASP9, respectively. The absolute quality assessment of the models at both local and global level is also improved. The Pearson's correlation between the correct and the predicted local score is improved from 0.59 to 0.70 on CASP8 and from 0.62 to 0.68 on CASP9; for the global score against the correct GDT_TS, from 0.75 to 0.80 and from 0.77 to 0.80, again compared to the second-best single-model methods in CASP8 and CASP9, respectively.
ProQ2 is available at http://proq2.wallnerlab.org.
Multi input single output model predictive control of non-linear bio-polymerization process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arumugasamy, Senthil Kumar; Ahmad, Z.
This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for Poly(ε-caprolactone) production. In this research, a state-space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (Mn) and the polymer polydispersity index. The state-space model for MISO was created using the System Identification Toolbox of MATLAB™ and used in the MISO MPC. Model predictive control (MPC) was applied to predict the molecular weight of the biopolymer and consequently control it. The results show that MPC is able to track the reference trajectory and gives optimum movement of the manipulated variables.
Review on failure prediction techniques of composite single lap joint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ab Ghani, A.F., E-mail: ahmadfuad@utem.edu.my; Rivai, Ahmad, E-mail: ahmadrivai@utem.edu.my
2016-03-29
Adhesive bonding is the most appropriate joining method in the construction of composite structures. The use of reliable design and prediction techniques will produce better-performing bonded joints. Several recent papers and journal articles have been reviewed and synthesized to understand the current state of the art in this area. This is done by studying the most relevant analytical solutions for composite adherends, starting with a review of the most fundamental ones involving beam/plate theory. The review then extends to single-lap-joint nonlinearity and failure prediction, and finally to failure prediction for composite single lap joints. The review also encompasses finite element modelling as a tool to predict the elastic response of composite single lap joints and to predict failure numerically.
A crystallographic model for the tensile and fatigue response for Rene N4 at 982 C
NASA Technical Reports Server (NTRS)
Sheh, M. Y.; Stouffer, D. C.
1990-01-01
An anisotropic constitutive model based on crystallographic slip theory was formulated for nickel-base single-crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the existence of back stress in single crystals. The results showed that the back stress effect of reverse inelastic flow on the unloading stress is orientation-dependent, and a back stress state variable in the inelastic flow equation is necessary for predicting inelastic behavior. Model correlations and predictions of experimental data are presented for the single crystal superalloy Rene N4 at 982 C.
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-04-04
Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate probability density distributions for quality assessment. Qprob has been blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob.
Barrett, Jessica; Pennells, Lisa; Sweeting, Michael; Willeit, Peter; Di Angelantonio, Emanuele; Gudnason, Vilmundur; Nordestgaard, Børge G.; Psaty, Bruce M; Goldbourt, Uri; Best, Lyle G; Assmann, Gerd; Salonen, Jukka T; Nietert, Paul J; Verschuren, W. M. Monique; Brunner, Eric J; Kronmal, Richard A; Salomaa, Veikko; Bakker, Stephan J L; Dagenais, Gilles R; Sato, Shinichi; Jansson, Jan-Håkan; Willeit, Johann; Onat, Altan; de la Cámara, Agustin Gómez; Roussel, Ronan; Völzke, Henry; Dankner, Rachel; Tipping, Robert W; Meade, Tom W; Donfrancesco, Chiara; Kuller, Lewis H; Peters, Annette; Gallacher, John; Kromhout, Daan; Iso, Hiroyasu; Knuiman, Matthew; Casiglia, Edoardo; Kavousi, Maryam; Palmieri, Luigi; Sundström, Johan; Davis, Barry R; Njølstad, Inger; Couper, David; Danesh, John; Thompson, Simon G; Wood, Angela
2017-01-01
Abstract The added value of incorporating information from repeated blood pressure and cholesterol measurements to predict cardiovascular disease (CVD) risk has not been rigorously assessed. We used data on 191,445 adults from the Emerging Risk Factors Collaboration (38 cohorts from 17 countries with data encompassing 1962–2014) with more than 1 million measurements of systolic blood pressure, total cholesterol, and high-density lipoprotein cholesterol. Over a median 12 years of follow-up, 21,170 CVD events occurred. Risk prediction models using cumulative mean values of repeated measurements and summary measures from longitudinal modeling of the repeated measurements were compared with models using measurements from a single time point. Risk discrimination (C-index) and net reclassification were calculated, and changes in C-indices were meta-analyzed across studies. Compared with the single-time-point model, the cumulative means and longitudinal models increased the C-index by 0.0040 (95% confidence interval (CI): 0.0023, 0.0057) and 0.0023 (95% CI: 0.0005, 0.0042), respectively. Reclassification was also improved in both models; compared with the single-time-point model, overall net reclassification improvements were 0.0369 (95% CI: 0.0303, 0.0436) for the cumulative-means model and 0.0177 (95% CI: 0.0110, 0.0243) for the longitudinal model. In conclusion, incorporating repeated measurements of blood pressure and cholesterol into CVD risk prediction models slightly improves risk prediction. PMID:28549073
Bias in Prediction: A Test of Three Models with Elementary School Children
ERIC Educational Resources Information Center
Frazer, William G.; And Others
1975-01-01
Explores the differences among the traditional single-equation prediction model of test bias, the Cleary and the Thorndike model in a situation involving typical educational variables with young female and male children. (Author/DEP)
Li, Yankun; Shao, Xueguang; Cai, Wensheng
2007-04-15
Consensus modeling, which combines the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on the principle of consensus modeling, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; then, the consensus LS-SVR technique was used to build the calibration model. With optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The predicted results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods.
Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.
Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen
2016-07-01
This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish an optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combination model reduces the prediction risk of a single model and improves prediction precision. The geographical distribution of the reference value of QT dispersion in Chinese adults was mapped precisely using kriging methods. When the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated using the optimal weighted combination model, and the reference value anywhere in China can be obtained from the geographical distribution map.
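The abstract does not state how the combination weights are chosen; one common way to build such an "optimal weighted" combination is to fit least-squares weights for the member models against observed values. A hypothetical sketch of that approach:

```python
import numpy as np

def optimal_weights(member_preds, observed):
    """Least-squares combination weights for member-model predictions.
    member_preds: (n_samples, n_models) array, one column per model
    (e.g. regression, principal component, neural network); observed:
    (n_samples,) reference values. This is a generic illustration, not
    the weighting scheme actually used in the study."""
    w, *_ = np.linalg.lstsq(member_preds, observed, rcond=None)
    return w

def combine(member_preds, w):
    """Combined prediction: weighted sum of the member predictions."""
    return member_preds @ w
```

Weights fitted this way minimize the squared error of the combined prediction on the calibration data, which is the sense in which the combination can outperform any single member.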
Predictive models of radiative neutrino masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julio, J., E-mail: julio@lipi.go.id
2016-06-21
We discuss two models of radiative neutrino mass generation. The first is a one-loop Zee model with Z4 symmetry. The second is a two-loop neutrino mass model with singly- and doubly-charged scalars. Both models fit neutrino oscillation data well and predict interesting rates for lepton flavor violation processes.
ISO Mid-Infrared Spectra of Reflection Nebulae
NASA Technical Reports Server (NTRS)
Werner, M.; Uchida, K.; Sellgren, K.; Houdashelt, M.
1999-01-01
Our goal is to test predictions of models attributing the IEFs to polycyclic aromatic hydrocarbons (PAHs). Interstellar models predict that PAHs change from singly ionized to neutral as the UV intensity, G0, decreases.
Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction
Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose
2017-01-01
Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GBLUP, GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
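The two kernels being compared can be written compactly. A sketch under common conventions: a GBLUP-style linear kernel X X'/p over p markers, and a Gaussian kernel with bandwidth h normalized by the median squared distance, which is one frequently used choice and an assumption here, not necessarily the exact normalization used in the paper:

```python
import math

def linear_kernel(X):
    # GBLUP-style linear kernel: K[i][j] = x_i . x_j / p for p markers.
    p = len(X[0])
    return [[sum(a * b for a, b in zip(xi, xj)) / p for xj in X] for xi in X]

def gaussian_kernel(X, h=1.0):
    # Gaussian kernel: K[i][j] = exp(-h * d_ij^2 / q), with q the median
    # of the nonzero squared distances (this normalization is an assumption).
    d2 = [[sum((a - b) ** 2 for a, b in zip(xi, xj)) for xj in X] for xi in X]
    nz = sorted(v for row in d2 for v in row if v > 0)
    q = nz[len(nz) // 2] if nz else 1.0
    return [[math.exp(-h * v / q) for v in row] for row in d2]

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = gaussian_kernel(X)
print(K[0][0])  # diagonal entries are exactly 1.0
```

Either kernel can then be plugged into the same mixed-model machinery; the Gaussian kernel additionally captures nonlinear marker effects, which is the source of the accuracy gains reported above.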
Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo
2009-01-01
New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
Zielinski, Michal W; McGann, Locksley E; Nychka, John A; Elliott, Janet A W
2014-10-01
Thermodynamic solution theories allow the prediction of chemical potentials in solutions of known composition. In cryobiology, such models are a critical component of many mathematical models that are used to simulate the biophysical processes occurring in cells and tissues during cryopreservation. A number of solution theories, both thermodynamically ideal and non-ideal, have been proposed for use with cryobiological solutions. In this work, we have evaluated two non-ideal solution theories for predicting water chemical potential (i.e. osmolality) in multi-solute solutions relevant to cryobiology: the Elliott et al. form of the multi-solute osmotic virial equation, and the Kleinhans and Mazur freezing point summation model. These two solution theories require fitting to only single-solute data, although they can make predictions in multi-solute solutions. The predictions of these non-ideal solution theories were compared to predictions made using ideal dilute assumptions and to available literature multi-solute experimental osmometric data. A single, consistent set of literature single-solute solution data was used to fit for the required solute-specific coefficients for each of the non-ideal models. Our results indicate that the two non-ideal solution theories have similar overall performance, and both give more accurate predictions than ideal models. These results can be used to select between the non-ideal models for a specific multi-solute solution, and the updated coefficients provided in this work can be used to make the desired predictions. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
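The structure of such a multi-solute prediction can be illustrated with a second-order truncation of the multi-solute osmotic virial equation. The arithmetic-mean mixing rule for the cross coefficients follows the Elliott et al. form as commonly stated, but the snippet below is only an illustrative sketch with made-up coefficients, not the authors' fitted model:

```python
def osmolality(m, B):
    # Second-order multi-solute osmotic virial equation:
    #   pi = sum_i m_i + sum_i sum_j ((B_i + B_j) / 2) * m_i * m_j
    # m: solute molalities; B: single-solute second osmotic virial
    # coefficients (fitted to single-solute data only).
    first = sum(m)
    second = sum(((B[i] + B[j]) / 2) * m[i] * m[j]
                 for i in range(len(m)) for j in range(len(m)))
    return first + second

# An ideal dilute solute (B = 0) contributes only its molality.
print(osmolality([1.0], [0.0]))                    # 1.0
print(round(osmolality([1.0, 1.0], [0.1, 0.3]), 6))  # 2.8
```

The key point mirrored from the abstract: only single-solute coefficients B_i appear, yet the double sum yields multi-solute predictions through the averaged cross terms.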
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grant, Claire, E-mail: claire.grant@astrazeneca.com; Ewart, Lorna; Muthas, Daniel
Nausea and vomiting are components of a complex mechanism that signals food avoidance and protects the body against the absorption of ingested toxins. This response can also be triggered by pharmaceuticals. Predicting clinical nausea and vomiting liability for pharmaceutical agents from pre-clinical data can be problematic, as no single animal model is a universal predictor. Moreover, efforts to improve models are hampered by the lack of translational animal and human data in the public domain. AZD3514 is a novel, orally administered compound that inhibits androgen receptor signaling and down-regulates androgen receptor expression. Here we have explored the utility of integrating data from several pre-clinical models to predict nausea and vomiting in the clinic. Single and repeat doses of AZD3514 resulted in emesis, salivation and gastrointestinal disturbances in the dog, and inhibited gastric emptying in rats after a single dose. AZD3514, at clinically relevant exposures, induced dose-responsive "pica" behaviour in rats after single and multiple daily doses, and induced retching and vomiting behaviour in ferrets after a single dose. We compare these data with the clinical manifestation of nausea and vomiting encountered in patients with castration-resistant prostate cancer receiving AZD3514. Our data reveal a striking relationship between the pre-clinical observations described and the experience of nausea and vomiting in the clinic. In conclusion, the emetic nature of AZD3514 was predicted across a range of pre-clinical models, and the approach presented provides a valuable framework for prediction of clinical nausea and vomiting. - Highlights: • Integrated pre-clinical data can be used to predict clinical nausea and vomiting. • Data integrated from standard toxicology studies is sufficient to make a prediction. • The use of the nausea algorithm developed by Parkinson (2012) aids the prediction. • Additional pre-clinical studies can be used to confirm and quantify the risk.
Accuracies of univariate and multivariate genomic prediction models in African cassava.
Okeke, Uche Godfrey; Akdemir, Deniz; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2017-12-04
Genomic selection (GS) promises to accelerate genetic gain in plant breeding programs especially for crop species such as cassava that have long breeding cycles. Practically, to implement GS in cassava breeding, it is necessary to evaluate different GS models and to develop suitable models for an optimized breeding pipeline. In this paper, we compared (1) prediction accuracies from a single-trait (uT) and a multi-trait (MT) mixed model for a single-environment genetic evaluation (Scenario 1), and (2) accuracies from a compound symmetric multi-environment model (uE) parameterized as a univariate multi-kernel model to a multivariate (ME) multi-environment mixed model that accounts for genotype-by-environment interaction for multi-environment genetic evaluation (Scenario 2). For these analyses, we used 16 years of public cassava breeding data for six target cassava traits and a fivefold cross-validation scheme with 10-repeat cycles to assess model prediction accuracies. In Scenario 1, the MT models had higher prediction accuracies than the uT models for all traits and locations analyzed, which amounted to on average a 40% improved prediction accuracy. For Scenario 2, we observed that the ME model had on average (across all locations and traits) a 12% improved prediction accuracy compared to the uE model. We recommend the use of multivariate mixed models (MT and ME) for cassava genetic evaluation. These models may be useful for other plant species.
NASA Astrophysics Data System (ADS)
Bisdom, Kevin; Bertotti, Giovanni; Nick, Hamidreza M.
2016-05-01
Predicting equivalent permeability in fractured reservoirs requires an understanding of the fracture network geometry and apertures. There are different methods for defining aperture, based on outcrop observations (power law scaling), fundamental mechanics (sublinear length-aperture scaling), and experiments (Barton-Bandis conductive shearing). Each method predicts heterogeneous apertures, even along single fractures (i.e., intrafracture variations), but most fractured reservoir models imply constant apertures for single fractures. We compare the relative differences in aperture and permeability predicted by three aperture methods, where permeability is modeled in explicit fracture networks with coupled fracture-matrix flow. Aperture varies along single fractures, and geomechanical relations are used to identify which fractures are critically stressed. The aperture models are applied to real-world large-scale fracture networks. (Sub)linear length scaling predicts the largest average aperture and equivalent permeability. Barton-Bandis aperture is smaller, predicting on average a sixfold increase compared to matrix permeability. Application of critical stress criteria results in a decrease in the fraction of open fractures. For the applied stress conditions, Coulomb predicts that 50% of the network is critically stressed, compared to 80% for Barton-Bandis peak shear. The impact of the fracture network on equivalent permeability depends on the matrix hydraulic properties, as in a low-permeable matrix, intrafracture connectivity, i.e., the opening along a single fracture, controls equivalent permeability, whereas for a more permeable matrix, absolute apertures have a larger impact. Quantification of fracture flow regimes using only the ratio of fracture versus matrix permeability is insufficient, as these regimes also depend on aperture variations within fractures.
Prediction of global and local model quality in CASP8 using the ModFOLD server.
McGuffin, Liam J
2009-01-01
The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) at predicting the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model machine-learning based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering-based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/. Copyright 2009 Wiley-Liss, Inc.
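The clustering idea behind methods like ModFOLDclust can be sketched generically: score each model by its mean similarity to the rest of the pool, so models that agree with the consensus rank highest. The pairwise similarity function below is a placeholder; a real implementation would use a structural score such as TM-score:

```python
def consensus_scores(models, similarity):
    # Rate each model by its mean pairwise similarity to all other models:
    # models that agree with the bulk of the pool score highest.
    scores = []
    for i, mi in enumerate(models):
        sims = [similarity(mi, mj) for j, mj in enumerate(models) if j != i]
        scores.append(sum(sims) / len(sims))
    return scores

# Toy stand-in: "models" are numbers and similarity decays with distance.
sim = lambda a, b: 1.0 / (1.0 + abs(a - b))
scores = consensus_scores([0.0, 0.1, 0.2, 5.0], sim)
print(scores.index(min(scores)))  # 3 (the outlier scores lowest)
```

This also shows the known weakness of pure clustering methods: they need a pool of models and reward consensus, whereas single-model methods like ModFOLD v1.1 score each model in isolation.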
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
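The first technique in the list, a prediction interval from a least squares fit, can be sketched for the single input-single output case. The 1.96 normal quantile used below is a large-sample simplification and an assumption of this sketch; a Student-t quantile is the rigorous choice for small n:

```python
import math

def ls_prediction_interval(xs, ys, x_new, z=1.96):
    # 1-D least-squares fit with an approximate prediction interval for a
    # new observation: yhat +/- z * s * sqrt(1 + 1/n + (x - xbar)^2 / Sxx).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b = my - a * mx
    resid = [y - (a * x + b) for x, y in zip(xs, ys)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual std. error
    yhat = a * x_new + b
    half = z * s * math.sqrt(1 + 1 / n + (x_new - mx) ** 2 / sxx)
    return yhat - half, yhat + half

# Perfect-line data has zero residuals, so the interval collapses to a point.
print(ls_prediction_interval([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0], 4.0))
```

The (x_new - xbar)^2 / Sxx term is what widens the interval away from the data, one of the extrapolation properties the paper compares across techniques.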
Lu, Yinghui; Gribok, Andrei V; Ward, W Kenneth; Reifman, Jaques
2010-08-01
We investigated the relative importance and predictive power of different frequency bands of subcutaneous glucose signals for the short-term (0-50 min) forecasting of glucose concentrations in type 1 diabetic patients with data-driven autoregressive (AR) models. The study data consisted of minute-by-minute glucose signals collected from nine deidentified patients over a five-day period using continuous glucose monitoring devices. AR models were developed using single and pairwise combinations of frequency bands of the glucose signal and compared with a reference model including all bands. The results suggest that: for open-loop applications, there is no need to explicitly represent exogenous inputs, such as meals and insulin intake, in AR models; models based on a single-frequency band, with periods between 60-120 min and 150-500 min, yield good predictive power (error <3 mg/dL) for prediction horizons of up to 25 min; models based on pairs of bands produce predictions that are indistinguishable from those of the reference model as long as the 60-120 min period band is included; and AR models can be developed on signals of short length (approximately 300 min), i.e., ignoring long circadian rhythms, without any detriment in prediction accuracy. Together, these findings provide insights into efficient development of more effective and parsimonious data-driven models for short-term prediction of glucose concentrations in diabetic patients.
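The AR modeling described here can be sketched directly: fit AR(p) coefficients by least squares on lagged values, then iterate one-step predictions out to the desired horizon. This is a minimal pure-Python sketch (no intercept term, no frequency-band decomposition), not the authors' code:

```python
def fit_ar(series, p):
    # Fit AR(p) coefficients by ordinary least squares on lagged values,
    # solving the normal equations with plain Gaussian elimination.
    rows = [series[t - p:t][::-1] for t in range(p, len(series))]
    y = series[p:]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yt for r, yt in zip(rows, y)) for i in range(p)]
    for i in range(p):                      # forward elimination
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * ai for a, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    c = [0.0] * p
    for i in range(p - 1, -1, -1):          # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, p))) / A[i][i]
    return c

def forecast(series, coeffs, steps):
    # Iterated multi-step forecast: feed predictions back in as inputs.
    hist = list(series)
    for _ in range(steps):
        lags = hist[-len(coeffs):][::-1]
        hist.append(sum(c * v for c, v in zip(coeffs, lags)))
    return hist[len(series):]

# Noise-free AR(1) series x_t = 0.9 * x_{t-1}: the fit recovers 0.9.
s = [1.0]
for _ in range(19):
    s.append(0.9 * s[-1])
print(fit_ar(s, 1))
```

Extending the horizon simply iterates the one-step predictor, which is why prediction error grows with the horizon, consistent with the 25-minute limit reported above.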
Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik; Holtermann, Andreas
2016-05-01
We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. Two-hundred-and-fourteen blue-collar workers responded to a questionnaire containing information about personal and work related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary time (OST) explained 63% (adjusted R²) of the variance of both objectively measured time spent sedentary and in physical activity since these two exposures were complementary. Single-predictor models based only on self-reported information about either OPA or OST explained 21% and 38%, respectively, of the variance of the objectively measured exposures. Internal validation using bootstrapping suggested that the full and single-predictor models would show almost the same performance in new datasets as in that used for modelling. Both full and single-predictor models based on self-reported information typically available in most large epidemiological studies and surveys were able to predict objectively measured occupational time spent sedentary or in physical activity, with explained variances ranging from 21-63%.
NASA Astrophysics Data System (ADS)
Febrian Umbara, Rian; Tarwidi, Dede; Budi Setiawan, Erwin
2018-03-01
The paper discusses the prediction of the Jakarta Composite Index (JCI) on the Indonesia Stock Exchange. The study is based on JCI historical data for 1286 days, used to predict the value of JCI one day ahead. The proposed predictions are made in two stages: the first uses Fuzzy Time Series (FTS) to predict the values of ten technical indicators, and the second uses Support Vector Regression (SVR) to predict the value of JCI one day ahead, resulting in a hybrid FTS-SVR prediction model. The performance of this combined prediction model is compared with that of a single-stage prediction model using SVR only. Ten technical indicators are used as input for each model.
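The two-stage architecture can be sketched generically. In this toy, a persistence forecast (repeat the last value) stands in for the FTS stage and a fixed linear map stands in for the trained SVR stage; both stand-ins are assumptions for illustration only:

```python
def two_stage_forecast(indicator_histories, weights, bias):
    # Stage 1: forecast each technical indicator one step ahead.
    # (Persistence stands in for FTS in this sketch.)
    next_indicators = [hist[-1] for hist in indicator_histories]
    # Stage 2: map the forecast indicators to the index value.
    # (A fixed linear map stands in for a trained SVR in this sketch.)
    return bias + sum(w * v for w, v in zip(weights, next_indicators))

# Two toy indicators with equal weights.
print(two_stage_forecast([[1.0, 2.0], [3.0, 4.0]], [0.5, 0.5], 0.0))  # 3.0
```

The design point is that the second stage consumes *forecast* indicator values rather than today's observed ones, which is what distinguishes the hybrid from the single-stage SVR baseline.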
Chesi, Marta; Matthews, Geoffrey M.; Garbitt, Victoria M.; Palmer, Stephen E.; Shortt, Jake; Lefebure, Marcus; Stewart, A. Keith; Johnstone, Ricky W.
2012-01-01
The attrition rate for anticancer drugs entering clinical trials is unacceptably high. For multiple myeloma (MM), we postulate that this is because of preclinical models that overemphasize the antiproliferative activity of drugs, and clinical trials performed in refractory end-stage patients. We validate the Vk*MYC transgenic mouse as a faithful model to predict single-agent drug activity in MM with a positive predictive value of 67% (4 of 6) for clinical activity, and a negative predictive value of 86% (6 of 7) for clinical inactivity. We identify 4 novel agents that should be prioritized for evaluation in clinical trials. Transplantation of Vk*MYC tumor cells into congenic mice selected for a more aggressive disease that models end-stage drug-resistant MM and responds only to combinations of drugs with single-agent activity in untreated Vk*MYC MM. We predict that combinations of standard agents, histone deacetylase inhibitors, bromodomain inhibitors, and hypoxia-activated prodrugs will demonstrate efficacy in the treatment of relapsed MM. PMID:22451422
Initial comparison of single cylinder Stirling engine computer model predictions with test results
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.; Thieme, L. G.; Miao, D.
1979-01-01
A NASA-developed digital computer code for a Stirling engine, modelling the performance of a single-cylinder rhombic-drive ground performance unit (GPU), is presented and its predictions are compared to test results. The GPU engine incorporates eight regenerator/cooler units, and the engine working space is modelled by thirteen control volumes. The model calculates indicated power and efficiency for a given engine speed, mean pressure, heater and expansion space metal temperatures, and cooler water inlet temperature and flow rate. Comparison of predicted and observed powers implies that the reference pressure drop calculations underestimate the actual pressure drop, possibly due to oil contamination in the regenerator/cooler units, methane contamination in the working gas, or underestimation of mechanical loss. For a working gas of hydrogen, the predicted values of brake power are 0 to 6% higher than experimental values and brake efficiency is 6 to 16% higher, while for helium the predicted brake power and efficiency are 2 to 15% higher than the experimental values.
ISOBAR MODEL ANALYSIS OF SINGLE PION PRODUCTION IN PION-NUCLEON COLLISIONS BELOW 1 Bev
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsson, M.; Yodh, G.B.
1963-04-15
The isobar model of Bergia, Bonsignori, and Stanghellini for single-pion production in π-N collisions is shown to account for the majority of the observed mass spectra and the ratio of π⁰ to π⁺ production in π⁺-p collisions from 350 MeV to 1 BeV when the p-wave decay of the isobar and the requirements of Bose statistics are included. Predictions of this improved model are compared with experimental data and with the predictions of other models. (D.C.W.)
A dual-porosity reactive-transport model of off-axis hydrothermal systems
NASA Astrophysics Data System (ADS)
Farahat, N. X.; Abbot, D. S.; Archer, D. E.
2017-12-01
We built a dual-porosity reactive-transport 2D numerical model of off-axis pillow basalt alteration. An "outer chamber" full of porous glassy material supports significant seawater flushing, and an "inner chamber", which represents the more crystalline interior of a pillow, supports diffusive alteration. Hydrothermal fluids in the two chambers interact, and the two chambers are coupled to 2D flows. In a few million years of low-temperature alteration, the dual-porosity model predicts progressive stages of alteration that have been observed in drilled crust. A single-porosity model, with all else being equal, does not predict alteration stages as well. The dual-chamber model also does a better job than the single-chamber model at predicting the types of minerals expected in off-axis environments. We validate the model's ability to reproduce observations by configuring it to represent a thoroughly-studied transect of the Juan de Fuca Ridge eastern flank.
Single-trial dynamics of motor cortex and their applications to brain-machine interfaces
Kao, Jonathan C.; Nuyujukian, Paul; Ryu, Stephen I.; Churchland, Mark M.; Cunningham, John P.; Shenoy, Krishna V.
2015-01-01
Increasing evidence suggests that neural population responses have their own internal drive, or dynamics, that describe how the neural population evolves through time. An important prediction of neural dynamical models is that previously observed neural activity is informative of noisy yet-to-be-observed activity on single-trials, and may thus have a denoising effect. To investigate this prediction, we built and characterized dynamical models of single-trial motor cortical activity. We find these models capture salient dynamical features of the neural population and are informative of future neural activity on single trials. To assess how neural dynamics may beneficially denoise single-trial neural activity, we incorporate neural dynamics into a brain–machine interface (BMI). In online experiments, we find that a neural dynamical BMI achieves substantially higher performance than its non-dynamical counterpart. These results provide evidence that neural dynamics beneficially inform the temporal evolution of neural activity on single trials and may directly impact the performance of BMIs. PMID:26220660
NASA Technical Reports Server (NTRS)
Bulzan, Daniel L.
1988-01-01
A theoretical and experimental investigation of particle-laden, weakly swirling, turbulent free jets was conducted. Glass particles, having a Sauter mean diameter of 39 microns, with a standard deviation of 15 microns, were used. A single loading ratio (the mass flow rate of particles per unit mass flow rate of air) of 0.2 was used in the experiments. Measurements are reported for three swirl numbers, ranging from 0 to 0.33. The measurements included mean and fluctuating velocities of both phases, and particle mass flux distributions. Measurements were also completed for single-phase non-swirling and swirling jets, as baselines. Measurements were compared with predictions from three types of multiphase flow analysis, as follows: (1) locally homogeneous flow (LHF) where slip between the phases was neglected; (2) deterministic separated flow (DSF), where slip was considered but effects of turbulence/particle interactions were neglected; and (3) stochastic separated flow (SSF), where effects of both interphase slip and turbulence/particle interactions were considered using random sampling for turbulence properties in conjunction with random-walk computations for particle motion. Single-phase weakly swirling jets were considered first. Predictions using a standard k-epsilon turbulence model, as well as two versions modified to account for effects of streamline curvature, were compared with measurements. Predictions using a streamline curvature modification based on the flux Richardson number gave better agreement with measurements for the single-phase swirling jets than the standard k-epsilon model. For the particle-laden jets, the LHF and DSF models did not provide very satisfactory predictions. The LHF model generally overestimated the rate of decay of particle mean axial and angular velocities with streamwise distance, and predicted particle mass fluxes also showed poor agreement with measurements, due to the assumption of no-slip between phases. 
The DSF model also performed quite poorly for predictions of particle mass flux because turbulent dispersion of the particles was neglected. The SSF model, which accounts for both particle inertia and turbulent dispersion of the particles, yielded reasonably good predictions throughout the flow field for the particle-laden jets.
Development of burnup dependent fuel rod model in COBRA-TF
NASA Astrophysics Data System (ADS)
Yilmaz, Mine Ozdemir
The purpose of this research was to develop a burnup-dependent fuel thermal conductivity model within the Pennsylvania State University Reactor Dynamics and Fuel Management Group (RDFMG) version of the subchannel thermal-hydraulics code COBRA-TF (CTF). The model takes into account, first, the degradation of fuel thermal conductivity at high burnup and, second, the dependence of fuel thermal conductivity on gadolinium content, for both UO2 and MOX fuel rods. The modified Nuclear Fuel Industries (NFI) model for UO2 fuel rods and the Duriez/modified NFI model for MOX fuel rods were incorporated into CTF, and fuel centerline predictions were compared against Halden experimental test data and FRAPCON-3.4 predictions to validate the burnup-dependent fuel thermal conductivity model in CTF. Experimental test cases from Halden reactor fuel rods were simulated with CTF: UO2 fuel rods at Beginning of Life (BOL), through lifetime without Gd2O3, and through lifetime with Gd2O3, as well as a MOX fuel rod. Since test fuel rod and FRAPCON-3.4 results were based on single-rod measurements, CTF was run for a single fuel rod surrounded by a single channel. Input decks for CTF were developed for one fuel rod located at the center of a subchannel (rod-centered subchannel approach). Fuel centerline temperatures predicted by CTF were compared against the measurements from Halden experimental test data and the predictions from FRAPCON-3.4. After implementing the new fuel thermal conductivity model in CTF and validating it with experimental data, the CTF model was applied to steady-state and transient calculations. The 4x4 PWR fuel bundle configuration from the Purdue MOX benchmark was used to apply the new model to steady-state and transient calculations.
First, one high-burnup UO2 fuel rod and one MOX fuel rod from the 4x4 matrix were selected for single-fuel-rod calculations, and fuel centerline temperatures predicted by CTF/TORT-TD were compared against CTF/TORT-TD/FRAPTRAN predictions. After confirming that the new fuel thermal conductivity model in CTF worked and provided results consistent with FRAPTRAN predictions for a single-fuel-rod configuration, the same type of analysis was carried out for a larger system, the 4x4 PWR bundle consisting of 15 fuel pins and one control guide tube. Steady-state calculations at Hot Full Power (HFP) conditions with the control guide tube out (unrodded) were performed on the 4x4 PWR array with the CTF/TORT-TD coupled code system. Fuel centerline, surface and average temperatures predicted by CTF/TORT-TD with and without the new fuel thermal conductivity model were compared against CTF/TORT-TD/FRAPTRAN predictions to demonstrate the improvement in fuel centerline predictions when the new model was used. In addition, constant and CTF dynamic gap conductance models were used with the new thermal conductivity model to show the performance of the CTF dynamic gap conductance model and its impact on fuel centerline and surface temperatures. Finally, a Rod Ejection Accident (REA) scenario using the same 4x4 PWR array was run at both Hot Zero Power (HZP) and Hot Full Power (HFP) conditions, starting from a position where half of the control rod is inserted. This scenario was run using the CTF/TORT-TD coupled code system with and without the new fuel thermal conductivity model. The purpose of this transient analysis was to show the impact of thermal conductivity degradation (TCD) on feedback effects, specifically the Doppler Reactivity Coefficient (DRC) and, ultimately, total core reactivity.
DeepQA: improving the estimation of single protein model quality with deep belief networks.
Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin
2016-12-05
Protein quality assessment (QA) useful for ranking and selecting protein models has long been viewed as one of the major challenges for protein tertiary structure prediction. In particular, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method DeepQA based on deep belief networks that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physio-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that the deep belief network has better performance compared to Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves state-of-the-art performance on the CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single-model quality assessment and protein structure prediction. The source code, executable, documentation and training/test datasets of DeepQA for Linux are freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/.
ERIC Educational Resources Information Center
Peterson, Robin L.; Pennington, Bruce F.; Olson, Richard K.
2013-01-01
We investigated the phonological and surface subtypes of developmental dyslexia in light of competing predictions made by two computational models of single word reading, the Dual-Route Cascaded Model (DRC; Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001) and Harm and Seidenberg's connectionist model (HS model; Harm & Seidenberg, 1999). The…
Single Motherhood, Alcohol Dependence, and Smoking During Pregnancy: A Propensity Score Analysis.
Waldron, Mary; Bucholz, Kathleen K; Lian, Min; Lessov-Schlaggar, Christina N; Miller, Ruth Huang; Lynskey, Michael T; Knopik, Valerie S; Madden, Pamela A F; Heath, Andrew C
2017-09-01
Few studies linking single motherhood and maternal smoking during pregnancy consider correlated risk from problem substance use beyond history of smoking and concurrent use of alcohol. In the present study, we used propensity score methods to examine whether the risk of smoking during pregnancy associated with single motherhood is the result of potential confounders, including alcohol dependence. Data were drawn from mothers participating in a birth cohort study of their female like-sex twin offspring (n = 257 African ancestry; n = 1,711 European or other ancestry). We conducted standard logistic regression models predicting smoking during pregnancy from single motherhood at twins' birth, followed by propensity score analyses comparing single-mother and two-parent families stratified by predicted probability of single motherhood. In standard models, single motherhood predicted increased risk of smoking during pregnancy in European ancestry but not African ancestry families. In propensity score analyses, rates of smoking during pregnancy were elevated in single-mother relative to two-parent European ancestry families across much of the spectrum of a priori risk of single motherhood. Among African ancestry families, within-strata comparisons of smoking during pregnancy by single-mother status were nonsignificant. These findings highlight single motherhood as a unique risk factor for smoking during pregnancy in European ancestry mothers, over and above alcohol dependence. Additional research is needed to identify risks, beyond single motherhood, associated with smoking during pregnancy in African ancestry mothers.
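The propensity score workflow described above (model the probability of the "treatment", then compare outcome rates within propensity strata) can be sketched as follows. The data are simulated, and the single covariate, two-stratum rule, and sample size are illustrative assumptions, not the study's design.

```python
import math
import random

random.seed(0)

# Sketch of propensity-score stratification on simulated data:
# 1) fit logistic regression for P(single motherhood | covariate),
# 2) stratify on the fitted propensity, 3) compare smoking rates within strata.
def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-ascent logistic regression; returns weights and intercept."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(steps):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = yi - p
            b += lr * g
            w = [wj + lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    return 1.0 / (1.0 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, xi)))))

# Simulated covariate (stand-in for e.g. alcohol dependence), exposure, outcome.
X = [[random.random()] for _ in range(400)]
single = [1 if random.random() < 0.2 + 0.5 * x[0] else 0 for x in X]
smoke = [1 if random.random() < 0.1 + 0.3 * s else 0 for s in single]

w, b = fit_logistic(X, single)
scores = [predict(w, b, xi) for xi in X]

# Two propensity strata (below/above the median score); compare rates within.
med = sorted(scores)[len(scores) // 2]
for lo, hi in [(0.0, med), (med, 1.01)]:
    idx = [i for i, s in enumerate(scores) if lo <= s < hi]
    for grp in (0, 1):
        sub = [smoke[i] for i in idx if single[i] == grp]
        if sub:
            print(f"stratum [{lo:.2f},{hi:.2f}) single={grp}: "
                  f"smoking rate {sum(sub)/len(sub):.2f} (n={len(sub)})")
```

Within each stratum, exposed and unexposed families have similar propensity scores, so the within-stratum rate difference is the confounder-adjusted comparison the abstract describes.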
Glavatskikh, Marta; Madzhidov, Timur; Solov'ev, Vitaly; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre
2016-12-01
In this work, we report QSPR modeling of the free energy ΔG of 1 : 1 hydrogen bond complexes of different H-bond acceptors and donors. The modeling was performed on a large and structurally diverse set of 3373 complexes featuring a single hydrogen bond, for which ΔG was measured at 298 K in CCl4. The models were prepared using Support Vector Machine and Multiple Linear Regression, with ISIDA fragment descriptors. The marked atoms strategy was applied at the fragmentation stage in order to capture the location of H-bond donor and acceptor centers. Different strategies of model validation were applied, including the targeted omission of individual H-bond acceptors and donors from the training set, in order to check whether the predictive ability of the model is limited to the interpolation of H-bond strength between two already encountered partners. Successfully cross-validated individual models were combined into a consensus model, which was challenged to predict external test sets of 629 and 12 complexes, in which donor and acceptor formed single and cooperative H-bonds, respectively. In all cases, SVM models outperform MLR. The SVM consensus model performs well both in 3-fold cross-validation (RMSE = 1.50 kJ/mol) and on the external test sets containing complexes with single (RMSE = 3.20 kJ/mol) and cooperative H-bonds (RMSE = 1.63 kJ/mol). © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Local Burn-Up Effects in the NBSR Fuel Element
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown N. R.; Hanson A.; Diamond, D.
2013-01-31
This study addresses the over-prediction of local power when the burn-up distribution in each half-element of the NBSR is assumed to be uniform. A single-element model was utilized to quantify the impact of axial and plate-wise burn-up on the power distribution within the NBSR fuel elements for both high-enriched uranium (HEU) and low-enriched uranium (LEU) fuel. To validate this approach, key parameters in the single-element model were compared to parameters from an equilibrium core model, including the neutron energy spectrum, power distribution, and integral U-235 vector. The power distribution changes significantly when local burn-up effects are incorporated, with lower power peaking relative to the uniform burn-up case. In the uniform burn-up case, the axial relative power peaking is over-predicted by as much as 59% in the HEU single-element and 46% in the LEU single-element, and the plate-wise power peaking is over-predicted by as much as 23% in the HEU single-element and 18% in the LEU single-element. The degree of over-prediction increases as a function of burn-up cycle, with the greatest over-prediction at the end of Cycle 8. The thermal flux peak is always in the mid-plane gap; this causes the local cumulative burn-up near the mid-plane gap to be significantly higher than the fuel element average. A uniform burn-up distribution throughout a half-element also causes a bias in fuel element reactivity worth, due primarily to the neutronic importance of the fissile inventory in the mid-plane gap region.
Factors affecting GEBV accuracy with single-step Bayesian models.
Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng
2018-01-01
A single-step approach to obtaining genomic predictions was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more strongly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the scenarios with 5 and 50 QTL. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario, while the SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait was controlled by fewer QTL.
A coupled ductile fracture phase-field model for crystal plasticity
NASA Astrophysics Data System (ADS)
Hernandez Padilla, Carlos Alberto; Markert, Bernd
2017-07-01
Nowadays, crack initiation and evolution play a key role in the design of mechanical components, and in the past few decades several numerical approaches have been developed with the objective of predicting these phenomena. The objective of this work is to present a simplified, yet representative, phenomenological model to predict the crack evolution of ductile fracture in single crystals. The proposed numerical approach merges a conventional elasto-plastic crystal plasticity model with a phase-field model modified to predict ductile fracture. A two-dimensional initial boundary value problem of ductile fracture is introduced, considering a single-crystal setup and nickel-base superalloy material properties. The model is implemented in a finite element context and subjected to a quasi-static uniaxial tension test. The results are then qualitatively analyzed and briefly compared to current benchmark results in the literature.
Dias, Kaio Olímpio Das Graças; Gezan, Salvador Alejandro; Guimarães, Claudia Teixeira; Nazarian, Alireza; da Costa E Silva, Luciano; Parentoni, Sidney Netto; de Oliveira Guimarães, Paulo Evaristo; de Oliveira Anoni, Carina; Pádua, José Maria Villela; de Oliveira Pinto, Marcos; Noda, Roberto Willians; Ribeiro, Carlos Alexandre Gomes; de Magalhães, Jurandir Vieira; Garcia, Antonio Augusto Franco; de Souza, João Cândido; Guimarães, Lauro José Moreira; Pastina, Maria Marta
2018-07-01
Breeding for drought tolerance is a challenging task that requires costly, extensive, and precise phenotyping. Genomic selection (GS) can be used to maximize selection efficiency and the genetic gains in maize (Zea mays L.) breeding programs for drought tolerance. Here, we evaluated the accuracy of genomic selection (GS) using additive (A) and additive + dominance (AD) models to predict the performance of untested maize single-cross hybrids for drought tolerance in multi-environment trials. Phenotypic data of five drought tolerance traits were measured in 308 hybrids along eight trials under water-stressed (WS) and well-watered (WW) conditions over two years and two locations in Brazil. Hybrids' genotypes were inferred based on their parents' genotypes (inbred lines) using single-nucleotide polymorphism markers obtained via genotyping-by-sequencing. GS analyses were performed using genomic best linear unbiased prediction by fitting a factor analytic (FA) multiplicative mixed model. Two cross-validation (CV) schemes were tested: CV1 and CV2. The FA framework allowed for investigating the stability of additive and dominance effects across environments, as well as the additive-by-environment and the dominance-by-environment interactions, with interesting applications for parental and hybrid selection. Results showed differences in the predictive accuracy between A and AD models, using both CV1 and CV2, for the five traits in both water conditions. For grain yield (GY) under WS and using CV1, the AD model doubled the predictive accuracy in comparison to the A model. Through CV2, GS models benefit from borrowing information of correlated trials, resulting in an increase of 40% and 9% in the predictive accuracy of GY under WS for A and AD models, respectively. These results highlight the importance of multi-environment trial analyses using GS models that incorporate additive and dominance effects for genomic predictions of GY under drought in maize single-cross hybrids.
Seven lessons from manyfield inflation in random potentials
NASA Astrophysics Data System (ADS)
Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David
2018-01-01
We study inflation in models with many interacting fields subject to randomly generated scalar potentials. We use methods from non-equilibrium random matrix theory to construct the potentials and an adaptation of the `transport method' to evolve the two-point correlators during inflation. This construction allows, for the first time, for an explicit study of models with up to 100 interacting fields supporting a period of `approximately saddle-point' inflation. We determine the statistical predictions for observables by generating over 30,000 models with 2–100 fields supporting at least 60 efolds of inflation. These studies lead us to seven lessons: i) Manyfield inflation is not single-field inflation, ii) The larger the number of fields, the simpler and sharper the predictions, iii) Planck compatibility is not rare, but future experiments may rule out this class of models, iv) The smoother the potentials, the sharper the predictions, v) Hyperparameters can transition from stiff to sloppy, vi) Despite tachyons, isocurvature can decay, vii) Eigenvalue repulsion drives the predictions. We conclude that many of the `generic predictions' of single-field inflation can be emergent features of complex inflation models.
Evaluation of an ensemble of genetic models for prediction of a quantitative trait.
Milton, Jacqueline N; Steinberg, Martin H; Sebastiani, Paola
2014-01-01
Many genetic markers have been shown to be associated with common quantitative traits in genome-wide association studies. Typically these associated genetic markers have small to modest effect sizes and individually they explain only a small amount of the variability of the phenotype. In order to build a genetic prediction model without fitting a multiple linear regression model with possibly hundreds of genetic markers as predictors, researchers often summarize the joint effect of risk alleles into a genetic score that is used as a covariate in the genetic prediction model. However, the prediction accuracy can be highly variable and selecting the optimal number of markers to be included in the genetic score is challenging. In this manuscript we present a strategy to build an ensemble of genetic prediction models from data and we show that the ensemble-based method makes the challenge of choosing the number of genetic markers more amenable. Using simulated data with varying heritability and number of genetic markers, we compare the predictive accuracy and inclusion of true positive and false positive markers of a single genetic prediction model and our proposed ensemble method. The results show that the ensemble of genetic models tends to include a larger number of genetic variants than a single genetic model and it is more likely to include all of the true genetic markers. This increased sensitivity is obtained at the price of a lower specificity that appears to minimally affect the predictive accuracy of the ensemble.
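The genetic-score idea summarized above can be sketched as follows. The simulated genotypes, the single-marker effect-ranking rule, and the particular score sizes averaged by the ensemble are all illustrative assumptions, not the authors' procedure.

```python
import random

random.seed(1)

# Sketch of an ensemble of genetic-score prediction models: each member uses
# a genetic score built from a different number of top-ranked markers, and
# the ensemble averages the member predictions instead of committing to one k.
n, m = 300, 20
geno = [[random.randint(0, 2) for _ in range(m)] for _ in range(n)]  # allele counts
true_beta = [0.5, 0.4, 0.3] + [0.0] * (m - 3)          # 3 true markers, 17 null
pheno = [sum(b * g for b, g in zip(true_beta, row)) + random.gauss(0, 0.5)
         for row in geno]

def marker_effect(j):
    """Single-marker least-squares slope of phenotype on allele count."""
    xs = [row[j] for row in geno]
    mx, my = sum(xs) / n, sum(pheno) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, pheno))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

effects = [marker_effect(j) for j in range(m)]
ranked = sorted(range(m), key=lambda j: -abs(effects[j]))

def score_predict(row, k):
    """Genetic-score prediction using the k top-ranked markers."""
    return sum(effects[j] * row[j] for j in ranked[:k])

# Ensemble over several choices of k, sidestepping the choice of a single k.
ks = [3, 5, 10]
ensemble = [sum(score_predict(row, k) for k in ks) / len(ks) for row in geno]
print(f"first ensemble prediction: {ensemble[0]:.2f}")
```

Because the members with larger k admit more markers, the ensemble tends to include all true markers at the cost of some false positives, mirroring the sensitivity/specificity trade-off reported in the abstract.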
Sun, Jiangming; Carlsson, Lars; Ahlberg, Ernst; Norinder, Ulf; Engkvist, Ola; Chen, Hongming
2017-07-24
Conformal prediction has been proposed as a more rigorous way to define prediction confidence than other applicability domain concepts that have earlier been used for QSAR modeling. One main advantage of such a method is that it provides a prediction region potentially containing multiple predicted labels, in contrast to the single-valued (regression) or single-label (classification) outputs of standard QSAR modeling algorithms. Standard conformal prediction might not be suitable for imbalanced data sets. Therefore, Mondrian cross-conformal prediction (MCCP), which combines Mondrian inductive conformal prediction with cross-fold calibration sets, has been introduced. In this study, the MCCP method was applied to 18 publicly available data sets with imbalance levels varying from 1:10 to 1:1000 (ratio of active/inactive compounds). Our results show that MCCP in general performed well on bioactivity data sets with various imbalance levels. More importantly, the method not only provides confidence of prediction and prediction regions compared to standard machine learning methods but also produces valid predictions for the minority class. In addition, a compound-similarity-based nonconformity measure was investigated. Our results demonstrate that although it gives valid predictions, its efficiency is much worse than that of model-dependent metrics.
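A minimal sketch of the Mondrian (class-conditional) calibration idea follows: nonconformity scores are calibrated separately per class, which is what preserves validity for the minority class. The 1-D toy data and the distance-to-class-mean nonconformity measure are simplifying assumptions, far cruder than the QSAR models and cross-fold calibration used in the study.

```python
# Minimal Mondrian inductive conformal classifier sketch (toy data).
# "Mondrian" = calibration scores are kept per class, so the p-value for a
# hypothesized label is computed only against that label's calibration set.
def class_means(train):
    means = {}
    for x, y in train:
        means.setdefault(y, []).append(x)
    return {y: sum(v) / len(v) for y, v in means.items()}

def nonconformity(x, y, means):
    """Toy nonconformity: distance to the mean of the hypothesized class."""
    return abs(x - means[y])

def conformal_predict(x, train, cal, labels, eps=0.2):
    """Return the prediction region: every label with Mondrian p-value > eps."""
    means = class_means(train)
    region = []
    for y in labels:
        cal_scores = [nonconformity(xc, y, means) for xc, yc in cal if yc == y]
        a = nonconformity(x, y, means)
        p = (sum(s >= a for s in cal_scores) + 1) / (len(cal_scores) + 1)
        if p > eps:
            region.append(y)
    return region

# 1-D toy data: class 0 clustered near 0.0, class 1 clustered near 5.0.
train = [(0.1, 0), (-0.2, 0), (0.3, 0), (0.0, 0), (5.1, 1), (4.9, 1)]
cal = [(0.2, 0), (-0.1, 0), (0.1, 0), (0.05, 0), (5.0, 1), (4.8, 1), (5.2, 1), (5.1, 1)]
print(conformal_predict(0.05, train, cal, labels=[0, 1]))  # → [0]
print(conformal_predict(5.05, train, cal, labels=[0, 1]))  # → [1]
```

For an ambiguous test point the region can contain both labels (or be empty), which is exactly the multi-label prediction region behavior the abstract contrasts with standard single-label classifiers.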
NASA Astrophysics Data System (ADS)
Zhang, Yiqing; Wang, Lifeng; Jiang, Jingnong
2018-03-01
Vibrational behavior is very important for nanostructure-based resonators. In this work, an orthotropic plate model together with molecular dynamics (MD) simulation is used to investigate the thermal vibration of rectangular single-layered black phosphorus (SLBP). Two bending stiffnesses, two Poisson's ratios, and one shear modulus of SLBP are calculated using the MD simulation. The natural frequency of the SLBP predicted by the orthotropic plate model agrees very well with the one obtained from the MD simulation. The root-mean-square (RMS) amplitude of the SLBP is obtained by MD simulation and by the orthotropic plate model considering the law of energy equipartition. The RMS amplitude of the thermal vibration of the SLBP is predicted well by the orthotropic plate model compared to the MD results. Furthermore, the thermal vibration of the SLBP with an initial stress is also well described by the orthotropic plate model.
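The equipartition step mentioned above can be made concrete for a single vibration mode: setting the mean modal potential energy equal to kT/2 gives the RMS thermal amplitude. The modal mass and frequency below are illustrative placeholders, not values for the SLBP sheet in the study.

```python
import math

# Equipartition estimate of the RMS thermal amplitude of one resonator mode:
# (1/2) * m_eff * w^2 * <x^2> = (1/2) * kB * T  =>  x_rms = sqrt(kB*T / (m_eff*w^2))
KB = 1.380649e-23           # Boltzmann constant, J/K

def rms_amplitude(m_eff, freq_hz, temp_k):
    """RMS modal amplitude in metres from the equipartition theorem."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(KB * temp_k / (m_eff * omega ** 2))

# Illustrative nanoresonator mode: m_eff = 1e-18 kg, f = 50 GHz, T = 300 K.
amp = rms_amplitude(1e-18, 50e9, 300.0)
print(f"RMS amplitude ≈ {amp * 1e12:.2f} pm")
```

The sqrt(T) scaling of the amplitude is the signature of thermal vibration that both the MD simulation and the plate model in the abstract should reproduce.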
Managing distribution changes in time series prediction
NASA Astrophysics Data System (ADS)
Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.
2006-07-01
When a problem is modeled statistically, a single distribution model is usually postulated and assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the recently proposed hypernormal distribution. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood maximization process.
NASA Astrophysics Data System (ADS)
Lunt, A. J. G.; Xie, M. Y.; Baimpas, N.; Zhang, S. Y.; Kabra, S.; Kelleher, J.; Neo, T. K.; Korsunsky, A. M.
2014-08-01
Yttria Stabilised Zirconia (YSZ) is a tough, phase-transforming ceramic that finds use in a wide range of commercial applications from dental prostheses to thermal barrier coatings. Micromechanical modelling of phase transformation can deliver reliable predictions in terms of the influence of temperature and stress. However, models must rely on accurate knowledge of the single crystal elastic stiffness constants. Some techniques for elastic stiffness determination are well-established; the most popular of these exploit frequency shifts and phase velocities of acoustic waves. However, the application of these techniques to YSZ can be problematic due to the micro-twinning observed in larger crystals. Here, we propose an alternative approach based on selective elastic strain sampling (e.g., by diffraction) of grain ensembles sharing a certain orientation, and the prediction of the same quantities by polycrystalline modelling, for example, the Reuss or Voigt average. An inverse problem then arises, consisting of adjusting the single crystal stiffness matrix so that the polycrystal predictions match the observations. In the present model-matching study, we sought to determine the single crystal stiffness matrix of tetragonal YSZ using the results of time-of-flight neutron diffraction obtained from an in situ compression experiment and Finite Element modelling of the deformation of polycrystalline tetragonal YSZ. The best match between the model predictions and observations was obtained for the optimized stiffness values of C11 = 451, C33 = 302, C44 = 39, C66 = 82, C12 = 240, and C13 = 50 (units: GPa). Considering the significant amount of scatter in the published literature data, our result appears reasonably consistent.
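For reference, the Voigt polycrystal average mentioned above can be computed directly from the reported stiffness constants. The formulas are the standard Voigt orientation averages, with tetragonal symmetry supplying the dependent constants (C22 = C11, C23 = C13, C55 = C44); this reproduces only one of the averaging schemes, whereas the study's matching used Finite Element modelling.

```python
# Voigt-average bulk and shear moduli of a polycrystal from single-crystal
# stiffness constants (Voigt notation, GPa).  Values are the optimized
# tetragonal YSZ constants reported in the abstract.
C11, C33, C44, C66, C12, C13 = 451.0, 302.0, 39.0, 82.0, 240.0, 50.0

# Tetragonal symmetry fixes the remaining constants.
C22, C23, C55 = C11, C13, C44

# Standard Voigt (uniform-strain) averages over all crystal orientations.
K_voigt = (C11 + C22 + C33 + 2.0 * (C12 + C13 + C23)) / 9.0
G_voigt = (C11 + C22 + C33 - (C12 + C13 + C23) + 3.0 * (C44 + C55 + C66)) / 15.0

print(f"K_Voigt = {K_voigt:.1f} GPa")   # ~209.3 GPa
print(f"G_Voigt = {G_voigt:.1f} GPa")   # ~89.6 GPa
```

The Reuss (uniform-stress) average is computed analogously from the compliance matrix, and the two bracket the isotropic polycrystal response.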
Short Term Single Station GNSS TEC Prediction Using Radial Basis Function Neural Network
NASA Astrophysics Data System (ADS)
Muslim, Buldan; Husin, Asnawi; Efendy, Joni
2018-04-01
TEC prediction models for 24 hours ahead have been developed from JOG2 GPS TEC data recorded during 2016. Eleven months of TEC data were used to train the radial basis function neural network (RBFNN), and the final month (December 2016) was used for model testing. The RBFNN inputs are the previous 24 hours of TEC data and the minimum Dst index over the previous 24 hours; the outputs are TEC predictions for the 24 hours ahead. Comparison shows that the RBFNN model predicts TEC for the next 24 hours more accurately than the GIM TEC model.
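The RBFNN mechanics can be sketched in miniature: Gaussian hidden units with fixed centres feed a linear output layer trained by gradient descent. This 1-D toy (fitting a sine curve) only illustrates the network structure; the TEC model in the abstract uses 24-hour input windows plus the Dst minimum, and its centres and training scheme are not specified here.

```python
import math
import random

random.seed(7)

# Toy radial basis function network: Gaussian hidden layer with fixed
# centres, linear output layer fitted by least-mean-squares updates.
def rbf_features(x, centers, width=0.5):
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def train_rbf(xs, ys, centers, lr=0.1, epochs=500):
    w = [0.0] * len(centers)
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = rbf_features(x, centers)
            err = y - (b + sum(wi * p for wi, p in zip(w, phi)))
            b += lr * err
            w = [wi + lr * err * p for wi, p in zip(w, phi)]
    return w, b

def rbf_predict(x, w, b, centers):
    phi = rbf_features(x, centers)
    return b + sum(wi * p for wi, p in zip(w, phi))

# Learn y = sin(x) on [0, 3] from noisy samples.
xs = [3.0 * random.random() for _ in range(80)]
ys = [math.sin(x) + random.gauss(0, 0.05) for x in xs]
centers = [0.0, 0.75, 1.5, 2.25, 3.0]
w, b = train_rbf(xs, ys, centers)
print(f"sin(1.0) ≈ {rbf_predict(1.0, w, b, centers):.2f}")
```

A real TEC predictor would use a vector input (the 24 hourly TEC values plus Dst) and one output per forecast hour, but the centre/width/linear-readout structure is the same.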
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. The proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The hybrid method is tested against an auto-regressive moving average (ARMA) model in terms of predicted values and errors. The validity of the proposed hybrid method is confirmed by error analysis using the probability density function (PDF), mean absolute percent error (MAPE), and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate, compared to the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
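The distinction between single-step and multi-step prediction can be sketched with a simple autoregressive model: SSP forecasts one step from the last observation, while recursive MSP feeds each forecast back in as input. The AR(1) model and toy series below only illustrate the two modes; the paper's MLR&LS hybrid and power-ratio transform are more elaborate.

```python
# Single-step (SSP) vs. recursive multi-step (MSP) prediction with a
# least-squares AR(1) fit: x[t] ≈ a * x[t-1] + c.  Toy series, not wind data.
series = [0.5, 0.8, 0.9, 1.1, 1.0, 1.2, 1.3, 1.2, 1.4, 1.5]

xs, ys = series[:-1], series[1:]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c = my - a * mx

def ssp(last):
    """Single-step prediction from the last observed value."""
    return a * last + c

def msp(last, steps):
    """Recursive multi-step prediction: feed each forecast back in."""
    out = []
    x = last
    for _ in range(steps):
        x = a * x + c
        out.append(x)
    return out

print(f"SSP one step ahead: {ssp(series[-1]):.3f}")
print(f"MSP three steps ahead: {[round(v, 3) for v in msp(series[-1], 3)]}")
```

Recursive MSP compounds its own one-step errors, which is why comparing the two modes over different prediction times and windows, as the abstract does, is informative.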
ProTSAV: A protein tertiary structure analysis and validation server.
Singh, Ankita; Kaushik, Rahul; Mishra, Avinash; Shanker, Asheesh; Jayaram, B
2016-01-01
Quality assessment of predicted model structures of proteins is as important as protein tertiary structure prediction itself. A highly efficient quality assessment of predicted model structures directs further research on function. Here we present a new server, ProTSAV, capable of evaluating predicted model structures based on some popular online servers and standalone tools. ProTSAV furnishes the user with a single quality score for an individual protein structure, along with a graphical representation and ranking in the case of multiple protein structure assessment. The server is validated on ~64,446 protein structures, including experimental structures from RCSB and predicted model structures for CASP targets and from public decoy sets. ProTSAV succeeds in predicting the quality of protein structures with a specificity of 100% and a sensitivity of 98% on experimentally solved structures, and achieves a specificity of 88% and a sensitivity of 91% on predicted protein structures of CASP11 targets under 2 Å. The server overcomes the limitations of any single server/method and is seen to be robust in helping in quality assessment. ProTSAV is freely available at http://www.scfbio-iitd.res.in/software/proteomics/protsav.jsp. Copyright © 2015 Elsevier B.V. All rights reserved.
Comparing Binaural Pre-processing Strategies III
Warzybok, Anna; Ernst, Stephan M. A.
2015-01-01
A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and a single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit caused by the algorithms. The individual SRTs measured without pre-processing and the individual benefits were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio to obtain 50% intelligibility than listeners with NH, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in the single competing talker condition). Predictions of the binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level. PMID:26721922
Life prediction and constitutive models for engine hot section anisotropic materials
NASA Technical Reports Server (NTRS)
Swanson, G. A.; Linask, I.; Nissley, D. M.; Norris, P. P.; Meyer, T. G.; Walker, K. P.
1987-01-01
The results are presented of a program designed to develop life prediction and constitutive models for two coated single crystal alloys used in gas turbine airfoils. The two alloys are PWA 1480 and Alloy 185. The two oxidation resistant coatings are PWA 273, an aluminide coating, and PWA 286, an overlay NiCoCrAlY coating. To obtain constitutive and fatigue data, tests were conducted on uncoated and coated specimens loaded in the <100>, <110>, <111>, and <123> crystallographic directions. Two constitutive models are being developed and evaluated for the single crystal materials: a micromechanic model based on crystallographic slip systems, and a macroscopic model which employs anisotropic tensors to model inelastic deformation anisotropy. Based on tests conducted on the overlay coating material, constitutive models for coatings also appear feasible and two initial models were selected. A life prediction approach was proposed for coated single crystal materials, including crack initiation either in the coating or in the substrate. The coating initiated failures dominated in the tests at load levels typical of gas turbine operation. Coating life was related to coating stress/strain history which was determined from specimen data using the constitutive models.
USDA-ARS?s Scientific Manuscript database
Validation of model predictions for independent variables not included in model development can save time and money by identifying conditions for which new models are not needed. A single strain of Salmonella Typhimurium DT104 was used to develop a general regression neural network model for growth...
Calculation of single chain cellulose elasticity using fully atomistic modeling
Xiawa Wu; Robert J. Moon; Ashlie Martini
2011-01-01
Cellulose nanocrystals, a potential base material for green nanocomposites, are ordered bundles of cellulose chains. The properties of these chains have been studied for many years using atomic-scale modeling. However, model predictions are difficult to interpret because of the significant dependence of predicted properties on model details. The goal of this study is...
Life prediction and constitutive models for engine hot section anisotropic materials program
NASA Technical Reports Server (NTRS)
Swanson, G. A.; Linask, I.; Nissley, D. M.; Norris, P. P.; Meyer, T. G.; Walker, K. P.
1986-01-01
This report presents the results of the first year of a program designed to develop life prediction and constitutive models for two coated single crystal alloys used in gas turbine airfoils. The two alloys are PWA 1480 and Alloy 185. The two oxidation resistant coatings are PWA 273, an aluminide coating, and PWA 286, an overlay NiCoCrAlY coating. To obtain constitutive and/or fatigue data, tests were conducted on coated and uncoated PWA 1480 specimens tensilely loaded in the <100>, <110>, <111>, and <123> directions. A literature survey of constitutive models was completed for both single crystal alloys and metallic coating materials; candidate models were selected. One constitutive model under consideration for single crystal alloys applies Walker's micromechanical viscoplastic formulation to all slip systems participating in the single crystal deformation. The constitutive models for the overlay coating correlate the viscoplastic data well. For the aluminide coating, a unique test method is under development. LCF and TMF tests are underway. The two coatings caused a significant drop in fatigue life, and each produced a much different failure mechanism.
NASA Technical Reports Server (NTRS)
Weil, Joseph; Sleeman, William C., Jr.
1949-01-01
The effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes are analyzed, and a simple method is presented for computing power-on pitching-moment curves for flap-retracted flight conditions. The methods evolved are based on the results of powered-model wind-tunnel investigations of 28 model configurations. Correlation curves are presented from which the effects of power on the downwash over the tail and the stabilizer effectiveness can be rapidly predicted. The procedures developed enable prediction of power-on longitudinal stability characteristics that are generally in very good agreement with experiment.
Mysid Population Responses to Resource Limitation Differ from those Predicted by Cohort Studies
Effects of anthropogenic stressors on animal populations are often evaluated by assembling vital rate responses from isolated cohort studies into a single demographic model. However, models constructed from cohort studies are difficult to translate into ecological predictions be...
Single-leg squats can predict leg alignment in dancers performing ballet movements in "turnout".
Hopper, Luke S; Sato, Nahoko; Weidemann, Andries L
2016-01-01
The physical assessments used in dance injury surveillance programs are often adapted from the sports and exercise domain. Bespoke physical assessments may be required for dance, particularly when ballet movements involve "turning out" or external rotation of the legs beyond that typically used in sports. This study evaluated the ability of the traditional single-leg squat to predict the leg alignment of dancers performing ballet movements with turnout. Three-dimensional kinematic data of dancers performing the single-leg squat and five ballet movements were recorded and analyzed. Reduction of the three-dimensional data into a one-dimensional variable incorporating the ankle, knee, and hip joint center positions provided the strongest predictive model between the single-leg squat and the ballet movements. The single-leg squat can predict leg alignment in dancers performing ballet movements, even in "turned out" postures. Clinicians should pay careful attention to observational positioning and rating criteria when assessing dancers performing the single-leg squat. PMID:27895518
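The reduction of three joint-centre positions to a one-dimensional alignment variable can be sketched geometrically. The specific reduction below (perpendicular distance of the knee centre from the hip-ankle line) is a plausible illustration only; the abstract does not specify the exact variable used in the study.

```python
import math

# One-dimensional leg-alignment variable from 3-D joint centres: the
# perpendicular distance of the knee centre from the hip-ankle axis
# (a hypothetical reduction, for illustration).
def leg_alignment(hip, knee, ankle):
    """Distance (same units as input) of the knee from the hip-ankle line."""
    ax = [a - h for a, h in zip(ankle, hip)]       # hip -> ankle axis
    kv = [k - h for k, h in zip(knee, hip)]        # hip -> knee vector
    # Component of kv perpendicular to ax: kv minus its projection onto ax.
    dot = sum(a * b for a, b in zip(ax, kv))
    ax2 = sum(a * a for a in ax)
    perp = [b - (dot / ax2) * a for a, b in zip(ax, kv)]
    return math.sqrt(sum(p * p for p in perp))

# Perfectly aligned leg: knee sits on the hip-ankle line (coordinates in m).
print(leg_alignment((0, 0, 1.0), (0, 0, 0.5), (0, 0, 0.0)))   # → 0.0
# Knee displaced 3 cm off the axis.
print(round(leg_alignment((0, 0, 1.0), (0.03, 0, 0.5), (0, 0, 0.0)), 3))  # → 0.03
```

A scalar of this kind can then be compared between the single-leg squat and each ballet movement, which is the correlation the study evaluates.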
Improvement of Progressive Damage Model to Predicting Crashworthy Composite Corrugated Plate
NASA Astrophysics Data System (ADS)
Ren, Yiru; Jiang, Hongyong; Ji, Wenyuan; Zhang, Hanyu; Xiang, Jinwu; Yuan, Fuh-Gwo
2018-02-01
To predict the crashworthiness of a composite corrugated plate, different single and stacked shell models are evaluated and compared, and a stacked shell progressive damage model combined with continuum damage mechanics is proposed and investigated. To simulate and predict the failure behavior, both intra- and inter-laminar failure behaviors are considered. The tiebreak contact method, 1D spot weld elements and cohesive elements are adopted in the stacked shell models, and a surface-based cohesive behavior is used to capture delamination in the proposed model. The impact loads and failure behaviors of the proposed and conventional progressive damage models are demonstrated. Results show that the single shell model could simulate the impact load curve but lacks the ability to simulate delamination. The general stacked shell model could simulate the interlaminar failure behavior. The improved stacked shell model with continuum damage mechanics and cohesive elements not only agrees well with the impact load but also captures the fiber failure, matrix debonding, and interlaminar failure of the composite structure.
Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih
2015-11-01
This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the joint maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and the percentage of other contents are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are found for each model. It is expected that the application of the integrated prediction and optimization models will increase biogas production and biogas quality, and contribute to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
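The optimization stage described above can be sketched with a minimal particle swarm optimizer. The objective `f` below stands in for the trained multi-layer perceptron surrogate; the swarm parameters (`w`, `c1`, `c2`) and the toy quadratic used in the usage note are illustrative assumptions, not values from the study.

```python
import random

def pso_maximize(f, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimizer (maximization).
    f: objective (stand-in for the trained MLP); bounds: per-dimension (lo, hi)."""
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp position to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On a smooth test objective such as f(x) = -(x0-2)^2 - (x1+1)^2, the swarm converges to the maximizer (2, -1).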
Wu, Hua'an; Zeng, Bo; Zhou, Meng
2017-11-15
High accuracy in water demand predictions is an important basis for the rational allocation of city water resources and forms the basis for sustainable urban development. The shortage of water resources in Chongqing, the youngest central municipality in Southwest China, has significantly increased with population growth and rapid economic development. In this paper, a new grey water-forecasting model (GWFM) was built based on the data characteristics of water consumption. The parameter estimation and error checking methods of the GWFM model were investigated. The GWFM model was then employed to simulate the water demands of Chongqing from 2009 to 2015 and to forecast demand in 2016. The simulation and prediction errors of the GWFM model were checked, and the results show that the GWFM model exhibits better simulation and prediction precision than both the classical Grey Model with one variable and a single-order equation (GM(1,1) for short) and the frequently used Discrete Grey Model with one variable and a single-order equation (DGM(1,1) for short). Finally, the water demand in Chongqing from 2017 to 2022 was forecasted, and some corresponding control measures and recommendations were provided based on the prediction results to ensure a viable water supply and promote the sustainable development of the Chongqing economy.
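For reference, the classical GM(1,1) baseline the GWFM is compared against can be written in a few lines. This is a generic sketch of the standard grey model, not the authors' GWFM.

```python
import math

def gm11_forecast(x0, steps):
    """Classical GM(1,1): fit a first-order grey model to series x0
    and forecast `steps` values ahead (minimal sketch)."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # 1-AGO accumulation
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = len(z)
    # least squares for a, b in  x0(k) + a*z(k) = b  (2x2 normal equations)
    szz, sz = sum(v * v for v in z), sum(z)
    szy, sy = sum(v * w for v, w in zip(z, y)), sum(y)
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # time-response function of the accumulated series
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
    # de-accumulate to recover forecasts of the original series
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n + 1, n + steps + 1)]
```

On a near-exponential series (the case grey models are designed for), the one-step-ahead forecast is accurate to well under one percent.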
Liu, Dong-jun; Li, Li
2015-01-01
For the issue of haze-fog, PM2.5 is the main influencing factor of haze-fog pollution in China. The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average model (ARIMA), Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of each single prediction method and had better applicability, providing a new prediction method for the air quality forecasting field. PMID:26110332
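The combination step can be sketched as follows: entropy weights are computed from each method's historical accuracy series (rows = time points, columns = methods), and the individual forecasts are blended with those weights. The accuracy-matrix framing is an illustrative assumption; the paper's exact indicator construction may differ.

```python
import math

def entropy_weights(accuracy_matrix):
    """Entropy Weighting Method. A column whose accuracy series carries
    more information (lower entropy) receives a larger weight."""
    n = len(accuracy_matrix)          # time points
    m = len(accuracy_matrix[0])       # forecasting methods
    d = []
    for j in range(m):
        col = [row[j] for row in accuracy_matrix]
        total = sum(col)
        p = [v / total for v in col]  # normalize the column
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        d.append(1.0 - e)             # degree of divergence
    s = sum(d)
    return [di / s for di in d]

def combine(forecasts, weights):
    """Weighted combination of the single-model forecasts."""
    return sum(w * f for w, f in zip(forecasts, weights))
```

A constant (uninformative) accuracy column has maximum entropy and therefore receives (near-)zero weight.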
Single Nucleotide Polymorphisms Predict Symptom Severity of Autism Spectrum Disorder
ERIC Educational Resources Information Center
Jiao, Yun; Chen, Rong; Ke, Xiaoyan; Cheng, Lu; Chu, Kangkang; Lu, Zuhong; Herskovits, Edward H.
2012-01-01
Autism is widely believed to be a heterogeneous disorder; diagnosis is currently based solely on clinical criteria, although genetic, as well as environmental, influences are thought to be prominent factors in the etiology of most forms of autism. Our goal is to determine whether a predictive model based on single-nucleotide polymorphisms (SNPs)…
Life prediction and constitutive models for engine hot section anisotropic materials program
NASA Technical Reports Server (NTRS)
Nissley, D. M.; Meyer, T. G.
1992-01-01
This report presents the results from a 35 month period of a program designed to develop generic constitutive and life prediction approaches and models for nickel-based single crystal gas turbine airfoils. The program is composed of a base program and an optional program. The base program addresses the high temperature coated single crystal regime above the airfoil root platform. The optional program investigates the low temperature uncoated single crystal regime below the airfoil root platform, including the notched conditions of the airfoil attachment. Both the base and optional programs involve experimental and analytical efforts. Results from uniaxial constitutive and fatigue life experiments of coated and uncoated PWA 1480 single crystal material form the basis for the analytical modeling effort. Four single crystal primary orientations were used in the experiments: (001), (011), (111), and (213). Specific secondary orientations were also selected for the notched experiments in the optional program. Constitutive models for an overlay coating and PWA 1480 single crystal material were developed based on isothermal hysteresis loop data and verified using thermomechanical (TMF) hysteresis loop data. A fatigue life approach and life models were selected for TMF crack initiation of coated PWA 1480. An initial life model used to correlate smooth and notched fatigue data obtained in the optional program shows promise. Computer software incorporating the overlay coating and PWA 1480 constitutive models was developed.
On the predictiveness of single-field inflationary models
NASA Astrophysics Data System (ADS)
Burgess, C. P.; Patil, Subodh P.; Trott, Michael
2014-06-01
We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflation scenario, which is arguably the most predictive single-field model on the market, because its predictions for A_S, r and n_s are made using only one new free parameter beyond those measured in particle physics experiments and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in principle) for a slightly larger range of Higgs masses. We comment on the origin of the various UV scales that arise at large field values for the SM Higgs, clarifying cutoff-scale arguments by further developing the formalism of a non-linear realization of SU_L(2) × U(1) in curved space. We discuss the interesting fact that, outside of Higgs Inflation, the effect of a non-minimal coupling to gravity, even in the SM, results in a non-linear EFT for the Higgs sector. Finally, we briefly comment on post-BICEP2 attempts to modify the Higgs Inflation scenario.
A computational substrate for incentive salience.
McClure, Samuel M; Daw, Nathaniel D; Montague, P Read
2003-08-01
Theories of dopamine function are at a crossroads. Computational models derived from single-unit recordings capture changes in dopaminergic neuron firing rate as a prediction error signal. These models employ the prediction error signal in two roles: learning to predict future rewarding events and biasing action choice. Conversely, pharmacological inhibition or lesion of dopaminergic neuron function diminishes the ability of an animal to motivate behaviors directed at acquiring rewards. These lesion experiments have raised the possibility that dopamine release encodes a measure of the incentive value of a contemplated behavioral act. The most complete psychological idea that captures this notion frames the dopamine signal as carrying 'incentive salience'. On the surface, these two competing accounts of dopamine function seem incommensurate. To the contrary, we demonstrate that both of these functions can be captured in a single computational model of the involvement of dopamine in reward prediction for the purpose of reward seeking.
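The "single computational model" referred to is in the temporal-difference family. A minimal TD(0) sketch of the prediction-error signal δ = r + γV(s′) − V(s) on a two-state chain (an illustrative toy task, not the authors' actual simulations):

```python
def td_chain(episodes, alpha=0.1, gamma=0.9):
    """TD(0) on a two-state chain: s0 -> s1 -> terminal (reward 1.0).
    delta is the dopamine-like reward prediction error; as learning
    proceeds, delta at the rewarded step shrinks toward zero because
    the reward becomes fully predicted."""
    V = [0.0, 0.0]
    last_delta = None
    for _ in range(episodes):
        delta0 = 0.0 + gamma * V[1] - V[0]   # s0 -> s1, no reward
        V[0] += alpha * delta0
        delta1 = 1.0 - V[1]                  # s1 -> terminal, reward 1.0
        V[1] += alpha * delta1
        last_delta = delta1
    return V, last_delta
```

The learned values can then also bias action choice (the incentive role): an action leading to a higher-valued state is preferred.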
Ferguson, Jake M; Ponciano, José M
2014-01-01
Predicting population extinction risk is a fundamental application of ecological theory to the practice of conservation biology. Here, we compared the prediction performance of a wide array of stochastic, population dynamics models against direct observations of the extinction process from an extensive experimental data set. By varying a series of biological and statistical assumptions in the proposed models, we were able to identify the assumptions that affected predictions about population extinction. We also show how certain autocorrelation structures can emerge due to interspecific interactions, and that accounting for the stochastic effect of these interactions can improve predictions of the extinction process. We conclude that it is possible to account for the stochastic effects of community interactions on extinction when using single-species time series. PMID:24304946
Predicted phototoxicities of carbon nano-material by quantum mechanical calculations
The purpose of this research is to develop a predictive model for the phototoxicity potential of carbon nanomaterials (fullerenols and single-walled carbon nanotubes). This model is based on the quantum mechanical (ab initio) calculations on these carbon-based materials and compa...
A control-theory model for human decision-making
NASA Technical Reports Server (NTRS)
Levison, W. H.; Tanner, R. B.
1971-01-01
A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for the model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is poorer for the simultaneous decision and control situation.
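The "optimal estimation" ingredient can be illustrated with a scalar Kalman filter driving a threshold decision rule. The plant and noise parameters below (A, Q, R) and the decision threshold are illustrative assumptions, not values from the report.

```python
def kalman_step(x_hat, P, z, A=1.0, Q=0.01, R=0.1):
    """One predict/update cycle of a scalar Kalman filter: the optimal
    state estimate on which the modeled decision is based."""
    x_pred = A * x_hat                    # predict state
    P_pred = A * P * A + Q                # predict error variance
    K = P_pred / (P_pred + R)             # Kalman gain
    x_new = x_pred + K * (z - x_pred)     # update with noisy observation z
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

def decide(x_hat, threshold=0.5):
    """Toy decision rule on the estimated state (illustrative)."""
    return x_hat > threshold
```

Fed a constant noisy observation stream near 1.0, the estimate converges toward 1.0 and the decision flips once the threshold is crossed.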
NASA Astrophysics Data System (ADS)
Glass, Alexis; Fukudome, Kimitoshi
2004-12-01
A sound recording of a plucked string instrument is encoded and resynthesized using two stages of prediction. In the first stage, a simple physical model of a plucked string is estimated and the instrument excitation is obtained. The second stage compensates for the simplicity of the model in the first stage by encoding either the instrument excitation or the model error using warped linear prediction. These two methods of compensation are compared with each other and with the case of single-stage warped linear prediction; adjustments are introduced, and their applications to instrument synthesis and audio compression within MPEG-4's structured audio format are discussed.
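For context, ordinary (unwarped) linear prediction solves the Toeplitz normal equations, e.g. via the Levinson-Durbin recursion; warped linear prediction additionally replaces the unit delays with first-order all-pass sections. A sketch of the unwarped core only:

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: given autocorrelation values r[0..order],
    return the prediction-error filter coefficients a (a[0] == 1) and the
    final prediction error variance e."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, order + 1):
        # reflection coefficient from the current residual correlation
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        e *= (1.0 - k * k)
    return a, e
```

For an AR(1) process with autocorrelation r[k] = 0.5^k, the order-1 solution is a = [1, -0.5] with error variance 0.75, and the order-2 coefficient is zero, as expected.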
Can spatial statistical river temperature models be transferred between catchments?
NASA Astrophysics Data System (ADS)
Jackson, Faye L.; Fryer, Robert J.; Hannah, David M.; Malcolm, Iain A.
2017-09-01
There has been increasing use of spatial statistical models to understand and predict river temperature (Tw) from landscape covariates. However, it is not financially or logistically feasible to monitor all rivers and the transferability of such models has not been explored. This paper uses Tw data from four river catchments collected in August 2015 to assess how well spatial regression models predict the maximum 7-day rolling mean of daily maximum Tw (Twmax) within and between catchments. Models were fitted for each catchment separately using (1) landscape covariates only (LS models) and (2) landscape covariates and an air temperature (Ta) metric (LS_Ta models). All the LS models included upstream catchment area and three included a river network smoother (RNS) that accounted for unexplained spatial structure. The LS models transferred reasonably to other catchments, at least when predicting relative levels of Twmax. However, the predictions were biased when mean Twmax differed between catchments. The RNS was needed to characterise and predict finer-scale spatially correlated variation. Because the RNS was unique to each catchment and thus non-transferable, predictions were better within catchments than between catchments. A single model fitted to all catchments found no interactions between the landscape covariates and catchment, suggesting that the landscape relationships were transferable. The LS_Ta models transferred less well, with particularly poor performance when the relationship with the Ta metric was physically implausible or required extrapolation outside the range of the data. A single model fitted to all catchments found catchment-specific relationships between Twmax and the Ta metric, indicating that the Ta metric was not transferable. 
These findings improve our understanding of the transferability of spatial statistical river temperature models and provide a foundation for developing new approaches for predicting Tw at unmonitored locations across multiple catchments and larger spatial scales.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-01-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and, consequently, tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671
Bonet, Isis; Franco-Montero, Pedro; Rivero, Virginia; Teijeira, Marta; Borges, Fernanda; Uriarte, Eugenio; Morales Helguera, Aliuska
2013-12-23
A(2B) adenosine receptor antagonists may be beneficial in treating diseases like asthma, diabetes, diabetic retinopathy, and certain cancers. This has stimulated research for the development of potent ligands for this subtype, based on quantitative structure-affinity relationships. In this work, a new ensemble machine learning algorithm is proposed for classification and prediction of the ligand-binding affinity of A(2B) adenosine receptor antagonists. This algorithm is based on the training of different classifier models with multiple training sets (composed of the same compounds but represented by diverse features). The k-nearest neighbor, decision trees, neural networks, and support vector machines were used as single classifiers. To select the base classifiers for combining into the ensemble, several diversity measures were employed. The final multiclassifier prediction results were computed from the output obtained by using a combination of selected base classifiers output, by utilizing different mathematical functions including the following: majority vote, maximum and average probability. In this work, 10-fold cross- and external validation were used. The strategy led to the following results: i) the single classifiers, together with previous features selections, resulted in good overall accuracy, ii) a comparison between single classifiers, and their combinations in the multiclassifier model, showed that using our ensemble gave a better performance than the single classifier model, and iii) our multiclassifier model performed better than the most widely used multiclassifier models in the literature. The results and statistical analysis demonstrated the supremacy of our multiclassifier approach for predicting the affinity of A(2B) adenosine receptor antagonists, and it can be used to develop other QSAR models.
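The combination functions named above (majority vote, maximum and average probability) are simple to state. A generic sketch, independent of the base classifiers used in the paper:

```python
from collections import Counter

def majority_vote(labels):
    """Label predicted by the most base classifiers for one compound."""
    return Counter(labels).most_common(1)[0][0]

def average_probability(prob_vectors):
    """Class with the highest mean probability across base classifiers."""
    n = len(prob_vectors)
    avg = [sum(p[c] for p in prob_vectors) / n
           for c in range(len(prob_vectors[0]))]
    return max(range(len(avg)), key=avg.__getitem__)

def maximum_probability(prob_vectors):
    """Class receiving the single highest probability from any classifier."""
    best = max((p[c], c) for p in prob_vectors for c in range(len(p)))
    return best[1]
```

Each function maps the per-classifier outputs for one compound to a single ensemble prediction; soft rules (average/maximum probability) can break ties that a hard vote cannot.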
NASA Technical Reports Server (NTRS)
Polanco, Michael A.; Kellas, Sotiris; Jackson, Karen
2009-01-01
The performance of material models to simulate a novel composite honeycomb Deployable Energy Absorber (DEA) was evaluated using the nonlinear explicit dynamic finite element code LS-DYNA®. Prototypes of the DEA concept were manufactured using a Kevlar/Epoxy composite material in which the fibers are oriented at +/-45 degrees with respect to the loading axis. The development of the DEA has included laboratory tests at subcomponent and component levels such as three-point bend testing of single hexagonal cells, dynamic crush testing of single multi-cell components, and impact testing of a full-scale fuselage section fitted with a system of DEA components onto multi-terrain environments. Due to the thin nature of the cell walls, the DEA was modeled using shell elements. In an attempt to simulate the dynamic response of the DEA, it was first represented using *MAT_LAMINATED_COMPOSITE_FABRIC, or *MAT_58, in LS-DYNA. Values for each parameter within the material model were generated such that an in-plane isotropic configuration for the DEA material was assumed. Analytical predictions showed that the load-deflection behavior of a single-cell during three-point bending was within the range of test data, but predicted the DEA crush response to be very stiff. In addition, a *MAT_PIECEWISE_LINEAR_PLASTICITY, or *MAT_24, material model in LS-DYNA was developed, which represented the Kevlar/Epoxy composite as an isotropic elastic-plastic material with input from +/-45 degrees tensile coupon data. The predicted crush response matched that of the test and localized folding patterns of the DEA were captured under compression, but the model failed to predict the single-cell three-point bending response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lunt, A. J. G., E-mail: alexander.lunt@eng.ox.ac.uk; Xie, M. Y.; Baimpas, N.
2014-08-07
Yttria Stabilised Zirconia (YSZ) is a tough, phase-transforming ceramic that finds use in a wide range of commercial applications from dental prostheses to thermal barrier coatings. Micromechanical modelling of phase transformation can deliver reliable predictions in terms of the influence of temperature and stress. However, models must rely on accurate knowledge of the single crystal elastic stiffness constants. Some techniques for elastic stiffness determination are well-established; the most popular of these exploit frequency shifts and phase velocities of acoustic waves. However, the application of these techniques to YSZ can be problematic due to the micro-twinning observed in larger crystals. Here, we propose an alternative approach based on selective elastic strain sampling (e.g., by diffraction) of grain ensembles sharing a certain orientation, and the prediction of the same quantities by polycrystalline modelling, for example, the Reuss or Voigt average. The inverse problem arises of adjusting the single crystal stiffness matrix to match the polycrystal predictions to observations. In the present model-matching study, we sought to determine the single crystal stiffness matrix of tetragonal YSZ using the results of time-of-flight neutron diffraction obtained from an in situ compression experiment and Finite Element modelling of the deformation of polycrystalline tetragonal YSZ. The best match between the model predictions and observations was obtained for the optimized stiffness values of C11 = 451, C33 = 302, C44 = 39, C66 = 82, C12 = 240, and C13 = 50 (units: GPa). Considering the significant amount of scatter in the published literature data, our result appears reasonably consistent.
High speed turboprop aeroacoustic study (counterrotation). Volume 1: Model development
NASA Technical Reports Server (NTRS)
Whitfield, C. E.; Mani, R.; Gliebe, P. R.
1990-01-01
The isolated counterrotating high speed turboprop noise prediction program was compared with model data taken in the GE Aircraft Engines Cell 41 anechoic facility, the Boeing Transonic Wind Tunnel, and in NASA-Lewis' 8x6 and 9x15 wind tunnels. The predictions show good agreement with measured data under both low and high speed simulated flight conditions. The installation effect model developed for single rotation, high speed turboprops was extended to include counterrotation. The additional effect of mounting a pylon upstream of the forward rotor was included in the flow field modeling. A nontraditional mechanism concerning the acoustic radiation from a propeller at angle of attack was investigated. Predictions made using this approach show results that are in much closer agreement with measurement over a range of operating conditions than those obtained via traditional fluctuating force methods. The isolated rotor and installation effects models were combined into a single prediction program, results of which were compared with data taken during the flight test of the B727/UDF engine demonstrator aircraft. Satisfactory comparisons between prediction and measured data for the demonstrator airplane, together with the identification of a nontraditional radiation mechanism for propellers at angle of attack, were achieved.
COMPARING MID-INFRARED GLOBULAR CLUSTER COLORS WITH POPULATION SYNTHESIS MODELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barmby, P.; Jalilian, F. F.
2012-04-15
Several population synthesis models now predict integrated colors of simple stellar populations in the mid-infrared bands. To date, the models have not been extensively tested in this wavelength range. In a comparison of the predictions of several recent population synthesis models, the integrated colors are found to cover approximately the same range but to disagree in detail, for example, on the effects of metallicity. To test against observational data, globular clusters (GCs) are used as the closest objects to idealized groups of stars with a single age and single metallicity. Using recent mass estimates, we have compiled a sample of massive, old GCs in M31 which contain enough stars to guard against the stochastic effects of small-number statistics, and measured their integrated colors in the Spitzer/IRAC bands. Comparison of the cluster photometry in the IRAC bands with the model predictions shows that the models reproduce the cluster colors reasonably well, except for a small (not statistically significant) offset in [4.5] - [5.8]. In this color, models without circumstellar dust emission predict bluer values than are observed. Model predictions of colors formed from the V band and the IRAC 3.6 and 4.5 μm bands are redder than the observed data at high metallicities, and we discuss several possible explanations. In agreement with model predictions, V - [3.6] and V - [4.5] colors are found to have metallicity sensitivity similar to or slightly better than V - K_s.
NASA Astrophysics Data System (ADS)
De Kauwe, M. G.; Medlyn, B.; Walker, A.; Zaehle, S.; Pendall, E.; Norby, R. J.
2017-12-01
Multifactor experiments are often advocated as important for advancing models, yet to date, such models have only been tested against single-factor experiments. We applied 10 models to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions: comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factors treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.
Grant, Claire; Ewart, Lorna; Muthas, Daniel; Deavall, Damian; Smith, Simon A; Clack, Glen; Newham, Pete
2016-04-01
Nausea and vomiting are components of a complex mechanism that signals food avoidance and protection of the body against the absorption of ingested toxins. This response can also be triggered by pharmaceuticals. Predicting clinical nausea and vomiting liability for pharmaceutical agents based on pre-clinical data can be problematic as no single animal model is a universal predictor. Moreover, efforts to improve models are hampered by the lack of translational animal and human data in the public domain. AZD3514 is a novel, orally-administered compound that inhibits androgen receptor signaling and down-regulates androgen receptor expression. Here we have explored the utility of integrating data from several pre-clinical models to predict nausea and vomiting in the clinic. Single and repeat doses of AZD3514 resulted in emesis, salivation and gastrointestinal disturbances in the dog, and inhibited gastric emptying in rats after a single dose. AZD3514, at clinically relevant exposures, induced dose-responsive "pica" behaviour in rats after single and multiple daily doses, and induced retching and vomiting behaviour in ferrets after a single dose. We compare these data with the clinical manifestation of nausea and vomiting encountered in patients with castration-resistant prostate cancer receiving AZD3514. Our data reveal a striking relationship between the pre-clinical observations described and the experience of nausea and vomiting in the clinic. In conclusion, the emetic nature of AZD3514 was predicted across a range of pre-clinical models, and the approach presented provides a valuable framework for prediction of clinical nausea and vomiting. Copyright © 2016 Elsevier Inc. All rights reserved.
Predicting bending stiffness of randomly oriented hybrid panels
Laura Moya; William T.Y. Tze; Jerrold E. Winandy
2010-01-01
This study was conducted to develop a simple model to predict the bending modulus of elasticity (MOE) of randomly oriented hybrid panels. The modeling process involved three modules: the behavior of a single layer was computed by applying micromechanics equations, layer properties were adjusted for densification effects, and the entire panel was modeled as a three-...
Predicting Child Abuse Potential: An Empirical Investigation of Two Theoretical Frameworks
ERIC Educational Resources Information Center
Begle, Angela Moreland; Dumas, Jean E.; Hanson, Rochelle F.
2010-01-01
This study investigated two theoretical risk models predicting child maltreatment potential: (a) Belsky's (1993) developmental-ecological model and (b) the cumulative risk model in a sample of 610 caregivers (49% African American, 46% European American; 53% single) with a child between 3 and 6 years old. Results extend the literature by using a…
A novel single-parameter approach for forecasting algal blooms.
Xiao, Xi; He, Junyu; Huang, Haomin; Miller, Todd R; Christakos, George; Reichwaldt, Elke S; Ghadouani, Anas; Lin, Shengpan; Xu, Xinhua; Shi, Jiyan
2017-01-01
Harmful algal blooms frequently occur globally, and forecasting could constitute an essential proactive strategy for bloom control. To decrease the cost of aquatic environmental monitoring and increase the accuracy of bloom forecasting, a novel single-parameter approach combining wavelet analysis with artificial neural networks (WNN) was developed and verified based on daily online monitoring datasets of algal density in the Siling Reservoir, China and Lake Winnebago, U.S.A. Firstly, a detailed modeling process was illustrated using the forecasting of cyanobacterial cell density in the Chinese reservoir as an example. Three WNN models occupying various prediction time intervals were optimized through model training using an early stopped training approach. All models performed well in fitting historical data and predicting the dynamics of cyanobacterial cell density, with the best model predicting cyanobacteria density one-day ahead (r = 0.986 and mean absolute error = 0.103 × 10⁴ cells mL⁻¹). Secondly, the potential of this novel approach was further confirmed by the precise predictions of algal biomass dynamics measured as chl a in both study sites, demonstrating its high performance in forecasting algal blooms, including cyanobacteria as well as other blooming species. Thirdly, the WNN model was compared to current algal forecasting methods (i.e., artificial neural networks and the autoregressive integrated moving average model), and was found to be more accurate. In addition, the application of this novel single-parameter approach is cost effective as it requires only a buoy-mounted fluorescent probe, which is merely a fraction (∼15%) of the cost of a typical auto-monitoring system. As such, the newly developed approach presents a promising and cost-effective tool for the future prediction and management of harmful algal blooms. Copyright © 2016 Elsevier Ltd. All rights reserved.
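The band-wise forecasting idea behind a WNN can be sketched compactly. The toy below is not the authors' model: it pairs a one-level Haar wavelet decomposition with a plain least-squares AR(3) forecast per coefficient band on a synthetic series (all parameters and the series itself are assumed for illustration), just to show how decomposing, forecasting each band, and inverting the transform fit together.

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar transform: approximation and detail coefficient bands."""
    x = x[: len(x) // 2 * 2]                 # truncate to even length
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def ar_fit(series, p=3):
    """Least-squares AR(p) with intercept, fitted to one coefficient band."""
    X = np.column_stack([np.ones(len(series) - p)]
                        + [series[i: len(series) - p + i] for i in range(p)])
    beta, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return beta

def ar_step(series, beta, p=3):
    """One-step-ahead forecast from the last p values of the band."""
    return beta[0] + beta[1:] @ series[-p:]

# Synthetic daily "algal density" series (illustrative, not the study's data)
rng = np.random.default_rng(0)
t = np.arange(256)
x = 10 + 3 * np.sin(2 * np.pi * t / 32) + rng.normal(0.0, 0.3, t.size)

approx, detail = haar_decompose(x)
a_next = ar_step(approx, ar_fit(approx))     # forecast each band separately
d_next = ar_step(detail, ar_fit(detail))
# Inverse Haar step turns the two band forecasts into the next two samples
x_fore = np.array([a_next + d_next, a_next - d_next]) / np.sqrt(2)
```

A real WNN replaces the AR step with a trained neural network and typically uses deeper decompositions, but the decompose-forecast-reconstruct loop is the same.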
Re-Analysis of the Solar Phase Curves of the Icy Galilean Satellites
NASA Technical Reports Server (NTRS)
Domingue, Deborah; Verbiscer, Anne
1997-01-01
Re-analysis of the solar phase curves of the icy Galilean satellites demonstrates that the quantitative results are dependent on the single particle scattering function incorporated into the photometric model; however, the qualitative properties are independent. The results presented here show that the general physical characteristics predicted by a Hapke model (B. Hapke, 1986, Icarus 67, 264-280) incorporating a two parameter double Henyey-Greenstein scattering function are similar to the predictions given by the same model incorporating a three parameter double Henyey-Greenstein scattering function as long as the data set being modeled has adequate coverage in phase angle. Conflicting results occur when the large phase angle coverage is inadequate. Analysis of the role of isotropic versus anisotropic multiple scattering shows that for surfaces as bright as Europa the two models predict very similar results over phase angles covered by the data. Differences arise only at those phase angles for which there are no data. The single particle scattering behavior between the leading and trailing hemispheres of Europa and Ganymede is commensurate with magnetospheric alterations of their surfaces. Ion bombardment will produce more forward scattering single scattering functions due to annealing of potential scattering centers within regolith particles (N. J. Sack et al., 1992, Icarus 100, 534-540). Both leading and trailing hemispheres of Europa are consistent with a high porosity model and commensurate with a frost surface. There are no strong differences in predicted porosity between the two hemispheres of Callisto, both are consistent with model porosities midway between that deduced for Europa and the Moon. Surface roughness model estimates predict that surface roughness increases with satellite distance from Jupiter, with lunar surface roughness values falling midway between those measured for Ganymede and Callisto. 
There is no obvious variation in predicted surface roughness with hemisphere for any of the Galilean satellites.
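The two-parameter double Henyey-Greenstein function discussed above is easy to evaluate and sanity-check. The sketch below (illustrative parameter values, and a cosine-of-scattering-angle convention that may differ from a given photometric paper) mixes a backward and a forward lobe of equal asymmetry and verifies numerically that the phase function integrates to 1 over the sphere.

```python
import numpy as np

def hg(mu, g):
    """Single Henyey-Greenstein lobe, normalized to integrate to 1 over 4*pi sr."""
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * mu) ** 1.5)

def double_hg(mu, b, c):
    """Two-parameter double HG: backward (+b) and forward (-b) lobes of equal
    asymmetry, mixed by the partition parameter c."""
    return (1 + c) / 2 * hg(mu, b) + (1 - c) / 2 * hg(mu, -b)

# Numerical sanity check: integrate over the sphere (2*pi * integral over mu)
mu = np.linspace(-1.0, 1.0, 200001)
p = double_hg(mu, b=0.3, c=0.5)
integral = 2 * np.pi * np.sum((p[1:] + p[:-1]) / 2 * np.diff(mu))
```

The three-parameter variant mentioned in the abstract gives the two lobes independent asymmetry parameters; the normalization check applies unchanged.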
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models
Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo
2016-01-01
The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always had superior prediction ability to single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970
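The Kronecker construction of the genetic covariance described above is mechanical once the two ingredients exist. The sketch below (toy sizes and an assumed between-environment correlation matrix) builds a GBLUP-style genomic kernel from markers and forms cov(u) = E ⊗ G for three environments.

```python
import numpy as np

rng = np.random.default_rng(1)
n_lines, n_markers = 8, 50

# Genomic relationship matrix from centered SNP markers (GBLUP-style kernel)
X = rng.choice([0.0, 1.0, 2.0], size=(n_lines, n_markers))
Xc = X - X.mean(axis=0)
G = Xc @ Xc.T / n_markers

# Assumed genetic correlation matrix between the three environments
E = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Covariance of the stacked genetic effects u across environments:
# the Kronecker product used by the multi-environment model
K_u = np.kron(E, G)      # shape (n_env * n_lines, n_env * n_lines)
```

Each off-diagonal block of `K_u` is the genomic kernel scaled by the corresponding between-environment correlation, which is exactly how information is borrowed across environments; swapping `G` for a Gaussian kernel gives the GK variant.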
NASA Technical Reports Server (NTRS)
1996-01-01
Because of their superior high-temperature properties, gas generator turbine airfoils made of single-crystal, nickel-base superalloys are fast becoming the standard equipment on today's advanced, high-performance aerospace engines. The increased temperature capabilities of these airfoils have allowed for a significant increase in the operating temperatures in turbine sections, resulting in superior propulsion performance and greater efficiencies. However, the previously developed methodologies for life-prediction models are based on experience with polycrystalline alloys and may not be applicable to single-crystal alloys under certain operating conditions. One of the main areas where behavior differences between single-crystal and polycrystalline alloys are readily apparent is subcritical fatigue crack growth (FCG). The NASA Lewis Research Center's work in this area enables accurate prediction of the subcritical fatigue crack growth behavior in single-crystal, nickel-based superalloys at elevated temperatures.
Extended charge banking model of dual path shocks for implantable cardioverter defibrillators
Dosdall, Derek J; Sweeney, James D
2008-01-01
Background Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations without a practical model to aid in improving dual path defibrillation techniques. Methods The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical-mass criterion to predict the relative efficacy of single and dual path defibrillation shocks. Results The new model reproduced the results from several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Discussion Through this approach the Extended Charge Banking Model predictions may be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool to help in further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters. PMID:18673561
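The per-section RC idea above can be illustrated numerically. The sketch below is not the published model: it uses assumed values for the membrane time constant, capture threshold, critical-mass fraction, and field-gradient weights (two electrode paths that are strong in complementary regions), and reproduces the qualitative prediction that the inter-pulse delay of a dual-path shock should be minimized.

```python
import numpy as np

rng = np.random.default_rng(2)
TAU = 3.0        # membrane time constant in ms (assumed value)
V_THRESH = 0.6   # normalized response needed to capture a section (assumed)
CRIT_MASS = 0.9  # critical mass: fraction of sections that must be captured

def charge(weights, duration, v0):
    """RC charging of each section toward its local field-gradient weight."""
    return weights + (v0 - weights) * np.exp(-duration / TAU)

# Illustrative field-gradient weights for 1000 myocardial sections:
# path B is strong where path A is weak, as with orthogonal electrode pairs.
w_a = rng.uniform(0.2, 1.2, 1000)
w_b = 1.4 - w_a + rng.normal(0.0, 0.05, 1000)

def dual_path_capture(delay_ms):
    """Fraction captured by 4 ms on path A, a delay, then 4 ms on path B.
    A section is captured if its peak response ever crosses the threshold."""
    v1 = charge(w_a, 4.0, 0.0)
    v0 = v1 * np.exp(-delay_ms / TAU)        # passive decay between pulses
    v2 = charge(w_b, 4.0, v0)
    return np.mean(np.maximum(v1, v2) >= V_THRESH)

single = np.mean(charge(w_a, 8.0, 0.0) >= V_THRESH)   # one 8 ms pulse, path A
dual_now = dual_path_capture(0.0)                     # back-to-back pulses
dual_late = dual_path_capture(5.0)                    # 5 ms inter-pulse delay
```

Under these assumptions the back-to-back dual-path shock exceeds the critical mass while the single-path shock does not, and inserting a delay between pulses degrades capture, matching the model's qualitative predictions.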
Microarray-based cancer prediction using soft computing approach.
Wang, Xiaosheng; Gotoh, Osamu
2009-05-26
One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of soft computing approach and rough set theory. Accurate cancerous prediction is obtained when we apply the simple prediction models for four cancerous gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable for they are based on decision rules. Our results demonstrate that very simple models may perform well on cancerous molecular prediction and important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
Genomic prediction of the polled and horned phenotypes in Merino sheep.
Duijvesteijn, Naomi; Bolormaa, Sunduimijid; Daetwyler, Hans D; van der Werf, Julius H J
2018-05-22
In horned sheep breeds, breeding for polledness has been of interest for decades. The objective of this study was to improve prediction of the horned and polled phenotypes using horn scores classified as polled, scurs, knobs or horns. Derived phenotypes polled/non-polled (P/NP) and horned/non-horned (H/NH) were used to test four different strategies for prediction in 4001 purebred Merino sheep. These strategies include the use of single 'single nucleotide polymorphism' (SNP) genotypes, multiple-SNP haplotypes, genome-wide and chromosome-wide genomic best linear unbiased prediction and information from imputed sequence variants from the region including the RXFP2 gene. Low-density genotypes of these animals were imputed to the Illumina Ovine high-density (600k) chip and the 1.78-kb insertion polymorphism in RXFP2 was included in the imputation process to whole-genome sequence. We evaluated the mode of inheritance and validated models by a fivefold cross-validation and across- and between-family prediction. The most significant SNPs for prediction of P/NP and H/NH were OAR10_29546872.1 and OAR10_29458450, respectively, located on chromosome 10 close to the 1.78-kb insertion at 29.5 Mb. The mode of inheritance included an additive effect and a sex-dependent effect for dominance for P/NP and a sex-dependent additive and dominance effect for H/NH. Models with the highest prediction accuracies for H/NH used either single SNPs or 3-SNP haplotypes and included a polygenic effect estimated based on traditional pedigree relationships. Prediction accuracies for H/NH were 0.323 for females and 0.725 for males. For predicting P/NP, the best models were the same as for H/NH but included a genomic relationship matrix with accuracies of 0.713 for females and 0.620 for males. Our results show that prediction accuracy is high using a single SNP, but does not reach 1 since the causative mutation is not genotyped. 
Incomplete penetrance or allelic heterogeneity, which can influence expression of the phenotype, may explain why prediction accuracy did not approach 1 with any of the genetic models tested here. Nevertheless, a breeding program to eradicate horns from Merino sheep can be effective by selecting genotypes GG of SNP OAR10_29458450 or TT of SNP OAR10_29546872.1 since all sheep with these genotypes will be non-horned.
NASA Technical Reports Server (NTRS)
Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.
2000-01-01
Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
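The maximum-of-decision-variables rule described above is straightforward to simulate. The Monte Carlo sketch below (assumed d' values and set sizes, not the paper's stimuli) draws one noisy value per element per feature dimension, sums across dimensions, and scores a trial correct when the target attains the maximum, showing how accuracy degrades for conjunction displays without any serial binding mechanism.

```python
import numpy as np

rng = np.random.default_rng(3)

def search_accuracy(d_primes, n_distractors, n_trials=20000):
    """Max-rule accuracy: each element yields one unit-variance noisy value
    per relevant feature dimension; values are summed across dimensions and
    the observer picks the element with the largest sum. d_primes gives the
    target's mean advantage on each dimension (distractor means are 0)."""
    n_dims = len(d_primes)
    n_items = n_distractors + 1
    noise = rng.normal(0.0, 1.0, (n_trials, n_items, n_dims))
    noise[:, 0, :] += np.asarray(d_primes)   # element 0 is the target
    decision = noise.sum(axis=2)             # combine across dimensions
    return np.mean(decision.argmax(axis=1) == 0)

feature = search_accuracy([2.0], n_distractors=7)           # one strong cue
conjunction = search_accuracy([1.0, 1.0], n_distractors=7)  # two weaker cues
small_set = search_accuracy([2.0], n_distractors=3)         # set-size effect
```

Summing two d' = 1 dimensions yields an effective d' of 2/√2 ≈ 1.41 because the noise variances add, so conjunction accuracy falls below feature accuracy even though the total signal is the same: the degradation emerges from noise pooling, not from capacity limits.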
Responses to single photons in visual cells of Limulus
Borsellino, A.; Fuortes, M. G. F.
1968-01-01
1. A system proposed in a previous article as a model of responses of visual cells has been analysed with the purpose of predicting the features of responses to single absorbed photons. 2. As a result of this analysis, the stochastic variability of responses has been expressed as a function of the amplification of the system. 3. The theoretical predictions have been compared to the results obtained by recording electrical responses of visual cells of Limulus to flashes delivering only few photons. 4. Experimental responses to single photons have been tentatively identified and it was shown that the stochastic variability of these responses is similar to that predicted for a model with a multiplication factor of at least twenty-five. 5. These results lead to the conclusion that the processes responsible for visual responses incorporate some form of amplification. This conclusion may prove useful for identifying the physical mechanisms underlying the transducer action of visual cells. PMID:5664231
Numerical model of spray combustion in a single cylinder diesel engine
NASA Astrophysics Data System (ADS)
Acampora, Luigi; Sequino, Luigi; Nigro, Giancarlo; Continillo, Gaetano; Vaglieco, Bianca Maria
2017-11-01
A numerical model is developed for predicting the pressure cycle from Intake Valve Closing (IVC) to the Exhaust Valve Opening (EVO) events. The model is based on a modified one-dimensional (1D) Musculus and Kattke spray model, coupled with a zero-dimensional (0D) non-adiabatic transient Fed-Batch reactor model. The 1D spray model provides an estimate of the fuel evaporation rate during the injection phenomenon, as a function of time. The 0D Fed-Batch reactor model describes combustion. The main goal of adopting a 0D (perfectly stirred) model is to use highly detailed reaction mechanisms for Diesel fuel combustion in air, while keeping the computational cost as low as possible. The proposed model is validated by comparing its predictions with experimental data of pressure obtained from an optical single cylinder Diesel engine.
Ghorbani Moghaddam, Masoud; Achuthan, Ajit; Bednarcyk, Brett A; Arnold, Steven M; Pineda, Evan J
2016-05-04
A multiscale computational model is developed for determining the elasto-plastic behavior of polycrystal metals by employing a single crystal plasticity constitutive model that can capture the microstructural scale stress field on a finite element analysis (FEA) framework. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. At first, the stand-alone GMC is applied for studying simple material microstructures such as a repeating unit cell (RUC) containing a single grain or two grains under uniaxial loading conditions. For verification, the results obtained by the stand-alone GMC are compared to those from an analogous FEA model incorporating the same single crystal plasticity constitutive model. This verification is then extended to samples containing tens to hundreds of grains. The results demonstrate that the GMC homogenization combined with the crystal plasticity constitutive framework is a promising approach for failure analysis of structures as it allows for properly predicting the von Mises stress in the entire RUC, in an average sense, as well as in the local microstructural level, i.e., each individual grain. Two to three orders of magnitude of savings in computational cost were obtained with GMC, at the expense of some accuracy in prediction, especially in the prediction of the components of local tensor field quantities and the quantities near the grain boundaries. Finally, the capability of the developed multiscale model linking FEA and GMC to solve real-life-sized structures is demonstrated by successfully analyzing an engine disc component and determining the microstructural scale details of the field quantities.
CaPTHUS scoring model in primary hyperparathyroidism: can it eliminate the need for ioPTH testing?
Elfenbein, Dawn M; Weber, Sara; Schneider, David F; Sippel, Rebecca S; Chen, Herbert
2015-04-01
The CaPTHUS model was reported to have a positive predictive value of 100 % to correctly predict single-gland disease in patients with primary hyperparathyroidism, thus obviating the need for intraoperative parathyroid hormone (ioPTH) testing. We sought to apply the CaPTHUS scoring model in our patient population and assess its utility in predicting long-term biochemical cure. We retrospectively reviewed all parathyroidectomies for primary hyperparathyroidism performed at our university hospital from 2003 to 2012. We routinely perform ioPTH testing. Biochemical cure was defined as a normal calcium level at 6 months. A total of 1,421 patients met the inclusion criteria: 78 % of patients had a single adenoma at the time of surgery, 98 % had a normal serum calcium at 1 week postoperatively, and 96 % had a normal serum calcium level 6 months postoperatively. Using the CaPTHUS scoring model, 307 patients (22.5 %) had a score of ≥ 3, with a positive predictive value of 91 % for single adenoma. A CaPTHUS score of ≥ 3 had a positive predictive value of 98 % for biochemical cure at 1 week as well as at 6 months. In our population, where ioPTH testing is used routinely to guide use of bilateral exploration, patients with a preoperative CaPTHUS score of ≥ 3 had good long-term biochemical cure rates. However, the model only predicted adenoma in 91 % of cases. If minimally invasive parathyroidectomy without ioPTH testing had been done for these patients, the cure rate would have dropped from 98 % to an unacceptable 89 %. Even in these patients with high CaPTHUS scores, multigland disease is present in almost 10 %, and ioPTH testing is necessary.
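The arithmetic behind the projected cure-rate drop above is worth making explicit. The back-of-envelope sketch below uses only the figures reported in the abstract; the per-patient breakdown is inferred, not taken from the study's data.

```python
# Back-of-envelope check of the CaPTHUS figures reported above.
n_high_score = 307                # patients with CaPTHUS score >= 3
ppv_single_adenoma = 0.91         # reported PPV for single-gland disease

single_gland = round(n_high_score * ppv_single_adenoma)   # inferred count
multi_gland = n_high_score - single_gland                 # ~9-10% of the group

# With routine ioPTH guiding bilateral exploration, 98% were cured at 1 week.
cure_with_iopth = 0.98
# Without ioPTH, a focused operation would be expected to miss multigland
# disease, so the projected cure rate is roughly the single-gland fraction
# times the observed cure rate -- the ~89% quoted in the abstract.
projected_cure_without_iopth = ppv_single_adenoma * cure_with_iopth
```

Multiplying 0.91 by 0.98 gives about 0.892, which matches the abstract's stated drop from 98 % to 89 % and shows the drop is driven almost entirely by the ~9 % of high-scoring patients with multigland disease.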
Multi-model ensemble hydrologic prediction using Bayesian model averaging
NASA Astrophysics Data System (ADS)
Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh
2007-05-01
The multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble.
The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
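The likelihood-based weighting at the heart of BMA can be sketched with the standard expectation-maximization recursion for Gaussian member distributions. The toy below (two synthetic "models" and a shared error variance, all assumed) shows how the better-performing member earns the larger weight and how the weighted mean forms the consensus prediction; it is a minimal sketch, not the study's nine-member implementation.

```python
import numpy as np

def bma_weights(preds, obs, n_iter=200):
    """EM for Gaussian BMA with a common error variance.
    preds: (n_models, n_times) member predictions; obs: (n_times,) truth."""
    k, n = preds.shape
    w = np.full(k, 1.0 / k)
    var = np.var(obs - preds.mean(axis=0))
    for _ in range(n_iter):
        # E-step: posterior probability each member generated each observation
        dens = np.exp(-0.5 * (obs - preds) ** 2 / var) / np.sqrt(2 * np.pi * var)
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)
        # M-step: update weights and the common variance
        w = z.mean(axis=1)
        var = np.sum(z * (obs - preds) ** 2) / n
    return w, var

rng = np.random.default_rng(4)
truth = np.sin(np.linspace(0, 6, 300))
good = truth + rng.normal(0.0, 0.1, 300)    # skillful member
poor = truth + rng.normal(0.5, 0.4, 300)    # biased, noisy member
preds = np.stack([good, poor])

w, var = bma_weights(preds, truth)
bma_mean = w @ preds                        # consensus (expected) prediction
```

The full scheme in the paper additionally applies a Box-Cox transform so the Gaussian assumption is reasonable, and the multiple-weight variant simply runs this estimation separately per flow interval.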
Ferguson, Jake M; Ponciano, José M
2014-02-01
Predicting population extinction risk is a fundamental application of ecological theory to the practice of conservation biology. Here, we compared the prediction performance of a wide array of stochastic, population dynamics models against direct observations of the extinction process from an extensive experimental data set. By varying a series of biological and statistical assumptions in the proposed models, we were able to identify the assumptions that affected predictions about population extinction. We also show how certain autocorrelation structures can emerge due to interspecific interactions, and that accounting for the stochastic effect of these interactions can improve predictions of the extinction process. We conclude that it is possible to account for the stochastic effects of community interactions on extinction when using single-species time series. © 2013 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
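The effect of noise autocorrelation on extinction risk noted above is easy to demonstrate with a minimal stochastic model. The sketch below (a Ricker map with AR(1) environmental noise and assumed parameter values, not one of the paper's fitted models) estimates quasi-extinction probability by Monte Carlo and contrasts white against autocorrelated environments.

```python
import numpy as np

rng = np.random.default_rng(5)

def extinction_prob(rho, sigma=0.5, r=0.5, K=100.0, n0=50.0,
                    t_max=50, n_reps=2000):
    """Monte Carlo quasi-extinction probability for a stochastic Ricker model
    driven by AR(1) environmental noise with autocorrelation rho."""
    extinct = 0
    for _ in range(n_reps):
        n, eps = n0, 0.0
        for _ in range(t_max):
            eps = rho * eps + rng.normal(0.0, sigma)     # environmental state
            n = n * np.exp(r * (1.0 - n / K) + eps)      # Ricker update
            if n < 1.0:                                  # quasi-extinction
                extinct += 1
                break
    return extinct / n_reps

p_white = extinction_prob(rho=0.0)   # uncorrelated environment
p_red = extinction_prob(rho=0.9)     # autocorrelated ("red") environment
```

With the same innovation scale, the autocorrelated environment produces long runs of bad years and a far higher extinction probability, which is the kind of autocorrelation structure the paper shows can emerge from unmodeled interspecific interactions.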
Rainbow trout-based assays for estrogenicity are currently being used for development of predictive models based upon quantitative structure activity relationships. A predictive model based on a single species raises the question of whether this information is valid for other spe...
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-12-09
In crop breeding, the interest of predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed-model using moving-means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models.
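The Gaussian kernel model that gave the highest accuracy above is, at its core, kernel ridge regression on a marker-distance kernel. The sketch below uses a simulated genotype-phenotype data set with assumed sizes, heritability, and ridge penalty; it is not the study's pipeline (no spatial adjustment, no genotyping-by-sequencing data), just the GK prediction step.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 150, 300                                  # lines, SNP markers (toy)

X = rng.choice([0.0, 1.0, 2.0], size=(n, m))     # SNP genotypes coded 0/1/2
Xc = X - X.mean(axis=0)
beta = rng.normal(0.0, 1.0, m) * (rng.random(m) < 0.05)  # sparse effects
y = Xc @ beta + rng.normal(0.0, 1.0, n)          # simulated phenotype

# Gaussian kernel from squared marker distances, bandwidth = median distance
sq = (Xc ** 2).sum(axis=1)
D2 = sq[:, None] + sq[None, :] - 2.0 * (Xc @ Xc.T)
K = np.exp(-D2 / np.median(D2[D2 > 1e-9]))

train, test = np.arange(100), np.arange(100, 150)
lam = 1.0                                        # ridge penalty (assumed)

# Kernel ridge / RKHS regression: alpha = (K_tt + lam*I)^(-1) y_t
alpha = np.linalg.solve(K[np.ix_(train, train)] + lam * np.eye(train.size),
                        y[train])
y_hat = K[np.ix_(test, train)] @ alpha
accuracy = np.corrcoef(y_hat, y[test])[0, 1]     # predictive correlation
```

Replacing `K` with the linear kernel `Xc @ Xc.T / m` recovers GBLUP, so the two methods compared in the study differ only in the kernel handed to the same solver.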
STRUM: structure-based prediction of protein stability changes upon single-point mutation.
Quan, Lijun; Lv, Qiang; Zhang, Yang
2016-10-01
Mutations in human genome are mainly through single nucleotide polymorphism, some of which can affect stability and function of proteins, causing human diseases. Several methods have been proposed to predict the effect of mutations on protein stability; but most require features from experimental structure. Given the fast progress in protein structure prediction, this work explores the possibility to improve the mutation-induced stability change prediction using low-resolution structure modeling. We developed a new method (STRUM) for predicting stability change caused by single-point mutations. Starting from wild-type sequences, 3D models are constructed by the iterative threading assembly refinement (I-TASSER) simulations, where physics- and knowledge-based energy functions are derived on the I-TASSER models and used to train STRUM models through gradient boosting regression. STRUM was assessed by 5-fold cross validation on 3421 experimentally determined mutations from 150 proteins. The Pearson correlation coefficient (PCC) between predicted and measured changes of Gibbs free-energy gap, ΔΔG, upon mutation reaches 0.79 with a root-mean-square error 1.2 kcal/mol in the mutation-based cross-validations. The PCC reduces if separating training and test mutations from non-homologous proteins, which reflects inherent correlations in the current mutation sample. Nevertheless, the results significantly outperform other state-of-the-art methods, including those built on experimental protein structures. Detailed analyses show that the most sensitive features in STRUM are the physics-based energy terms on I-TASSER models and the conservation scores from multiple-threading template alignments. However, the ΔΔG prediction accuracy has only a marginal dependence on the accuracy of protein structure models as long as the global fold is correct. 
These data demonstrate the feasibility of using low-resolution structure modeling for high-accuracy prediction of stability changes upon point mutations. Availability: http://zhanglab.ccmb.med.umich.edu/STRUM/ Contact: qiang@suda.edu.cn and zhng@umich.edu Supplementary data are available at Bioinformatics online.
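As a rough illustration of the training-and-evaluation loop the abstract describes (gradient boosting regression assessed by 5-fold cross-validation with PCC and RMSE), the sketch below uses synthetic features in place of the I-TASSER-derived energy terms and conservation scores; the data, dimensions, and model settings are placeholders, not STRUM's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's features: each row is one mutation,
# columns mimic physics-based energy terms and conservation scores.
X = rng.normal(size=(300, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=300)  # "measured" ddG

# 5-fold cross-validation: every mutation is predicted by a model
# that never saw it during training.
preds = np.empty_like(y)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = GradientBoostingRegressor(random_state=0)
    model.fit(X[train], y[train])
    preds[test] = model.predict(X[test])

pcc = np.corrcoef(y, preds)[0, 1]          # Pearson correlation coefficient
rmse = np.sqrt(np.mean((y - preds) ** 2))  # root-mean-square error
print(f"PCC={pcc:.2f}, RMSE={rmse:.2f}")
```

The abstract's caution about homology applies here too: a fair split for real proteins would group mutations by protein (or by sequence cluster) rather than shuffling mutations freely across folds.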
Improving the Representation of Snow Crystal Properties Within a Single-Moment Microphysics Scheme
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, S. R.
2010-01-01
As computational resources continue to expand, weather forecast models are transitioning to the use of parameterizations that predict the evolution of hydrometeors and their microphysical processes, rather than estimating the bulk effects of clouds and precipitation that occur on a sub-grid scale. These parameterizations are referred to as single-moment, bulk water microphysics schemes, as they predict the total water mass among hydrometeors in a limited number of classes. Although the development of single-moment microphysics schemes has often been driven by the need to predict the structure of convective storms, they may also provide value in predicting accumulations of snowfall. Predicting the accumulation of snowfall presents unique challenges to forecasters and microphysics schemes. In cases where surface temperatures are near freezing, accumulated depth often depends upon the snowfall rate and the ability to overcome an initial warm layer. Precipitation efficiency relates to the dominant ice crystal habit, as dendrites and plates have relatively large surface areas for the accretion of cloud water and ice, but are only favored within a narrow range of ice supersaturation and temperature. Forecast models and their parameterizations must accurately represent the characteristics of snow crystal populations, such as their size distribution, bulk density and fall speed. These properties relate to the vertical distribution of ice within simulated clouds, the temperature profile through latent heat release, and the eventual precipitation rate measured at the surface. The NASA Goddard single-moment microphysics scheme is available to the operational forecast community as an option within the Weather Research and Forecasting (WRF) model. The NASA Goddard scheme predicts the occurrence of up to six classes of water mass: vapor, cloud ice, cloud water, rain, snow and either graupel or hail.
Development and evaluation of height-diameter at breast height models for native Chinese Metasequoia.
Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li
2017-01-01
Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear and 46 were non-linear. These models were divided into two groups, single-variable and multivariate models, according to the number of independent variables. The results show that an allometric equation with dbh as the independent variable can better reflect the change in tree height, and that the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the relationship between tree height and dbh, considering tree age when choosing models and parameters can make tree height predictions more accurate. The amount of data is also an important factor that can improve the reliability of the models. Other variables, such as tree height, main dbh, and altitude, can also affect the models. The method developed in this study for recommending models to predict the height of native Metasequoias aged 50-485 years is statistically reliable and can serve as a reference for predicting the growth and production of mature native Metasequoia.
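A minimal sketch of the kind of single-variable allometric height-dbh fit the abstract evaluates; the measurements, the power-law form, and the 1.3 m breast-height offset are illustrative assumptions, not the paper's fitted models or data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dbh (cm) and total height (m) measurements for a few trees.
dbh = np.array([12.0, 18.5, 25.0, 33.0, 41.0, 52.5, 60.0, 75.0])
height = np.array([9.1, 12.8, 15.9, 19.2, 22.0, 25.4, 27.1, 30.5])

def allometric(d, a, b):
    # Simple power-law height-dbh curve; the 1.3 m term anchors the
    # curve at breast height, where dbh is measured.
    return 1.3 + a * d ** b

(a, b), _ = curve_fit(allometric, dbh, height, p0=(1.0, 0.7))
pred = allometric(dbh, a, b)
rmse = np.sqrt(np.mean((height - pred) ** 2))
print(f"a={a:.2f}, b={b:.2f}, RMSE={rmse:.2f} m")
```

A multivariate composite model of the kind the paper prefers would simply add further predictors (age, altitude, stand variables) to the function being fitted.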
Wu, Hua’an; Zhou, Meng
2017-01-01
High accuracy in water demand prediction is an important basis for the rational allocation of city water resources and forms the basis for sustainable urban development. The shortage of water resources in Chongqing, the youngest central municipality in Southwest China, has significantly increased with population growth and rapid economic development. In this paper, a new grey water-forecasting model (GWFM) was built based on the data characteristics of water consumption. The parameter estimation and error checking methods of the GWFM model were investigated. The GWFM model was then employed to simulate the water demand of Chongqing from 2009 to 2015 and to forecast it for 2016. The simulation and prediction errors of the GWFM model were checked, and the results show that the GWFM model exhibits better simulation and prediction precision than both the classical grey model with one variable and a single-order equation, GM(1,1) for short, and the frequently used discrete grey model with one variable and a single-order equation, DGM(1,1) for short. Finally, the water demand in Chongqing from 2017 to 2022 was forecasted, and corresponding control measures and recommendations were provided based on the prediction results to ensure a viable water supply and promote the sustainable development of the Chongqing economy. PMID:29140266
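The classical GM(1,1) model the abstract uses as a baseline is compact enough to sketch: accumulate the series, estimate the grey coefficients by least squares on the whitened equation, then difference the fitted exponential back to the original scale. The demand series below is made up for illustration; it is not Chongqing's data.

```python
import numpy as np

def gm11_forecast(x, horizon):
    """Classical single-variable, first-order grey model GM(1,1)."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                        # accumulated generating operation (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])             # background (mean-generated) values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # developing coefficient, grey input
    k = np.arange(len(x) + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    return np.diff(x1_hat, prepend=0.0)      # inverse AGO restores the series

# Hypothetical annual water-demand series (arbitrary units)
demand = [21.0, 21.8, 22.7, 23.5, 24.6, 25.4, 26.5]
forecast = gm11_forecast(demand, horizon=3)
print(np.round(forecast, 2))
```

DGM(1,1), the other baseline named above, replaces the whitened differential equation with a directly discrete recurrence; the fitting step is otherwise analogous.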
Ben Hassen, Manel; Bartholomé, Jérôme; Valè, Giampiero; Cao, Tuong-Vi; Ahmadi, Nourollah
2018-05-09
Developing rice varieties adapted to alternate wetting and drying water management is crucial for the sustainability of irrigated rice cropping systems. Here we report the first study exploring the feasibility of breeding rice for adaptation to alternate wetting and drying using genomic prediction methods that account for genotype-by-environment interactions. Two breeding populations (a reference panel of 284 accessions and a progeny population of 97 advanced lines) were evaluated under alternate wetting and drying and continuous flooding management systems. The predictive ability of genomic prediction for response variables (the index of relative performance and the slope of the joint regression) and for multi-environment genomic prediction models was compared. For the three traits considered (days to flowering, panicle weight and nitrogen-balance index), significant genotype-by-environment interactions were observed in both populations. In cross-validation, predictive ability for the index was on average lower (0.31) than that for the slope of the joint regression (0.64), whatever the trait considered. Similar results were found for progeny validation. Both cross-validation and progeny-validation experiments showed that the performance of multi-environment models predicting unobserved phenotypes of untested entries was similar to that of single-environment models, with differences in predictive ability ranging from -6% to 4% depending on the trait and the statistical model concerned. The predictive ability of multi-environment models predicting unobserved phenotypes of entries evaluated under both water management systems outperformed single-environment models by an average of 30%. Practical implications for breeding rice for adaptation to the alternate wetting and drying system are discussed. Copyright © 2018, G3: Genes, Genomes, Genetics.
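The slope of the joint regression used as a response variable above (Finlay-Wilkinson regression) is obtained by regressing each genotype's performance on the environment means; a slope near 1 indicates average responsiveness, above 1 above-average responsiveness to better environments. The phenotype matrix below is a made-up toy, not the study's data.

```python
import numpy as np

# Hypothetical phenotypes: rows = genotypes, columns = environments
# (e.g. alternate wetting/drying vs continuous flooding trials).
y = np.array([
    [4.1, 5.0, 6.2],
    [3.8, 5.1, 6.9],
    [4.5, 4.9, 5.4],
])
env_mean = y.mean(axis=0)   # environmental index (mean over genotypes)

# Joint-regression slope per genotype: least-squares slope of that
# genotype's performance on the environmental index.
slopes = [np.polyfit(env_mean, y[g], 1)[0] for g in range(y.shape[0])]
print(np.round(slopes, 2))
```

By construction the slopes average exactly 1 across genotypes, since the environmental index is itself the genotype mean in each environment.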
Life prediction and constitutive models for engine hot section anisotropic materials program
NASA Technical Reports Server (NTRS)
Nissley, D. M.; Meyer, T. G.; Walker, K. P.
1992-01-01
This report presents a summary of results from a 7 year program designed to develop generic constitutive and life prediction approaches and models for nickel-based single crystal gas turbine airfoils. The program was composed of a base program and an optional program. The base program addressed the high temperature coated single crystal regime above the airfoil root platform. The optional program investigated the low temperature uncoated single crystal regime below the airfoil root platform including the notched conditions of the airfoil attachment. Both base and option programs involved experimental and analytical efforts. Results from uniaxial constitutive and fatigue life experiments of coated and uncoated PWA 1480 single crystal material formed the basis for the analytical modeling effort. Four single crystal primary orientations were used in the experiments: ⟨001⟩, ⟨011⟩, ⟨111⟩, and ⟨213⟩. Specific secondary orientations were also selected for the notched experiments in the optional program. Constitutive models for an overlay coating and PWA 1480 single crystal materials were developed based on isothermal hysteresis loop data and verified using thermomechanical (TMF) hysteresis loop data. A fatigue life approach and life models were developed for TMF crack initiation of coated PWA 1480. A life model was developed for smooth and notched fatigue in the option program. Finally, computer software incorporating the overlay coating and PWA 1480 constitutive and life models was developed.
Do Recognition and Priming Index a Unitary Knowledge Base? Comment on Shanks et al. (2003)
ERIC Educational Resources Information Center
Runger, Dennis; Nagy, Gabriel; Frensch, Peter A.
2009-01-01
Whether sequence learning entails a single or multiple memory systems is a moot issue. Recently, D. R. Shanks, L. Wilkinson, and S. Channon advanced a single-system model that predicts a perfect correlation between true (i.e., error-free) response time priming and recognition. The Shanks model is contrasted with a dual-process model that…
Application of biodynamic imaging for personalized chemotherapy in canine lymphoma
NASA Astrophysics Data System (ADS)
Custead, Michelle R.
Biodynamic imaging (BDI) is a novel phenotypic cancer profiling technology that characterizes changes in cellular and subcellular motion in living tumor tissue samples following in vitro or ex vivo treatment with chemotherapeutics. The ability of BDI to predict clinical response to single-agent doxorubicin chemotherapy was tested in ten dogs with naturally occurring non-Hodgkin's lymphomas (NHL). Pre-treatment tumor biopsy samples were obtained from all dogs and treated with doxorubicin (10 μM) ex vivo. BDI captured cellular and subcellular motility measures on all biopsy samples at baseline and at regular intervals for 9 hours following drug application. All dogs subsequently received treatment with a standard single-agent doxorubicin protocol. Objective response (OR) to doxorubicin and progression-free survival time (PFST) following chemotherapy were recorded for all dogs. The dynamic biomarkers measured by BDI were entered into a multivariate logistic model to determine the extent to which BDI predicted OR and PFST following doxorubicin therapy. The model showed that the sensitivity, specificity, and accuracy of BDI for predicting treatment outcome were 95%, 91%, and 93%, respectively. To account for possible over-fitting of data to the predictive model, cross-validation with a leave-one-out analysis was performed, and the adjusted sensitivity, specificity, and accuracy following this analysis were 93%, 87%, and 91%, respectively. These findings suggest that BDI can predict, with high accuracy, treatment outcome following single-agent doxorubicin chemotherapy in a relevant spontaneous canine cancer model, and is a promising novel technology for advancing personalized cancer medicine.
Zabor, Emily C; Coit, Daniel; Gershenwald, Jeffrey E; McMasters, Kelly M; Michaelson, James S; Stromberg, Arnold J; Panageas, Katherine S
2018-02-22
Prognostic models are increasingly being made available online, where they can be publicly accessed by both patients and clinicians. These online tools are an important resource for patients to better understand their prognosis and for clinicians to make informed decisions about treatment and follow-up. The goal of this analysis was to highlight the possible variability in multiple online prognostic tools in a single disease. To demonstrate the variability in survival predictions across online prognostic tools, we applied a single validation dataset to three online melanoma prognostic tools. Data on melanoma patients treated at Memorial Sloan Kettering Cancer Center between 2000 and 2014 were retrospectively collected. Calibration was assessed using calibration plots and discrimination was assessed using the C-index. In this demonstration project, we found important differences across the three models that led to variability in individual patients' predicted survival across the tools, especially in the lower range of predictions. In a validation test using a single-institution data set, calibration and discrimination varied across the three models. This study underscores the potential variability both within and across online tools, and highlights the importance of using methodological rigor when developing a prognostic model that will be made publicly available online. The results also reinforce that careful development and thoughtful interpretation, including understanding a given tool's limitations, are required in order for online prognostic tools that provide survival predictions to be a useful resource for both patients and clinicians.
Mokhtarzadeh, Hossein; Perraton, Luke; Fok, Laurence; Muñoz, Mario A; Clark, Ross; Pivonka, Peter; Bryant, Adam L
2014-09-22
The aim of this paper was to compare the effect of different optimisation methods and different knee joint degrees of freedom (DOF) on muscle force predictions during a single-legged hop. Nineteen subjects performed single-legged hopping manoeuvres and subject-specific musculoskeletal models were developed to predict muscle forces during the movement. Muscle forces were predicted using static optimisation (SO) and computed muscle control (CMC) methods using either 1 or 3 DOF knee joint models. All sagittal and transverse plane joint angles calculated using inverse kinematics or CMC in a 1 DOF or 3 DOF knee were well-matched (RMS error < 3°). Biarticular muscles (hamstrings, rectus femoris and gastrocnemius) showed more differences in muscle force profiles when comparing between the different muscle prediction approaches, where these muscles showed larger time delays for many of the comparisons. The muscle force magnitudes of vasti, gluteus maximus and gluteus medius were not greatly influenced by the choice of muscle force prediction method, with low normalised root mean squared errors (< 48%) observed in most comparisons. We conclude that SO and CMC can be used to predict lower-limb muscle co-contraction during hopping movements. However, care must be taken in interpreting the magnitude of force predicted in the biarticular muscles and the soleus, especially when using a 1 DOF knee. Despite this limitation, given that SO is a more robust and computationally efficient method for predicting muscle forces than CMC, we suggest that SO can be used in conjunction with musculoskeletal models that have a 1 or 3 DOF knee joint to study the relative differences and the role of muscles during hopping activities in future studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
Ngayihi Abbe, Claude Valery; Nzengwa, Robert; Danwe, Raidandi
2014-01-01
The present work presents a comparative simulation of a diesel engine fuelled with diesel and biodiesel. Two models based on tabulated chemistry were implemented for the simulation, and results were compared with experimental data obtained from a single-cylinder diesel engine. The first is a single-zone model based on the Krieger and Bormann combustion model, while the second is a two-zone model based on the Olikara and Bormann combustion model. It was shown that both models can predict the engine's in-cylinder pressure as well as its overall performance. The second model was more accurate than the first, while the first was easier to implement and faster to compute. The first method was therefore better suited for real-time engine control and monitoring, while the second was better suited for engine design and emission prediction.
Constitutive and life modeling of single crystal blade alloys for root attachment analysis
NASA Technical Reports Server (NTRS)
Meyer, T. G.; Mccarthy, G. J.; Favrow, L. H.; Anton, D. L.; Bak, Joe
1988-01-01
Work to develop fatigue life prediction and constitutive models for uncoated attachment regions of single crystal gas turbine blades is described. At temperatures relevant to attachment regions, deformation is dominated by slip on crystallographic planes. However, fatigue crack initiation and early crack growth are not always observed to be crystallographic. The influence of naturally occurring microporosity will be investigated by testing both hot isostatically pressed and conventionally cast PWA 1480 single crystal specimens. Several different specimen configurations and orientations relative to the natural crystal axes are being tested to investigate the influence of notch acuity and the material's anisotropy. Global and slip system stresses in the notched regions were determined from three-dimensional stress analyses and will be used to develop fatigue life prediction models consistent with the observed lives and crack characteristics.
A comparison of arcjet plume properties to model predictions
NASA Technical Reports Server (NTRS)
Cappelli, M. A.; Liebeskind, J. G.; Hanson, R. K.; Butler, G. W.; King, D. Q.
1993-01-01
This paper describes an experimental study of the plasma plume properties of a 1 kW class hydrogen arcjet thruster and the comparison of measured temperature and velocity fields to model predictions. The experiments are based on laser-induced fluorescence excitation of the Balmer-alpha transition. The model is based on a single-fluid magnetohydrodynamic description of the flow originally developed to predict arcjet thruster performance. Excellent agreement between model predictions and experimental velocities is found, despite the complex nature of the flow. Measured and predicted exit plane temperatures disagree by as much as 2000 K over a range of operating conditions. The possible sources for this discrepancy are discussed.
Effectiveness of repeated examination to diagnose enterobiasis in nursery school groups.
Remm, Mare; Remm, Kalle
2009-09-01
The aim of this study was to estimate the benefit from repeated examinations in the diagnosis of enterobiasis in nursery school groups, and to test the effectiveness of individual-based risk predictions using different methods. A total of 604 children were examined using double, and 96 using triple, anal swab examinations. The questionnaires for parents, structured observations, and interviews with supervisors were used to identify factors of possible infection risk. In order to model the risk of enterobiasis at individual level, a similarity-based machine learning and prediction software Constud was compared with data mining methods in the Statistica 8 Data Miner software package. Prevalence according to a single examination was 22.5%; the increase as a result of double examinations was 8.2%. Single swabs resulted in an estimated prevalence of 20.1% among children examined 3 times; double swabs increased this by 10.1%, and triple swabs by 7.3%. Random forest classification, boosting classification trees, and Constud correctly predicted about 2/3 of the results of the second examination. Constud estimated a mean prevalence of 31.5% in groups. Constud was able to yield the highest overall fit of individual-based predictions while boosting classification tree and random forest models were more effective in recognizing Enterobius positive persons. As a rule, the actual prevalence of enterobiasis is higher than indicated by a single examination. We suggest using either the values of the mean increase in prevalence after double examinations compared to single examinations or group estimations deduced from individual-level modelled risk predictions.
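The diminishing gains from repeated swabs reported above (20.1% after one, +10.1% after two, +7.3% after three) are consistent with a simple imperfect-sensitivity model. The sketch below illustrates the arithmetic; the independence assumption and the parameter values are ours, chosen for illustration, not the paper's.

```python
def detected(p, s, k):
    """Expected fraction testing positive after k swabs, given true
    prevalence p and per-swab sensitivity s, assuming swabs detect a
    true carrier independently with probability s each time."""
    return p * (1 - (1 - s) ** k)

# Hypothetical true prevalence and per-swab sensitivity that roughly
# reproduce the cumulative detection pattern in the abstract.
p, s = 0.45, 0.45
for k in (1, 2, 3):
    print(f"after {k} swab(s): {detected(p, s, k):.1%}")
```

Under these toy values the model predicts roughly 20%, 31%, and 38% cumulative detection, echoing the abstract's conclusion that a single examination understates the true prevalence.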
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu
2015-07-07
Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
Validation of Single-Item Screening Measures for Provider Burnout in a Rural Health Care Network.
Waddimba, Anthony C; Scribani, Melissa; Nieves, Melinda A; Krupa, Nicole; May, John J; Jenkins, Paul
2016-06-01
We validated three single-item measures for emotional exhaustion (EE) and depersonalization (DP) among rural physician/nonphysician practitioners. We linked cross-sectional survey data (on provider demographics, satisfaction, resilience, and burnout) with administrative information from an integrated health care network (1 academic medical center, 6 community hospitals, 31 clinics, and 19 school-based health centers) in an eight-county underserved area of upstate New York. In total, 308 physicians and advanced-practice clinicians completed a self-administered, multi-instrument questionnaire (65.1% response rate). Significant proportions of respondents reported high EE (36.1%) and DP (9.9%). In multivariable linear mixed models, scores on EE/DP subscales of the Maslach Burnout Inventory were regressed on each single-item measure. The Physician Work-Life Study's single-item measure (classifying 32.8% of respondents as burning out/completely burned out) was correlated with EE and DP (Spearman's ρ = .72 and .41, p < .0001; Kruskal-Wallis χ² = 149.9 and 56.5, p < .0001, respectively). In multivariable models, it predicted high EE (but neither low EE nor low/high DP). EE/DP single items were correlated with parent subscales (Spearman's ρ = .89 and .81, p < .0001; Kruskal-Wallis χ² = 230.98 and 197.84, p < .0001, respectively). In multivariable models, the EE item predicted high/low EE, whereas the DP item predicted only low DP. Therefore, the three single-item measures tested varied in effectiveness as screeners for EE/DP dimensions of burnout. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Kim, Chan Moon; Parnichkun, Manukid
2017-11-01
Coagulation is an important process in drinking water treatment for attaining acceptable treated water quality. However, determining the coagulant dosage is still a challenging task for operators, because coagulation is a nonlinear and complicated process. Feedback control to achieve the desired treated water quality is difficult due to the lengthy process time. In this research, a hybrid of k-means clustering and an adaptive neuro-fuzzy inference system (k-means-ANFIS) is proposed for settled water turbidity prediction and optimal coagulant dosage determination using full-scale historical data. To build a model well adapted to the different process states of the influent water, raw water quality data are classified into four clusters according to their properties by a k-means clustering technique. The sub-models are developed individually on the basis of each clustered data set. Results reveal that the sub-models constructed by the hybrid k-means-ANFIS perform better than not only a single ANFIS model but also seasonal artificial neural network (ANN) models. The complete model, consisting of the sub-models, shows more accurate and consistent prediction ability than a single ANFIS model and a single ANN model on all five evaluation indices. Therefore, the hybrid k-means-ANFIS model can be employed as a robust tool for managing both treated water quality and production costs simultaneously.
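The cluster-then-fit structure of the hybrid model can be sketched as follows. Since ANFIS has no scikit-learn implementation, plain linear regressors stand in for the per-cluster sub-models, and the water-quality data are synthetic; only the overall architecture (cluster the influent conditions, train one sub-model per cluster, route new samples to their cluster's sub-model) mirrors the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic raw-water quality features (e.g. turbidity, pH, temperature, flow)
# with a regime-dependent target, so a single global model fits poorly.
X = rng.normal(size=(400, 4))
y = np.where(X[:, 0] > 0, 2.0 * X[:, 1], -1.5 * X[:, 2]) + rng.normal(scale=0.1, size=400)

# Step 1: cluster influent conditions into four process states.
km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(X)

# Step 2: fit one sub-model per cluster (stand-in for per-cluster ANFIS).
submodels = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
             for c in range(4)}

def predict(x_new):
    # Route each new sample to its cluster's sub-model.
    clusters = km.predict(x_new)
    return np.array([submodels[c].predict(x_new[i:i + 1])[0]
                     for i, c in enumerate(clusters)])

print(predict(X[:3]).round(2))
```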
Modeling Transverse Cracking in Laminates With a Single Layer of Elements Per Ply
NASA Technical Reports Server (NTRS)
Van Der Meer, Frans P.; Davila, Carlos G.
2012-01-01
The objective of the present paper is to investigate the ability of mesolevel X-FEM models with a single layer of elements per ply to capture accurately all aspects of matrix cracking. In particular, we examine whether the model can predict the in-situ ply thickness effect on crack initiation and propagation, the crack density as a function of strain, the strain for crack saturation, and the interaction between delamination and transverse cracks. Results reveal that the simplified model does not capture correctly the shear-lag relaxation of the stress field on either side of a crack, which leads to an overprediction of the crack density. It is also shown, however, that after onset of delamination many of the inserted matrix cracks close again, and that the density of open cracks becomes similar to the density predicted by the detailed model. The degree to which the spurious cracks affect the global response is quantified, and the reliability of the mesolevel approach with a single layer of elements per ply is discussed.
The potential of large studies for building genetic risk prediction models
NCI scientists have developed a new paradigm to assess hereditary risk prediction in common diseases, such as prostate cancer. This genetic risk prediction concept is based on polygenic analysis—the study of a group of common DNA sequences, known as singl...
Sasakawa, Tomoki; Masui, Kenichi; Kazama, Tomiei; Iwasaki, Hiroshi
2016-08-01
Rocuronium concentration prediction using pharmacokinetic (PK) models would be useful for controlling rocuronium effects because neuromuscular monitoring throughout anesthesia can be difficult. This study assessed whether six different compartmental PK models developed from data obtained after bolus administration only could predict the measured plasma concentration (Cp) values of rocuronium delivered by bolus followed by continuous infusion. Rocuronium Cp values from 19 healthy subjects who received a bolus dose followed by continuous infusion in a phase III multicenter trial in Japan were used retrospectively as evaluation datasets. Six different compartmental PK models of rocuronium were used to simulate rocuronium Cp time course values, which were compared with measured Cp values. Prediction error (PE) derivatives of median absolute PE (MDAPE), median PE (MDPE), wobble, divergence absolute PE, and divergence PE were used to assess inaccuracy, bias, intra-individual variability, and time-related trends in APE and PE values. MDAPE and MDPE values were acceptable only for the Magorian and Kleijn models. The divergence PE value for the Kleijn model was lower than -10 %/h, indicating unstable prediction over time. The Szenohradszky model had the lowest divergence PE (-2.7 %/h) and wobble (5.4 %) values with negative bias (MDPE = -25.9 %). These three models were developed using the mixed-effects modeling approach. The Magorian model showed the best PE derivatives among the models assessed. A PK model developed from data obtained after single-bolus dosing can predict Cp values during bolus and continuous infusion. Thus, a mixed-effects modeling approach may be preferable in extrapolating such data.
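The performance-error derivatives named above (MDPE for bias, MDAPE for inaccuracy, wobble for intra-individual variability) can be sketched as follows. This follows the commonly used Varvel-style definitions, not the paper's own code, and the measured/predicted values are illustrative.

```python
# Varvel-style prediction-error metrics for assessing a PK model
# against measured plasma concentrations (illustrative data).
from statistics import median

def pe(measured, predicted):
    # performance error, in percent of the predicted concentration
    return 100.0 * (measured - predicted) / predicted

def pk_performance(measured, predicted):
    pes = [pe(m, p) for m, p in zip(measured, predicted)]
    mdpe = median(pes)                            # bias
    mdape = median(abs(e) for e in pes)           # inaccuracy
    wobble = median(abs(e - mdpe) for e in pes)   # variability around the bias
    return mdpe, mdape, wobble

# one subject: model predicts 2.0 ug/ml throughout, measurements scatter
mdpe, mdape, wobble = pk_performance([2.0, 2.2, 1.8], [2.0, 2.0, 2.0])
```

Divergence PE/APE, used in the study to detect time-related trends, would additionally regress each subject's (A)PE values against time; a negative slope such as the Kleijn model's −10 %/h indicates prediction drifting over the infusion.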
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2016-09-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed to recognize specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and, consequently, tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM), based on the massive integration of 14 diverse complementary quality assessment methods, which was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. Proteins 2016; 84(Suppl 1):247-259. © 2015 Wiley Periodicals, Inc.
Numerical Modelling and Prediction of Erosion Induced by Hydrodynamic Cavitation
NASA Astrophysics Data System (ADS)
Peters, A.; Lantermann, U.; el Moctar, O.
2015-12-01
The present work aims to predict cavitation erosion using a numerical flow solver together with a newly developed erosion model. The erosion model is based on the hypothesis that collapses of single cavitation bubbles near solid boundaries form high-velocity microjets, which cause sonic impacts with high pressure amplitudes that damage the surface. The erosion model uses information from a numerical Euler-Euler flow simulation to predict erosion-sensitive areas and to assess the erosion aggressiveness of the flow. The obtained numerical results were compared to experimental results from tests of an axisymmetric nozzle.
A review of statistical updating methods for clinical prediction models.
Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew
2018-01-01
A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented, using a breadth of complementary statistical methods, rather than developing a new clinical prediction model from scratch.
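Of the three categories, simple coefficient updating in its most basic form ("calibration-in-the-large") re-estimates only the model intercept so that the mean predicted risk matches the event rate in the new population. A minimal sketch, assuming a logistic model; the linear predictors and outcomes are hypothetical, not the cardiac-surgery data.

```python
# Calibration-in-the-large: shift an existing logistic model's intercept
# so mean predicted risk equals the new population's observed event rate.
import math

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return 1 / (1 + math.exp(-x))

def update_intercept(linear_predictors, outcomes, tol=1e-10):
    # bisect for delta such that mean(expit(lp + delta)) == observed rate;
    # mean risk is monotone increasing in delta, so bisection is valid
    target = sum(outcomes) / len(outcomes)
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        mean_risk = sum(expit(lp + mid) for lp in linear_predictors) / len(linear_predictors)
        if mean_risk < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# hypothetical: old model's linear predictors in the new cohort, plus outcomes
lps = [0.5, -0.2, 1.0, 0.1]
ys = [1, 0, 0, 0]          # 25% observed event rate
delta = update_intercept(lps, ys)
```

Fuller coefficient updating would additionally rescale the slope of the linear predictor (logistic recalibration), and the meta-model and dynamic approaches reviewed in the article build further on these pieces.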
Whole season compared to growth-stage resolved temperature trends: implications for US maize yield
NASA Astrophysics Data System (ADS)
Butler, E. E.; Mueller, N. D.; Huybers, P. J.
2014-12-01
The effect of temperature on maize yield has generally been considered using a single value for the entire growing season. We compare the effect of temperature trends on yield between two distinct models: a single temperature sensitivity for the whole season, and a variable sensitivity across four distinct agronomic development stages. The more resolved variable-sensitivity model indicates roughly a factor of two greater influence of temperature on yield than that implied by the single-sensitivity model. The largest discrepancies occur during silking, which is demonstrated to be the most sensitive stage in the variable-sensitivity model. For instance, whereas median yields are observed to be only 53% of typical values during the hottest 1% of silking-stage temperatures, the single-sensitivity model overpredicts median yields at 68%, whereas the variable-sensitivity model more accurately predicts median yields at 61%. That even the variable-sensitivity model cannot capture the full extent of yield losses suggests that further refinement to represent the non-linear response would be useful. Results from the variable-sensitivity model also indicate that management decisions regarding planting times, which have generally shifted toward earlier dates, have led to greater yield benefit than that implied by the single-sensitivity model. Together, the variation of both temperature trends and yield variability within growing stages calls for closer attention to how changes in management interact with changes in climate to ultimately affect yields.
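The contrast between the two model forms can be sketched as a log-yield response to stage-wise heat exposure. The stage names, sensitivities, and exposures below are made-up numbers for illustration only, not the fitted values from the study.

```python
# Whole-season vs. stage-resolved temperature sensitivity (illustrative).
# log-yield response = sum over stages of beta_stage * heat exposure in stage.

stages = ["vegetative", "silking", "grain_fill", "maturation"]
beta_stage = {"vegetative": -0.01, "silking": -0.08,
              "grain_fill": -0.03, "maturation": -0.01}
beta_season = -0.02   # one sensitivity applied to total exposure

def log_yield_staged(exposure):
    # exposure: degree-days above a damage threshold, per stage
    return sum(beta_stage[s] * exposure[s] for s in stages)

def log_yield_single(exposure):
    return beta_season * sum(exposure.values())

# a season whose heat falls mostly in the sensitive silking stage
hot_silking = {"vegetative": 1.0, "silking": 5.0,
               "grain_fill": 1.0, "maturation": 0.5}
```

With heat concentrated in silking, the staged model predicts a substantially larger yield loss than the single-sensitivity model, which is the qualitative effect the abstract describes.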
Response to a pure tone in a nonlinear mechanical-electrical-acoustical model of the cochlea.
Meaud, Julien; Grosh, Karl
2012-03-21
In this article, a nonlinear mathematical model is developed based on the physiology of the cochlea of the guinea pig. The three-dimensional intracochlear fluid dynamics are coupled to a micromechanical model of the organ of Corti and to electrical potentials in the cochlear ducts and outer hair cells (OHC). OHC somatic electromotility is modeled by linearized piezoelectric relations whereas the OHC hair-bundle mechanoelectrical transduction current is modeled as a nonlinear function of the hair-bundle deflection. The steady-state response of the cochlea to a single tone is simulated in the frequency domain using an alternating frequency time scheme. Compressive nonlinearity, harmonic distortion, and DC shift on the basilar membrane (BM), tectorial membrane (TM), and OHC potentials are predicted using a single set of parameters. The predictions of the model are verified by comparing simulations to available in vivo experimental data for basal cochlear mechanics. In particular, the model predicts more amplification on the reticular lamina (RL) side of the cochlear partition than on the BM, which replicates recent measurements. Moreover, small harmonic distortion and DC shifts are predicted on the BM, whereas more significant harmonic distortion and DC shifts are predicted in the RL and TM displacements and in the OHC potentials. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Durairaj, Chandrasekar; Shen, Jie; Cherukury, Madhu
2014-08-01
To develop a mechanism-based translational pharmacokinetic-pharmacodynamic (PKPD) model in preclinical species and to predict the intraocular pressure (IOP) following drug treatment in patients with glaucoma or ocular hypertension (OHT). Baseline diurnal IOP of normotensive albino rabbits, beagle dogs, and patients with glaucoma or OHT was collected from the literature. In addition, diurnal IOP of patients treated with brimonidine or Xalatan® was also obtained from the literature. Healthy normotensive New Zealand rabbits were topically treated with a single drop of 0.15% brimonidine tartrate, and normotensive beagle dogs were treated with a single drop of Xalatan®. At pre-determined time intervals, IOP was measured and aqueous humor samples were obtained from a satellite group of animals. Population-based PKPD modeling was performed to describe the IOP data, and the chosen model was extended to predict the IOP in patients. Baseline IOP depicts a clearly distinctive circadian rhythm in rabbits versus humans. A physiological model based on aqueous humor dynamics was developed to describe the baseline diurnal IOP across species. The model was extended to incorporate the effect of drug administration on baseline IOP in rabbits and dogs. The translational model, with substituted human aqueous humor dynamic parameters, predicted IOP in patients following drug treatment. A physiology-based mechanistic PKPD model was developed to describe the baseline and post-treatment IOP in animals. The preclinical PKPD model was successfully translated to predict IOP in patients with glaucoma or OHT and can be applied to assist dose and treatment selection and to predict the outcome of glaucoma clinical trials.
Modeling Wind Wave Evolution from Deep to Shallow Water
2014-09-30
results are very promising (see Figure 2). However, for the sake of efficiency, non-hydrostatic models assume a single-valued free surface in the...1996) are ongoing. Figure 3: Smoothed-Particle Hydrodynamics (SPH) simulations of waves breaking over an artificial reef in the laboratory (see... surface as predicted by the SPH model (see Dalrymple & Rogers, 2006). The agreement in the breaker dynamics predicted by the model and seen in the
NASA Astrophysics Data System (ADS)
Maskal, Alan B.
Spacer grids maintain the structural integrity of the fuel rods within fuel bundles of nuclear power plants. They can also improve flow characteristics within the nuclear reactor core. However, spacer grids add reactor coolant pressure losses, which require estimation and engineering into the design. Several mathematical models and computer codes were developed over decades to predict spacer grid pressure loss. Most models use generalized characteristics, measured by older, less precise equipment. The study of OECD/US-NRC BWR Full-Size Fine Mesh Bundle Tests (BFBT) provides updated and detailed experimental single and two-phase results, using technically advanced flow measurements for a wide range of boundary conditions. This thesis compares the predictions from the mathematical models to the BFBT experimental data by utilizing statistical formulae for accuracy and precision. This thesis also analyzes the effects of BFBT flow characteristics on spacer grids. No single model has been identified as valid for all flow conditions. However, some models' predictions perform better than others within a range of flow conditions, based on the accuracy and precision of the models' predictions. This study also demonstrates that pressure and flow quality have a significant effect on two-phase flow spacer grid models' biases.
Predicting the stability of nanodevices
NASA Astrophysics Data System (ADS)
Lin, Z. Z.; Yu, W. F.; Wang, Y.; Ning, X. J.
2011-05-01
A simple model based on the statistics of single atoms is developed to predict the stability or lifetime of nanodevices without empirical parameters. Under certain conditions, the model reproduces the Arrhenius law and the Meyer-Neldel compensation rule. Compared with classical molecular-dynamics simulations for predicting the stability of a monatomic carbon chain at high temperature, the model proves to be much more accurate than transition state theory. Based on ab initio calculation of the static potential, the model can give a corrected lifetime of monatomic carbon and gold chains at higher temperature, and predicts that the monatomic chains are very stable at room temperature.
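Where the model reduces to the Arrhenius law, the lifetime estimate takes the familiar form below. The barrier height and attempt frequency here are illustrative placeholders, not the paper's ab initio values.

```python
# Arrhenius-law lifetime estimate: tau = 1 / (nu * exp(-E / kT)).
# Parameters are illustrative, not the paper's computed values.
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def arrhenius_lifetime(barrier_ev, attempt_freq_hz, temp_k):
    rate = attempt_freq_hz * math.exp(-barrier_ev / (K_B * temp_k))
    return 1.0 / rate  # mean lifetime in seconds

# a ~1 eV bond-breaking barrier with a typical ~1e13 Hz attempt frequency:
lifetime_room = arrhenius_lifetime(1.0, 1e13, 300.0)   # room temperature
lifetime_hot = arrhenius_lifetime(1.0, 1e13, 2000.0)   # high temperature
```

The exponential dependence on 1/T is what makes such chains effectively stable at room temperature yet short-lived at the high temperatures probed in the molecular-dynamics comparison.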
Predicting the nature of supernova progenitors
NASA Astrophysics Data System (ADS)
Groh, Jose H.
2017-09-01
Stars more massive than about 8 solar masses end their lives as a supernova (SN), an event of fundamental importance Universe-wide. The physical properties of massive stars before the SN event are very uncertain, both from theoretical and observational perspectives. In this article, I briefly review recent efforts to predict the nature of stars before death, in particular, by performing coupled stellar evolution and atmosphere modelling of single stars in the pre-SN stage. These models are able to predict the high-resolution spectrum and broadband photometry, which can then be directly compared with the observations of core-collapse SN progenitors. The predictions for the spectral types of massive stars before death can be surprising. Depending on the initial mass and rotation, single star models indicate that massive stars die as red supergiants, yellow hypergiants, luminous blue variables and Wolf-Rayet stars of the WN and WO subtypes. I finish by assessing the detectability of SN Ibc progenitors. This article is part of the themed issue 'Bridging the gap: from massive stars to supernovae'.
The main objectives of this study were to: (1) determine whether dissimilar antiandrogenic compounds display additive effects when present in combination and (2) to assess the ability of modelling approaches to accurately predict these mixture effects based on data from single ch...
Predicting fire spread in Arizona's oak chaparral
A. W. Lindenmuth; James R. Davis
1973-01-01
Five existing fire models, both experimental and theoretical, did not adequately predict rate-of-spread (ROS) when tested on single- and multiclump fires in oak chaparral in Arizona. A statistical model developed using essentially the same input variables, but weighted differently, accounted for 81 percent of the variation in ROS. A chemical coefficient that accounts for...
A number of mathematical models have been developed to predict activated carbon column performance using single-solute isotherm data as inputs. Many assumptions are built into these models to account for kinetics of adsorption and competition for adsorption sites. This work...
High speed turboprop aeroacoustic study (counterrotation). Volume 2: Computer programs
NASA Technical Reports Server (NTRS)
Whitfield, C. E.; Mani, R.; Gliebe, P. R.
1990-01-01
The isolated counterrotating high speed turboprop noise prediction program developed and funded by GE Aircraft Engines was compared with model data taken in the GE Aircraft Engines Cell 41 anechoic facility, the Boeing Transonic Wind Tunnel, and in the NASA-Lewis 8 x 6 and 9 x 15 wind tunnels. The predictions show good agreement with measured data under both low and high speed simulated flight conditions. The installation effect model developed for single rotation, high speed turboprops was extended to include counter rotation. The additional effect of mounting a pylon upstream of the forward rotor was included in the flow field modeling. A nontraditional mechanism concerning the acoustic radiation from a propeller at angle of attack was investigated. Predictions made using this approach show results that are in much closer agreement with measurement over a range of operating conditions than those obtained via traditional fluctuating force methods. The isolated rotors and installation effects models were combined into a single prediction program. The results were compared with data taken during the flight test of the B727/UDF (trademark) engine demonstrator aircraft.
NASA Astrophysics Data System (ADS)
Kanemura, Shinya; Kaneta, Kunio; Machida, Naoki; Odori, Shinya; Shindou, Tetsuo
2016-07-01
In composite Higgs models, originally proposed by Georgi and Kaplan, the Higgs boson is a pseudo Nambu-Goldstone boson (pNGB) of the spontaneous breaking of a global symmetry. In the minimal version of such models, a global SO(5) symmetry is spontaneously broken to SO(4), and the pNGBs form an isospin doublet field, which corresponds to the Higgs doublet in the Standard Model (SM). Predicted coupling constants of the Higgs boson can in general deviate from the SM predictions, depending on the compositeness parameter. The deviation pattern is also determined by the details of the matter sector. We comprehensively study how the model can be tested by measuring single and double Higgs boson production processes at the LHC and future electron-positron colliders. The possibility of distinguishing the matter sector among the minimal composite Higgs models is also discussed. In addition, we point out differences in the cross section of double Higgs boson production from the predictions of other new physics models.
CREME96 and Related Error Rate Prediction Methods
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.
2012-01-01
Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic rectangular parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (linear energy transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects.
The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and Analysis of Cosmic Ray Effects in Electronics). The Single Event Figure of Merit method was also revised to use the solar minimum galactic cosmic ray spectrum and extended to circular orbits down to 200 km at any inclination. More recently a series of commercial codes was developed by TRAD (Test & Radiations) which includes the OMERE code which calculates single event effects. There are other error rate prediction methods which use Monte Carlo techniques. In this chapter the analytic methods for estimating the environment within spacecraft will be discussed.
The modelling of heat, mass and solute transport in solidification systems
NASA Technical Reports Server (NTRS)
Voller, V. R.; Brent, A. D.; Prakash, C.
1989-01-01
The aim of this paper is to explore the range of possible one-phase models of binary alloy solidification. Starting from a general two-phase description, based on the two-fluid model, three limiting cases are identified which result in one-phase models of binary systems. Each of these models can be readily implemented in standard single phase flow numerical codes. Differences between predictions from these models are examined. In particular, the effects of the models on the predicted macro-segregation patterns are evaluated.
Motion compensation via redundant-wavelet multihypothesis.
Fowler, James E; Cui, Suxia; Wang, Yonghui
2006-10-01
Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.
Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel
2016-01-01
When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
Terranova, Nadia; Germani, Massimiliano; Del Bene, Francesca; Magni, Paolo
2013-08-01
In clinical oncology, combination treatments are widely used and increasingly preferred over single drug administrations. A better characterization of the interaction between drug effects and the selection of synergistic combinations represent an open challenge in drug development process. To this aim, preclinical studies are routinely performed, even if they are only qualitatively analyzed due to the lack of generally applicable mathematical models. This paper presents a new pharmacokinetic-pharmacodynamic model that, starting from the well-known single agent Simeoni TGI model, is able to describe tumor growth in xenograft mice after the co-administration of two anticancer agents. Due to the drug action, tumor cells are divided in two groups: damaged and not damaged ones. The damaging rate has two terms proportional to drug concentrations (as in the single drug administration model) and one interaction term proportional to their product. Six of the eight pharmacodynamic parameters assume the same value as in the corresponding single drug models. Only one parameter summarizes the interaction, and it can be used to compute two important indexes that are a clear way to score the synergistic/antagonistic interaction among drug effects. The model was successfully applied to four new compounds co-administered with four drugs already available on the market for the treatment of three different tumor cell lines. It also provided reliable predictions of different combination regimens in which the same drugs were administered at different doses/schedules. A good and quantitative measurement of the intensity and nature of interaction between drug effects, as well as the capability to correctly predict new combination arms, suggest the use of this generally applicable model for supporting the experiment optimal design and the prioritization of different therapies.
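The damaging-rate structure described above, two single-drug terms plus one interaction term proportional to the product of the concentrations, can be written directly. The rate constants and concentrations below are illustrative values, not fitted parameters from the study.

```python
# Combination damaging rate from the extended Simeoni-type TGI model:
# two linear single-drug terms plus one product interaction term.
# gamma > 0 scores synergy, gamma < 0 antagonism, gamma = 0 pure additivity.

def damaging_rate(k1, k2, gamma, c1, c2):
    return k1 * c1 + k2 * c2 + gamma * c1 * c2

# illustrative parameters and concentrations
additive = damaging_rate(0.1, 0.2, 0.0, 5.0, 3.0)   # no interaction
synergy = damaging_rate(0.1, 0.2, 0.05, 5.0, 3.0)   # positive interaction
```

Because six of the eight pharmacodynamic parameters carry over from the single-agent models, the single interaction parameter gamma is what summarizes the combination effect and feeds the synergy/antagonism indexes mentioned in the abstract.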
Enhancing emotional-based target prediction
NASA Astrophysics Data System (ADS)
Gosnell, Michael; Woodley, Robert
2008-04-01
This work extends existing agent-based target movement prediction to include key ideas of behavioral inertia, steady states, and catastrophic change from existing psychological, sociological, and mathematical work. Existing target prediction work inherently assumes a single steady state for target behavior, and attempts to classify behavior based on a single emotional state set. The enhanced, emotional-based target prediction maintains up to three distinct steady states, or typical behaviors, based on a target's operating conditions and observed behaviors. Each steady state has an associated behavioral inertia, similar to the standard deviation of behaviors within that state. The enhanced prediction framework also allows steady state transitions through catastrophic change and individual steady states could be used in an offline analysis with additional modeling efforts to better predict anticipated target reactions.
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.
2002-01-01
A generalized reliability model was developed for use in the design of structural components made from brittle, homogeneous anisotropic materials such as single crystals. The model is based on the Weibull distribution and incorporates a variable strength distribution and any equivalent stress failure criterion. In addition to the reliability model, an energy-based failure criterion for elastically anisotropic materials was formulated. The model differs from typical Weibull-based models in that it accounts for strength anisotropy arising from fracture toughness anisotropy, and thereby allows for strength and reliability predictions of brittle, anisotropic single crystals subjected to multiaxial stresses. The model is also applicable to elastically isotropic materials exhibiting strength anisotropy due to an anisotropic distribution of flaws. In order to develop and experimentally verify the model, the uniaxial and biaxial strengths of a single crystal nickel aluminide were measured. The uniaxial strengths of the <100> and <110> crystal directions were measured in three- and four-point flexure. The biaxial strength was measured by subjecting <100> plates to a uniform pressure in a test apparatus that was developed and experimentally verified. The biaxial strengths of the single crystal plates were estimated by extending and verifying the displacement solution for a circular, anisotropic plate to the case of a variable radius and thickness. The best correlation between the experimental strength data and the model predictions occurred when an anisotropic stress analysis was combined with the normal stress criterion and the strength parameters associated with the <110> crystal direction.
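At its core, a Weibull-based reliability prediction assigns a failure probability from a characteristic strength and a Weibull modulus. The sketch below collapses the paper's strength anisotropy into a direction-dependent characteristic strength; the numeric values are illustrative, not the measured nickel aluminide parameters.

```python
# Two-parameter Weibull failure probability: Pf = 1 - exp(-(sigma/sigma_0)^m).
# Anisotropy is represented crudely by giving each crystal direction its own
# characteristic strength sigma_0 (illustrative numbers, in MPa).
import math

def failure_probability(stress, sigma_0, m):
    return 1.0 - math.exp(-((stress / sigma_0) ** m))

# hypothetical: the <110> direction is weaker than <100>
pf_100 = failure_probability(300.0, sigma_0=500.0, m=10.0)
pf_110 = failure_probability(300.0, sigma_0=400.0, m=10.0)
```

The full model integrates an equivalent stress over the component volume under multiaxial loading; this scalar form only shows how the distribution turns a stress and a direction-dependent strength into a reliability number.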
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-01-01
In crop breeding, the interest in predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed model using moving means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are effective for predicting traits, and that correction of spatial variation is a crucial ingredient for increasing prediction accuracy in genomic selection models. PMID:24082033
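The moving-mean spatial adjustment can be sketched for a one-dimensional field layout: each plot's phenotype is corrected by the mean of its field neighbors before the genomic selection model is trained. The window size and data below are illustrative; the study's mixed model uses the moving mean as a covariate rather than a direct subtraction.

```python
# Moving-mean spatial adjustment for field-trial phenotypes (illustrative).
# Each plot's value is corrected by the mean of its neighboring plots,
# removing smooth spatial trends such as fertility gradients.

def moving_mean_adjust(values, window=1):
    adjusted = []
    for i, v in enumerate(values):
        neighbors = [values[j]
                     for j in range(max(0, i - window),
                                    min(len(values), i + window + 1))
                     if j != i]
        local = sum(neighbors) / len(neighbors) if neighbors else 0.0
        adjusted.append(v - local)
    return adjusted

# a smooth fertility gradient along the field row is largely removed:
yields = [10.0, 11.0, 12.0, 13.0, 14.0]
adj = moving_mean_adjust(yields)
```

After adjustment, the interior plots of the linear gradient are flat, so the genomic model trains on genetic rather than positional signal, which is why the abstract reports accuracy gains from spatial correction.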
D.R. Weise; E. Koo; X. Zhou; S. Mahalingam
2011-01-01
Observed fire spread rates from 240 laboratory fires in horizontally-oriented single-species live fuel beds were compared to predictions from various implementations and modifications of the Rothermel rate of spread model and a physical fire spread model developed by Pagni and Koo. Packing ratio of the laboratory fuel beds was generally greater than that observed in...
Zee-Babu type model with U(1)Lμ-Lτ gauge symmetry
NASA Astrophysics Data System (ADS)
Nomura, Takaaki; Okada, Hiroshi
2018-05-01
We extend the Zee-Babu model, introducing local U(1)Lμ-Lτ symmetry with several singly charged bosons. We find a predictive neutrino mass texture in a simple hypothesis in which mixings among singly charged bosons are negligible. Also, lepton-flavor violations are less constrained compared with the original model. Then, we explore the testability of the model, focusing on doubly charged boson physics at the LHC and the International Linear Collider.
Nevers, Meredith B.; Whitman, Richard L.
2011-01-01
Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R²) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.
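The two management-error types can be counted directly from paired predictions and observations against a single-sample standard; the CFU values below are hypothetical:

```python
def management_errors(predicted, observed, standard=235.0):
    """Count beach-management errors over paired daily values.
    Type I: beach closed (prediction over standard) but water was acceptable.
    Type II: beach open (prediction under standard) but water exceeded it."""
    type1 = sum(p >= standard and o < standard for p, o in zip(predicted, observed))
    type2 = sum(p < standard and o >= standard for p, o in zip(predicted, observed))
    return type1, type2

# Hypothetical E. coli CFU/100 ml for five days at one beach
pred = [120, 300, 180, 400, 90]
obs = [100, 150, 260, 420, 80]
assert management_errors(pred, obs) == (1, 1)
```

Raising or lowering the standard trades one error type against the other, which is the trade-off the study evaluates.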
Prediction of Coronary Artery Disease Risk Based on Multiple Longitudinal Biomarkers
Yang, Lili; Yu, Menggang; Gao, Sujuan
2016-01-01
In the last decade, few topics in the area of cardiovascular disease (CVD) research have received as much attention as risk prediction. One of the well documented risk factors for CVD is high blood pressure (BP). Traditional CVD risk prediction models consider BP levels measured at a single time and such models form the basis for current clinical guidelines for CVD prevention. However, in clinical practice, BP levels are often observed and recorded in a longitudinal fashion. Information on BP trajectories can be powerful predictors for CVD events. We consider joint modeling of time to coronary artery disease and individual longitudinal measures of systolic and diastolic BPs in a primary care cohort with up to 20 years of follow-up. We applied novel prediction metrics to assess the predictive performance of joint models. Predictive performances of proposed joint models and other models were assessed via simulations and illustrated using the primary care cohort. PMID:26439685
Decohesion Elements using Two and Three-Parameter Mixed-Mode Criteria
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Camanho, Pedro P.
2001-01-01
An eight-node decohesion element implementing different criteria to predict delamination growth under mixed-mode loading is proposed. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a softening law to track the damage state of the interface. The power law criterion and a three-parameter mixed-mode criterion are used to predict delamination growth. The accuracy of the predictions is evaluated in single mode delamination and in the mixed-mode bending tests.
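The power law criterion mentioned above reduces to a one-line test on the mode-I and mode-II energy release rates; the toughness values below are illustrative, not the paper's material data:

```python
def power_law_failure(GI, GII, GIc, GIIc, alpha=1.0):
    """Power-law mixed-mode criterion for delamination growth:
    propagation is predicted when (GI/GIc)^alpha + (GII/GIIc)^alpha >= 1."""
    return (GI / GIc) ** alpha + (GII / GIIc) ** alpha >= 1.0

# Pure mode I fails exactly at its toughness GIc (illustrative kJ/m^2 values)
assert power_law_failure(GI=0.26, GII=0.0, GIc=0.26, GIIc=1.0)
# A mixed state below the interaction envelope does not propagate
assert not power_law_failure(GI=0.10, GII=0.40, GIc=0.26, GIIc=1.0)
```

Three-parameter criteria such as the one also used in the paper add a fitted interaction term to this envelope.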
Image Discrimination Models Predict Object Detection in Natural Backgrounds
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Rohaly, A. M.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1994-01-01
Object detection involves looking for one of a large set of object sub-images in a large set of background images. Image discrimination models only predict the probability that an observer will detect a difference between two images. In a recent study based on only six different images, we found that discrimination models can predict the relative detectability of objects in those images, suggesting that these simpler models may be useful in some object detection applications. Here we replicate this result using a new, larger set of images. Fifteen images of a vehicle in an otherwise natural setting were altered to remove the vehicle and mixed with the original image in a proportion chosen to make the target neither perfectly recognizable nor unrecognizable. The target was also rotated about a vertical axis through its center and mixed with the background. Sixteen observers rated these 30 target images and the 15 background-only images for the presence of a vehicle. The likelihoods of the observer responses were computed from a Thurstone scaling model with the assumption that the detectabilities are proportional to the predictions of an image discrimination model. Three image discrimination models were used: a cortex transform model, a single channel model with a contrast sensitivity function filter, and the Root-Mean-Square (RMS) difference of the digital target and background-only images. As in the previous study, the cortex transform model performed best; the RMS difference predictor was second best; and last, but still a reasonable predictor, was the single channel model. Image discrimination models can predict the relative detectabilities of objects in natural backgrounds.
Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS)
NASA Astrophysics Data System (ADS)
OConnor, A.; Kirtman, B. P.; Harrison, S.; Gorman, J.
2016-02-01
Current US Navy forecasting systems cannot easily incorporate extended-range forecasts that can improve mission readiness and effectiveness; ensure safety; and reduce cost, labor, and resource requirements. If Navy operational planners had systems that incorporated these forecasts, they could plan missions using more reliable and longer-term weather and climate predictions. Further, using multi-model forecast ensembles instead of single forecasts would produce higher predictive performance. Extended-range multi-model forecast ensembles, such as those available in the North American Multi-Model Ensemble (NMME), are ideal for system integration because of their high skill predictions; however, even higher skill predictions can be produced if forecast model ensembles are combined correctly. While many methods for weighting models exist, the best method in a given environment requires expert knowledge of the models and combination methods. We present an innovative approach that uses machine learning to combine extended-range predictions from multi-model forecast ensembles and generate a probabilistic forecast for any region of the globe up to 12 months in advance. Our machine-learning approach uses 30 years of hindcast predictions to learn patterns of forecast model successes and failures. Each model is assigned a weight for each environmental condition, 100 km² region, and day given any expected environmental information. These weights are then applied to the respective predictions for the region and time of interest to effectively stitch together a single, coherent probabilistic forecast. Our experimental results demonstrate the benefits of our approach to produce extended-range probabilistic forecasts for regions and time periods of interest that are superior, in terms of skill, to individual NMME forecast models and commonly weighted models.
The probabilistic forecast leverages the strengths of three NMME forecast models to predict environmental conditions for an area spanning from San Diego, CA to Honolulu, HI, seven months in advance. Key findings include: weighted combinations of models are strictly better than individual models; machine-learned combinations are better still; and forecasts produced using our approach have the highest rank probability skill score most often.
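A simple stand-in for the skill-based model weighting described above is inverse-error weighting of hindcast mean-squared errors. The machine-learned weights in the actual system are far more elaborate (conditioned on region, day, and environmental state), and the numbers here are hypothetical:

```python
def inverse_error_weights(hindcast_mses):
    """Weight each ensemble member by the inverse of its mean-squared
    hindcast error, normalized so the weights sum to 1."""
    inv = [1.0 / e for e in hindcast_mses]
    total = sum(inv)
    return [w / total for w in inv]

def combine(forecasts, weights):
    """Convex combination of member forecasts into a single blend."""
    return sum(w * f for w, f in zip(weights, forecasts))

# Hypothetical MSEs of three models for one region and lead time
w = inverse_error_weights([2.0, 1.0, 4.0])
assert abs(sum(w) - 1.0) < 1e-12
assert w[1] == max(w)          # the most skillful model gets the largest weight
blended = combine([20.1, 19.5, 21.0], w)
assert 19.5 < blended < 21.0   # a convex blend stays within the member range
```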
Body Fat Percentage Prediction Using Intelligent Hybrid Approaches
Shao, Yuehjen E.
2014-01-01
Excess body fat often leads to obesity. Obesity is typically associated with serious medical conditions, such as cancer, heart disease, and diabetes. Accordingly, knowing one's body fat is important for health. Although there are several ways to measure the body fat percentage (BFP), the accurate methods tend to be inconvenient and/or costly. Traditional single-stage approaches may use certain body measurements or explanatory variables to predict the BFP. Diverging from existing approaches, this study proposes new intelligent hybrid approaches that require fewer explanatory variables, and the proposed forecasting models are able to effectively predict the BFP. The proposed hybrid models consist of multiple regression (MR), artificial neural network (ANN), multivariate adaptive regression splines (MARS), and support vector regression (SVR) techniques. The first stage of the modeling uses MR and MARS to obtain a smaller but more important set of explanatory variables. In the second stage, the remaining important variables serve as inputs for the other forecasting methods. A real dataset was used to demonstrate the development of the proposed hybrid models. The prediction results revealed that the proposed hybrid schemes outperformed the typical, single-stage forecasting models. PMID:24723804
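A toy version of the two-stage hybrid idea might look like the following, with correlation screening standing in for the MR/MARS first stage and ordinary least squares standing in for the second-stage forecaster. The data are simulated, not the study's dataset:

```python
import numpy as np

def select_variables(X, y, k=2):
    """Stage 1 (sketch): keep the k predictors most correlated with the
    response, standing in for the MR/MARS screening step."""
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(np.argsort(corr)[-k:].tolist())

def fit_predict(X, y, cols):
    """Stage 2 (sketch): least squares on the screened variables,
    standing in for the ANN/SVR forecasting stage."""
    A = np.column_stack([X[:, cols], np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ beta

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 6))                 # six simulated body measurements
y = 3 * X[:, 2] - 2 * X[:, 4] + 0.1 * rng.standard_normal(100)  # simulated BFP
cols = select_variables(X, y, k=2)
assert cols == [2, 4]                             # the two informative measurements
```

The point of the two-stage design is that the second-stage learner sees only the screened variables, reducing measurement burden and overfitting.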
Estimating Single-Event Logic Cross Sections in Advanced Technologies
NASA Astrophysics Data System (ADS)
Harrington, R. C.; Kauppila, J. S.; Warren, K. M.; Chen, Y. P.; Maharrey, J. A.; Haeffner, T. D.; Loveless, T. D.; Bhuva, B. L.; Bounasser, M.; Lilja, K.; Massengill, L. W.
2017-08-01
Reliable estimation of logic single-event upset (SEU) cross section is becoming increasingly important for predicting the overall soft error rate. As technology scales and single-event transient (SET) pulse widths shrink to the order of the setup-and-hold time of flip-flops, the probability of latching an SET as an SEU must be reevaluated. In this paper, previous assumptions about the relationship of SET pulse width to the probability of latching an SET are reconsidered, and a model for transient latching probability is developed for advanced technologies. A method using the improved transient latching probability and SET data is used to predict logic SEU cross section. The presented model has been used to estimate combinational logic SEU cross sections in 32-nm partially depleted silicon-on-insulator (SOI) technology given experimental heavy-ion SET data. Experimental SEU data show good agreement with the model presented in this paper.
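For contrast with the paper's refined model, the common first-order latching assumption being reevaluated can be written down in one line. This is a textbook simplification, not the model developed in the paper, and the window and period values are illustrative:

```python
def latch_probability(pulse_width, sh_window, clock_period):
    """First-order latching model (a common simplifying assumption):
    a transient arriving at a uniformly random clock phase is captured
    only if it fully covers the setup-and-hold window of length W,
    giving P = min(1, max(0, w - W) / T)."""
    return min(1.0, max(0.0, pulse_width - sh_window) / clock_period)

# Illustrative values in ns: a 50 ps transient narrower than a 100 ps
# window is never latched under this assumption, which is exactly the
# regime where the paper argues the assumption breaks down.
assert latch_probability(0.05, 0.10, 1.0) == 0.0
assert abs(latch_probability(0.60, 0.10, 1.0) - 0.5) < 1e-12
```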
Theoretical model of hardness anisotropy in brittle materials
NASA Astrophysics Data System (ADS)
Gao, Faming
2012-07-01
Anisotropy is prominent in the hardness testing of single crystals. However, this anisotropic nature has not been demonstrated quantitatively in previous hardness models. In this work, it is found that the electron transition energy per unit volume in the glide region and the orientation of the glide region play critical roles in determining the hardness value and hardness anisotropy of a single crystal material. We express the mathematical definition of hardness anisotropy through simple algebraic relations. The calculated Knoop hardnesses of the single crystals are in good agreement with observations. This theory, extended to polycrystalline materials by including the Hall-Petch effect and the quantum size effect, predicts that polycrystalline diamond with low-angle grain boundaries can be harder than single-crystal bulk diamond. Combining the first-principles technique with the hardness-anisotropy formula, the hardnesses of monoclinic M-carbon, orthorhombic W-carbon, Z-carbon, and T-carbon are predicted.
Numerical study of single and two interacting turbulent plumes in atmospheric cross flow
NASA Astrophysics Data System (ADS)
Mokhtarzadeh-Dehghan, M. R.; König, C. S.; Robins, A. G.
The paper presents a numerical study of two interacting full-scale dry plumes issued into neutral boundary layer cross flow. The study simulates plumes from a mechanical draught cooling tower. The plumes are placed in tandem or side-by-side. Results are first presented for plumes with a density ratio of 0.74 and plume-to-crosswind speed ratio of 2.33, for which data from a small-scale wind tunnel experiment were available and were used to assess the accuracy of the numerical results. Further results are then presented for the more physically realistic density ratio of 0.95, maintaining the same speed ratio. The sensitivity of the results with respect to three turbulence models, namely, the standard k-ε model, the RNG k-ε model and the Differential Flux Model (DFM) is presented. Comparisons are also made between the predicted rise height and the values obtained from existing integral models. The formation of two counter-rotating vortices is well predicted. The results show good agreement for the rise height predicted by different turbulence models, but the DFM predicts temperature profiles more accurately. The values of predicted rise height are also in general agreement. However, discrepancies between the present results for the rise height for single and multiple plumes and the values obtained from known analytical relations are apparent and possible reasons for these are discussed.
Noise and diffusion of a vibrated self-propelled granular particle
NASA Astrophysics Data System (ADS)
Walsh, Lee; Wagner, Caleb G.; Schlossberg, Sarah; Olson, Christopher; Baskaran, Aparna; Menon, Narayanan
Granular materials are an important physical realization of active matter. In vibration-fluidized granular matter, both diffusion and self-propulsion derive from the same collisional forcing, unlike many other active systems where there is a clean separation between the origin of single-particle mobility and the coupling to noise. Here we present experimental studies of single-particle motion in a vibrated granular monolayer, along with theoretical analysis that compares grain motion at short and long time scales to the assumptions and predictions, respectively, of the active Brownian particle (ABP) model. The results demonstrate that despite the unique relation between noise and propulsion, granular media do show the generic features predicted by the ABP model and indicate that this is a valid framework to predict collective phenomena. Additionally, our scheme of analysis for validating the inputs and outputs of the model can be applied to other granular and non-granular systems.
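The ABP model used as the reference frame above can be simulated in a few lines. The long-time effective diffusivity D_eff = Dt + v0²/(2·Dr) is the standard two-dimensional result; the parameter values here are arbitrary, not the experimental grain parameters:

```python
import math
import random

def simulate_abp(v0, Dr, Dt, dt, steps, seed=0):
    """Minimal 2-D active Brownian particle: constant self-propulsion
    speed v0 along a heading theta that diffuses rotationally (Dr),
    plus translational noise (Dt). Returns the final position."""
    rng = random.Random(seed)
    x = y = theta = 0.0
    for _ in range(steps):
        x += v0 * math.cos(theta) * dt + math.sqrt(2 * Dt * dt) * rng.gauss(0, 1)
        y += v0 * math.sin(theta) * dt + math.sqrt(2 * Dt * dt) * rng.gauss(0, 1)
        theta += math.sqrt(2 * Dr * dt) * rng.gauss(0, 1)
    return x, y

# At times long compared with the persistence time 1/Dr, activity
# enhances diffusion: mean-square displacement ~ 4 * (Dt + v0**2 / (2 * Dr)) * t.
```

Comparing measured grain trajectories at short times (ballistic, propulsion-dominated) and long times (diffusive) against this model is the kind of validation the abstract describes.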
Using Speculative Execution to Automatically Hide I/O Latency
2001-12-07
sion of the Lempel-Ziv algorithm and the Finite multi-order context models (FMOC) that originated from prediction-by-partial-match data compressors...allowed the cancellation of a single hint at a time.) 2.2.4 Predicting future data needs In order to take advantage of any of the algorithms described...modelling techniques generally used for data compression to perform probabilistic prediction of an application's next page fault (or, in an object-oriented
A weak-scattering model for turbine-tone haystacking
NASA Astrophysics Data System (ADS)
McAlpine, A.; Powles, C. J.; Tester, B. J.
2013-08-01
Noise and emissions are critical technical issues in the development of aircraft engines. This necessitates the development of accurate models to predict the noise radiated from aero-engines. Turbine tones radiated from the exhaust nozzle of a turbofan engine propagate through turbulent jet shear layers which causes scattering of sound. In the far-field, measurements of the tones may exhibit spectral broadening, where owing to scattering, the tones are no longer narrow band peaks in the spectrum. This effect is known colloquially as 'haystacking'. In this article a comprehensive analytical model to predict spectral broadening for a tone radiated through a circular jet, for an observer in the far field, is presented. This model extends previous work by the authors which considered the prediction of spectral broadening at far-field observer locations outside the cone of silence. The modelling uses high-frequency asymptotic methods and a weak-scattering assumption. A realistic shear layer velocity profile and turbulence characteristics are included in the model. The mathematical formulation which details the spectral broadening, or haystacking, of a single-frequency, single azimuthal order turbine tone is outlined. In order to validate the model, predictions are compared with experimental results, albeit only at polar angle equal to 90°. A range of source frequencies from 4 to 20 kHz, and jet velocities from 20 to 60 m/s, are examined for validation purposes. The model correctly predicts how the spectral broadening is affected when the source frequency and jet velocity are varied.
GeneSilico protein structure prediction meta-server.
Kurowski, Michal A; Bujnicki, Janusz M
2003-07-01
Rigorous assessments of protein structure prediction have demonstrated that fold recognition methods can identify remote similarities between proteins when standard sequence search methods fail. It has been shown that the accuracy of predictions is improved when refined multiple sequence alignments are used instead of single sequences and if different methods are combined to generate a consensus model. There are several meta-servers available that integrate protein structure predictions performed by various methods, but they do not allow for submission of user-defined multiple sequence alignments and they seldom offer confidentiality of the results. We developed a novel WWW gateway for protein structure prediction, which combines the useful features of other meta-servers available, but with much greater flexibility of the input. The user may submit an amino acid sequence or a multiple sequence alignment to a set of methods for primary, secondary and tertiary structure prediction. Fold-recognition results (target-template alignments) are converted into full-atom 3D models and the quality of these models is uniformly assessed. A consensus between different FR methods is also inferred. The results are conveniently presented on-line on a single web page over a secure, password-protected connection. The GeneSilico protein structure prediction meta-server is freely available for academic users at http://genesilico.pl/meta.
Comparison of modeled backscatter with SAR data at P-band
NASA Technical Reports Server (NTRS)
Wang, Yong; Davis, Frank W.; Melack, John M.
1992-01-01
In recent years several analytical models were developed to predict microwave scattering by trees and forest canopies. These models contribute to the understanding of radar backscatter over forested regions to the extent that they capture the basic interactions between microwave radiation and tree canopies, understories, and ground layers as functions of incidence angle, wavelength, and polarization. The Santa Barbara microwave backscatter model for woodland (i.e. with discontinuous tree canopies) combines a single-tree backscatter model and a gap probability model. Comparison of model predictions with synthetic aperture radar (SAR) data at L-band (lambda = 0.235 m) is promising, but much work is still needed to test the validity of model predictions at other wavelengths. Here, the validity of the model predictions at P-band (lambda = 0.68 m) was tested for woodland stands at our Mt. Shasta test site.
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
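On a linear toy problem, the solution-space/null-space split at the heart of NSMC can be demonstrated with an SVD. The matrix below is random, standing in for a model Jacobian, and the dimensions are arbitrary:

```python
import numpy as np

# Toy linear model y = J @ p with more parameters than observations,
# so the data leave a null space of parameter directions unconstrained.
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 6))        # 3 observations, 6 parameters
p_cal = rng.standard_normal(6)         # one "calibrated" parameter set
y_obs = J @ p_cal

# The SVD separates directions constrained by the data (solution space)
# from directions the data cannot see (null space).
U, s, Vt = np.linalg.svd(J)
null_basis = Vt[3:].T                  # 6 x 3 basis for the null space of J

# Perturbing the calibrated set only within the null space preserves the
# fit to the observations exactly (in this linear toy problem), which is
# what makes null-space Monte Carlo sampling cheap.
for _ in range(5):
    p_new = p_cal + null_basis @ rng.standard_normal(3)
    assert np.allclose(J @ p_new, y_obs)
```

In a real nonlinear groundwater model the fit is only approximately preserved, which is why NSMC re-checks (and lightly re-calibrates) each sampled field.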
Habibi, Mona; Rottler, Jörg; Plotkin, Steven S
2016-11-01
Mechanical unfolding of a single domain of loop-truncated superoxide dismutase protein has been simulated via force spectroscopy techniques with both all-atom (AA) models and several coarse-grained models having different levels of resolution: A Gō model containing all heavy atoms in the protein (HA-Gō), the associative memory, water mediated, structure and energy model (AWSEM) which has 3 interaction sites per amino acid, and a Gō model containing only one interaction site per amino acid at the Cα position (Cα-Gō). To systematically compare results across models, the scales of time, energy, and force had to be suitably renormalized in each model. Surprisingly, the HA-Gō model gives the softest protein, exhibiting much smaller force peaks than all other models after the above renormalization. Clustering to render a structural taxonomy as the protein unfolds showed that the AA, HA-Gō, and Cα-Gō models exhibit a single pathway for early unfolding, which eventually bifurcates repeatedly to multiple branches only after the protein is about half-unfolded. The AWSEM model shows a single dominant unfolding pathway over the whole range of unfolding, in contrast to all other models. TM alignment, clustering analysis, and native contact maps show that the AWSEM pathway has however the most structural similarity to the AA model at high nativeness, but the least structural similarity to the AA model at low nativeness. In comparison to the AA model, the sequence of native contact breakage is best predicted by the HA-Gō model. All models consistently predict a similar unfolding mechanism for early force-induced unfolding events, but diverge in their predictions for late stage unfolding events when the protein is more significantly disordered.
Reevaluating the two-representation model of numerical magnitude processing.
Jiang, Ting; Zhang, Wenfeng; Wen, Wen; Zhu, Haiting; Du, Han; Zhu, Xiangru; Gao, Xuefei; Zhang, Hongchuan; Dong, Qi; Chen, Chuansheng
2016-01-01
One debate in mathematical cognition centers on the single-representation model versus the two-representation model. Using an improved number Stroop paradigm (i.e., systematically manipulating physical size distance), in the present study we tested the predictions of the two models for number magnitude processing. The results supported the single-representation model and, more importantly, explained how a design problem (failure to manipulate physical size distance) and an analytical problem (failure to consider the interaction between congruity and task-irrelevant numerical distance) might have contributed to the evidence used to support the two-representation model. This study, therefore, can help settle the debate between the single-representation and two-representation models.
Noise from Supersonic Coaxial Jets. Part 1; Mean Flow Predictions
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Morris, Philip J.
1997-01-01
Recent theories for supersonic jet noise have used an instability wave noise generation model to predict radiated noise. This model requires a known mean flow that has typically been described by simple analytic functions for single jet mean flows. The mean flow of supersonic coaxial jets is not described easily in terms of analytic functions. To provide these profiles at all axial locations, a numerical scheme is developed to calculate the mean flow properties of a coaxial jet. The Reynolds-averaged, compressible, parabolic boundary layer equations are solved using a mixing length turbulence model. Empirical correlations are developed to account for the effects of velocity and temperature ratios and Mach number on the shear layer spreading. Both normal velocity profile and inverted velocity profile coaxial jets are considered. The mixing length model is modified in each case to obtain reasonable results when the two-stream jet merges into a single fully developed jet. The mean flow calculations show both good qualitative and quantitative agreement with measurements in single and coaxial jet flows.
Liang, D.; Xu, X.; Tsang, L.; Andreadis, K.M.; Josberger, E.G.
2008-01-01
A model for the microwave emissions of multilayer dry snowpacks, based on dense media radiative transfer (DMRT) theory with the quasicrystalline approximation (QCA), provides more accurate results when compared to emissions determined by a homogeneous snowpack and other scattering models. The DMRT model accounts for adhesive aggregate effects, which leads to dense media Mie scattering by using a sticky particle model. With the multilayer model, we examined both the frequency and polarization dependence of brightness temperatures (Tb's) from representative snowpacks and compared them to results from a single-layer model; the multilayer model predicts polarization differences twice as large and weaker frequency dependence. We also studied the temporal evolution of Tb from multilayer snowpacks. The difference between Tb's at 18.7 and 36.5 GHz can be 5 K lower than the single-layer model prediction in this paper. Using the snowpack observations from the Cold Land Processes Field Experiment as input for both multi- and single-layer models shows that the multilayer Tb's are in better agreement with the data than the single-layer model. With one set of physical parameters, the multilayer QCA/DMRT model matched all four channels of Tb observations simultaneously, whereas the single-layer model could only reproduce vertically polarized Tb's. Also, the polarization difference and frequency dependence were accurately matched by the multilayer model using the same set of physical parameters. Hence, algorithms for the retrieval of snowpack depth or water equivalent should be based on multilayer scattering models to achieve greater accuracy. © 2008 IEEE.
Inverse and Predictive Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syracuse, Ellen Marie
The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple, one-dimensional models constrained by a single dataset that can be used for quick and efficient predictions, to the complex, multidimensional models constrained by several types of data that result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.
Modeling Rabbit Responses to Single and Multiple Aerosol ...
Journal Article Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death by specifying dose-response functions and the distribution of the time between exposure and death (time-to-death, TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
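The baseline model described above (exponential dose-response with a Weibull time-to-death distribution) can be sketched as follows. The parameter values (potency `k`, Weibull `shape` and `scale`) are hypothetical placeholders, not the fitted values from the study:

```python
import math

def p_death(dose, k=2.5e-7):
    """Exponential dose-response: probability of death from a single
    inhaled dose (in spores). k is a hypothetical potency parameter."""
    return 1.0 - math.exp(-k * dose)

def weibull_ttd_cdf(t_days, shape=2.0, scale=4.0):
    """Weibull CDF for time-to-death, conditional on death occurring."""
    return 1.0 - math.exp(-((t_days / scale) ** shape))

def p_death_by(t_days, dose, k=2.5e-7, shape=2.0, scale=4.0):
    """Cumulative probability the animal has died by day t after exposure:
    dose-response probability times the conditional TTD distribution."""
    return p_death(dose, k) * weibull_ttd_cdf(t_days, shape, scale)
```

Multiplying the dose-response probability by the conditional TTD distribution gives the cumulative probability of death by a given day, which is the quantity such hazard-function models predict.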
Validation of a probabilistic post-fire erosion model
Pete Robichaud; William J. Elliot; Sarah A. Lewis; Mary Ellen Miller
2016-01-01
Post-fire increases of runoff and erosion often occur and land managers need tools to be able to project the increased risk. The Erosion Risk Management Tool (ERMiT) uses the Water Erosion Prediction Project (WEPP) model as the underlying processor. ERMiT predicts the probability of a given amount of hillslope sediment delivery from a single rainfall or...
Manoharan, Prabu; Chennoju, Kiranmai; Ghoshal, Nanda
2015-07-01
BACE1 is an attractive target for Alzheimer's disease (AD) treatment. A rational drug design effort for the inhibition of BACE1 is actively pursued by researchers in both academia and the pharmaceutical industry. This continued effort has led to the steady accumulation of BACE1 crystal structures co-complexed with different classes of inhibitors. This wealth of information is used in this study to develop target-specific proteochemometric models, which are exploited for predicting prospective BACE1 inhibitors. The models developed in this study performed excellently in predicting the computationally generated poses, obtained separately from single and ensemble docking approaches. In virtual screening performance, the simple protein-ligand contact (SPLC) model outperforms the other, more sophisticated models developed during this study. To account for BACE1 active site flexibility in the predictive models, we included the change in the area and the change in the volume of the solvent accessible surface in our models. The ensemble and single receptor docking results obtained from this study indicate that structural water mediated interactions improve the virtual screening results; these waters are also essential for recapitulating the bioactive conformation during docking. The proteochemometric models developed in this study can be used for the prediction of BACE1 inhibitors during the early stage of AD drug discovery.
A linear shock cell model for jets of arbitrary exit geometry
NASA Technical Reports Server (NTRS)
Morris, P. J.; Bhat, T. R. S.; Chen, G.
1989-01-01
The shock cell structures of single supersonic non-ideally expanded jets with arbitrary exit geometry are studied. Both vortex sheets and realistic mean profiles are considered for the jet shear layer. The boundary element method is used to predict the shock spacing and screech tones in a vortex sheet model of a single jet. This formulation enables the calculations to be performed only on the vortex sheet, which permits the efficient and convenient study of complicated jet geometries. Results are given for circular, elliptic and rectangular jets and are compared with analysis and experiment. The agreement between the predictions and measurements is very good but depends on the assumptions made to predict the geometry of the fully expanded jet. A finite difference technique is used to examine the effect of finite mixing layer thickness for a single jet. The finite thickness of the mixing layer is found to decrease the shock spacing by approximately 20 percent over the length of the jet potential core.
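For a circular jet, the leading term of a vortex-sheet analysis reduces to the classical Prandtl-Pack shock-cell spacing. A minimal sketch of that textbook first-mode formula follows (this is the standard analytic result, not the paper's boundary-element implementation):

```python
import math

BESSEL_J0_FIRST_ZERO = 2.404826  # first root of the Bessel function J0

def shock_cell_spacing(mach_j, diameter):
    """Leading-order shock-cell spacing for an imperfectly expanded
    circular jet (Prandtl-Pack vortex-sheet result):
        Ls = pi * beta * D / mu1,  beta = sqrt(Mj**2 - 1),
    where Mj is the fully expanded jet Mach number."""
    if mach_j <= 1.0:
        raise ValueError("fully expanded jet Mach number must be supersonic")
    beta = math.sqrt(mach_j ** 2 - 1.0)
    return math.pi * beta * diameter / BESSEL_J0_FIRST_ZERO
```

A finite-thickness mixing layer shortens this ideal spacing downstream, consistent with the roughly 20 percent decrease reported over the potential core.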
NASA Astrophysics Data System (ADS)
Yu, Chao; Kang, Guozheng; Kan, Qianhua
2015-09-01
Based on the experimental observations on the anisotropic cyclic deformation of super-elastic NiTi shape memory alloy single crystals done by Gall and Maier (2002), a crystal plasticity based micromechanical constitutive model is constructed to describe such anisotropic cyclic deformation. To model the internal stress caused by the unmatched inelastic deformation between the austenite and martensite phases on the plastic deformation of austenite phase, 24 induced martensite variants are assumed to be ellipsoidal inclusions with anisotropic elasticity and embedded in the austenite matrix. The homogeneous stress fields in the austenite matrix and each induced martensite variant are obtained by using the Mori-Tanaka homogenization method. Two different inelastic mechanisms, i.e., martensite transformation and transformation-induced plasticity, and their interactions are considered in the proposed model. Following the assumption of instantaneous domain growth (Cherkaoui et al., 1998), the Helmholtz free energy of a representative volume element of a NiTi shape memory single crystal is established and the thermodynamic driving forces of the internal variables are obtained from the dissipative inequalities. The capability of the proposed model to describe the anisotropic cyclic deformation of super-elastic NiTi single crystals is first verified by comparing the predicted results with the experimental ones. It is concluded that the proposed model can capture the main quantitative features observed in the experiments. And then, the proposed model is further used to predict the uniaxial and multiaxial transformation ratchetting of a NiTi single crystal.
A two-component rain model for the prediction of attenuation and diversity improvement
NASA Technical Reports Server (NTRS)
Crane, R. K.
1982-01-01
A new model was developed to predict attenuation statistics for a single Earth-satellite or terrestrial propagation path. The model was extended to provide predictions of the joint occurrences of specified or higher attenuation values on two closely spaced Earth-satellite paths. The joint statistics provide the information required to obtain diversity gain or diversity advantage estimates. The new model is meteorologically based. It was tested against available Earth-satellite beacon observations and terrestrial path measurements. The model employs the rain climate region descriptions of the Global rain model. The rms deviation between the predicted and observed attenuation values for the terrestrial path data was 35 percent, a result consistent with the expectations of the Global model when the rain rate distribution for the path is not used in the calculation. Within the United States the rms deviation between measurement and prediction was 36 percent but worldwide it was 79 percent.
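The quoted figure of merit, the rms deviation between predicted and observed attenuation, can be read as the rms of the relative errors expressed in percent. A sketch of that interpretation (the exact definition used in the paper is an assumption here):

```python
import math

def rms_percent_deviation(predicted, observed):
    """RMS of the relative deviation between predicted and observed
    attenuation values (dB), expressed in percent -- one plausible
    reading of the figure of merit quoted for the two-component model."""
    rel = [(p - o) / o for p, o in zip(predicted, observed)]
    return 100.0 * math.sqrt(sum(r * r for r in rel) / len(rel))
```

Under this reading, a 35 percent figure means predictions deviate from measurements by roughly a third of the observed value in the rms sense.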
Frontal Theta Links Prediction Errors to Behavioral Adaptation in Reinforcement Learning
Cavanagh, James F.; Frank, Michael J.; Klein, Theresa J.; Allen, John J.B.
2009-01-01
Investigations into action monitoring have consistently detailed a fronto-central voltage deflection in the Event-Related Potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the Feedback Related Negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Medio-frontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations: with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice. PMID:19969093
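The single-trial reward prediction errors fed into the theta-power analysis come from a delta-rule (Q-learning) fit. A minimal sketch of how such per-trial deltas are generated, with hypothetical task parameters (the study fit the model to each subject's actual choices and outcomes):

```python
import random

def simulate_prediction_errors(p_reward=0.7, n_trials=200, alpha=0.1, seed=1):
    """Generate single-trial reward prediction errors from a delta-rule
    learner in a one-armed probabilistic task (no state transitions, so
    delta = r - Q). The deltas are the quantities that would be
    regressed against frontal theta power."""
    rng = random.Random(seed)
    q, deltas = 0.0, []
    for _ in range(n_trials):
        r = 1.0 if rng.random() < p_reward else 0.0
        delta = r - q          # reward prediction error
        q += alpha * delta     # Q-value update
        deltas.append(delta)
    return q, deltas
```

Positive deltas mark better-than-expected outcomes and negative deltas worse-than-expected ones, matching the sign convention in the reported medial/lateral theta effects.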
On the Performance of Alternate Conceptual Ecohydrological Models for Streamflow Prediction
NASA Astrophysics Data System (ADS)
Naseem, Bushra; Ajami, Hoori; Cordery, Ian; Sharma, Ashish
2016-04-01
A merging of a lumped conceptual hydrological model with two conceptual dynamic vegetation models is presented to assess the performance of these models for simultaneous simulations of streamflow and leaf area index (LAI). Two conceptual dynamic vegetation models with differing representations of ecological processes are merged with a lumped conceptual hydrological model (HYMOD) to predict catchment scale streamflow and LAI. The merged RR-LAI-I model computes relative leaf biomass based on transpiration rates, while the RR-LAI-II model computes above-ground green and dead biomass based on net primary productivity and water use efficiency in response to soil moisture dynamics. To assess the performance of these models, daily discharge and the 8-day MODIS LAI product for 27 catchments of 90-1600 km2 in size located in the Murray-Darling Basin in Australia are used. Our results illustrate that when single-objective optimisation focussed on maximising the objective function for streamflow or LAI, the other, un-calibrated predicted outcome (LAI if streamflow is the focus) was consistently compromised. Thus, single-objective optimisation cannot account for all processes in conceptual ecohydrological models. However, multi-objective optimisation showed great strength for streamflow and LAI predictions. Both response outputs were better simulated by RR-LAI-II than RR-LAI-I due to better representation of physical processes such as net primary productivity (NPP) in RR-LAI-II. Our results highlight that simultaneous calibration of streamflow and LAI using a multi-objective algorithm proves to be an attractive tool for improved streamflow predictions.
NASA Astrophysics Data System (ADS)
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
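The "decomposition and ensemble" principle can be illustrated with a deliberately simplified stand-in: a moving-average split instead of CEEMD, and naive component forecasts instead of GWO-tuned SVRs:

```python
def moving_average(series, window):
    """Crude low-frequency trend extraction, standing in for the
    slow IMFs that CEEMD would produce."""
    half = window // 2
    out = []
    for i in range(len(series)):
        seg = series[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def decompose_predict_ensemble(series, window=5):
    """Minimal 'decompose, predict each component, recombine' pipeline:
    split the series into trend + residual, forecast each one step
    ahead with a naive predictor, and sum the component forecasts."""
    trend = moving_average(series, window)
    resid = [x - t for x, t in zip(series, trend)]
    trend_forecast = trend[-1]                  # persistence forecast
    resid_forecast = sum(resid) / len(resid)    # mean-reverting forecast
    return trend_forecast + resid_forecast
```

The real model replaces each naive predictor with an SVR whose hyperparameters are tuned by GWO, and adds a second GWO-tuned SVR to combine the component forecasts instead of a plain sum.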
Assessing Model Characterization of Single Source ...
Aircraft measurements made downwind from specific coal-fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows similar patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion with distance from the source compared with ambient-based estimates. The model was less consistent in capturing downwind ambient-based trends in conversion of NOX to NOY from these sources. Source sensitivity approaches, in particular subgrid plume treatment, capture near-source O3 titration by fresh NO emissions. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single source O3 impacts. The model estimated O3 production from these sources, but often lower than ambient-based estimates of source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources, which makes direct comparison with model source contribution challenging. Model source attribution results suggest contributions to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, deci
Genomic selection in a commercial winter wheat population.
He, Sang; Schulthess, Albert Wilhelm; Mirdita, Vilson; Zhao, Yusheng; Korzun, Viktor; Bothe, Reiner; Ebmeyer, Erhard; Reif, Jochen C; Jiang, Yong
2016-03-01
Genomic selection models can be trained using historical data, and filtering genotypes based on phenotyping intensity and a reliability criterion can increase prediction ability. We implemented genomic selection based on a large commercial population incorporating 2325 European winter wheat lines. Our objectives were (1) to study whether modeling epistasis besides additive genetic effects results in enhancement of the prediction ability of genomic selection, (2) to assess prediction ability when the training population comprised historical or less-intensively phenotyped lines, and (3) to explore the prediction ability in subpopulations selected based on the reliability criterion. We found a 5% increase in prediction ability when shifting from additive to additive-plus-epistatic effects models. In addition, only a marginal loss in accuracy, from 0.65 to 0.50, was observed using the data collected in one year to predict genotypes of the following year, revealing that stable genomic selection models can be accurately calibrated to predict subsequent breeding stages. Moreover, prediction ability was maximized when the genotypes evaluated in a single location were excluded from the training set but subsequently decreased again when the phenotyping intensity was increased above two locations, suggesting that the update of the training population should be performed considering all the selected genotypes but excluding those evaluated in a single location. The genomic prediction ability was substantially higher in subpopulations selected based on the reliability criterion, indicating that phenotypic selection for highly reliable individuals could be directly replaced by applying genomic selection to them. We empirically conclude that there is a high potential to assist commercial wheat breeding programs employing genomic selection approaches.
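Prediction ability in genomic selection is conventionally reported as the Pearson correlation between genomic predictions and observed values; a minimal sketch of that metric:

```python
import math

def prediction_ability(predicted, observed):
    """Pearson correlation between genomic predictions and observed
    phenotypes (or estimated breeding values), the usual 'prediction
    ability' statistic in genomic selection studies."""
    n = len(predicted)
    mp = sum(predicted) / n
    mo = sum(observed) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(predicted, observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    return cov / (sp * so)
```

The 0.65 to 0.50 figures above are values of exactly this statistic computed across breeding cycles.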
Zhou, L; Lund, M S; Wang, Y; Su, G
2014-08-01
This study investigated genomic predictions across Nordic Holstein and Nordic Red using various genomic relationship matrices. Different sources of information, such as consistencies of linkage disequilibrium (LD) phase and marker effects, were used to construct the genomic relationship matrices (G-matrices) across these two breeds. A single-trait genomic best linear unbiased prediction (GBLUP) model and a two-trait GBLUP model were used for single-breed and two-breed genomic predictions. The data included 5215 Nordic Holstein bulls and 4361 Nordic Red bulls, the latter composed of three populations: Danish Red, Swedish Red and Finnish Ayrshire. The bulls were genotyped with a 50,000-marker SNP chip. Using the two-breed predictions with a joint Nordic Holstein and Nordic Red reference population, accuracies increased slightly for all traits in Nordic Red, but only for some traits in Nordic Holstein. Among the three subpopulations of Nordic Red, accuracies increased more for Danish Red than for Swedish Red and Finnish Ayrshire, because closer genetic relationships exist between Danish Red and Nordic Holstein. Among Danish Red, individuals with higher genomic relationship coefficients with Nordic Holstein showed greater increases in accuracy in the two-breed predictions. Weighting the two-breed G-matrices by LD phase consistencies, marker effects or both did not further improve accuracies of the two-breed predictions. © 2014 Blackwell Verlag GmbH.
Ngayihi Abbe, Claude Valery; Nzengwa, Robert; Danwe, Raidandi
2014-01-01
This work presents a comparative simulation of a diesel engine fuelled on diesel fuel and biodiesel fuel. Two models based on tabulated chemistry were implemented for the simulation, and results were compared with experimental data obtained from a single cylinder diesel engine. The first model is a single zone model based on the Krieger and Bormann combustion model, while the second is a two-zone model based on the Olikara and Bormann combustion model. Both models were shown to predict well the engine's in-cylinder pressure as well as its overall performance. The second model was more accurate than the first, while the first was easier to implement and faster to compute. The first method is therefore better suited for real time engine control and monitoring, while the second is better suited for engine design and emission prediction. PMID:27379306
Kagan, Leonid; Gershkovich, Pavel; Wasan, Kishor M; Mager, Donald E
2011-06-01
The time course of tissue distribution of amphotericin B (AmB) has not been sufficiently characterized despite its therapeutic importance and an apparent disconnect between plasma pharmacokinetics and clinical outcomes. The goals of this work were to develop and evaluate a physiologically based pharmacokinetic (PBPK) model to characterize the disposition properties of AmB administered as deoxycholate formulation in healthy rats and to examine the utility of the PBPK model for interspecies scaling of AmB pharmacokinetics. AmB plasma and tissue concentration-time data, following single and multiple intravenous administration of Fungizone® to rats, from several publications were combined for construction of the model. Physiological parameters were fixed to literature values. Various structural models for single organs were evaluated, and the whole-body PBPK model included liver, spleen, kidney, lung, heart, gastrointestinal tract, plasma, and remainder compartments. The final model resulted in a good simultaneous description of both single and multiple dose data sets. Incorporation of three subcompartments for spleen and kidney tissues was required for capturing a prolonged half-life in these organs. The predictive performance of the final PBPK model was assessed by evaluating its utility in predicting pharmacokinetics of AmB in mice and humans. Clearance and permeability-surface area terms were scaled with body weight. The model demonstrated good predictions of plasma AmB concentration-time profiles for both species. This modeling framework represents an important basis that may be further utilized for characterization of formulation- and disease-related factors in AmB pharmacokinetics and pharmacodynamics.
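The interspecies extrapolation step scales clearance and permeability-surface area terms with body weight. A sketch assuming the common allometric form with a hypothetical exponent of 0.75 (the abstract states only that the terms were scaled with body weight):

```python
def scale_clearance(cl_ref, bw_ref, bw_target, exponent=0.75):
    """Allometric scaling of a clearance (or permeability-surface area)
    term with body weight, as used for rat-to-mouse or rat-to-human
    extrapolation: CL_target = CL_ref * (BW_target / BW_ref) ** exponent.
    The 0.75 exponent is a conventional default, not the paper's value."""
    return cl_ref * (bw_target / bw_ref) ** exponent
```

With the physiological volumes and flows replaced by species-specific values and the rate terms scaled this way, the same PBPK structure can be re-simulated for mice and humans, as done in the study.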
Attentional Control via Parallel Target-Templates in Dual-Target Search
Barrett, Doug J. K.; Zobay, Oliver
2014-01-01
Simultaneous search for two targets has been shown to be slower and less accurate than independent searches for the same two targets. Recent research suggests this ‘dual-target cost’ may be attributable to a limit in the number of target-templates that can guide search at any one time. The current study investigated this possibility by comparing behavioural responses during single- and dual-target searches for targets defined by their orientation. The results revealed an increase in reaction times for dual- compared to single-target searches that was largely independent of the number of items in the display. Response accuracy also decreased on dual- compared to single-target searches: dual-target accuracy was higher than predicted by a model restricting search guidance to a single target-template and lower than predicted by a model simulating two independent single-target searches. These results are consistent with a parallel model of dual-target search in which attentional control is exerted by more than one target-template at a time. The requirement to maintain two target-templates simultaneously, however, appears to impose a reduction in the specificity of the memory representation that guides search for each target. PMID:24489793
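The two reference models that bracket the observed dual-target accuracy can be sketched as simple probability bounds. The mixing assumptions here (equal use of each template, chance-level guessing) are illustrative, not the paper's fitted model:

```python
def single_template_bound(p_single, p_guess=0.5):
    """Lower bound: if only one target-template can guide search on each
    dual-target trial, half the trials match the active template
    (accuracy p_single) and the rest fall back to guessing."""
    return 0.5 * (p_single + p_guess)

def independent_searches_bound(p_single):
    """Upper bound: two fully independent single-target searches running
    in parallel, i.e. no dual-target cost on accuracy."""
    return p_single
```

Observed dual-target accuracy falling strictly between these two bounds is what motivates the parallel, reduced-specificity template account.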
Prediction of near-term breast cancer risk using a Bayesian belief network
NASA Astrophysics Data System (ADS)
Zheng, Bin; Ramalingam, Pandiyarajan; Hariharan, Harishwaran; Leader, Joseph K.; Gur, David
2013-03-01
Accurately predicting near-term breast cancer risk is an important prerequisite for establishing an optimal personalized breast cancer screening paradigm. In previous studies, we investigated and tested the feasibility of developing a unique near-term breast cancer risk prediction model based on a new risk factor associated with bilateral mammographic density asymmetry between the left and right breasts of a woman using a single feature. In this study we developed a multi-feature based Bayesian belief network (BBN) that combines bilateral mammographic density asymmetry with three other popular risk factors, namely (1) age, (2) family history, and (3) average breast density, to further increase the discriminatory power of our cancer risk model. A dataset involving "prior" negative mammography examinations of 348 women was used in the study. Among these women, 174 had breast cancer detected and verified in the next sequential screening examinations, and 174 remained negative (cancer-free). A BBN was applied to predict the risk of each woman having cancer detected six to 18 months later following the negative screening mammography. The prediction results were compared with those using single features. The prediction accuracy was significantly increased when using the BBN. The area under the ROC curve increased from an AUC=0.70 to 0.84 (p<0.01), while the positive predictive value (PPV) and negative predictive value (NPV) also increased from a PPV=0.61 to 0.78 and an NPV=0.65 to 0.75, respectively. This study demonstrates that a multi-feature based BBN can more accurately predict the near-term breast cancer risk than with a single feature.
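A network combining several risk factors can be illustrated, under a naive-Bayes conditional-independence assumption, by multiplying per-feature likelihood ratios onto the prior odds; this is a simple stand-in for one path through a BBN, and the likelihood ratios are placeholders, not values from the study:

```python
def posterior_risk(prior, likelihood_ratios):
    """Combine a prior probability of near-term cancer with per-feature
    likelihood ratios (e.g. for density asymmetry, age, family history,
    average density) assuming conditional independence of features."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)
```

A full BBN relaxes the independence assumption by encoding conditional dependencies among the features, which is what drives the AUC gain over any single feature.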
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
To predict the output power of a photovoltaic system, which exhibits nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
Strath, Scott J; Kate, Rohit J; Keenan, Kevin G; Welch, Whitney A; Swartz, Ann M
2016-01-01
To develop and test time series single-site and multi-site placement models, we used wrist, hip and ankle processed accelerometer data to estimate energy cost and type of physical activity in adults. Ninety-nine subjects in three age groups (18–39, 40–64, 65+ years) performed 11 activities while wearing three triaxial accelerometers: one each on the non-dominant wrist, hip, and ankle. During each activity, net oxygen cost (METs) was assessed. The time series of accelerometer signals were represented in terms of uniformly discretized values called bins. A support vector machine was used for activity classification, with bins and every pair of bins used as features. Bagged decision tree regression was used for net metabolic cost prediction. To evaluate model performance we employed the jackknife leave-one-out cross validation method. Single-accelerometer and multi-accelerometer site model estimates across and within age groups revealed similar accuracy, with a bias range of −0.03 to 0.01 METs, bias percent of −0.8 to 0.3%, and a rMSE range of 0.81–1.04 METs. Multi-site accelerometer location models improved activity type classification over single-site location models from a low of 69.3% to a maximum of 92.8% accuracy. For each accelerometer site location model, or combined site location model, percent classification accuracy decreased as a function of age group, or when models for young age groups were generalized to older age groups. Age-group-specific models on average performed better than models combining all age groups. The time series approach shows promising results for predicting energy cost and activity type. Differences in prediction across age groups, the lack of generalizability across age groups, and the better performance of age-group-specific models all need to be considered as analytic calibration procedures for detecting energy cost and type are further developed. PMID:26449155
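The bin representation described above, uniform discretization of the signal plus single-bin and bin-pair counts as classifier features, can be sketched as follows (the bin edges and counts here are illustrative, not the study's calibration):

```python
def discretize(signal, n_bins, lo, hi):
    """Uniformly discretize an accelerometer signal into bin indices
    (the 'bins' used as classification features)."""
    width = (hi - lo) / n_bins
    return [min(n_bins - 1, max(0, int((x - lo) / width))) for x in signal]

def bin_features(bins, n_bins):
    """Feature vector of single-bin counts plus ordered bin-pair
    (bigram) counts over the time series, the two feature families
    fed to the support vector machine."""
    single = [0] * n_bins
    pairs = [0] * (n_bins * n_bins)
    for b in bins:
        single[b] += 1
    for a, b in zip(bins, bins[1:]):
        pairs[a * n_bins + b] += 1
    return single + pairs
```

The bin-pair counts capture short-range temporal structure in the signal, which is what lets a static classifier distinguish activities with similar amplitude distributions.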
De Kauwe, Martin G; Medlyn, Belinda E; Walker, Anthony P; Zaehle, Sönke; Asao, Shinichi; Guenet, Bertrand; Harper, Anna B; Hickler, Thomas; Jain, Atul K; Luo, Yiqi; Lu, Xingjie; Luus, Kristina; Parton, William J; Shu, Shijie; Wang, Ying-Ping; Werner, Christian; Xia, Jianyang; Pendall, Elise; Morgan, Jack A; Ryan, Edmund M; Carrillo, Yolima; Dijkstra, Feike A; Zelikova, Tamara J; Norby, Richard J
2017-09-01
Multifactor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date, such models have only been tested against single-factor experiments. We applied 10 TBMs to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m-2 yr-1). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the N cycle models, N availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change. © 2017 John Wiley & Sons Ltd.
Measurement and Modeling of Ultrasonic Pitch/catch Grain Noise
NASA Astrophysics Data System (ADS)
Margetan, F. J.; Gray, T. A.; Thompson, R. B.
2008-02-01
Ultrasonic grain noise arises from the scattering of sound waves by microstructural boundaries, and can limit the detection of weakly-reflecting internal defects in metals. In some cases of practical interest, such as focused-transducer inspections of aircraft engine components, so-called "single scattering" or "independent scatterer" models have proven to be reasonably accurate in predicting grain noise characteristics. In pulse/echo inspections it is difficult to experimentally assess the relative contributions of single scattering and multiple scattering, because both can generally contribute to the backscattered noise seen at any given observation time. For pitch/catch inspections, however, it is relatively easy to construct inspection geometries for which single-scattered noise should be insignificant, and hence any observed noise is presumably due to multiple scattering. This concept is demonstrated using pitch/catch shear-wave measurements performed on a well-characterized stainless-steel specimen. The inspection geometry allows us to control the overlap volume of the intersecting radiation fields of the two transducers. As we proceed from maximally overlapping fields to zero overlap, the single-scattering contribution to the observed grain noise is expected to decrease. Measurements are compared to the predictions of a single-scatterer model, and the relative contributions of single and multiple scattering to the observed grain noise are estimated.
Zoellner, Jamie M; Porter, Kathleen J; Chen, Yvonnes; Hedrick, Valisa E; You, Wen; Hickman, Maja; Estabrooks, Paul A
2017-05-01
Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving sugar-sweetened beverage (SSB) behaviours. Using SIPsmartER data, this study explores prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data from 155 intervention participants. Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. TPB constructs explained 32% of the variance in BI cross-sectionally and 20% prospectively, and explained 13-20% of the variance in behaviour cross-sectionally and 6% prospectively. Single-item scale models were significant yet explained less variance. All IVR models predicting BI (average 21%, range 6-38%) and behaviour (average 30%, range 6-55%) were significant. Findings are interpreted in the context of other cross-sectional, prospective, and experimental TPB health and dietary studies. Findings advance experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation, and evaluation phases.
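The variance-explained figures reported above are coefficients of determination (R²) from the regression models. A minimal sketch of that computation (the helper name is illustrative, not code from the study):

```python
def r_squared(y, yhat):
    # fraction of variance in the observed outcome y explained by predictions yhat
    ybar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))  # residual sum of squares
    ss_tot = sum((yi - ybar) ** 2 for yi in y)               # total sum of squares
    return 1.0 - ss_res / ss_tot
```

On this reading, a TPB model "explaining 32% of the variance" in BI corresponds to R² = 0.32 for a regression of BI on the TPB constructs; perfect predictions give R² = 1 and predicting the sample mean for everyone gives R² = 0.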
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Chen, Charles; Porth, Ilga; El-Kassaby, Yousry A
2015-05-09
Genomic selection (GS) in forestry can substantially reduce the length of the breeding cycle and increase gain per unit time through early selection and greater selection intensity, particularly for traits of low heritability and late expression. Affordable next-generation sequencing technologies have made it possible to genotype large numbers of trees at a reasonable cost. Genotyping-by-sequencing was used to genotype 1,126 Interior spruce trees representing 25 open-pollinated families planted over three sites in British Columbia, Canada. Four imputation algorithms were compared (mean value (MI), singular value decomposition (SVD), expectation maximization (EM), and a newly derived, family-based k-nearest neighbor (kNN-Fam)). Trees were phenotyped for several yield and wood attributes. Single- and multi-site GS prediction models were developed using the Ridge Regression Best Linear Unbiased Predictor (RR-BLUP) and Generalized Ridge Regression (GRR) to test different assumptions about trait architecture. Finally, using PCA, multi-trait GS prediction models were developed. The EM and kNN-Fam imputation methods were superior for 30 and 60% missing data, respectively. The RR-BLUP GS prediction model produced better accuracies than the GRR, indicating that the genetic architecture for these traits is complex. Multi-site GS prediction accuracies were high and better than those of single sites, while cross-site prediction produced the lowest accuracies, reflecting type-b genetic correlations, and was deemed unreliable. The incorporation of genomic information in quantitative genetics analyses produced more realistic heritability estimates, as the half-sib pedigree tended to inflate the additive genetic variance and subsequently both heritability and gain estimates. Principal component scores as representatives of multi-trait GS prediction models produced surprising results, where negatively correlated traits could be concurrently selected for using PCA2 and PCA3.
The application of GS to open-pollinated family testing, the simplest form of tree improvement evaluation methods, was proven to be effective. The prediction accuracies obtained for all traits greatly support the integration of GS in tree breeding. While the within-site GS prediction accuracies were high, the results clearly indicate that single-site GS models' ability to predict other sites is unreliable, supporting the utilization of a multi-site approach. Principal component scores provided an opportunity for the concurrent selection of traits with different phenotypic optima.
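RR-BLUP fits all marker effects simultaneously as a ridge regression: solve (Z'Z + λI)u = Z'y for marker effects u, where Z is the (centred) genotype matrix and λ shrinks all effects equally. A minimal pure-Python sketch under those assumptions (function name and data are illustrative, not from the study):

```python
def ridge_solve(Z, y, lam):
    # RR-BLUP-style marker-effect solve via the normal equations:
    # (Z'Z + lam*I) u = Z'y, solved by Gaussian elimination with pivoting
    n, p = len(Z), len(Z[0])
    A = [[sum(Z[i][j] * Z[i][k] for i in range(n)) + (lam if j == k else 0.0)
          for k in range(p)] for j in range(p)]
    b = [sum(Z[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                       # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    u = [0.0] * p                              # back-substitution
    for r in range(p - 1, -1, -1):
        u[r] = (b[r] - sum(A[r][c] * u[c] for c in range(r + 1, p))) / A[r][r]
    return u
```

With λ = 0 this reduces to ordinary least squares; larger λ shrinks all marker effects toward zero, the "equal small effects" assumption that distinguishes RR-BLUP from GRR's marker-specific shrinkage.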
Entanglement and quantum superposition induced by a single photon
NASA Astrophysics Data System (ADS)
Lü, Xin-You; Zhu, Gui-Lei; Zheng, Li-Li; Wu, Ying
2018-03-01
We predict the occurrence of single-photon-induced entanglement and quantum superposition in a hybrid quantum model that introduces an optomechanical coupling into the Rabi model. This behaviour originates from the photon-dependent quantum property of the ground state featured by the proposed hybrid model. It is associated with a single-photon-induced quantum phase transition and is immune to the A² term of the spin-field interaction. Moreover, the obtained quantum superposition state is actually a squeezed cat state, which can significantly enhance precision in quantum metrology. This work offers an approach to manipulating entanglement and quantum superposition with a single photon, which might have potential applications in the engineering of new single-photon quantum devices, and also fundamentally broadens the regime of cavity QED.
NASA Technical Reports Server (NTRS)
Saether, Erik; Hochhalter, Jacob D.; Glaessgen, Edward H.
2012-01-01
A multiscale modeling methodology that combines the predictive capability of discrete dislocation plasticity and the computational efficiency of continuum crystal plasticity is developed. Single crystal configurations of different grain sizes modeled with periodic boundary conditions are analyzed using discrete dislocation plasticity (DD) to obtain grain size-dependent stress-strain predictions. These relationships are mapped into crystal plasticity parameters to develop a multiscale DD/CP model for continuum level simulations. A polycrystal model of a structurally-graded microstructure is developed, analyzed and used as a benchmark for comparison between the multiscale DD/CP model and the DD predictions. The multiscale DD/CP model follows the DD predictions closely up to an initial peak stress and then follows a strain hardening path that is parallel but somewhat offset from the DD predictions. The difference is believed to be from a combination of the strain rate in the DD simulation and the inability of the DD/CP model to represent non-monotonic material response.
Aeroacoustics of advanced propellers
NASA Technical Reports Server (NTRS)
Groeneweg, John F.
1990-01-01
The aeroacoustics of advanced, high speed propellers (propfans) are reviewed from the perspective of NASA research conducted in support of the Advanced Turboprop Program. Aerodynamic and acoustic components of prediction methods for near and far field noise are summarized for both single and counterrotation propellers in uninstalled and installed configurations. Experimental results from tests at both takeoff/approach and cruise conditions are reviewed with emphasis on: (1) single and counterrotation model tests in the NASA Lewis 9 by 15 (low speed) and 8 by 6 (high speed) wind tunnels, and (2) full scale flight tests of a 9 ft (2.74 m) diameter single rotation wing-mounted tractor and an 11.7 ft (3.57 m) diameter counterrotation aft-mounted pusher propeller. Comparisons of model data projected to flight with full scale flight data show good agreement, validating the scale model wind tunnel approach. Likewise, comparisons of measured and predicted noise levels show excellent agreement for both single and counterrotation propellers. Progress in describing angle of attack and installation effects is also summarized. Finally, the aeroacoustic issues associated with ducted propellers (very high bypass fans) are discussed.
Vaegter, Katarina Kebbon; Lakic, Tatevik Ghukasyan; Olovsson, Matts; Berglund, Lars; Brodin, Thomas; Holte, Jan
2017-03-01
To construct a prediction model for live birth after in vitro fertilization/intracytoplasmic sperm injection (IVF/ICSI) treatment and single-embryo transfer (SET) after 2 days of embryo culture. Prospective observational cohort study. University-affiliated private infertility center. SET in 8,451 IVF/ICSI treatments in 5,699 unselected consecutive couples during 1999-2014. A total of 100 basal patient characteristics and treatment data were analyzed for associations with live birth after IVF/ICSI (adjusted for repeated treatments) and subsequently combined for prediction model construction. Live birth rate (LBR) and performance of live birth prediction model. Embryo score, treatment history, ovarian sensitivity index (OSI; number of oocytes/total dose of FSH administered), female age, infertility cause, endometrial thickness, and female height were all independent predictors of live birth. A prediction model (training data set; n = 5,722) based on these variables showed moderate discrimination, but predicted LBR with high accuracy in subgroups of patients, with LBR estimates ranging from <10% to >40%. Outcomes were similar in an internal validation data set (n = 2,460). Based on 100 variables prospectively recorded during a 15-year period, a model for live birth prediction after strict SET was constructed and showed excellent calibration in internal validation. For the first time, female height qualified as a predictor of live birth after IVF/ICSI. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
A microstructurally based model of solder joints under conditions of thermomechanical fatigue
NASA Astrophysics Data System (ADS)
Frear, D. R.; Burchett, S. N.; Rashid, M. M.
The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue. We present two computational methodologies that have been developed to predict the behavior of near eutectic Sn-Pb solder joints under fatigue conditions that are based on metallurgical tests as fundamental input for constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near eutectic Sn-Pb solder. The finite element simulations from this model agree well with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single phase model is a computational technique that was developed to predict solder joint behavior using materials data for constitutive relation constants that could be determined through straightforward metallurgical experiments. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests and the results showed an adequate fit to experimental results. The single-phase model could be very useful for conditions where microstructural evolution is not a dominant factor in fatigue.
Predictive information processing in music cognition. A critical review.
Rohrmeier, Martin A; Koelsch, Stefan
2012-02-01
Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straightforward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations on different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, the combinations of neural and computational modelling methodologies are at early stages and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.
A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia
NASA Astrophysics Data System (ADS)
Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.
2017-08-01
In this study, a wavelet support vector machine model (WSVM) is proposed and applied to monthly Singapore tourist time series prediction. The WSVM model combines wavelet analysis with the support vector machine (SVM). The study has two parts: in the first part we compare kernel functions, and in the second part we compare the developed model against the single SVM model. The results showed that the linear kernel performed better than the RBF kernel, and that the WSVM outperformed the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
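The wavelet half of a WSVM-style pipeline decomposes the arrival series into approximation (trend) and detail (fluctuation) subseries, each of which is then forecast separately and recombined. A sketch of a single-level Haar decomposition, the simplest wavelet (the study does not state which wavelet it used, so this is an illustrative choice):

```python
import math

def haar_step(x):
    # one level of the Haar transform: pairwise averages (approximation)
    # and pairwise differences (detail), orthonormally scaled
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    # exact reconstruction of the original series from (a, d)
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x
```

In a full WSVM each subseries would be fed to its own SVM regressor (e.g. scikit-learn's SVR, as one possible implementation) and the component forecasts recombined via the inverse transform.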
Model predictive and reallocation problem for CubeSat fault recovery and attitude control
NASA Astrophysics Data System (ADS)
Franchi, Loris; Feruglio, Lorenzo; Mozzillo, Raffaele; Corpino, Sabrina
2018-01-01
In recent years, thanks to increased know-how in machine-learning techniques and advances in the computational capabilities of on-board processing, computationally expensive algorithms, such as Model Predictive Control, have begun to spread in space applications, even on small on-board processors. The paper presents an algorithm for optimal fault recovery of a 3U CubeSat, developed in the MathWorks Matlab & Simulink environment. This algorithm involves optimization techniques aimed at obtaining the optimal recovery solution, together with a Model Predictive Control approach for attitude control. The simulated system is a CubeSat in Low Earth Orbit: attitude control is performed with three magnetic torquers and a single reaction wheel. The simulation neglects errors in the attitude determination of the satellite and focuses on the recovery approach and control method. The optimal recovery approach takes advantage of the properties of magnetic actuation, which allows the control action to be redistributed when a fault occurs on a single magnetic torquer, even in the absence of redundant actuators. In addition, the paper presents the results of implementing the Model Predictive approach to control the attitude of the satellite.
A risk score for in-hospital death in patients admitted with ischemic or hemorrhagic stroke.
Smith, Eric E; Shobha, Nandavar; Dai, David; Olson, DaiWai M; Reeves, Mathew J; Saver, Jeffrey L; Hernandez, Adrian F; Peterson, Eric D; Fonarow, Gregg C; Schwamm, Lee H
2013-01-28
We aimed to derive and validate a single risk score for predicting death from ischemic stroke (IS), intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH). Data from 333,865 stroke patients (IS, 82.4%; ICH, 11.2%; SAH, 2.6%; uncertain type, 3.8%) in the Get With The Guidelines-Stroke database were used. In-hospital mortality varied greatly according to stroke type (IS, 5.5%; ICH, 27.2%; SAH, 25.1%; unknown type, 6.0%; P<0.001). The patients were randomly divided into derivation (60%) and validation (40%) samples. Logistic regression was used to determine the independent predictors of mortality and to assign point scores for a prediction model in the overall population and in the subset with the National Institutes of Health Stroke Scale (NIHSS) recorded (37.1%). The c statistic, a measure of how well the models discriminate the risk of death, was 0.78 in the overall validation sample and 0.86 in the model including NIHSS. The model with NIHSS performed nearly as well in each stroke type as in the overall model including all types (c statistics for IS alone, 0.85; for ICH alone, 0.83; for SAH alone, 0.83; uncertain type alone, 0.86). The calibration of the model was excellent, as demonstrated by plots of observed versus predicted mortality. A single prediction score for all stroke types can be used to predict risk of in-hospital death following stroke admission. Incorporation of NIHSS information substantially improves this predictive accuracy.
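The c statistic quoted above is the probability that a randomly chosen patient who died received a higher risk score than a randomly chosen patient who survived (ties counted as half). A direct, if O(n²), sketch of that definition (function name is illustrative):

```python
def c_statistic(scores_events, scores_nonevents):
    # concordance: fraction of (event, non-event) pairs where the event
    # (here, in-hospital death) got the higher risk score; ties count 0.5
    pairs = concordant = 0.0
    for se in scores_events:
        for sn in scores_nonevents:
            pairs += 1
            if se > sn:
                concordant += 1
            elif se == sn:
                concordant += 0.5
    return concordant / pairs
```

A value of 0.5 means the score discriminates no better than chance; the 0.78 and 0.86 reported above indicate good to excellent discrimination.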
In silico prediction of splice-altering single nucleotide variants in the human genome.
Jian, Xueqiu; Boerwinkle, Eric; Liu, Xiaoming
2014-12-16
In silico tools have been developed to predict variants that may have an impact on pre-mRNA splicing. The major limitation of the application of these tools to basic research and clinical practice is the difficulty in interpreting the output. Most tools only predict potential splice sites given a DNA sequence without measuring splicing signal changes caused by a variant. Another limitation is the lack of large-scale evaluation studies of these tools. We compared eight in silico tools on 2959 single nucleotide variants within splicing consensus regions (scSNVs) using receiver operating characteristic analysis. The Position Weight Matrix model and MaxEntScan outperformed other methods. Two ensemble learning methods, adaptive boosting and random forests, were used to construct models that take advantage of individual methods. Both models further improved prediction, with outputs of directly interpretable prediction scores. We applied our ensemble scores to scSNVs from the Catalogue of Somatic Mutations in Cancer database. Analysis showed that predicted splice-altering scSNVs are enriched in recurrent scSNVs and known cancer genes. We pre-computed our ensemble scores for all potential scSNVs across the human genome, providing a whole genome level resource for identifying splice-altering scSNVs discovered from large-scale sequencing studies.
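One of the two ensemble learners used above, adaptive boosting, reweights training examples so each new weak learner concentrates on the previous learners' mistakes. A minimal decision-stump AdaBoost over a single feature, for illustration only (the study boosted over the outputs of multiple splice-prediction tools, not a single feature):

```python
import math

def stump_predict(x, thr, sign):
    # weak learner: label +sign if x > thr, else -sign
    return sign if x > thr else -sign

def adaboost_train(xs, ys, rounds=5):
    # ys are +1/-1 labels; returns a list of weighted stumps
    n = len(xs)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None
        for thr in sorted(set(xs)):          # try every threshold and polarity
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if stump_predict(xi, thr, sign) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = max(err, 1e-10)                # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, thr, sign))
        # up-weight misclassified examples, then renormalize
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, thr, sign))
             for xi, yi, wi in zip(xs, ys, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def adaboost_score(model, x):
    # signed ensemble score; this is the directly interpretable output
    return sum(a * stump_predict(x, thr, sign) for a, thr, sign in model)
```

The signed score plays the role of the "directly interpretable prediction score" the study highlights: its magnitude reflects ensemble confidence, its sign the predicted class.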
Predicting the nature of supernova progenitors.
Groh, Jose H
2017-10-28
Stars more massive than about 8 solar masses end their lives as a supernova (SN), an event of fundamental importance Universe-wide. The physical properties of massive stars before the SN event are very uncertain, both from theoretical and observational perspectives. In this article, I briefly review recent efforts to predict the nature of stars before death, in particular, by performing coupled stellar evolution and atmosphere modelling of single stars in the pre-SN stage. These models are able to predict the high-resolution spectrum and broadband photometry, which can then be directly compared with the observations of core-collapse SN progenitors. The predictions for the spectral types of massive stars before death can be surprising. Depending on the initial mass and rotation, single star models indicate that massive stars die as red supergiants, yellow hypergiants, luminous blue variables and Wolf-Rayet stars of the WN and WO subtypes. I finish by assessing the detectability of SN Ibc progenitors. This article is part of the themed issue 'Bridging the gap: from massive stars to supernovae'. © 2017 The Author(s).
A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine
Ramakrishnan, Sridhar; Wesensten, Nancy J.; Kamimori, Gary H.; Moon, James E.; Balkin, Thomas J.; Reifman, Jaques
2016-01-01
Study Objectives: Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. Methods: We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) to the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). Results: The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. Conclusions: The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. Citation: Ramakrishnan S, Wesensten NJ, Kamimori GH, Moon JE, Balkin TJ, Reifman J. A unified model of performance for predicting the effects of sleep and caffeine. SLEEP 2016;39(10):1827–1841. PMID:27397562
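The multiplicative structure described above can be sketched as a baseline performance trace scaled by a dose-dependent caffeine factor. The two-exponential absorption/elimination profile and every parameter value below are placeholders chosen for illustration; they are not the published UMP fit:

```python
import math

def caffeine_factor(t_since_dose_h, dose_mg, m=1.1e-3, k_a=1.0, k_c=0.14):
    # illustrative PK/PD factor: a two-exponential absorption/elimination
    # profile scaled by dose; m, k_a, k_c are invented placeholder values
    pk = math.exp(-k_c * t_since_dose_h) - math.exp(-k_a * t_since_dose_h)
    return max(0.0, 1.0 - m * dose_mg * k_a / (k_a - k_c) * pk)

def predicted_performance(p_no_caffeine, t_since_dose_h, dose_mg):
    # UMP-style multiplicative combination: caffeine scales the baseline
    # prediction (for a deficit metric like PVT lapses, factor < 1 = better)
    return p_no_caffeine * caffeine_factor(t_since_dose_h, dose_mg)
```

The key design point carried over from the abstract is the multiplication itself: the caffeine factor modulates whatever the sleep-loss model predicts, rather than being added to it, so a dose helps more when the baseline deficit is larger.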
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth; Campola, Michael; Xapsos, Michael
2017-01-01
We are investigating the application of classical reliability performance metrics combined with standard single event upset (SEU) analysis data. We expect to relate SEU behavior to system performance requirements. Our proposed methodology will provide better prediction of SEU responses in harsh radiation environments with confidence metrics. Keywords: single event upset (SEU), single event effect (SEE), field programmable gate array devices (FPGAs)
NASA Astrophysics Data System (ADS)
Burov, E.; Guillou-Frottier, L.
2005-05-01
Current debates on the existence of mantle plumes largely originate from interpretations of supposed signatures of plume-induced surface topography that are compared with predictions of geodynamic models of plume-lithosphere interactions. These models often inaccurately predict surface evolution: in general, they assume a fixed upper surface and consider the lithosphere as a single viscous layer. In nature, the surface evolution is affected by the elastic-brittle-ductile deformation, by a free upper surface and by the layered structure of the lithosphere. We make a step towards reconciling mantle- and tectonic-scale studies by introducing a tectonically realistic continental plate model in large-scale plume-lithosphere interaction. This model includes (i) a natural free surface boundary condition, (ii) an explicit elastic-viscous(ductile)-plastic(brittle) rheology and (iii) a stratified structure of continental lithosphere. The numerical experiments demonstrate a number of important differences from predictions of conventional models. In particular, this relates to plate bending, mechanical decoupling of crustal and mantle layers and tension-compression instabilities, which produce transient topographic signatures such as uplift and subsidence at large (>500 km) and small scale (300-400, 200-300 and 50-100 km). The mantle plumes do not necessarily produce detectable large-scale topographic highs but often generate only alternating small-scale surface features that could otherwise be attributed to regional tectonics. A single large-wavelength deformation, predicted by conventional models, develops only for a very cold and thick lithosphere. Distinct topographic wavelengths or temporarily spaced events observed in the East African rift system, as well as over French Massif Central, can be explained by a single plume impinging at the base of the continental lithosphere, without evoking complex asthenospheric upwelling.
Efficient Global Aerodynamic Modeling from Flight Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2012-01-01
A method for identifying global aerodynamic models from flight data in an efficient manner is explained and demonstrated. A novel experiment design technique was used to obtain dynamic flight data over a range of flight conditions with a single flight maneuver. Multivariate polynomials and polynomial splines were used with orthogonalization techniques and statistical modeling metrics to synthesize global nonlinear aerodynamic models directly and completely from flight data alone. Simulation data and flight data from a subscale twin-engine jet transport aircraft were used to demonstrate the techniques. Results showed that global multivariate nonlinear aerodynamic dependencies could be accurately identified using flight data from a single maneuver. Flight-derived global aerodynamic model structures, model parameter estimates, and associated uncertainties were provided for all six nondimensional force and moment coefficients for the test aircraft. These models were combined with a propulsion model identified from engine ground test data to produce a high-fidelity nonlinear flight simulation very efficiently. Prediction testing using a multi-axis maneuver showed that the identified global model accurately predicted aircraft responses.
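The orthogonalization step is what makes the direct model synthesis tractable: after Gram-Schmidt, each candidate regressor's coefficient is an independent projection of the data, so terms can be ranked and retained one at a time. A minimal sketch under those assumptions (function names are illustrative, not from the method):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fit_orthogonalized(regressors, y):
    # Gram-Schmidt orthogonalization of candidate regressors (each a list of
    # samples), then per-regressor coefficients by independent projection
    basis, coefs = [], []
    for r in regressors:
        q = list(r)
        for b in basis:                      # remove components along earlier terms
            f = dot(q, b) / dot(b, b)
            q = [qi - f * bi for qi, bi in zip(q, b)]
        basis.append(q)
        coefs.append(dot(y, q) / dot(q, q))  # independent least-squares coefficient
    return basis, coefs
```

Because the orthogonalized terms do not interfere with one another, each one's contribution to the fit can be assessed in isolation, which is the property model-structure metrics exploit when choosing which polynomial terms to keep.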
2012-09-01
make end of life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first... [diagram residue; recoverable labels: degradation modeling, parameter estimation, prediction, thermal/electrical stress, experimental data, state-space model, RUL, EOL] ...distribution at a given single time point k_P, and use this for multi-step predictions to EOL. There are several methods which exist for selecting the sigma
Personalized Modeling for Prediction with Decision-Path Models
Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.
2015-01-01
Deriving predictive models in medicine typically relies on a population approach, where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model, called a decision-path model, that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Tree (CART), which is a population decision tree, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized approach of learning decision-path models is a new approach for predictive modeling that can perform better than a population approach. PMID:26098570
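A decision-path model evaluates only the sequence of tests relevant to the person of interest, rather than the full tree. A sketch of walking such a path in a small hand-built tree; the node layout, feature names, and leaf values below are invented for illustration:

```python
def predict_with_path(tree, instance):
    # tree nodes are (feature, threshold, left_subtree, right_subtree) tuples;
    # leaves are bare prediction values. Returns the prediction plus the
    # decision path actually taken for this instance.
    path = []
    node = tree
    while isinstance(node, tuple):
        feat, thr, left, right = node
        go_left = instance[feat] <= thr
        path.append((feat, thr, go_left))
        node = left if go_left else right
    return node, path

# hypothetical two-level tree: split on age, then blood pressure
tree = ("age", 50, ("bp", 120, 0.1, 0.4), 0.7)
```

The returned path is what a personalized method can specialize: only the tests along this one route need to be optimized for the given patient, which is the intuition behind deriving a decision-path model per individual.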
NASA Astrophysics Data System (ADS)
Tian, Lin-Lin; Zhao, Ning; Song, Yi-Lei; Zhu, Chun-Ling
2018-05-01
This work performs a systematic sensitivity analysis of different turbulence models and various inflow boundary conditions in predicting the wake flow behind a horizontal-axis wind turbine represented by an actuator disc (AD). The tested turbulence models are the standard k-𝜀 model and the Reynolds Stress Model (RSM). A single wind turbine immersed both in uniform flow and in modeled atmospheric boundary layer (ABL) flow is studied. Simulation results are validated against field experimental data in terms of wake velocity and turbulence intensity.
A new possible picture of the hadron structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokrovsky, Yury E.
A new chiral-scale invariant version of the bag model (CSB) is developed and applied to calculations of masses and radii for single bag states. The mass formula of the CSB model contains no free parameters and connects masses and radii of the bags with fundamental QCD scales, namely with Λ_QCD,
ERIC Educational Resources Information Center
Godbout, Natacha; Sabourin, Stephane; Lussier, Yvan
2009-01-01
This study compared the usefulness of single- and multiple-indicator strategies in a model examining the role of child sexual abuse (CSA) to predict later marital satisfaction through attachment and psychological distress. The sample included 1,092 women and men from a nonclinical population in cohabiting or marital relationships. The single-item…
2012-01-01
Background A single-step blending approach allows genomic prediction using information on genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5 214 genotyped and 9 374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference in scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. 
In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Tsai, F. T.
2012-12-01
Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer frame, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been implemented to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, when considering only the single best model, variances that stem from uncertainty in the model structure are ignored. 
Second, considering the best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desirable reliability. However, when considering only the single best model, the calculated reliability will differ from the desirable reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that by moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels in the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. Also, the prediction variance changed with the extraction rate. Using a very high extraction rate causes the prediction variances of chloride concentration at the production wells to approach zero, regardless of which HBMA model is used.
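The model-averaged prediction variance that this abstract contrasts with the single-best-model variance follows the law of total variance: a weighted within-model term plus a between-model spread term. A minimal sketch, with illustrative weights and moments:

```python
# Sketch of model-averaged prediction mean and variance in Bayesian
# model averaging: total variance = weighted within-model variance
# plus between-model spread. Numbers in the test are illustrative.

def bma_mean(weights, means):
    return sum(w * m for w, m in zip(weights, means))

def bma_variance(weights, means, variances):
    mbar = bma_mean(weights, means)
    within = sum(w * v for w, v in zip(weights, variances))
    between = sum(w * (m - mbar) ** 2 for w, m in zip(weights, means))
    return within + between
```

Picking only the single best model keeps just its within-model term and drops the between-model term, which is exactly the error mode the abstract warns about.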
Srinivas, Nuggehally R
2016-01-01
In the present age of polypharmacy, limited sampling strategies become important to verify whether drug levels are within the prescribed threshold limits from efficacy and safety considerations. The need to establish reliable single-time-concentration models to predict exposure becomes important from cost and time perspectives. A simple unweighted linear regression model was developed to describe the relationship between Cmax and AUC for fexofenadine, losartan, EXP3174, itraconazole and hydroxyitraconazole. The fold difference, defined as the quotient of the observed and predicted AUC values, was evaluated along with statistical comparison of the predicted versus observed values. The correlation between Cmax and AUC was well established for all five drugs, with a correlation coefficient (r) ranging from 0.9130 to 0.9997. The majority of the predicted values for all five drugs (77%) were contained within a narrow boundary of 0.75- to 1.5-fold difference. The r values for observed versus predicted AUC were 0.9653 (n = 145), 0.8342 (n = 76), 0.9524 (n = 88), 0.9339 (n = 89) and 0.9452 (n = 66) for fexofenadine, losartan, EXP3174, itraconazole and hydroxyitraconazole, respectively. Cmax versus AUC relationships were established for all drugs and were amenable to a limited sampling strategy for AUC prediction. However, fexofenadine, EXP3174 and hydroxyitraconazole may be most suitable for AUC prediction from a single time concentration, as judged by the various criteria applied in this study.
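The workflow in this abstract (unweighted linear regression of AUC on Cmax, then fold-difference scoring against the 0.75- to 1.5-fold boundary) can be sketched as follows. The pharmacokinetic numbers in the test are synthetic, not the study's data.

```python
# Sketch of the limited-sampling workflow: fit AUC = a + b * Cmax by
# unweighted least squares, then score each prediction by its fold
# difference (observed / predicted AUC). Data are synthetic.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def fold_differences(cmax, auc_obs):
    a, b = fit_line(cmax, auc_obs)
    pred = [a + b * c for c in cmax]
    return [o / p for o, p in zip(auc_obs, pred)]

def fraction_within(folds, lo=0.75, hi=1.5):
    return sum(lo <= f <= hi for f in folds) / len(folds)
```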
Sakurai Prize: Extended Higgs Sectors--phenomenology and future prospects
NASA Astrophysics Data System (ADS)
Gunion, John
2017-01-01
The discovery of a spin-0 state at 125 GeV with properties close to those predicted for the single Higgs boson of the Standard Model does not preclude the existence of additional Higgs bosons. In this talk, models with extended Higgs sectors are reviewed, including two-Higgs-doublet models with and without an extra singlet Higgs field and supersymmetric models. Special emphasis is given to the limit in which the couplings and properties of one of the Higgs bosons of the extended Higgs sector are very close to those predicted for the single Standard Model Higgs boson while the other Higgs bosons are relatively light, perhaps even having masses close to or below the SM-like 125 GeV state. Constraints on this type of scenario given existing data are summarized and prospects for observing these non-SM-like Higgs bosons are discussed. Supported by the Department of Energy.
NASA Astrophysics Data System (ADS)
Panda, D. K.; Lenka, T. R.
2017-06-01
An enhancement-mode p-GaN gate AlGaN/GaN HEMT is proposed, and a physics-based virtual source charge model with the Landauer approach for electron transport has been developed using Verilog-A and simulated using Cadence Spectre, in order to predict device characteristics such as threshold voltage, drain current and gate capacitance. The drain current model incorporates important physical effects such as velocity saturation, short-channel effects like DIBL (drain-induced barrier lowering), channel length modulation (CLM), and mobility degradation due to self-heating. The predicted Id-Vds, Id-Vgs, and C-V characteristics show excellent agreement with the experimental data for both drain current and capacitance, which validates the model. The developed model was then utilized to design and simulate a single-pole single-throw (SPST) RF switch.
Field Telemetry of Blade-rotor Coupled Torsional Vibration at Matuura Power Station Number 1 Unit
NASA Technical Reports Server (NTRS)
Isii, Kuniyoshi; Murakami, Hideaki; Otawara, Yasuhiko; Okabe, Akira
1991-01-01
The quasi-modal reduction technique and the finite element model (FEM) were used to construct an analytical model for the blade-rotor coupled torsional vibration of a steam turbine generator at the Matuura Power Station. A single-rotor test was executed in order to evaluate umbrella vibration characteristics. Based on the single-rotor test results and the quasi-modal procedure, the total rotor system was analyzed to predict coupled torsional frequencies. Finally, field measurement of the vibration of the last-stage buckets was made, which confirmed that the double synchronous resonance was 124.2 Hz, meaning that the machine can be operated safely. The measured eigenvalues are very close to the predicted values. The single-rotor test and this analytical procedure thus proved to be a valid technique for estimating coupled torsional vibration.
Rapid recipe formulation for plasma etching of new materials
NASA Astrophysics Data System (ADS)
Chopra, Meghali; Zhang, Zizhuo; Ekerdt, John; Bonnecaze, Roger T.
2016-03-01
A fast and inexpensive scheme for etch rate prediction using flexible continuum models and Bayesian statistics is demonstrated. Bulk etch rates of MgO are predicted using a steady-state model with volume-averaged plasma parameters and classical Langmuir surface kinetics. Plasma particle and surface kinetics are modeled within a global plasma framework using single-component Metropolis-Hastings methods and limited data. The accuracy of these predictions is evaluated with synthetic and experimental etch rate data for magnesium oxide in an ICP-RIE system. This approach is compared with, and found superior to, factorial models generated with JMP, a software package frequently employed for recipe creation and optimization.
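A minimal sketch of the single-component Metropolis-Hastings sampler named in this abstract, applied to one illustrative parameter: a rate constant k in a toy etch model rate = k * flux. The Gaussian likelihood, the uniform prior on [0, 10], and the data are assumptions for demonstration, not the paper's actual kinetics.

```python
import math
import random

# Single-component Metropolis-Hastings for one toy parameter k in
# rate = k * flux. Likelihood, prior, and data are illustrative.

def log_likelihood(k, flux, rates, sigma=0.5):
    return -sum((r - k * f) ** 2 for f, r in zip(flux, rates)) / (2 * sigma ** 2)

def metropolis(flux, rates, n_steps=20000, step=0.2, seed=1):
    rng = random.Random(seed)
    k = 1.0                       # initial guess inside the prior support
    samples = []
    for _ in range(n_steps):
        k_new = k + rng.gauss(0.0, step)
        if 0.0 <= k_new <= 10.0:  # uniform prior: reject proposals outside
            lr = (log_likelihood(k_new, flux, rates)
                  - log_likelihood(k, flux, rates))
            if lr >= 0.0 or rng.random() < math.exp(lr):
                k = k_new
        samples.append(k)
    return samples
```

In the paper each plasma/surface parameter is updated one at a time in this fashion; the sketch shows one such component.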
A Hybrid Model for Predicting the Prevalence of Schistosomiasis in Humans of Qianjiang City, China
Wang, Ying; Lu, Zhouqin; Tian, Lihong; Tan, Li; Shi, Yun; Nie, Shaofa; Liu, Li
2014-01-01
Background/Objective Schistosomiasis is still a major public health problem in China, despite the fact that the government has implemented a series of strategies to prevent and control the spread of the parasitic disease. Advance warning and reliable forecasting can help policymakers adjust and implement strategies more effectively, which will lead to the control and elimination of schistosomiasis. Our aim is to explore the application of a hybrid forecasting model to track trends in the prevalence of schistosomiasis in humans, which provides a methodological basis for predicting and detecting schistosomiasis infection in endemic areas. Methods A hybrid approach combining the autoregressive integrated moving average (ARIMA) model and the nonlinear autoregressive neural network (NARNN) model was used to forecast the prevalence of schistosomiasis for the next four years. Forecasting performance was compared among the hybrid ARIMA-NARNN model, the single ARIMA model, and the single NARNN model. Results The modelling mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were 0.1869×10−4, 0.0029 and 0.0419, with corresponding testing errors of 0.9375×10−4, 0.0081 and 0.9064, respectively. These error values generated with the hybrid model were all lower than those obtained from the single ARIMA or NARNN model. The forecast values were 0.75%, 0.80%, 0.76% and 0.77% for the next four years, demonstrating no downward trend. Conclusion The hybrid model has high prediction accuracy for the prevalence of schistosomiasis, which provides a methodological basis for future schistosomiasis monitoring and control strategies in the study area. It is worth attempting to utilize the hybrid detection scheme in other schistosomiasis-endemic areas, and for other infectious diseases. PMID:25119882
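The three error measures used in this abstract to compare the hybrid and single models can be written out explicitly. Inputs are paired observed/predicted prevalence series; the sample values in the test are illustrative only.

```python
# The three error measures used to compare forecasting models:
# mean square error, mean absolute error, mean absolute percentage error.

def mse(obs, pred):
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mape(obs, pred):
    # Undefined when an observation is zero; prevalence here is positive.
    return sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)
```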
Sound scattering by several zooplankton groups. II. Scattering models.
Stanton, T K; Chu, D; Wiebe, P H
1998-01-01
Mathematical scattering models are derived and compared with data from zooplankton from several gross anatomical groups--fluidlike, elastic shelled, and gas bearing. The models are based upon the acoustically inferred boundary conditions determined from laboratory backscattering data presented in part I of this series [Stanton et al., J. Acoust. Soc. Am. 103, 225-235 (1998)]. The models use a combination of ray theory, modal-series solution, and distorted wave Born approximation (DWBA). The formulations, which are inherently approximate, are designed to include only the dominant scattering mechanisms as determined from the experiments. The models for the fluidlike animals (euphausiids in this case) ranged from the simplest case involving two rays, which could qualitatively describe the structure of target strength versus frequency for single pings, to the most complex case involving a rough inhomogeneous asymmetrically tapered bent cylinder using the DWBA-based formulation which could predict echo levels over all angles of incidence (including the difficult region of end-on incidence). The model for the elastic shelled body (gastropods in this case) involved development of an analytical model which takes into account irregularities and discontinuities of the shell. The model for gas-bearing animals (siphonophores) is a hybrid model which is composed of the summation of the exact solution to the gas sphere and the approximate DWBA-based formulation for arbitrarily shaped fluidlike bodies. There is also a simplified ray-based model for the siphonophore. The models are applied to data involving single pings, ping-to-ping variability, and echoes averaged over many pings. There is reasonable qualitative agreement between the predictions and single ping data, and reasonable quantitative agreement between the predictions and variability and averages of echo data.
User Selection Criteria of Airspace Designs in Flexible Airspace Management
NASA Technical Reports Server (NTRS)
Lee, Hwasoo E.; Lee, Paul U.; Jung, Jaewoo; Lai, Chok Fung
2011-01-01
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Market impact cost is the most significant portion of implicit transaction costs and can substantially increase the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data from the US stock market via the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary costs directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
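As a minimal stand-in for the nonparametric family named in this abstract (Gaussian process, SVR, neural networks), a Gaussian-kernel Nadaraya-Watson regressor can map a trade feature (e.g. normalized trade size) to impact cost. The feature, bandwidth, and data are illustrative assumptions, not the paper's models or variables.

```python
import math

# Gaussian-kernel regression (Nadaraya-Watson smoother) as a toy
# stand-in for the nonparametric impact-cost models. Data illustrative.

def kernel(u, bandwidth):
    return math.exp(-0.5 * (u / bandwidth) ** 2)

def predict(x_train, y_train, x_query, bandwidth=0.5):
    """Kernel-weighted average of training targets around the query."""
    w = [kernel(x_query - x, bandwidth) for x in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)
```

Like the models in the paper, this predictor makes no parametric assumption about the shape of the cost curve; unlike them, it uses a single input for brevity.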
Model Forecast Skill and Sensitivity to Initial Conditions in the Seasonal Sea Ice Outlook
NASA Technical Reports Server (NTRS)
Blanchard-Wrigglesworth, E.; Cullather, R. I.; Wang, W.; Zhang, J.; Bitz, C. M.
2015-01-01
We explore the skill of predictions of September Arctic sea ice extent from dynamical models participating in the Sea Ice Outlook (SIO). Forecasts submitted in August, at roughly 2 month lead times, are skillful. However, skill is lower in forecasts submitted to SIO, which began in 2008, than in hindcasts (retrospective forecasts) of the last few decades. The multimodel mean SIO predictions offer slightly higher skill than the single-model SIO predictions, but neither beats a damped persistence forecast at longer than 2 month lead times. The models are largely unsuccessful at predicting each other, indicating a large difference in model physics and/or initial conditions. Motivated by this, we perform an initial condition sensitivity experiment with four SIO models, applying a fixed -1 m perturbation to the initial sea ice thickness. The significant range of the response among the models suggests that different model physics make a significant contribution to forecast uncertainty.
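The damped persistence benchmark this abstract compares against can be sketched directly: the forecast anomaly is the current anomaly damped by the lag autocorrelation of the historical series. The sea ice extent values in the test are illustrative.

```python
# Sketch of a damped persistence forecast: forecast = climatology plus
# the current anomaly damped by the lag-tau autocorrelation of the
# historical series. Data in the test are illustrative.

def autocorr(series, lag):
    n = len(series)
    m = sum(series) / n
    num = sum((series[i] - m) * (series[i + lag] - m) for i in range(n - lag))
    den = sum((x - m) ** 2 for x in series)
    return num / den

def damped_persistence(history, current, lag):
    clim = sum(history) / len(history)
    r = autocorr(history, lag)
    return clim + r * (current - clim)
```

A dynamical forecast that cannot beat this zero-physics baseline at a given lead time adds no skill, which is the standard the abstract applies.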
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse. Three case studies from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
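A one-dimensional sketch of the FLHI idea follows: membership functions act as interpolation kernels over knot points. With triangular memberships, the interpolator reduces to piecewise-linear interpolation, as the abstract notes; other kernels (cubic, Lanczos) would change that behaviour. The evenly spaced knots and data are illustrative assumptions.

```python
# 1-D sketch of membership-functions-as-interpolation-kernels: with
# triangular memberships, the scheme reduces to linear interpolation.
# Knots and data are illustrative.

def triangular(u):
    """Membership of a point at normalized distance u from a knot."""
    return max(0.0, 1.0 - abs(u))

def flhi_1d(knots_x, knots_y, x):
    h = knots_x[1] - knots_x[0]          # assume evenly spaced knots
    w = [triangular((x - kx) / h) for kx in knots_x]
    return sum(wi * ky for wi, ky in zip(w, knots_y)) / sum(w)
```

The full method extends this to N inputs by taking the conjunction of per-axis memberships over a unit hypercube.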
Caporaso, Nicola; Whitworth, Martin B; Grebby, Stephen; Fisk, Ian D
2018-04-01
Hyperspectral imaging (HSI) is a novel technology for the food sector that enables rapid non-contact analysis of food materials. HSI was applied for the first time to whole green coffee beans, at a single seed level, for quantitative prediction of sucrose, caffeine and trigonelline content. In addition, the intra-bean distribution of coffee constituents was analysed in Arabica and Robusta coffees on a large sample set from 12 countries, using a total of 260 samples. Individual green coffee beans were scanned by reflectance HSI (980-2500nm) and then the concentration of sucrose, caffeine and trigonelline analysed with a reference method (HPLC-MS). Quantitative prediction models were subsequently built using Partial Least Squares (PLS) regression. Large variations in sucrose, caffeine and trigonelline were found between different species and origin, but also within beans from the same batch. It was shown that estimation of sucrose content is possible for screening purposes (R 2 =0.65; prediction error of ~0.7% w/w coffee, with observed range of ~6.5%), while the performance of the PLS model was better for caffeine and trigonelline prediction (R 2 =0.85 and R 2 =0.82, respectively; prediction errors of 0.2 and 0.1%, on a range of 2.3 and 1.1% w/w coffee, respectively). The prediction error is acceptable mainly for laboratory applications, with the potential application to breeding programmes and for screening purposes for the food industry. The spatial distribution of coffee constituents was also successfully visualised for single beans and this enabled mapping of the analytes across the bean structure at single pixel level. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Thermal history regulates methylbutenol basal emission rate in Pinus ponderosa.
Gray, Dennis W; Goldstein, Allen H; Lerdau, Manuel T
2006-07-01
Methylbutenol (MBO) is a 5-carbon alcohol that is emitted by many pines in western North America, which may have important impacts on the tropospheric chemistry of this region. In this study, we document seasonal changes in basal MBO emission rates and test several models predicting these changes based on thermal history. These models represent extensions of the ISO G93 model that add a correction factor C(basal), allowing MBO basal emission rates to change as a function of thermal history. These models also allow the calculation of a new emission parameter E(standard30), which represents the inherent capacity of a plant to produce MBO, independent of current or past environmental conditions. Most single-component models exhibited large departures in early and late season, and predicted day-to-day changes in basal emission rate with temporal offsets of up to 3 d relative to measured basal emission rates. Adding a second variable describing thermal history at a longer time scale improved early and late season model performance while retaining the day-to-day performance of the parent single-component model. Out of the models tested, the T(amb),T(max7) model exhibited the best combination of day-to-day and seasonal predictions of basal MBO emission rates.
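The modified emission model described in this abstract takes the form E = E_standard30 × C_basal(thermal history) × f(T). A minimal sketch follows; the exponential temperature response with β = 0.09 °C⁻¹ and the linear dependence of C_basal on past mean temperature are illustrative assumptions, not the paper's fitted functions.

```python
import math

# Sketch of a G93-style emission model extended with a thermal-history
# correction C_basal: E = E_standard30 * C_basal * f(T). The forms of
# f and C_basal below are illustrative assumptions.

def temperature_response(t_leaf, t_standard=30.0, beta=0.09):
    """Exponential temperature response, normalized to 1 at t_standard."""
    return math.exp(beta * (t_leaf - t_standard))

def c_basal(t_past_mean, a=0.1, b=0.03):
    """Hypothetical linear dependence on mean temperature of prior days."""
    return max(0.0, a + b * t_past_mean)

def mbo_emission(e_standard30, t_leaf, t_past_mean):
    return e_standard30 * c_basal(t_past_mean) * temperature_response(t_leaf)
```

E_standard30 plays the role of the abstract's E(standard30): the inherent emission capacity, independent of current and past conditions, recovered when both correction factors equal one.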
Huang, Lihan
2016-07-01
Clostridium perfringens type A is a significant public health threat, and its spores may germinate, outgrow, and multiply during cooling of cooked meats. This study applies a new C. perfringens growth model in the USDA Integrated Pathogen Modeling Program-Dynamic Prediction (IPMP Dynamic Prediction) to predict the growth from spores of C. perfringens in cooked uncured meat and poultry products using isothermal, dynamic heating, and cooling data reported in the literature. The residual errors of predictions (observation minus prediction) are analyzed, and the root-mean-square error (RMSE) calculated. For isothermal and heating profiles, each data point in the growth curves is compared. The mean residual errors (MRE) of predictions range from -0.40 to 0.02 Log colony forming units (CFU)/g, with an RMSE of approximately 0.6 Log CFU/g. For cooling, the end-point predictions are conservative in nature, with an MRE of -1.16 Log CFU/g for single-rate cooling and -0.66 Log CFU/g for dual-rate cooling. The RMSE is between 0.6 and 0.7 Log CFU/g. Compared with other models reported in the literature, this model makes more accurate and fail-safe predictions. For cooling, the percentage of accurate and fail-safe predictions is between 97.6% and 100%. Under criterion 1, the percentage of accurate predictions is 47.5% for single-rate cooling and 66.7% for dual-rate cooling, while fail-dangerous predictions are between 0% and 2.4%. This study demonstrates that IPMP Dynamic Prediction can be used by food processors and regulatory agencies as a tool to predict the growth of C. perfringens in uncured cooked meats, to evaluate the safety of cooked or heat-treated uncured meat and poultry products exposed to cooling deviations, or to develop customized cooling schedules. This study also demonstrates the need for more accurate data collection during cooling. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
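The residual-error bookkeeping used in this abstract can be sketched as follows. Residual is observation minus prediction, so negative residuals are fail-safe (the model over-predicts growth). The ±0.5 Log CFU/g band used below to call a prediction "accurate" is an illustrative assumption, not the study's published criterion.

```python
import math

# Residual-error summary for a growth model: mean residual error,
# RMSE, and accurate / fail-safe / fail-dangerous fractions. The
# +/-0.5 log band is an assumed criterion for illustration.

def evaluate(observed, predicted, band=0.5):
    res = [o - p for o, p in zip(observed, predicted)]
    n = len(res)
    mre = sum(res) / n
    rmse = math.sqrt(sum(r * r for r in res) / n)
    accurate = sum(abs(r) <= band for r in res) / n
    fail_safe = sum(r < -band for r in res) / n      # over-prediction
    fail_danger = sum(r > band for r in res) / n     # under-prediction
    return mre, rmse, accurate, fail_safe, fail_danger
```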
NASA Astrophysics Data System (ADS)
Goodwin, Graham. C.; Medioli, Adrian. M.
2013-08-01
Model predictive control has been a major success story in process control. More recently, the methodology has been used in other contexts, including automotive engine control, power electronics and telecommunications. Most applications focus on set-point tracking and use single-sequence optimisation. Here we consider an alternative class of problems, motivated by the scheduling of emergency vehicles, in which disturbances are the dominant feature. We develop a novel closed-loop model predictive control strategy aimed at this class of problems. We motivate, and illustrate, the ideas via the problem of fluid deployment of ambulance resources.
Dynamics of Multistable States during Ongoing and Evoked Cortical Activity
Mazzucato, Luca
2015-01-01
Single-trial analyses of ensemble activity in alert animals demonstrate that cortical circuit dynamics evolve through temporal sequences of metastable states. Metastability has been studied for its potential role in sensory coding, memory, and decision-making. Yet, very little is known about the network mechanisms responsible for its genesis. It is often assumed that the onset of state sequences is triggered by an external stimulus. Here we show that state sequences can also be observed in the absence of overt sensory stimulation. Analysis of multielectrode recordings from the gustatory cortex of alert rats revealed ongoing sequences of states, in which single neurons spontaneously attain several firing rates across different states. This single-neuron multistability represents a challenge to existing spiking network models, where typically each neuron is at most bistable. We present a recurrent spiking network model that accounts for both the spontaneous generation of state sequences and the multistability in single-neuron firing rates. Each state results from the activation of neural clusters with potentiated intracluster connections, with the firing rate in each cluster depending on the number of active clusters. Simulations show that the model's ensemble activity hops among the different states, reproducing the ongoing dynamics observed in the data. When probed with external stimuli, the model predicts the quenching of single-neuron multistability into bistability and the reduction of trial-by-trial variability. Both predictions were confirmed in the data. Together, these results provide a theoretical framework that captures both ongoing and evoked network dynamics in a single mechanistic model. PMID:26019337
Wu, C D; Wang, L; Hu, C X; He, M H
2013-01-01
The single-solute and bisolute sorption behaviour of phenol and trichloroethylene, two organic compounds with different structures, onto cetyltrimethylammonium bromide (CTAB)-montmorillonite was studied. The monolayer Langmuir model (MLM) and empirical Freundlich model (EFM) were applied to the single-solute sorption of phenol or trichloroethylene from water onto monolayer or multilayer CTAB-montmorillonite. The parameters contained in the MLM and EFM were determined for each solute by fitting to the single-solute isotherm data, and subsequently utilized in binary sorption. The extended Langmuir model (ELM) coupled with the single-solute MLM and the ideal adsorbed solution theory (IAST) coupled with the single-solute EFM were used to predict the binary sorption of phenol and trichloroethylene onto CTAB-montmorillonite. It was found that the EFM was better than the MLM at describing single-solute sorption from water onto CTAB-montmorillonite, and the IAST was better than the ELM at describing the binary sorption from water onto CTAB-montmorillonite.
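The isotherm models named in this abstract can be written out directly. Parameter values in the test are illustrative; in the study they were fitted to single-solute data and reused in the binary predictions.

```python
# The isotherm models named above: monolayer Langmuir (MLM), empirical
# Freundlich (EFM), and the competitive extended Langmuir model (ELM).
# Parameter values are illustrative.

def langmuir(c, q_max, k):
    """MLM: q = q_max * k * c / (1 + k * c)."""
    return q_max * k * c / (1.0 + k * c)

def freundlich(c, k_f, n):
    """EFM: q = k_f * c**(1/n)."""
    return k_f * c ** (1.0 / n)

def extended_langmuir(c, q_max, k):
    """ELM for a mixture; c, q_max, k are per-solute lists."""
    denom = 1.0 + sum(ki * ci for ki, ci in zip(k, c))
    return [qi * ki * ci / denom for qi, ki, ci in zip(q_max, k, c)]
```

A useful consistency check, exercised in the test below: when the second solute's concentration is zero, the ELM reduces to the single-solute Langmuir isotherm.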
Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds
Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark
2009-01-01
Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. 
The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously-published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
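A multiple linear regression of the general shape described above can be sketched with synthetic data. The predictor names follow the abstract, but every coefficient and value below is invented for illustration and is not taken from the published models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Hypothetical watershed predictors (illustrative ranges only)
peak_i60 = rng.uniform(5, 40, n)          # peak 1-hour rainfall, mm
burned_area = rng.uniform(0.1, 10, n)     # area burned at all severities, km^2
time_since_fire = rng.uniform(0.1, 5, n)  # years since most recent fire
gradient = rng.uniform(0.2, 0.7, n)       # average watershed gradient

# Assume a multiplicative (log-linear) model, a common form for sediment yields
log_V = (2.0 + 1.2 * np.log(peak_i60) + 0.6 * np.log(burned_area)
         - 0.4 * np.log(time_since_fire) + 1.5 * gradient
         + rng.normal(0.0, 0.05, n))      # small lognormal scatter

# Ordinary least squares via the normal-equations solver
X = np.column_stack([np.ones(n), np.log(peak_i60), np.log(burned_area),
                     np.log(time_since_fire), gradient])
coef, *_ = np.linalg.lstsq(X, log_V, rcond=None)
```

Fitting in log space keeps predicted yields positive and lets each predictor act multiplicatively, which is why such models are usually reported with power-law terms.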
Single Droplet Combustion of Decane in Microgravity: Experiments and Numerical Modeling
NASA Technical Reports Server (NTRS)
Dietrich, D. L.; Struk, P. M.; Ikegam, M.; Xu, G.
2004-01-01
This paper presents experimental data on single droplet combustion of decane in microgravity and compares the results to a numerical model. The primary independent experiment variables are the ambient pressure, oxygen mole fraction, droplet size (over a relatively small range) and ignition energy. The droplet diameter history (D² history) is non-linear, with the burning rate constant increasing throughout the test. The average burning rate constant, consistent with classical theory, increased with increasing ambient oxygen mole fraction and was nearly independent of pressure, initial droplet size and ignition energy. The flame typically increased in size initially, and then decreased in size, in response to the shrinking droplet. The flame standoff increased linearly for the majority of the droplet lifetime. The flame surrounding the droplet extinguished at a finite droplet size at lower ambient pressures and an oxygen mole fraction of 0.15. The extinction droplet size increased with decreasing pressure. The model is transient and assumes spherical symmetry, constant thermo-physical properties (specific heat, thermal conductivity and species Lewis number) and single step chemistry. The model includes gas-phase radiative loss and a spherically symmetric, transient liquid phase. The model accurately predicts the droplet and flame histories of the experiments. Good agreement requires that the ignition in the experiment be reasonably approximated in the model and that the model accurately predict the pre-ignition vaporization of the droplet. The model does not accurately predict the dependence of extinction droplet diameter on pressure, a result of the simplified chemistry in the model. The transient flame behavior suggests the potential importance of fuel vapor accumulation.
The model results, however, show that the fractional mass consumption rate of fuel in the flame relative to fuel vaporized is close to 1.0 for all but the lowest ambient oxygen mole fractions.
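The classical quasi-steady theory referred to above predicts a linear D² history, D²(t) = D₀² - Kt; the abstract notes that the measured history is actually non-linear, but the linear law is the standard baseline. A minimal sketch with illustrative values (a typical order of magnitude for alkane droplets, not the paper's measured burning rates):

```python
import numpy as np

# Classical d-squared law: D^2(t) = D0^2 - K * t  (illustrative values)
D0 = 1.0e-3          # initial droplet diameter, m
K = 0.8e-6           # burning-rate constant, m^2/s (assumed, typical order)

t = np.linspace(0.0, D0**2 / K, 200)   # time grid up to complete burnout
D2 = D0**2 - K * t                     # squared-diameter history

t_burn = D0**2 / K   # burnout time implied by the linear d-squared history
print(t_burn)        # prints the burnout time in seconds
```

A measured burning rate constant can be read off as the (negative) slope of the D² history; the experiments' increasing slope is exactly the non-linearity the abstract describes.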
Oke, Tobi A; Hager, Heather A
2017-01-01
The fate of Northern peatlands under climate change is important because of their contribution to global carbon (C) storage. Peatlands are maintained via greater plant productivity (especially of Sphagnum species) than decomposition, and the processes involved are strongly mediated by climate. Although some studies predict that warming will relax constraints on decomposition, leading to decreased C sequestration, others predict increases in productivity and thus increases in C sequestration. We explored the lack of congruence between these predictions using single-species and integrated species distribution models as proxies for understanding the environmental correlates of North American Sphagnum peatland occurrence and how projected changes to the environment might influence these peatlands under climate change. Using Maximum entropy and BIOMOD modelling platforms, we generated single and integrated species distribution models for four common Sphagnum species in North America under current climate and a 2050 climate scenario projected by three general circulation models. We evaluated the environmental correlates of the models and explored the disparities in niche breadth, niche overlap, and climate suitability among current and future models. The models consistently show that Sphagnum peatland distribution is influenced by the balance between soil moisture deficit and temperature of the driest quarter-year. The models identify the east and west coasts of North America as the core climate space for Sphagnum peatland distribution. The models show that, at least in the immediate future, the area of suitable climate for Sphagnum peatland could expand. This result suggests that projected warming would be balanced effectively by the anticipated increase in precipitation, which would increase Sphagnum productivity.
A 100-3000 GHz model of thermal dust emission observed by Planck, DIRBE and IRAS
NASA Astrophysics Data System (ADS)
Meisner, Aaron M.; Finkbeiner, Douglas P.
2015-01-01
We apply the Finkbeiner et al. (1999) two-component thermal dust emission model to the Planck HFI maps. This parametrization of the far-infrared dust spectrum as the sum of two modified blackbodies serves as an important alternative to the commonly adopted single modified blackbody (MBB) dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. (1999) based on FIRAS and DIRBE. We also derive full-sky 6.1' resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.1' FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration (2013) single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales. We have recently released maps and associated software utilities for obtaining thermal dust emission and reddening predictions using our Planck-based two-component model.
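The two-component parametrization is a sum of two modified blackbodies, each of the form ν^β B_ν(T), against a single-MBB alternative. A sketch of evaluating both spectral shapes follows; the temperatures, emissivity indices, and mixing weights are illustrative placeholders, not the fitted Finkbeiner et al. (1999) or Planck values:

```python
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def B_nu(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

nu = np.array([100e9, 217e9, 857e9, 3000e9])   # frequencies, Hz

# Single modified blackbody: nu^beta * B_nu(T)  (illustrative beta, T)
single = nu**1.6 * B_nu(nu, 19.0)

# Two-component sum: f1*nu^beta1*B_nu(T1) + f2*nu^beta2*B_nu(T2)
# (weights, indices, and temperatures are assumed for illustration)
two = 0.25 * nu**1.7 * B_nu(nu, 9.0) + 0.75 * nu**2.7 * B_nu(nu, 16.0)
```

The extra cold component is what lets the two-component shape flatten at millimeter wavelengths relative to a single MBB anchored near the spectral peak.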
SAbPred: a structure-based antibody prediction server
Dunbar, James; Krawczyk, Konrad; Leem, Jinwoo; Marks, Claire; Nowak, Jaroslaw; Regep, Cristian; Georges, Guy; Kelm, Sebastian; Popovic, Bojana; Deane, Charlotte M.
2016-01-01
SAbPred is a server that makes predictions of the properties of antibodies focusing on their structures. Antibody informatics tools can help improve our understanding of immune responses to disease and aid in the design and engineering of therapeutic molecules. SAbPred is a single platform containing multiple applications which can: number and align sequences; automatically generate antibody variable fragment homology models; annotate such models with estimated accuracy alongside sequence and structural properties including potential developability issues; predict paratope residues; and predict epitope patches on protein antigens. The server is available at http://opig.stats.ox.ac.uk/webapps/sabpred. PMID:27131379
Solid rocket booster performance evaluation model. Volume 1: Engineering description
NASA Technical Reports Server (NTRS)
1974-01-01
The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off-nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single-point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.
Identification of consensus biomarkers for predicting non-genotoxic hepatocarcinogens
Huang, Shan-Han; Tung, Chun-Wei
2017-01-01
The assessment of non-genotoxic hepatocarcinogens (NGHCs) currently relies on two-year rodent bioassays. Toxicogenomics biomarkers provide a potential alternative method for the prioritization of NGHCs that could be useful for risk assessment. However, previous studies using inconsistently classified chemicals as the training set and a single microarray dataset identified no consensus biomarkers. In this study, four consensus biomarkers (A2m, Ca3, Cxcl1, and Cyp8b1) were identified from four large-scale microarray datasets of the one-day single maximum tolerated dose and a large set of chemicals without inconsistent classifications. Machine learning techniques were subsequently applied to develop prediction models for NGHCs. The final bagging decision tree models were constructed with an average AUC performance of 0.803 for an independent test. A set of 16 chemicals with controversial classifications were reclassified according to the consensus biomarkers. The developed prediction models and identified consensus biomarkers are expected to be potential alternative methods for prioritization of NGHCs for further experimental validation. PMID:28117354
Localized magnetism in liquid Al80Mn20 alloys: A first-principles investigation
NASA Astrophysics Data System (ADS)
Jakse, N.; LeBacq, O.; Pasturel, A.
2006-04-01
We present first-principles investigations of the formation of magnetic moments in liquid Al80Mn20 alloys as a function of temperature. We predict the existence of large magnetic moments on Mn atoms which are close to that of the single-impurity limit. The wide distribution of moments can be understood in terms of fluctuations in the local environment. Our calculations also predict that thermal expansion effects within the single-impurity model mainly explain the striking increase of magnetism with temperature.
A Multi-wavenumber Theory for Eddy Diffusivities: Applications to the DIMES Region
NASA Astrophysics Data System (ADS)
Chen, R.; Gille, S. T.; McClean, J.; Flierl, G.; Griesel, A.
2014-12-01
Climate models are sensitive to the representation of ocean mixing processes. This has motivated recent efforts to collect observations aimed at improving mixing estimates and parameterizations. The US/UK field program Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES), begun in 2009, is providing such estimates upstream of and within the Drake Passage. This region is characterized by complex topography and strong zonal jets. In previous studies, mixing length theories, based on the assumption that eddies are dominated by a single wavenumber and phase speed, were formulated to represent the estimated mixing patterns in jets. However, despite the success of the single-wavenumber theory in some other scenarios, it does not effectively predict the vertical structures of observed eddy diffusivities in the DIMES area. Because eddy motions encompass a wide range of wavenumbers that all contribute to mixing, in this study we formulated a multi-wavenumber theory to predict eddy mixing rates. We tested our theory for a domain encompassing the entire Southern Ocean, estimating eddy diffusivities and mixing lengths from one million numerical floats in a global eddying model. These float-based mixing estimates were compared with the predictions from both the single-wavenumber and the multi-wavenumber theories. Our preliminary results in the DIMES area indicate that, compared to the single-wavenumber theory, the multi-wavenumber theory better predicts the vertical mixing structures in the vast areas where the mean flow is weak; however, in the intense jet region, both theories have similar predictive skill.
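The single-wavenumber idea can be sketched with the widely used suppressed mixing length form, κ(k) = κ₀ / (1 + k²(U − c)²/γ²), and a multi-wavenumber variant that averages the same expression over an assumed eddy energy spectrum. All parameter values and the spectrum shape below are illustrative assumptions, not the DIMES estimates:

```python
import numpy as np

K0 = 1000.0                    # unsuppressed diffusivity, m^2/s (assumed)
U, c = 0.10, 0.02              # mean flow and dominant eddy phase speed, m/s
gamma = 1.0 / (4.0 * 86400.0)  # eddy decorrelation rate, 1/s (assumed)

def kappa_single(k):
    """Single-wavenumber suppressed diffusivity for wavenumber k (rad/m)."""
    return K0 / (1.0 + k**2 * (U - c)**2 / gamma**2)

# Multi-wavenumber sketch: weight kappa(k) by a normalized eddy spectrum E(k)
k = np.linspace(1e-6, 1e-4, 500)   # wavenumber grid, rad/m
dk = k[1] - k[0]
w = k**-3.0                        # assumed spectral shape (un-normalized)
E = w / (np.sum(w) * dk)           # normalize so that sum(E)*dk = 1
kappa_multi = np.sum(E * kappa_single(k)) * dk
```

Because the mean-flow suppression factor depends on k, spreading the eddy energy across wavenumbers generally yields a different (and often weaker) suppression than assuming a single dominant wavenumber.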
Dynamic Modeling, Controls, and Testing for Electrified Aircraft
NASA Technical Reports Server (NTRS)
Connolly, Joseph; Stalcup, Erik
2017-01-01
Electrified aircraft have the potential to provide significant benefits for efficiency and emissions reductions. To assess these potential benefits, modeling tools are needed to provide rapid evaluation of diverse concepts and to ensure safe operability and peak performance over the mission. The modeling challenge for these vehicles is the ability to show significant benefits over the current highly refined aircraft systems. The STARC-ABL (single-aisle turbo-electric aircraft with an aft boundary layer propulsor) is a new test proposal that builds upon previous N3-X team hybrid designs. This presentation describes the STARC-ABL concept, the NASA Electric Aircraft Testbed (NEAT) which will allow testing of the STARC-ABL powertrain, and the related modeling and simulation efforts to date. Modeling and simulation includes a turbofan simulation, Numeric Propulsion System Simulation (NPSS), which has been integrated with NEAT; and a power systems and control model for predicting testbed performance and evaluating control schemes. Model predictions provide good comparisons with testbed data for an NPSS-integrated test of the single-string configuration of NEAT.
Docking and scoring protein interactions: CAPRI 2009.
Lensink, Marc F; Wodak, Shoshana J
2010-11-15
Protein docking algorithms are assessed by evaluating blind predictions performed during 2007-2009 in Rounds 13-19 of the community-wide experiment on critical assessment of predicted interactions (CAPRI). We evaluated the ability of these algorithms to sample docking poses and to single out specific association modes in 14 targets, representing 11 distinct protein complexes. These complexes play important biological roles in RNA maturation, G-protein signal processing, and enzyme inhibition and function. One target involved protein-RNA interactions not previously considered in CAPRI; several others were hetero-oligomers, or featured multiple interfaces between the same protein pair. For most targets, predictions started from the experimentally determined structures of the free (unbound) components, or from models built from known structures of related or similar proteins. To succeed they therefore needed to account for conformational changes and model inaccuracies. In total, 64 groups and 12 web-servers submitted docking predictions of which 4420 were evaluated. Overall, our assessment reveals that 67% of the groups, more than ever before, produced acceptable models or better for at least one target, with many groups submitting multiple high- and medium-accuracy models for two to six targets. Forty-one groups including four web-servers participated in the scoring experiment with 1296 evaluated models. Scoring predictions also show signs of progress, as evidenced by the large proportion of correct models submitted. But singling out the best models remains a challenge, which also adversely affects the ability to correctly rank docking models. With the increased interest in translating abstract protein interaction networks into realistic models of protein assemblies, the growing CAPRI community is actively developing more efficient and reliable docking and scoring methods for everyone to use. © 2010 Wiley-Liss, Inc.
Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends
Attanasi, E.D.; Coburn, T.C.
2009-01-01
This study extends the application of local spatial nonparametric prediction models to the estimation of recoverable gas volumes in continuous-type gas plays to regimes where there is a single geologic trend. A transformation is presented, originally proposed by Tomczak, that offsets the distortions caused by the trend. This article reports on numerical experiments that compare predictive and classification performance of the local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance in the presence of trends are reasonably robust. The tests based on other model performance metrics such as prediction error associated with the high-grade tracts and the ability of the models to identify sites with the largest gas volumes also demonstrate the robustness of both local modeling approaches. ?? International Association for Mathematical Geology 2009.
Comparison of simulator fidelity model predictions with in-simulator evaluation data
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.
1983-01-01
A full-factorial in-simulator experiment on a single-axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed loop model of a real time digital simulation facility. The results of the experiment encompassing various simulation fidelity factors, such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than were predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.
Calculation of Non-Bonded Forces Due to Sliding of Bundled Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Frankland, S. J. V.; Bandorawalla, T.; Gates, T. S.
2003-01-01
An important consideration for load transfer in bundles of single-walled carbon nanotubes is the nonbonded (van der Waals) forces between the nanotubes and their effect on axial sliding of the nanotubes relative to each other. In this research, the non-bonded forces in a bundle of seven hexagonally packed (10,10) single-walled carbon nanotubes are represented as an axial force applied to the central nanotube. A simple model, based on momentum balance, is developed to describe the velocity response of the central nanotube to the applied force. The model is verified by comparing its velocity predictions with molecular dynamics simulations that were performed on the bundle with different force histories applied to the central nanotube. The model was found to quantitatively predict the nanotube velocities obtained from the molecular dynamics simulations. Both the model and the simulations predict a threshold force at which the nanotube releases from the bundle. This force converts to a shear yield strength of 10.5-11.0 MPa for (10,10) nanotubes in a bundle.
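A momentum-balance model of this kind can be sketched as a force threshold plus Newton's second law: below the threshold the non-bonded interactions restrain the tube, above it the net force accelerates it. Every number below (mass, threshold, time step) is an illustrative assumption, not the paper's MD-derived values:

```python
import numpy as np

m = 2.0e-21          # mass of the central nanotube, kg (assumed)
F_thresh = 1.0e-10   # non-bonded release threshold force, N (assumed)
dt = 1.0e-15         # integration time step, s

def velocity_history(F_applied):
    """Integrate dv/dt = F_net/m for an applied axial force history."""
    v, vs = 0.0, []
    for F in F_applied:
        # Non-bonded forces restrain the tube up to the threshold
        F_net = F - min(abs(F), F_thresh) * np.sign(F)
        v += F_net / m * dt
        vs.append(v)
    return np.array(vs)

# Ramp the applied force past the threshold: the tube stays put until the
# threshold is exceeded, then releases and accelerates
F_ramp = np.linspace(0.0, 2.0e-10, 1000)
v = velocity_history(F_ramp)
```

The threshold force is the quantity the paper converts to a shear yield strength by dividing by the nanotube contact area.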
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. The scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to perform better than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They also can be used on the evaluation of images with authentic distortions. The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
NASA Astrophysics Data System (ADS)
Stas, Michiel; Dong, Qinghan; Heremans, Stien; Zhang, Beier; Van Orshoven, Jos
2016-08-01
This paper compares two machine learning techniques to predict regional winter wheat yields. The models, based on Boosted Regression Trees (BRT) and Support Vector Machines (SVM), are constructed from Normalized Difference Vegetation Indices (NDVI) derived from low resolution SPOT VEGETATION satellite imagery. Three types of NDVI-related predictors were used: Single NDVI, Incremental NDVI and Targeted NDVI. BRT and SVM were first used to select features with high relevance for predicting the yield. Although the exact selections differed between the prefectures, certain periods with high influence scores for multiple prefectures could be identified. The same period of high influence, stretching from March to June, was detected by both machine learning methods. After feature selection, BRT and SVM models were applied to the subset of selected features for actual yield forecasting. Whereas both machine learning methods returned very low prediction errors, BRT seems to slightly but consistently outperform SVM.
Comparison of two gas chromatograph models and analysis of binary data
NASA Technical Reports Server (NTRS)
Keba, P. S.; Woodrow, P. T.
1972-01-01
The overall objective of the gas chromatograph system studies is to generate fundamental design criteria and techniques to be used in the optimum design of the system. The particular tasks currently being undertaken are the comparison of two mathematical models of the chromatograph and the analysis of binary system data. The predictions of two mathematical models, an equilibrium absorption model and a non-equilibrium absorption model exhibit the same weaknesses in their inability to predict chromatogram spreading for certain systems. The analysis of binary data using the equilibrium absorption model confirms that, for the systems considered, superposition of predicted single component behaviors is a first order representation of actual binary data. Composition effects produce non-idealities which limit the rigorous validity of superposition.
Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun
2007-09-01
Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of different sample roughness on the mathematical model of NIR quantitative analysis of wood density was studied. The experimental results showed that if the roughness of the predicted samples was consistent with that of the calibrated samples, the result was good; otherwise the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness. The prediction ability of the roughness-mixed model was much better than that of the single-roughness model.
Jang, Sumin; Choubey, Sandeep; Furchtgott, Leon; Zou, Ling-Nan; Doyle, Adele; Menon, Vilas; Loew, Ethan B; Krostag, Anne-Rachel; Martinez, Refugio A; Madisen, Linda; Levi, Boaz P; Ramanathan, Sharad
2017-01-01
The complexity of gene regulatory networks that lead multipotent cells to acquire different cell fates makes a quantitative understanding of differentiation challenging. Using a statistical framework to analyze single-cell transcriptomics data, we infer the gene expression dynamics of early mouse embryonic stem (mES) cell differentiation, uncovering discrete transitions across nine cell states. We validate the predicted transitions across discrete states using flow cytometry. Moreover, using live-cell microscopy, we show that individual cells undergo abrupt transitions from a naïve to primed pluripotent state. Using the inferred discrete cell states to build a probabilistic model for the underlying gene regulatory network, we further predict and experimentally verify that these states have unique response to perturbations, thus defining them functionally. Our study provides a framework to infer the dynamics of differentiation from single cell transcriptomics data and to build predictive models of the gene regulatory networks that drive the sequence of cell fate decisions during development. DOI: http://dx.doi.org/10.7554/eLife.20487.001 PMID:28296635
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
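Linear prediction exploits the fact that a uniformly sampled sum of p decaying exponentials obeys a p-term linear recurrence; solving for the recurrence coefficients in a least-squares sense (the solver uses the SVD internally) and rooting the prediction polynomial recovers the decay constants. A noiseless two-exponential sketch with assumed time constants, not data from any recording:

```python
import numpy as np

# Sample a sum of two exponentials on a uniform grid (assumed tau = 2 and 8)
dt = 0.5
t = np.arange(0.0, 40.0, dt)
y = 0.7 * np.exp(-t / 2.0) + 0.3 * np.exp(-t / 8.0)

# Linear prediction of order 2: y[n] = c1*y[n-1] + c2*y[n-2]
A = np.column_stack([y[1:-1], y[:-2]])
b = y[2:]
coef, *_ = np.linalg.lstsq(A, b, rcond=None)   # SVD-based least squares

# Roots of the prediction polynomial z^2 - c1*z - c2 give z_i = exp(-dt/tau_i)
z = np.roots(np.r_[1.0, -coef])
tau = -dt / np.log(z.real)
print(np.sort(tau))    # ≈ [2., 8.]
```

With noisy dwell-time distributions, the singular value spectrum of the prediction matrix also indicates how many exponential components the data actually support, which is the model-order question the abstract addresses.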
A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine.
Ramakrishnan, Sridhar; Wesensten, Nancy J; Kamimori, Gary H; Moon, James E; Balkin, Thomas J; Reifman, Jaques
2016-10-01
Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) to the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. © 2016 Associated Professional Sleep Societies, LLC.
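The multiplicative structure described above can be sketched schematically: predicted impairment without caffeine is scaled by a dose-dependent factor driven by a standard one-compartment pharmacokinetic profile. The factor's functional form and all parameter values below are illustrative assumptions; the published UMP's parameterization differs in detail:

```python
import numpy as np

# Schematic caffeine effect: P_caffeine(t) = g(t) * P0(t),
# with g(t) = 1 / (1 + M * C(t)) and C(t) from one-compartment kinetics
ka, ke = 2.0, 0.14    # absorption / elimination rate constants, 1/h (assumed)
M = 0.01              # effect scale per mg of circulating dose (assumed)
dose = 200.0          # mg, matching one of the study's dose levels

t = np.linspace(0.0, 12.0, 121)   # hours after dosing
C = dose * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))
g = 1.0 / (1.0 + M * C)           # multiplicative caffeine factor, <= 1

P0 = 10.0 + 2.0 * t   # hypothetical rising impairment (e.g., PVT lapses)
P_caff = g * P0       # caffeine multiplicatively reduces predicted impairment
```

Because g(t) returns to 1 as the drug is eliminated, the caffeinated prediction converges back to the no-caffeine curve, which is the qualitative behavior a multiplicative factor is meant to capture.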
Protein (multi-)location prediction: utilizing interdependencies via a generative model
Shatkay, Hagit
2015-01-01
Motivation: Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein’s function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. Results: We introduce a probabilistic generative model for protein localization, and develop a system based on it—which we call MDLoc—that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. Availability and implementation: MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. Contact: shatkay@udel.edu. PMID:26072505
Protein (multi-)location prediction: utilizing interdependencies via a generative model.
Simha, Ramanuja; Briesemeister, Sebastian; Kohlbacher, Oliver; Shatkay, Hagit
2015-06-15
Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein's function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. We introduce a probabilistic generative model for protein localization, and develop a system based on it-which we call MDLoc-that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. © The Author 2015. Published by Oxford University Press.
Beyond Atomic Sizes and Hume-Rothery Rules: Understanding and Predicting High-Entropy Alloys
Troparevsky, M. Claudia; Morris, James R.; Daene, Markus; ...
2015-09-03
High-entropy alloys constitute a new class of materials that provide an excellent combination of strength, ductility, thermal stability, and oxidation resistance. Although they have attracted extensive attention due to their potential applications, little is known about why these compounds are stable or how to predict which combination of elements will form a single phase. Here, we present a review of the latest research done on these alloys, focusing on the theoretical models devised during the last decade. We discuss semiempirical methods based on the Hume-Rothery rules and stability criteria based on enthalpies of mixing and size mismatch. To provide insights into the electronic and magnetic properties of high-entropy alloys, we show the results of first-principles calculations of the electronic structure of the disordered solid-solution phase based on both the Korringa-Kohn-Rostoker coherent potential approximation and large supercell models of example face-centered cubic and body-centered cubic systems. Furthermore, we discuss in detail a model based on enthalpy considerations that can predict which elemental combinations are most likely to form a single-phase high-entropy alloy. The enthalpies are evaluated via first-principles high-throughput density functional theory calculations of the energies of formation of binary compounds, and therefore the model requires no experimental or empirically derived input. Finally, the model correctly accounts for the specific combinations of metallic elements that are known to form single-phase alloys while rejecting similar combinations that have been tried and shown not to be single phase.
Gofrit, Ofer N; Orvieto, Marcelo A; Zorn, Kevin C; Steinberg, Gary D; Zagaja, Gregory P; Shalhav, Arieh L
2009-02-01
Single renal unit models are invaluable for studies in renal physiology, transplantation and response to ischemic injury. Glomerular filtration rate (GFR) is commonly used for evaluation of renal function. Measuring the GFR involves relatively complicated and expensive systems. In this study we determined whether serum creatinine (Scr) can predict the GFR in this model. Right laparoscopic nephrectomy was performed in 46 female pigs weighing 25-30 kg. Twelve days later the left kidney was exposed to various periods of warm ischemia (30, 60, 90, and 120 minutes). Scr and GFR (using the iohexol clearance method) were determined preoperatively and at postoperative days 1, 3, 8, 15, 22 and 29. A total of 244 pairs of Scr and GFR values were analyzed to determine a formula for predicting GFR (pGFR) from Scr. The Scr range was 1.2-29 mg/dl and the GFR range was 1.8-180.5 ml/min. The empiric formula deduced from the database for calculating pGFR from Scr was: pGFR = (217/Scr) - 0.2. pGFR correlated well with the actual GFR (R(2) = 0.85). The graphs for pGFR were almost indistinguishable from the graphs for actual GFR in every single animal. The results and conclusions of the experiments using either actual or predicted GFR were identical. We conclude that in a single renal unit porcine model using ischemia as the insult to the kidney, expensive actual measurements of GFR can be reliably replaced by Scr based calculated GFR.
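The abstract's empirical formula can be wrapped in a one-line helper (the function name is ours; Scr in mg/dl, pGFR in ml/min):

```python
# pGFR = 217/Scr - 0.2, the empirical relation reported in the abstract.
def predicted_gfr(scr_mg_dl):
    return 217.0 / scr_mg_dl - 0.2

print(predicted_gfr(1.2))   # near-normal creatinine, high GFR
print(predicted_gfr(29.0))  # severe impairment, very low GFR
```

Note that the endpoints of the reported Scr range (1.2 and 29 mg/dl) map close to the endpoints of the reported GFR range (180.5 and 1.8 ml/min), which is consistent with the R(2) = 0.85 fit.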
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Prakash, Indra; Tien Bui, Dieu
2018-02-01
A hybrid machine learning approach of Random Subspace (RSS) and Classification And Regression Trees (CART) is proposed to develop a model named RSSCART for spatial prediction of landslides. This model is a combination of the RSS method, which is known as an efficient ensemble technique, and CART, which is a state-of-the-art classifier. The Luc Yen district of Yen Bai province, a prominent landslide prone area of Viet Nam, was selected for the model development. Performance of the RSSCART model was evaluated through the Receiver Operating Characteristic (ROC) curve, statistical analysis methods, and the Chi Square test. Results were compared with other benchmark landslide models, namely Support Vector Machines (SVM), single CART, Naïve Bayes Trees (NBT), and Logistic Regression (LR). In the development of the model, ten important landslide-affecting factors related to geomorphology, geology and the geo-environment were considered, namely slope angle, elevation, slope aspect, curvature, lithology, distance to faults, distance to rivers, distance to roads, and rainfall. Performance of the RSSCART model (AUC = 0.841) was the best compared with the other landslide models, namely SVM (0.835), single CART (0.822), NBT (0.821), and LR (0.723). These results indicate that the RSSCART model is a promising method for spatial landslide prediction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232
2016-08-21
Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.
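A minimal sketch of the finite state projection idea (illustrative rates and truncation, not the authors' likelihood bounds): truncating the chemical master equation of a birth-death gene expression model to states 0..N lets probability leak out of the retained space, so the retained mass is at most 1 and the deficit 1 - sum(P) bounds the truncation error.

```python
import numpy as np

# Birth-death model: production rate k, per-molecule degradation rate g.
k, g, N = 10.0, 1.0, 40           # hypothetical rates; truncate at N

# Generator of the truncated CME, dP/dt = A @ P. Births out of state N
# have no destination inside the projection, so total mass decays.
A = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    A[n, n] -= k + n * g          # outflow: birth and death propensities
    if n < N:
        A[n + 1, n] += k          # birth n -> n+1
    if n > 0:
        A[n - 1, n] += n * g      # death n -> n-1

P = np.zeros(N + 1); P[0] = 1.0   # start with zero molecules
dt, T = 1e-3, 5.0
for _ in range(int(T / dt)):      # forward Euler on the truncated CME
    P = P + dt * (A @ P)

err_bound = 1.0 - P.sum()         # leaked probability = truncation bound
print(err_bound)
```

Here the stationary mean (k/g = 10) sits far below the truncation N = 40, so the leaked mass is tiny; shrinking N inflates the bound, which is the trade-off the monotone convergence result formalizes.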
Buragohain, Poly; Garg, Ankit; Feng, Song; Lin, Peng; Sreedeep, S
2018-09-01
The concept of the sponge city has become very popular, with major thrust on the design of waste containment systems such as biofilters and green roofs. Factors that may influence pollutant ion retention in these systems include soil type and ion interactions. This study investigated single and competitive interaction of copper in two soils and its influence on fate prediction. Freundlich and Langmuir nonlinear isotherms were selected to quantify the retention results. A series of numerical simulations was conducted to model one-dimensional advection-dispersion transport for the two soils and analyse the role of the isotherms. The results indicated that contaminant fate prediction of copper-soil interaction based on the two nonlinear isotherms differed for both single interaction and that in competition. The retardation factor obtained from the Freundlich isotherm (R_F) predicts higher values than that from the Langmuir isotherm (R_La). This observation is more explicit at the higher range of equilibrium concentration. Fate prediction based on retardation values obtained from retention isotherms exhibited some anomalous trends contradicting the experimental findings, due to inherent assumptions in the governing equations. The necessity of an approximate assessment of contaminant concentration in the field to effectively use contaminant retention results for accurate fate prediction is highlighted here. The study is important for modellers in the design or analysis of biofilter systems (sponge city), where multiple ions tend to exist in waste water. Copyright © 2018 Elsevier B.V. All rights reserved.
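The R_F versus R_La comparison follows directly from the isotherm derivatives, since the retardation factor is R = 1 + (rho_b/theta) * dS/dC. The sketch below uses hypothetical coefficients (the abstract reports trends, not fitted values) to show the Freundlich factor exceeding the Langmuir one at high equilibrium concentration.

```python
# Hypothetical soil and isotherm parameters for illustration only.
rho_b, theta = 1.4, 0.4           # bulk density (g/cm^3), water content
Kf, n = 2.0, 0.7                  # Freundlich: S = Kf * C**n
Smax, b = 5.0, 0.5                # Langmuir:   S = Smax*b*C / (1 + b*C)

def r_freundlich(C):
    # dS/dC = Kf * n * C**(n-1)
    return 1.0 + (rho_b / theta) * Kf * n * C ** (n - 1.0)

def r_langmuir(C):
    # dS/dC = Smax * b / (1 + b*C)**2
    return 1.0 + (rho_b / theta) * Smax * b / (1.0 + b * C) ** 2

for C in (0.5, 5.0, 50.0):        # equilibrium concentration (mg/L)
    print(C, round(r_freundlich(C), 2), round(r_langmuir(C), 2))
```

The Langmuir derivative collapses as 1/(1 + bC)^2 once sorption sites saturate, while the Freundlich derivative decays only as C^(n-1), which is why the gap widens at higher concentrations, matching the trend the abstract describes.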
Tensile Strength of Carbon Nanotubes Under Realistic Temperature and Strain Rate
NASA Technical Reports Server (NTRS)
Wei, Chen-Yu; Cho, Kyeong-Jae; Srivastava, Deepak; Biegel, Bryan (Technical Monitor)
2002-01-01
Strain rate and temperature dependence of the tensile strength of single-wall carbon nanotubes has been investigated with molecular dynamics simulations. The tensile failure or yield strain is found to be strongly dependent on the temperature and strain rate. A transition state theory based predictive model is developed for the tensile failure of nanotubes. Based on the parameters fitted from high-strain rate and temperature dependent molecular dynamics simulations, the model predicts that a defect free micrometer long single-wall nanotube at 300 K, stretched with a strain rate of 1%/hour, fails at about 9 plus or minus 1% tensile strain. This is in good agreement with recent experimental findings.
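A schematic version of the transition-state-theory argument (all parameter values below are hypothetical, not the paper's fitted constants): defects nucleate at an Arrhenius rate whose barrier is lowered by strain, and under a constant strain rate the tube fails once the accumulated nucleation probability integrates to about one. Slower straining leaves more time at each strain level, so the predicted failure strain drops as the strain rate decreases.

```python
import math

KB = 8.617e-5  # Boltzmann constant, eV/K

def failure_strain(strain_rate_per_s, T=300.0,
                   nu=1e13, E0=3.0, alpha=20.0):
    """Integrate r = nu*exp(-(E0 - alpha*strain)/(KB*T)) until the
    cumulative nucleation probability reaches ~1 (hypothetical
    attempt frequency nu, barrier E0 in eV, strain coupling alpha)."""
    strain, accum = 0.0, 0.0
    dt = 1e-4 / strain_rate_per_s          # advance strain in 1e-4 steps
    while accum < 1.0 and strain < 1.0:
        strain += strain_rate_per_s * dt
        accum += nu * math.exp(-(E0 - alpha * strain) / (KB * T)) * dt
    return strain

fast = failure_strain(1e-2)                # 1%/s
slow = failure_strain(1e-6)                # roughly 0.36%/hour
print(round(fast, 3), round(slow, 3))
```

The logarithmic dependence of the failure strain on strain rate is what allows the paper's fit to high-rate molecular dynamics data to be extrapolated down to laboratory rates such as 1%/hour.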
Naghibi Beidokhti, Hamid; Janssen, Dennis; van de Groes, Sebastiaan; Hazrati, Javad; Van den Boogaard, Ton; Verdonschot, Nico
2017-12-08
In finite element (FE) models, knee ligaments can be represented either by a group of one-dimensional springs or by three-dimensional continuum elements based on segmentations. Continuum models more closely approximate the anatomy and facilitate ligament wrapping, while spring models are computationally less expensive. The mechanical properties of ligaments can be based on literature, or adjusted specifically for the subject. In the current study we investigated the effect of ligament modelling strategy on the predictive capability of FE models of the human knee joint. The effect of literature-based versus specimen-specific optimized material parameters was evaluated. Experiments were performed on three human cadaver knees, which were modelled in FE models with ligaments represented either using springs or using continuum representations. In the spring representation, collateral ligaments were each modelled with three single-element bundles and cruciate ligaments with two. Stiffness parameters and pre-strains were optimized based on laxity tests for both approaches. Validation experiments were conducted to evaluate the outcomes of the FE models. Models (both spring and continuum) with subject-specific properties improved the predicted kinematics and contact outcome parameters. Models incorporating literature-based parameters, and particularly the spring models (with the representations implemented in this study), led to relatively high errors in kinematics and contact pressures. Using a continuum modelling approach resulted in more accurate contact outcome variables than the spring representation with two (cruciate ligaments) and three (collateral ligaments) single-element-bundle representations. However, when the prediction of joint kinematics is of main interest, spring ligament models provide a faster option with acceptable outcome. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zoellner, Jamie M.; Porter, Kathleen J.; Chen, Yvonnes; Hedrick, Valisa E.; You, Wen; Hickman, Maja; Estabrooks, Paul A.
2017-01-01
Objective Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving SSB behaviours. Using SIPsmartER data, this study explores prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Design Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data of 155 intervention participants. Main Outcome Measures Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. Results TPB constructs explained 32% of the variance in BI cross-sectionally and 20% prospectively, and explained 13–20% of the variance in behaviour cross-sectionally and 6% prospectively. Single-item scale models were significant, yet explained less variance. All IVR models predicting BI (average 21%, range 6–38%) and behaviour (average 30%, range 6–55%) were significant. Conclusion Findings are interpreted in the context of other cross-sectional, prospective and experimental TPB health and dietary studies. Findings advance experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation and evaluation phases. PMID:28165771
Srinivas, N R
2016-08-01
Linear regression models utilizing a single time point (Cmax) have been reported for pravastatin and simvastatin. A new model was developed for the prediction of the AUC of statins that utilized the slopes of the above 2 models, with pharmacokinetic (Cmax) and pharmacodynamic (IC50 value) components for the statins. The prediction of AUCs for various statins (pravastatin, atorvastatin, simvastatin and rosuvastatin) was carried out using the newly developed dual pharmacokinetic and pharmacodynamic model. Generally, the AUC predictions were contained within a 0.5- to 2-fold difference of the observed AUC, suggesting utility of the new models. The root mean square error predictions were <45% for the 2 models. On the basis of the present work, it is feasible to utilize both pharmacokinetic (Cmax) and pharmacodynamic (IC50) data for effectively predicting the AUC for statins. Such a concept, as described in this work, may have utility in both drug discovery and development stages. © Georg Thieme Verlag KG Stuttgart · New York.
Buchanan, Verica; Lu, Yafeng; McNeese, Nathan; Steptoe, Michael; Maciejewski, Ross; Cooke, Nancy
2017-03-01
Historically, domains such as business intelligence would require a single analyst to engage with data, develop a model, answer operational questions, and predict future behaviors. However, as the problems and domains become more complex, organizations are employing teams of analysts to explore and model data to generate knowledge. Furthermore, given the rapid increase in data collection, organizations are struggling to develop practices for intelligence analysis in the era of big data. Currently, a variety of machine learning and data mining techniques are available to model data and to generate insights and predictions, and developments in the field of visual analytics have focused on how to effectively link data mining algorithms with interactive visuals to enable analysts to explore, understand, and interact with data and data models. Although studies have explored the role of single analysts in the visual analytics pipeline, little work has explored the role of teamwork and visual analytics in the analysis of big data. In this article, we present an experiment integrating statistical models, visual analytics techniques, and user experiments to study the role of teamwork in predictive analytics. We frame our experiment around the analysis of social media data for box office prediction problems and compare the prediction performance of teams, groups, and individuals. Our results indicate that a team's performance is mediated by the team's characteristics such as openness of individual members to others' positions and the type of planning that goes into the team's analysis. These findings have important implications for how organizations should create teams in order to make effective use of information from their analytic models.
Time-Dependent Traveling Wave Tube Model for Intersymbol Interference Investigations
2001-06-01
band is 5.7 degrees. C. Differences between broadband and single-tone excitations The TWT characteristics are compared when excited by single-tones...direct description of the effects of the TWT on modulated digital signals. The TWT model comprehensively takes into account the effects of frequency...of the high power amplifier and the operational digital signal. This method promises superior predictive fidelity compared to methods using TWT
Landscape models of brook trout abundance and distribution in lotic habitat with field validation
McKenna, James E.; Johnson, James H.
2011-01-01
Brook trout Salvelinus fontinalis are native fish in decline owing to environmental changes. Predictions of their potential distribution and a better understanding of their relationship to habitat conditions would enhance the management and conservation of this valuable species. We used over 7,800 brook trout observations throughout New York State and georeferenced, multiscale landscape condition data to develop four regionally specific artificial neural network models to predict brook trout abundance in rivers and streams. Land cover data provided a general signature of human activity, but other habitat variables were resistant to anthropogenic changes (i.e., changing on a geological time scale). The resulting models predict the potential for any stream to support brook trout. The models were validated by holding 20% of the data out as a test set and by comparison with additional field collections from a variety of habitat types. The models performed well, explaining more than 90% of data variability. Errors were often associated with small spatial displacements of predicted values. When compared with the additional field collections (39 sites), 92% of the predictions were off by only a single class from the field-observed abundances. Among “least-disturbed” field collection sites, all predictions were correct or off by a single abundance class, except for one where brown trout Salmo trutta were present. Other degrading factors were evident at most sites where brook trout were absent or less abundant than predicted. The most important habitat variables included landscape slope, stream and drainage network sizes, water temperature, and extent of forest cover. Predicted brook trout abundances were applied to all New York streams, providing a synoptic map of the distribution of brook trout habitat potential. 
These fish models set benchmarks of best potential for streams to support brook trout under broad-scale human influences and can assist with planning and identification of protection or rehabilitation sites.
NASA Astrophysics Data System (ADS)
Joshi, Pranit Satish; Mahapatra, Pallab Sinha; Pattamatta, Arvind
2017-12-01
Experiments and numerical simulation of natural convection heat transfer with nanosuspensions are presented in this work. The investigations are carried out for three different types of nanosuspensions: namely, spherical-based (alumina/water), tubular-based (multi-walled carbon nanotube/water), and flake-based (graphene/water). A comparison with in-house experiments is made for all three nanosuspensions at different volume fractions and for Rayleigh numbers in the range of 7 × 10^5 to 1 × 10^7. Different models such as single component homogeneous, single component non-homogeneous, and multicomponent non-homogeneous are used in the present study. From the present numerical investigation, it is observed that for the lower volume fractions (~0.1%) of nanosuspensions considered, single component models are in close agreement with the experimental results. Single component models, which are based on the effective properties of the nanosuspensions alone, can predict heat transfer characteristics very well within the experimental uncertainty. For higher volume fractions (~0.5%), the multicomponent model predicts results closer to the experimental observation, as it incorporates a drag-based slip force which becomes prominent. The enhancement observed at lower volume fractions for non-spherical particles is attributed to percolation chain formation, which perturbs the boundary layer and thereby increases the local Nusselt number values.
Knowledge-driven genomic interactions: an application in ovarian cancer.
Kim, Dokyoon; Li, Ruowang; Dudek, Scott M; Frase, Alex T; Pendergrass, Sarah A; Ritchie, Marylyn D
2014-01-01
Effective cancer clinical outcome prediction for understanding of the mechanism of various types of cancer has been pursued using molecular-based data such as gene expression profiles, an approach that has promise for providing better diagnostics and supporting further therapies. However, clinical outcome prediction based on gene expression profiles varies between independent data sets. Further, single-gene expression outcome prediction is limited for cancer evaluation since genes do not act in isolation, but rather interact with other genes in complex signaling or regulatory networks. In addition, since pathways are more likely to co-operate together, it would be desirable to incorporate expert knowledge to combine pathways in a useful and informative manner. Thus, we propose a novel approach for identifying knowledge-driven genomic interactions and applying it to discover models associated with cancer clinical phenotypes using grammatical evolution neural networks (GENN). In order to demonstrate the utility of the proposed approach, ovarian cancer data from The Cancer Genome Atlas (TCGA) were used for predicting clinical stage as a pilot project. We identified knowledge-driven genomic interactions associated with cancer stage from single knowledge bases such as sources of pathway-pathway interaction, but also knowledge-driven genomic interactions across different sets of knowledge bases such as pathway-protein family interactions by integrating different types of information. Notably, an integration model from different sources of biological knowledge achieved 78.82% balanced accuracy and outperformed the top models with gene expression or single knowledge-based data types alone. Furthermore, the results from the models are more interpretable because they are framed in the context of specific biological pathways or other expert knowledge.
The success of the pilot study we have presented herein will allow us to pursue further identification of models predictive of clinical cancer survival and recurrence. Understanding the underlying tumorigenesis and progression in ovarian cancer through the global view of interactions within/between different biological knowledge sources has the potential for providing more effective screening strategies and therapeutic targets for many types of cancer.
Simulation model of a single-stage lithium bromide-water absorption cooling unit
NASA Technical Reports Server (NTRS)
Miao, D.
1978-01-01
A computer model of a LiBr-H2O single-stage absorption machine was developed. The model, utilizing a given set of design data such as water-flow rates and inlet or outlet temperatures of these flow rates but without knowing the interior characteristics of the machine (heat transfer rates and surface areas), can be used to predict or simulate off-design performance. Results from 130 off-design cases for a given commercial machine agree with the published data within 2 percent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrada Rodas, Ernesto A.; Neu, Richard W.
2017-09-11
A crystal viscoplasticity (CVP) model for the creep-fatigue interactions of nickel-base superalloy CMSX-8 is proposed. At the microstructure scale of relevance, the superalloys are a composite material comprised of a γ phase and a γ' strengthening phase with unique deformation mechanisms that are highly dependent on temperature. Considering the differences in the deformation of the individual material phases is paramount to predicting the deformation behavior of superalloys at a wide range of temperatures. In this work, we account for the relevant deformation mechanisms that take place in both material phases by utilizing two additive strain rates to model the deformation on each material phase. The model is capable of representing the creep-fatigue interactions in single-crystal superalloys for realistic 3-dimensional components in an Abaqus User Material Subroutine (UMAT). Using a set of material parameters calibrated to superalloy CMSX-8, the model predicts creep-fatigue, fatigue and thermomechanical fatigue behavior of this single-crystal superalloy. Finally, a sensitivity study of the material parameters is performed to explore the effect on the deformation due to changes in the material parameters relevant to the microstructure.
Separation of time scales in one-dimensional directed nucleation-growth processes
NASA Astrophysics Data System (ADS)
Pierobon, Paolo; Miné-Hattab, Judith; Cappello, Giovanni; Viovy, Jean-Louis; Lagomarsino, Marco Cosentino
2010-12-01
Proteins involved in homologous recombination such as RecA and hRad51 polymerize on single- and double-stranded DNA according to a nucleation-growth kinetics, which can be monitored by single-molecule in vitro assays. The basic models currently used to extract biochemical rates rely on ensemble averages and are typically based on an underlying process of bidirectional polymerization, in contrast with the often observed anisotropic polymerization of similar proteins. For these reasons, if one considers single-molecule experiments, the available models are useful to understand observations only in some regimes. In particular, recent experiments have highlighted a steplike polymerization kinetics. The classical model of one-dimensional nucleation growth, the Kolmogorov-Avrami-Mehl-Johnson (KAMJ) model, predicts the correct polymerization kinetics only in some regimes and fails to predict the steplike behavior. This work illustrates by simulations and analytical arguments the limitation of applicability of the KAMJ description and proposes a minimal model for the statistics of the steps based on the so-called stick-breaking stochastic process. We argue that this insight might be useful to extract information on the time and length scales involved in the polymerization kinetics.
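The stick-breaking process invoked above for the step statistics can be sketched in a few lines (an illustration of the stochastic process, not the authors' analysis code): each nucleation event covers a random fraction U of the DNA still unoccupied, so step sizes are x_i = U_i * prod_{j<i}(1 - U_j), and expected step sizes shrink geometrically, producing the steplike coverage curve that a mean-field KAMJ description averages away.

```python
import random

random.seed(7)

def stick_breaking_steps(n_steps):
    """Sample step sizes from a stick-breaking process with
    U ~ Uniform(0, 1) on the remaining uncovered fraction."""
    remaining, steps = 1.0, []
    for _ in range(n_steps):
        u = random.random()
        steps.append(u * remaining)   # fraction covered by this event
        remaining *= 1.0 - u          # what is left for later events
    return steps

steps = stick_breaking_steps(10)
print([round(s, 3) for s in steps])
print(round(sum(steps), 3))           # total coverage stays below 1
```

Other choices for the distribution of U (e.g. Beta rather than Uniform) change the step-size statistics, which is the kind of signature one can compare against single-molecule traces.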
Comparisons of the Maxwell and CLL gas/surface interaction models using DSMC
NASA Technical Reports Server (NTRS)
Hedahl, Marc O.; Wilmoth, Richard G.
1995-01-01
The behavior of two different models of gas-surface interactions is studied using the Direct Simulation Monte Carlo (DSMC) method. The DSMC calculations examine differences in predictions of aerodynamic forces and heat transfer between the Maxwell and the Cercignani-Lampis-Lord (CLL) models for flat plate configurations at freestream conditions corresponding to a 140 km orbit around Venus. The size of the flat plate represents one of the solar panels on the Magellan spacecraft, and the freestream conditions correspond to those experienced during aerobraking maneuvers. Results are presented for both a single flat plate and a two-plate configuration as a function of angle of attack and gas-surface accommodation coefficients. The two-plate system is not representative of the Magellan geometry but is studied to explore possible experiments that might be used to differentiate between the two gas-surface interaction models. The Maxwell and CLL models produce qualitatively similar results for the aerodynamic forces and heat transfer on a single flat plate. However, the flow fields produced with the two models are qualitatively different for both the single-plate and two-plate calculations. These differences in the flowfield lead to predictions of the angle of attack for maximum heat transfer in a two-plate configuration that are distinctly different for the two gas-surface interaction models.
Seasonal Atmospheric and Oceanic Predictions
NASA Technical Reports Server (NTRS)
Roads, John; Rienecker, Michele (Technical Monitor)
2003-01-01
Several projects associated with dynamical, statistical, single column, and ocean models are presented. The projects include: 1) Regional Climate Modeling; 2) Statistical Downscaling; 3) Evaluation of SCM and NSIPP AGCM Results at the ARM Program Sites; and 4) Ocean Forecasts.
Inferring Binary and Trinary Stellar Populations in Photometric and Astrometric Surveys
NASA Astrophysics Data System (ADS)
Widmark, Axel; Leistedt, Boris; Hogg, David W.
2018-04-01
Multiple stellar systems are ubiquitous in the Milky Way but are often unresolved and seen as single objects in spectroscopic, photometric, and astrometric surveys. However, modeling them is essential for developing a full understanding of large surveys such as Gaia and connecting them to stellar and Galactic models. In this paper, we address this problem by jointly fitting the Gaia and Two Micron All Sky Survey photometric and astrometric data using a data-driven Bayesian hierarchical model that includes populations of binary and trinary systems. This allows us to classify observations into singles, binaries, and trinaries, in a robust and efficient manner, without resorting to external models. We are able to identify multiple systems and, in some cases, make strong predictions for the properties of their unresolved stars. We will be able to compare such predictions with Gaia Data Release 4, which will contain astrometric identification and analysis of binary systems.
Prediction of Weather Impacted Airport Capacity using Ensemble Learning
NASA Technical Reports Server (NTRS)
Wang, Yao Xun
2011-01-01
Ensemble learning with the Bagging Decision Tree (BDT) model was used to assess the impact of weather on airport capacities at selected high-demand airports in the United States. The ensemble bagging decision tree models were developed and validated using the Federal Aviation Administration (FAA) Aviation System Performance Metrics (ASPM) data and weather forecasts at these airports. The study examines the performance of BDT, along with traditional single Support Vector Machines (SVM), for airport runway configuration selection and airport arrival rates (AAR) prediction during weather impacts. Testing of these models was accomplished using observed weather, weather forecasts, and airport operation information at the chosen airports. The experimental results show that ensemble methods are more accurate than a single SVM classifier. The airport capacity ensemble method presented here can be used as a decision support model that helps air traffic flow management cope with weather-impacted airport capacity, in order to reduce costs and increase safety.
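The bagging idea behind the BDT model can be illustrated with a minimal sketch: bootstrap-resample the training set, fit a weak learner on each resample (here a one-feature decision stump standing in for a full tree), and take a majority vote. The toy "weather severity" data and all parameters are invented for illustration, not drawn from the ASPM dataset.

```python
import random

def train_stump(X, y):
    """Best single-threshold classifier on a 1-D feature (by training accuracy)."""
    best = None
    for t in sorted(set(X)):
        for sign in (1, -1):
            pred = [1 if sign * (x - t) >= 0 else 0 for x in X]
            acc = sum(p == yy for p, yy in zip(pred, y)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, t, sign)
    return best[1], best[2]

def bagged_predict(X, y, x_new, n_trees=25, seed=1):
    """Bootstrap-aggregated stumps: majority vote over resampled training sets."""
    rng = random.Random(seed)
    votes = 0
    n = len(X)
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        t, sign = train_stump([X[i] for i in idx], [y[i] for i in idx])
        votes += 1 if sign * (x_new - t) >= 0 else 0
    return 1 if votes > n_trees / 2 else 0

# toy data: a "weather severity" feature vs. a reduced-capacity label
X = [0.1, 0.3, 0.35, 0.6, 0.7, 0.9]
y = [0, 0, 0, 1, 1, 1]
print(bagged_predict(X, y, 0.8), bagged_predict(X, y, 0.2))
```

Averaging over bootstrap resamples is what stabilizes the individual trees; in practice a library implementation with full decision trees would replace the hand-rolled stump.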
NASA Technical Reports Server (NTRS)
Adams, D. F.; Mahishi, J. M.
1982-01-01
The axisymmetric finite element model and associated computer program developed for the analysis of crack propagation in a composite consisting of a single broken fiber in an annular sheath of matrix material was extended to include a constant displacement boundary condition during an increment of crack propagation. The constant displacement condition permits the growth of a stable crack, as opposed to the catastrophic failure in an earlier version. The finite element model was refined to respond more accurately to the high stresses and steep stress gradients near the broken fiber end. The accuracy and effectiveness of the conventional constant strain axisymmetric element for crack problems was established by solving the classical problem of a penny-shaped crack in a thick cylindrical rod under axial tension. The stress intensity factors predicted by the present finite element model are compared with existing continuum results.
PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction
Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.
2008-01-01
A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS compared with the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945
NASA Astrophysics Data System (ADS)
Walcott, Sam
2013-03-01
Interactions between the proteins actin and myosin drive muscle contraction. Properties of a single myosin interacting with an actin filament are largely known, but a trillion myosins work together in muscle. We are interested in how single-molecule properties relate to ensemble function. Myosin's reaction rates depend on force, so ensemble models keep track of both molecular state and force on each molecule. These models make subtle predictions, e.g. that myosin, when part of an ensemble, moves actin faster than when isolated. This acceleration arises because forces between molecules speed reaction kinetics. Experiments support this prediction and allow parameter estimates. A model based on this analysis describes experiments from single molecule to ensemble. In vivo, actin is regulated by proteins that, when present, cause the binding of one myosin to speed the binding of its neighbors; binding becomes cooperative. Although such interactions preclude the mean field approximation, a set of linear ODEs describes these ensembles under simplified experimental conditions. In these experiments cooperativity is strong, with the binding of one molecule affecting ten neighbors on either side. We progress toward a description of myosin ensembles under physiological conditions.
NASA Astrophysics Data System (ADS)
Hodgson, Murray; Wareing, Andrew
2008-01-01
A combined beam-tracing and transfer-matrix model for predicting steady-state sound-pressure levels in rooms with multilayer bounding surfaces was used to compare the effect of extended- and local-reaction surfaces, and the accuracy of the local-reaction approximation. Three rooms—an office, a corridor and a workshop—with one or more multilayer test surfaces were considered. The test surfaces were a single-glass panel, a double-drywall panel, a carpeted floor, a suspended-acoustical ceiling, a double-steel panel, and glass fibre on a hard backing. Each test surface was modeled as of extended or of local reaction. Sound-pressure levels were predicted and compared to determine the significance of the surface-reaction assumption. The main conclusions were that the difference between modeling a room surface as of extended or of local reaction is not significant when the surface is a single plate or a single layer of material (solid or porous) with a hard backing. The difference is significant when the surface consists of multilayers of solid or porous material and includes a layer of fluid with a large thickness relative to the other layers. The results are partially explained by considering the surface-reflection coefficients at the first-reflection angles.
Mathematical modeling of PDC bit drilling process based on a single-cutter mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojtanowicz, A.K.; Kuru, E.
1993-12-01
An analytical development of a new mechanistic drilling model for polycrystalline diamond compact (PDC) bits is presented. The derivation accounts for static balance of forces acting on a single PDC cutter and is based on assumed similarity between bit and cutter. The model is fully explicit with physical meanings given to all constants and functions. Three equations constitute the mathematical model: torque, drilling rate, and bit life. The equations comprise the cutter's geometry, rock properties, drilling parameters, and four empirical constants. The constants are used to match the model to a PDC drilling process. Also presented are qualitative and predictive verifications of the model. Qualitative verification shows that the model's response to drilling process variables is similar to the behavior of full-size PDC bits. However, accuracy of the model's predictions of PDC bit performance is limited primarily by imprecision of bit-dull evaluation. The verification study is based upon the reported laboratory drilling and field drilling tests as well as field data collected by the authors.
Ding, H; Chen, C; Zhang, X
2016-01-01
The linear solvation energy relationship (LSER) was applied to predict the adsorption coefficient (K) of synthetic organic compounds (SOCs) on single-walled carbon nanotubes (SWCNTs). A total of 40 log K values were used to develop and validate the LSER model. The adsorption data for 34 SOCs were collected from 13 published articles and the other six were obtained in our experiment. The optimal model composed of four descriptors was developed by a stepwise multiple linear regression (MLR) method. The adjusted r(2) (r(2)adj) and root mean square error (RMSE) were 0.84 and 0.49, respectively, indicating good fitness. The leave-one-out cross-validation Q(2) ([Formula: see text]) was 0.79, suggesting the robustness of the model was satisfactory. The external Q(2) ([Formula: see text]) and RMSE (RMSEext) were 0.72 and 0.50, respectively, showing the model's strong predictive ability. Hydrogen bond donating interaction (bB) and cavity formation and dispersion interactions (vV) stood out as the two most influential factors controlling the adsorption of SOCs onto SWCNTs. The equilibrium concentration would affect the fitness and predictive ability of the model, while the coefficients varied slightly.
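An LSER fit of the form log K = c + eE + sS + aA + bB + vV reduces to ordinary multiple linear regression on the solute descriptors. The sketch below shows the mechanics with invented descriptor rows; the paper's actual 40-compound dataset and stepwise variable selection are not reproduced here.

```python
import numpy as np

# Hypothetical training rows of Abraham-type descriptors [E, S, A, B, V]
# and log K values. The numbers are illustrative only, not the paper's data.
X = np.array([
    [0.80, 0.9, 0.0, 0.45, 1.2],
    [1.20, 1.1, 0.3, 0.60, 1.5],
    [0.60, 0.7, 0.1, 0.30, 1.0],
    [1.50, 1.3, 0.0, 0.70, 1.8],
    [0.90, 1.0, 0.2, 0.50, 1.3],
    [1.10, 0.8, 0.4, 0.40, 1.6],
])
logK = np.array([2.1, 2.9, 1.6, 3.6, 2.5, 3.0])

# Design matrix with intercept c: log K = c + eE + sS + aA + bB + vV
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, logK, rcond=None)

pred = A @ coef
rmse = float(np.sqrt(np.mean((pred - logK) ** 2)))
print(coef.shape, rmse < 0.5)
```

With real data one would report adjusted r², leave-one-out Q², and an external validation RMSE, as the abstract describes; the least-squares step itself is unchanged.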
Predictive Feedback and Feedforward Control for Systems with Unknown Disturbances
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Eure, Kenneth W.
1998-01-01
Predictive feedback control has been successfully used in the regulation of plate vibrations when no reference signal is available for feedforward control. However, if a reference signal is available it may be used to enhance regulation by incorporating a feedforward path in the feedback controller. Such a controller is known as a hybrid controller. This paper presents the theory and implementation of the hybrid controller for general linear systems, in particular for structural vibration induced by acoustic noise. The generalized predictive control is extended to include a feedforward path in the multi-input multi-output case and implemented on a single-input single-output test plant to achieve plate vibration regulation. There are cases in acoustically induced vibration where the disturbance signal is not available to be used by the hybrid controller, but a disturbance model is available. In this case the disturbance model may be used in the feedback controller to enhance performance. In practice, however, neither the disturbance signal nor the disturbance model is available. This paper presents the theory of identifying and incorporating the noise model into the feedback controller. Implementations are performed on a test plant and regulation improvements over the case where no noise model is used are demonstrated.
A microstructurally based model of solder joints under conditions of thermomechanical fatigue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frear, D.R.; Burchett, S.N.; Rashid, M.M.
The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue. In this paper we present two computational methodologies that have been developed to predict the behavior of near-eutectic Sn-Pb solder joints under fatigue conditions, using metallurgical tests as fundamental input for constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near-eutectic Sn-Pb solder. The finite element simulations from this model agree well with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single-phase model is a computational technique that was developed to predict solder joint behavior using materials data for constitutive relation constants that could be determined through straightforward metallurgical experiments. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests and the results showed an adequate fit to experimental results. The single-phase model could be very useful for conditions where microstructural evolution is not a dominant factor in fatigue.
Pile group program for full material modeling and progressive failure.
DOT National Transportation Integrated Search
2008-12-01
Strain wedge (SW) model formulation has been used, in previous work, to evaluate the response of a single pile or a group of piles (including its : pile cap) in layered soils to lateral loading. The SW model approach provides appropriate prediction f...
Application of a stochastic snowmelt model for probabilistic decisionmaking
NASA Technical Reports Server (NTRS)
Mccuen, R. H.
1983-01-01
A stochastic form of the snowmelt runoff model that can be used for probabilistic decision-making was developed. The use of probabilistic streamflow predictions instead of single-valued deterministic predictions leads to greater accuracy in decisions. While the accuracy of the output function is important in decision-making, it is also important to understand the relative importance of the coefficients. Therefore, a sensitivity analysis was made for each of the coefficients.
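The step from a deterministic to a probabilistic melt prediction can be sketched by sampling an uncertain coefficient and reading off quantiles of the simulated output. The degree-day formulation and all numbers below are illustrative assumptions, not the actual structure of the snowmelt runoff model in the report.

```python
import random

def degree_day_melt(temps, melt_factor, t_base=0.0):
    """Total melt (mm) from a degree-day model: M = k * max(T - Tb, 0), summed daily."""
    return sum(melt_factor * max(t - t_base, 0.0) for t in temps)

def probabilistic_melt(temps, k_mean=3.0, k_sd=0.5, n=2000, seed=42):
    """Sample the melt coefficient instead of fixing it, yielding a
    distribution of predicted melt rather than a single value."""
    rng = random.Random(seed)
    sims = [degree_day_melt(temps, max(rng.gauss(k_mean, k_sd), 0.0))
            for _ in range(n)]
    sims.sort()
    return sims[len(sims) // 2], sims[int(0.1 * n)], sims[int(0.9 * n)]

temps = [2.0, 4.5, 1.0, -3.0, 6.0]   # daily mean air temperatures (deg C)
median, p10, p90 = probabilistic_melt(temps)
print(p10 < median < p90)
```

The (p10, p90) interval is the kind of probabilistic output a decision-maker can weigh against costs, instead of a single deterministic number; the sensitivity analysis in the abstract corresponds to varying k_mean and k_sd per coefficient.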
ERIC Educational Resources Information Center
Serry, Tanya Anne; Castles, Anne; Mensah, Fiona K.; Bavin, Edith L.; Eadie, Patricia; Pezic, Angela; Prior, Margot; Bretherton, Lesley; Reilly, Sheena
2015-01-01
The paper reports on a study designed to develop a risk model that can best predict single-word spelling in seven-year-old children when they were aged 4 and 5. Test measures, personal characteristics and environmental influences were all considered as variables from a community sample of 971 children. Strong concurrent correlations were found…
NASA Astrophysics Data System (ADS)
Munsky, Brian
2015-03-01
MAPK signal-activated transcription plays central roles in myriad biological processes including stress adaptation responses and cell fate decisions. Recent single-cell and single-molecule experiments have advanced our ability to quantify the spatial, temporal, and stochastic fluctuations for such signals and their downstream effects on transcription regulation. This talk explores how integrating such experiments with discrete stochastic computational analyses can yield quantitative and predictive understanding of transcription regulation in both space and time. We use single-molecule mRNA fluorescence in situ hybridization (smFISH) experiments to reveal locations and numbers of multiple endogenous mRNA species in 100,000's of individual cells, at different times and under different genetic and environmental perturbations. We use finite state projection methods to precisely and efficiently compute the full joint probability distributions of these mRNA, which capture measured spatial, temporal and correlative fluctuations. By combining these experimental and computational tools with uncertainty quantification, we systematically compare models of varying complexity and select those which give optimally precise and accurate predictions in new situations. We use these tools to explore two MAPK-activated gene regulation pathways. In yeast adaptation to osmotic shock, we analyze Hog1 kinase activation of transcription for three different genes STL1 (osmotic stress), CTT1 (oxidative stress) and HSP12 (heat shock). In human osteosarcoma cells under serum induction, we analyze ERK activation of c-Fos transcription.
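A finite state projection in its simplest form truncates the chemical master equation to a finite state space and integrates the resulting linear ODE. The sketch below does this for a hypothetical constitutive birth-death mRNA model (not the regulated STL1/CTT1/HSP12 or c-Fos models discussed in the talk), stepping the truncated master equation with explicit Euler.

```python
import numpy as np

def fsp_generator(n_max, k_tx, g_deg):
    """Generator matrix of a birth-death mRNA process truncated at n_max:
    a minimal finite-state-projection of the chemical master equation."""
    A = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        if n < n_max:
            A[n + 1, n] += k_tx       # transcription: n -> n+1
            A[n, n] -= k_tx
        if n > 0:
            A[n - 1, n] += g_deg * n  # degradation: n -> n-1
            A[n, n] -= g_deg * n
    return A

A = fsp_generator(n_max=60, k_tx=10.0, g_deg=1.0)
p = np.zeros(61)
p[0] = 1.0                             # start with zero mRNA copies
dt, t_end = 1e-3, 10.0                 # explicit-Euler time stepping
for _ in range(int(t_end / dt)):
    p = p + dt * (A @ p)

mean = float(np.arange(61) @ p)        # steady state approaches Poisson, mean k/g
print(abs(mean - 10.0) < 0.5, abs(p.sum() - 1.0) < 1e-6)
```

The full joint distribution p, not just its mean, is what gets compared against smFISH copy-number histograms; production FSP codes use stiff integrators or matrix exponentials rather than explicit Euler.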
NASA Astrophysics Data System (ADS)
Poppett, Claire; Allington-Smith, Jeremy
2010-07-01
We investigate the FRD performance of a 150 μm core fibre for its suitability to the SIDE project.1 This work builds on our previous work2 (Paper 1) where we examined the dependence of FRD on length in fibres with a core size of 100 μm and proposed a new multi-component model to explain the results. In order to predict the FRD characteristics of a fibre, the most commonly used model is an adaptation of the Gloge8model by Carrasco and Parry3 which quantifies the the number of scattering defects within an optical bre using a single parameter, d0. The model predicts many trends which are seen experimentally, for example, a decrease in FRD as core diameter increases, and also as wavelength increases. However the model also predicts a strong dependence on FRD with length that is not seen experimentally. By adapting the single fibre model to include a second fibre, we can quantify the amount of FRD due to stress caused by the method of termination. By fitting the model to experimental data we find that polishing the fibre causes a small increase in stress to be induced in the end of the fibre compared to a simple cleave technique.
Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.
2009-01-01
An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was obvious. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.
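The deterministic trimmed-mean ensemble mentioned above is straightforward to sketch: sort the member predictions, drop a fraction from each tail, and average the remainder. The discharge values below are invented for illustration, not the study's model outputs.

```python
def trimmed_mean(predictions, trim=0.1):
    """Deterministic ensemble: drop the lowest and highest `trim` fraction
    of member predictions, then average the rest."""
    s = sorted(predictions)
    k = int(len(s) * trim)
    kept = s[k:len(s) - k] if k else s
    return sum(kept) / len(kept)

# hypothetical mean-annual-discharge predictions (m^3/s) from a 10-model ensemble
members = [41.0, 44.5, 45.2, 45.8, 46.0, 46.3, 47.1, 47.9, 48.5, 60.0]
print(round(trimmed_mean(members, trim=0.1), 2))  # -> 46.41
```

Trimming discards the outlying members (here 41.0 and 60.0) before averaging, which is why the resulting single prediction is more robust to one badly biased model than a plain mean.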
The Prediction of Noise Due to Jet Turbulence Convecting Past Flight Vehicle Trailing Edges
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2014-01-01
High intensity acoustic radiation occurs when turbulence convects past airframe trailing edges. A mathematical model is developed to predict this acoustic radiation. The model is dependent on the local flow and turbulent statistics above the trailing edge of the flight vehicle airframe. These quantities are dependent on the jet and flight vehicle Mach numbers and jet temperature. A term in the model approximates the turbulent statistics of single-stream heated jet flows and is developed based upon measurement. The developed model is valid for a wide range of jet Mach numbers, jet temperature ratios, and flight vehicle Mach numbers. The model predicts traditional trailing edge noise if the jet is not interacting with the airframe. Predictions of mean-flow quantities and the cross-spectrum of static pressure near the airframe trailing edge are compared with measurement. Finally, predictions of acoustic intensity are compared with measurement and the model is shown to accurately capture the phenomenon.
A study of sound generation in subsonic rotors, volume 1
NASA Technical Reports Server (NTRS)
Chalupnik, J. D.; Clark, L. T.
1975-01-01
A model for the prediction of wake related sound generation by a single airfoil is presented. It is assumed that the net force fluctuation on an airfoil may be expressed in terms of the net momentum fluctuation in the near wake of the airfoil. The forcing function for sound generation depends on the spectra of the two point velocity correlations in the turbulent region near the airfoil trailing edge. The spectra of the two point velocity correlations were measured for the longitudinal and transverse components of turbulence in the wake of a 91.4 cm chord airfoil. A scaling procedure was developed using the turbulent boundary layer thickness. The model was then used to predict the radiated sound from a 5.1 cm chord airfoil. Agreement between the predicted and measured sound radiation spectra was good. The single airfoil results were extended to a rotor geometry, and various aerodynamic parameters were studied.
Single drug biomarker prediction for ER- breast cancer outcome from chemotherapy.
Chen, Yong-Zi; Kim, Youngchul; Soliman, Hatem H; Ying, GuoGuang; Lee, Jae K
2018-06-01
ER-negative breast cancer includes most aggressive subtypes of breast cancer such as triple negative (TN) breast cancer. Excluded from hormonal and targeted therapies effectively used for other subtypes of breast cancer, standard chemotherapy is one of the primary treatment options for these patients. However, as ER- patients have shown highly heterogeneous responses to different chemotherapies, it has been difficult to select most beneficial chemotherapy treatments for them. In this study, we have simultaneously developed single drug biomarker models for four standard chemotherapy agents: paclitaxel (T), 5-fluorouracil (F), doxorubicin (A) and cyclophosphamide (C) to predict responses and survival of ER- breast cancer patients treated with combination chemotherapies. We then flexibly combined these individual drug biomarkers for predicting patient outcomes of two independent cohorts of ER- breast cancer patients who were treated with different drug combinations of neoadjuvant chemotherapy. These individual and combined drug biomarker models significantly predicted chemotherapy response for 197 ER- patients in the Hatzis cohort (AUC = 0.637, P = 0.002) and 69 ER- patients in the Hess cohort (AUC = 0.635, P = 0.056). The prediction was also significant for the TN subgroup of both cohorts (AUC = 0.60, 0.72, P = 0.043, 0.009). In survival analysis, our predicted responder patients showed significantly improved survival with a >17 months longer median PFS than the predicted non-responder patients for both ER- and TN subgroups (log-rank test P-value = 0.018 and 0.044). This flexible prediction capability based on single drug biomarkers may allow us to even select new drug combinations most beneficial to individual patients with ER- breast cancer. © 2018 The authors.
Cell-model prediction of the melting of a Lennard-Jones solid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holian, B.L.
The classical free energy of the Lennard-Jones 6-12 solid is computed from a single-particle anharmonic cell model with a correction to the entropy given by the classical correlational entropy of quasiharmonic lattice dynamics. The free energy of the fluid is obtained from the Hansen-Ree analytic fit to Monte Carlo equation-of-state calculations. The resulting predictions of the solid-fluid coexistence curves by this corrected cell model of the solid are in excellent agreement with the computer experiments.
Effects of complex life cycles on genetic diversity: cyclical parthenogenesis.
Rouger, R; Reichel, K; Malrieu, F; Masson, J P; Stoeckel, S
2016-11-01
Neutral patterns of population genetic diversity in species with complex life cycles are difficult to anticipate. Cyclical parthenogenesis (CP), in which organisms undergo several rounds of clonal reproduction followed by a sexual event, is one such life cycle. Many species, including crop pests (aphids), human parasites (trematodes) or models used in evolutionary science (Daphnia), are cyclical parthenogens. It is therefore crucial to understand the impact of such a life cycle on neutral genetic diversity. In this paper, we describe distributions of genetic diversity under conditions of CP with various clonal phase lengths. Using a Markov chain model of CP for a single locus and individual-based simulations for two loci, our analysis first demonstrates that strong departures from full sexuality are observed after only a few generations of clonality. The convergence towards predictions made under conditions of full clonality during the clonal phase depends on the balance between mutations and genetic drift. Second, the sexual event of CP usually resets the genetic diversity at a single locus towards predictions made under full sexuality. However, this single recombination event is insufficient to reshuffle gametic phases towards full-sexuality predictions. Finally, for similar levels of clonality, CP and acyclic partial clonality (wherein a fixed proportion of individuals are clonally produced within each generation) differentially affect the distribution of genetic diversity. Overall, this work provides solid predictions of neutral genetic diversity that may serve as a null model in detecting the action of common evolutionary or demographic processes in cyclical parthenogens (for example, selection or bottlenecks).
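The interplay of clonal drift and a periodic sexual reshuffle can be sketched with a toy individual-based simulation at a single biallelic locus. This is a simplification of the paper's Markov chain model: no mutation, arbitrary parameter values, and only one locus, so it illustrates the reset of single-locus diversity by the sexual event rather than the two-locus gametic-phase result.

```python
import random

def simulate_cp(n_ind=500, clonal_gens=20, cycles=30, p0=0.5, seed=7):
    """Individual-based sketch of cyclical parthenogenesis at one biallelic
    locus: `clonal_gens` rounds of clonal drift (whole genotypes copied),
    then one sexual event that re-forms genotypes from the allele pool."""
    rng = random.Random(seed)
    pop = [(rng.random() < p0, rng.random() < p0) for _ in range(n_ind)]
    for _ in range(cycles):
        for _ in range(clonal_gens):        # clonal phase: resample genotypes
            pop = [pop[rng.randrange(n_ind)] for _ in range(n_ind)]
        alleles = [a for g in pop for a in g]
        pop = [(rng.choice(alleles), rng.choice(alleles))  # sexual event
               for _ in range(n_ind)]
    p = sum(a for g in pop for a in g) / (2 * n_ind)
    h_obs = sum(g[0] != g[1] for g in pop) / n_ind
    return p, h_obs

p, h_obs = simulate_cp()
# right after a sexual event, heterozygosity should sit near 2p(1-p)
print(abs(h_obs - 2 * p * (1 - p)) < 0.1)
```

Sampling within the clonal phase instead would show the departures from full-sexuality expectations that the abstract describes accumulating over the clonal generations.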
NASA Astrophysics Data System (ADS)
Alipour, M.; Kibler, K. M.
2017-12-01
Despite advances in flow prediction, managers of ungauged rivers located within broad regions of sparse hydrometeorologic observation still lack prescriptive methods robust to the data challenges of such regions. We propose a multi-objective streamflow prediction framework for regions of minimum observation to select models that balance runoff efficiency with choice of accurate parameter values. We supplement sparse observed data with uncertain or low-resolution information incorporated as `soft' a priori parameter estimates. The performance of the proposed framework is tested against traditional single-objective and constrained single-objective calibrations in two catchments in a remote area of southwestern China. We find that the multi-objective approach performs well with respect to runoff efficiency in both catchments (NSE = 0.74 and 0.72), within the range of efficiencies returned by other models (NSE = 0.67 - 0.78). However, soil moisture capacity estimated by the multi-objective model resonates with a priori estimates (parameter residuals of 61 cm versus 289 and 518 cm for maximum soil moisture capacity in one catchment, and 20 cm versus 246 and 475 cm in the other; parameter residuals of 0.48 versus 0.65 and 0.7 for soil moisture distribution shape factor in one catchment, and 0.91 versus 0.79 and 1.24 in the other). Thus, optimization to a multi-criteria objective function led to very different representations of soil moisture capacity as compared to models selected by single-objective calibration, without compromising runoff efficiency. These different soil moisture representations may translate into considerably different hydrological behaviors. The proposed approach thus offers a preliminary step towards greater process understanding in regions of severe data limitations. 
For instance, the multi-objective framework may be an adept tool to discern between models of similar efficiency to select models that provide the "right answers for the right reasons". Managers may feel more confident to utilize such models to predict flows in fully ungauged areas.
Using built environment characteristics to predict walking for exercise
Lovasi, Gina S; Moudon, Anne V; Pearson, Amber L; Hurvitz, Philip M; Larson, Eric B; Siscovick, David S; Berke, Ethan M; Lumley, Thomas; Psaty, Bruce M
2008-01-01
Background Environments conducive to walking may help people avoid sedentary lifestyles and associated diseases. Recent studies developed walkability models combining several built environment characteristics to optimally predict walking. Developing and testing such models with the same data could lead to overestimating one's ability to predict walking in an independent sample of the population. More accurate estimates of model fit can be obtained by splitting a single study population into training and validation sets (holdout approach) or through developing and evaluating models in different populations. We used these two approaches to test whether built environment characteristics near the home predict walking for exercise. Study participants lived in western Washington State and were adult members of a health maintenance organization. The physical activity data used in this study were collected by telephone interview and were selected for their relevance to cardiovascular disease. In order to limit confounding by prior health conditions, the sample was restricted to participants in good self-reported health and without a documented history of cardiovascular disease. Results For 1,608 participants meeting the inclusion criteria, the mean age was 64 years, 90 percent were white, 37 percent had a college degree, and 62 percent of participants reported that they walked for exercise. Single built environment characteristics, such as residential density or connectivity, did not significantly predict walking for exercise. Regression models using multiple built environment characteristics to predict walking were not successful at predicting walking for exercise in an independent population sample. In the validation set, none of the logistic models had a C-statistic confidence interval excluding the null value of 0.5, and none of the linear models explained more than one percent of the variance in time spent walking for exercise. 
We did not detect significant differences in walking for exercise among census areas or postal codes, which were used as proxies for neighborhoods. Conclusion None of the built environment characteristics significantly predicted walking for exercise, nor did combinations of these characteristics predict walking for exercise when tested using a holdout approach. These results reflect a lack of neighborhood-level variation in walking for exercise for the population studied. PMID:18312660
Zhu, Yu; Xia, Jie-lai; Wang, Jing
2009-09-01
We applied the single autoregressive integrated moving average (ARIMA) model and the ARIMA-generalized regression neural network (GRNN) combination model to research on the incidence of scarlet fever. An ARIMA model was established based on monthly scarlet fever incidence data for one city from 2000 to 2006. The fitted values of the ARIMA model were used as the input of the GRNN, and the actual values were used as its output. After training the GRNN, the performance of the single ARIMA model and the ARIMA-GRNN combination model was compared. The mean error rates (MER) of the single ARIMA model and the ARIMA-GRNN combination model were 31.6% and 28.7%, respectively, and the determination coefficients (R(2)) of the two models were 0.801 and 0.872, respectively. The fitting efficacy of the ARIMA-GRNN combination model was better than that of the single ARIMA model, which has practical value for research on time series data such as the incidence of scarlet fever.
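The combination scheme can be sketched by noting that a GRNN is essentially a Nadaraya-Watson kernel smoother: the first-stage fitted values are mapped onto the observed series, correcting systematic bias. The "ARIMA" fitted values below are invented stand-ins (no real incidence data or ARIMA estimation is involved), and the smoothing parameter sigma is an arbitrary choice rather than a trained spread.

```python
import math

def grnn_predict(x_train, y_train, x_new, sigma=0.5):
    """Generalized regression neural network = Nadaraya-Watson kernel
    smoother: Gaussian-weighted average of the training outputs."""
    w = [math.exp(-((x_new - x) ** 2) / (2 * sigma ** 2)) for x in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)

# Stand-in for the ARIMA stage: fitted values from some time-series model,
# paired with observations that sit systematically above them.
arima_fit = [3.1, 4.0, 2.8, 5.2, 4.6, 3.5, 4.9, 3.0]
observed  = [3.6, 4.6, 3.2, 5.8, 5.0, 4.0, 5.5, 3.4]

# Second stage: the GRNN learns the systematic gap between fit and observation
corrected = [grnn_predict(arima_fit, observed, x) for x in arima_fit]

mer_arima = sum(abs(f - o) for f, o in zip(arima_fit, observed)) / sum(observed)
mer_comb = sum(abs(c - o) for c, o in zip(corrected, observed)) / sum(observed)
print(mer_comb < mer_arima)
```

Because the toy "ARIMA" fit is biased low, the kernel-smoothed mapping recovers most of the gap, mirroring the MER improvement (31.6% to 28.7%) reported in the abstract only in spirit, not in magnitude.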
A novel auto-tuning PID control mechanism for nonlinear systems.
Cetin, Meric; Iplikci, Serdar
2015-09-01
In this paper, a novel Runge-Kutta (RK) discretization-based model-predictive auto-tuning proportional-integral-derivative controller (RK-PID) is introduced for the control of continuous-time nonlinear systems. The parameters of the PID controller are tuned using an RK model of the system through prediction error-square minimization, where the predicted information of tracking error provides an enhanced tuning of the parameters. Based on the model-predictive control (MPC) approach, the proposed mechanism provides necessary PID parameter adaptations while generating additive correction terms to assist the initially inadequate PID controller. Efficiency of the proposed mechanism has been tested on two experimental real-time systems: an unstable single-input single-output (SISO) nonlinear magnetic-levitation system and a nonlinear multi-input multi-output (MIMO) liquid-level system. RK-PID has been compared to standard PID, standard nonlinear MPC (NMPC), RK-MPC and conventional sliding-mode control (SMC) methods in terms of control performance, robustness, computational complexity and design issues. The proposed mechanism exhibits acceptable tuning and control performance with very small steady-state tracking errors, and provides very short settling time for parameter convergence. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Masso, Majid; Vaisman, Iosif I
2014-01-01
The AUTO-MUTE 2.0 stand-alone software package includes a collection of programs for predicting functional changes to proteins upon single residue substitutions, developed by combining structure-based features with trained statistical learning models. Three of the predictors evaluate changes to protein stability upon mutation, each complementing a distinct experimental approach. Two additional classifiers are available, one for predicting activity changes due to residue replacements and the other for determining the disease potential of mutations associated with nonsynonymous single nucleotide polymorphisms (nsSNPs) in human proteins. These five command-line driven tools, as well as all the supporting programs, complement those that run our AUTO-MUTE web-based server. Nevertheless, all the codes have been rewritten and substantially altered for the new portable software, and they incorporate several new features based on user feedback. Included among these upgrades is the ability to perform three highly requested tasks: to run "big data" batch jobs; to generate predictions using modified protein data bank (PDB) structures, and unpublished personal models prepared using standard PDB file formatting; and to utilize NMR structure files that contain multiple models.
Attrition of fluid cracking catalyst in fluidized beds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boerefijn, R.; Ghadiri, M.
1996-12-31
Particle attrition in fluid catalytic cracking units causes loss of catalyst, which can amount to a few tonnes per day. The dependence of attrition on process conditions and catalyst properties is therefore of great industrial interest, but it is not well established at present. This paper addresses the process of attrition in the jetting region of fluidised beds and analyses the attrition test method of Forsythe & Hertwig. This method is commonly used to assess the attrition propensity of FCC powder, whereby the attrition rate in a single jet at very high orifice velocity (300 m s⁻¹) is measured. There has been some concern about the relevance of this method to attrition in FCC units. Therefore, a previously developed model of attrition in the jetting region is employed in an attempt to establish a solid basis for interpretation of the Forsythe & Hertwig test and its application as an industrial standard test. The model consists of two parts. The first part predicts the solids flow patterns in the jet region, simulating the Forsythe & Hertwig test numerically. The second part models the breakage of single particles upon impact. Combining these two models, thus linking single-particle mechanical properties to macroscopic flow phenomena, results in a model of the attrition rate of particles entrained into a single high-speed jet. High-speed video recordings were made of a single jet in a two-dimensional fluidised bed, at up to 40,500 frames per second, in order to quantify some of the model parameters. Digital analysis of the video images yields values for particle velocities and entrainment rates in the jet, which can be compared to model predictions. 15 refs., 8 figs.
Object detection in natural backgrounds predicted by discrimination performance and models
NASA Technical Reports Server (NTRS)
Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.
1997-01-01
Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d's with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
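The Minkowski summation that all three discrimination models share is easy to make concrete. The sketch below is a generic illustration, not the Cortex-transform or CSF-filter pipelines themselves: the 4×4 "images" and the embedded object are hypothetical, and only the distance metric follows the abstract (exponents 2, 4 and infinity, with infinity reducing to the maximum pointwise difference).

```python
import numpy as np

def minkowski_distance(background, bg_plus_object, beta):
    """Minkowski summation (generalized vector magnitude) of the
    pointwise absolute difference between two images; beta = inf
    reduces to the maximum difference."""
    diff = np.abs(np.asarray(background, float) - np.asarray(bg_plus_object, float))
    if np.isinf(beta):
        return float(diff.max())
    return float((diff ** beta).sum() ** (1.0 / beta))

bg = np.zeros((4, 4))
img = bg.copy()
img[1:3, 1:3] = 2.0                      # hypothetical embedded "object"
d2 = minkowski_distance(bg, img, 2)      # 4 pixels differing by 2
d4 = minkowski_distance(bg, img, 4)
dinf = minkowski_distance(bg, img, np.inf)
```

Larger exponents weight the biggest local differences more heavily, which is why the choice of exponent (4 was best here) changes the predicted detectability ranking.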
Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.
Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan
2016-11-01
In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60-68% superiority over the corresponding single environment for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5-6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
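The nonlinear-kernel idea can be illustrated without the Bayesian machinery of RKHS KA/EB. The following is a minimal kernel ridge regression sketch with a Gaussian kernel standing in for the genomic kernel; the 40×5 "marker" matrix, bandwidth `h` and ridge penalty `lam` are all hypothetical choices, not the paper's estimates.

```python
import numpy as np

def gaussian_kernel(X1, X2, h=1.0):
    """K(x, x') = exp(-||x - x'||^2 / h): the nonlinear kernel used
    in place of the linear GBLUP kernel XX'."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / h)

def kernel_ridge_fit_predict(X, y, X_new, h=1.0, lam=0.1):
    """Closed-form kernel ridge regression as a stand-in for RKHS
    genomic prediction (Bayesian bandwidth estimation omitted)."""
    K = gaussian_kernel(X, X, h)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return gaussian_kernel(X_new, X, h) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))             # 40 lines x 5 "markers" (toy)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
yhat = kernel_ridge_fit_predict(X, y, X, h=2.0, lam=0.05)
```

Because the Gaussian kernel implicitly spans a much richer function space than the linear kernel, it can capture the small, complex marker effects the abstract credits for the accuracy gain.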
Wind power application research on the fusion of deterministic and ensemble prediction
NASA Astrophysics Data System (ADS)
Lan, Shi; Lina, Xu; Yuzhu, Hao
2017-07-01
The fused product of wind speed for the wind farm is designed through the use of wind speed products of ensemble prediction from the European Centre for Medium-Range Weather Forecasts (ECMWF) and professional numerical model products for wind power based on Mesoscale Model 5 (MM5) and Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. The single-valued forecast is formed by calculating the different ensemble statistics of the Bayesian probabilistic forecast representing the uncertainty of the ECMWF ensemble prediction. An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, the optimal wind speed forecasting curve and the confidence interval are provided. The results show that the fusion forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the 0-24 h existing deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3% and the correlation coefficient (R) is increased by 12.5%. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7% and R is increased by 14.5%. Additionally, the MAE did not increase with the length of the forecast lead time.
Wall-resolved spectral cascade-transport turbulence model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. S.; Shaver, D. R.; Lahey, R. T.
2017-07-08
A spectral cascade-transport model has been developed and applied to turbulent channel flows (Reτ = 550, 950, and 2000 based on friction velocity, uτ; or ReδΜ = 8,500; 14,800 and 31,000, based on the mean velocity and channel half-width). This model is an extension of a spectral model previously developed for homogeneous single and two-phase decay of isotropic turbulence and uniform shear flows, and of a spectral turbulence model for wall-bounded flows that does not resolve the boundary layer. Data from direct numerical simulation (DNS) of turbulent channel flow were used to help develop this model and to assess its performance in the 1D direction across the channel width. The resultant spectral model is capable of predicting the mean velocity, turbulent kinetic energy and energy spectrum distributions for single-phase wall-bounded flows all the way to the wall, where the model source terms have been developed to account for the wall influence. We implemented the model into the 3D multiphase CFD code NPHASE-CMFD, and the latest results are within reasonable error of the 1D predictions.
Dynamic Predictive Model for Growth of Bacillus cereus from Spores in Cooked Beans.
Juneja, Vijay K; Mishra, Abhinav; Pradhan, Abani K
2018-02-01
Kinetic growth data for Bacillus cereus grown from spores were collected in cooked beans under several isothermal conditions (10 to 49°C). Samples were inoculated with approximately 2 log CFU/g heat-shocked (80°C for 10 min) spores and stored at isothermal temperatures. B. cereus populations were determined at appropriate intervals by plating on mannitol-egg yolk-polymyxin agar and incubating at 30°C for 24 h. Data were fitted into Baranyi, Huang, modified Gompertz, and three-phase linear primary growth models. All four models were fitted to the experimental growth data collected at 13 to 46°C. Performances of these models were evaluated based on accuracy and bias factors, the coefficient of determination (R²), and the root mean square error. Based on these criteria, the Baranyi model best described the growth data, followed by the Huang, modified Gompertz, and three-phase linear models. The maximum growth rates of each primary model were fitted as a function of temperature using the modified Ratkowsky model. The high R² values (0.95 to 0.98) indicate that the modified Ratkowsky model can be used to describe the effect of temperature on the growth rates for all four primary models. The acceptable prediction zone (APZ) approach also was used for validation of the model with observed data collected during single and two-step dynamic cooling temperature protocols. When the predictions using the Baranyi model were compared with the observed data using the APZ analysis, all 24 observations for the exponential single rate cooling were within the APZ, which was set between -0.5 and 1 log CFU/g; 26 of 28 predictions for the two-step cooling profiles also were within the APZ limits. The developed dynamic model can be used to predict potential B. cereus growth from spores in beans under various temperature conditions or during extended chilling of cooked beans.
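The secondary-model step above can be sketched in a few lines. Note the simplification: the paper's modified Ratkowsky model includes an exponential correction near the growth maximum, which is omitted here; only the square-root relation for the suboptimal range is shown, and the growth-rate values and parameters below are synthetic, not the study's estimates.

```python
import numpy as np

def ratkowsky_sqrt_mu(T, b, Tmin):
    """Square-root (Ratkowsky-type) secondary model for the suboptimal
    temperature range: sqrt(mu_max) = b * (T - Tmin)."""
    return b * (T - Tmin)

# Hypothetical maximum growth rates at temperatures within the
# study's 13-46°C fitting range.
T = np.array([13.0, 19.0, 25.0, 31.0, 37.0])
b_true, Tmin_true = 0.04, 5.0
mu = ratkowsky_sqrt_mu(T, b_true, Tmin_true) ** 2

# Linear regression of sqrt(mu) on T recovers b as the slope and
# Tmin as -intercept/slope.
slope, intercept = np.polyfit(T, np.sqrt(mu), 1)
b_hat, Tmin_hat = slope, -intercept / slope
```

Fitting on the square-root scale linearizes the model, which is why the high R² values reported above can be read directly off an ordinary regression.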
Modeling methods for merging computational and experimental aerodynamic pressure data
NASA Astrophysics Data System (ADS)
Haderlie, Jacob C.
This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers that need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods, and then makes a critical comparison of these methods. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results. Machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential--a.k.a. online--Gaussian processes, batch Gaussian processes, and multi-fidelity additive corrector) on the merits of accuracy and computational cost.
The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT data could serve as a "merging" because the resulting WT pressure prediction uses information from both sources. In the GP approach, this model basis function concept seems to place more "weight" on the Cp values from the wind tunnel (WT) because the GP surrogate uses the CFD to approximate the WT data values. Conversely, the computationally inexpensive additive corrector method uses the CFD B-spline surrogate to define the shape of the spanwise distribution of the Cp while minimizing prediction error at all spanwise locations for a given arc length position; this, too, combines information from both sources to make a prediction of the 2-D WT-based Cp distribution, but the additive corrector approach gives more weight to the CFD prediction than to the WT data. Three surrogate models of the experimental data as a function of angle of attack are also compared for accuracy and computational cost. These surrogates are a single Gaussian process model (a single "expert"), product of experts, and generalized product of experts. The merging approach provides a single pressure distribution that combines experimental and computational data. The batch Gaussian process method provides a relatively accurate surrogate that is computationally acceptable, and can receive wind tunnel data from port locations that are not necessarily parallel to a variable direction. On the other hand, the sequential Gaussian process and additive corrector methods must receive a sufficient number of data points aligned with one direction, e.g., from pressure port bands (tap rows) aligned with the freestream. The generalized product of experts best represents wind tunnel pressure as a function of angle of attack, but at higher computational cost than the single expert approach. 
The format of the application data from computational and experimental sources in this work precluded the merging process from including flow condition variables (e.g., angle of attack) among the independent variables, so the merging process is only conducted in the wing geometry variables of arc length and span. The merging process for Cp data allows a more "hands-off" approach to aircraft design and analysis (i.e., fewer engineers are needed to debate the Cp distribution shape) and generates Cp predictions at any location on the wing. However, the costs that come with these benefits are engineer time (learning how to build surrogates), computational time in constructing the surrogates, and surrogate accuracy (surrogates introduce error into data predictions). This dissertation effort used the Trap Wing / First AIAA CFD High-Lift Prediction Workshop as a relevant transonic wing with a multi-element high-lift system, and this work identified that the batch GP model for the WT data and the B-spline surrogate for the CFD might best be combined using expert belief weights to describe Cp as a function of location on the wing element surface. (Abstract shortened by ProQuest.)
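Of the merging methods compared, the additive corrector is the simplest to illustrate. The sketch below is a hypothetical one-dimensional version: the quadratic `cfd` function stands in for the cubic B-spline CFD surrogate, the three port locations and the constant 0.1 offset are invented, and the corrector is reduced to a single constant shift (the least-squares offset over the wind-tunnel ports, which is just the mean residual).

```python
import numpy as np

def additive_corrector(cfd_surrogate, wt_span, wt_cp):
    """Additive corrector merging: keep the CFD surrogate's spanwise
    Cp shape and shift it by the constant offset that minimizes the
    squared error at the wind-tunnel port locations (for a constant
    shift, the least-squares offset is the mean residual)."""
    offset = np.mean(wt_cp - cfd_surrogate(wt_span))
    return lambda span: cfd_surrogate(span) + offset

# Hypothetical CFD Cp distribution along normalized span (stand-in
# for the dissertation's cubic B-spline surrogate).
cfd = lambda s: -1.0 + 0.5 * s ** 2
wt_span = np.array([0.2, 0.5, 0.8])      # invented WT port locations
wt_cp = cfd(wt_span) + 0.1               # WT data sits 0.1 above CFD
merged = additive_corrector(cfd, wt_span, wt_cp)
```

This makes concrete why the method "gives more weight to the CFD prediction": the merged curve inherits the CFD shape everywhere, and the WT data only shift its level.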
Belay, T K; Dagnachew, B S; Kowalski, Z M; Ådnøy, T
2017-08-01
Fourier transform mid-infrared (FT-MIR) spectra of milk are commonly used for phenotyping of traits of interest through links developed between the traits and milk FT-MIR spectra. Predicted traits are then used in genetic analysis for ultimate phenotypic prediction using a single-trait mixed model that accounts for cows' circumstances at a given test day. Here, this approach is referred to as indirect prediction (IP). Alternatively, FT-MIR spectral variables can be kept multivariate in the form of factor scores in REML and BLUP analyses. These BLUP predictions, including phenotype (predicted factor scores), were converted to a single trait through calibration outputs; this method is referred to as direct prediction (DP). The main aim of this study was to verify whether mixed modeling of milk spectra in the form of factor scores (DP) gives better prediction of blood β-hydroxybutyrate (BHB) than the univariate approach (IP). Models to predict blood BHB from milk spectra were also developed. Two data sets that contained milk FT-MIR spectra and other information on Polish dairy cattle were used in this study. Data set 1 (n = 826) also contained BHB measured in blood samples, whereas data set 2 (n = 158,028) did not contain measured blood values. Part of data set 1 was used to calibrate a prediction model (n = 496) and the remaining part of data set 1 (n = 330) was used to validate the calibration models, as well as to evaluate the DP and IP approaches. Dimensions of FT-MIR spectra in data set 2 were reduced either into 5 or 10 factor scores (DP) or into a single trait (IP) with calibration outputs. The REML estimates for these factor scores were found using WOMBAT. The BLUP values and predicted BHB for observations in the validation set were computed using the REML estimates. Blood BHB predicted from milk FT-MIR spectra by both approaches were regressed on reference blood BHB that had not been used in the model development.
Coefficients of determination in cross-validation for untransformed blood BHB were from 0.21 to 0.32, whereas that for the log-transformed BHB were from 0.31 to 0.38. The corresponding estimates in validation were from 0.29 to 0.37 and 0.21 to 0.43, respectively, for untransformed and logarithmic BHB. Contrary to expectation, slightly better predictions of BHB were found when univariate variance structure was used (IP) than when multivariate covariance structures were used (DP). Conclusive remarks on the importance of keeping spectral data in multivariate form for prediction of phenotypes may be found in data sets where the trait of interest has strong relationships with spectral variables. The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
Single-Event Effects in High-Frequency Linear Amplifiers: Experiment and Analysis
NASA Astrophysics Data System (ADS)
Zeinolabedinzadeh, Saeed; Ying, Hanbin; Fleetwood, Zachary E.; Roche, Nicolas J.-H.; Khachatrian, Ani; McMorrow, Dale; Buchner, Stephen P.; Warner, Jeffrey H.; Paki-Amouzou, Pauline; Cressler, John D.
2017-01-01
The single-event transient (SET) response of two different silicon-germanium (SiGe) X-band (8-12 GHz) low noise amplifier (LNA) topologies is fully investigated in this paper. The two LNAs were designed and implemented in a 130 nm SiGe HBT BiCMOS process technology. Two-photon absorption (TPA) laser pulses were utilized to induce transients within various devices in these LNAs. Impulse response theory is identified as a useful tool for predicting the settling behavior of the LNAs subjected to heavy-ion strikes. Comprehensive device- and circuit-level modeling and simulations were performed to accurately simulate the behavior of the circuits under ion strikes. The simulations agree well with TPA measurements. The simulation, modeling and analysis presented here can be applied to other circuit topologies for SET modeling and prediction.
Elaboration of the α-model derived from the BCS theory of superconductivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, David C.
2013-10-14
The single-band α-model of superconductivity (Padamsee et al 1973 J. Low Temp. Phys. 12 387) is a popular model that was adapted from the single-band Bardeen–Cooper–Schrieffer (BCS) theory of superconductivity mainly to allow fits to electronic heat capacity versus temperature T data that deviate from the BCS prediction. The model assumes that the normalized superconducting order parameter Δ(T)/Δ(0), and therefore the normalized London penetration depth λL(T)/λL(0), are the same as in BCS theory, calculated using the BCS value αBCS ≈ 1.764 of α ≡ Δ(0)/kBTc, where kB is Boltzmann's constant and Tc is the superconducting transition temperature. On the other hand, to calculate the electronic free energy, entropy, heat capacity and thermodynamic critical field versus T, the α-model takes α to be an adjustable parameter. Here we write the BCS equations and limiting behaviors for the superconducting-state thermodynamic properties explicitly in terms of α, as needed for calculations within the α-model, and present plots of the results versus T and α that are compared with the respective BCS predictions. Mechanisms such as gap anisotropy and strong coupling that can cause deviations of the thermodynamics from the BCS predictions, especially the heat capacity jump at Tc, are considered. Extensions of the α-model that have appeared in the literature, such as the two-band model, are also discussed.
Tables of values of Δ(T)/Δ(0), the normalized London parameter Λ(T)/Λ(0) and λL(T)/λL(0) calculated from the BCS theory using α = αBCS are provided, which are the same in the α-model by assumption. Tables of values of the entropy, heat capacity and thermodynamic critical field versus T for seven values of α, including αBCS, are also presented.
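The bookkeeping the abstract describes can be summarized in a few relations. This is a sketch assembled from the abstract and the standard BCS expressions, not a transcription of the paper's equations: the gap keeps its BCS temperature dependence, while the quasiparticle thermodynamics are evaluated with the rescaled zero-temperature gap Δ(0) = α kB Tc.

```latex
% Definition of the adjustable parameter:
\[
\alpha \equiv \frac{\Delta(0)}{k_B T_c}, \qquad \alpha_{\mathrm{BCS}} \approx 1.764 .
\]
% Assumption of the model: the normalized gap follows BCS,
\[
\frac{\Delta(T)}{\Delta(0)} = \left[\frac{\Delta(T)}{\Delta(0)}\right]_{\mathrm{BCS}},
\]
% while the entropy (and hence $C_{\mathrm e} = T\,\mathrm{d}S/\mathrm{d}T$)
% uses the standard quasiparticle expression with the rescaled gap:
\[
S = -2 k_B \sum_{\mathbf{k}} \bigl[ f_{\mathbf{k}} \ln f_{\mathbf{k}}
      + (1 - f_{\mathbf{k}}) \ln (1 - f_{\mathbf{k}}) \bigr],
\qquad
f_{\mathbf{k}} = \frac{1}{e^{E_{\mathbf{k}}/k_B T} + 1},
\qquad
E_{\mathbf{k}} = \sqrt{\varepsilon_{\mathbf{k}}^{\,2} + \Delta^{2}(T)} .
\]
```

Setting α above (below) αBCS raises (lowers) the heat capacity jump at Tc relative to the BCS value, which is what lets the model fit deviating data.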
Sweat loss prediction using a multi-model approach
NASA Astrophysics Data System (ADS)
Xu, Xiaojiang; Santee, William R.
2011-07-01
A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
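Since MMA is defined as the plain average of the two single-model predictions, the core computation is tiny. The sketch below uses invented sweat-loss numbers (the observed values and model biases are hypothetical, not the study's data) to show why averaging a model that over-predicts with one that under-predicts reduces RMSD.

```python
import numpy as np

def mma_predict(scenario_pred, hsda_pred):
    """Multi-model approach: the MMA prediction is the simple average
    of the SCENARIO and HSDA sweat-loss predictions."""
    return 0.5 * (np.asarray(scenario_pred, float) + np.asarray(hsda_pred, float))

def rmsd(pred, obs):
    """Root mean square deviation between predictions and observations."""
    return float(np.sqrt(np.mean((np.asarray(pred, float) - np.asarray(obs, float)) ** 2)))

# Hypothetical sweat losses (g): one model biased high, one biased low.
obs      = np.array([500.0, 620.0, 710.0])
scenario = obs + np.array([80.0, 60.0, 90.0])   # over-predicts
hsda     = obs - np.array([70.0, 50.0, 80.0])   # under-predicts
mma = mma_predict(scenario, hsda)
```

When the component models' errors have opposite signs, they partially cancel in the average, which is consistent with the 30-39% RMSD reduction reported above.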
A diagnostic model for studying daytime urban air quality trends
NASA Technical Reports Server (NTRS)
Brewer, D. A.; Remsberg, E. E.; Woodbury, G. E.
1981-01-01
A single cell Eulerian photochemical air quality simulation model was developed and validated for selected days of the 1976 St. Louis Regional Air Pollution Study (RAPS) data sets; parameterizations of variables in the model and validation studies using the model are discussed. Good agreement was obtained between measured and modeled concentrations of NO, CO, and NO2 for all days simulated. The maximum concentration of O3 was also predicted well. Predicted species concentrations were relatively insensitive to small variations in CO and NOx emissions and to the concentrations of species which are entrained as the mixed layer rises.
NASA Astrophysics Data System (ADS)
Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya
2013-03-01
This paper presents a comparative analysis of two modeling methodologies for the prediction of air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters namely ends per inch, picks per inch, warp count and weft count have been used as inputs for artificial neural network (ANN) and regression models. Out of the four regression models tried, interaction model showed very good prediction performance with a meager mean absolute error of 2.017 %. However, ANN models demonstrated superiority over the regression models both in terms of correlation coefficient and mean absolute error. The ANN model with 10 nodes in the single hidden layer showed very good correlation coefficient of 0.982 and 0.929 and mean absolute error of only 0.923 and 2.043 % for training and testing data respectively.
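The "interaction model" that performed best among the regression variants can be sketched as an ordinary least-squares fit on main effects plus pairwise products. Everything below is illustrative: the fabric-parameter values, the coefficients and the sample size are synthetic, not the paper's handloom data.

```python
import numpy as np
from itertools import combinations

def interaction_design(X):
    """Design matrix for a two-factor-interaction regression model:
    intercept, the four main effects (ends/inch, picks/inch, warp
    count, weft count) and all pairwise interaction terms."""
    cols = [np.ones(len(X))] + [X[:, j] for j in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.uniform(20.0, 80.0, size=(60, 4))    # toy fabric parameters
beta_true = rng.normal(size=1 + 4 + 6)       # intercept + 4 mains + 6 pairs
y = interaction_design(X) @ beta_true        # noiseless toy "permeability"
beta_hat, *_ = np.linalg.lstsq(interaction_design(X), y, rcond=None)
```

The interaction terms let the linear model capture how, for example, the effect of pick density depends on weft count, which is the kind of coupling the ANN otherwise models implicitly through its hidden layer.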
Forcey, G.M.; Linz, G.M.; Thogmartin, W.E.; Bleier, W.J.
2008-01-01
Blackbirds share wetland habitat with many waterfowl species in Bird Conservation Region 11 (BCR 11), the prairie potholes. Because of similar habitat preferences, there may be associations between blackbird populations and populations of one or more species of waterfowl in BCR 11. This study models populations of red-winged blackbirds and yellow-headed blackbirds as a function of multiple waterfowl species using data from the North American Breeding Bird Survey within BCR 11. For each blackbird species, we created a global model with blackbird abundance modeled as a function of 11 waterfowl species; nuisance effects (year, route, and observer) also were included in the model. Hierarchical Poisson regression models were fit using Markov chain Monte Carlo methods in WinBUGS 1.4.1. Waterfowl abundances were weakly associated with blackbird numbers, and no single waterfowl species showed a strong correlation with any blackbird species. These findings suggest waterfowl abundance from a single species is not likely a good bioindicator of blackbird abundance; however, a global model provided good fit for predicting red-winged blackbird abundance. Increased model complexity may be required for accurate predictions of blackbird abundance; the amount of data required to construct appropriate models may limit this approach for predicting blackbird abundance in the prairie potholes. Copyright © Taylor & Francis Group, LLC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calcaterra, J.R.; Johnson, W.S.; Neu, R.W.
1997-12-31
Several methodologies have been developed to predict the lives of titanium matrix composites (TMCs) subjected to thermomechanical fatigue (TMF). This paper reviews and compares five life prediction models developed at NASA-LaRC and Wright Laboratories. Three of the models are based on a single parameter, the fiber stress in the load-carrying, or 0°, direction. The other two models, both developed at Wright Labs, are multi-parameter models. These can account for long-term damage, which is beyond the scope of the single-parameter models, but this benefit is offset by the additional complexity of the methodologies. Each of the methodologies was used to model data generated at NASA-LeRC, Wright Labs, and Georgia Tech for the SCS-6/Timetal 21-S material system. VISCOPLY, a micromechanical stress analysis code, was used to determine the constituent stress state for each test and was used for each model to maintain consistency. The predictive capabilities of the models are compared, and the ability of each model to accurately predict the responses of tests dominated by differing damage mechanisms is addressed.
Integrating in silico models to enhance predictivity for developmental toxicity.
Marzo, Marco; Kulkarni, Sunil; Manganaro, Alberto; Roncaglioni, Alessandra; Wu, Shengde; Barton-Maclaren, Tara S; Lester, Cathy; Benfenati, Emilio
2016-08-31
Application of in silico models to predict developmental toxicity has demonstrated limited success, particularly when a single model is employed as the sole source of information. It is acknowledged that modelling the complex outcomes related to this endpoint is a challenge; however, such models have been developed and reported in the literature. The current study explored the possibility of integrating selected public domain models (CAESAR, SARpy and the P&G model) with selected commercial modelling suites (Multicase, Leadscope and Derek Nexus) to assess whether there is an increase in overall predictive performance. The results varied according to the data sets used to assess performance, but performance generally improved upon model integration relative to the individual models. Moreover, because different models are based on different specific developmental toxicity effects, integration of these models increased the applicable chemical and biological spaces. It is suggested that this approach reduces uncertainty associated with in silico predictions by achieving a consensus among a battery of models. The use of tools to assess the applicability domain also improves the interpretation of the predictions. This has been verified in the case of the software VEGA, which makes QSAR models freely available together with a measurement of the applicability domain. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Zhang, Y; Roberts, J; Tortorici, M; Veldman, A; St Ledger, K; Feussner, A; Sidhu, J
2017-06-01
Essentials rVIII-SingleChain is a unique recombinant factor VIII (FVIII) molecule. A population pharmacokinetic model was based on FVIII activity of severe hemophilia A patients. The model was used to simulate factor VIII activity-time profiles for various dosing scenarios. The model supports prolonged dosing of rVIII-SingleChain with intervals of up to twice per week. Background Single-chain recombinant coagulation factor VIII (rVIII-SingleChain) is a unique recombinant coagulation factor VIII molecule. Objectives To: (i) characterize the population pharmacokinetics (PK) of rVIII-SingleChain in patients with severe hemophilia A; (ii) identify correlates of variability in rVIII-SingleChain PK; and (iii) simulate various dosing scenarios of rVIII-SingleChain. Patients/Methods A population PK model was developed, based on FVIII activity levels of 130 patients with severe hemophilia A (n = 91 aged 12-65 years; n = 39 aged < 12 years) who had participated in a single-dose PK investigation with rVIII-SingleChain 50 IU kg⁻¹. PK sampling was performed for up to 96 h. Results A two-compartment population PK model with first-order elimination adequately described FVIII activity. Body weight and predose level of von Willebrand factor were significant covariates on clearance, and body weight was a significant covariate on the central distribution volume. Simulations using the model with various dosing scenarios estimated that > 85% and > 93% of patients were predicted to maintain FVIII activity levels above 1 IU dL⁻¹ at all times with three-times-weekly dosing (given on days 0, 2, and 4.5) at the lowest (20 IU kg⁻¹) and highest (50 IU kg⁻¹) doses, respectively. For twice-weekly dosing (days 0 and 3.5) of 50 IU kg⁻¹ rVIII-SingleChain, 62-80% of patients across all ages were predicted to maintain a FVIII activity level above 1 IU dL⁻¹ at day 7.
Conclusions The population PK model adequately characterized rVIII-SingleChain PK, and the model can be utilized to simulate FVIII activity-time profiles for various dosing scenarios. © 2017 The Authors. Journal of Thrombosis and Haemostasis published by Wiley Periodicals, Inc. on behalf of International Society on Thrombosis and Haemostasis.
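The two-compartment model with first-order elimination identified above can be simulated directly. This is a hedged sketch using hypothetical PK parameters (not the study's fitted estimates or covariate model) for a single 50 IU kg⁻¹ IV bolus in a 70-kg patient:

```python
import numpy as np

# Hypothetical parameters (illustrative only, not the fitted population values):
CL, V1, Q, V2 = 3.0, 40.0, 2.0, 30.0    # clearance (dL/h), volumes (dL), intercompartmental flow
dose = 50.0 * 70                        # IU, for a 70 kg patient at 50 IU kg^-1

k10, k12, k21 = CL / V1, Q / V1, Q / V2  # micro rate constants (h^-1)

# Integrate the two-compartment ODEs (IV bolus) with a small Euler step.
dt, t_end = 0.01, 96.0
a1, a2 = dose, 0.0                      # amounts in central / peripheral compartments
activity = [a1 / V1]                    # FVIII activity proxy, IU dL^-1
for _ in range(int(t_end / dt)):
    da1 = -(k10 + k12) * a1 + k21 * a2
    da2 = k12 * a1 - k21 * a2
    a1 += dt * da1
    a2 += dt * da2
    activity.append(a1 / V1)
activity = np.array(activity)           # biexponential decline over 0..96 h
```

The resulting biexponential activity-time curve is the kind of profile the authors simulated when evaluating whether trough activity stays above 1 IU dL⁻¹ for a given dosing interval.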
Technical note: Equivalent genomic models with a residual polygenic effect.
Liu, Z; Goddard, M E; Hayes, B J; Reinhardt, F; Reents, R
2016-03-01
Routine genomic evaluations in animal breeding are usually based on either a BLUP with genomic relationship matrix (GBLUP) or single nucleotide polymorphism (SNP) BLUP model. For a multi-step genomic evaluation, these 2 alternative genomic models were proven to give equivalent predictions for genomic reference animals. The model equivalence was verified also for young genotyped animals without phenotypes. Due to incomplete linkage disequilibrium of SNP markers to genes or causal mutations responsible for genetic inheritance of quantitative traits, SNP markers cannot explain all the genetic variance. A residual polygenic effect is normally fitted in the genomic model to account for the incomplete linkage disequilibrium. In this study, we start by showing the proof that the multi-step GBLUP and SNP BLUP models are equivalent for the reference animals, when they have a residual polygenic effect included. Second, the equivalence of both multi-step genomic models with a residual polygenic effect was also verified for young genotyped animals without phenotypes. Additionally, we derived formulas to convert genomic estimated breeding values of the GBLUP model to its components, direct genomic values and residual polygenic effect. Third, we made a proof that the equivalence of these 2 genomic models with a residual polygenic effect holds also for single-step genomic evaluation. Both the single-step GBLUP and SNP BLUP models lead to equal prediction for genotyped animals with phenotypes (e.g., reference animals), as well as for (young) genotyped animals without phenotypes. Finally, these 2 single-step genomic models with a residual polygenic effect were proven to be equivalent for estimation of SNP effects, too. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
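The equivalence of GBLUP and SNP BLUP predictions rests on the matrix identity Z'(ZZ' + λI)⁻¹ = (Z'Z + λI)⁻¹Z'. A toy numpy demonstration, ignoring the residual polygenic effect and allele-frequency scaling for brevity (all data simulated):

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 20, 100                          # reference animals, SNP markers
Z = rng.integers(0, 3, size=(n, m)).astype(float)  # genotype codes 0/1/2
Z -= Z.mean(axis=0)                     # centre marker codes
y = rng.normal(size=n)                  # phenotypes (toy data)
lam = 5.0                               # variance ratio (shrinkage parameter)

# SNP BLUP: shrunken marker effects, then GEBV = Z a_hat
a_hat = Z.T @ np.linalg.solve(Z @ Z.T + lam * np.eye(n), y)
gebv_snp = Z @ a_hat

# GBLUP: breeding values predicted directly from G = Z Z'
G = Z @ Z.T
gebv_g = G @ np.linalg.solve(G + lam * np.eye(n), y)
```

The two GEBV vectors agree to machine precision. As the paper proves, augmenting the model with a residual polygenic effect (a pedigree-based component added to G) preserves this equivalence, including in the single-step setting.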
Off-Gas Adsorption Model Capabilities and Recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyon, Kevin L.; Welty, Amy K.; Law, Jack
2016-03-01
Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to more effectively transition from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in development of the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY does reasonably well to capturemore » the initial breakthrough time, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism is particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from data gaps in single species, such as Kr and Xe, isotherms. 
Since isotherm data for each gas is currently available at a single temperature, the model is unable to predict adsorption at temperatures outside of the set of data currently available. Thus, in order to improve the predictive capabilities of the model, there is a need for more single-species adsorption isotherms at different temperatures, in addition to extending the model to include adsorption kinetics. This report provides background information about the modeling process and a path forward for further model improvement in terms of accuracy and user interface.« less
Su, Guosheng; Christensen, Ole F.; Ostersen, Tage; Henryon, Mark; Lund, Mogens S.
2012-01-01
Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in humans, wildlife, model organisms, or farm animals. However, non-additive genetic effects may make an important contribution to the total genetic variation of complex traits. This study presented a genomic BLUP model including additive and non-additive genetic effects, in which additive and non-additive genetic relation matrices were constructed from information on genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study for the first time proposed a method to construct the dominance relationship matrix using SNP markers and demonstrated it in detail. The proposed model was implemented to investigate the amounts of additive genetic, dominance and epistatic variation, and to assess the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: 1) a simple additive genetic model (MA), 2) a model including both additive and additive by additive epistatic genetic effects (MAE), 3) a model including both additive and dominance genetic effects (MAD), and 4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. Estimated dominance variance and additive by additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for the animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition, models including non-additive genetic effects improved the unbiasedness of genomic predictions. PMID:23028912
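The additive and dominance relationship matrices referred to above can be constructed from SNP codes roughly as follows. This is a hedged sketch in the spirit of the paper's construction (VanRaden-style additive scaling and a heterozygosity-based dominance coding), using simulated genotypes rather than the pig data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 200
p = rng.uniform(0.1, 0.9, size=m)                   # allele frequencies (hypothetical)
M = rng.binomial(2, p, size=(n, m)).astype(float)   # 0/1/2 genotype counts

p_hat = M.mean(axis=0) / 2                          # observed allele frequencies
q_hat = 1 - p_hat

# Additive genomic relationship matrix (VanRaden-style scaling)
Z = M - 2 * p_hat
G = Z @ Z.T / (2 * (p_hat * q_hat).sum())

# Dominance relationship matrix: heterozygote indicator, centred by 2pq
H = (M == 1).astype(float) - 2 * p_hat * q_hat
D = H @ H.T / (2 * p_hat * q_hat * (1 - 2 * p_hat * q_hat)).sum()
```

Both matrices are symmetric with diagonals averaging near one under Hardy-Weinberg proportions, which is what makes their variance components directly comparable in the MAD/MAED models.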
NASA Astrophysics Data System (ADS)
Wold, A. M.; Mays, M. L.; Taktakishvili, A.; Odstrcil, D.; MacNeice, P. J.; Jian, L. K.
2017-12-01
The Wang-Sheeley-Arge (WSA)-ENLIL+Cone model is used extensively in space weather operations worldwide to model CME propagation, so it is important to assess its performance. We present validation results for the WSA-ENLIL+Cone model installed at the Community Coordinated Modeling Center (CCMC) and executed in real-time by the CCMC/Space Weather Research Center (SWRC). CCMC/SWRC uses the WSA-ENLIL+Cone model to predict CME arrivals at NASA missions throughout the inner heliosphere. In this work we compare model-predicted CME arrival times to in-situ ICME leading-edge measurements near Earth, STEREO-A, and STEREO-B for simulations completed between March 2010 and December 2016 (over 1,800 CMEs). We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For all predicted CME arrivals, the hit rate is 0.5 and the false alarm rate is 0.1. For the 273 events where the CME was predicted to arrive at Earth, STEREO-A, or STEREO-B and an arrival was observed (a hit), the mean absolute arrival-time prediction error was 10.4 ± 0.9 hours, with a tendency toward early prediction (mean error of -4.0 hours). We show the dependence of the arrival-time error on CME input parameters. We also explore the impact of the multi-spacecraft observations used to initialize the model CME inputs by comparing model verification results before and after the STEREO-B communication loss (since September 2014) and STEREO-A side-lobe operations (August 2014-December 2015). There is an increase of 1.7 hours in the CME arrival-time error during single- or limited two-viewpoint periods compared to the three-spacecraft viewpoint period. This trend would apply to a future space weather mission at L5 or L4, where an additional coronagraph viewpoint would reduce CME arrival-time errors compared to a single L1 viewpoint.
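The hit/miss/false-alarm statistics reported above follow from a simple 2×2 contingency table. In this sketch only the 273 hits come from the abstract; the other counts are hypothetical placeholders, and the false alarm rate is taken here as the probability of false detection (false alarms over observed non-arrivals):

```python
# Contingency counts for predicted vs. observed CME arrivals.
# 273 hits is from the abstract; the remaining counts are illustrative only.
hits, misses, false_alarms, correct_rejections = 273, 120, 90, 800

hit_rate = hits / (hits + misses)                        # probability of detection
false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
```

With real event tables per spacecraft, the same two ratios reproduce the 0.5 hit rate and 0.1 false alarm rate quoted in the abstract.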
Maladen, Ryan D.; Ding, Yang; Umbanhowar, Paul B.; Kamor, Adam; Goldman, Daniel I.
2011-01-01
We integrate biological experiment, empirical theory, numerical simulation and a physical model to reveal principles of undulatory locomotion in granular media. High-speed X-ray imaging of the sandfish lizard, Scincus scincus, in 3 mm glass particles shows that it swims within the medium without using its limbs by propagating a single-period travelling sinusoidal wave down its body, resulting in a wave efficiency, η, the ratio of its average forward speed to the wave speed, of approximately 0.5. A resistive force theory (RFT) that balances granular thrust and drag forces along the body predicts η close to the observed value. We test this prediction against two other more detailed modelling approaches: a numerical model of the sandfish coupled to a discrete particle simulation of the granular medium, and an undulatory robot that swims within granular media. Using these models and analytical solutions of the RFT, we vary the ratio of undulation amplitude to wavelength (A/λ) and demonstrate an optimal condition for sand-swimming, which for a given A results from the competition between η and λ. The RFT, in agreement with the simulated and physical models, predicts that for a single-period sinusoidal wave, maximal speed occurs for A/λ ≈ 0.2, the same kinematics used by the sandfish. PMID:21378020
Ji, Xiang; Liu, Li-Ming; Li, Hong-Qing
2014-11-01
Taking Jinjing Town in the Dongting Lake area as a case, this paper analyzed the evolution of rural landscape patterns by means of life cycle theory, simulated the evolution cycle curve, and calculated its evolution period; then, by combining these results with a CA-Markov model, a complete prediction model was built based on the rules of rural landscape change. The results showed that the rural settlement and paddy landscapes of Jinjing Town would change most by 2020, with the rural settlement landscape increasing to 1194.01 hm² and the paddy landscape greatly reduced to 3090.24 hm². The quantitative and spatial prediction accuracies of the model were up to 99.3% and 96.4%, respectively, more accurate than a single CA-Markov model. The prediction model of rural landscape pattern change proposed in this paper would be helpful for rural landscape planning in the future.
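The Markov half of a CA-Markov projection is repeated application of a class-transition matrix to a vector of land areas. A minimal sketch with a hypothetical three-class transition matrix and areas (not the paper's calibrated values):

```python
import numpy as np

# Hypothetical 5-year transition matrix among three landscape classes
# (settlement, paddy, other); rows sum to 1. Illustrative only.
P = np.array([
    [0.95, 0.01, 0.04],   # settlement mostly persists
    [0.03, 0.90, 0.07],   # paddy converts slowly to other uses
    [0.05, 0.02, 0.93],
])
area_2010 = np.array([900.0, 3500.0, 2000.0])   # hm^2, hypothetical baseline

# Markov projection: two 5-year steps to reach 2020
area_2020 = area_2010 @ np.linalg.matrix_power(P, 2)
```

Total area is conserved, and the projected settlement increase with paddy decrease mirrors the qualitative trend reported for Jinjing Town; the CA component then allocates those quantities spatially.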
Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.
2008-01-01
Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.
The Coherent Flame Model for Turbulent Chemical Reactions
1977-01-01
numerical integration of the resulting differential equations. The model predicts the flame length, and superficial comparison with experiments suggests a ... value for the single universal constant. The theory correctly predicts the change of flame length with changes in stoichiometric ratio for the ... indicate that X will be somewhere between 0.1 and 0.5. Figure 13 is presented to show the effect of equivalence ratio on the flame length when the
Focks, Andreas; Belgers, Dick; Boerwinkel, Marie-Claire; Buijse, Laura; Roessink, Ivo; Van den Brink, Paul J
2018-05-01
Exposure patterns in ecotoxicological experiments often do not match the exposure profiles for which a risk assessment needs to be performed. This limitation can be overcome by using toxicokinetic-toxicodynamic (TKTD) models for the prediction of effects under time-variable exposure. For the use of TKTD models in the environmental risk assessment of chemicals, it is required to calibrate and validate the model for specific compound-species combinations. In this study, the survival of macroinvertebrates after exposure to neonicotinoid insecticides was modelled using TKTD models from the General Unified Threshold models of Survival (GUTS) framework. The models were calibrated on existing survival data from acute or chronic tests under a static exposure regime. Validation experiments were performed for two sets of species-compound combinations: one set focussed on the sensitivity of multiple species to a single compound, imidacloprid, and the other set on the effects of multiple compounds for a single species, i.e., the three neonicotinoid compounds imidacloprid, thiacloprid and thiamethoxam, on the survival of the mayfly Cloeon dipterum. The calibrated models were used to predict survival over time, including uncertainty ranges, for the different time-variable exposure profiles used in the validation experiments. From the comparison between observed and predicted survival, it appeared that the accuracy of the model predictions was acceptable for four of five tested species in the multiple species data set. For compounds such as neonicotinoids, which are known to have the potential to show increased toxicity under prolonged exposure, the calibration and validation of TKTD models for survival needs to be performed ideally by considering calibration data from both acute and chronic tests.
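One reduced GUTS variant, GUTS-RED-SD (stochastic death), illustrates how such TKTD models predict survival under time-variable exposure: scaled damage tracks the exposure profile, and hazard accrues only while damage exceeds a threshold. All parameter values and the pulse profile below are hypothetical, not the calibrated values from this study:

```python
import numpy as np

# GUTS-RED-SD parameters (illustrative, not calibrated values)
kd = 0.5      # dominant rate constant, d^-1
z  = 2.0      # threshold for effects, scaled damage units
b  = 0.3      # killing rate, per damage unit per day
hb = 0.01     # background hazard, d^-1

def exposure(t):
    """Time-variable exposure profile: a 4-day pulse followed by clean water."""
    return 10.0 if t < 4.0 else 0.0

# Euler integration of scaled damage D and cumulative hazard H
dt, t_end = 0.001, 10.0
D, H = 0.0, 0.0
survival = []
for i in range(int(t_end / dt)):
    t = i * dt
    D += dt * kd * (exposure(t) - D)          # damage kinetics follow exposure
    H += dt * (b * max(D - z, 0.0) + hb)      # hazard only above threshold z
    survival.append(np.exp(-H))
survival = np.array(survival)
```

Running the same calibrated parameters against different exposure profiles is exactly how the validation predictions above were generated, with uncertainty ranges coming from parameter posteriors.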
Novel Modeling of Combinatorial miRNA Targeting Identifies SNP with Potential Role in Bone Density
Coronnello, Claudia; Hartmaier, Ryan; Arora, Arshi; Huleihel, Luai; Pandit, Kusum V.; Bais, Abha S.; Butterworth, Michael; Kaminski, Naftali; Stormo, Gary D.; Oesterreich, Steffi; Benos, Panayiotis V.
2012-01-01
MicroRNAs (miRNAs) are post-transcriptional regulators that bind to their target mRNAs through base complementarity. Predicting miRNA targets is a challenging task and various studies showed that existing algorithms suffer from high number of false predictions and low to moderate overlap in their predictions. Until recently, very few algorithms considered the dynamic nature of the interactions, including the effect of less specific interactions, the miRNA expression level, and the effect of combinatorial miRNA binding. Addressing these issues can result in a more accurate miRNA:mRNA modeling with many applications, including efficient miRNA-related SNP evaluation. We present a novel thermodynamic model based on the Fermi-Dirac equation that incorporates miRNA expression in the prediction of target occupancy and we show that it improves the performance of two popular single miRNA target finders. Modeling combinatorial miRNA targeting is a natural extension of this model. Two other algorithms show improved prediction efficiency when combinatorial binding models were considered. ComiR (Combinatorial miRNA targeting), a novel algorithm we developed, incorporates the improved predictions of the four target finders into a single probabilistic score using ensemble learning. Combining target scores of multiple miRNAs using ComiR improves predictions over the naïve method for target combination. ComiR scoring scheme can be used for identification of SNPs affecting miRNA binding. As proof of principle, ComiR identified rs17737058 as disruptive to the miR-488-5p:NCOA1 interaction, which we confirmed in vitro. We also found rs17737058 to be significantly associated with decreased bone mineral density (BMD) in two independent cohorts indicating that the miR-488-5p/NCOA1 regulatory axis is likely critical in maintaining BMD in women. 
With the increasing availability of comprehensive high-throughput datasets from patients, ComiR is expected to become an essential tool for miRNA-related studies. PMID:23284279
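The Fermi-Dirac form mentioned above can be sketched as a site-occupancy probability in which miRNA expression enters through a chemical potential. This is a hedged illustration of the functional form only; the energies, potentials, and temperature scale below are illustrative, not ComiR's fitted values:

```python
import numpy as np

def occupancy(dG, mu, kT=0.593):
    """Fermi-Dirac-style probability that a target site is occupied by a miRNA,
    given a binding free energy dG (kcal/mol) and a chemical potential mu that
    increases with miRNA expression. All numbers are illustrative."""
    return 1.0 / (1.0 + np.exp((dG - mu) / kT))

# For a fixed binding energy, the same site is more likely occupied when the
# miRNA is highly expressed (higher chemical potential).
low  = occupancy(-12.0, mu=-14.0)   # low expression
high = occupancy(-12.0, mu=-10.0)   # high expression
```

This is the mechanism by which expression-aware occupancy can re-rank targets relative to sequence-only scores, and why a disruptive SNP (which shifts dG) changes predicted occupancy.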
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jared A.; Hacker, Joshua P.; Delle Monache, Luca
2016-12-14
A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over the open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over the ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this study, we use the WRF single column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts.
Larval aquatic insect responses to cadmium and zinc in experimental streams
Mebane, Christopher A.; Schmidt, Travis S.; Balistrieri, Laurie S.
2017-01-01
To evaluate the risks of metal mixture effects to natural stream communities under ecologically relevant conditions, the authors conducted 30-d tests with benthic macroinvertebrates exposed to cadmium (Cd) and zinc (Zn) in experimental streams. The simultaneous exposures were with Cd and Zn singly and with Cd+Zn mixtures at environmentally relevant ratios. The tests produced concentration–response patterns that for individual taxa were interpreted in the same manner as classic single-species toxicity tests and for community metrics such as taxa richness and mayfly (Ephemeroptera) abundance were interpreted in the same manner as with stream survey data. Effect concentrations from the experimental stream exposures were usually 2 to 3 orders of magnitude lower than those from classic single-species tests. Relative to a response addition model, which assumes that the joint toxicity of the mixtures can be predicted from the product of their responses to individual toxicants, the Cd+Zn mixtures generally showed slightly less than additive toxicity. The authors applied a modeling approach called Tox to explore the mixture toxicity results and to relate the experimental stream results to field data. The approach predicts the accumulation of toxicants (hydrogen, Cd, and Zn) on organisms using a 2-pKa bidentate model that defines interactions between dissolved cations and biological receptors (biotic ligands) and relates that accumulation through a logistic equation to biological response. The Tox modeling was able to predict Cd+Zn mixture responses from the single-metal exposures as well as responses from field data. The similarity of response patterns between the 30-d experimental stream tests and field data supports the environmental relevance of testing aquatic insects in experimental streams.
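The response addition model used as the reference above predicts joint survival as the product of the single-metal survivals. A minimal sketch with illustrative survival fractions (not the experimental stream data):

```python
# Response (independent) addition: predicted joint survival of the mixture is the
# product of the survival fractions under each metal alone. Numbers illustrative.
surv_cd = 0.80     # survival fraction under Cd alone
surv_zn = 0.70     # survival fraction under Zn alone

surv_mix_predicted = surv_cd * surv_zn     # response-addition prediction
```

Observed mixture survival above this product corresponds to the "slightly less than additive" toxicity the authors generally report for the Cd+Zn exposures.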
Rule, Michael E.; Vargas-Irwin, Carlos; Donoghue, John P.; Truccolo, Wilson
2015-01-01
Understanding the sources of variability in single-neuron spiking responses is an important open problem for the theory of neural coding. This variability is thought to result primarily from spontaneous collective dynamics in neuronal networks. Here, we investigate how well collective dynamics reflected in motor cortex local field potentials (LFPs) can account for spiking variability during motor behavior. Neural activity was recorded via microelectrode arrays implanted in ventral and dorsal premotor and primary motor cortices of non-human primates performing naturalistic 3-D reaching and grasping actions. Point process models were used to quantify how well LFP features accounted for spiking variability not explained by the measured 3-D reach and grasp kinematics. LFP features included the instantaneous magnitude, phase and analytic-signal components of narrow band-pass filtered (δ,θ,α,β) LFPs, and analytic signal and amplitude envelope features in higher-frequency bands. Multiband LFP features predicted single-neuron spiking (1ms resolution) with substantial accuracy as assessed via ROC analysis. Notably, however, models including both LFP and kinematics features displayed marginal improvement over kinematics-only models. Furthermore, the small predictive information added by LFP features to kinematic models was redundant to information available in fast-timescale (<100 ms) spiking history. Overall, information in multiband LFP features, although predictive of single-neuron spiking during movement execution, was redundant to information available in movement parameters and spiking history. Our findings suggest that, during movement execution, collective dynamics reflected in motor cortex LFPs primarily relate to sensorimotor processes directly controlling movement output, adding little explanatory power to variability not accounted by movement parameters. PMID:26157365
Integrated Cox's model for predicting survival time of glioblastoma multiforme.
Ai, Zhibing; Li, Longti; Fu, Rui; Lu, Jing-Min; He, Jing-Dong; Li, Sen
2017-04-01
Glioblastoma multiforme is the most common primary brain tumor and is highly lethal. This study aims to identify signatures for predicting the survival time of patients with glioblastoma multiforme. Clinical information, messenger RNA expression, microRNA expression, and single-nucleotide polymorphism array data of patients with glioblastoma multiforme were retrieved from The Cancer Genome Atlas. Patients were separated into two groups by using 1 year as a cutoff, and a logistic regression model was used to identify any variables that can predict whether a patient will live longer than 1 year. Furthermore, Cox's model was used to find features that were correlated with survival time. Finally, a Cox model integrating the significant clinical variables, messenger RNA expression, microRNA expression, and single-nucleotide polymorphisms was built. Although the classification method failed, signatures of clinical features, messenger RNA expression levels, and microRNA expression levels were identified by using Cox's model. However, no single-nucleotide polymorphisms related to prognosis were found. The selected clinical features were age at initial diagnosis, Karnofsky score, and race, all of which had been suggested to correlate with survival time. Both significant microRNAs, microRNA-221 and microRNA-222, target the p27 Kip1 protein, which implies an important role of p27 Kip1 in the prognosis of glioblastoma multiforme patients. Our results suggested that survival modeling was more suitable than classification for identifying prognostic biomarkers for patients with glioblastoma multiforme. An integrated model containing clinical features, messenger RNA levels, and microRNA expression levels was built, which has the potential to be used in clinics and thus to improve the survival status of glioblastoma multiforme patients.
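Cox's model, used above to relate features to survival time, is fit by maximizing the partial likelihood. A minimal single-covariate sketch with no tied event times; the patients and covariate values are toy data, not TCGA records:

```python
import numpy as np

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood for a single covariate, assuming no tied
    event times. times: follow-up times; events: 1 if death observed (0 if
    censored); x: covariate value per patient."""
    order = np.argsort(times)
    times, events, x = times[order], events[order], x[order]
    loglik = 0.0
    for i in range(len(times)):
        if events[i] == 1:
            risk_set = x[i:]                   # patients still at risk at times[i]
            loglik += beta * x[i] - np.log(np.sum(np.exp(beta * risk_set)))
    return loglik

# Toy data: 5 patients, one hypothetical prognostic covariate (e.g. scaled age)
times  = np.array([2.0, 3.0, 5.0, 7.0, 11.0])
events = np.array([1, 1, 0, 1, 1])
x      = np.array([0.5, -0.2, 0.1, 0.8, -0.4])

ll0 = cox_partial_loglik(0.0, times, events, x)   # null-model log-likelihood
```

At beta = 0 the partial log-likelihood reduces to minus the sum of the logs of the risk-set sizes at each event, which makes the function easy to sanity-check before optimizing over beta.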
Constitutive modeling of superalloy single crystals with verification testing
NASA Technical Reports Server (NTRS)
Jordan, Eric; Walker, Kevin P.
1985-01-01
The goal is the development of constitutive equations to describe the elevated-temperature stress-strain behavior of single crystal turbine blade alloys. The program includes both the development of a suitable model and verification of the model through elevated-temperature torsion testing. A constitutive model is derived from postulated constitutive behavior on individual crystallographic slip systems. The behavior of the entire single crystal is then arrived at by summing the slip on all the operative crystallographic slip systems. This type of formulation has a number of important advantages, including the prediction of orientation dependence and the ability to directly represent the constitutive behavior in terms which metallurgists use in describing the micromechanisms. Here, the model is briefly described, followed by the experimental set-up and some experimental findings to date.
Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation
Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.
2013-01-01
Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
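The spirit of an approximate single-diode model, avoiding iterative solution of the implicit I-V equation, can be illustrated by neglecting series resistance so that the characteristic becomes explicit in V. The parameter values below are illustrative, not those of a fitted commercial module or the paper's specific approximation:

```python
import numpy as np

# Hypothetical module parameters (illustrative only):
Iph, I0 = 8.2, 1e-7        # photocurrent, diode saturation current (A)
n, Ns   = 1.3, 60          # diode ideality factor, cells in series
Vt      = 0.02585          # thermal voltage at ~25 C (V)
Rsh     = 300.0            # shunt resistance (ohm)

def current(V):
    """Explicit approximation: series resistance is neglected so I(V) needs
    no iterative solve, in the spirit of approximate single-diode models."""
    return Iph - I0 * (np.exp(V / (n * Ns * Vt)) - 1.0) - V / Rsh

V = np.linspace(0.0, 38.0, 200)
I = current(V)                 # I-V curve from short circuit toward open circuit
```

The curve starts at the short-circuit current and falls monotonically through the open-circuit voltage; a full model would reintroduce the series-resistance term and correct for it without iteration.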
Intrinsic dimensionality predicts the saliency of natural dynamic scenes.
Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt
2012-06-01
Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.
A Model of Metacognition, Achievement Goal Orientation, Learning Style and Self-Efficacy
ERIC Educational Resources Information Center
Coutinho, Savia A.; Neuman, George
2008-01-01
Structural equation modelling was used to test a model integrating achievement goal orientation, learning style, self-efficacy and metacognition into a single framework that explained and predicted variation in performance. Self-efficacy was the strongest predictor of performance. Metacognition was a weak predictor of performance. Deep processing…
NASA Technical Reports Server (NTRS)
Mitchell, David L.; Arnott, W. Patrick
1994-01-01
This study builds upon the microphysical modeling described in Part 1 by deriving formulations for the extinction and absorption coefficients in terms of the size distribution parameters predicted from the micro-physical model. The optical depth and single scatter albedo of a cirrus cloud can then be determined, which, along with the asymmetry parameter, are the input parameters needed by cloud radiation models. Through the use of anomalous diffraction theory, analytical expressions were developed describing the absorption and extinction coefficients and the single scatter albedo as functions of size distribution parameters, ice crystal shapes (or habits), wavelength, and refractive index. The extinction coefficient was formulated in terms of the projected area of the size distribution, while the absorption coefficient was formulated in terms of both the projected area and mass of the size distribution. These properties were formulated as explicit functions of ice crystal geometry and were not based on an 'effective radius.' Based on simulations of the second cirrus case study described in Part 1, absorption coefficients predicted in the near infrared for hexagonal columns and rosettes were up to 47% and 71% lower, respectively, than absorption coefficients predicted by using equivalent area spheres. This resulted in single scatter albedos in the near-infrared that were considerably greater than those predicted by the equivalent area sphere method. Reflectances in this region should therefore be underestimated using the equivalent area sphere approach. Cloud optical depth was found to depend on ice crystal habit. When the simulated cirrus cloud contained only bullet rosettes, the optical depth was 142% greater than when the cloud contained only hexagonal columns. This increase produced a doubling in cloud albedo. In the near-infrared (IR), the single scatter albedo also exhibited a significant dependence on ice crystal habit. 
More research is needed on the geometrical properties of ice crystals before the influence of ice crystal shape on cirrus radiative properties can be adequately understood. This study provides a way of coupling the radiative properties of absorption, extinction, and single scatter albedo to the microphysical properties of cirrus clouds. The dependence of extinction and absorption on ice crystal shape was not just due to geometrical differences between crystal types, but was also due to the effect these differences had on the evolution of ice particle size spectra. The ice particle growth model in Part 1 and the radiative properties treated here are based on analytical formulations, and thus represent a computationally efficient means of modeling the microphysical and radiative properties of cirrus clouds.
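The abstract's analytical extinction formulation can be illustrated with the classic anomalous-diffraction-theory result for a nonabsorbing sphere; this is only a minimal sketch of the theory's building block, not the habit-specific expressions the paper derives, and the example size distribution values are assumptions.

```python
import math

def q_ext_adt(rho):
    """Anomalous-diffraction-theory extinction efficiency for a
    nonabsorbing sphere; rho = 2*x*(n - 1) is the phase delay of a
    ray passing through the particle center (x = size parameter)."""
    if rho == 0.0:
        return 0.0
    return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho**2) * (1.0 - math.cos(rho))

def extinction_coefficient(diameters_m, number_conc_m3, wavelength_m, n_real=1.31):
    """Extinction coefficient (1/m) as an integral of Q_ext times the
    projected area over a discretized size distribution, mirroring the
    projected-area formulation described in the abstract."""
    beta = 0.0
    for d, conc in zip(diameters_m, number_conc_m3):
        x = math.pi * d / wavelength_m          # size parameter
        rho = 2.0 * x * (n_real - 1.0)
        area = math.pi * (d / 2.0) ** 2         # projected area of one particle
        beta += q_ext_adt(rho) * area * conc
    return beta
```

For large particles Q_ext approaches the geometric-optics limit of 2, which is why extinction scales with projected area for typical cirrus crystal sizes.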
NASA Technical Reports Server (NTRS)
Kraft, R. E.
1999-01-01
Single-degree-of-freedom resonators consisting of honeycomb cells covered by perforated facesheets are widely used as acoustic noise suppression liners in aircraft engine ducts. The acoustic resistance and mass reactance of such liners are known to vary with the intensity of the sound incident upon the panel. Since the pressure drop across a perforated liner facesheet increases quadratically with the flow velocity through the facesheet, this is known as the nonlinear resistance effect. In the past, two different empirical frequency-domain models have been used to predict the effect of the incident sound pressure level on the perforated liner impedance: one that uses the incident particle velocity in isolated narrowbands, and one that models the particle velocity as the overall velocity. In the absence of grazing flow, neither frequency-domain model is entirely accurate in predicting the nonlinear effect that is measured for typical perforated sheets. A time-domain model is developed in an attempt to understand and improve the model for the effect of spectral shape and amplitude of multi-frequency incident sound pressure on the liner impedance. A computer code for the time-domain finite difference model is developed and its predictions are compared to those of the current frequency-domain models.
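The quadratic pressure drop described above implies a facesheet resistance that grows linearly with the velocity through the orifices. A minimal sketch, with illustrative (assumed) values for the open-area ratio, discharge coefficient, and linear resistance, not the paper's calibrated model:

```python
RHO0 = 1.21   # air density, kg/m^3
C0 = 343.0    # speed of sound, m/s

def facesheet_resistance(v_rms, sigma=0.08, c_d=0.76, r_linear=0.3):
    """Normalized specific acoustic resistance of a perforate:
    a linear (viscous) part plus a nonlinear part from the quadratic
    pressure drop through the orifices.
    sigma: open-area ratio; c_d: discharge coefficient (both assumed)."""
    v_h = v_rms / sigma                                      # orifice velocity (continuity)
    dp = (1.0 - sigma**2) * RHO0 * v_h**2 / (2.0 * c_d**2)   # quadratic pressure drop
    r_nonlinear = dp / (RHO0 * C0 * max(v_rms, 1e-12))       # normalize by rho*c and v
    return r_linear + r_nonlinear
```

Because dp scales with v_rms squared, the normalized resistance rises linearly with incident particle velocity, which is the nonlinear (intensity-dependent) effect the two frequency-domain models try to capture.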
Corsini, Chiara; Baker, Catriona; Kung, Ethan; Schievano, Silvia; Arbia, Gregory; Baretta, Alessia; Biglino, Giovanni; Migliavacca, Francesco; Dubini, Gabriele; Pennati, Giancarlo; Marsden, Alison; Vignon-Clementel, Irene; Taylor, Andrew; Hsia, Tain-Yen; Dorfman, Adam
2014-01-01
In patients with congenital heart disease and a single ventricle (SV), ventricular support of the circulation is inadequate, and staged palliative surgery (usually 3 stages) is needed for treatment. In the various palliative surgical stages individual differences in the circulation are important and patient-specific surgical planning is ideal. In this study, an integrated approach between clinicians and engineers has been developed, based on patient-specific multi-scale models, and is here applied to predict stage 2 surgical outcomes. This approach involves four distinct steps: (1) collection of pre-operative clinical data from a patient presenting for SV palliation, (2) construction of the pre-operative model, (3) creation of feasible virtual surgical options which couple a three-dimensional model of the surgical anatomy with a lumped parameter model (LPM) of the remainder of the circulation and (4) performance of post-operative simulations to aid clinical decision making. The pre-operative model is described, agreeing well with clinical flow tracings and mean pressures. Two surgical options (bi-directional Glenn and hemi-Fontan operations) are virtually performed and coupled to the pre-operative LPM, with the hemodynamics of both options reported. Results are validated against postoperative clinical data. Ultimately, this work represents the first patient-specific predictive modeling of stage 2 palliation using virtual surgery and closed-loop multi-scale modeling.
Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina
2015-01-01
Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.
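The variance-partitioning idea in this abstract can be sketched with a toy ANOVA-style decomposition; the numbers and the two-factor setup are invented for illustration, but the calculation shows how one attributes prediction-map variance to each modeling decision.

```python
import itertools, statistics

# Toy example: predicted habitat suitability at one site for all
# combinations of two SDM decisions (algorithm, climate dataset).
# Values are made up to illustrate the decomposition.
algorithms = {"GLM": 0.2, "RF": 0.6, "Maxent": 0.7}
climates   = {"A": 0.00, "B": 0.05}

preds = {(a, c): algorithms[a] + climates[c]
         for a, c in itertools.product(algorithms, climates)}

grand = statistics.mean(preds.values())

def factor_ss(levels, axis):
    """Between-level sum of squares for one factor (its main effect)."""
    ss = 0.0
    for lev in levels:
        vals = [v for k, v in preds.items() if k[axis] == lev]
        ss += len(vals) * (statistics.mean(vals) - grand) ** 2
    return ss

ss_algo = factor_ss(algorithms, 0)
ss_clim = factor_ss(climates, 1)
# In this toy the algorithm axis dominates the variance, mirroring the
# paper's finding that algorithm choice is the largest uncertainty source.
```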
Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang
2016-01-01
Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve prediction accuracy, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. The filtered traffic flow data are then used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust.
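The SSA denoising step used by this hybrid model can be sketched directly: embed the series in a trajectory matrix, truncate its singular value decomposition, and average the anti-diagonals back into a series. The window length and rank below are arbitrary illustration choices, not the paper's tuned settings.

```python
import numpy as np

def ssa_denoise(series, window, rank):
    """Singular spectrum analysis: embed the series in a trajectory
    matrix, keep the leading `rank` singular components, and average
    the anti-diagonals back into a filtered series."""
    n = len(series)
    k = n - window + 1
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # rank-truncated trajectory matrix
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                                  # diagonal (Hankel) averaging
        out[j:j + window] += X_low[:, j]
        counts[j:j + window] += 1
    return out / counts

# Example: a sine wave plus noise; a rank-2 reconstruction recovers
# most of the oscillation (a sinusoid occupies two singular components).
t = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).standard_normal(200)
clean = ssa_denoise(noisy, window=40, rank=2)
```

In the paper's pipeline, the filtered series (here `clean`) would then be fed to the KELM predictor.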
Viterbori, Paola; Usai, M Carmen; Traverso, Laura; De Franchis, Valentina
2015-12-01
This longitudinal study analyzes whether selected components of executive function (EF) measured during the preschool period predict several indices of math achievement in primary school. Six EF measures were assessed in a sample of 5-year-old children (N = 175). The math achievement of the same children was then tested in Grades 1 and 3 using both a composite math score and three single indices of written calculation, arithmetical facts, and problem solving. Using previous results obtained from the same sample of children, a confirmatory factor analysis examining the latent EF structure in kindergarten indicated that a two-factor model provided the best fit for the data. In this model, inhibition and working memory (WM)-flexibility were separate dimensions. A full structural equation model was then used to test the hypothesis that math achievement (the composite math score and single math scores) in Grades 1 and 3 could be explained by the two EF components comprising the kindergarten model. The results indicate that the WM-flexibility component measured during the preschool period substantially predicts mathematical achievement, especially in Grade 3. The math composite scores were predicted by the WM-flexibility factor at both grade levels. In Grade 3, both problem solving and arithmetical facts were predicted by the WM-flexibility component. The results empirically support interventions that target EF as an important component of early childhood mathematics education. Copyright © 2015 Elsevier Inc. All rights reserved.
Hill, Kevin D.; Sampson, Mario R.; Li, Jennifer S.; Tunks, Robert D.; Schulman, Scott R.; Cohen-Wolkowiez, Michael
2015-01-01
Aims Sildenafil is frequently prescribed to children with single ventricle heart defects. These children have unique hepatic physiology with elevated hepatic pressures, which may alter drug pharmacokinetics. We sought to determine the impact of hepatic pressure on sildenafil pharmacokinetics in children with single ventricle heart defects. Methods A population pharmacokinetic model was developed using data from 20 single ventricle children receiving single dose intravenous sildenafil during cardiac catheterization. Nonlinear mixed effect modeling was used for model development, and covariate effects were evaluated based on estimated precision and clinical significance. Results The analysis included a median (range) of 4 (2–5) pharmacokinetic samples per child. The final structural model was a two-compartment model for sildenafil with a one-compartment model for des-methyl-sildenafil (the active metabolite), with assumed 100% sildenafil to des-methyl-sildenafil conversion. Sildenafil clearance was unaffected by hepatic pressure (clearance = 0.62 L/h/kg); however, clearance of des-methyl-sildenafil (1.94 × (hepatic pressure/9)^−1.33 L/h/kg) was predicted to decrease ~7-fold as hepatic pressure increased from 4 to 18 mm Hg. Predicted drug exposure was increased by ~1.5-fold in subjects with hepatic pressures ≥ 10 mm Hg versus < 10 mm Hg (median area under the curve = 792 μg*h/L versus 533 μg*h/L). Discussion Elevated hepatic pressure delays clearance of the sildenafil metabolite, des-methyl-sildenafil, and increases drug exposure. We speculate that this results from impaired biliary clearance. Hepatic pressure should be considered when prescribing sildenafil to children. These data demonstrate the importance of pharmacokinetic assessment in patients with unique cardiovascular physiology that may affect drug metabolism. PMID:26197839
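The reported covariate model for metabolite clearance is a simple power function of hepatic pressure, so the ~7-fold claim can be checked arithmetically:

```python
def dms_clearance(hepatic_pressure_mmHg):
    """Des-methyl-sildenafil clearance (L/h/kg) from the reported
    covariate model: CL = 1.94 * (HP / 9)**-1.33, where HP is
    hepatic pressure in mm Hg (9 mm Hg is the reference value)."""
    return 1.94 * (hepatic_pressure_mmHg / 9.0) ** -1.33

fold_change = dms_clearance(4.0) / dms_clearance(18.0)
# (18/4)**1.33 is roughly 7.4, consistent with the reported
# ~7-fold decrease in clearance from 4 to 18 mm Hg.
```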
Zheng, Jenny; van Schaick, Erno; Wu, Liviawati Sutjandra; Jacqmin, Philippe; Perez Ruixo, Juan Jose
2015-08-01
Osteoporosis is a chronic skeletal disease characterized by low bone strength resulting in increased fracture risk. New treatments for osteoporosis are still an unmet medical need because current available treatments have various limitations. Bone mineral density (BMD) is an important endpoint for evaluating new osteoporosis treatments; however, the BMD response is often slower and less profound than that of bone turnover markers (BTMs). If the relationship between BTMs and BMD can be quantified, the BMD response can be predicted by the changes in BTM after a single dose; therefore, a decision based on BMD changes can be informed early. We have applied a bone cycle model to a phase 2 denosumab dose-ranging study in osteopenic women to quantitatively link serum denosumab pharmacokinetics, BTMs, and lumbar spine (LS) BMD. The data from two phase 3 denosumab studies in patients with low bone mass, FREEDOM and DEFEND, were used for external validation. Both internal and external visual predictive checks demonstrated that the model was capable of predicting LS BMD at the denosumab regimen of 60 mg every 6 months. It has been demonstrated that the model, in combination with the changes in BTMs observed from a single-dose study in men, is capable of predicting long-term BMD outcomes (e.g., LS BMD response in men after 1 year of treatment) in different populations. We propose that this model can be used to inform drug development decisions for osteoporosis treatment early via evaluating LS BMD response when BTM data become available in early trials.
NASA Astrophysics Data System (ADS)
Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari
2015-03-01
Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method, and the models utilized in this work are ARX-type (autoregressive with exogenous input), multiple-input multiple-output, polynomial models that were identified from experimental data obtained from experiments with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and the maximum temperature need to be controlled, and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
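GPC optimizes a control sequence against multi-step-ahead predictions of an identified model. A minimal single-input single-output sketch of the ARX prediction step (the paper uses identified MIMO models; coefficients here are invented):

```python
import numpy as np

def arx_predict(y_hist, u_hist, u_future, a, b):
    """Multi-step-ahead prediction with a SISO ARX model
    y[t] = -a1*y[t-1] - ... - ana*y[t-na] + b1*u[t-1] + ... + bnb*u[t-nb].
    GPC would search over u_future to minimize a tracking cost on these
    predictions; here we just roll the model forward."""
    y = list(y_hist)
    u = list(u_hist)
    preds = []
    for uf in u_future:
        yt = -sum(ai * y[-i - 1] for i, ai in enumerate(a)) \
             + sum(bi * u[-i - 1] for i, bi in enumerate(b))
        preds.append(yt)
        y.append(yt)   # predicted output becomes part of the regressor
        u.append(uf)   # planned input takes effect at the next step
    return np.array(preds)

# First-order example: y[t] = 0.5*y[t-1] + u[t-1], step input of 1.0;
# the prediction converges to the steady-state gain 1/(1-0.5) = 2.
p = arx_predict([0.0], [0.0], [1.0] * 10, a=[-0.5], b=[1.0])
```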
Modeling of Interfacial Modification Effects on Thermal Conductivity of Carbon Nanotube Composites
NASA Technical Reports Server (NTRS)
Clancy, Thomas C.; Gates, Thomas S.
2006-01-01
The effect of functionalization of carbon nanotubes on the thermal conductivity of nanocomposites has been studied using a multi-scale modeling approach. These results predict that grafting linear hydrocarbon chains to the surface of a single wall carbon nanotube with covalent chemical bonds should result in a significant increase in the thermal conductivity of these nanocomposites. This is due to the decrease in the interfacial thermal (Kapitza) resistance between the single wall carbon nanotube and the surrounding polymer matrix upon chemical functionalization. The nanocomposites studied here consist of single wall carbon nanotubes in a bulk poly(ethylene vinyl acetate) matrix. The nanotubes are functionalized by end-grafting linear hydrocarbon chains of varying length to the surface of the nanotube. The effect which this functionalization has on the interfacial thermal resistance is studied by molecular dynamics simulation. Interfacial thermal resistance values are calculated for a range of chemical grafting densities and with several chain lengths. These results are subsequently used in an analytical model to predict the resulting effect on the bulk thermal conductivity of the nanocomposite.
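The role of interfacial (Kapitza) resistance in the bulk conductivity can be illustrated with a Maxwell-Garnett-type effective-medium formula for spherical inclusions; the paper's nanotube case needs an anisotropic model, so this isotropic sketch with assumed property values only shows the qualitative trend.

```python
def k_eff_spheres(k_m, k_p, f, r_p, R_k):
    """Maxwell-Garnett-type effective thermal conductivity for spherical
    inclusions of radius r_p with interfacial (Kapitza) resistance R_k
    (m^2*K/W). alpha lumps the interface into a size-dependent penalty.
    k_m, k_p: matrix and particle conductivities (W/m/K); f: volume fraction."""
    alpha = R_k * k_m / r_p
    num = k_p * (1 + 2 * alpha) + 2 * k_m + 2 * f * (k_p * (1 - alpha) - k_m)
    den = k_p * (1 + 2 * alpha) + 2 * k_m - f * (k_p * (1 - alpha) - k_m)
    return k_m * num / den

# Lowering the interfacial resistance (the effect covalent
# functionalization has in the paper) raises the composite conductivity.
# Property values below are illustrative assumptions.
k_poor = k_eff_spheres(k_m=0.2, k_p=3000.0, f=0.01, r_p=1e-9, R_k=1e-7)
k_good = k_eff_spheres(k_m=0.2, k_p=3000.0, f=0.01, r_p=1e-9, R_k=1e-9)
```

With a large R_k the highly conductive filler can even lower the effective conductivity below the matrix value, which is why reducing the Kapitza resistance matters more than the filler's intrinsic conductivity at the nanoscale.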
Wallace, Meredith L; Anderson, Stewart J; Mazumdar, Sati
2010-12-20
Missing covariate data present a challenge to tree-structured methodology due to the fact that a single tree model, as opposed to an estimated parameter value, may be desired for use in a clinical setting. To address this problem, we suggest a multiple imputation algorithm that adds draws of stochastic error to a tree-based single imputation method presented by Conversano and Siciliano (Technical Report, University of Naples, 2003). Unlike previously proposed techniques for accommodating missing covariate data in tree-structured analyses, our methodology allows the modeling of complex and nonlinear covariate structures while still resulting in a single tree model. We perform a simulation study to evaluate our stochastic multiple imputation algorithm when covariate data are missing at random and compare it to other currently used methods. Our algorithm is advantageous for identifying the true underlying covariate structure when complex data and larger percentages of missing covariate observations are present. It is competitive with other current methods with respect to prediction accuracy. To illustrate our algorithm, we create a tree-structured survival model for predicting time to treatment response in older, depressed adults. Copyright © 2010 John Wiley & Sons, Ltd.
Söllner, Anke; Bröder, Arndt; Glöckner, Andreas; Betsch, Tilmann
2014-02-01
When decision makers are confronted with different problems and situations, do they use a uniform mechanism as assumed by single-process models (SPMs) or do they choose adaptively from a set of available decision strategies as multiple-strategy models (MSMs) imply? Both frameworks of decision making have gathered a lot of support, but only rarely have they been contrasted with each other. Employing an information intrusion paradigm for multi-attribute decisions from givens, SPM and MSM predictions on information search, decision outcomes, attention, and confidence judgments were derived and tested against each other in two experiments. The results consistently support the SPM view: Participants seemingly using a "take-the-best" (TTB) strategy do not ignore TTB-irrelevant information as MSMs would predict, but adapt the amount of information searched, choose alternative choice options, and show varying confidence judgments contingent on the quality of the "irrelevant" information. The uniformity of these findings underlines the adequacy of the novel information intrusion paradigm and comprehensively promotes the notion of a uniform decision making mechanism as assumed by single-process models. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Surtees, Jennifer A; Alani, Eric
2006-07-14
Genetic studies in Saccharomyces cerevisiae predict that the mismatch repair (MMR) factor MSH2-MSH3 binds and stabilizes branched recombination intermediates that form during single strand annealing and gene conversion. To test this model, we constructed a series of DNA substrates that are predicted to form during these recombination events. We show in an electrophoretic mobility shift assay that S. cerevisiae MSH2-MSH3 specifically binds branched DNA substrates containing 3' single-stranded DNA and that ATP stimulates its release from these substrates. Chemical footprinting analyses indicate that MSH2-MSH3 specifically binds at the double-strand/single-strand junction of branched substrates, alters its conformation and opens up the junction. Therefore, MSH2-MSH3 binding to its substrates creates a unique nucleoprotein structure that may signal downstream steps in repair that include interactions with MMR and nucleotide excision repair factors.
Stream network and stream segment temperature models software
Bartholow, John
2010-01-01
This set of programs simulates steady-state stream temperatures throughout a dendritic stream network handling multiple time periods per year. The software requires a math co-processor and 384K RAM. Also included is a program (SSTEMP) designed to predict the steady state stream temperature within a single stream segment for a single time period.
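Steady-state segment models of this kind typically relax the upstream water temperature toward an equilibrium temperature over the travel time through the segment. A hedged sketch of that closed-form solution, with an assumed bulk exchange coefficient rather than SSTEMP's actual heat-flux terms:

```python
import math

def segment_temperature(t_upstream_c, t_equilibrium_c, k_exchange,
                        distance_m, depth_m, flow_velocity_m_s,
                        rho_cp=4.19e6):
    """First-order relaxation of water temperature toward an equilibrium
    temperature along a stream segment: the kind of closed-form solution
    a steady-state model like SSTEMP is built on.
    k_exchange: bulk surface-exchange coefficient (W/m^2/K, assumed);
    rho_cp: volumetric heat capacity of water (J/m^3/K)."""
    travel_time_s = distance_m / flow_velocity_m_s
    rate = k_exchange / (rho_cp * depth_m)   # relaxation rate, 1/s
    return t_equilibrium_c + (t_upstream_c - t_equilibrium_c) * math.exp(-rate * travel_time_s)

# Over a short reach the water only partway approaches equilibrium;
# over a very long reach it converges to the equilibrium temperature.
t_short = segment_temperature(10.0, 20.0, 30.0, 1.0e4, 1.0, 0.5)
t_long = segment_temperature(10.0, 20.0, 30.0, 1.0e8, 1.0, 0.5)
```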
Image Analysis of a Negatively Curved Graphitic Sheet Model for Amorphous Carbon
NASA Astrophysics Data System (ADS)
Bursill, L. A.; Bourgeois, Laure N.
High-resolution electron micrographs are presented which show essentially curved single sheets of graphitic carbon. Image calculations are then presented for the random surface schwarzite-related model of Townsend et al. (Phys. Rev. Lett. 69, 921-924, 1992). Comparison with experimental images does not rule out the contention that such models, containing surfaces of negative curvature, may be useful for predicting some physical properties of specific forms of nanoporous carbon. Some difficulties of the model predictions, when compared with the experimental images, are pointed out. The range of application of this model, as well as competing models, is discussed briefly.
van Dijk, C; de Levie, R
1985-01-01
The continuum and single jump treatments of ion transport through black lipid membranes predict experimentally distinguishable results, even when the same mechanistic assumptions are made and the same potential-distance profile is used. On the basis of steady-state current-voltage curves for nonactin-mediated transport of potassium ions, we find that the continuum model describes the data accurately, whereas the single jump model fails to do so, for all cases investigated in which capacitance measurements indicate that the membrane thickness varies little with applied potential. PMID:3839420
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration, based on the "failure unit model", is presented for predicting the lifetime of metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but only qualitatively. In our model, the probability functions of failure units in both single grain segments and polygrain segments are considered, instead of those in polygrain segments alone. Based on our model, we calculated the MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
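The series part of a failure unit model is a weakest-link calculation: a line of units in series fails when its first unit fails. A Monte Carlo sketch with assumed lognormal unit lifetimes (the distribution conventionally used for electromigration), not the paper's calibrated parameters:

```python
import math, random

def line_ttf(n_units, mttf_unit, sigma, rng):
    """Time to failure of a line modeled as n_units failure units in
    series: the line fails when its weakest unit fails. Each unit's TTF
    is lognormal with median mttf_unit and shape parameter sigma."""
    return min(mttf_unit * math.exp(sigma * rng.gauss(0.0, 1.0))
               for _ in range(n_units))

rng = random.Random(42)
short_lines = [line_ttf(10, 100.0, 0.5, rng) for _ in range(2000)]
long_lines = [line_ttf(100, 100.0, 0.5, rng) for _ in range(2000)]
# Longer lines (more units in series) fail sooner on average: one of
# the line-length effects a failure unit model reproduces.
```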
The Behaviour of Naturally Debonded Composites Due to Bending Using a Meso-Level Model
NASA Astrophysics Data System (ADS)
Lord, C. E.; Rongong, J. A.; Hodzic, A.
2012-06-01
Numerical simulations and analytical models are increasingly being sought for the design and behaviour prediction of composite materials. The use of high-performance composite materials is growing in both civilian and defence-related applications. With this growth comes the necessity to understand and predict how these new materials will behave under their exposed environments. In this study, the displacement behaviour of naturally debonded composites under out-of-plane bending conditions has been investigated. An analytical approach has been developed to predict the displacement response behaviour. The analytical model supports multi-layered composites with full and partial delaminations. The model can be used to extract bulk effective material properties, which can later be represented as an ESL (Equivalent Single Layer). The friction between each of the layers is included in the analytical model and is shown to have distinct behaviour for these types of composites. Acceptable agreement was observed between the model predictions, the ANSYS finite element model, and the experiments.
NASA Astrophysics Data System (ADS)
Ngo, N. H.; Nguyen, H. T.; Tran, H.
2018-03-01
In this work, we show that precise predictions of the shapes of H2O rovibrational lines broadened by N2, over a wide pressure range, can be made using simulations corrected by a single measurement. For that, we use the partially-correlated speed-dependent Keilson-Storer (pcsdKS) model whose parameters are deduced from molecular dynamics simulations and semi-classical calculations. This model takes into account the collision-induced velocity-changes effects, the speed dependences of the collisional line width and shift as well as the correlation between velocity and internal-state changes. For each considered transition, the model is corrected by using a parameter deduced from its broadening coefficient measured for a single pressure. The corrected-pcsdKS model is then used to simulate spectra for a wide pressure range. Direct comparisons of the corrected-pcsdKS calculated and measured spectra of 5 rovibrational lines of H2O for various pressures, from 0.1 to 1.2 atm, show very good agreements. Their maximum differences are in most cases well below 1%, much smaller than residuals obtained when fitting the measurements with the Voigt line shape. This shows that the present procedure can be used to predict H2O line shapes for various pressure conditions and thus the simulated spectra can be used to deduce the refined line-shape parameters to complete spectroscopic databases, in the absence of relevant experimental values.
Deformation of periodic nanovoid structures in Mg single crystals
NASA Astrophysics Data System (ADS)
Xu, Shuozhi; Su, Yanqing; Zare Chavoshi, Saeed
2018-01-01
Large-scale molecular dynamics (MD) simulations of Mg single crystals containing periodic cylindrical voids subject to uniaxial tension along the z direction are carried out. Models with different initial void sizes and crystallographic orientations are explored using two interatomic potentials. It is found that (i) a larger initial void always leads to a lower yield stress, in agreement with an analytic prediction; (ii) in the model with x[-1100]-y[0001]-z[11-20] orientations, the two potentials predict different types of tension twins and phase transformation; (iii) in the model with x[0001]-y[11-20]-z[-1100] orientations, the two potentials identically predict the nucleation of edge dislocations on the prismatic plane, which then glide away from the void, resulting in extrusions at the void surface; in the case of the smallest initial void, these surface extrusions pinch the void into two voids. Besides bringing new physical understanding of nanovoid structures, our work highlights the variability and uncertainty in MD simulations arising from the interatomic potential, an issue relatively lightly addressed in the literature to date.
Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction.
Cheng, Hao; Garrick, Dorian J; Fernando, Rohan L
2017-01-01
A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for best linear unbiased prediction using whole-genome data. Leave-one-out cross validation can be used to quantify the predictive ability of a statistical model. Naive application of leave-one-out cross validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient leave-one-out cross validation strategies are presented here, requiring little more effort than a single analysis. The efficient strategies are 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations.
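The core of such efficient strategies is the exact leave-one-out identity for linear smoothers: the LOO residual equals the full-fit residual divided by one minus the observation's leverage. Below is a sketch using ridge regression with a fixed penalty, which is equivalent to random-marker-effect BLUP with known variance components; the paper works with the mixed-model equations, so treat this as the idea rather than their implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 30, 50, 10.0
X = rng.standard_normal((n, p))                 # marker covariates
y = X @ rng.standard_normal(p) * 0.1 + rng.standard_normal(n)

# Ridge fit (equivalent to BLUP of marker effects with fixed lambda)
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)   # hat matrix
e_fit = y - H @ y
e_loo_fast = e_fit / (1.0 - np.diag(H))         # all n LOO residuals, one analysis

# Naive LOO for comparison: refit n times, once per left-out observation
e_loo_naive = np.empty(n)
for i in range(n):
    m = np.ones(n, dtype=bool); m[i] = False
    beta = np.linalg.solve(X[m].T @ X[m] + lam * np.eye(p), X[m].T @ y[m])
    e_loo_naive[i] = y[i] - X[i] @ beta
# The two agree exactly; the shortcut replaces n refits with one fit.
```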
N'Djin, W. Apoutou; Chapelon, Jean-Yves; Melodelima, David
2015-01-01
Organ motion is a key component in the treatment of abdominal tumors by High Intensity Focused Ultrasound (HIFU), since it may influence the safety, efficacy and treatment time. Here we report the development, in a porcine model, of an ultrasound (US) image-based dynamic fusion modeling method for predicting the effect of in vivo motion on intraoperative HIFU treatments performed in the liver in conjunction with surgery. A speckle tracking method was used on US images to quantify in vivo liver motions occurring intraoperatively during breathing and apnea. A fusion modeling of HIFU treatments was implemented by merging dynamic in vivo motion data into a numerical modeling of HIFU treatments. Two HIFU strategies were studied: a spherical focusing delivering 49 juxtapositions of 5-second HIFU exposures, and a toroidal focusing using 1 single 40-second HIFU exposure. Liver motions during breathing were spatially homogeneous and could be approximated as a rigid motion mainly encountered in the cranial-caudal direction (f = 0.20 Hz, magnitude >13 mm). Elastic liver motions due to cardiovascular activity, although negligible, were detectable near millimeter-wide suprahepatic veins (f = 0.96 Hz, magnitude <1 mm). The fusion modeling quantified the deleterious effects of respiratory motions on the size and homogeneity of a standard “cigar-shaped” millimetric lesion usually predicted after a 5-second single spherical HIFU exposure in stationary tissues (Dice Similarity Coefficient: DSC <45%). This method assessed the ability to enlarge HIFU ablations during respiration, either by juxtaposing “cigar-shaped” lesions with spherical HIFU exposures, or by generating one large single lesion with toroidal HIFU exposures (DSC >75%). Fusion modeling predictions were preliminarily validated in vivo and showed the potential of using a long-duration toroidal HIFU exposure to accelerate the ablation process during breathing (from 0.5 to 6 cm³·min⁻¹).
To improve HIFU treatment control, dynamic fusion modeling may be interesting for assessing numerically focusing strategies and motion compensation techniques in more realistic conditions. PMID:26398366
The Effect of Electronic Structure on the Phases Present in High Entropy Alloys
Leong, Zhaoyuan; Wróbel, Jan S.; Dudarev, Sergei L.; Goodall, Russell; Todd, Iain; Nguyen-Manh, Duc
2017-01-01
Multicomponent systems, termed High Entropy Alloys (HEAs), with predominantly single solid solution phases are a current area of focus in alloy development. Although different empirical rules have been introduced to understand phase formation and determine what the dominant phases may be in these systems, experimental investigation has revealed that in many cases their structure is not a single solid solution phase, and that the rules may not accurately distinguish the stability of the phase boundaries. Here, a combined modelling and experimental approach that looks into the electronic structure is proposed to improve accuracy of the predictions of the majority phase. To do this, the Rigid Band model is generalised for magnetic systems in prediction of the majority phase most likely to be found. Good agreement is found when the predictions are confronted with data from experiments, including a new magnetic HEA system (CoFeNiV). This also includes predicting the structural transition with varying levels of constituent elements, as a function of the valence electron concentration, n, obtained from the integrated spin-polarised density of states. This method is suitable as a new predictive technique to identify compositions for further screening, in particular for magnetic HEAs. PMID:28059106
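The valence electron concentration that the paper obtains from the integrated spin-polarised density of states has a simple empirical baseline: the concentration-weighted average of elemental valence electron counts. A sketch of that baseline (the elemental counts and the FCC/BCC thresholds are the commonly quoted empirical values, not the paper's computed quantities):

```python
# Valence electron counts per element (values commonly used in HEA work)
VEC = {"Co": 9, "Fe": 8, "Ni": 10, "V": 5, "Cr": 6, "Al": 3}

def mean_vec(composition):
    """Concentration-weighted valence electron concentration: the
    quantity empirical HEA rules use to separate FCC from BCC phases."""
    total = sum(composition.values())
    return sum(VEC[el] * frac / total for el, frac in composition.items())

# Equiatomic CoFeNiV, the magnetic HEA system reported in the paper:
n_cofeniv = mean_vec({"Co": 1, "Fe": 1, "Ni": 1, "V": 1})
# A widely cited empirical rule: VEC >= 8 favors FCC and VEC < 6.87
# favors BCC; the paper refines such rules via the electronic structure.
```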
Nontangent, Developed Contour Bulkheads for a Single-Stage Launch Vehicle
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Lepsch, Roger A., Jr.
2000-01-01
Dry weights for single-stage launch vehicles that incorporate nontangent, developed contour bulkheads are estimated and compared to a baseline vehicle with 1.414-aspect-ratio ellipsoidal bulkheads. Weights, volumes, and heights of optimized bulkhead designs are computed using a preliminary design bulkhead analysis code. The dry weights of vehicles that incorporate the optimized bulkheads are predicted using a vehicle weights and sizing code. Two optimization approaches are employed. A structural-level method, where the vehicle's three major bulkhead regions are optimized separately and then incorporated into a model for computation of the vehicle dry weight, predicts a reduction of 4365 lb (2.2%) from the 200,679-lb baseline vehicle dry weight. In the second, vehicle-level, approach, the vehicle dry weight is the objective function for the optimization. For the vehicle-level analysis, modified bulkhead designs are analyzed and incorporated into the weights model for computation of a dry weight. The optimizer simultaneously manipulates design variables for all three bulkheads to reduce the dry weight. The vehicle-level analysis predicts a dry weight reduction of 5129 lb, a 2.6% reduction from the baseline weight. Based on these results, nontangent, developed contour bulkheads may provide substantial weight savings for single-stage vehicles.
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. 
As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
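The core idea behind these adjustment procedures can be sketched in a few lines: regress observed local storm loads against the regional model's predictions P, then use the fitted line to adjust future regional predictions. The calibration data and the intercept-plus-slope form below are illustrative, not the study's actual coefficients or exact MAP specifications.

```python
# Minimal sketch of a model-adjustment procedure: observed local storm loads
# are regressed against the regional model's predictions P, and the fitted
# line is then used to adjust predictions at unmonitored local sites.
# Data values are made up for illustration.

def fit_single_factor(p_regional, y_observed):
    """Ordinary least squares for y = b0 + b1 * P (closed form)."""
    n = len(p_regional)
    mean_p = sum(p_regional) / n
    mean_y = sum(y_observed) / n
    sxy = sum((p - mean_p) * (y - mean_y)
              for p, y in zip(p_regional, y_observed))
    sxx = sum((p - mean_p) ** 2 for p in p_regional)
    b1 = sxy / sxx
    b0 = mean_y - b1 * mean_p
    return b0, b1

def adjust(p_new, b0, b1):
    """Adjusted local prediction from a regional-model prediction."""
    return b0 + b1 * p_new

# Calibration set: regional predictions vs. observed loads at local sites
P = [2.0, 4.0, 6.0, 8.0]
Y = [3.0, 5.0, 9.0, 11.0]   # local loads run higher than the regional model
b0, b1 = fit_single_factor(P, Y)
adjusted = adjust(5.0, b0, b1)  # adjusted prediction for a new storm
```

A split-sample test, as in the study, would fit the coefficients on one part of the local data and check predictive accuracy on the held-out remainder.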
Geravanchizadeh, Masoud; Fallah, Ali
2015-12-01
A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model the better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. This model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts the intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean-absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
Composite Stress Rupture: A New Reliability Model Based on Strength Decay
NASA Technical Reports Server (NTRS)
Reeder, James R.
2012-01-01
A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model will be shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
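The qualitative behavior described here (strength that decays late in life, plus a reliability gain from proof loading that culls weak vessels) can be illustrated with a small Monte Carlo sketch. The decay law, stress levels, and strength distribution below are invented for illustration and are not the paper's model; in particular, the sketch ignores any strength decay caused by the proof load itself.

```python
# Illustrative Monte Carlo sketch of the strength-decay idea (not the paper's
# actual model): each vessel has a random initial strength that decays slowly
# at first and sharply near end of life; a proof load culls weak vessels,
# raising the reliability of the surviving fleet.
import random

random.seed(42)

def strength(s0, t, life=1.0):
    """Strength vs. time: most decay happens late in life (illustrative law)."""
    return s0 * (1.0 - 0.3 * (t / life) ** 8)

def reliability(stress, t, vessels, proof=None):
    """Fraction of vessels whose strength still exceeds the operating stress,
    optionally after discarding vessels that failed a proof load at t = 0."""
    fleet = [s0 for s0 in vessels if proof is None or s0 >= proof]
    if not fleet:
        return 0.0
    survive = sum(1 for s0 in fleet if strength(s0, t) >= stress)
    return survive / len(fleet)

vessels = [random.gauss(1.2, 0.15) for _ in range(10000)]  # initial strengths
r_no_proof = reliability(stress=1.0, t=0.9, vessels=vessels)
r_proofed = reliability(stress=1.0, t=0.9, vessels=vessels, proof=1.1)
```

With these made-up numbers, proof loading removes the weak tail of the strength distribution, so the surviving fleet's late-life reliability is higher, mirroring the model's prediction of a safe-life period after proof.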
Competition Processes and Proactive Interference in Short-Term Memory
ERIC Educational Resources Information Center
Bennett, Raymond W.; Kurzeja, Paul L.
1976-01-01
In an experiment using single-word items, subjects are run under three different speed-accuracy trade-off conditions. A competition model would predict that when subjects are forced to respond quickly, there will be an increase in errors, and these will be from recent past items. The prediction was confirmed. (CHK)
USDA-ARS?s Scientific Manuscript database
Single-step Genomic Best Linear Unbiased Predictor (ssGBLUP) has become increasingly popular for whole-genome prediction (WGP) modeling as it utilizes any available pedigree and phenotypes on both genotyped and non-genotyped individuals. The WGP accuracy of ssGBLUP has been demonstrated to be greate...
Assessing the effect of different treatments on decomposition rate of dairy manure.
Khalil, Tariq M; Higgins, Stewart S; Ndegwa, Pius M; Frear, Craig S; Stöckle, Claudio O
2016-11-01
Confined animal feeding operations (CAFOs) contribute to greenhouse gas emission, but the magnitude of these emissions as a function of operation size, infrastructure, and manure management are difficult to assess. Modeling is a viable option to estimate gaseous emission and nutrient flows from CAFOs. These models use a decomposition rate constant for carbon mineralization. However, this constant is usually determined assuming a homogenous mix of manure, ignoring the effects of emerging manure treatments. The aim of this study was to measure and compare the decomposition rate constants of dairy manure in single and three-pool decomposition models, and to develop an empirical model based on chemical composition of manure for prediction of a decomposition rate constant. Decomposition rate constants of manure before and after an anaerobic digester (AD), following coarse fiber separation, and fine solids removal were determined under anaerobic conditions for single and three-pool decomposition models. The decomposition rates of treated manure effluents differed significantly from untreated manure for both single and three-pool decomposition models. In the single-pool decomposition model, AD effluent containing only suspended solids had a relatively high decomposition rate of 0.060 d⁻¹, while liquid with coarse fiber and fine solids removed had the lowest rate of 0.013 d⁻¹. In the three-pool decomposition model, fast and slow decomposition rate constants (0.25 d⁻¹ and 0.016 d⁻¹, respectively) of untreated AD influent were also significantly different from treated manure fractions. A regression model to predict the decomposition rate of treated dairy manure fitted well (R² = 0.83) to observed data. Copyright © 2016 Elsevier Ltd. All rights reserved.
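The single-pool model referred to above is first-order decay, C(t) = C0·e^(−kt). A short sketch comparing the two reported single-pool rate constants over 30 days (the initial carbon amount and time horizon are illustrative, not from the study):

```python
# Sketch of a single-pool, first-order decomposition model: degradable carbon
# remaining after t days is C(t) = C0 * exp(-k * t). The rate constants are
# the study's reported single-pool values; C0 and t are illustrative.
import math

def carbon_remaining(c0, k, t):
    """First-order decay: mass of degradable carbon left after t days."""
    return c0 * math.exp(-k * t)

c0 = 100.0                                   # arbitrary initial carbon (g)
fast = carbon_remaining(c0, k=0.060, t=30)   # AD effluent, suspended solids
slow = carbon_remaining(c0, k=0.013, t=30)   # liquid, fiber + fines removed
```

A three-pool model extends this to a weighted sum of pools, C(t) = Σᵢ fᵢ·C0·e^(−kᵢt), with separate fast, slow, and recalcitrant rate constants.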
NASA Technical Reports Server (NTRS)
Hoang, Triem T.; O'Connell, Tamara; Ku, Jentung
2004-01-01
Loop Heat Pipes (LHPs) have proven themselves as reliable and robust heat transport devices for spacecraft thermal control systems. So far, the LHPs in earth-orbiting satellites have performed very well, as expected. Conventional LHPs usually consist of a single capillary pump for heat acquisition and a single condenser for heat rejection. Multiple pump/multiple condenser LHPs have been shown to function very well in ground testing. Nevertheless, the test results of a dual pump/condenser LHP also revealed that the dual LHP behaved in a complicated manner due to the interaction between the pumps and condensers. It goes without saying, then, that more research is needed before they are ready for 0-g deployment. One research area that perhaps compels immediate attention is the analytical modeling of LHPs, particularly of transient phenomena. Modeling a single pump/single condenser LHP is difficult enough. Only a handful of computer codes are available for both steady-state and transient simulations of conventional LHPs. No previous effort was made to develop an analytical model (or even a complete theory) to predict the operational behavior of multiple pump/multiple condenser LHP systems. The current research project offered a basic theory of multiple pump/multiple condenser LHP operation. From it, a computer code was developed to predict the LHP saturation temperature in accordance with the system operating and environmental conditions.
Comparing theories' performance in predicting violence.
Haas, Henriette; Cusson, Maurice
2015-01-01
The stakes of choosing the best theory as a basis for violence prevention and offender rehabilitation are high. However, no single theory of violence has ever been universally accepted by a majority of established researchers. Psychiatry, psychology and sociology are each subdivided into different schools relying upon different premises. All theories can produce empirical evidence for their validity, some of them stating the opposite of each other. Calculating different models with multivariate logistic regression on a dataset of N = 21,312 observations and ninety-two influences allowed a direct comparison of the performance of operationalizations of some of the most important schools. The psychopathology model ranked as the best model in terms of predicting violence, right after the comprehensive interdisciplinary model. Next came the rational choice and lifestyle model, and third the differential association and learning theory model. Other models, namely the control theory model, the childhood-trauma model, and the social conflict and reaction model, turned out to have low sensitivities for predicting violence. Nevertheless, all models produced acceptable results in predictions of a non-violent outcome. Copyright © 2015. Published by Elsevier Ltd.
Muhlestein, Whitney E; Akagi, Dallin S; Kallos, Justiss A; Morone, Peter J; Weaver, Kyle D; Thompson, Reid C; Chambless, Lola B
2018-04-01
Objective: Machine learning (ML) algorithms are powerful tools for predicting patient outcomes. This study pilots a novel approach to algorithm selection and model creation using prediction of discharge disposition following meningioma resection as a proof of concept. Materials and Methods: A diversity of ML algorithms were trained on a single-institution database of meningioma patients to predict discharge disposition. Algorithms were ranked by predictive power and top performers were combined to create an ensemble model. The final ensemble was internally validated on never-before-seen data to demonstrate generalizability. The predictive power of the ensemble was compared with a logistic regression. Further analyses were performed to identify how important variables impact the ensemble. Results: Our ensemble model predicted disposition significantly better than a logistic regression (area under the curve of 0.78 and 0.71, respectively, p = 0.01). Tumor size, presentation at the emergency department, body mass index, convexity location, and preoperative motor deficit most strongly influence the model, though the independent impact of individual variables is nuanced. Conclusion: Using a novel ML technique, we built a guided ML ensemble model that predicts discharge destination following meningioma resection with greater predictive power than a logistic regression, and that provides greater clinical insight than a univariate analysis. These techniques can be extended to predict many other patient outcomes of interest.
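The ensemble idea (rank base models, combine the top performers, compare by area under the ROC curve) can be sketched with a toy example. The probability scores, labels, and equal-weight averaging below are made up for illustration; the paper's actual algorithms, weighting, and data are not reproduced.

```python
# Hedged sketch of a probability-averaging ensemble evaluated by AUC.
# Each base model misranks a different positive/negative pair; averaging
# their scores can repair both errors. All numbers are illustrative.

def auc(labels, scores):
    """Area under the ROC curve via pairwise ordering: the probability that
    a random positive outscores a random negative (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]
model_a = [0.9, 0.3, 0.4, 0.1]   # misranks the second positive
model_b = [0.2, 0.8, 0.6, 0.1]   # misranks the first positive
ensemble = [(a + b) / 2 for a, b in zip(model_a, model_b)]

auc_a = auc(labels, model_a)
auc_b = auc(labels, model_b)
auc_e = auc(labels, ensemble)
```

Here each base model reaches AUC 0.75 while the averaged ensemble ranks all pairs correctly, illustrating why combining complementary models can beat any single one.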
Bouwhuis, Stef; Geuskens, Goedele A; Boot, Cécile R L; Bongers, Paulien M; van der Beek, Allard J
2017-08-01
To construct prediction models for transitions to combination multiple job holding (MJH) (multiple jobs as an employee) and hybrid MJH (being an employee and self-employed), among employees aged 45-64. A total of 5187 employees in the Netherlands completed online questionnaires annually between 2010 and 2013. We applied logistic regression analyses with a backward elimination strategy to construct prediction models. Transitions to combination MJH and hybrid MJH were best predicted by a combination of factors including: demographics, health and mastery, work characteristics, work history, skills and knowledge, social factors, and financial factors. Not having a permanent contract and a poor household financial situation predicted both transitions. Some predictors only predicted combination MJH, e.g., working part-time, or hybrid MJH, e.g., work-home interference. A wide variety of factors predict combination MJH and/or hybrid MJH. The prediction model approach allowed for the identification of predictors that have not been previously studied. © 2017 Wiley Periodicals, Inc.
Application of single-step genomic evaluation for crossbred performance in pig.
Xiang, T; Nielsen, B; Su, G; Legarra, A; Christensen, O F
2016-03-01
Crossbreeding is predominant and intensively used in commercial meat production systems, especially in poultry and swine. Genomic evaluation has been successfully applied for breeding within purebreds, but it also offers opportunities for selecting purebreds for crossbred performance by combining information from purebreds with information from crossbreds. However, it generally requires that all relevant animals are genotyped, which is costly and presently does not seem to be feasible in practice. Recently, a novel single-step BLUP method for genomic evaluation of both purebred and crossbred performance has been developed that can incorporate marker genotypes into a traditional animal model. This new method has not been validated in real data sets. In this study, we applied this single-step method to analyze data for the maternal trait of total number of piglets born in Danish Landrace, Yorkshire, and two-way crossbred pigs in different scenarios. The genetic correlation between purebred and crossbred performances was investigated first, and then the impact of (crossbred) genomic information on prediction reliability for crossbred performance was explored. The results confirm the existence of a moderate genetic correlation, and the standard errors of the estimates were reduced when including genomic information. Models with marker information, especially crossbred genomic information, improved model-based reliabilities for crossbred performance of purebred boars, improved the predictive ability for crossbred animals and, to some extent, reduced the bias of prediction. We conclude that the new single-step BLUP method is a good tool in the genetic evaluation for crossbred performance in purebred animals.
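For context, single-step methods rest on a combined pedigree-genomic relationship matrix H. In the standard ssGBLUP formulation (a well-known result in this literature, not spelled out in this abstract), its inverse is

```latex
\mathbf{H}^{-1} = \mathbf{A}^{-1} +
\begin{bmatrix}
\mathbf{0} & \mathbf{0} \\
\mathbf{0} & \mathbf{G}^{-1} - \mathbf{A}_{22}^{-1}
\end{bmatrix}
```

where A is the pedigree-based relationship matrix for all animals, G is the genomic relationship matrix for the genotyped animals, and A₂₂ is the pedigree block for those same genotyped animals. This is what lets marker genotypes enter a traditional animal model without requiring every animal to be genotyped.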
Nur, N.; Jahncke, J.; Herzog, M.P.; Howar, J.; Hyrenbach, K.D.; Zamon, J.E.; Ainley, D.G.; Wiens, J.A.; Morgan, K.; Balance, L.T.; Stralberg, D.
2011-01-01
Marine Protected Areas (MPAs) provide an important tool for conservation of marine ecosystems. To be most effective, these areas should be strategically located in a manner that supports ecosystem function. To inform marine spatial planning and support strategic establishment of MPAs within the California Current System, we identified areas predicted to support multispecies aggregations of seabirds ("hotspots"). We developed habitat-association models for 16 species using information from at-sea observations collected over an 11-year period (1997-2008), bathymetric data, and remotely sensed oceanographic data for an area from north of Vancouver Island, Canada, to the USA/Mexico border and seaward 600 km from the coast. This approach enabled us to predict distribution and abundance of seabirds even in areas of few or no surveys. We developed single-species predictive models using a machine-learning algorithm: bagged decision trees. Single-species predictions were then combined to identify potential hotspots of seabird aggregation, using three criteria: (1) overall abundance among species, (2) importance of specific areas ("core areas") to individual species, and (3) predicted persistence of hotspots across years. Model predictions were applied to the entire California Current for four seasons (represented by February, May, July, and October) in each of 11 years. Overall, bathymetric variables were often important predictive variables, whereas oceanographic variables derived from remotely sensed data were generally less important. Predicted hotspots often aligned with currently protected areas (e.g., National Marine Sanctuaries), but we also identified potential hotspots in Northern California/Southern Oregon (from Cape Mendocino to Heceta Bank), Southern California (adjacent to the Channel Islands), and adjacent to Vancouver Island, British Columbia, that are not currently included in protected areas. 
Prioritization and identification of multispecies hotspots will depend on which group of species is of highest management priority. Modeling hotspots at a broad spatial scale can contribute to MPA site selection, particularly if complemented by fine-scale information for focal areas. © 2011 by the Ecological Society of America.
Multiscale Modeling of PEEK Using Reactive Molecular Dynamics Modeling and Micromechanics
NASA Technical Reports Server (NTRS)
Pisani, William A.; Radue, Matthew; Chinkanjanarot, Sorayot; Bednarcyk, Brett A.; Pineda, Evan J.; King, Julia A.; Odegard, Gregory M.
2018-01-01
Polyether ether ketone (PEEK) is a high-performance, semi-crystalline thermoplastic that is used in a wide range of engineering applications, including some structural components of aircraft. The design of new PEEK-based materials requires a precise understanding of the multiscale structure and behavior of semi-crystalline PEEK. Molecular Dynamics (MD) modeling can efficiently predict bulk-level properties of single phase polymers, and micromechanics can be used to homogenize those phases based on the overall polymer microstructure. In this study, MD modeling was used to predict the mechanical properties of the amorphous and crystalline phases of PEEK. The hierarchical microstructure of PEEK, which combines the aforementioned phases, was modeled using a multiscale modeling approach facilitated by NASA's MSGMC. The bulk mechanical properties of semi-crystalline PEEK predicted using MD modeling and MSGMC agree well with vendor data, thus validating the multiscale modeling approach.
Bayesian Integration of Information in Hippocampal Place Cells
Madl, Tamas; Franklin, Stan; Chen, Ke; Montaldi, Daniela; Trappl, Robert
2014-01-01
Accurate spatial localization requires a mechanism that corrects for errors, which might arise from inaccurate sensory information or neuronal noise. In this paper, we propose that Hippocampal place cells might implement such an error correction mechanism by integrating different sources of information in an approximately Bayes-optimal fashion. We compare the predictions of our model with physiological data from rats. Our results suggest that useful predictions regarding the firing fields of place cells can be made based on a single underlying principle, Bayesian cue integration, and that such predictions are possible using a remarkably small number of model parameters. PMID:24603429
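The single principle invoked above, Bayesian cue integration, has a simple closed form for two independent Gaussian cues: the combined estimate weights each cue by its inverse variance (its reliability). A minimal sketch with illustrative numbers (the cue values are not from the paper):

```python
# Bayes-optimal fusion of two independent Gaussian cues: the posterior mean
# is the precision-weighted average, and the posterior variance is smaller
# than either cue's variance. Cue values below are illustrative.

def integrate(mu1, var1, mu2, var2):
    """Posterior mean and variance for two independent Gaussian cues."""
    w1 = 1.0 / var1          # precision (reliability) of cue 1
    w2 = 1.0 / var2          # precision (reliability) of cue 2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# A precise cue (e.g., a visual landmark) says 10 cm; a noisier cue (e.g.,
# path integration) says 14 cm. The fused estimate sits nearer the precise
# cue, and its uncertainty is lower than either cue alone.
mu, var = integrate(10.0, 1.0, 14.0, 4.0)
```

This is the error-correcting behavior the model attributes to place cells: a fused position estimate that is both shifted toward the more reliable cue and less uncertain than any single source.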
A comparison between the observed and predicted Fe II spectrum in different plasmas
NASA Astrophysics Data System (ADS)
Johansson, S.
This paper gives a survey of the spectral distribution of emission lines of Fe II, predicted from a single atomic model. The observed differences between the recorded and the predicted spectrum are discussed in terms of deficiencies of the model and interactions within the emitting plasma. A number of illustrative examples of unexpected features with applications to astrophysics are given. Selective population, due to charge transfer and resonant photoexcitation, is elucidated. The future need for more laboratory data for Fe II as regards energy levels and line classification is also discussed.
Human Immunity and the Design of Multi-Component, Single Target Vaccines
Saul, Allan; Fay, Michael P.
2007-01-01
Background Inclusion of multiple immunogens to target a single organism is a strategy being pursued for many experimental vaccines, especially where it is difficult to generate a strongly protective response from a single immunogen. Although there are many human vaccines that contain multiple defined immunogens, in almost every case each component targets a different pathogen. As a consequence, there is little practical experience for deciding where the increased complexity of vaccines with multiple defined immunogens targeting a single pathogen will be justifiable. Methodology/Principal Findings A mathematical model, with immunogenicity parameters derived from a database of human responses to established vaccines, was used to predict the increase in the efficacy and the proportion of the population protected resulting from addition of further immunogens. The gains depended on the relative protection and the range of responses in the population to each immunogen and also on the correlation of the responses between immunogens. In most scenarios modeled, the gain in overall efficacy obtained by adding more immunogens was comparable to gains obtained from a single immunogen through the use of better formulations or adjuvants. Multi-component single target vaccines were more effective at decreasing the proportion of poor responders than at increasing the overall efficacy of the vaccine in a population. Conclusions/Significance Inclusion of a limited number of antigens in a vaccine aimed at targeting a single organism will increase efficacy, but the gains are relatively modest, and for a practical vaccine there are constraints that are likely to limit multi-component single target vaccines to a small number of key antigens. The model predicts that this type of vaccine will be most useful where the critical issue is the reduction in the proportion of poor responders. PMID:17786221
Predicting survival across chronic interstitial lung disease: the ILD-GAP model.
Ryerson, Christopher J; Vittinghoff, Eric; Ley, Brett; Lee, Joyce S; Mooney, Joshua J; Jones, Kirk D; Elicker, Brett M; Wolters, Paul J; Koth, Laura L; King, Talmadge E; Collard, Harold R
2014-04-01
Risk prediction is challenging in chronic interstitial lung disease (ILD) because of heterogeneity in disease-specific and patient-specific variables. Our objective was to determine whether mortality is accurately predicted in patients with chronic ILD using the GAP model, a clinical prediction model based on sex, age, and lung physiology, that was previously validated in patients with idiopathic pulmonary fibrosis. Patients with idiopathic pulmonary fibrosis (n=307), chronic hypersensitivity pneumonitis (n=206), connective tissue disease-associated ILD (n=281), idiopathic nonspecific interstitial pneumonia (n=45), or unclassifiable ILD (n=173) were selected from an ongoing database (N=1,012). Performance of the previously validated GAP model was compared with novel prediction models in each ILD subtype and the combined cohort. Patients with follow-up pulmonary function data were used for longitudinal model validation. The GAP model had good performance in all ILD subtypes (c-index, 74.6 in the combined cohort), which was maintained at all stages of disease severity and during follow-up evaluation. The GAP model had similar performance compared with alternative prediction models. A modified ILD-GAP Index was developed for application across all ILD subtypes to provide disease-specific survival estimates using a single risk prediction model. This was done by adding a disease subtype variable that accounted for better adjusted survival in connective tissue disease-associated ILD, chronic hypersensitivity pneumonitis, and idiopathic nonspecific interstitial pneumonia. The GAP model accurately predicts risk of death in chronic ILD. The ILD-GAP model accurately predicts mortality in major chronic ILD subtypes and at all stages of disease.
Huang, Kuan-Chun; White, Ryan J
2013-08-28
We develop a random walk model to simulate the Brownian motion and the electrochemical response of a single molecule confined to an electrode surface via a flexible molecular tether. We use our simple model, which requires no prior knowledge of the physics of the molecular tether, to predict and better understand the voltammetric response of surface-confined redox molecules when motion of the redox molecule becomes important. The single molecule is confined to a hemispherical volume with a maximum radius determined by the flexible molecular tether (5-20 nm) and is allowed to undergo true three-dimensional diffusion. Distance- and potential-dependent electron transfer probabilities are evaluated throughout the simulations to generate cyclic voltammograms of the model system. We find that at sufficiently slow cyclic voltammetric scan rates the electrochemical reaction behaves like an adsorbed redox molecule with no mass transfer limitation; thus, the peak current is proportional to the scan rate. Conversely, at faster scan rates the diffusional motion of the molecule limits the simulated peak current, which exhibits a linear dependence on the square root of the scan rate. The switch between these two limiting regimes occurs when the diffusion layer thickness, (2Dt)^(1/2), is ~10 times the tether length. Finally, we find that our model predicts the voltammetric behavior of a redox-active methylene blue tethered to an electrode surface via short flexible single-stranded, polythymine DNAs, allowing the estimation of diffusion coefficients for the end-tethered molecule.
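The confined-diffusion part of such a simulation can be sketched as a random walk whose steps are rejected whenever they would leave the tether-defined hemisphere above the electrode. The step size, radius, and rejection rule below are illustrative; the paper's distance- and potential-dependent electron-transfer probabilities are not modeled here.

```python
# Rough sketch of tethered confined diffusion: a molecule takes Gaussian
# random steps in 3D but stays put whenever a step would leave the
# hemisphere of radius R above the electrode plane (z >= 0).
import math
import random

random.seed(7)

def walk(n_steps, radius, step=1.0):
    """Return the trajectory of a tethered random walk (rejection at bounds)."""
    x = y = z = 0.0
    path = [(x, y, z)]
    for _ in range(n_steps):
        dx, dy, dz = (random.gauss(0, step) for _ in range(3))
        nx, ny, nz = x + dx, y + dy, z + dz
        # accept only moves inside the hemisphere: |r| <= R and z >= 0
        if nz >= 0 and math.sqrt(nx * nx + ny * ny + nz * nz) <= radius:
            x, y, z = nx, ny, nz
        path.append((x, y, z))
    return path

path = walk(2000, radius=10.0)
max_r = max(math.sqrt(x * x + y * y + z * z) for x, y, z in path)
```

A full voltammetry simulation would additionally evaluate an electron-transfer probability at each step as a function of the molecule's distance from the electrode and the applied potential.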
Composite Overwrapped Pressure Vessel (COPV) Stress Rupture Testing
NASA Technical Reports Server (NTRS)
Greene, Nathanael J.; Saulsberry, Regor L.; Leifeste, Mark R.; Yoder, Tommy B.; Keddy, Chris P.; Forth, Scott C.; Russell, Rick W.
2010-01-01
This paper reports stress rupture testing of Kevlar™ composite overwrapped pressure vessels (COPVs) at NASA White Sands Test Facility. This 6-year test program was part of the larger effort to predict and extend the lifetime of flight vessels. Tests were performed to characterize control parameters for stress rupture testing, and vessel life was predicted by statistical modeling. One highly instrumented 102-cm (40-in.) diameter Kevlar™ COPV was tested to failure (burst) as a single-point model verification. Significant data were generated that will enhance development of improved NDE methods and predictive modeling techniques, and thus better address stress rupture and other composite durability concerns that affect pressure vessel safety, reliability and mission assurance.
Effects of complex life cycles on genetic diversity: cyclical parthenogenesis
Rouger, R; Reichel, K; Malrieu, F; Masson, J P; Stoeckel, S
2016-01-01
Neutral patterns of population genetic diversity in species with complex life cycles are difficult to anticipate. Cyclical parthenogenesis (CP), in which organisms undergo several rounds of clonal reproduction followed by a sexual event, is one such life cycle. Many species, including crop pests (aphids), human parasites (trematodes) or models used in evolutionary science (Daphnia), are cyclical parthenogens. It is therefore crucial to understand the impact of such a life cycle on neutral genetic diversity. In this paper, we describe distributions of genetic diversity under conditions of CP with various clonal phase lengths. Using a Markov chain model of CP for a single locus and individual-based simulations for two loci, our analysis first demonstrates that strong departures from full sexuality are observed after only a few generations of clonality. The convergence towards predictions made under conditions of full clonality during the clonal phase depends on the balance between mutations and genetic drift. Second, the sexual event of CP usually resets the genetic diversity at a single locus towards predictions made under full sexuality. However, this single recombination event is insufficient to reshuffle gametic phases towards full-sexuality predictions. Finally, for similar levels of clonality, CP and acyclic partial clonality (wherein a fixed proportion of individuals are clonally produced within each generation) differentially affect the distribution of genetic diversity. Overall, this work provides solid predictions of neutral genetic diversity that may serve as a null model in detecting the action of common evolutionary or demographic processes in cyclical parthenogens (for example, selection or bottlenecks). PMID:27436524
Predicting Trihalomethanes (THMs) in the New York City Water Supply
NASA Astrophysics Data System (ADS)
Mukundan, R.; Van Dreason, R.
2013-12-01
Chlorine, a commonly used disinfectant in most water supply systems, can combine with organic carbon to form disinfectant byproducts including carcinogenic trihalomethanes (THMs). We used water quality data from 24 monitoring sites within the New York City (NYC) water supply distribution system, measured between January 2009 and April 2012, to develop site-specific empirical models for predicting total trihalomethane (TTHM) levels. Terms in the model included various combinations of the following water quality parameters: total organic carbon, pH, specific conductivity, and water temperature. Reasonable estimates of TTHM levels were achieved with overall R2 of about 0.87 and predicted values within 5 μg/L of measured values. The relative importance of factors affecting TTHM formation was estimated by ranking the model regression coefficients. Site-specific models showed improved model performance statistics compared to a single model for the entire system most likely because the single model did not consider locational differences in the water treatment process. Although never out of compliance in 2011, the TTHM levels in the water supply increased following tropical storms Irene and Lee with 45% of the samples exceeding the 80 μg/L Maximum Contaminant Level (MCL) in October and November. This increase was explained by changes in water quality parameters, particularly by the increase in total organic carbon concentration and pH during this period.
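The site-specific empirical models described above are multiple linear regressions of TTHM on water quality parameters. A minimal sketch of that approach follows; the data are synthetic and the coefficients are invented for illustration, not the NYC system's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
# hypothetical predictors: TOC (mg/L), pH, specific conductivity (uS/cm),
# and water temperature (deg C) -- ranges are illustrative only
toc  = rng.uniform(1.0, 4.0, n)
ph   = rng.uniform(6.5, 8.5, n)
cond = rng.uniform(50.0, 300.0, n)
temp = rng.uniform(2.0, 25.0, n)
# synthetic "true" TTHM relationship with noise (invented coefficients)
tthm = 5 + 12 * toc + 3 * ph + 0.02 * cond + 0.8 * temp + rng.normal(0, 2, n)

# ordinary least squares fit of TTHM on the four parameters
X = np.column_stack([np.ones(n), toc, ph, cond, temp])
beta, *_ = np.linalg.lstsq(X, tthm, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((tthm - pred) ** 2) / np.sum((tthm - np.mean(tthm)) ** 2)
print(beta.round(2), round(r2, 3))
```

Ranking the standardized regression coefficients, as the study does, would then indicate the relative importance of each water quality parameter for TTHM formation.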
Predicting the Toxicity of Adjuvant Breast Cancer Drug Combination Therapy
2012-09-01
diarrhea and interstitial lung disease/pneumonitis. From largest to smallest, our multiple dose (1,250 mg q24 h) model-predicted ratios of lapatinib...1977) A model for the kinetics of distribution of actinomycin-D in the beagle dog . J Pharmacol Exp Ther 200(3):469–478 31. Collins JM, Dedrick RL, King...a single agent for various tumor types (n = 2045), nausea (39%), diarrhea (39%) and vomiting (22%) were observed; other gastrointestinal events
Predicting bifurcation angle effect on blood flow in the microvasculature.
Yang, Jiho; Pak, Y Eugene; Lee, Tae-Rin
2016-11-01
Since blood viscosity is a basic parameter for understanding hemodynamics in human physiology, a great amount of research has been done to accurately predict this highly non-Newtonian flow property. However, previous work has not considered the hemodynamic changes induced by heterogeneous vessel networks. In this paper, the effect of bifurcation on hemodynamics in the microvasculature is quantitatively predicted. The flow resistance in a single bifurcating microvessel was calculated by combining a new simple mathematical model with 3-dimensional flow simulation for varying bifurcation angles under physiological flow conditions. Interestingly, the results indicate that the flow resistance induced by vessel bifurcation holds a constant value of approximately 0.44 over the whole single-bifurcation model for vessel diameters below 60 μm, regardless of geometric parameters including the bifurcation angle. Flow solutions computed from this new model showed a substantial decrement in flow velocity relative to other mathematical models that do not include vessel bifurcation effects, while pressure remained the same. Furthermore, when the bifurcation angle effect was applied to the entire microvascular network, the simulation results gave better agreement with recent in vivo experimental measurements. This finding suggests a new paradigm in microvascular blood flow properties: vessel bifurcation itself, regardless of its angle, holds considerable influence on blood viscosity, and this phenomenon will help to develop new predictive tools in microvascular research. Copyright © 2016 Elsevier Inc. All rights reserved.
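One way to picture the finding above is to treat a branch segment as a Poiseuille resistor with an added, angle-independent bifurcation term. This is a hedged sketch: applying the ~0.44 value as a fractional increase on the segment's Poiseuille resistance is our simplifying assumption, not necessarily the authors' exact formulation.

```python
import math

def poiseuille_resistance(length_m, diameter_m, mu=3.5e-3):
    """Hydraulic resistance of a straight vessel segment under
    Poiseuille flow: R = 8 * mu * L / (pi * r^4)."""
    r = diameter_m / 2.0
    return 8.0 * mu * length_m / (math.pi * r ** 4)

# Constant, angle-independent bifurcation resistance (~0.44 per the
# abstract); applied here as a fractional increase -- an assumption.
BIFURCATION_FACTOR = 0.44

def branch_resistance(length_m, diameter_m):
    """Segment resistance including the bifurcation contribution."""
    base = poiseuille_resistance(length_m, diameter_m)
    return base * (1.0 + BIFURCATION_FACTOR)

# hypothetical arteriole: 1 mm long, 50 um diameter
base = poiseuille_resistance(1e-3, 50e-6)
with_bif = branch_resistance(1e-3, 50e-6)
print(round(with_bif / base, 2))  # → 1.44
```

At fixed pressure drop, the extra resistance lowers the computed flow velocity, consistent with the decrement the abstract reports.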
Some Memories are Odder than Others: Judgments of Episodic Oddity Violate Known Decision Rules
O’Connor, Akira R.; Guhl, Emily N.; Cox, Justin C.; Dobbins, Ian G.
2011-01-01
Current decision models of recognition memory are based almost entirely on one paradigm, single item old/new judgments accompanied by confidence ratings. This task results in receiver operating characteristics (ROCs) that are well fit by both signal-detection and dual-process models. Here we examine an entirely new recognition task, the judgment of episodic oddity, whereby participants select the mnemonically odd members of triplets (e.g., a new item hidden among two studied items). Using the only two known signal-detection rules of oddity judgment derived from the sensory perception literature, the unequal variance signal-detection model predicted that an old item among two new items would be easier to discover than a new item among two old items. In contrast, four separate empirical studies demonstrated the reverse pattern: triplets with two old items were the easiest to resolve. This finding was anticipated by the dual-process approach as the presence of two old items affords the greatest opportunity for recollection. Furthermore, a bootstrap-fed Monte Carlo procedure using two independent datasets demonstrated that the dual-process parameters typically observed during single item recognition correctly predict the current oddity findings, whereas unequal variance signal-detection parameters do not. Episodic oddity judgments represent a case where dual- and single-process predictions qualitatively diverge and the findings demonstrate that novelty is “odder” than familiarity. PMID:22833695
Aldred, J R; Darling, E; Morrison, G; Siegel, J; Corsi, R L
2016-06-01
This study involved the development of a model for evaluating the potential costs and benefits of ozone control by activated carbon filtration in single-family homes. The modeling effort included the prediction of indoor ozone with and without activated carbon filtration in the HVAC system. As one application, the model was used to predict benefit-to-cost ratios for single-family homes in 12 American cities in five different climate zones. Health benefits were evaluated using disability-adjusted life-years and included city-specific age demographics for each simulation. Costs of commercially available activated carbon filters included capital cost differences when compared to conventional HVAC filters of similar particle removal efficiency, energy penalties due to additional pressure drop, and regional utility rates. The average indoor ozone removal effectiveness ranged from 4 to 20% across the 12 target cities and was largely limited by HVAC system operation time. For the parameters selected in this study, the mean predicted benefit-to-cost ratios for 1-inch filters were >1.0 in 10 of the 12 cities. The benefits of residential activated carbon filters were greatest in cities with high seasonal ozone and HVAC usage, suggesting the importance of targeting such conditions for activated carbon filter applications. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Rivas, Elena; Lang, Raymond; Eddy, Sean R
2012-02-01
The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
Simultaneous prediction of binding free energy and specificity for PDZ domain-peptide interactions
NASA Astrophysics Data System (ADS)
Crivelli, Joseph J.; Lemmon, Gordon; Kaufmann, Kristian W.; Meiler, Jens
2013-12-01
Interactions between protein domains and linear peptides underlie many biological processes. Among these interactions, the recognition of C-terminal peptides by PDZ domains is one of the most ubiquitous. In this work, we present a mathematical model for PDZ domain-peptide interactions capable of predicting both affinity and specificity of binding based on X-ray crystal structures and comparative modeling with Rosetta. We developed our mathematical model using a large phage display dataset describing binding specificity for a wild type PDZ domain and 91 single mutants, as well as binding affinity data for a wild type PDZ domain binding to 28 different peptides. Structural refinement was carried out through several Rosetta protocols, the most accurate of which included flexible peptide docking and several iterations of side chain repacking and backbone minimization. Our findings emphasize the importance of backbone flexibility and the energetic contributions of side chain-side chain hydrogen bonds in accurately predicting interactions. We also determined that predicting PDZ domain-peptide interactions became increasingly challenging as the length of the peptide increased in the N-terminal direction. In the training dataset, predicted binding energies correlated with those derived through calorimetry and specificity switches introduced through single mutations at interface positions were recapitulated. In independent tests, our best performing protocol was capable of predicting dissociation constants well within one order of magnitude of the experimental values and specificity profiles at the level of accuracy of previous studies. To our knowledge, this approach represents the first integrated protocol for predicting both affinity and specificity for PDZ domain-peptide interactions.
Rossa, Carlos; Sloboda, Ron; Usmani, Nawaid; Tavakoli, Mahdi
2016-07-01
This paper proposes a method to predict the deflection of a flexible needle inserted into soft tissue based on the observation of deflection at a single point along the needle shaft. We model the needle-tissue system as a discretized structure composed of several virtual, weightless, rigid links connected by virtual helical springs whose stiffness coefficient is found using a pattern search algorithm that only requires the force applied at the needle tip during insertion and the needle deflection measured at an arbitrary insertion depth. Needle tip deflections can then be predicted for different insertion depths. Verification of the proposed method in synthetic and biological tissue shows a deflection estimation error within 2 mm for images acquired at 35% or more of the maximum insertion depth, decreasing to 1 mm for images acquired closer to the final insertion depth. We also demonstrate the utility of the model for prostate brachytherapy, where in vivo needle deflection measurements obtained during early stages of insertion are used to predict the needle deflection further along the insertion process. The method can predict needle deflection based on the observation of deflection at a single point. The ultrasound probe can be maintained at the same position during insertion of the needle, which avoids complications of tissue deformation caused by the motion of the ultrasound probe.
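The rigid-link, helical-spring structure described above can be sketched in a few lines. This is a small-angle illustration only: the joint stiffness is a fixed invented value rather than one calibrated by the paper's pattern search, and all geometry and loads are hypothetical.

```python
import math

def needle_tip_deflection(n_links=10, link_len=0.01, k=5.0, tip_force=1.0):
    """Small-angle sketch of a needle as rigid links joined by torsional
    (helical) springs of stiffness k (N*m/rad). Each joint i rotates by
    theta_i = M_i / k, where M_i is the moment of the tip force about
    joint i (moment arm = distance from joint to tip). The tip
    deflection accumulates link by link.
    """
    deflection = 0.0
    cumulative_angle = 0.0
    for i in range(n_links):
        arm = (n_links - i) * link_len          # joint i to needle tip (m)
        cumulative_angle += tip_force * arm / k  # joint rotation (rad)
        deflection += link_len * math.sin(cumulative_angle)
    return deflection

tip_d = needle_tip_deflection()
print(round(tip_d, 4))
```

In the paper's method, k would instead be identified from a single mid-insertion deflection observation, after which the same forward model predicts the tip path at deeper insertion depths.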
2016-01-01
Muscle contractions are generated by cyclical interactions of myosin heads with actin filaments to form the actomyosin complex. To simulate actomyosin complex stable states, mathematical models usually define an energy landscape with a corresponding number of wells. The jumps between these wells are defined through rate constants. Almost all previous models assign these wells an infinite sharpness by imposing a relatively simple expression for the detailed balance, i.e., the ratio of the rate constants depends exponentially on the sole myosin elastic energy. Physically, this assumption corresponds to neglecting thermal fluctuations in the actomyosin complex stable states. By comparing three mathematical models, we examine the extent to which this hypothesis affects muscle model predictions at the single cross-bridge, single fiber, and organ levels in a ceteris paribus analysis. We show that including fluctuations in stable states allows the lever arm of the myosin to easily and dynamically explore all possible minima in the energy landscape, generating several backward and forward jumps between states during the lifetime of the actomyosin complex, whereas the infinitely sharp minima case is characterized by fewer jumps between states. Moreover, the analysis predicts that thermal fluctuations enable a more efficient contraction mechanism, in which a higher force is sustained by fewer attached cross-bridges. PMID:27626630
Threshold models for genome-enabled prediction of ordinal categorical traits in plant breeding.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo; Eskridge, Kent; Crossa, José
2014-12-23
Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9-14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. Copyright © 2015 Montesinos-López et al.
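The core of the threshold model above is the mapping from a hypothetical underlying (liability) scale to the outward ordinal category. A minimal sketch of that mapping follows; the thresholds and liability values are invented for illustration.

```python
import numpy as np

def liability_to_score(liability, thresholds):
    """Map a latent liability value to an ordinal category 1..K.

    thresholds: sorted cut points t_1 < ... < t_{K-1}; category k is
    assigned when t_{k-1} < liability <= t_k (t_0 = -inf, t_K = +inf).
    """
    return int(np.searchsorted(thresholds, liability)) + 1

# hypothetical cut points separating five disease scores (1 = no disease,
# 5 = complete infection)
t = [-1.5, -0.5, 0.5, 1.5]
scores = [liability_to_score(x, t) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]
print(scores)  # → [1, 2, 3, 4, 5]
```

In TGBLUP, the liability itself would be modeled as a linear function of genomic effects (plus G×E terms), with the thresholds estimated jointly in the Bayesian hierarchy.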
Models of Recognition, Repetition Priming, and Fluency : Exploring a New Framework
ERIC Educational Resources Information Center
Berry, Christopher J.; Shanks, David R.; Speekenbrink, Maarten; Henson, Richard N. A.
2012-01-01
We present a new modeling framework for recognition memory and repetition priming based on signal detection theory. We use this framework to specify and test the predictions of 4 models: (a) a single-system (SS) model, in which one continuous memory signal drives recognition and priming; (b) a multiple-systems-1 (MS1) model, in which completely…
Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K
2015-04-01
Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype by environment interaction and a polygenic inheritance complicate the application of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.
Modeling student success in engineering education
NASA Astrophysics Data System (ADS)
Jin, Qu
In order for the United States to maintain its global competitiveness, the long-term success of our engineering students in specific courses, programs, and colleges is now, more than ever, an extremely high priority. Numerous studies have focused on factors that impact student success, namely academic performance, retention, and/or graduation. However, there are only a limited number of works that have systematically developed models to investigate important factors and to predict student success in engineering. Therefore, this research presents three separate but highly connected investigations to address this gap. The first investigation involves explaining and predicting engineering students' success in Calculus I courses using statistical models. The participants were more than 4000 first-year engineering students (cohort years 2004 - 2008) who enrolled in Calculus I courses during the first semester in a large Midwestern university. Predictions from the statistical models were proposed as a means of placing engineering students into calculus courses. Success rates in Calculus IA improved by 12% when students were placed using predictions from the developed models rather than the traditional placement method. The results showed that these statistical models provided a more accurate calculus placement method than traditional placement methods and helped improve success rates in those courses. In the second investigation, multi-outcome and single-outcome neural network models were designed to understand and to predict first-year retention and first-year GPA of engineering students. The participants were more than 3000 first-year engineering students (cohort years 2004 - 2005) enrolled in a large Midwestern university. The independent variables include both high school academic performance factors and affective factors measured prior to entry. The prediction performances of the multi-outcome and single-outcome models were comparable.
Both models predicted cumulative GPA at the end of an engineering student's first year of college to within about half a grade point. The predictors of retention and cumulative GPA, while similar, differ in that high school academic metrics play a more important role in predicting cumulative GPA, while the affective measures play a more important role in predicting retention. In the last investigation, multi-outcome neural network models were used to understand and to predict engineering students' retention, GPA, and graduation from entry to departure. The participants were more than 4000 engineering students (cohort years 2004 - 2006) enrolled in a large Midwestern university. Different patterns of important predictors were identified for GPA, retention, and graduation. Overall, this research explores the feasibility of using modeling to enhance a student's educational experience in engineering. Student success modeling was used to identify the most important cognitive and affective predictors for a student's first calculus course, retention, GPA, and graduation. The results suggest that statistical modeling methods have great potential to assist decision making and help ensure student success in engineering education.
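A multi-outcome neural network of the kind described above can be sketched compactly: one hidden layer feeding two jointly trained output heads (a GPA-like score and a retention-like score). Everything here is synthetic: the predictors, targets, network size, and learning rate are invented for illustration, not the study's models or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 4                    # students x predictors (e.g. HS metrics)
X = rng.normal(size=(n, d))
# synthetic targets: column 0 ~ GPA-like, column 1 ~ retention-like
W_true = rng.normal(size=(d, 2))
Y = np.tanh(X @ W_true) + rng.normal(0, 0.1, size=(n, 2))

# one hidden layer (8 units), two outputs trained jointly (multi-outcome)
W1 = rng.normal(scale=0.5, size=(d, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, P0 = forward(X)
loss0 = np.mean((P0 - Y) ** 2)
for _ in range(500):
    H, P = forward(X)
    G = (P - Y) / n                      # dLoss/dP for the MSE above
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)       # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, P1 = forward(X)
loss1 = np.mean((P1 - Y) ** 2)
print(round(loss0, 3), round(loss1, 3))
```

A single-outcome variant would simply train one such network per target; the study found the two designs gave comparable predictive performance.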
Genomic Prediction Accounting for Residual Heteroskedasticity
Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.
2015-01-01
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950
NASA Astrophysics Data System (ADS)
Harley, P.; Spence, S.; Early, J.; Filsinger, D.; Dietrich, M.
2013-12-01
Single-zone modelling is used to assess different collections of impeller 1D loss models. Three collections of loss models have been identified in literature, and the background to each of these collections is discussed. Each collection is evaluated using three modern automotive turbocharger style centrifugal compressors; comparisons of performance for each of the collections are made. An empirical data set taken from standard hot gas stand tests for each turbocharger is used as a baseline for comparison. Compressor range is predicted in this study; impeller diffusion ratio is shown to be a useful method of predicting compressor surge in 1D, and choke is predicted using basic compressible flow theory. The compressor designer can use this as a guide to identify the most compatible collection of losses for turbocharger compressor design applications. The analysis indicates the most appropriate collection for the design of automotive turbocharger centrifugal compressors.
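Choke prediction from basic compressible flow theory, as mentioned above, reduces to the choked (sonic-throat) mass flow relation. The sketch below uses the standard isentropic formula; the throat area and inlet conditions are hypothetical, not taken from the turbochargers in the study.

```python
import math

def choked_mass_flow(area_m2, p0_pa, T0_k, gamma=1.4, R=287.0):
    """Maximum (choked) mass flow through a throat from 1D isentropic
    compressible flow theory:
      m_dot = A * p0 / sqrt(T0) * sqrt(gamma / R)
              * (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1)))
    """
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return area_m2 * p0_pa / math.sqrt(T0_k) * math.sqrt(gamma / R) * term

# hypothetical inducer throat of a small turbocharger compressor at
# ambient inlet conditions
m_dot = choked_mass_flow(area_m2=1.5e-3, p0_pa=101325.0, T0_k=298.0)
print(round(m_dot, 3))  # kg/s
```

In a 1D mean-line code this value caps the flow at the right-hand end of each speed line, while the impeller diffusion ratio criterion bounds the surge side.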
Prediction of coal grindability from exploration data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez, M.; Hazen, K.
1970-08-01
A general prediction model for the Hardgrove grindability index was constructed from 735 coal samples using the proximate analysis, heating value, and sulfur content. The coals used to develop the general model ranged in volatile matter from 12.8 to 49.2 percent, dry basis, and had grindability indexes ranging from 35 to 121. A restricted model applicable to bituminous coals having grindabilities in the 40 to 110 range was developed from the proximate analysis and the petrographic composition of the coal. The prediction of coal grindability within a single seam was also investigated. The results reported support the belief that mechanical properties of the coal are related to both chemical and petrographic factors. The mechanical properties of coal may be forecast in advance of mining, because the variables used as input to the prediction models can be measured from drill core samples collected during exploration.
NASA Technical Reports Server (NTRS)
Caruso, J. J.
1984-01-01
Finite element substructuring is used to predict unidirectional fiber composite hygral (moisture), thermal, and mechanical properties. COSMIC NASTRAN and MSC/NASTRAN are used to perform the finite element analysis. The results obtained from the finite element model are compared with those obtained from the simplified composite micromechanics equations. Unidirectional composite structures made of boron/HM-epoxy, S-glass/IMHS-epoxy, and AS/IMHS-epoxy are studied. The finite element analysis is performed using three-dimensional isoparametric brick elements and two distinct models. The first model consists of a single cell (one fiber surrounded by matrix) forming a square. The second model uses the single cell and substructuring to form a nine-cell square array. To compare computer time and results with the nine-cell superelement model, another nine-cell model is constructed using conventional mesh generation techniques. An independent computer program consisting of the simplified micromechanics equations is developed to predict the hygral, thermal, and mechanical properties for this comparison. The results indicate that advanced techniques can be used advantageously for fiber composite micromechanics.
Integrated stoichiometric, thermodynamic and kinetic modelling of steady state metabolism
Fleming, R.M.T.; Thiele, I.; Provan, G.; Nasheuer, H.P.
2010-01-01
The quantitative analysis of biochemical reactions and metabolites is at the frontier of biological sciences. The recent availability of high-throughput technology data sets in biology has paved the way for new modelling approaches at various levels of complexity, including the metabolome of a cell or an organism. Understanding the metabolism of single-cell and multi-cell organisms will provide the knowledge for the rational design of growth conditions to produce commercially valuable reagents in biotechnology. Here, we demonstrate how equations representing steady state mass conservation, energy conservation, the second law of thermodynamics, and reversible enzyme kinetics can be formulated as a single system of linear equalities and inequalities, in addition to linear equalities on exponential variables. Even though the feasible set is non-convex, the reformulation is exact and amenable to large-scale numerical analysis, a prerequisite for computationally feasible genome scale modelling. Integrating flux, concentration and kinetic variables in a unified constraint-based formulation is aimed at increasing the quantitative predictive capacity of flux balance analysis. Incorporation of experimental and theoretical bounds on thermodynamic and kinetic variables ensures that the predicted steady state fluxes are both thermodynamically and biochemically feasible. The resulting in silico predictions are tested against fluxomic data for central metabolism in E. coli and compare favourably with in silico predictions by flux balance analysis. PMID:20230840
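The steady state mass conservation constraint at the heart of the formulation above is the linear system S v = 0, whose null space spans the admissible flux distributions. A minimal sketch on an invented three-reaction pathway (not the E. coli network, and without the thermodynamic or kinetic constraints the paper adds):

```python
import numpy as np

# Toy stoichiometric matrix S (metabolites x reactions) for a linear
# pathway  -> A -> B ->  with exchange reactions at both ends
# (hypothetical network for illustration)
S = np.array([
    # R1    R2    R3
    [ 1.0, -1.0,  0.0],   # metabolite A: produced by R1, consumed by R2
    [ 0.0,  1.0, -1.0],   # metabolite B: produced by R2, consumed by R3
])

# Steady-state mass conservation: S v = 0. The null space of S spans
# all admissible steady-state flux distributions.
_, _, Vt = np.linalg.svd(S)
rank = np.linalg.matrix_rank(S)
null_space = Vt[rank:].T          # columns span {v : S v = 0}
v = null_space[:, 0]
print(np.allclose(S @ v, 0.0))
```

Flux balance analysis then optimizes a linear objective over this null space intersected with flux bounds; the paper's contribution is to add thermodynamic and kinetic (in)equalities on top of these mass-balance constraints.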
Wang, Xujing; Becker, Frederick F.; Gascoyne, Peter R. C.
2010-01-01
The scale-invariant property of the cytoplasmic membrane of biological cells is examined by applying the Minkowski–Bouligand method to digitized scanning electron microscopy images of the cell surface. The membrane is found to exhibit fractal behavior, and the derived fractal dimension gives a good description of its morphological complexity. Furthermore, we found that this fractal dimension correlates well with the specific membrane dielectric capacitance derived from the electrorotation measurements. Based on these findings, we propose a new fractal single-shell model to describe the dielectrics of mammalian cells, and compare it with the conventional single-shell model (SSM). We found that while both models fit with experimental data well, the new model is able to eliminate the discrepancy between the measured dielectric property of cells and that predicted by the SSM. PMID:21198103
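The Minkowski–Bouligand dimension used above is commonly estimated by box counting: cover the image with boxes of side s, count the occupied boxes N(s), and take the slope of log N(s) against log(1/s). A minimal sketch follows, sanity-checked on a filled square (expected dimension ~2) rather than on real membrane micrographs.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the Minkowski-Bouligand (box-counting) dimension of a
    binary image: the slope of log N(s) vs log(1/s), where N(s) is the
    number of occupied s x s boxes."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # group pixels into s x s boxes and mark boxes containing any pixel
        grid = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(grid.any(axis=(1, 3))))
    logs = np.log(1.0 / np.asarray(sizes, dtype=float))
    slope, _ = np.polyfit(logs, np.log(counts), 1)
    return slope

# sanity check: a filled 64x64 square should give dimension ~2
img = np.ones((64, 64), dtype=bool)
dim = box_counting_dimension(img)
print(round(dim, 2))  # → 2.0
```

For a cell-surface micrograph, `mask` would be the thresholded membrane contour, and the fitted slope (between 1 and 2 for a planar image) quantifies the morphological complexity that the paper correlates with membrane capacitance.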
Cohesive Modeling of Transverse Cracking in Laminates with a Single Layer of Elements per Ply
NASA Technical Reports Server (NTRS)
VanDerMeer, Frans P.; Davila, Carlos G.
2013-01-01
This study aims to bridge the gap between classical understanding of transverse cracking in cross-ply laminates and recent computational methods for the modeling of progressive laminate failure. Specifically, the study investigates under what conditions a finite element model with cohesive X-FEM cracks can reproduce the in situ effect for the ply strength. It is shown that it is possible to do so with a single element across the thickness of the ply, provided that the interface stiffness is properly selected. The optimal value for this interface stiffness is derived with an analytical shear lag model. It is also shown that, when the appropriate statistical variation of properties has been applied, models with a single element through the thickness of a ply can predict the density of transverse matrix cracks.
Cai, Zhongli; Pignol, Jean-Philippe; Chan, Conrad; Reilly, Raymond M
2010-03-01
Our objective was to compare Monte Carlo N-particle (MCNP) self- and cross-doses from (111)In to the nucleus of breast cancer cells with doses calculated by reported analytic methods (Goddu et al. and Faraggi et al.). A further objective was to determine whether the MCNP-predicted surviving fraction (SF) of breast cancer cells exposed in vitro to (111)In-labeled diethylenetriaminepentaacetic acid human epidermal growth factor ((111)In-DTPA-hEGF) could accurately predict the experimentally determined values. MCNP was used to simulate the transport of electrons emitted by (111)In from the cell surface, cytoplasm, or nucleus. The doses to the nucleus per decay (S values) were calculated for single cells, closely packed monolayer cells, or cell clusters. The cell and nucleus dimensions of 6 breast cancer cell lines were measured, and cell line-specific S values were calculated. For self-doses, MCNP S values of nucleus to nucleus agreed very well with those of Goddu et al. (ratio of S values using analytic methods vs. MCNP = 0.962-0.995) and Faraggi et al. (ratio = 1.011-1.024). MCNP S values of cytoplasm and cell surface to nucleus compared fairly well with the reported values (ratio = 0.662-1.534 for Goddu et al.; 0.944-1.129 for Faraggi et al.). For cross-doses, the S values to the nucleus were independent of (111)In subcellular distribution but increased with cluster size. S values for monolayer cells were significantly different from those of single cells and cell clusters. The MCNP-predicted SF for monolayer MDA-MB-468, MDA-MB-231, and MCF-7 cells agreed with the experimental data (relative errors of 3.1%, -1.0%, and 1.7%). The single-cell and cell cluster models were less accurate in predicting the SF. For MDA-MB-468 cells, the relative error was 8.1% using the single-cell model and -54% to -67% using the cell cluster model. Individual cell-line dimensions had large effects on S values and were needed to estimate doses and SF accurately.
MCNP simulation compared well with the reported analytic methods in the calculation of subcellular S values for single cells and cell clusters. Application of a monolayer model was most accurate in predicting the SF of breast cancer cells exposed in vitro to (111)In-DTPA-hEGF.
A model of litter size distribution in cattle.
Bennett, G L; Echternkamp, S E; Gregory, K E
1998-07-01
Genetic increases in twinning of cattle could result in increased frequency of triplet or higher-order births. There are no estimates of the incidence of triplets in populations with genetic levels of twinning over 40% because these populations either have not existed or have not been documented. A model of the distribution of litter size in cattle is proposed. Empirical estimates of ovulation rate distribution in sheep were combined with biological hypotheses about the fate of embryos in cattle. Two phases of embryo loss were hypothesized. The first phase is considered to be preimplantation. Losses in this phase occur independently (i.e., the loss of one embryo does not affect the loss of the remaining embryos). The second phase occurs after implantation. The loss of one embryo in this stage results in the loss of all embryos. Fewer than 5% triplet births are predicted when 50% of births are twins and triplets. Above 60% multiple births, increased triplets accounted for most of the increase in litter size. Predictions were compared with data from 5,142 calvings by 14 groups of heifers and cows with average litter sizes ranging from 1.14 to 1.36 calves. The predicted number of triplets was not significantly different (chi2 = 16.85, df = 14) from the observed number. The model also predicted differences in conception rates. A cow ovulating two ova was predicted to have the highest conception rate in a single breeding cycle. As mean ovulation rate increased, predicted conception to one breeding cycle increased. Conception to two or three breeding cycles decreased as mean ovulation increased because late-pregnancy failures increased. An alternative model of the fate of ova in cattle based on embryo and uterine competency predicts very similar proportions of singles, twins, and triplets but different conception rates. 
The proposed model of litter size distribution in cattle accurately predicts the proportion of triplets found in cattle with genetically high twinning rates. This model can be used in projecting efficiency changes resulting from genetically increasing the twinning rate in cattle.
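The two-phase loss structure described above (independent preimplantation losses per embryo, all-or-none postimplantation loss of the whole litter) can be sketched directly as a probability model. The ovulation-rate distribution and survival probabilities below are made-up illustrative values, not the paper's estimates:

```python
from math import comb

def litter_size_distribution(ovulation_dist, p_survive, p_uterine_ok):
    """Two-phase embryo-loss model sketched from the abstract.
    Phase 1: each embryo independently survives preimplantation with
    probability p_survive (binomial). Phase 2: the whole litter is lost
    after implantation with probability 1 - p_uterine_ok (all-or-none).
    Litter size 0 represents failure to produce a calving."""
    litter = {0: 0.0}
    for n_ova, p_n in ovulation_dist.items():
        for k in range(n_ova + 1):
            # Probability that exactly k of n_ova embryos survive phase 1.
            p_k = p_n * comb(n_ova, k) * p_survive**k * (1 - p_survive)**(n_ova - k)
            if k == 0:
                litter[0] += p_k
            else:
                litter[k] = litter.get(k, 0.0) + p_k * p_uterine_ok
                litter[0] += p_k * (1 - p_uterine_ok)
    return litter

# Hypothetical ovulation-rate distribution: 50% single, 40% double, 10% triple.
dist = litter_size_distribution({1: 0.5, 2: 0.4, 3: 0.1},
                                p_survive=0.7, p_uterine_ok=0.9)
```

Because phase-2 loss removes the whole litter, raising the mean ovulation rate in this sketch raises late-pregnancy failures as well, which is consistent with the conception-rate pattern the abstract describes.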
NASA Technical Reports Server (NTRS)
McKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.
2005-01-01
Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield best performance and avoid model discontinuity over day/night data boundaries.
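The model form described above, an intercept plus a single fixed-power term per regressor, reduces to ordinary least squares once the powers are chosen. The powers, coefficients, and synthetic data below are illustrative assumptions, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for regressors of the kind named in the abstract
# (mean wind speed, wind-speed variance, cloud cover metric, ...).
# The fixed powers are illustrative, not the study's optimal values.
powers = [1.0, 0.5, 2.0]
X_raw = rng.uniform(1.0, 5.0, size=(200, 3))
true_coefs = np.array([0.3, 1.2, -0.4, 0.05])  # intercept + one per term

# Design matrix: column of ones (intercept) plus each regressor raised
# to its fixed power; the model is then linear in the coefficients.
X = np.column_stack([np.ones(len(X_raw))]
                    + [X_raw[:, i]**p for i, p in enumerate(powers)])
y = X @ true_coefs + rng.normal(0.0, 0.01, size=len(X_raw))

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
```

For the night model one would regress on the natural log of EDR instead of EDR itself, exactly as the abstract describes, leaving the fitting machinery unchanged.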
A predictive framework for evaluating models of semantic organization in free recall
Morton, Neal W; Polyn, Sean M.
2016-01-01
Research in free recall has demonstrated that semantic associations reliably influence the organization of search through episodic memory. However, the specific structure of these associations and the mechanisms by which they influence memory search remain unclear. We introduce a likelihood-based model-comparison technique, which embeds a model of semantic structure within the context maintenance and retrieval (CMR) model of human memory search. Within this framework, model variants are evaluated in terms of their ability to predict the specific sequence in which items are recalled. We compare three models of semantic structure, latent semantic analysis (LSA), global vectors (GloVe), and word association spaces (WAS), and find that models using WAS have the greatest predictive power. Furthermore, we find evidence that semantic and temporal organization are driven by distinct item and context cues, rather than a single context cue. This finding provides an important constraint for theories of memory search. PMID:28331243
Russell, Bayden D.; Harley, Christopher D. G.; Wernberg, Thomas; Mieszkowska, Nova; Widdicombe, Stephen; Hall-Spencer, Jason M.; Connell, Sean D.
2012-01-01
Most studies that forecast the ecological consequences of climate change target a single species and a single life stage. Depending on climatic impacts on other life stages and on interacting species, however, the results from simple experiments may not translate into accurate predictions of future ecological change. Research needs to move beyond simple experimental studies and environmental envelope projections for single species towards identifying where ecosystem change is likely to occur and the drivers for this change. For this to happen, we advocate research directions that (i) identify the critical species within the target ecosystem and the life stage(s) most susceptible to changing conditions, and (ii) identify the key interactions between these species and components of their broader ecosystem. A combined approach using macroecology, experimentally derived data and modelling that incorporates energy budgets in life cycle models may identify critical abiotic conditions that disproportionately alter important ecological processes under forecasted climates. PMID:21900317
What is adaptive about adaptive decision making? A parallel constraint satisfaction account.
Glöckner, Andreas; Hilbig, Benjamin E; Jekel, Marc
2014-12-01
There is broad consensus that human cognition is adaptive. However, the vital question of how exactly this adaptivity is achieved has remained largely open. Herein, we contrast two frameworks which account for adaptive decision making, namely broad and general single-mechanism accounts vs. multi-strategy accounts. We propose and fully specify a single-mechanism model for decision making based on parallel constraint satisfaction processes (PCS-DM) and contrast it theoretically and empirically against a multi-strategy account. To achieve sufficiently sensitive tests, we rely on a multiple-measure methodology including choice, reaction time, and confidence data as well as eye-tracking. Results show that manipulating the environmental structure produces clear adaptive shifts in choice patterns - as both frameworks would predict. However, results on the process level (reaction time, confidence), in information acquisition (eye-tracking), and from cross-predicting choice consistently corroborate single-mechanism accounts in general, and the proposed parallel constraint satisfaction model for decision making in particular. Copyright © 2014 Elsevier B.V. All rights reserved.
Sun, Xiangqing; Elston, Robert C; Barnholtz-Sloan, Jill S; Falk, Gary W; Grady, William M; Faulx, Ashley; Mittal, Sumeet K; Canto, Marcia; Shaheen, Nicholas J; Wang, Jean S; Iyer, Prasad G; Abrams, Julian A; Tian, Ye D; Willis, Joseph E; Guda, Kishore; Markowitz, Sanford D; Chandar, Apoorva; Warfe, James M; Brock, Wendy; Chak, Amitabh
2016-05-01
Barrett's esophagus is often asymptomatic and only a small portion of Barrett's esophagus patients are currently diagnosed and under surveillance. Therefore, it is important to develop risk prediction models to identify high-risk individuals with Barrett's esophagus. Familial aggregation of Barrett's esophagus and esophageal adenocarcinoma, and the increased risk of esophageal adenocarcinoma for individuals with a family history, raise the necessity of including genetic factors in the prediction model. Methods to determine risk prediction models using both risk covariates and ascertained family data are not well developed. We developed a Barrett's Esophagus Translational Research Network (BETRNet) risk prediction model from 787 singly ascertained Barrett's esophagus pedigrees and 92 multiplex Barrett's esophagus pedigrees, fitting a multivariate logistic model that incorporates family history and clinical risk factors. Eight risk factors - age, sex, education level, parental status, smoking, heartburn frequency, regurgitation frequency, and use of acid suppressants - were included in the model. The prediction accuracy was evaluated on the training dataset and an independent validation dataset of 643 multiplex Barrett's esophagus pedigrees. Our results indicate that family information helps to predict Barrett's esophagus risk, and predicting in families improves both prediction calibration and discrimination accuracy. Our model can predict Barrett's esophagus risk for anyone with family members known to have, or not have, had Barrett's esophagus. It can predict risk for unrelated individuals without knowing any relatives' information. Our prediction model will shed light on effectively identifying high-risk individuals for Barrett's esophagus screening and surveillance, consequently allowing intervention at an early stage, and reducing mortality from esophageal adenocarcinoma. Cancer Epidemiol Biomarkers Prev; 25(5); 727-35. ©2016 AACR.
©2016 American Association for Cancer Research.
Witkiewicz, Agnieszka K; Balaji, Uthra; Eslinger, Cody; McMillan, Elizabeth; Conway, William; Posner, Bruce; Mills, Gordon B; O'Reilly, Eileen M; Knudsen, Erik S
2016-08-16
Pancreatic ductal adenocarcinoma (PDAC) harbors the worst prognosis of any common solid tumor, and multiple failed clinical trials indicate therapeutic recalcitrance. Here, we use exome sequencing of patient tumors and find multiple conserved genetic alterations. However, the majority of tumors exhibit no clearly defined therapeutic target. High-throughput drug screens using patient-derived cell lines found rare examples of sensitivity to monotherapy, with most models requiring combination therapy. Using PDX models, we confirmed the effectiveness and selectivity of the identified treatment responses. Out of more than 500 single and combination drug regimens tested, no single treatment was effective for the majority of PDAC tumors, and each case had unique sensitivity profiles that could not be predicted using genetic analyses. These data indicate a shortcoming of reliance on genetic analysis to predict efficacy of currently available agents against PDAC and suggest that sensitivity profiling of patient-derived models could inform personalized therapy design for PDAC. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Link-prediction to tackle the boundary specification problem in social network surveys
De Wilde, Philippe; Buarque de Lima-Neto, Fernando
2017-01-01
Diffusion processes in social networks often cause the emergence of global phenomena from individual behavior within a society. The study of those global phenomena and the simulation of those diffusion processes frequently require a good model of the global network. However, survey data and data from online sources are often restricted to single social groups or features, such as age groups, single schools, companies, or interest groups. Hence, a modeling approach is required that extrapolates the locally restricted data to a global network model. We tackle this Missing Data Problem using Link-Prediction techniques from social network research, network generation techniques from the area of Social Simulation, as well as a combination of both. We found that techniques employing less information may be more adequate to solve this problem, especially when data granularity is an issue. We validated the network models created with our techniques on a number of real-world networks, investigating degree distributions as well as the likelihood of links given the geographical distance between two nodes. PMID:28426826
Brown, Jason L; Cameron, Alison; Yoder, Anne D; Vences, Miguel
2014-10-09
Pattern and process are inextricably linked in biogeographic analyses: though we can observe pattern, we must infer process. Inferences of process are often based on ad hoc comparisons using a single spatial predictor. Here, we present an alternative approach that uses mixed-spatial models to measure the predictive potential of combinations of hypotheses. Biodiversity patterns are estimated from 8,362 occurrence records from 745 species of Malagasy amphibians and reptiles. By incorporating 18 spatially explicit predictions of 12 major biogeographic hypotheses, we show that mixed models greatly improve our ability to explain the observed biodiversity patterns. We conclude that patterns are influenced by a combination of diversification processes rather than by a single predominant mechanism. A 'one-size-fits-all' model does not exist. By developing a novel method for examining and synthesizing spatial parameters such as species richness, endemism and community similarity, we demonstrate the potential of these analyses for understanding the diversification history of Madagascar's biota.
Engine-induced structural-borne noise in a general aviation aircraft
NASA Technical Reports Server (NTRS)
Unruh, J. F.; Scheidt, D. C.; Pomerening, D. J.
1979-01-01
Structural borne interior noise in a single engine general aviation aircraft was studied to determine the importance of engine induced structural borne noise and to determine the necessary modeling requirements for the prediction of structural borne interior noise. Engine attached/detached ground test data show that engine induced structural borne noise is a primary interior noise source for the single engine test aircraft, cabin noise is highly influenced by responses at the propeller tone, and cabin acoustic resonances can influence overall noise levels. Results from structural and acoustic finite element coupled models of the test aircraft show that wall flexibility has a strong influence on fundamental cabin acoustic resonances, the lightweight fuselage structure has a high modal density, and finite element analysis procedures are appropriate for the prediction of structural borne noise.
Numerical and Experimental Validation of a New Damage Initiation Criterion
NASA Astrophysics Data System (ADS)
Sadhinoch, M.; Atzema, E. H.; Perdahcioglu, E. S.; van den Boogaard, A. H.
2017-09-01
Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model in which a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode parameter dependent, and this Modified Johnson-Cook (MJC) criterion is used as a Damage Initiation Surface (DIS) in combination with the built-in Abaqus ductile damage model. An exponential damage evolution law has been used with a single fracture energy value. Ultimately, the simulated force-displacement curves are compared with experiments to validate the MJC criterion; 7 out of 9 fracture experiments were predicted accurately. The limitations and accuracy of the failure predictions of the newly developed damage initiation criterion are discussed briefly.
Application of ANN and fuzzy logic algorithms for streamflow modelling of Savitri catchment
NASA Astrophysics Data System (ADS)
Kothari, Mahesh; Gharde, K. D.
2015-07-01
Streamflow prediction is an essential aspect of any watershed modelling. Black box models (soft computing techniques) have proven to be an efficient alternative to physical (traditional) methods for simulating streamflow and sediment yield of catchments. The present study focusses on the development of models using ANN and fuzzy logic (FL) algorithms for predicting the streamflow of the Savitri River Basin catchment. The input vectors to these models were daily rainfall, mean daily evaporation, mean daily temperature and lagged streamflow. In the present study, 20 years (1992-2011) of rainfall and other hydrological data were considered, of which 13 years (1992-2004) were used for training and the remaining 7 years (2005-2011) for validation of the models. Model performance was evaluated by the R, RMSE, EV, CE, and MAD statistical parameters. It was found that ANN model performance improved with increasing numbers of input vectors. The fuzzy logic models predicted streamflow better with rainfall as a single input than with multiple input vectors. Comparing the ANN and FL algorithms for prediction of streamflow, the ANN model performance is quite superior.
An empirical propellant response function for combustion stability predictions
NASA Technical Reports Server (NTRS)
Hessler, R. O.
1980-01-01
An empirical response function model was developed for ammonium perchlorate propellants to supplant T-burner testing at the preliminary design stage. The model was developed by fitting a limited T-burner data base, in terms of oxidizer size and concentration, to an analytical two parameter response function expression. Multiple peaks are predicted, but the primary effect is of a single peak for most formulations, with notable bulges for the various AP size fractions. The model was extended to velocity coupling with the assumption that dynamic response was controlled primarily by the solid phase described by the two parameter model. The magnitude of velocity coupling was then scaled using an erosive burning law. Routine use of the model for stability predictions on a number of propulsion units indicates that the model tends to overpredict propellant response. It is concluded that the model represents a generally conservative prediction tool, suited especially for the preliminary design stage when T-burner data may not be readily available. The model work included development of a rigorous summation technique for pseudopropellant properties and of a concept for modeling ordered packing of particulates.
Prediction Accuracy of Error Rates for MPTB Space Experiment
NASA Technical Reports Server (NTRS)
Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.
1998-01-01
This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAMs (UPD4216) and 1 Kb SRAMs (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single-event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.
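The standard prediction described above folds a measured cross section into an environmental model: the proton-induced upset rate is the integral of the cross section sigma(E) against the differential proton flux phi(E). A minimal numerical sketch (the sample spectra below are hypothetical, not MPTB data):

```python
def seu_rate(energies, cross_section, flux):
    """Proton-induced upset rate as the energy integral of the measured
    cross section sigma(E) [cm^2/device] against the differential proton
    flux phi(E) [protons/(cm^2 s MeV)], via the trapezoid rule.
    Returns upsets per device per second."""
    rate = 0.0
    for i in range(len(energies) - 1):
        dE = energies[i + 1] - energies[i]
        f0 = cross_section[i] * flux[i]
        f1 = cross_section[i + 1] * flux[i + 1]
        rate += 0.5 * (f0 + f1) * dE
    return rate
```

Uncertainty in the environmental model enters through phi(E), which is exactly why the paper compares the modeled environment against the one measured aboard MPTB.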
Frequency-dependent variation in mimetic fidelity in an intraspecific mimicry system
Iserbyt, Arne; Bots, Jessica; Van Dongen, Stefan; Ting, Janice J.; Van Gossum, Hans; Sherratt, Thomas N.
2011-01-01
Contemporary theory predicts that the degree of mimetic similarity of mimics towards their model should increase as the mimic/model ratio increases. Thus, when the mimic/model ratio is high, then the mimic has to resemble the model very closely to still gain protection from the signal receiver. To date, empirical evidence of this effect is limited to a single example where mimicry occurs between species. Here, for the first time, we test whether mimetic fidelity varies with mimic/model ratios in an intraspecific mimicry system, in which signal receivers are the same species as the mimics and models. To this end, we studied a polymorphic damselfly with a single male phenotype and two female morphs, in which one morph resembles the male phenotype while the other does not. Phenotypic similarity of males to both female morphs was quantified using morphometric data for multiple populations with varying mimic/model ratios repeated over a 3 year period. Our results demonstrate that male-like females were overall closer in size to males than the other female morph. Furthermore, the extent of morphological similarity between male-like females and males, measured as Mahalanobis distances, was frequency-dependent in the direction predicted. Hence, this study provides direct quantitative support for the prediction that the mimetic similarity of mimics to their models increases as the mimic/model ratio increases. We suggest that the phenomenon may be widespread in a range of mimicry systems. PMID:21367784
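The Mahalanobis distance used above to quantify morphological similarity measures how far an individual lies from a reference group, accounting for correlations among the morphometric variables. A minimal sketch (the numeric example is illustrative, not the damselfly data):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of observation x from a reference group
    described by its mean vector and covariance matrix: smaller values
    mean closer phenotypic similarity to the group."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

With an identity covariance this reduces to ordinary Euclidean distance; with the covariance estimated from, say, the male phenotype, it gives the group-relative similarity measure the study's frequency-dependence analysis requires.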
Paul R. Miller
1980-01-01
These proceedings papers and poster summaries discuss the influence of air pollution on ecological relationships; interactions of producers, consumers, and decomposers under pollutant stress in terrestrial and related aquatic ecosystems; single species-single pollutant stress; and the use of ecological systems models for interpreting and predicting pollutant effects.
Predicting the cover-up of dead branches using a simple single regressor equation
Christopher M. Oswalt; Wayne K. Clatterbuck; E.C. Burkhardt
2007-01-01
Information on the effects of branch diameter on branch occlusion is necessary for building models capable of forecasting the effect of management decisions on tree or log grade. We investigated the relationship between branch size and subsequent branch occlusion through diameter growth with special attention toward the development of a simple single regressor equation...
Du, Qing-Yun; Wang, En-Yin; Huang, Yan; Guo, Xiao-Yi; Xiong, Yu-Jing; Yu, Yi-Ping; Yao, Gui-Dong; Shi, Sen-Lin; Sun, Ying-Pu
2016-04-01
To evaluate the independent effects of the degree of blastocoele expansion and re-expansion and the inner cell mass (ICM) and trophectoderm (TE) grades on predicting live birth after fresh and vitrified/warmed single blastocyst transfer. Retrospective study. Reproductive medical center. Women undergoing 844 fresh and 370 vitrified/warmed single blastocyst transfer cycles. None. Live-birth rate correlated with blastocyst morphology parameters by logistic regression analysis and Spearman correlation analysis. The degree of blastocoele expansion and re-expansion was the only blastocyst morphology parameter that exhibited a significant ability to predict live birth in both fresh and vitrified/warmed single blastocyst transfer cycles by multivariate logistic regression and Spearman correlation analysis. Although the ICM grade was significantly related to live birth in fresh cycles according to the univariate model, its effect was not maintained in the multivariate logistic analysis. In vitrified/warmed cycles, neither ICM nor TE grade was correlated with live birth by logistic regression analysis. This study is the first to confirm that the degree of blastocoele expansion and re-expansion is a better predictor of live birth after both fresh and vitrified/warmed single blastocyst transfer cycles than ICM or TE grade. Copyright © 2016. Published by Elsevier Inc.
Modeling environmental contamination in hospital single- and four-bed rooms.
King, M-F; Noakes, C J; Sleigh, P A
2015-12-01
Aerial dispersion of pathogens is recognized as a potential transmission route for hospital acquired infections; however, little is known about the link between healthcare worker (HCW) contacts' with contaminated surfaces, the transmission of infections and hospital room design. We combine computational fluid dynamics (CFD) simulations of bioaerosol deposition with a validated probabilistic HCW-surface contact model to estimate the relative quantity of pathogens accrued on hands during six types of care procedures in two room types. Results demonstrate that care type is most influential (P < 0.001), followed by the number of surface contacts (P < 0.001) and the distribution of surface pathogens (P = 0.05). Highest hand contamination was predicted during Personal care despite the highest levels of hand hygiene. Ventilation rates of 6 ac/h vs. 4 ac/h showed only minor reductions in predicted hand colonization. Pathogens accrued on hands decreased monotonically after patient care in single rooms due to the physical barrier of bioaerosol transmission between rooms and subsequent hand sanitation. Conversely, contamination was predicted to increase during contact with patients in four-bed rooms due to spatial spread of pathogens. Location of the infectious patient with respect to ventilation played a key role in determining pathogen loadings (P = 0.05). We present the first quantitative model predicting the surface contacts by HCW and the subsequent accretion of pathogenic material as they perform standard patient care. This model indicates that single rooms may significantly reduce the risk of cross-contamination due to indirect infection transmission. Not all care types pose the same risks to patients, and housekeeping performed by HCWs may be an important contribution in the transmission of pathogens between patients. 
Ventilation rates and positioning of infectious patients within four-bed rooms can mitigate the accretion of pathogens, thereby reducing the risk of missed hand hygiene opportunities. The model provides a tool to quantitatively evaluate the influence of hospital room design on infection risk. © 2015 The Authors. Indoor Air Published by John Wiley & Sons Ltd.
Catanzaro, Daniele; Schäffer, Alejandro A.; Schwartz, Russell
2016-01-01
Ductal Carcinoma In Situ (DCIS) is a precursor lesion of Invasive Ductal Carcinoma (IDC) of the breast. Investigating its temporal progression could provide fundamental new insights for the development of better diagnostic tools to predict which cases of DCIS will progress to IDC. We investigate the problem of reconstructing a plausible progression from single-cell sampled data of an individual with Synchronous DCIS and IDC. Specifically, by using a number of assumptions derived from the observation of cellular atypia occurring in IDC, we design a possible predictive model using integer linear programming (ILP). Computational experiments carried out on a preexisting data set of 13 patients with simultaneous DCIS and IDC show that the corresponding predicted progression models are classifiable into categories having specific evolutionary characteristics. The approach provides new insights into mechanisms of clonal progression in breast cancers and helps illustrate the power of the ILP approach for similar problems in reconstructing tumor evolution scenarios under complex sets of constraints. PMID:26353381
Anthropic prediction for a large multi-jump landscape
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz-Perlov, Delia, E-mail: delia@perlov.com
2008-10-15
The assumption of a flat prior distribution plays a critical role in the anthropic prediction of the cosmological constant. In a previous paper we analytically calculated the distribution for the cosmological constant, including the prior and anthropic selection effects, in a large toy 'single-jump' landscape model. We showed that it is possible for the fractal prior distribution that we found to behave as an effectively flat distribution in a wide class of landscapes, but only if the single-jump size is large enough. We extend this work here by investigating a large (N ~ 10^500) toy 'multi-jump' landscape model. The jump sizes range over three orders of magnitude and an overall free parameter c determines the absolute size of the jumps. We will show that for 'large' c the distribution of probabilities of vacua in the anthropic range is effectively flat, and thus the successful anthropic prediction is validated. However, we argue that for small c, the distribution may not be smooth.
Multi-scale modelling of elastic moduli of trabecular bone
Hamed, Elham; Jasiuk, Iwona; Yoo, Andrew; Lee, YikHan; Liszka, Tadeusz
2012-01-01
We model trabecular bone as a nanocomposite material with hierarchical structure and predict its elastic properties at different structural scales. The analysis involves a bottom-up multi-scale approach, starting with nanoscale (mineralized collagen fibril) and moving up the scales to sub-microscale (single lamella), microscale (single trabecula) and mesoscale (trabecular bone) levels. Continuum micromechanics methods, composite materials laminate theory and finite-element methods are used in the analysis. Good agreement is found between theoretical and experimental results. PMID:22279160
Luke, Barbara; Brown, Morton B; Wantman, Ethan; Stern, Judy E; Baker, Valerie L; Widra, Eric; Coddington, Charles C; Gibbons, William E; Van Voorhis, Bradley J; Ball, G David
2015-05-01
The purpose of this study was to use a validated prediction model to examine whether single embryo transfer (SET) over 2 cycles results in live birth rates (LBR) comparable with 2 embryos transferred (DET) in 1 cycle and reduces the probability of a multiple birth (ie, multiple birth rate [MBR]). Prediction models of LBR and MBR for a woman considering assisted reproductive technology developed from linked cycles from the Society for Assisted Reproductive Technology Clinic Outcome Reporting System for 2006-2012 were used to compare SET over 2 cycles with DET in 1 cycle. The prediction model was based on a woman's age, body mass index (BMI), gravidity, previous full-term births, infertility diagnoses, embryo state, number of embryos transferred, and number of cycles. To demonstrate the effect of the number of embryos transferred (1 or 2), the LBRs and MBRs were estimated for women with a single infertility diagnosis (male factor, ovulation disorders, diminished ovarian reserve, and unexplained); nulligravid; BMI of 20, 25, 30, and 35 kg/m2; and ages 25, 35, and 40 years old by cycle (first or second). The cumulative LBR over 2 cycles with SET was similar to or better than the LBR with DET in a single cycle (for example, for women with the diagnosis of ovulation disorders: 35 years old; BMI, 30 kg/m2; 54.4% vs 46.5%; and for women who are 40 years old: BMI, 30 kg/m2; 31.3% vs 28.9%). The MBR with DET in 1 cycle was 32.8% for women 35 years old and 20.9% for women 40 years old; with SET, the cumulative MBR was 2.7% and 1.6%, respectively. The application of this validated predictive model demonstrated that the cumulative LBR is as good as or better with SET over 2 cycles than with DET in 1 cycle, while greatly reducing the probability of a multiple birth. Copyright © 2015 Elsevier Inc. All rights reserved.
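The cumulative comparison rests on simple probability: if the model assigns live-birth probability p1 and p2 to a woman's first and second SET cycles, the cumulative LBR is 1 − (1 − p1)(1 − p2). A sketch with illustrative per-cycle rates (not the published model's outputs):

```python
def cumulative_lbr(p_per_cycle):
    # Probability of at least one live birth across independent cycles.
    fail = 1.0
    for p in p_per_cycle:
        fail *= (1.0 - p)
    return 1.0 - fail

# Illustrative: with ~32% per SET cycle, two cycles give ~53.8%,
# exceeding a hypothetical 46.5% single-cycle DET rate.
two_set_cycles = cumulative_lbr([0.32, 0.32])
```

The same calculation applied to multiple-birth probabilities explains why cumulative SET keeps the MBR low: each SET cycle carries only the small monozygotic-twinning risk, never the dizygotic risk of transferring two embryos.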
Joshi, Shreedhar S; Anthony, G; Manasa, D; Ashwini, T; Jagadeesh, A M; Borde, Deepak P; Bhat, Seetharam; Manjunath, C N
2014-01-01
To validate Aristotle basic complexity and Aristotle comprehensive complexity (ABC and ACC) and risk adjustment in congenital heart surgery-1 (RACHS-1) prediction models for in-hospital mortality after surgery for congenital heart disease in a single surgical unit. Patients younger than 18 years, who had undergone surgery for congenital heart diseases from July 2007 to July 2013, were enrolled. ABC and ACC scoring and assignment to RACHS-1 categories were done retrospectively from retrieved case files. Discriminative power of the scoring systems was assessed with the area under the curve (AUC) of receiver operating characteristic (ROC) curves. Calibration (test for goodness of fit of the model) was measured with the Hosmer-Lemeshow modification of the χ2 test. Net reclassification improvement (NRI) and integrated discrimination improvement (IDI) were applied to assess reclassification. A total of 1150 cases were assessed with an all-cause in-hospital mortality rate of 7.91%. When modeled for multivariate regression analysis, the ABC (χ2 = 8.24, P = 0.08), ACC (χ2 = 4.17, P = 0.57) and RACHS-1 (χ2 = 2.13, P = 0.14) scores showed good overall performance. The AUC was 0.677 with 95% confidence interval (CI) of 0.61-0.73 for the ABC score, 0.704 (95% CI: 0.64-0.76) for the ACC score, and 0.607 (95% CI: 0.55-0.66) for RACHS-1. ACC had improved predictability in comparison to RACHS-1 and ABC on analysis with NRI and IDI. ACC predicted mortality better than the ABC and RACHS-1 models. A national database will help in developing predictive models unique to our populations; until then, the ACC scoring model can be used to analyze individual performance and compare it with that of other institutes.
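The AUC values reported here can be computed directly from risk scores via the rank-sum (Mann-Whitney) identity: AUC is the probability that a randomly chosen death received a higher score than a randomly chosen survivor. A minimal sketch with toy scores (not the study's data):

```python
def auc(pos_scores, neg_scores):
    # Mann-Whitney estimate: fraction of (positive, negative) pairs
    # ranked correctly, counting ties as half.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy complexity scores: in-hospital deaths (positives) vs survivors.
deaths = [9.0, 7.5, 8.0]
survivors = [6.0, 7.5, 5.0, 8.5]
score = auc(deaths, survivors)   # 9.5 correctly ranked half-pairs out of 12
```

An AUC near 0.7, as found for the ACC score, means roughly 7 of 10 randomly drawn death-survivor pairs are ranked correctly by the score.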
ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models.
Maghrabi, Ali H A; McGuffin, Liam J
2017-07-03
Methods that reliably estimate the likely similarity between the predicted and native structures of proteins have become essential for driving the acceptance and adoption of three-dimensional protein models by life scientists. ModFOLD6 is the latest version of our leading resource for Estimates of Model Accuracy (EMA), which uses a pioneering hybrid quasi-single model approach. The ModFOLD6 server integrates scores from three pure-single model methods and three quasi-single model methods using a neural network to estimate local quality scores. Additionally, the server provides three options for producing global score estimates, depending on the requirements of the user: (i) ModFOLD6_rank, which is optimized for ranking/selection, (ii) ModFOLD6_cor, which is optimized for correlations of predicted and observed scores and (iii) ModFOLD6 global for balanced performance. The ModFOLD6 methods rank among the top few for EMA, according to independent blind testing by the CASP12 assessors. The ModFOLD6 server is also continuously automatically evaluated as part of the CAMEO project, where significant performance gains have been observed compared to our previous server and other publicly available servers. The ModFOLD6 server is freely available at: http://www.reading.ac.uk/bioinf/ModFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Multiscale Modeling of Angiogenesis and Predictive Capacity
NASA Astrophysics Data System (ADS)
Pillay, Samara; Byrne, Helen; Maini, Philip
Tumors induce the growth of new blood vessels from existing vasculature through angiogenesis. Using an agent-based approach, we model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, motility of cells through biased random walks, and include birth and death-like processes. We use the transition probabilities associated with the discrete model and a discrete conservation equation for cell occupancy to determine collective cell behavior, in terms of partial differential equations (PDEs). We derive three PDE models, incorporating single-species volume exclusion, multi-species volume exclusion, and no volume exclusion. By fitting the parameters in our PDE models and other well-established continuum models to agent-based simulations during a specific time period, and then comparing the outputs from the PDE models and agent-based model at later times, we aim to determine how well the PDE models predict the future behavior of the agent-based model. We also determine whether predictions differ across PDE models and the significance of those differences. This may impact drug development strategies based on PDE models.
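The agent-based ingredients named above can be sketched as a biased random walk on a 1D lattice with volume exclusion (at most one agent per site); the lattice size, bias, and initial condition below are hypothetical, and birth/death processes are omitted for brevity:

```python
import random

def sweep(lattice, p_right=0.7):
    # One sweep of a biased exclusion process: sites are visited in random
    # order, each agent attempts a move, and moves into occupied sites or
    # off the lattice are aborted (volume exclusion).
    L = len(lattice)
    for i in random.sample(range(L), L):
        if lattice[i]:
            j = i + 1 if random.random() < p_right else i - 1
            if 0 <= j < L and not lattice[j]:
                lattice[i], lattice[j] = 0, 1

random.seed(0)
lattice = [1] * 10 + [0] * 40   # agents seeded at the left edge
for _ in range(100):
    sweep(lattice)
```

Averaging the occupancy of many such simulations and taking the continuum limit of the transition probabilities is what yields the PDE descriptions compared in the work.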
Vibrational Dynamics of Biological Molecules: Multi-quantum Contributions
Leu, Bogdan M.; Sage, J. Timothy; Zgierski, Marek Z.; Wyllie, Graeme R. A.; Ellison, Mary K.; Scheidt, W. Robert; Sturhahn, Wolfgang; Alp, E. Ercan; Durbin, Stephen M.
2006-01-01
High-resolution X-ray measurements near a nuclear resonance reveal the complete vibrational spectrum of the probe nucleus. Because of this, nuclear resonance vibrational spectroscopy (NRVS) is a uniquely quantitative probe of the vibrational dynamics of reactive iron sites in proteins and other complex molecules. Our measurements of vibrational fundamentals have revealed both frequencies and amplitudes of 57Fe vibrations in proteins and model compounds. Information on the direction of Fe motion has also been obtained from measurements on oriented single crystals, and provides an essential test of normal mode predictions. Here, we report the observation of weaker two-quantum vibrational excitations (overtones and combinations) for compounds that mimic the active site of heme proteins. The predicted intensities depend strongly on the direction of Fe motion. We compare the observed features with predictions based on the observed fundamentals, using information on the direction of Fe motion obtained either from DFT predictions or from single crystal measurements. Two-quantum excitations may become a useful tool to identify the directions of the Fe oscillations when single crystals are not available. PMID:16894397
Lindor, Noralane M; Lindor, Rachel A; Apicella, Carmel; Dowty, James G; Ashley, Amanda; Hunt, Katherine; Mincey, Betty A; Wilson, Marcia; Smith, M Cathie; Hopper, John L
2007-01-01
Models have been developed to predict the probability that a person carries a detectable germline mutation in the BRCA1 or BRCA2 genes. Their relative performance in a clinical setting is unclear. To compare the performance characteristics of four BRCA1/BRCA2 gene mutation prediction models: LAMBDA, based on a checklist and scores developed from data on Ashkenazi Jewish (AJ) women; BRCAPRO, a Bayesian computer program; modified Couch tables based on regression analyses; and Myriad II tables collated by Myriad Genetics Laboratories. Family cancer history data were analyzed from 200 probands from the Mayo Clinic Familial Cancer Program, in a multispecialty tertiary care group practice. All probands had clinical testing for BRCA1 and BRCA2 mutations conducted in a single laboratory. For each model, performance was assessed by the area under the receiver operator characteristic curve (ROC) and by tests of accuracy and dispersion. Cases "missed" by one or more models (model predicted less than 10% probability of mutation when a mutation was actually found) were compared across models. All models gave similar areas under the ROC curve of 0.71 to 0.76. All models except LAMBDA substantially under-predicted the numbers of carriers. All models were too dispersed. In terms of ranking, all prediction models performed reasonably well with similar performance characteristics. Model predictions were widely discrepant for some families. Review of cancer family histories by an experienced clinician continues to be vital to ensure that critical elements are not missed and that the most appropriate risk prediction figures are provided.
Kalman, J; Smith, B D; Riba, I; Blasco, J; Rainbow, P S
2010-06-01
Biodynamic parameters of the ragworm Nereis diversicolor from southern Spain and south England were experimentally derived to assess the inter-population variability of physiological parameters of the bioaccumulation of Ag, Cd and Zn from water and sediment. Although there were some limited variations, these were consistent neither with local metal bioavailability nor with temperature changes. Incorporating the biodynamic parameters into a defined biodynamic model confirmed that sediment is the predominant source of Cd and Zn accumulated by the worms, accounting in each case for 99% of the overall accumulated metals, whereas the contribution of dissolved Ag to the total accumulated by the worm increased from about 27 to about 53% with increasing dissolved Ag concentration. Standardised values of metal-specific parameters were chosen to generate a generalised model to be extended to N. diversicolor populations across a wide geographical range from western Europe to North Africa. According to the assumptions of this model, predicted steady state concentrations of Cd and Zn in N. diversicolor were overestimated, those of Ag underestimated, but still comparable to independent field measurements. We conclude that species-specific physiological metal bioaccumulation parameters are relatively constant over large geographical distances, and a single generalised biodynamic model does have potential to predict accumulated Ag, Cd and Zn concentrations in this polychaete from a single sediment metal concentration.
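Biodynamic models of this kind balance uptake from solution and from ingested sediment against loss, dC/dt = ku·Cw + IR·AE·Cs − (ke + g)·C, giving a steady-state body burden of C_ss = (ku·Cw + IR·AE·Cs)/(ke + g). A sketch with hypothetical parameter values (not the paper's fitted constants):

```python
def steady_state_conc(ku, Cw, IR, AE, Cs, ke, g):
    # Steady state of dC/dt = ku*Cw + IR*AE*Cs - (ke + g)*C, where
    # ku = dissolved uptake rate constant, Cw = dissolved concentration,
    # IR = ingestion rate, AE = assimilation efficiency, Cs = sediment
    # concentration, ke = efflux rate constant, g = growth rate constant.
    return (ku * Cw + IR * AE * Cs) / (ke + g)

def dietary_fraction(ku, Cw, IR, AE, Cs):
    # Share of total net uptake supplied by sediment ingestion.
    diet = IR * AE * Cs
    return diet / (diet + ku * Cw)

# Hypothetical Cd-like example: the dissolved route is a tiny share.
css = steady_state_conc(ku=0.05, Cw=0.01, IR=0.2, AE=0.5, Cs=2.0,
                        ke=0.02, g=0.005)
frac = dietary_fraction(ku=0.05, Cw=0.01, IR=0.2, AE=0.5, Cs=2.0)
```

With dietary dominance of this kind, C_ss scales almost linearly with the sediment concentration Cs, which is why a single sediment measurement can drive the generalised prediction.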
Latash, M; Gottlieb, G
1990-01-01
Problems of single-joint movement variability are analysed in the framework of the equilibrium-point hypothesis (the lambda-model). Control of the movements is described with three parameters related to movement amplitude, speed, and time. Three strategies emerge from this description. Only one of them is likely to lead to a Fitts'-type speed-accuracy trade-off. Experiments were performed to test one of the predictions of the model. Subjects performed identical sets of single-joint fast movements with open or closed eyes and somewhat different instructions. Movements performed with closed eyes were characterized by higher peak speeds and unchanged variability, in seeming violation of Fitts' law and in good correspondence to the model.
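The speed-accuracy trade-off referenced here is Fitts' law, MT = a + b·log2(2A/W), relating movement time to amplitude A and target width W; the lambda-model experiments probe when this relation does or does not hold. A sketch with illustrative regression coefficients (a and b are assumptions, not fitted values from this study):

```python
import math

def fitts_mt(A, W, a=0.05, b=0.12):
    # Movement time from the index of difficulty ID = log2(2A/W);
    # a (s) and b (s/bit) are illustrative regression coefficients.
    return a + b * math.log2(2.0 * A / W)

# Halving the target width adds exactly b seconds: one extra bit of
# difficulty at fixed amplitude.
mt_wide = fitts_mt(A=0.20, W=0.04)
mt_narrow = fitts_mt(A=0.20, W=0.02)
```

The closed-eyes result (higher speed at unchanged variability) is exactly the kind of datum that violates this relation while remaining consistent with the equilibrium-point account.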
1980-11-01
Outline fragment (garbled in extraction); recoverable topics: occlusion techniques (single and multiple measures), primary-task measures, mathematical modeling, and eye-related measures, together with "validation of the analytic/predictive methodology in a system design, development, and test effort."
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. 
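The averaging step in (ML)BMA can be sketched as follows: given each model's likelihood-based information-criterion value and prior probability, posterior model weights are proportional to prior × exp(−ΔIC/2), and the averaged prediction is the weight-sum of the individual predictions. The three-model numbers below are toy values, not the Naturita models:

```python
import math

def bma_weights(ic_values, priors):
    # Posterior model probabilities ~ prior * exp(-0.5 * (IC - min IC));
    # subtracting the minimum IC avoids underflow and cancels in the ratio.
    ic_min = min(ic_values)
    raw = [p * math.exp(-0.5 * (ic - ic_min))
           for ic, p in zip(ic_values, priors)]
    total = sum(raw)
    return [r / total for r in raw]

def bma_prediction(predictions, weights):
    # Model-averaged prediction: weighted sum of individual predictions.
    return sum(w * y for w, y in zip(weights, predictions))

# Toy three-model example with equal priors.
ics = [100.0, 102.0, 106.0]
w = bma_weights(ics, [1 / 3] * 3)
y = bma_prediction([4.0, 5.0, 7.0], w)
```

Assigning smaller priors to structurally correlated models, one of the strategies discussed above, enters this calculation directly through the `priors` argument.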
Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
Real-Time Monitoring and Prediction of the Pilot Vehicle System (PVS) Closed-Loop Stability
NASA Astrophysics Data System (ADS)
Mandal, Tanmay Kumar
Understanding human control behavior is an important step for improving the safety of future aircraft. Considerable resources are invested during the design phase of an aircraft to ensure that the aircraft has desirable handling qualities. However, human pilots exhibit a wide range of control behaviors that are a function of external stimulus, aircraft dynamics, and human psychological properties (such as workload, stress factor, confidence, and sense of urgency factor). This variability is difficult to address comprehensively during the design phase and may lead to undesirable pilot-aircraft interaction, such as pilot-induced oscillations (PIO). This creates the need to keep track of human pilot performance in real-time to monitor the pilot vehicle system (PVS) stability. This work focused on studying human pilot behavior for the longitudinal axis of a remotely controlled research aircraft and using human-in-the-loop (HuIL) simulations to obtain information about the human controlled system (HCS) stability. The work in this dissertation is divided into two main parts: PIO analysis and human control model parameters estimation. To replicate different flight conditions, this study included time delay and elevator rate limiting phenomena, typical of actuator dynamics during the experiments. To study human control behavior, this study employed the McRuer model for single-input single-output manual compensatory tasks. McRuer model is a lead-lag controller with time delay which has been shown to adequately model manual compensatory tasks. This dissertation presents a novel technique to estimate McRuer model parameters in real-time and associated validation using HuIL simulations to correctly predict HCS stability. The McRuer model parameters were estimated in real-time using a Kalman filter approach. The estimated parameters were then used to analyze the stability of the closed-loop HCS and verify them against the experimental data. 
Therefore, the main contribution of this dissertation is the design of an unscented Kalman filter-based algorithm to estimate McRuer model parameters in real time, and a framework to validate this algorithm for single-input single-output manual compensatory tasks to predict instabilities.
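The parameter-tracking idea can be sketched with a linear Kalman filter on a random-walk parameter model: treat a pilot gain as a slowly drifting state observed through noisy input-output data. This is a scalar toy problem, not the dissertation's unscented filter or the full McRuer lead-lag-delay structure:

```python
def kalman_gain_tracker(us, ys, q=1e-4, r=0.05):
    # Track theta in y = theta * u + noise, modeling theta as a random walk
    # with process variance q and measurement variance r.
    theta, P = 0.0, 1.0              # state estimate and its variance
    for u, y in zip(us, ys):
        P += q                       # predict: random-walk drift
        K = P * u / (u * u * P + r)  # Kalman gain for observation y = theta*u
        theta += K * (y - theta * u) # update with the innovation
        P *= (1.0 - K * u)
    return theta

us = [1.0, -0.5, 0.8, 1.2, -1.0, 0.6]
ys = [2.1 * u for u in us]           # noiseless data from a true gain of 2.1
theta_hat = kalman_gain_tracker(us, ys)
```

An unscented filter is needed in the real problem because the time-delay and lead-lag parameters enter the McRuer model nonlinearly, unlike the linear gain tracked here.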
Multi-component testing using HZ-PAN and AgZ-PAN Sorbents for OSPREY Model validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garn, Troy G.; Greenhalgh, Mitchell; Lyon, Kevin L.
2015-04-01
In efforts to further develop the capability of the Off-gas SeParation and RecoverY (OSPREY) model, multi-component tests were completed using both HZ-PAN and AgZ-PAN sorbents. The primary purpose of this effort was to obtain multi-component xenon and krypton capacities for comparison to future OSPREY predicted multi-component capacities using previously acquired Langmuir equilibrium parameters determined from single component isotherms. Experimental capacities were determined for each sorbent using two feed gas compositions of 1000 ppmv xenon and 150 ppmv krypton in either a helium or air balance. Test temperatures were consistently held at 220 K and the gas flow rate was 50 sccm. Capacities were calculated from breakthrough curves using TableCurve® 2D software by Jandel Scientific. The HZ-PAN sorbent was tested in the custom designed cryostat while the AgZ-PAN was tested in a newly installed cooling apparatus. Previous modeling validation efforts indicated the OSPREY model can be used to effectively predict single component xenon and krypton capacities for both engineered form sorbents. Results indicated good agreement with the experimental and predicted capacity values for both krypton and xenon on the sorbents. Overall, the model predicted slightly elevated capacities for both gases which can be partially attributed to the estimation of the parameters and the uncertainty associated with the experimental measurements. Currently, OSPREY is configured such that one species adsorbs and one does not (i.e. krypton in helium). Modification of OSPREY code is currently being performed to incorporate multiple adsorbing species and non-ideal interactions of gas phase species with the sorbent and adsorbed phases. Once these modifications are complete, the sorbent capacities determined in the present work will be used to validate OSPREY multi-component adsorption predictions.
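Single-component Langmuir parameters of the kind referenced above extend to mixtures via the extended (competitive) Langmuir isotherm, q_i = q_max,i·b_i·p_i / (1 + Σ_j b_j·p_j). A sketch with hypothetical Xe/Kr parameters, since the fitted values are not given in the abstract:

```python
def extended_langmuir(qmax, b, p):
    # Competitive Langmuir loading for each species i:
    #   q_i = qmax_i * b_i * p_i / (1 + sum_j b_j * p_j)
    # All species share one denominator, so they compete for the same sites.
    denom = 1.0 + sum(bj * pj for bj, pj in zip(b, p))
    return [qi * bi * pi / denom for qi, bi, pi in zip(qmax, b, p)]

# Hypothetical parameters for (Xe, Kr); partial pressures correspond to
# 1000 ppmv Xe and 150 ppmv Kr at ~1 bar total pressure.
qmax = [2.0, 1.5]        # saturation capacities, mmol/g
b = [50.0, 5.0]          # Langmuir affinities, 1/bar
p = [1.0e-3, 1.5e-4]     # partial pressures, bar
q_xe, q_kr = extended_langmuir(qmax, b, p)
```

The shared denominator is what lets single-component fits predict multi-component capacities; deviations from it are the "non-ideal interactions" the planned OSPREY modifications are meant to capture.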