Sample records for model calculations predict

  1. Stock price prediction using geometric Brownian motion

    NASA Astrophysics Data System (ADS)

    Farida Agustini, W.; Restu Affianti, Ika; Putri, Endah RM

    2018-03-01

    Geometric Brownian motion is a mathematical model for predicting the future price of a stock. Before the prediction itself, the expected stock price formulation and the 95% confidence level are determined. In stock price prediction using the geometric Brownian motion model, the algorithm starts by calculating the returns, followed by estimating the volatility and drift, obtaining the stock price forecast, calculating the forecast MAPE, calculating the expected stock price, and calculating the 95% confidence level. Based on the research, the output analysis shows that the geometric Brownian motion model is a prediction technique with a high rate of accuracy, as evidenced by a forecast MAPE value ≤ 20%.
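
    A minimal sketch of the workflow this abstract describes, not the authors' code: drift and volatility are estimated from historical log returns, the expected price and a 95% band follow from the lognormal property of geometric Brownian motion, and MAPE scores the forecast. The price series and horizon are placeholders.

    ```python
    import numpy as np

    def gbm_forecast(prices, horizon, z=1.96):
        """Expected price path and 95% band under geometric Brownian motion."""
        log_ret = np.diff(np.log(prices))        # daily log returns
        sigma = log_ret.std(ddof=1)              # volatility estimate
        mu = log_ret.mean() + 0.5 * sigma**2     # drift of the underlying GBM
        s0, t = prices[-1], np.arange(1, horizon + 1)
        expected = s0 * np.exp(mu * t)           # E[S_t]
        half_width = z * sigma * np.sqrt(t)
        lower = s0 * np.exp((mu - 0.5 * sigma**2) * t - half_width)
        upper = s0 * np.exp((mu - 0.5 * sigma**2) * t + half_width)
        return expected, lower, upper

    def mape(actual, forecast):
        actual, forecast = np.asarray(actual), np.asarray(forecast)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))
    ```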

  2. 75 FR 75961 - Notice of Implementation of the Wind Erosion Prediction System for Soil Erodibility System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ... Wind Erosion Prediction System for Soil Erodibility System Calculations for the Natural Resources... Erosion Prediction System (WEPS) for soil erodibility system calculations scheduled for implementation for... computer model is a process-based, daily time-step computer model that predicts soil erosion via simulation...

  3. Numerical Calculation Method for Prediction of Ground-borne Vibration near Subway Tunnel

    NASA Astrophysics Data System (ADS)

    Tsuno, Kiwamu; Furuta, Masaru; Abe, Kazuhisa

    This paper describes the development of a method for predicting ground-borne vibration from railway tunnels. Field measurements were carried out in a subway shield tunnel, in the ground, and on the ground surface. The vibration generated in the tunnel was calculated by means of a train/track/tunnel interaction model and compared with the measurement results. Wave propagation in the ground was calculated using an empirical model, proposed on the basis of the relationship between frequency and the material damping coefficient α, in order to predict attenuation in the ground while accounting for frequency characteristics. Numerical calculation using 2-dimensional FE analysis was also carried out in this research. The comparison between calculated and measured results shows that the prediction method, combining the train/track/tunnel interaction model with the wave propagation model, is applicable to the prediction of train-induced vibration propagated from railway tunnels.

  4. Predicting Microstructure and Microsegregation in Multicomponent Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Yan, Xinyan; Ding, Ling; Chen, ShuangLin; Xie, Fanyou; Chu, M.; Chang, Y. Austin

    Accurate predictions of microstructure and microsegregation in metallic alloys are highly important for applications such as alloy design and process optimization. Restricted assumptions concerning the phase diagram could easily lead to erroneous predictions. The best approach is to couple microsegregation modeling with phase diagram computations. A newly developed numerical model for the prediction of microstructure and microsegregation in multicomponent alloys during dendritic solidification was introduced. The micromodel is directly coupled with phase diagram calculations using a user-friendly and robust phase diagram calculation engine, PANDAT. Solid state back diffusion, undercooling and coarsening effects are included in this model, and the experimentally measured cooling curves are used as the inputs to carry out the calculations. This model has been used to predict the microstructure and microsegregation in two multicomponent aluminum alloys, 2219 and 7050. The calculated values were confirmed using results obtained from directional solidification.

  5. Road traffic noise prediction model for heterogeneous traffic based on ASJ-RTN Model 2008 with consideration of horn

    NASA Astrophysics Data System (ADS)

    Hustim, M.; Arifin, Z.; Aly, S. H.; Ramli, M. I.; Zakaria, R.; Liputo, A.

    2018-04-01

    This research aimed to predict the noise produced by traffic on the road network in Makassar City using the ASJ-RTN Model 2008, taking horn sounds into account. Observations were taken at 37 roadside survey points, conducted from 06.00 to 18.00 and from 06.00 to 21.00; the vehicle classes observed were motorcycles (MC), light vehicles (LV), and heavy vehicles (HV). The observed data were traffic volume, vehicle speed, number of horn soundings, and traffic noise measured with a Tenmars TM-103 sound level meter. The results indicate that the noise prediction model accounting for horn sounds produces an average noise level of 78.5 dB, with a Pearson correlation of 0.95 and an RMSE of 0.87. Therefore, the ASJ-RTN Model 2008 prediction model with horn sounds included can be considered sufficiently good for predicting noise levels.

  6. Obtaining Predictions from Models Fit to Multiply Imputed Data

    ERIC Educational Resources Information Center

    Miles, Andrew

    2016-01-01

    Obtaining predictions from regression models fit to multiply imputed data can be challenging because treatments of multiple imputation seldom give clear guidance on how predictions can be calculated, and because available software often does not have built-in routines for performing the necessary calculations. This research note reviews how…

  7. Predicting pKa values from EEM atomic charges

    PubMed Central

    2013-01-01

    The acid dissociation constant pKa is a very important molecular property, and there is strong interest in the development of reliable and fast methods for pKa prediction. We have evaluated the pKa prediction capabilities of QSPR models based on empirical atomic charges calculated by the Electronegativity Equalization Method (EEM). Specifically, we collected 18 EEM parameter sets created for 8 different quantum mechanical (QM) charge calculation schemes. Afterwards, we prepared a training set of 74 substituted phenols. Additionally, for each molecule we generated its dissociated form by removing the phenolic hydrogen. For all the molecules in the training set, we then calculated EEM charges using the 18 parameter sets, and the QM charges using the 8 above-mentioned charge calculation schemes. For each type of QM and EEM charges, we created one QSPR model employing charges from the non-dissociated molecules (three-descriptor QSPR models), and one QSPR model based on charges from both dissociated and non-dissociated molecules (QSPR models with five descriptors). Afterwards, we calculated the quality criteria and evaluated all the QSPR models obtained. We found that QSPR models employing the EEM charges proved to be a good approach for the prediction of pKa (63% of these models had R2 > 0.9, while the best had R2 = 0.924). As expected, QM QSPR models provided more accurate pKa predictions than the EEM QSPR models, but the differences were not significant. Furthermore, a big advantage of the EEM QSPR models is that their descriptors (i.e., EEM atomic charges) can be calculated markedly faster than the QM charge descriptors. Moreover, we found that the EEM QSPR models are not as strongly influenced by the selection of the charge calculation approach as the QM QSPR models. The robustness of the EEM QSPR models was subsequently confirmed by cross-validation. The applicability of EEM QSPR models to other chemical classes was illustrated by a case study focused on carboxylic acids. In summary, EEM QSPR models constitute a fast and accurate pKa prediction approach that can be used in virtual screening. PMID:23574978
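
    A minimal sketch of a QSPR model of the kind evaluated here: ordinary least-squares regression of experimental pKa on a few EEM atomic charge descriptors. The charge values, the choice of atoms, and the pKa data below are illustrative placeholders, not the authors' parameter sets.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    # One row per phenol; columns are EEM charges on atoms of the dissociating
    # group (e.g., phenolic O, H, and the attached C) -- placeholder values.
    X = np.array([[-0.42, 0.31, 0.12],
                  [-0.45, 0.29, 0.10],
                  [-0.39, 0.33, 0.15],
                  [-0.41, 0.30, 0.11]])
    y = np.array([9.98, 10.20, 7.15, 9.38])  # experimental pKa values (illustrative)

    qspr = LinearRegression().fit(X, y)
    print("training R2:", r2_score(y, qspr.predict(X)))
    print("predicted pKa:", qspr.predict(np.array([[-0.43, 0.30, 0.12]]))[0])
    ```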

  8. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Few studies have paid attention to prediction for analog circuits, and the existing methods lack correlation with circuit analysis when extracting and calculating features, so the FI (fault indicator) calculation often lacks rationality, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Motivated by the fact that faults of single components are the most numerous faults in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model in the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of an analog circuit's single components. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of the analog circuit's single components. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments.

  9. Application of an unsteady-state model for predicting vertical temperature distribution to an existing atrium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takemasa, Yuichi; Togari, Satoshi; Arai, Yoshinobu

    1996-11-01

    Vertical temperature differences tend to be great in a large indoor space such as an atrium, and it is important to predict variations of vertical temperature distribution in the early stage of the design. The authors previously developed and reported on a new simplified unsteady-state calculation model for predicting vertical temperature distribution in a large space. In this paper, this model is applied to predicting the vertical temperature distribution in an existing low-rise atrium that has a skylight and is affected by transmitted solar radiation. Detailed calculation procedures that use the model are presented with all the boundary conditions, and analytical simulations are carried out for the cooling condition. Calculated values are compared with measured results. The results of the comparison demonstrate that the calculation model can be applied to the design of a large space. The effects of occupied-zone cooling are also discussed and compared with those of all-zone cooling.

  10. Predicting Salt Permeability Coefficients in Highly Swollen, Highly Charged Ion Exchange Membranes.

    PubMed

    Kamcev, Jovan; Paul, Donald R; Manning, Gerald S; Freeman, Benny D

    2017-02-01

    This study presents a framework for predicting salt permeability coefficients in ion exchange membranes in contact with an aqueous salt solution. The model, based on the solution-diffusion mechanism, was tested using experimental salt permeability data for a series of commercial ion exchange membranes. Equilibrium salt partition coefficients were calculated using a thermodynamic framework (i.e., Donnan theory), incorporating Manning's counterion condensation theory to calculate ion activity coefficients in the membrane phase and the Pitzer model to calculate ion activity coefficients in the solution phase. The model predicted NaCl partition coefficients in a cation exchange membrane and two anion exchange membranes, as well as MgCl2 partition coefficients in a cation exchange membrane, remarkably well at higher external salt concentrations (>0.1 M) and reasonably well at lower external salt concentrations (<0.1 M) with no adjustable parameters. Membrane ion diffusion coefficients were calculated using a combination of the Mackie and Meares model, which assumes ion diffusion in water-swollen polymers is affected by a tortuosity factor, and a model developed by Manning to account for electrostatic effects. Agreement between experimental and predicted salt diffusion coefficients was good with no adjustable parameters. Calculated salt partition and diffusion coefficients were combined within the framework of the solution-diffusion model to predict salt permeability coefficients. Agreement between model and experimental data was remarkably good. Additionally, a simplified version of the model was used to elucidate connections between membrane structure (e.g., fixed charge group concentration) and salt transport properties.
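
    The salt-partitioning step can be illustrated in its simplest, ideal-Donnan form, i.e., neglecting the Manning and Pitzer activity-coefficient corrections that the study actually applies; the fixed-charge and salt concentrations below are illustrative.

    ```python
    import numpy as np

    def ideal_donnan_partition(c_salt, c_fixed):
        """Co-ion (salt) partition coefficient for a 1:1 salt in a cation exchange
        membrane under ideal Donnan theory: c_counter * c_co = c_salt**2 together
        with electroneutrality c_counter = c_co + c_fixed."""
        c_co = (-c_fixed + np.sqrt(c_fixed**2 + 4.0 * c_salt**2)) / 2.0
        return c_co / c_salt

    # 0.1 M NaCl against a membrane with 3 mol/L (water basis) fixed charge
    print(ideal_donnan_partition(0.1, 3.0))
    ```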

  11. A two-component rain model for the prediction of attenuation statistics

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1982-01-01

    A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.

  12. A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling

    PubMed Central

    Tian, Shulin; Yang, Chenglin

    2014-01-01

    Few studies have paid attention to prediction for analog circuits, and the existing methods lack correlation with circuit analysis when extracting and calculating features, so the FI (fault indicator) calculation often lacks rationality, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Motivated by the fact that faults of single components are the most numerous faults in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model in the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of an analog circuit's single components. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of the analog circuit's single components. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853

  13. Multiplexed Predictive Control of a Large Commercial Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Richter, Hanz; Singaraju, Anil; Litt, Jonathan S.

    2008-01-01

    Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.

  14. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes

    PubMed Central

    Zhang, Hong; Pei, Yun

    2016-01-01

    Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
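
    The quantity being predicted, the equivalent continuous level over a period T, is the energy average of the time-varying level; a small sketch assuming the per-activity levels and durations come out of the discrete-event simulation (the values below are illustrative).

    ```python
    import numpy as np

    def equivalent_continuous_level(levels_db, durations_s):
        """L_eq,T = 10*log10((1/T) * sum_i t_i * 10**(L_i/10))."""
        levels_db = np.asarray(levels_db, dtype=float)
        durations_s = np.asarray(durations_s, dtype=float)
        energy = np.sum(durations_s * 10.0 ** (levels_db / 10.0))
        return 10.0 * np.log10(energy / durations_s.sum())

    # e.g., 2 h of excavator noise at 85 dB(A) plus 6 h of background at 60 dB(A)
    print(equivalent_continuous_level([85.0, 60.0], [2 * 3600, 6 * 3600]))
    ```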

  15. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes.

    PubMed

    Zhang, Hong; Pei, Yun

    2016-08-12

    Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions.

  16. Calculation of turbulence-driven secondary motion in ducts with arbitrary cross section

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    Calculation methods for turbulent duct flows are generalized for ducts with arbitrary cross-sections. The irregular physical geometry is transformed into a regular one in computational space, and the flow equations are solved with a finite-volume numerical procedure. The turbulent stresses are calculated with an algebraic stress model derived by simplifying model transport equations for the individual Reynolds stresses. Two variants of such a model are considered. These procedures enable the prediction of both the turbulence-driven secondary flow and the anisotropy of the Reynolds stresses, in contrast to some of the earlier calculation methods. Model predictions are compared to experimental data for developed flow in a triangular duct, a trapezoidal duct, and a rod-bundle geometry. The correct trends are predicted, and the quantitative agreement is mostly fair. The simpler variant of the algebraic stress model produced better agreement with the measured data.

  17. Navier-Stokes turbine heat transfer predictions using two-equation turbulence closures

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Arnone, Andrea

    1992-01-01

    Navier-Stokes calculations were carried out in order to predict the heat-transfer rates on turbine blades. The calculations were performed using TRAF2D which is a k-epsilon, explicit, finite volume mass-averaged Navier-Stokes solver. Turbulence was modeled using Coakley's q-omega and Chien's k-epsilon two-equation models and the Baldwin-Lomax algebraic model. The model equations along with the flow equations were solved explicitly on a nonperiodic C grid. Implicit residual smoothing (IRS) or a combination of multigrid technique and IRS was applied to enhance convergence rates. Calculations were performed to predict the Stanton number distributions on the first stage vane and blade row as well as the second stage vane row of the SSME high-pressure fuel turbine. The comparison serves to highlight the weaknesses of the turbulence models for use in turbomachinery heat-transfer calculations.

  18. Prediction of Microstructure in HAZ of Welds

    NASA Astrophysics Data System (ADS)

    Khurana, S. P.; Yancey, R.; Jung, G.

    2004-06-01

    A modeling technique for predicting microstructure in the heat-affected zone (HAZ) of the hypoeutectoid steels is presented. This technique aims at predicting the phase fractions of ferrite, pearlite, bainite and martensite present in the HAZ after the cool down of a weld. The austenite formation kinetics and austenite decomposition kinetics are calculated using the transient temperature profile. The thermal profile in the weld and the HAZ is calculated by finite-element analysis (FEA). Two kinds of austenite decomposition models are included. The final phase fractions are predicted with the help of a continuous cooling transformation (CCT) diagram of the material. In the calculation of phase fractions either the experimental CCT diagram or the mathematically calculated CCT diagram can be used.

  19. Predicted phototoxicities of carbon nano-material by quantum mechanical calculations

    EPA Science Inventory

    The purpose of this research is to develop a predictive model for the phototoxicity potential of carbon nanomaterials (fullerenols and single-walled carbon nanotubes). This model is based on the quantum mechanical (ab initio) calculations on these carbon-based materials and compa...

  20. Prediction of soft soil foundation settlement in Guangxi granite area based on fuzzy neural network model

    NASA Astrophysics Data System (ADS)

    Luo, Junhui; Wu, Chao; Liu, Xianlin; Mi, Decai; Zeng, Fuquan; Zeng, Yongjun

    2018-01-01

    At present, the prediction of soft foundation settlement mostly uses the exponential-curve and hyperbola deferred approximation methods, and the correlation between their results is poor. The application of neural networks in this area has had limitations, and none of the models used in existing cases adopted the TS fuzzy neural network, whose calculation combines the characteristics of fuzzy systems and neural networks so that the two approaches complement each other. At the same time, a developed and optimized calculation program is convenient for engineering designers. Taking the prediction and analysis of soft foundation settlement of gully soft soil in the granite area of the Guangxi Guihe road as an example, a fuzzy neural network model is established and verified to explore its applicability. The TS fuzzy neural network is used to construct a prediction model of settlement and deformation, and the corresponding time response function is established to calculate and analyze the settlement of the soft foundation. The results show that the model's short-term settlement predictions are accurate and that the final settlement prediction has reference value for engineering.

  1. Development of High Level Electrical Stress Failure Threshold and Prediction Model for Small Scale Junction Integrated Circuits

    DTIC Science & Technology

    1978-09-01

    AWACS EMP Guidelines presents two different models to predict the damage power of the device and the circuit damage EMP voltage (VEMP). Neither of... The damage EMP voltage (VEMP) is calculated... The damage EMP voltage is calculated for collector...

  2. A general method for assessing the effects of uncertainty in individual-tree volume model predictions on large-area volume estimates with a subtropical forest illustration

    Treesearch

    Ronald E. McRoberts; Paolo Moser; Laio Zimermann Oliveira; Alexander C. Vibrans

    2015-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding the model predictions of volumes for individual trees at the plot level, calculating the mean over plots, and expressing the result on a per unit area basis. The uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area...
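
    The aggregation this note refers to, summing tree-level model predictions to plot totals, averaging over plots, and expressing the mean per unit area, is sketched below with placeholder plot data; the sampling standard error shown ignores model prediction uncertainty, which is exactly the issue the paper addresses.

    ```python
    import numpy as np

    # Predicted stem volumes (m^3) for the trees on each plot -- placeholder data.
    plot_tree_volumes = [np.array([0.21, 0.35, 0.08]),
                         np.array([0.44, 0.12]),
                         np.array([0.30, 0.27, 0.19, 0.05])]
    plot_area_ha = 0.04  # fixed plot area, e.g. 400 m^2

    per_ha = np.array([v.sum() for v in plot_tree_volumes]) / plot_area_ha
    mean_volume = per_ha.mean()                              # m^3/ha
    se_sampling = per_ha.std(ddof=1) / np.sqrt(len(per_ha))  # sampling error only;
                                                             # model error is ignored here
    print(mean_volume, se_sampling)
    ```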

  3. Application of linear regression analysis in accuracy assessment of rolling force calculations

    NASA Astrophysics Data System (ADS)

    Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.

    1998-10-01

    Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application; however, the outlined approach can be used to assess the performance of any computational model.
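
    A minimal sketch of the idea, regressing measured values on calculated ones so that the slope and intercept expose systematic error while the residual scatter reflects random error; the force values are placeholders, not mill data from the paper.

    ```python
    import numpy as np
    from scipy import stats

    calculated = np.array([10.2, 11.5, 12.8, 14.1, 15.3, 16.9])  # model forces, MN
    measured = np.array([10.6, 11.7, 13.4, 14.2, 15.9, 17.1])    # measured forces, MN

    fit = stats.linregress(calculated, measured)
    residuals = measured - (fit.intercept + fit.slope * calculated)
    residual_std = np.sqrt(np.sum(residuals**2) / (len(measured) - 2))

    # slope ~ 1 and intercept ~ 0 indicate no systematic error;
    # residual_std characterizes the random component.
    print(fit.slope, fit.intercept, fit.rvalue**2, residual_std)
    ```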

  4. Deformed shell model calculations of half lives for β+/EC decay and 2ν β+β+/β+EC/ECEC decay in medium-heavy N~Z nuclei

    NASA Astrophysics Data System (ADS)

    Mishra, S.; Shukla, A.; Sahu, R.; Kota, V. K. B.

    2008-08-01

    The β+/EC half-lives of medium-heavy N~Z nuclei with mass number A~64-80 are calculated within the deformed shell model (DSM) based on Hartree-Fock states, employing a modified Kuo interaction in the (2p3/2, 1f5/2, 2p1/2, 1g9/2) space. The DSM has been quite successful in predicting many spectroscopic properties of N~Z medium-heavy nuclei with A~64-80. The calculated β+/EC half-lives, for prolate and oblate shapes, compare well with the predictions of calculations with the Skyrme force by Sarriguren. Going further, following recent searches, half-lives for 2ν β+β+/β+EC/ECEC decay of the nucleus 78Kr are calculated using the DSM, and the results compare well with QRPA predictions.

  5. Comparison of calculated and measured model rotor loading and wake geometry

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1980-01-01

    The calculated blade bound circulation and wake geometry are compared with measured results for a model helicopter rotor in hover and forward flight. Hover results are presented for rectangular tip and ogee tip planform blades. The correlation is quite good when the measured wake geometry characteristics are used in the analysis. Available prescribed wake geometry models are found to give fair predictions of the loading, but they do not produce a reasonable prediction of the induced power. Forward flight results are presented for twisted and untwisted blades. Fair correlation between measurements and calculations is found for the bound circulation distribution on the advancing side. The tip vortex geometry in the vicinity of the advancing blade in forward flight was predicted well by the free wake calculation used, although the wake geometry did not have a significant influence on the calculated loading and performance for the cases considered.

  6. An improved computer model for prediction of axial gas turbine performance losses

    NASA Technical Reports Server (NTRS)

    Jenkins, R. M.

    1984-01-01

    The calculation model performs a rapid preliminary pitchline optimization of axial gas turbine annular flowpath geometry, as well as an initial estimate of blade profile shapes, given only a minimum of thermodynamic cycle requirements. No geometric parameters need be specified. The following preliminary design data are determined: (1) the optimum flowpath geometry, within mechanical stress limits; (2) initial estimates of cascade blade shapes; and (3) predictions of expected turbine performance. The model uses an inverse calculation technique whereby blade profiles are generated by designing channels to yield a specified velocity distribution on the two walls. Velocity distributions are then used to calculate the cascade loss parameters. Calculated blade shapes are used primarily to determine whether the assumed velocity loadings are physically realistic. Model verification is accomplished by comparison of predicted turbine geometry and performance with an array of seven NASA single-stage axial gas turbine configurations.

  7. BEHAVE: fire behavior prediction and fuel modeling system-BURN Subsystem, part 1

    Treesearch

    Patricia L. Andrews

    1986-01-01

    Describes BURN Subsystem, Part 1, the operational fire behavior prediction subsystem of the BEHAVE fire behavior prediction and fuel modeling system. The manual covers operation of the computer program, assumptions of the mathematical models used in the calculations, and application of the predictions.

  8. Updates to In-Line Calculation of Photolysis Rates

    EPA Science Inventory

    How photolysis rates are calculated affects ozone and aerosol concentrations predicted by the CMAQ model and the model's run-time. The standard configuration of CMAQ uses the inline option that calculates photolysis rates by solving the radiative transfer equation for the needed ...

  9. Uniting Cheminformatics and Chemical Theory To Predict the Intrinsic Aqueous Solubility of Crystalline Druglike Molecules

    PubMed Central

    2014-01-01

    We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264
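
    The cross-validated RMSE quoted above corresponds to a standard validation loop; a sketch with scikit-learn, in which random arrays stand in for the DLS-100 descriptors and log S values and a generic regressor stands in for the models compared in the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))  # placeholder for CDK / theoretical descriptors
    y = rng.normal(size=100)        # placeholder for experimental log S values

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    cv = KFold(n_splits=10, shuffle=True, random_state=0)
    rmse = -cross_val_score(model, X, y, cv=cv,
                            scoring="neg_root_mean_squared_error").mean()
    print("10-fold CV RMSE (log S units):", rmse)
    ```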

  10. Modelling a model?!! Prediction of observed and calculated daily pan evaporation in New Mexico, U.S.A.

    NASA Astrophysics Data System (ADS)

    Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.

    2012-04-01

    Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purposes of determining whether any substantial differences exist between either option. This analysis will address recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. These differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines of evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.

  11. OPR-PPR, a Computer Program for Assessing Data Importance to Model Predictions Using Linear Statistics

    USGS Publications Warehouse

    Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.

    2007-01-01

    The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.

  12. Delirium prediction in the intensive care unit: comparison of two delirium prediction models.

    PubMed

    Wassenaar, Annelies; Schoonhoven, Lisette; Devlin, John W; van Haren, Frank M P; Slooter, Arjen J C; Jorens, Philippe G; van der Jagt, Mathieu; Simons, Koen S; Egerod, Ingrid; Burry, Lisa D; Beishuizen, Albertus; Matos, Joaquim; Donders, A Rogier T; Pickkers, Peter; van den Boogaard, Mark

    2018-05-05

    Accurate prediction of delirium in the intensive care unit (ICU) may facilitate efficient use of early preventive strategies and stratification of ICU patients by delirium risk in clinical research, but the optimal delirium prediction model to use is unclear. We compared the predictive performance and user convenience of the prediction model for delirium (PRE-DELIRIC) and early prediction model for delirium (E-PRE-DELIRIC) in ICU patients and determined the value of a two-stage calculation. This 7-country, 11-hospital, prospective cohort study evaluated consecutive adults admitted to the ICU who could be reliably assessed for delirium using the Confusion Assessment Method-ICU or the Intensive Care Delirium Screening Checklist. The predictive performance of the models was measured using the area under the receiver operating characteristic curve. Calibration was assessed graphically. A physician questionnaire evaluated user convenience. For the two-stage calculation we used E-PRE-DELIRIC immediately after ICU admission and updated the prediction using PRE-DELIRIC after 24 h. In total 2178 patients were included. The area under the receiver operating characteristic curve was significantly greater for PRE-DELIRIC (0.74 (95% confidence interval 0.71-0.76)) compared to E-PRE-DELIRIC (0.68 (95% confidence interval 0.66-0.71)) (z score of -2.73, p < 0.01). Both models were well-calibrated. The sensitivity improved when using the two-stage calculation in low-risk patients. Compared to PRE-DELIRIC, ICU physicians (n = 68) rated the E-PRE-DELIRIC model more feasible. While both ICU delirium prediction models have moderate-to-good performance, the PRE-DELIRIC model predicts delirium better. However, ICU physicians rated the user convenience of E-PRE-DELIRIC superior to PRE-DELIRIC. In low-risk patients the delirium prediction further improves after an update with the PRE-DELIRIC model after 24 h. ClinicalTrials.gov, NCT02518646. Registered on 21 July 2015.

  13. Pitfalls in Prediction Modeling for Normal Tissue Toxicity in Radiation Therapy: An Illustration With the Individual Radiation Sensitivity and Mammary Carcinoma Risk Factor Investigation Cohorts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbah, Chamberlain, E-mail: chamberlain.mbah@ugent.be; Department of Mathematical Modeling, Statistics, and Bioinformatics, Faculty of Bioscience Engineering, Ghent University, Ghent; Thierens, Hubert

    Purpose: To identify the main causes underlying the failure of prediction models for radiation therapy toxicity to replicate. Methods and Materials: Data were used from two German cohorts, Individual Radiation Sensitivity (ISE) (n=418) and Mammary Carcinoma Risk Factor Investigation (MARIE) (n=409), of breast cancer patients with similar characteristics and radiation therapy treatments. The toxicity endpoint chosen was telangiectasia. The LASSO (least absolute shrinkage and selection operator) logistic regression method was used to build a predictive model for a dichotomized endpoint (Radiation Therapy Oncology Group/European Organization for the Research and Treatment of Cancer score 0, 1, or ≥2). Internal areas under the receiver operating characteristic curve (inAUCs) were calculated by a naïve approach whereby the training data (ISE) were also used for calculating the AUC. Cross-validation was also applied to calculate the AUC within the same cohort, a second type of inAUC. Internal AUCs from cross-validation were calculated within ISE and MARIE separately. Models trained on one dataset (ISE) were applied to a test dataset (MARIE) and AUCs calculated (exAUCs). Results: Internal AUCs from the naïve approach were generally larger than inAUCs from cross-validation owing to overfitting the training data. Internal AUCs from cross-validation were also generally larger than the exAUCs, reflecting heterogeneity in the predictors between cohorts. The best models with largest inAUCs from cross-validation within both cohorts had a number of common predictors: hypertension, normalized total boost, and presence of estrogen receptors. Surprisingly, the effect (coefficient in the prediction model) of hypertension on telangiectasia incidence was positive in ISE and negative in MARIE. Other predictors were also not common between the 2 cohorts, illustrating that overcoming overfitting does not solve the problem of replication failure of prediction models completely. Conclusions: Overfitting and cohort heterogeneity are the 2 main causes of replication failure of prediction models across cohorts. Cross-validation and similar techniques (eg, bootstrapping) cope with overfitting, but the development of validated predictive models for radiation therapy toxicity requires strategies that deal with cohort heterogeneity.

  14. Development and Current Status of the “Cambridge” Loudness Models

    PubMed Central

    2014-01-01

    This article reviews the evolution of a series of models of loudness developed in Cambridge, UK. The first model, applicable to stationary sounds, was based on modifications of the model developed by Zwicker, including the introduction of a filter to allow for the effects of transfer of sound through the outer and middle ear prior to the calculation of an excitation pattern, and changes in the way that the excitation pattern was calculated. Later, modifications were introduced to the assumed middle-ear transfer function and to the way that specific loudness was calculated from excitation level. These modifications led to a finite calculated loudness at absolute threshold, which made it possible to predict accurately the absolute thresholds of broadband and narrowband sounds, based on the assumption that the absolute threshold corresponds to a fixed small loudness. The model was also modified to give predictions of partial loudness—the loudness of one sound in the presence of another. This allowed predictions of masked thresholds based on the assumption that the masked threshold corresponds to a fixed small partial loudness. Versions of the model for time-varying sounds were developed, which allowed prediction of the masked threshold of any sound in a background of any other sound. More recent extensions incorporate binaural processing to account for the summation of loudness across ears. In parallel, versions of the model for predicting loudness for hearing-impaired ears have been developed and have been applied to the development of methods for fitting multichannel compression hearing aids. PMID:25315375

  15. Correlation of predicted and measured thermal stresses on an advanced aircraft structure with similar materials

    NASA Technical Reports Server (NTRS)

    Jenkins, J. M.

    1979-01-01

    A laboratory heating test simulating hypersonic heating was conducted on a heat-sink type structure to provide basic thermal stress measurements. Six NASTRAN models utilizing various combinations of bar, shear panel, membrane, and plate elements were used to develop calculated thermal stresses. Thermal stresses were also calculated using a beam model. For a given temperature distribution there was very little variation in NASTRAN-calculated thermal stresses when element types were interchanged for a given grid system. Thermal stresses calculated for the beam model compared similarly to the values obtained for the NASTRAN models. Calculated thermal stresses compared generally well to laboratory-measured thermal stresses. A discrepancy of significance occurred between the measured and predicted thermal stresses in the skin areas. A minor anomaly in the laboratory skin heating uniformity resulted in inadequate temperature input data for the structural models.

  16. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
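
    The Monte Carlo step, drawing parameter sets within their extreme ranges, propagating them through the calibrated model, and reading off quantiles, can be sketched as follows; the toy head function, parameter ranges, and error level are illustrative, not the report's example.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def predicted_head(k, recharge):
        """Placeholder for a prediction from the calibrated ground-water model."""
        return 100.0 + recharge / k

    n = 10_000
    k = rng.uniform(1e-5, 1e-3, n)          # hydraulic conductivity range
    recharge = rng.uniform(1e-4, 5e-4, n)   # recharge range
    sigma_obs = 0.3                         # random error in the dependent variable

    preds = predicted_head(k, recharge)
    confidence_interval = np.percentile(preds, [2.5, 97.5])  # parameter uncertainty only
    prediction_interval = np.percentile(preds + rng.normal(0.0, sigma_obs, n),
                                        [2.5, 97.5])          # adds random error
    print(confidence_interval, prediction_interval)
    ```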

  17. A comparison of radiative transfer models for predicting the microwave emission from soils

    NASA Technical Reports Server (NTRS)

    Schmugge, T. J.; Choudhury, B. J.

    1981-01-01

    Noncoherent and coherent numerical models for predicting emission from soils are compared. Coherent models use the boundary conditions on the electric fields across the layer boundaries to calculate the radiation intensity, and noncoherent models consider radiation intensities directly. Interference may cause different results in the two approaches when coupling between soil layers in coherent models causes greater soil moisture sampling depths. Calculations performed at frequencies of 1.4 and 19.4 GHz show little difference between the models at 19.4 GHz, although differences are apparent at the lower frequency. A definition for an effective emissivity is also given for when a nonuniform temperature profile is present, and measurements made from a tower show good agreement with calculations from the coherent model.

  18. Quiet High Speed Fan (QHSF) Flutter Calculations Using the TURBO Code

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Min, James B.; Mehmed, Oral

    2006-01-01

    A scale model of the NASA/Honeywell Engines Quiet High Speed Fan (QHSF) encountered flutter during wind tunnel testing. This report documents aeroelastic calculations performed for the QHSF scale model using the blade vibration capability of the TURBO code. Calculations at design speed were used to quantify the effect of numerical parameters on the aerodynamic damping predictions. This numerical study allowed the selection of appropriate values of these parameters, and also allowed an assessment of the variability in the calculated aerodynamic damping. Calculations were also done at 90 percent of design speed. The predicted trends in aerodynamic damping corresponded to those observed during testing.

  19. Prediction of rain effects on earth-space communication links operating in the 10 to 35 GHz frequency range

    NASA Technical Reports Server (NTRS)

    Stutzman, Warren L.

    1989-01-01

    This paper reviews the effects of precipitation on earth-space communication links operating in the 10 to 35 GHz frequency range. Emphasis is on the quantitative prediction of rain attenuation and depolarization. Discussions center on the models developed at Virginia Tech; comments on other models are included, as well as literature references to key works. Also included is the system-level modeling of dual-polarized communication systems, with techniques for calculating antenna and propagation medium effects. Simple models for the calculation of average annual attenuation and cross-polarization discrimination (XPD) are presented. Calculations of worst-month statistics are also presented.

  20. Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2014-01-01

    This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The proposed IPMs prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction; this evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
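
    For a model that is linear in its parameters, the interval-valued output over a hyper-rectangular parameter set has a closed form (minimize and maximize each term over the box); the sketch below shows only that propagation step, with an assumed parameter box, not the optimization or outlier-removal machinery of the paper.

    ```python
    import numpy as np

    def interval_prediction(p_lo, p_hi, features):
        """Output interval of y = p . features when each p_i lies in [p_lo_i, p_hi_i]."""
        lo = np.minimum(p_lo * features, p_hi * features).sum()
        hi = np.maximum(p_lo * features, p_hi * features).sum()
        return lo, hi

    # quadratic predictor y = p0 + p1*x + p2*x**2 with a parameter box fitted elsewhere
    p_lo = np.array([0.8, 1.9, -0.15])
    p_hi = np.array([1.2, 2.1, -0.05])
    x = 3.0
    print(interval_prediction(p_lo, p_hi, np.array([1.0, x, x**2])))
    ```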

  1. Development of bovine serum albumin-water partition coefficients predictive models for ionogenic organic chemicals based on chemical form adjusted descriptors.

    PubMed

    Ding, Feng; Yang, Xianhai; Chen, Guosong; Liu, Jining; Shi, Lili; Chen, Jingwen

    2017-10-01

    The partition coefficients between bovine serum albumin (BSA) and water (KBSA/w) for ionogenic organic chemicals (IOCs) differ greatly from those of neutral organic chemicals (NOCs). For NOCs, several excellent models have been developed to predict logKBSA/w. However, conventional descriptors were found to be inappropriate for modeling the logKBSA/w of IOCs, so alternative approaches are urgently needed to develop predictive models for the KBSA/w of IOCs. In this study, molecular descriptors that can be used to characterize ionization effects (e.g., chemical-form-adjusted descriptors) were calculated and used to develop predictive models for the logKBSA/w of IOCs. The models developed had high goodness-of-fit, robustness, and predictive ability. The predictor variables selected to construct the models included the chemical-form-adjusted average of the negative potentials on the molecular surface (Vs-adj−), the chemical-form-adjusted molecular dipole moment (dipolemomentadj), and the logarithm of the n-octanol/water distribution coefficient (logD). As these molecular descriptors can be calculated directly from molecular structures, the developed model can easily be used to fill the logKBSA/w data gap for other IOCs within the applicability domain. Furthermore, the chemical-form-adjusted descriptors calculated in this study could also be used to construct predictive models for other endpoints of IOCs.

  2. A New Scoring System to Predict the Risk for High-risk Adenoma and Comparison of Existing Risk Calculators.

    PubMed

    Murchie, Brent; Tandon, Kanwarpreet; Hakim, Seifeldin; Shah, Kinchit; O'Rourke, Colin; Castro, Fernando J

    2017-04-01

    Colorectal cancer (CRC) screening guidelines likely over-generalize CRC risk, 35% of Americans are not up to date with screening, and there is a growing incidence of CRC in younger patients. We developed a practical prediction model for high-risk colon adenomas in an average-risk population, using an expanded definition of high-risk polyps (≥3 nonadvanced adenomas) to identify patients at higher than average risk. We also compared our results with previously created calculators. Patients aged 40 to 59 years undergoing first-time average-risk screening or diagnostic colonoscopies were evaluated. Risk calculators for advanced adenomas and high-risk adenomas were created based on age, body mass index, sex, race, and smoking history. Previously established calculators with similar risk factors were selected for comparison of the concordance statistic (c-statistic) and external validation. A total of 5063 patients were included. Advanced adenomas and high-risk adenomas were seen in 5.7% and 7.4% of the patient population, respectively. The c-statistic for our calculator was 0.639 for the prediction of advanced adenomas and 0.650 for high-risk adenomas. When applied to our population, all previous models had lower c-statistic results, although one performed similarly. Our model compares favorably to previously established prediction models. Age and body mass index were used as continuous variables, likely improving the c-statistic. The model also reports absolute predictive probabilities of advanced and high-risk polyps, allowing for more individualized assessment of CRC risk.
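
    The reported c-statistics are areas under the ROC curve of the fitted risk models; a sketch of how such a number is obtained, with randomly generated age/BMI/sex/smoking columns and outcomes standing in for the study data (so the printed value will be near 0.5 rather than the ~0.64-0.65 reported).

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 500
    X = np.column_stack([rng.uniform(40, 59, n),   # age, kept continuous
                         rng.uniform(18, 40, n),   # body mass index, kept continuous
                         rng.integers(0, 2, n),    # sex
                         rng.integers(0, 2, n)])   # smoking history
    y = rng.integers(0, 2, n)                      # high-risk adenoma (placeholder labels)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("c-statistic:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
    ```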

  3. Covariant spectator theory of np scattering: Deuteron quadrupole moment

    DOE PAGES

    Gross, Franz

    2015-01-26

    The deuteron quadrupole moment is calculated using two CST model wave functions obtained from the 2007 high-precision fits to np scattering data. Included in the calculation are a new class of isoscalar np interaction currents automatically generated by the nuclear force model used in these fits. The prediction for model WJC-1, with larger relativistic P-state components, is 2.5% smaller than the experimental result, in common with the inability of models prior to 2014 to predict this important quantity. However, model WJC-2, with very small P-state components, gives agreement to better than 1%, similar to the results obtained recently from χEFT predictions at order N3LO.

  4. Calculating Path-Dependent Travel Time Prediction Variance and Covariance for a Global Tomographic P-Velocity Model

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.

    2012-12-01

    Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high-fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for a single path.
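
    The last step, travel-time covariance as the model covariance summed along both ray paths, is compact in matrix form; a sketch assuming the sensitivity rows and a (small, dense) model covariance fit in memory, which is precisely what the out-of-core machinery above avoids having to assume.

    ```python
    import numpy as np

    def travel_time_covariance(g1, g2, model_cov):
        """Covariance between the travel-time predictions of two ray paths.
        g1, g2: sensitivities of each path's travel time to the model nodes.
        model_cov: model covariance derived from (G^T G)^-1 and the data covariance."""
        return g1 @ model_cov @ g2

    g = np.array([0.0, 1.2, 0.8, 0.0, 0.3])  # toy path sensitivities
    C = 1e-3 * np.eye(5)                     # toy model covariance
    # Setting both paths equal and taking the square root gives the prediction uncertainty.
    print(np.sqrt(travel_time_covariance(g, g, C)))
    ```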

  5. Chemical kinetic modeling of propene oxidation at low and intermediate temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilk, R.D.; Cernansky, N.P.; Pitz, W.J.

    1986-01-13

    A detailed chemical kinetic mechanism for propene oxidation is developed and used to model reactions in a static reactor at temperatures of 590 to 740 K, equivalence ratios of 0.8 to 2.0, and a pressure of 600 torr. Modeling of hydrocarbon oxidation in this temperature range is important for the validation of detailed models to be used for performing calculations related to automotive engine knock. The model predicted induction periods and species concentrations for all the species measured experimentally in a static reactor by Wilk, Cernansky, and Cohen. The detailed model predicted a temperature region of approximately constant induction period which corresponded very closely to the region of negative temperature coefficient behavior found in the experiment. Overall, the calculated concentrations of acetaldehyde, ethene, and methane were somewhat low compared to the experimental measurements, and the calculated concentrations of formaldehyde and methanol were high. The characteristic s-shape of the fuel concentration history was well predicted. The importance of OH+C3H6 and related reactions in determining product distributions and the importance of consumption reactions for allyl radicals was demonstrated by the modeling calculations. 18 refs., 4 figs., 1 tab.

  6. Predicting the Inflow Distortion Tone Noise of the NASA Glenn Advanced Noise Control Fan with a Combined Quadrupole-Dipole Model

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle

    2012-01-01

    A combined quadrupole-dipole model of fan inflow distortion tone noise has been extended to calculate tone sound power levels generated by obstructions arranged in circumferentially asymmetric locations upstream of a rotor. Trends in calculated sound power level agreed well with measurements from tests conducted in 2007 in the NASA Glenn Advanced Noise Control Fan. Calculated values of sound power levels radiated upstream were demonstrated to be sensitive to the accuracy of the modeled wakes from the cylindrical rods that were placed upstream of the fan to distort the inflow. Results indicate a continued need to obtain accurate aerodynamic predictions and measurements at the fan inlet plane as engineers work towards developing fan inflow distortion tone noise prediction tools.

  7. How much crosstalk can be allowed in a stereoscopic system at various grey levels?

    NASA Astrophysics Data System (ADS)

    Shestak, Sergey; Kim, Daesik; Kim, Yongie

    2012-03-01

    We have calculated a perceptual threshold of stereoscopic crosstalk on the basis of a mathematical model of human visual sensitivity. Instead of the linear model of just-noticeable difference (JND) known as Weber's law, we applied the nonlinear Barten model. The predicted crosstalk threshold varies with the background luminance. The calculated threshold values are in reasonable agreement with known experimental data. We calculated the perceptual threshold of crosstalk for various combinations of applied grey levels. This result can be applied to the assessment of grey-to-grey crosstalk compensation. Further computational analysis of the applied model predicts an increase in displayable image contrast with a reduction of the maximum displayable luminance.

  8. A prediction of 3-D viscous flow and performance of the NASA Low-Speed Centrifugal Compressor

    NASA Technical Reports Server (NTRS)

    Moore, John; Moore, Joan G.

    1990-01-01

    A prediction of the three-dimensional turbulent flow in the NASA Low-Speed Centrifugal Compressor Impeller has been made. The calculation was made for the compressor design conditions with the specified uniform tip clearance gap. The predicted performance is significantly worse than that predicted in the NASA design study. This is explained by the high tip leakage flow in the present calculation and by the different model adopted for tip leakage flow mixing. The calculation gives an accumulation of high losses in the shroud/pressure-side quadrant near the exit of the impeller. It also predicts a region of meridional backflow near the shroud wall. Both of these flow features should be extensive enough in the NASA impeller to allow detailed flow measurements, leading to improved flow modeling. Recommendations are made for future flow studies in the NASA impeller.

  9. A prediction of 3-D viscous flow and performance of the NASA low-speed centrifugal compressor

    NASA Technical Reports Server (NTRS)

    Moore, John; Moore, Joan G.

    1989-01-01

    A prediction of the 3-D turbulent flow in the NASA Low-Speed Centrifugal Compressor Impeller has been made. The calculation was made for the compressor design conditions with the specified uniform tip clearance gap. The predicted performance is significantly worse than that predicted in the NASA design study. This is explained by the high tip leakage flow in the present calculation and by the different model adopted for tip leakage flow mixing. The calculation gives an accumulation of high losses in the shroud/pressure-side quadrant near the exit of the impeller. It also predicts a region of meridional backflow near the shroud wall. Both of these flow features should be extensive enough in the NASA impeller to allow detailed flow measurements, leading to improved flow modelling. Recommendations are made for future flow studies in the NASA impeller.

  10. Assessing the performance of the MM/PBSA and MM/GBSA methods. 1. The accuracy of binding free energy calculations based on molecular dynamics simulations.

    PubMed

    Hou, Tingjun; Wang, Junmei; Li, Youyong; Wang, Wei

    2011-01-24

    The Molecular Mechanics/Poisson-Boltzmann Surface Area (MM/PBSA) and the Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) methods calculate binding free energies for macromolecules by combining molecular mechanics calculations and continuum solvation models. To systematically evaluate the performance of these methods, we report here an extensive study of 59 ligands interacting with six different proteins. First, we explored the effects of the length of the molecular dynamics (MD) simulation, ranging from 400 to 4800 ps, and the solute dielectric constant (1, 2, or 4) on the binding free energies predicted by MM/PBSA. Three important conclusions emerged: (1) MD simulation length has an obvious impact on the predictions, and longer MD simulation is not always necessary to achieve better predictions. (2) The predictions are quite sensitive to the solute dielectric constant, and this parameter should be carefully determined according to the characteristics of the protein/ligand binding interface. (3) Conformational entropy often shows large fluctuations in MD trajectories, and a large number of snapshots are necessary to achieve stable predictions. Next, we evaluated the accuracy of the binding free energies calculated by three Generalized Born (GB) models. We found that the GB model developed by Onufriev and Case was the most successful model in ranking the binding affinities of the studied inhibitors. Finally, we evaluated the performance of MM/GBSA and MM/PBSA in predicting binding free energies. Our results showed that MM/PBSA performed better in calculating absolute, but not necessarily relative, binding free energies than MM/GBSA. Considering its computational efficiency, MM/GBSA can serve as a powerful tool in drug design, where correct ranking of inhibitors is often emphasized.
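
    A minimal sketch (not the authors' scripts) of the end-state bookkeeping behind MM/PB(GB)SA: average the molecular-mechanics energy plus polar and nonpolar solvation terms over snapshots for complex, receptor and ligand, subtract, and optionally add an entropy correction. Term names, units and numbers are hypothetical.

```python
import numpy as np

def binding_free_energy(complex_terms, receptor_terms, ligand_terms, TdS=0.0):
    """End-state MM/PB(GB)SA-style estimate of the binding free energy.

    Each *_terms argument is a dict of per-snapshot arrays with keys
    'E_MM' (gas-phase molecular mechanics energy), 'G_polar'
    (PB or GB polar solvation), and 'G_nonpolar' (surface-area term).
    TdS is an optional conformational-entropy correction, passed here
    already carrying the -T*dS sign. Units assumed kcal/mol throughout.
    """
    def mean_total(terms):
        return np.mean(terms['E_MM'] + terms['G_polar'] + terms['G_nonpolar'])

    dG = mean_total(complex_terms) - mean_total(receptor_terms) - mean_total(ligand_terms)
    return dG + TdS

# Hypothetical three-snapshot example (numbers are illustrative only).
rng = np.random.default_rng(0)
snap = lambda base: {'E_MM': base + rng.normal(0, 1, 3),
                     'G_polar': rng.normal(-50, 2, 3),
                     'G_nonpolar': rng.normal(-5, 0.5, 3)}
print(binding_free_energy(snap(-1200.0), snap(-900.0), snap(-250.0), TdS=15.0))
```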

  11. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-staged flux trapping generator is simulated using this computer code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
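
    A minimal sketch (assumptions in comments) of the kind of lumped-circuit integration described above: a time-varying generator inductance and resistance drive the load current, with an empirical flux-conservation coefficient applied to the dL/dt term to represent intrinsic flux losses. The circuit form and all values are illustrative, not the paper's two-stage model.

```python
import numpy as np

def simulate_fcg(L_gen, R_gen, L_load, R_load, I0, t_end, dt=1e-7, alpha=0.9):
    """Integrate a single-loop circuit model of an explosively driven FCG.

    L_gen(t), R_gen(t) : callables giving the generator's time-varying
                         inductance [H] and resistance [ohm] (hypothetical).
    L_load, R_load     : constant load inductance and resistance.
    I0                 : seed current at the start of flux compression [A].
    alpha              : empirical flux-conservation coefficient (<1) applied
                         to the dL/dt driving term to mimic flux losses.
    """
    t_grid = np.arange(0.0, t_end, dt)
    I = np.empty_like(t_grid)
    I[0] = I0
    for k in range(1, len(t_grid)):
        t = t_grid[k - 1]
        dLdt = (L_gen(t + dt) - L_gen(t)) / dt
        L_tot = L_gen(t) + L_load
        # dI/dt = -(alpha*dL/dt + R_total) * I / L_total (forward Euler step)
        dIdt = -(alpha * dLdt + R_gen(t) + R_load) * I[k - 1] / L_tot
        I[k] = I[k - 1] + dt * dIdt
    return t_grid, I

# Linearly collapsing generator inductance over 20 us (illustrative only).
t, I = simulate_fcg(lambda t: max(1e-8, 5e-6 * (1 - t / 2e-5)),
                    lambda t: 1e-3, L_load=5e-8, R_load=1e-3,
                    I0=1.0e3, t_end=2e-5)
print(f"Current gain: {I[-1] / I[0]:.1f}")
```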

  12. Study on Development of 1D-2D Coupled Real-time Urban Inundation Prediction model

    NASA Astrophysics Data System (ADS)

    Lee, Seungsoo

    2017-04-01

    In recent years, abnormal weather conditions driven by climate change have been experienced around the world, so countermeasures for flood defense are an urgent task. In this research, a 1D-2D coupled real-time urban inundation prediction model using precipitation data predicted from remote sensing technology is developed. The one-dimensional (1D) sewerage system analysis model introduced by Lee et al. (2015) is used to simulate inlet and overflow phenomena by interacting with surface flow as well as flows in conduits. A two-dimensional (2D) grid mesh refinement method is applied to represent road networks with an effective calculation time. The 2D surface model is coupled with the 1D sewerage analysis model in order to consider bi-directional flow between the two. A parallel computing method, OpenMP, is also applied to reduce calculation time. The model is evaluated against the 25 August 2014 extreme rainfall event, which caused severe inundation damage in Busan, Korea. The Oncheoncheon basin is selected as the study basin, and observed radar data are used in place of predicted rainfall data. The model shows acceptable calculation speed and accuracy, so it is expected that the model can be used in a real-time urban inundation forecasting system to minimize damage.

  13. Challenges in Understanding and Development of Predictive Models of Plasma Assisted Combustion

    DTIC Science & Technology

    2014-01-01

    and electron temperature in transient plasmas sustained by nanosecond pulse duration discharges, and their comparison with modeling predictions, are...in nanosecond pulse discharge in nitrogen at 0.25 bar, using the kinetic model developed in Ref. [11]. Rapid electric field reduction during...discharge pulses with kinetic modeling calculations, using conventional hydrocarbon-air combustion mechanisms. Although modeling predictions for H2-air

  14. Peierls-Nabarro modeling of dislocations in UO2

    NASA Astrophysics Data System (ADS)

    Skelton, Richard; Walker, Andrew M.

    2017-11-01

    Under conditions of high stress or low temperature, glide of dislocations plays an important role in the deformation of UO2. In this paper, the Peierls-Nabarro model is used to calculate the core widths and Peierls stresses of ½<110> edge and screw dislocations gliding on {100}, {110}, and {111}. The energy of the inelastic displacement field in the dislocation core is parameterized using generalized stacking fault energies, which are calculated atomistically using interatomic potentials. We use seven different interatomic potential models, representing the variety of different models available for UO2. The different models broadly agree on the relative order of the strengths of the different slip systems, with the 1/2<110>{100} edge dislocation predicted to be the weakest slip system and 1/2<110>{110} the strongest. However, the calculated Peierls stresses depend strongly on the interatomic potential used, with values ranging between 2.7 and 12.9 GPa for glide of 1/2<110>{100} edge dislocations, 16.4-32.3 GPa for 1/2<110>{110} edge dislocations, and 6.8-13.6 GPa for 1/2<110>{111} edge dislocations. The glide of 1/2<110> screw dislocations in UO2 is also found to depend on the interatomic potential used, with some models predicting similar Peierls stresses for glide on {100} and {111}, while others predict a unique easy glide direction. Comparison with previous fully atomistic calculations shows that the Peierls-Nabarro model can accurately predict dislocation properties in UO2.

  15. Code for Calculating Regional Seismic Travel Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BALLARD, SANFORD; HIPP, JAMES; BARKER, GLENN

    The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time, and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
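
    A minimal sketch of the residual bookkeeping used in the tomographic-inversion application described above, with a trivial stand-in for the forward travel-time calculator (the real calculation is done by the RSTT library); the model cells, path lengths and slownesses are hypothetical.

```python
import numpy as np

def residuals(observed_tt, predicted_tt):
    """Travel-time residuals: observed minus predicted, per source-receiver pair."""
    return np.asarray(observed_tt) - np.asarray(predicted_tt)

def rms_misfit(observed_tt, predicted_tt):
    r = residuals(observed_tt, predicted_tt)
    return np.sqrt(np.mean(r**2))

def forward_travel_time(model_slowness, path_lengths):
    """Hypothetical forward calculator standing in for the RSTT library call:
    travel time = sum over path segments of (segment length * slowness)."""
    return path_lengths @ model_slowness

# Toy check: a perturbed velocity model produces a nonzero misfit.
paths = np.array([[100.0, 50.0], [80.0, 120.0]])   # km traversed per model cell
true_model = np.array([0.125, 0.125])              # slowness, s/km (8 km/s)
obs = forward_travel_time(true_model, paths)       # "observed" travel times
trial_model = true_model * 1.02                    # perturbed starting model
print(rms_misfit(obs, forward_travel_time(trial_model, paths)))
```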

  16. Incoherent vector mesons production in PbPb ultraperipheral collisions at the LHC

    NASA Astrophysics Data System (ADS)

    Xie, Ya-Ping; Chen, Xurong

    2017-03-01

    The incoherent rapidity distributions of vector mesons are computed in the dipole model in PbPb ultraperipheral collisions at the CERN Large Hadron Collider (LHC). The IIM model fitted to newer data is employed in the dipole amplitude. The Boosted Gaussian and Gaus-LC wave functions for vector mesons are implemented in the calculations as well. Predictions for the J/ψ, ψ(2s), ρ and ϕ incoherent rapidity distributions are evaluated and compared with experimental data and other theoretical predictions in this paper. We obtain closer predictions of the incoherent rapidity distributions for J/ψ than previous calculations in the IIM model.

  17. Calculation of precise firing statistics in a neural network model

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2017-08-01

    A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.

  18. Pharmacogenetics-based area-under-curve model can predict efficacy and adverse events from axitinib in individual patients with advanced renal cell carcinoma.

    PubMed

    Yamamoto, Yoshiaki; Tsunedomi, Ryouichi; Fujita, Yusuke; Otori, Toru; Ohba, Mitsuyoshi; Kawai, Yoshihisa; Hirata, Hiroshi; Matsumoto, Hiroaki; Haginaka, Jun; Suzuki, Shigeo; Dahiya, Rajvir; Hamamoto, Yoshihiko; Matsuyama, Kenji; Hazama, Shoichi; Nagano, Hiroaki; Matsuyama, Hideyasu

    2018-03-30

    We investigated the relationship between axitinib pharmacogenetics and clinical efficacy/adverse events in advanced renal cell carcinoma (RCC) and established a model to predict clinical efficacy and adverse events using pharmacokinetic and gene polymorphisms related to drug metabolism and efflux in a phase II trial. We prospectively evaluated the area under the plasma concentration-time curve (AUC) of axitinib, objective response rate, and adverse events in 44 consecutive advanced RCC patients treated with axitinib. To establish a model for predicting clinical efficacy and adverse events, polymorphisms in genes including ABC transporters (ABCB1 and ABCG2), UGT1A, and OR2B11 were analyzed by whole-exome sequencing, Sanger sequencing, and DNA microarray. To validate this prediction model, calculated AUC by 6 gene polymorphisms was compared with actual AUC in 16 additional consecutive patients prospectively. Actual AUC significantly correlated with the objective response rate (P = 0.0002) and adverse events (hand-foot syndrome, P = 0.0055; and hypothyroidism, P = 0.0381). Calculated AUC significantly correlated with actual AUC (P < 0.0001), and correctly predicted objective response rate (P = 0.0044) as well as adverse events (P = 0.0191 and 0.0082, respectively). In the validation study, calculated AUC prior to axitinib treatment precisely predicted actual AUC after axitinib treatment (P = 0.0066). Our pharmacogenetics-based AUC prediction model may determine the optimal initial dose of axitinib, and thus facilitate better treatment of patients with advanced RCC.

  19. Pharmacogenetics-based area-under-curve model can predict efficacy and adverse events from axitinib in individual patients with advanced renal cell carcinoma

    PubMed Central

    Yamamoto, Yoshiaki; Tsunedomi, Ryouichi; Fujita, Yusuke; Otori, Toru; Ohba, Mitsuyoshi; Kawai, Yoshihisa; Hirata, Hiroshi; Matsumoto, Hiroaki; Haginaka, Jun; Suzuki, Shigeo; Dahiya, Rajvir; Hamamoto, Yoshihiko; Matsuyama, Kenji; Hazama, Shoichi; Nagano, Hiroaki; Matsuyama, Hideyasu

    2018-01-01

    We investigated the relationship between axitinib pharmacogenetics and clinical efficacy/adverse events in advanced renal cell carcinoma (RCC) and established a model to predict clinical efficacy and adverse events using pharmacokinetic and gene polymorphisms related to drug metabolism and efflux in a phase II trial. We prospectively evaluated the area under the plasma concentration–time curve (AUC) of axitinib, objective response rate, and adverse events in 44 consecutive advanced RCC patients treated with axitinib. To establish a model for predicting clinical efficacy and adverse events, polymorphisms in genes including ABC transporters (ABCB1 and ABCG2), UGT1A, and OR2B11 were analyzed by whole-exome sequencing, Sanger sequencing, and DNA microarray. To validate this prediction model, calculated AUC by 6 gene polymorphisms was compared with actual AUC in 16 additional consecutive patients prospectively. Actual AUC significantly correlated with the objective response rate (P = 0.0002) and adverse events (hand-foot syndrome, P = 0.0055; and hypothyroidism, P = 0.0381). Calculated AUC significantly correlated with actual AUC (P < 0.0001), and correctly predicted objective response rate (P = 0.0044) as well as adverse events (P = 0.0191 and 0.0082, respectively). In the validation study, calculated AUC prior to axitinib treatment precisely predicted actual AUC after axitinib treatment (P = 0.0066). Our pharmacogenetics-based AUC prediction model may determine the optimal initial dose of axitinib, and thus facilitate better treatment of patients with advanced RCC. PMID:29682213

  20. Challenges of developing a cardiovascular risk calculator for patients with rheumatoid arthritis.

    PubMed

    Crowson, Cynthia S; Rollefstad, Silvia; Kitas, George D; van Riel, Piet L C M; Gabriel, Sherine E; Semb, Anne Grete

    2017-01-01

    Cardiovascular disease (CVD) risk calculators designed for use in the general population do not accurately predict the risk of CVD among patients with rheumatoid arthritis (RA), who are at increased risk of CVD. The process of developing risk prediction models involves numerous issues. Our goal was to develop a CVD risk calculator for patients with RA. Thirteen cohorts of patients with RA originating from 10 different countries (UK, Norway, Netherlands, USA, Sweden, Greece, South Africa, Spain, Canada and Mexico) were combined. CVD risk factors and RA characteristics at baseline, in addition to information on CVD outcomes, were collected. Cox models were used to develop a CVD risk calculator, considering traditional CVD risk factors and RA characteristics. Model performance was assessed using measures of discrimination and calibration with 10-fold cross-validation. A total of 5638 RA patients without prior CVD were included (mean age: 55 [SD: 14] years, 76% female). During a mean follow-up of 5.8 years (30,139 person-years), 389 patients developed a CVD event. Event rates varied between cohorts, necessitating inclusion of high- and low-risk strata in the models. The multivariable analyses yielded two risk prediction models, each including either a disease activity score based on a 28-joint count and erythrocyte sedimentation rate (DAS28ESR) or a health assessment questionnaire (HAQ), along with age, sex, presence of hypertension, current smoking, and the ratio of total cholesterol to high-density lipoprotein cholesterol. Unfortunately, the performance of these models was similar to that of general-population CVD risk calculators. Efforts to develop a specific CVD risk calculator for patients with RA yielded two potential models including RA disease characteristics, but neither demonstrated improved performance compared to risk calculators designed for use in the general population. Challenges encountered and lessons learned are discussed in detail.

  1. Thermal Analysis and Correlation of the Mars Odyssey Spacecraft's Solar Array During Aerobraking Operations

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Gasbarre, Joseph F.; George, Benjamin E.

    2002-01-01

    The Mars Odyssey spacecraft made use of multipass aerobraking to gradually reduce its orbit period from a highly elliptical insertion orbit to its final science orbit. Aerobraking operations provided an opportunity to apply advanced thermal analysis techniques to predict the temperature of the spacecraft's solar array for each drag pass. Odyssey telemetry data was used to correlate the thermal model. The thermal analysis was tightly coupled to the flight mechanics, aerodynamics, and atmospheric modeling efforts being performed during operations. Specifically, the thermal analysis predictions required a calculation of the spacecraft's velocity relative to the atmosphere, a prediction of the atmospheric density, and a prediction of the heat transfer coefficients due to aerodynamic heating. Temperature correlations were performed by comparing predicted temperatures of the thermocouples to the actual thermocouple readings from the spacecraft. Time histories of the spacecraft relative velocity, atmospheric density, and heat transfer coefficients, calculated using flight accelerometer and quaternion data, were used to calculate the aerodynamic heating. During aerobraking operations, the correlations were used to continually update the thermal model, thus increasing confidence in the predictions. This paper describes the thermal analysis that was performed and presents the correlations to the flight data.
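
    A minimal sketch (not from the paper) of the kind of aerodynamic-heating input described above: a convective heat flux built from the atmospheric density, the spacecraft velocity relative to the atmosphere, and a heat-transfer coefficient. The functional form q = 0.5*C_H*rho*v^3 and all numbers below are generic placeholders, not the Odyssey correlation or telemetry.

```python
import numpy as np

def aerodynamic_heat_flux(rho, v_rel, C_H):
    """Convective heat flux [W/m^2] on the solar array during a drag pass.

    rho   : atmospheric density [kg/m^3] (e.g., from an accelerometer-derived profile)
    v_rel : spacecraft velocity relative to the atmosphere [m/s]
    C_H   : dimensionless heat-transfer coefficient (from an aerodynamics model)
    """
    return 0.5 * C_H * rho * v_rel**3

# Illustrative density profile along a periapsis pass (hypothetical values).
rho = 3.0e-11 * np.exp(-np.linspace(-2, 2, 5)**2)   # kg/m^3
v_rel = 4.6e3                                        # m/s
print(aerodynamic_heat_flux(rho, v_rel, C_H=0.9))    # W/m^2 at five pass points
```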

  2. Long-Term Survival Prediction for Coronary Artery Bypass Grafting: Validation of the ASCERT Model Compared With The Society of Thoracic Surgeons Predicted Risk of Mortality.

    PubMed

    Lancaster, Timothy S; Schill, Matthew R; Greenberg, Jason W; Ruaengsri, Chawannuch; Schuessler, Richard B; Lawton, Jennifer S; Maniar, Hersh S; Pasque, Michael K; Moon, Marc R; Damiano, Ralph J; Melby, Spencer J

    2018-05-01

    The recently developed American College of Cardiology Foundation-Society of Thoracic Surgeons (STS) Collaboration on the Comparative Effectiveness of Revascularization Strategy (ASCERT) Long-Term Survival Probability Calculator is a valuable addition to existing short-term risk-prediction tools for cardiac surgical procedures but has yet to be externally validated. Institutional data of 654 patients aged 65 years or older undergoing isolated coronary artery bypass grafting between 2005 and 2010 were reviewed. Predicted survival probabilities were calculated using the ASCERT model. Survival data were collected using the Social Security Death Index and institutional medical records. Model calibration and discrimination were assessed for the overall sample and for risk-stratified subgroups based on (1) ASCERT 7-year survival probability and (2) the predicted risk of mortality (PROM) from the STS Short-Term Risk Calculator. Logistic regression analysis was performed to evaluate additional perioperative variables contributing to death. Overall survival was 92.1% (569 of 597) at 1 year and 50.5% (164 of 325) at 7 years. Calibration assessment found no significant differences between predicted and actual survival curves for the overall sample or for the risk-stratified subgroups, whether stratified by predicted 7-year survival or by PROM. Discriminative performance was comparable between the ASCERT and PROM models for 7-year survival prediction (p < 0.001 for both; C-statistic = 0.815 for ASCERT and 0.781 for PROM). Prolonged ventilation, stroke, and hospital length of stay were also predictive of long-term death. The ASCERT survival probability calculator was externally validated for prediction of long-term survival after coronary artery bypass grafting in all risk groups. The widely used STS PROM performed comparably as a predictor of long-term survival. Both tools provide important information for preoperative decision making and patient counseling about potential outcomes after coronary artery bypass grafting. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  3. The viscosity of magmatic silicate liquids: A model for calculation

    NASA Technical Reports Server (NTRS)

    Bottinga, Y.; Weill, D. F.

    1971-01-01

    A simple model has been designed to allow reasonably accurate calculations of viscosity as a function of temperature and composition. The problem of predicting viscosities of anhydrous silicate liquids has been investigated since such viscosity numbers are applicable to many extrusive melts and to nearly dry magmatic liquids in general. The fluidizing action of water dissolved in silicate melts is well recognized, and it is now possible to predict the effect of water content on viscosity in a semiquantitative way. Water was not incorporated directly into the model. Viscosities of anhydrous compositions were calculated, and, where necessary, the effect of added water was estimated. The model can be easily modified to incorporate the effect of water whenever sufficient additional data are accumulated.

  4. Standard model predictions for B→Kℓ⁺ℓ⁻ with form factors from lattice QCD.

    PubMed

    Bouchard, Chris; Lepage, G Peter; Monahan, Christopher; Na, Heechang; Shigemitsu, Junko

    2013-10-18

    We calculate, for the first time using unquenched lattice QCD form factors, the standard model differential branching fractions dB/dq²(B→Kℓ⁺ℓ⁻) for ℓ = e, μ, τ and compare with experimental measurements by Belle, BABAR, CDF, and LHCb. We report on B(B→Kℓ⁺ℓ⁻) in the q² bins used by experiment and predict B(B→Kτ⁺τ⁻) = (1.41 ± 0.15) × 10⁻⁷. We also calculate the ratio of branching fractions R_e^μ = 1.00029(69) and predict R_ℓ^τ = 1.176(40), for ℓ = e, μ. Finally, we calculate the "flat term" in the angular distribution of the differential decay rate, F_H^(e,μ,τ), in experimentally motivated q² bins.

  5. Experimental and computational prediction of glass transition temperature of drugs.

    PubMed

    Alzghoul, Ahmad; Alhalaweh, Amjad; Mahlin, Denny; Bergström, Christel A S

    2014-12-22

    Glass transition temperature (Tg) is an important inherent property of an amorphous solid material which is usually determined experimentally. In this study, the relation between Tg and melting temperature (Tm) was evaluated using a data set of 71 structurally diverse druglike compounds. Further, in silico models for prediction of Tg were developed based on calculated molecular descriptors and linear (multilinear regression, partial least-squares, principal component regression) and nonlinear (neural network, support vector regression) modeling techniques. The models based on Tm predicted Tg with an RMSE of 19.5 K for the test set. Among the five computational models developed herein the support vector regression gave the best result with RMSE of 18.7 K for the test set using only four chemical descriptors. Hence, two different models that predict Tg of drug-like molecules with high accuracy were developed. If Tm is available, a simple linear regression can be used to predict Tg. However, the results also suggest that support vector regression and calculated molecular descriptors can predict Tg with equal accuracy, already before compound synthesis.
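
    A hedged sketch of the descriptor-based support vector regression workflow the abstract describes, using synthetic data in place of the authors' 71 compounds and calculated descriptors; the hyperparameters and descriptor values are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical data: four calculated molecular descriptors per compound
# and measured glass transition temperatures (K). Real descriptors differ.
rng = np.random.default_rng(1)
X = rng.normal(size=(71, 4))
Tg = 320 + 25 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 15, 71)

X_tr, X_te, y_tr, y_te = train_test_split(X, Tg, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(X_tr, y_tr)

rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"Test-set RMSE: {rmse:.1f} K")
```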

  6. Agreement between gamma passing rates using computed tomography in radiotherapy and secondary cancer risk prediction from more advanced dose calculated models

    PubMed Central

    Balosso, Jacques

    2017-01-01

    Background During the past decades in radiotherapy, dose distributions were calculated using density-correction methods with pencil beam as a type 'a' algorithm. The objective of this study is to assess and evaluate the impact of the dose distribution shift on the predicted secondary cancer risk (SCR), using modern advanced dose calculation algorithms (point kernel, type 'b'), which account for changes in lateral electron transport. Methods Clinical examples of pediatric cranio-spinal irradiation patients were evaluated. For each case, two radiotherapy treatment plans were generated using the same prescribed dose to the target, resulting in different numbers of monitor units (MUs) per field. The dose distributions were calculated using both algorithm types. A gamma index (γ) analysis was used to compare dose distributions in the lung. The organ equivalent dose (OED) was calculated with three different models: the linear, the linear-exponential and the plateau dose-response curves. The excess absolute risk ratio (EAR) was also evaluated as EAR = OED type 'b' / OED type 'a'. Results The γ analysis indicated an acceptable dose distribution agreement of 95% with 3%/3 mm. However, the γ-maps displayed dose displacements >1 mm around the healthy lungs. Compared to type 'a', the OED values from type 'b' dose distributions were about 8% to 16% higher, leading to an EAR ratio >1, ranging from 1.08 to 1.13 depending on the SCR model. Conclusions The shift of dose calculation in radiotherapy, according to the algorithm, can significantly influence the SCR prediction and the plan optimization, since OEDs are calculated from the DVH for a specific treatment. The agreement between dose distribution and SCR prediction depends on the dose-response models and epidemiological data. In addition, a γ passing rate of 3%/3 mm does not reflect the difference, of up to 15%, in the SCR predictions resulting from alternative algorithms. Considering that modern algorithms are more accurate, showing the dose distributions more precisely, but that the prediction of absolute SCR is still very imprecise, only the EAR ratio should be used to rank radiotherapy plans. PMID:28811995
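
    A minimal sketch (assumptions noted in comments) of the OED/EAR bookkeeping described above: the organ equivalent dose evaluated from a differential DVH with the linear and linear-exponential dose-response models, and the EAR ratio taken between two dose calculations. The DVHs and the parameter alpha below are hypothetical.

```python
import numpy as np

def oed_linear(dose_bins, vol_fractions):
    """Organ equivalent dose for a linear dose-response model:
    the volume-weighted mean organ dose from a differential DVH."""
    return np.sum(vol_fractions * dose_bins) / np.sum(vol_fractions)

def oed_linear_exponential(dose_bins, vol_fractions, alpha):
    """Linear-exponential dose-response variant (alpha is organ-specific)."""
    w = vol_fractions / np.sum(vol_fractions)
    return np.sum(w * dose_bins * np.exp(-alpha * dose_bins))

# Hypothetical lung DVHs from a type 'a' and a type 'b' dose calculation.
dose = np.linspace(0.5, 30.0, 60)        # Gy, bin centres
dvh_a = np.exp(-dose / 8.0)              # relative volume per bin (type 'a')
dvh_b = np.exp(-dose / 9.0)              # type 'b' deposits somewhat more dose

ear_ratio = oed_linear(dose, dvh_b) / oed_linear(dose, dvh_a)
print(f"EAR ratio (type b / type a, linear model): {ear_ratio:.2f}")
print(f"OED (linear-exp, alpha=0.05/Gy): {oed_linear_exponential(dose, dvh_b, 0.05):.2f} Gy")
```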

  7. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
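
    A hedged numerical sketch of the sensitivity measures described above: dimensionless scaled sensitivities obtained by finite differences and the composite scaled sensitivity (CSS) as their root-mean-square over observations. The toy "model" and the unit observation weights are simplifying assumptions, not the DVRFS model or the exact MODFLOW-2000 weighting.

```python
import numpy as np

def scaled_sensitivities(simulate, params, n_obs, rel_step=1e-3):
    """Dimensionless scaled sensitivities dss[i, j] = (dy_i/db_j) * b_j,
    computed by central finite differences (unit observation weights)."""
    params = np.asarray(params, dtype=float)
    dss = np.empty((n_obs, params.size))
    for j, b in enumerate(params):
        h = rel_step * max(abs(b), 1e-12)
        up, dn = params.copy(), params.copy()
        up[j] += h
        dn[j] -= h
        dss[:, j] = (simulate(up) - simulate(dn)) / (2 * h) * b
    return dss

def composite_scaled_sensitivity(dss):
    """CSS_j: root-mean-square of the scaled sensitivities over all observations."""
    return np.sqrt(np.mean(dss**2, axis=0))

# Toy "model": two simulated heads depending on a conductivity and a recharge rate.
simulate = lambda p: np.array([100.0 - 5.0 / p[0] + 2.0 * p[1],
                               80.0 - 3.0 / p[0] + 4.0 * p[1]])
dss = scaled_sensitivities(simulate, params=[2.0, 0.5], n_obs=2)
print(composite_scaled_sensitivity(dss))
```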

  8. Validation of CRASH Model in Prediction of 14-day Mortality and 6-month Unfavorable Outcome of Head Trauma Patients.

    PubMed

    Hashemi, Behrooz; Amanat, Mahnaz; Baratloo, Alireza; Forouzanfar, Mohammad Mehdi; Rahmati, Farhad; Motamedi, Maryam; Safari, Saeed

    2016-11-01

    To date, many prognostic models have been proposed to predict the outcome of patients with traumatic brain injuries. External validation of these models in different populations is of great importance for their generalization. The present study was designed to determine the value of the CRASH prognostic model in the prediction of 14-day mortality (14-DM) and 6-month unfavorable outcome (6-MUO) of patients with traumatic brain injury. In the present prospective diagnostic test study, the calibration and discrimination of the CRASH model were evaluated in head trauma patients referred to the emergency department. Variables required for calculating CRASH expected risks (ER), along with observed 14-DM and 6-MUO, were gathered. The ER of 14-DM and 6-MUO were then calculated. The patients were followed for 6 months and their 14-DM and 6-MUO were recorded. Finally, the correlation of CRASH ER and the observed outcome of the patients was evaluated. The data were analyzed using STATA version 11.0. In this study, 323 patients with a mean age of 34.0 ± 19.4 years were evaluated (87.3% male). Calibration of the basic and CT models in prediction of 14-day and 6-month outcome was in the desirable range (P < 0.05). The area under the curve of the basic model for prediction of 14-DM and 6-MUO was 0.92 (95% CI: 0.89-0.96) and 0.92 (95% CI: 0.90-0.95), respectively. In addition, the area under the curve of the CT model for prediction of 14-DM and 6-MUO was 0.93 (95% CI: 0.91-0.97) and 0.93 (95% CI: 0.91-0.96), respectively. There was no significant difference between the discrimination of the two models in prediction of 14-DM (p = 0.11) and 6-MUO (p = 0.1). The results of the present study showed that the CRASH prediction model has proper discrimination and calibration in predicting 14-DM and 6-MUO of head trauma patients. Since there was no difference between the performance of the basic and CT models, using the basic model is recommended to simplify the risk calculations.

  9. GHI calculation sensitivity on microphysics, land- and cumulus parameterization in WRF over the Reunion Island

    NASA Astrophysics Data System (ADS)

    De Meij, A.; Vinuesa, J.-F.; Maupas, V.

    2018-05-01

    The sensitivity of different microphysics and dynamics schemes on calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes and land surface models were changed. Firstly, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for the Reunion Island for 2014. In general, the model calculates the largest bias during the austral summer. This indicates that the model is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Secondly, the model sensitivity of calculated GHI values to changes in the microphysics, cumulus parameterization and land surface models is evaluated. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts the mass and number concentrations of five hydrometeors, which helps to improve the calculation of the densities, size and lifetime of the cloud droplets, while the single-moment schemes predict only the mass, and for fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on GHI calculations.

  10. Correlation of predicted and measured thermal stresses on an advanced aircraft structure with dissimilar materials. [hypersonic heating simulation

    NASA Technical Reports Server (NTRS)

    Jenkins, J. M.

    1979-01-01

    Additional information was added to a growing data base from which estimates of finite element model complexities can be made with respect to thermal stress analysis. The manner in which temperatures were smeared to the finite element grid points was examined from the point of view of the impact on thermal stress calculations. The general comparison of calculated and measured thermal stresses is quite good and there is little doubt that the finite element approach provided by NASTRAN results in correct thermal stress calculations. Discrepancies did exist between measured and calculated values in the skin and the skin/frame junctures. The problems with predicting skin thermal stress were attributed to inadequate temperature inputs to the structural model rather than modeling insufficiencies. The discrepancies occurring at the skin/frame juncture were most likely due to insufficient modeling elements rather than temperature problems.

  11. Predictive Caching Using the TDAG Algorithm

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Saul, Ronald

    1992-01-01

    We describe how the TDAG algorithm for learning to predict symbol sequences can be used to design a predictive cache store. A model of a two-level mass storage system is developed and used to calculate the performance of the cache under various conditions. Experimental simulations provide good confirmation of the model.
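
    A hedged sketch of the idea, with a simple first-order Markov predictor standing in for the TDAG learner (which builds a tree of higher-order contexts): after each access, prefetch the block that has most often followed the current one. The trace and class below are illustrative only.

```python
from collections import Counter, defaultdict

class MarkovPrefetcher:
    """First-order Markov predictor standing in for the TDAG learner:
    after each access it predicts (and would prefetch) the block that
    most often followed the current one in the history seen so far."""

    def __init__(self):
        self.counts = defaultdict(Counter)   # counts[a][b]: times b followed a
        self.prev = None

    def access(self, block):
        prediction = None
        if self.prev is not None:
            self.counts[self.prev][block] += 1      # learn the transition
        if self.counts[block]:
            prediction = self.counts[block].most_common(1)[0][0]
        self.prev = block
        return prediction                            # block to prefetch, or None

# Hypothetical access trace with a repeating pattern.
trace = [1, 2, 3, 1, 2, 3, 1, 2, 4, 1, 2, 3]
p = MarkovPrefetcher()
hits = sum(p.access(b) == trace[i + 1] for i, b in enumerate(trace[:-1]))
print(f"Correct next-block predictions: {hits}/{len(trace) - 1}")
```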

  12. Prediction on the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase based on gene expression programming.

    PubMed

    Li, Yuqin; You, Guirong; Jia, Baoxiu; Si, Hongzong; Yao, Xiaojun

    2014-01-01

    Quantitative structure-activity relationships (QSAR) were developed to predict the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase via the heuristic method (HM) and gene expression programming (GEP). The descriptors of 33 pyrrolidine derivatives were calculated by the software CODESSA, which can calculate quantum chemical, topological, geometrical, constitutional, and electrostatic descriptors. HM was also used for the preselection of 5 appropriate molecular descriptors. Linear and nonlinear QSAR models were developed based on the HM and GEP separately, and the two prediction models lead to good correlation coefficients (R^2) of 0.93 and 0.94. The two QSAR models are useful in predicting the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase during the discovery of new anticancer drugs and provide theoretical information for studying new drugs.

  13. Experiment-specific cosmic microwave background calculations made easier - Approximation formula for smoothed delta T/T windows

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.

    1993-01-01

    Simple and easy-to-implement elementary-function approximations are introduced for the spectral window functions needed in calculations of model predictions of the cosmic microwave background (CMB) anisotropy. These approximations allow the investigator to obtain model delta T/T predictions in terms of single integrals over the power spectrum of cosmological perturbations and to avoid the necessity of performing additional integrations. The high accuracy of these approximations is demonstrated here for CDM theory-based calculations of the expected delta T/T signal in several experiments searching for the CMB anisotropy.

  14. Modeling Aircraft Position and Conservatively Calculating Airspace Violations for an Autonomous Collision Awareness System for Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Ueunten, Kevin K.

    With the scheduled 30 September 2015 integration of Unmanned Aerial Systems (UAS) into the national airspace, the Federal Aviation Administration (FAA) is concerned with UAS capabilities to sense and avoid conflicts. Since the operator is outside the cockpit, the proposed collision awareness plugin (CAPlugin), based on probability and error propagation, conservatively predicts potential conflicts with other aircraft and airspaces, thus increasing the operator's situational awareness. The conflict predictions are calculated using a forward state estimator (FSE) and a conflict calculator. Predicting an aircraft's position, modeled as a mixed Gaussian distribution, is the FSE's responsibility. Furthermore, the FSE supports aircraft engaged in the following three flight modes: free flight, flight path following and orbits. The conflict calculator uses the FSE result to calculate the conflict probability between an aircraft and an airspace or another aircraft. Finally, the CAPlugin determines the highest conflict probability and warns the operator. In addition to discussing the FSE free flight, FSE orbit and the airspace conflict calculator, this thesis describes how each algorithm is implemented and tested. Lastly, two simulations demonstrate the CAPlugin's capabilities.

  15. M3Ag17(SPh)12 Nanoparticles and Their Structure Prediction.

    PubMed

    Wickramasinghe, Sameera; Atnagulov, Aydar; Conn, Brian E; Yoon, Bokwon; Barnett, Robert N; Griffith, Wendell P; Landman, Uzi; Bigioni, Terry P

    2015-09-16

    Although silver nanoparticles are of great fundamental and practical interest, only one structure has been determined thus far: M4Ag44(SPh)30, where M is a monocation, and SPh is an aromatic thiolate ligand. This is in part due to the fact that no other molecular silver nanoparticles have been synthesized with aromatic thiolate ligands. Here we report the synthesis of M3Ag17(4-tert-butylbenzene-thiol)12, which has good stability and an unusual optical spectrum. We also present a rational strategy for predicting the structure of this molecule. First-principles calculations support the structural model, predict a HOMO-LUMO energy gap of 1.77 eV, and predict a new "monomer mount" capping motif, Ag(SR)3, for Ag nanoparticles. The calculated optical absorption spectrum is in good correspondence with the measured spectrum. Heteroatom substitution was also used as a structural probe. First-principles calculations based on the structural model predicted a strong preference for a single Au atom substitution in agreement with experiment.

  16. Ab initio state-specific N2 + O dissociation and exchange modeling for molecular simulations

    NASA Astrophysics Data System (ADS)

    Luo, Han; Kulakhmetov, Marat; Alexeenko, Alina

    2017-02-01

    Quasi-classical trajectory (QCT) calculations are used in this work to calculate state-specific N2(X 1Σ) + O(3P) → 2 N(4S) + O(3P) dissociation and N2(X 1Σ) + O(3P) → NO(X 2Π) + N(4S) exchange cross sections and rates based on the 1 3A″ and 1 3A′ ab initio potential energy surfaces by Gamallo et al. [J. Chem. Phys. 119, 2545-2556 (2003)]. The calculations consider translational energies up to 23 eV and temperatures between 1000 K and 20 000 K. Vibrational favoring is observed for the dissociation reaction over the whole range of collision energies and for the exchange reaction around the dissociation limit. For the same collision energy, cross sections for v = 30 are 4 to 6 times larger than those for the ground state. The exchange reaction has an effective activation energy that is dependent on the initial rovibrational level, which is different from the dissociation reaction. In addition, the exchange cross sections have a maximum when the total collision energy (TCE) approaches the dissociation energy. The calculations are used to generate compact QCT-derived state-specific dissociation (QCT-SSD) and QCT-derived state-specific exchange (QCT-SSE) models, which describe over 1 × 10^6 cross sections with about 150 model parameters. The models can be used directly within direct simulation Monte Carlo and computational fluid dynamics simulations. Rate constants predicted by the new models are compared to experimental measurements, direct QCT calculations and predictions by other models, including the TCE model, the Bose-Candler QCT-based exchange model, the Macheret-Fridman dissociation model, Macheret's exchange model, and Park's two-temperature model. The new models match QCT-calculated and experimental rates within 30% under nonequilibrium conditions, while other models under-predict by over an order of magnitude under vibrationally cold conditions.

  17. Paleotemperature reconstruction from mammalian phosphate δ18O records - an alternative view on data processing

    NASA Astrophysics Data System (ADS)

    Skrzypek, Grzegorz; Sadler, Rohan; Wiśniewski, Andrzej

    2017-04-01

    The stable oxygen isotope composition of phosphates (δ18O) extracted from mammalian bone and tooth material is commonly used as a proxy for paleotemperature. Historically, several different analytical and statistical procedures for determining air paleotemperatures from the measured δ18O of phosphates have been applied. This inconsistency in both stable isotope data processing and the application of statistical procedures has led to large and unwanted differences between calculated results. This study presents the uncertainty associated with two of the most commonly used regression methods: the least-squares inverted fit and the transposed fit. We assessed the performance of these methods by designing and applying calculation experiments to multiple real-life data sets, back-calculating temperatures, and comparing them with the true recorded values. Our calculations clearly show that the mean absolute errors are always substantially higher for the inverted fit (a causal model), with the transposed fit (a predictive model) returning mean values closer to the measured values (Skrzypek et al. 2016). The predictive models always performed better than the causal models, with 12-65% lower mean absolute errors. Moreover, the least-squares (LSM) regression model is more appropriate than Reduced Major Axis (RMA) regression for calculating the environmental water stable oxygen isotope composition from phosphate signatures, as well as for calculating air temperature from the δ18O value of environmental water. The transposed fit introduces a lower overall error than the inverted fit for both the δ18O of environmental water and the Tair calculations; therefore, the predictive models are more statistically efficient than the causal models in this instance. The direct comparison of paleotemperature results from different laboratories and studies may only be achieved if a single method of calculation is applied. Reference: Skrzypek G., Sadler R., Wiśniewski A., 2016. Reassessment of recommendations for processing mammal phosphate δ18O data for paleotemperature reconstruction. Palaeogeography, Palaeoclimatology, Palaeoecology 446, 162-167.
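
    A minimal sketch (synthetic data, illustrative calibration slope) contrasting the two regression choices discussed above: predicting temperature by algebraically inverting a fit of δ18O on temperature (the causal, "inverted" fit) versus fitting temperature directly on δ18O (the predictive, "transposed" fit).

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic calibration data: air temperature (degC) vs delta18O of water (permil).
# Slope and intercept are illustrative, not a published calibration.
T_true = rng.uniform(0, 25, 200)
d18O = -13.0 + 0.55 * T_true + rng.normal(0, 0.8, 200)

# Causal ("inverted") fit: regress d18O on T, then invert algebraically.
b1, a1 = np.polyfit(T_true, d18O, 1)            # slope, intercept
T_from_inverted = (d18O - a1) / b1

# Predictive ("transposed") fit: regress T directly on d18O.
b2, a2 = np.polyfit(d18O, T_true, 1)
T_from_transposed = a2 + b2 * d18O

mae = lambda est: np.mean(np.abs(est - T_true))
print(f"MAE inverted fit:   {mae(T_from_inverted):.2f} degC")
print(f"MAE transposed fit: {mae(T_from_transposed):.2f} degC")
```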

  18. Prediction of XV-15 tilt rotor discrete frequency aeroacoustic noise with WOPWOP

    NASA Technical Reports Server (NTRS)

    Coffen, Charles D.; George, Albert R.

    1990-01-01

    The results, methodology, and conclusions of noise prediction calculations carried out to study several possible discrete frequency harmonic noise mechanisms of the XV-15 Tilt Rotor Aircraft in hover and helicopter-mode forward flight are presented. The mechanisms studied were thickness and loading noise. In particular, the loading noise caused by flow separation and the fountain/ground plane effect was predicted with calculations made using WOPWOP, a noise prediction program developed by NASA Langley. The methodology was to model the geometry and aerodynamics of the XV-15 rotor blades in hover and steady level flight and then create corresponding FORTRAN subroutines which were used as input for WOPWOP. The models are described, the simplifying assumptions made in creating them are evaluated, and the results of the computations are presented. The computations lead to the following conclusions: The fountain/ground plane effect is an important source of aerodynamic noise for the XV-15 in hover. Unsteady flow separation from the airfoil passing through the fountain at high angles of attack significantly affects the predicted sound spectra and may be an important noise mechanism for the XV-15 in hover mode. The various models developed did not predict the sound spectra in helicopter forward flight; the experimental spectra indicate the presence of blade-vortex interactions which were not modeled in these calculations. There is a need for further study and development of more accurate aerodynamic models, including unsteady stall in hover and blade-vortex interactions in forward flight.

  19. Initial comparison of single cylinder Stirling engine computer model predictions with test results

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.; Thieme, L. G.; Miao, D.

    1979-01-01

    A NASA-developed digital computer code for a Stirling engine, modelling the performance of a single cylinder rhombic drive ground performance unit (GPU), is presented and its predictions are compared to test results. The GPU engine incorporates eight regenerator/cooler units and the engine working space is modelled by thirteen control volumes. The model calculates indicated power and efficiency for a given engine speed, mean pressure, heater and expansion space metal temperatures, and cooler water inlet temperature and flow rate. Comparison of predicted and observed powers implies that the reference pressure drop calculations underestimate the actual pressure drop, possibly due to oil contamination in the regenerator/cooler units, methane contamination in the working gas, or an underestimation of mechanical loss. For a working gas of hydrogen, the predicted values of brake power are from 0 to 6% higher than the experimental values, and brake efficiency is 6 to 16% higher, while for helium the predicted brake power and efficiency are 2 to 15% higher than the experimental values.

  20. Thermal cut-off response modelling of universal motors

    NASA Astrophysics Data System (ADS)

    Thangaveloo, Kashveen; Chin, Yung Shin

    2017-04-01

    This paper presents a model to predict the thermal cut-off (TCO) response behaviour in universal motors. The mathematical model includes the calculation of heat loss in the universal motor and the flow characteristics around the TCO component, which together are the main parameters for TCO response prediction. In order to accurately predict the TCO component temperature, factors such as the TCO component resistance, the effect of the ambient environment, and the flow conditions through the motor are taken into account to improve the prediction accuracy of the model.

  1. Prediction of prostate cancer in unscreened men: external validation of a risk calculator.

    PubMed

    van Vugt, Heidi A; Roobol, Monique J; Kranse, Ries; Määttänen, Liisa; Finne, Patrik; Hugosson, Jonas; Bangma, Chris H; Schröder, Fritz H; Steyerberg, Ewout W

    2011-04-01

    Prediction models need external validation to assess their value beyond the setting from which the model was derived. To assess the external validity of the European Randomized study of Screening for Prostate Cancer (ERSPC) risk calculator (www.prostatecancer-riskcalculator.com) for the probability of having a positive prostate biopsy (P(posb)). The ERSPC risk calculator was based on data of the initial screening round of the ERSPC section Rotterdam and validated in 1825 and 531 men biopsied at the initial screening round in the Finnish and Swedish sections of the ERSPC respectively. P(posb) was calculated using serum prostate specific antigen (PSA), outcome of digital rectal examination (DRE), transrectal ultrasound and ultrasound-assessed prostate volume. The external validity was assessed for the presence of cancer at biopsy by calibration (agreement between observed and predicted outcomes), discrimination (separation of those with and without cancer), and decision curves (for clinical usefulness). Prostate cancer was detected in 469 men (26%) of the Finnish cohort and in 124 men (23%) of the Swedish cohort. Systematic miscalibration was present in both cohorts (mean predicted probability 34% versus 26% observed, and 29% versus 23% observed, both p<0.001). The areas under the curves were 0.76 and 0.78, and substantially lower for the model with PSA only (0.64 and 0.68 respectively). The model proved clinically useful for any decision threshold compared with a model with PSA only, PSA and DRE, or biopsying all men. A limitation is that the model is based on sextant biopsy results. The ERSPC risk calculator discriminated well between those with and without prostate cancer among initially screened men, but overestimated the risk of a positive biopsy. Further research is necessary to assess the performance and applicability of the ERSPC risk calculator when a clinical setting is considered rather than a screening setting. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. [Application of three compartment model and response surface model to clinical anesthesia using Microsoft Excel].

    PubMed

    Abe, Eiji; Abe, Mari

    2011-08-01

    With the spread of total intravenous anesthesia, clinical pharmacology has become more important. We report a Microsoft Excel file applying a three-compartment model and a response surface model to clinical anesthesia. On the Microsoft Excel sheet, propofol, remifentanil and fentanyl effect-site concentrations are predicted (three-compartment model), and the probabilities of no response to prodding, shaking, surrogates of painful stimuli, and laryngoscopy are calculated using the predicted effect-site drug concentrations. Time-dependent changes in these calculated values are shown graphically. Recent developments in anesthetic drug interaction studies are remarkable, and their application to clinical anesthesia with this Excel file is simple and helpful.
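
    A minimal sketch of the two calculation steps the spreadsheet combines, under simplifying assumptions: a first-order effect-site link (ke0) applied to a plasma-concentration series, and a sigmoid Emax response-surface-style curve for the probability of no response to a stimulus. All parameter values are hypothetical, not the spreadsheet's.

```python
import numpy as np

def effect_site_concentration(t, Cp, ke0):
    """First-order effect-site model dCe/dt = ke0*(Cp - Ce),
    integrated with forward Euler over the plasma-concentration series Cp(t)."""
    Ce = np.zeros_like(Cp)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        Ce[i] = Ce[i - 1] + dt * ke0 * (Cp[i - 1] - Ce[i - 1])
    return Ce

def prob_no_response(Ce, C50, gamma):
    """Sigmoid Emax model for the probability of no response to a stimulus."""
    return Ce**gamma / (C50**gamma + Ce**gamma)

# Hypothetical propofol plasma profile (ug/mL) sampled every 10 s.
t = np.arange(0, 600, 10.0)                        # s
Cp = 4.0 * (1 - np.exp(-t / 120.0))                # illustrative plasma curve
Ce = effect_site_concentration(t, Cp, ke0=0.0077)  # 1/s, hypothetical value
print(prob_no_response(Ce[-1], C50=3.0, gamma=4.0))
```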

  3. An analytical approach to obtaining JWL parameters from cylinder tests

    NASA Astrophysics Data System (ADS)

    Sutton, B. D.; Ferguson, J. W.; Hodgson, A. N.

    2017-01-01

    An analytical method for determining parameters for the JWL Equation of State from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated pressure-relative volume (p-Vr) curves agree with those produced by hydro-code modelling. The average calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-relative volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-Vr curve. The calculated energy is within 1.6% of that predicted by the model.
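
    A hedged sketch of the JWL principal-isentrope pressure form that underlies the p-Vr curves discussed above, with the expansion energy obtained by integrating p dV; the parameter set is an order-of-magnitude placeholder, not the EDC37 fit derived in the paper.

```python
import numpy as np

def jwl_isentrope_pressure(V, A, B, C, R1, R2, omega):
    """JWL pressure (GPa) on the principal isentrope as a function of
    relative volume V = v/v0."""
    return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C * V**-(1.0 + omega)

# Placeholder JWL parameters of the right order of magnitude for a solid
# explosive (illustrative only; not the EDC37 values derived in the paper).
params = dict(A=550.0, B=7.5, C=1.1, R1=4.5, R2=1.2, omega=0.33)

V = np.linspace(0.73, 7.0, 400)                  # from near-CJ volume to 7 v0
p = jwl_isentrope_pressure(V, **params)

# Detonation energy available for cylinder expansion: integrate p dV.
energy_per_v0 = np.trapz(p, V)                   # GPa * (v/v0) = GJ per m^3 of HE
print(f"p(V=0.73) = {p[0]:.1f} GPa, energy to 7 Vr = {energy_per_v0:.1f} GJ/m^3")
```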

  4. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model.

    PubMed

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M; Phifer, Jeremy R; Paluch, Andrew S

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of [Formula: see text] log units (ranking 15 out of 62 entries), the correlation coefficient (R) was [Formula: see text] (ranking 35), and [Formula: see text] of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
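
    A minimal sketch of the thermodynamic step implied by the approach: the neutral-species partition coefficient obtained from the difference of solvation free energies in the two phases, log P = (dG_water - dG_cyclohexane)/(RT ln 10). The solvation free energies below are hypothetical, not SAMPL5 or SMD values.

```python
import numpy as np

R = 1.987204e-3          # gas constant, kcal/(mol*K)
T = 298.15               # K

def log_partition_coefficient(dG_water, dG_cyclohexane, temperature=T):
    """log10 cyclohexane/water partition coefficient of the neutral species
    from its solvation free energies (kcal/mol) in the two phases.

    A more negative dG in cyclohexane than in water gives log P > 0.
    """
    return (dG_water - dG_cyclohexane) / (R * temperature * np.log(10.0))

# Hypothetical SMD-style solvation free energies for one solute (kcal/mol).
print(f"log P = {log_partition_coefficient(-6.2, -8.0):.2f}")
```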

  5. A PRELIMINARY JUPITER MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubbard, W. B.; Militzer, B.

    In anticipation of new observational results for Jupiter's axial moment of inertia and gravitational zonal harmonic coefficients from the forthcoming Juno orbiter, we present a number of preliminary Jupiter interior models. We combine results from ab initio computer simulations of hydrogen–helium mixtures, including immiscibility calculations, with a new nonperturbative calculation of Jupiter's zonal harmonic coefficients, to derive a self-consistent model for the planet's external gravity and moment of inertia. We assume helium rain modified the interior temperature and composition profiles. Our calculation predicts zonal harmonic values to which measurements can be compared. Although some models fit the observed (pre-Juno) second- and fourth-order zonal harmonics to within their error bars, our preferred reference model predicts a fourth-order zonal harmonic whose absolute value lies above the pre-Juno error bars. This model has a dense core of about 12 Earth masses and a hydrogen–helium-rich envelope with approximately three times solar metallicity.

  6. Lift calculations based on accepted wake models for animal flight are inconsistent and sensitive to vortex dynamics.

    PubMed

    Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David

    2016-12-06

    There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including vortex span and distance between the bird and laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift generation with flapping wings.
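
    For readers unfamiliar with the quasi-steady estimates being compared, the sketch below shows simplified forms of two of them (Kutta-Joukowski and a hover-style actuator disk); the input values are placeholders, not the parrotlet measurements.

        # Two of the quasi-steady lift estimates discussed above, in simplified form.
        # Inputs are placeholder values, not the parrotlet data.
        import math

        rho = 1.225            # air density, kg/m^3

        def lift_kutta_joukowski(U, circulation, span):
            """L ~ rho * U * Gamma * b for a lifting line of span b moving at speed U."""
            return rho * U * circulation * span

        def lift_actuator_disk(induced_velocity, disk_radius):
            """Hover-style momentum theory: L = 2 * rho * A * v_i^2."""
            area = math.pi * disk_radius ** 2
            return 2.0 * rho * area * induced_velocity ** 2

        # Hypothetical numbers: 3 m/s flight speed, 0.05 m^2/s circulation, 0.2 m span;
        # 1.5 m/s induced velocity through a 0.12 m radius disk.
        print(lift_kutta_joukowski(3.0, 0.05, 0.2))   # N
        print(lift_actuator_disk(1.5, 0.12))          # N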

  7. Using models for the optimization of hydrologic monitoring

    USGS Publications Warehouse

    Fienen, Michael N.; Hunt, Randall J.; Doherty, John E.; Reeves, Howard W.

    2011-01-01

    Hydrologists are often asked what kind of monitoring network can most effectively support science-based water-resources management decisions. Currently (2011), hydrologic monitoring locations often are selected by addressing observation gaps in the existing network or non-science issues such as site access. A model might then be calibrated to available data and applied to a prediction of interest (regardless of how well-suited that model is for the prediction). However, modeling tools are available that can inform which locations and types of data provide the most 'bang for the buck' for a specified prediction. Put another way, the hydrologist can determine which observation data most reduce the model uncertainty around a specified prediction. An advantage of such an approach is the maximization of limited monitoring resources because it focuses on the difference in prediction uncertainty with or without additional collection of field data. Data worth can be calculated either through the addition of new data or subtraction of existing information by reducing monitoring efforts (Beven, 1993). The latter generally is not widely requested as there is explicit recognition that the worth calculated is fundamentally dependent on the prediction specified. If a water manager needs a new prediction, the benefits of reducing the scope of a monitoring effort, based on an old prediction, may be erased by the loss of information important for the new prediction. This fact sheet focuses on the worth or value of new data collection by quantifying the reduction in prediction uncertainty achieved by adding a monitoring observation. This calculation of worth can be performed for multiple potential locations (and types) of observations, which then can be ranked for their effectiveness for reducing uncertainty around the specified prediction. This is implemented using a Bayesian approach with the PREDUNC utility in the parameter estimation software suite PEST (Doherty, 2010). The techniques briefly described earlier are described in detail in a U.S. Geological Survey Scientific Investigations Report available on the Internet (Fienen and others, 2010; http://pubs.usgs.gov/sir/2010/5159/). This fact sheet presents a synopsis of the techniques as applied to a synthetic model based on a model constructed using properties from the Lake Michigan Basin (Hoard, 2010).
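
    The underlying idea can be sketched with a generic first-order, linear Bayesian data-worth calculation (in the spirit of, though not identical to, the PREDUNC utility): prediction variance is computed with and without a candidate observation, and the difference is the worth of that observation. All matrices below are small made-up examples.

        # Generic linear Bayesian sketch of the data-worth idea described above.
        import numpy as np

        def prediction_variance(y, X, Cp, Ce):
            """Posterior variance of a prediction with sensitivity vector y,
            given observation sensitivities X, prior parameter covariance Cp and
            observation error covariance Ce (linear/FOSM approximation)."""
            if X.size == 0:
                return float(y @ Cp @ y)
            S = X @ Cp @ X.T + Ce
            return float(y @ Cp @ y - y @ Cp @ X.T @ np.linalg.solve(S, X @ Cp @ y))

        # Two parameters, one existing observation, one candidate observation.
        Cp = np.diag([1.0, 4.0])                 # prior parameter variances
        y = np.array([0.5, 1.0])                 # prediction sensitivities
        X_base = np.array([[1.0, 0.2]])          # existing observation
        X_plus = np.vstack([X_base, [0.1, 0.9]]) # plus the candidate observation

        var_base = prediction_variance(y, X_base, Cp, np.eye(1) * 0.1)
        var_plus = prediction_variance(y, X_plus, Cp, np.eye(2) * 0.1)
        print("worth of new observation:", var_base - var_plus)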

  8. Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions

    NASA Astrophysics Data System (ADS)

    Van Hooidonk, R. J.

    2011-12-01

    Future widespread coral bleaching and subsequent mortality have been projected with sea surface temperature (SST) data from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. These model weaknesses likely reduce the skill of coral bleaching predictions, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends, and their propagation in predictions. To analyze the relative importance of various types of model errors and biases on coral reef bleaching predictive skill, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from the 20th century simulations of GCMs included in the Intergovernmental Panel on Climate Change (IPCC) 5th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate skill using an objective measure of forecast quality, the Peirce Skill Score (PSS). This methodology will identify frequency bands that are important to predicting coral bleaching and will highlight deficiencies in these bands in models. The methodology we describe can be used to improve future climate-model-derived predictions of coral reef bleaching and to better characterize the errors and uncertainty in predictions.
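
    The Peirce Skill Score used to quantify forecast quality can be computed from a 2x2 contingency table of predicted versus observed bleaching events, as in the short sketch below (the counts are hypothetical).

        # Sketch of the Peirce Skill Score (PSS) from a 2x2 contingency table.
        def peirce_skill_score(hits, false_alarms, misses, correct_negatives):
            """PSS = hit rate - false alarm rate; 1 is perfect, 0 is no skill."""
            hit_rate = hits / (hits + misses)
            false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
            return hit_rate - false_alarm_rate

        # Hypothetical counts of predicted/observed bleaching events:
        print(peirce_skill_score(hits=18, false_alarms=7, misses=5, correct_negatives=70))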

  9. [Parameters modification and evaluation of two evapotranspiration models based on Penman-Monteith model for summer maize].

    PubMed

    Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing

    2017-06-18

    The dynamic variations of evapotranspiration (ET) and weather data during the summer maize growing seasons in 2013-2015 were monitored with an eddy covariance system, and the applicability of two operational models (the FAO-PM model and the KP-PM model) based on the Penman-Monteith model was analyzed. Firstly, the key parameters in the two models were calibrated with the measured data from 2013 and 2014; secondly, the daily ET in 2015 calculated by the FAO-PM and KP-PM models was compared to the observed ET. Finally, the coefficients in the KP-PM model were further revised with coefficients calculated separately for the different growth stages, and the performance of the revised KP-PM model was also evaluated. The statistical evaluation indicated that the daily ET for 2015 calculated by the FAO-PM model was closer to the observed ET than that by the KP-PM model, while the daily ET calculated from the revised KP-PM model was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so calibration is necessary before using the models to predict ET. The above results provide some guidelines for predicting ET with the two models.
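
    For reference, the standard FAO-56 form of the Penman-Monteith reference evapotranspiration equation is sketched below; this is the generic formula, not the calibrated parameter set developed in the study.

        # Standard FAO-56 Penman-Monteith reference evapotranspiration (ET0).
        def et0_fao56(delta, Rn, G, gamma, T, u2, es, ea):
            """Daily reference evapotranspiration ET0 (mm/day).

            delta : slope of the vapour pressure curve (kPa/degC)
            Rn, G : net radiation and soil heat flux (MJ m-2 day-1)
            gamma : psychrometric constant (kPa/degC)
            T     : mean daily air temperature (degC)
            u2    : wind speed at 2 m (m/s)
            es, ea: saturation and actual vapour pressure (kPa)
            """
            numerator = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
            denominator = delta + gamma * (1.0 + 0.34 * u2)
            return numerator / denominator

        # Example with plausible mid-season values:
        print(round(et0_fao56(delta=0.18, Rn=15.0, G=0.5, gamma=0.066,
                              T=25.0, u2=2.0, es=3.17, ea=2.1), 2))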

  10. Solar g-modes? Comparison of detected asymptotic g-mode frequencies with solar model predictions

    NASA Astrophysics Data System (ADS)

    Wood, Suzannah Rebecca; Guzik, Joyce Ann; Mussack, Katie; Bradley, Paul A.

    2018-06-01

    After many years of searching for solar gravity modes, Fossat et al. (2017) reported detection of nearly equally spaced high-order g-mode periods using a 15-year time series of GOLF data from the SOHO spacecraft. Here we report progress towards, and challenges associated with, calculating g-mode period predictions for several previously published standard solar models using various abundance mixtures and opacities, as well as for some non-standard models incorporating early mass loss, and compare these predictions with the periods reported by Fossat et al. (2017). Additionally, we present a side-by-side comparison of results from different stellar pulsation codes used to calculate the g-mode predictions. These comparisons will allow testing of nonstandard physics inputs that affect the core, including an early more massive Sun and dynamic electron screening.

  11. On calculation of a steam-water flow in a geothermal well

    NASA Astrophysics Data System (ADS)

    Shulyupin, A. N.; Chermoshentseva, A. A.

    2013-08-01

    Approaches to the calculation of a steam-water flow in a geothermal well are considered. For hydraulic applications, a WELL-4 model of a steam-water well is developed. Data obtained using this model are compared with experimental data and also with calculations by similar models, including the well-known HOLA model. The capacity of the A-2 well in the Mutnovskoe flash-steam field (Kamchatka peninsula, Russia) after planned reconstruction is predicted.

  12. Model representations of kerogen structures: An insight from density functional theory calculations and spectroscopic measurements

    DOE PAGES

    Weck, Philippe F.; Kim, Eunja; Wang, Yifeng; ...

    2017-08-01

    Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from Mancos, Woodford and Marcellus formations representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR)-fingerprints for tracing kerogen evolution.

  13. Model representations of kerogen structures: An insight from density functional theory calculations and spectroscopic measurements.

    PubMed

    Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M

    2017-08-01

    Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from Mancos, Woodford and Marcellus formations representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR)-fingerprints for tracing kerogen evolution.

  14. Model representations of kerogen structures: An insight from density functional theory calculations and spectroscopic measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weck, Philippe F.; Kim, Eunja; Wang, Yifeng

    Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from Mancos, Woodford and Marcellus formations representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR)-fingerprints for tracing kerogen evolution.

  15. Assessing the accuracy of predictive models for numerical data: Not r nor r2, why not? Then what?

    PubMed Central

    2017-01-01

    Assessing the accuracy of predictive models is critical because predictive models have been increasingly used across various disciplines and predictive accuracy determines the quality of the resultant predictions. The Pearson product-moment correlation coefficient (r) and the coefficient of determination (r2) are among the most widely used measures for assessing predictive models for numerical data, although they are argued to be biased, insufficient and misleading. In this study, geometrical graphs were used to illustrate what is actually used in the calculation of r and r2, and simulations were used to demonstrate the behaviour of r and r2 and to compare three accuracy measures under various scenarios. Relevant confusions about r and r2 have been clarified. The calculation of r and r2 is not based on the differences between the predicted and observed values. The existing error measures suffer various limitations and are unable to indicate the accuracy. Variance explained by predictive models based on cross-validation (VEcv) is free of these limitations and is a reliable accuracy measure. Legates and McCabe's efficiency (E1) is also an alternative accuracy measure. The r and r2 do not measure the accuracy and are incorrect accuracy measures. The existing error measures suffer limitations. VEcv and E1 are recommended for assessing the accuracy. The application of these accuracy measures would encourage the development of accuracy-improved predictive models to generate predictions for evidence-informed decision-making. PMID:28837692
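
    The two recommended measures can be computed directly from observed values and (cross-validated) predictions, as in the sketch below, following the usual definitions of VEcv and Legates and McCabe's E1.

        # Sketch of the two accuracy measures recommended above.
        import numpy as np

        def vecv(observed, predicted_cv):
            """Variance explained by cross-validation (%):
            100 * (1 - SSE / total sum of squares about the observed mean)."""
            observed, predicted_cv = np.asarray(observed), np.asarray(predicted_cv)
            sse = np.sum((observed - predicted_cv) ** 2)
            sst = np.sum((observed - observed.mean()) ** 2)
            return 100.0 * (1.0 - sse / sst)

        def legates_mccabe_e1(observed, predicted):
            """Legates and McCabe's efficiency E1 (absolute-error analogue)."""
            observed, predicted = np.asarray(observed), np.asarray(predicted)
            return 1.0 - np.sum(np.abs(observed - predicted)) / np.sum(np.abs(observed - observed.mean()))

        obs = [2.1, 3.4, 5.0, 4.2, 6.3]
        pred = [2.4, 3.1, 4.6, 4.8, 6.0]
        print(vecv(obs, pred), legates_mccabe_e1(obs, pred))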

  16. Predicting solar radiation based on available weather indicators

    NASA Astrophysics Data System (ADS)

    Sauer, Frank Joseph

    Solar radiation prediction models are complex and require software that is not available to the household investor, even though the processing power of a normal desktop or laptop computer is sufficient to calculate similar models. This barrier to entry for the average consumer can be lowered by a model simple enough to be calculated by hand if necessary. Solar radiation has historically been difficult to predict, and accurate models carry significant assumptions and restrictions on their use. Previous methods have been limited to linear relationships, restricted locations, or input data limited to one atmospheric condition. This research takes a novel approach by combining two techniques within the computational limits of a household computer: clustering and Hidden Markov Models (HMMs). Clustering helps limit the large observation space that otherwise restricts the use of HMMs. Instead of using continuous data, which would require significantly more computation, the cluster can be used as a qualitative descriptor of each observation. HMMs incorporate a level of uncertainty and take into account the indirect relationship between meteorological indicators and solar radiation. This reduces the complexity of the model enough for it to be easily understood and accessible to the average household investor. The solar radiation is treated as an unobservable state that each household is unable to measure. The high temperature and the sky coverage are already available through the local or preferred source of weather information. By using the next day's prediction for high temperature and sky coverage, the model groups the data and then predicts the most likely range of radiation. This model uses simple techniques and calculations to give a broad estimate of the solar radiation when no other universal model exists for the average household.

  17. Testing the Predictive Power of Coulomb Stress on Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.

    2009-12-01

    Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.

  18. A hybrid deep neural network and physically based distributed model for river stage prediction

    NASA Astrophysics Data System (ADS)

    hitokoto, Masayuki; sakuraba, Masaaki

    2016-04-01

    We developed a real-time river stage prediction model using a hybrid of a deep neural network and a physically based distributed model. As the basic model, a 4-layer feed-forward artificial neural network (ANN) was used. As the network training method, the deep learning technique was applied. To optimize the network weights, the stochastic gradient descent method based on backpropagation was used. As a pre-training method, the denoising autoencoder was used. The inputs of the ANN model are the hourly change of water level and hourly rainfall; the output is the water level of the downstream station. In general, a desirable input of an ANN has a strong correlation with the output. In conceptual hydrological models such as the tank model and the storage-function model, river discharge is governed by the catchment storage. Therefore, the change of catchment storage (downstream discharge subtracted from rainfall) is a potent input candidate for the ANN model instead of rainfall. From this point of view, the hybrid deep neural network and physically based distributed model was developed. The prediction procedure of the hybrid model is as follows: first, the downstream discharge is calculated by the distributed model; then the hourly change of catchment storage is estimated from rainfall and the calculated discharge as the input of the ANN model; finally, the ANN model is evaluated. In the training phase, the hourly change of catchment storage can be calculated from the observed rainfall and discharge data. The developed model was applied to one catchment of the OOYODO River, one of the class A (first-grade) rivers in Japan. The modeled catchment is 695 square km. For the training data, 5 water-level gauging stations and 14 rain-gauge stations in the catchment were used. The 24 largest flood events during the period 2005-2014 were selected for training. Predictions were made up to 6 hours ahead, and 6 models were developed, one for each prediction time. To set the proper learning parameters and network architecture of the ANN model, a sensitivity analysis was done using a case study approach. The prediction results were evaluated on the 4 largest flood events using leave-one-out cross-validation. The prediction result of the basic 4-layer ANN was better than that of the conventional 3-layer ANN model. However, the model did not reproduce the biggest flood event well, presumably because of the lack of sufficient high-water-level flood events in the training data. The hybrid model outperformed both the basic ANN model and the distributed model, and especially improved on the performance of the basic ANN model for the biggest flood event.
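
    A minimal sketch of the hybrid idea, using the change of catchment storage (rainfall minus simulated discharge) as the ANN input, is given below with synthetic data and a generic scikit-learn network; it does not reproduce the authors' 4-layer architecture or the denoising-autoencoder pre-training.

        # Hybrid-input sketch: catchment storage change as the ANN feature.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n = 500
        rainfall = rng.gamma(2.0, 1.0, n)                  # mm/h, synthetic
        sim_discharge = 0.6 * np.convolve(rainfall, np.ones(5) / 5, mode="same")
        storage_change = rainfall - sim_discharge          # the hybrid-model input
        stage_change = 0.8 * np.roll(storage_change, 2) + rng.normal(0, 0.05, n)

        X = np.column_stack([storage_change, np.roll(storage_change, 1)])
        y = stage_change

        model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
        model.fit(X[:400], y[:400])
        print("R^2 on held-out hours:", round(model.score(X[400:], y[400:]), 3))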

  19. A simple nonlocal damage model for predicting failure of notched laminates

    NASA Technical Reports Server (NTRS)

    Kennedy, T. C.; Nahan, M. F.

    1995-01-01

    The ability to predict failure loads in notched composite laminates is a requirement in a variety of structural design circumstances. A complicating factor is the development of a zone of damaged material around the notch tip. The objective of this study was to develop a computational technique that simulates progressive damage growth around a notch in a manner that allows the prediction of failure over a wide range of notch sizes. This was accomplished through the use of a relatively simple, nonlocal damage model that incorporates strain-softening. This model was implemented in a two-dimensional finite element program. Calculations were performed for two different laminates with various notch sizes under tensile loading, and the calculations were found to correlate well with experimental results.

  20. Validation of an online risk calculator for the prediction of anastomotic leak after colon cancer surgery and preliminary exploration of artificial intelligence-based analytics.

    PubMed

    Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L

    2017-11-01

    Recently published data support the use of a web-based risk calculator (www.anastomoticleak.com) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. Area under receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program® (ACS NSQIP) calculator and the colon leakage score (CLS) calculator for left colectomy. Commercially available artificial intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and the CLS calculator (AUROC 0.96 vs 0.80) for left colectomy. Artificial intelligence-predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial intelligence-based analytics for risk prediction is warranted.

  1. Recalibration of the Shear Stress Transport Model to Improve Calculation of Shock Separated Flows

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Yoder, Dennis A.

    2013-01-01

    The Menter Shear Stress Transport (SST) k-omega turbulence model is one of the most widely used two-equation Reynolds-averaged Navier-Stokes turbulence models for aerodynamic analyses. The model extends Menter's baseline (BSL) model to include a limiter that prevents the calculated turbulent shear stress from exceeding a prescribed fraction of the turbulent kinetic energy via a proportionality constant, a1, set to 0.31. Compared to other turbulence models, the SST model yields superior predictions of mild adverse pressure gradient flows including those with small separations. In shock/boundary-layer interaction regions, the SST model produces separations that are too large while the BSL model is on the other extreme, predicting separations that are too small. In this paper, changing a1 to a value near 0.355 is shown to significantly improve predictions of shock separated flows. Several cases are examined computationally and experimental data is also considered to justify raising the value of a1 used for shock separated flows.
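
    The limiter in question is commonly written as nu_t = a1*k / max(a1*omega, S*F2); the sketch below shows how raising a1 from 0.31 to 0.355 increases the allowed turbulent shear stress where the limiter is active (the flow quantities are arbitrary illustrative values).

        # The SST shear-stress limiter in its commonly quoted form.
        def sst_eddy_viscosity(k, omega, strain_rate, F2, a1=0.31):
            """Kinematic eddy viscosity with the SST stress limiter."""
            return a1 * k / max(a1 * omega, strain_rate * F2)

        k, omega, S, F2 = 10.0, 500.0, 2500.0, 1.0    # limiter active: S*F2 > a1*omega
        print(sst_eddy_viscosity(k, omega, S, F2, a1=0.31))
        print(sst_eddy_viscosity(k, omega, S, F2, a1=0.355))   # ~15% more shear stress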

  2. Martian Radiation Environment: Model Calculations and Recent Measurements with "MARIE"

    NASA Technical Reports Server (NTRS)

    Saganti, P. B.; Cucinotta, F. A.; zeitlin, C. J.; Cleghorn, T. F.

    2004-01-01

    The Galactic Cosmic Ray spectra in Mars orbit were generated with the recently expanded HZETRN (High Z and Energy Transport) and QMSFRG (Quantum Multiple-Scattering theory of nuclear Fragmentation) model calculations. These model calculations are compared with the first eighteen months of measured data from the MARIE (Martian Radiation Environment Experiment) instrument onboard the 2001 Mars Odyssey spacecraft that is currently in Martian orbit. The dose rates observed by the MARIE instrument are within 10% of the model calculated predictions. Model calculations are compared with the MARIE measurements of dose, dose-equivalent values, along with the available particle flux distribution. Model calculated particle flux includes GCR elemental composition of atomic number, Z = 1-28 and mass number, A = 1-58. Particle flux calculations specific for the current MARIE mapping period are reviewed and presented.

  3. Artificial neural network modeling using clinical and knowledge independent variables predicts salt intake reduction behavior

    PubMed Central

    Isma’eel, Hussain A.; Sakr, George E.; Almedawar, Mohamad M.; Fathallah, Jihan; Garabedian, Torkom; Eddine, Savo Bou Zein

    2015-01-01

    Background High dietary salt intake is directly linked to hypertension and cardiovascular diseases (CVDs). Predicting behaviors regarding salt intake habits is vital to guide interventions and increase their effectiveness. We aim to compare the accuracy of an artificial neural network (ANN) based tool that predicts behavior from key knowledge questions along with clinical data in a high cardiovascular risk cohort relative to the least square models (LSM) method. Methods We collected knowledge, attitude and behavior data on 115 patients. A behavior score was calculated to classify patients' behavior towards reducing salt intake. Accuracy comparison between ANN and regression analysis was calculated using the bootstrap technique with 200 iterations. Results Starting from a 69-item questionnaire, a reduced model was developed and included eight knowledge items found to result in the highest accuracy of 62% (CI 58-67%). The best prediction accuracy in the full and reduced models was attained by ANN at 66% and 62%, respectively, compared to full and reduced LSM at 40% and 34%, respectively. The average relative increase in accuracy of ANN over LSM in the full and reduced models is 82% and 102%, respectively. Conclusions Using ANN modeling, we can predict salt reduction behaviors with 66% accuracy. The statistical model has been implemented in an online calculator and can be used in clinics to estimate the patient's behavior. This will help future research to further prove the clinical utility of this tool to guide therapeutic salt reduction interventions in high cardiovascular risk individuals. PMID:26090333

  4. Fluidized bed combustor modeling

    NASA Technical Reports Server (NTRS)

    Horio, M.; Rengarajan, P.; Krishnan, R.; Wen, C. Y.

    1977-01-01

    A general mathematical model for the prediction of performance of a fluidized bed coal combustor (FBC) is developed. The basic elements of the model consist of: (1) hydrodynamics of gas and solids in the combustor; (2) description of gas and solids contacting pattern; (3) kinetics of combustion; and (4) absorption of SO2 by limestone in the bed. The model is capable of calculating the combustion efficiency, axial bed temperature profile, carbon hold-up in the bed, oxygen and SO2 concentrations in the bubble and emulsion phases, sulfur retention efficiency and particulate carry-over by elutriation. The effects of bed geometry, excess air, location of heat transfer coils in the bed, calcium to sulfur ratio in the feeds, etc. are examined. The calculated results are compared with experimental data. Agreement between the calculated results and the observed data is satisfactory in most cases. Recommendations to enhance the predictive accuracy of the model are given.

  5. Analysis and calculation of macrosegregation in a casting ingot. MPS solidification model. Volume 1: Formulation and analysis

    NASA Technical Reports Server (NTRS)

    Maples, A. L.; Poirier, D. R.

    1980-01-01

    The physical and numerical formulation of a model for the horizontal solidification of a binary alloy ingot is described. The major purpose of the model is to calculate macrosegregation in a casting ingot, which results from flow of interdendritic liquid during solidification. The flow, driven by solidification contractions and by gravity acting on density gradients in the interdendritic liquid, was modeled as flow through a porous medium. The symbols used are defined. The physical formulation of the problem leads to a set of equations which can be used to obtain: (1) the pressure field; (2) the velocity field; (3) mass flow; and (4) solute flow in the solid-plus-liquid zone during solidification. With these established, the model calculates macrosegregation after solidification is complete. The numerical techniques used to obtain solutions on a computational grid are presented. Results, evaluation of the results, and recommendations for future development of the model are given. The macrosegregation and flow field predictions for tin-lead, aluminum-copper, and tin-bismuth alloys are included, as well as comparisons of some of the predictions with published predictions or with empirical data.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dholabhai, Pratik P., E-mail: pratik.dholabhai@asu.ed; Anwar, Shahriar, E-mail: anwar@asu.ed; Adams, James B., E-mail: jim.adams@asu.ed

    A kinetic lattice Monte Carlo (KLMC) model is developed for investigating oxygen vacancy diffusion in praseodymium-doped ceria. The current approach uses a database of activation energies for oxygen vacancy migration, calculated using first-principles methods, for various migration pathways in praseodymium-doped ceria. Since the first-principles calculations revealed significant vacancy-vacancy repulsion, we investigate the importance of that effect by conducting simulations with and without a repulsive interaction. Initially, as dopant concentrations increase, vacancy concentration and thus conductivity increases. However, at higher concentrations, vacancies interfere and repel one another, and dopants trap vacancies, creating a 'traffic jam' that decreases conductivity, which is consistent with the experimental findings. The modeled effective activation energy for vacancy migration slightly increased with increasing dopant concentration, in qualitative agreement with experiment. The current methodology, comprising a blend of first-principles calculations and a KLMC model, provides a very powerful fundamental tool for predicting the optimal dopant concentration in ceria-related materials. Graphical abstract: ionic conductivity in praseodymium-doped ceria as a function of dopant concentration calculated using the kinetic lattice Monte Carlo vacancy-repelling model, which predicts the optimal composition for achieving maximum conductivity. Research highlights: the KLMC method calculates the accurate time-dependent diffusion of oxygen vacancies; the KLMC-VR model predicts a dopant concentration of ~15-20% to be optimal in PDC; at higher dopant concentrations, vacancies interfere and repel one another, and dopants trap vacancies; the activation energy for vacancy migration increases as a function of dopant content.

  7. Finite element modeling as a tool for predicting the fracture behavior of robocast scaffolds.

    PubMed

    Miranda, Pedro; Pajares, Antonia; Guiberteau, Fernando

    2008-11-01

    The use of finite element modeling to calculate the stress fields in complex scaffold structures and thus predict their mechanical behavior during service (e.g., as load-bearing bone implants) is evaluated. The method is applied to identifying the fracture modes and estimating the strength of robocast hydroxyapatite and beta-tricalcium phosphate scaffolds, consisting of a three-dimensional lattice of interpenetrating rods. The calculations are performed for three testing configurations: compression, tension and shear. Different testing orientations relative to the calcium phosphate rods are considered for each configuration. The predictions for the compressive configurations are compared to experimental data from uniaxial compression tests.

  8. Calculation of single chain cellulose elasticity using fully atomistic modeling

    Treesearch

    Xiawa Wu; Robert J. Moon; Ashlie Martini

    2011-01-01

    Cellulose nanocrystals, a potential base material for green nanocomposites, are ordered bundles of cellulose chains. The properties of these chains have been studied for many years using atomic-scale modeling. However, model predictions are difficult to interpret because of the significant dependence of predicted properties on model details. The goal of this study is...

  9. Validation of CRASH Model in Prediction of 14-day Mortality and 6-month Unfavorable Outcome of Head Trauma Patients

    PubMed Central

    Hashemi, Behrooz; Amanat, Mahnaz; Baratloo, Alireza; Forouzanfar, Mohammad Mehdi; Rahmati, Farhad; Motamedi, Maryam; Safari, Saeed

    2016-01-01

    Introduction: To date, many prognostic models have been proposed to predict the outcome of patients with traumatic brain injuries. External validation of these models in different populations is of great importance for their generalization. The present study was designed to determine the value of the CRASH prognostic model in prediction of 14-day mortality (14-DM) and 6-month unfavorable outcome (6-MUO) of patients with traumatic brain injury. Methods: In the present prospective diagnostic test study, calibration and discrimination of the CRASH model were evaluated in head trauma patients referred to the emergency department. Variables required for calculating the CRASH expected risks (ER) were gathered. Then the ER of 14-DM and 6-MUO were calculated. The patients were followed for 6 months and their observed 14-DM and 6-MUO were recorded. Finally, the correlation of the CRASH ER and the observed outcome of the patients was evaluated. The data were analyzed using STATA version 11.0. Results: In this study, 323 patients with the mean age of 34.0 ± 19.4 years were evaluated (87.3% male). Calibration of the basic and CT models in prediction of 14-day and 6-month outcomes was in the desirable range (P < 0.05). The area under the curve in the basic model for prediction of 14-DM and 6-MUO was 0.92 (95% CI: 0.89-0.96) and 0.92 (95% CI: 0.90-0.95), respectively. In addition, the area under the curve in the CT model for prediction of 14-DM and 6-MUO was 0.93 (95% CI: 0.91-0.97) and 0.93 (95% CI: 0.91-0.96), respectively. There was no significant difference between the discriminations of the two models in prediction of 14-DM (p = 0.11) and 6-MUO (p = 0.1). Conclusion: The results of the present study showed that the CRASH prediction model has proper discrimination and calibration in predicting 14-DM and 6-MUO of head trauma patients. Since there was no difference between the values of the basic and CT models, using the basic model is recommended to simplify the risk calculations. PMID:27800540

  10. Influence of dose calculation algorithms on the predicted dose distribution and NTCP values for NSCLC patients.

    PubMed

    Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten

    2011-05-01

    The aim was to investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone and XiO Multigrid Superposition, and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 at the same accelerator were included, and the accelerator used for the treatments was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP for the lungs was calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences were found between the calculated doses to the heart, lungs, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to change between the investigated dose calculation algorithms. However, the dose levels for the PTV averaged over the patient population vary by up to 11% between algorithms. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of the use of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set. For fixed values of these, the changes in NTCP can be up to 45%. Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important when working with NTCP planning to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.

  11. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell

    2011-01-01

    Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. Validation test hardware provided direct measurement of net heat input for comparison with predicted values. The predicted value of net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.

  12. The effect of the hot oxygen corona on the interaction of the solar wind with Venus

    NASA Technical Reports Server (NTRS)

    Belotserkovskii, O. M.; Mitnitskii, V. IA.; Breus, T. K.; Krymskii, A. M.; Nagy, A. F.

    1987-01-01

    A numerical gasdynamic model, which includes the effects of mass loading of the shocked solar wind, was used to calculate the density and magnetic field variations in the magnetosheath of Venus. These calculations were carried out for conditions corresponding to a specific orbit of the Pioneer Venus Orbiter (PVO orbit 582). A comparison of the model predictions and the measured shock position, density and magnetic field values showed a reasonable agreement, indicating that a gasdynamic model that includes the effects of mass loading can be used to predict these parameters.

  13. A Hybrid Approach To Tandem Cylinder Noise

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2004-01-01

    Aeolian tone generation from tandem cylinders is predicted using a hybrid approach. A standard computational fluid dynamics (CFD) code is used to compute the unsteady flow around the cylinders, and the acoustics are calculated using the acoustic analogy. The CFD code is nominally second order in space and time and includes several turbulence models, but the SST k-omega model is used for most of the calculations. Significant variation is observed between laminar and turbulent cases, and with changes in the turbulence model. A two-dimensional implementation of the Ffowcs Williams-Hawkings (FW-H) equation is used to predict the far-field noise.

  14. The effect of the hot oxygen corona on the interaction of the solar wind with Venus

    NASA Astrophysics Data System (ADS)

    Belotserkovskii, O. M.; Breus, T. K.; Krymskii, A. M.; Mitnitskii, V. Ya.; Nagey, A. F.; Gombosi, T. I.

    1987-05-01

    A numerical gas dynamic model, which includes the effects of mass loading of the shocked solar wind, was used to calculate the density and magnetic field variations in the magnetosheath of Venus. These calculations were carried out for conditions corresponding to a specific orbit of the Pioneer Venus Orbiter (PVO orbit 582). A comparison of the model predictions and the measured shock position, density and magnetic field values showed a reasonable agreement, indicating that a gas dynamic model that includes the effects of mass loading can be used to predict these parameters.

  15. Micro Finite Element models of the vertebral body: Validation of local displacement predictions.

    PubMed

    Costa, Maria Cristiana; Tozzi, Gianluca; Cristofolini, Luca; Danesi, Valentina; Viceconti, Marco; Dall'Ara, Enrico

    2017-01-01

    The estimation of local and structural mechanical properties of bones with micro Finite Element (microFE) models based on Micro Computed Tomography images depends on how well the bone geometry is captured, reconstructed and modelled. The aim of this study was to validate microFE model predictions of local displacements for vertebral bodies and to evaluate the effect of the elastic tissue modulus on the models' predictions of axial forces. Four porcine thoracic vertebrae were axially compressed in situ, in a step-wise fashion, and scanned at approximately 39μm resolution in preloaded and loaded conditions. A global digital volume correlation (DVC) approach was used to compute the full-field displacements. Homogeneous, isotropic and linear elastic microFE models were generated with boundary conditions assigned from the interpolated displacement field measured from the DVC. Measured and predicted local displacements were compared for the cortical and trabecular compartments in the middle of the specimens. Models were run with two different tissue moduli defined from microindentation data (12.0 GPa) and a back-calculation procedure (4.6 GPa). The predicted sum of axial reaction forces was compared to the experimental values for each specimen. MicroFE models predicted more than 87% of the variation in the displacement measurements (R2 = 0.87-0.99). However, model predictions of axial forces were largely overestimated (by 80-369%) for a tissue modulus of 12.0 GPa, whereas differences in the range of 10-80% were found for the back-calculated tissue modulus. The specimen with the lowest density showed a large number of elements strained beyond yield and the highest predictive errors. This study shows that the simplest microFE models can accurately predict the local displacements quantitatively and the strain distribution qualitatively within the vertebral body, independently of the bone type considered.

  16. Modeling and predicting low-speed vehicle emissions as a function of driving kinematics.

    PubMed

    Hao, Lijun; Chen, Wei; Li, Lei; Tan, Jianwei; Wang, Xin; Yin, Hang; Ding, Yan; Ge, Yunshan

    2017-05-01

    An instantaneous emission model was developed to model and predict the real driving emissions of low-speed vehicles. The emission database used in the model was measured using a portable emission measurement system (PEMS) under actual traffic conditions in a rural area, and the characteristics of the emission data were determined in relation to the driving kinematics (speed and acceleration) of the low-speed vehicle. The input of the emission model is a driving cycle; the model requires instantaneous vehicle speed and acceleration as input variables and uses them to interpolate the pollutant emission rate maps to calculate the transient pollutant emission rates, which are accumulated to obtain the total emissions released over the whole driving cycle. The vehicle fuel consumption was determined through the carbon balance method. The model predicted the emissions and fuel consumption of an in-use low-speed vehicle model, and the predictions agreed well with the measured data.
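
    The structure of such an instantaneous emission calculation, interpolating a speed-acceleration emission-rate map along a driving cycle and applying a simplified carbon balance, can be sketched as follows; the map values and cycle are synthetic placeholders, not the PEMS data from the study.

        # Sketch of an instantaneous emission calculation from a speed-acceleration map.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        speeds = np.array([0.0, 10.0, 20.0, 30.0, 40.0])        # km/h
        accels = np.array([-1.0, 0.0, 1.0])                     # m/s^2
        co2_rate_map = np.array([[0.5, 0.8, 1.5],               # g/s at each (v, a)
                                 [0.7, 1.0, 2.0],
                                 [0.9, 1.3, 2.6],
                                 [1.1, 1.6, 3.2],
                                 [1.3, 1.9, 3.9]])
        co2_rate = RegularGridInterpolator((speeds, accels), co2_rate_map,
                                           bounds_error=False, fill_value=None)

        # One-second driving cycle samples: speed (km/h) and acceleration (m/s^2).
        cycle_v = np.array([0, 5, 12, 20, 28, 30, 25, 15, 5, 0], dtype=float)
        cycle_a = np.append(np.diff(cycle_v) / 3.6, 0.0)         # km/h per s -> m/s^2
        rates = co2_rate(np.column_stack([cycle_v, cycle_a]))    # g/s at each second
        total_co2_g = rates.sum() * 1.0                          # dt = 1 s

        # Simplified carbon balance (CO2 only): fuel mass from emitted carbon.
        fuel_g = total_co2_g * 12.0 / 44.0 / 0.866               # fuel carbon fraction ~0.866
        print(round(total_co2_g, 1), "g CO2,", round(fuel_g, 1), "g fuel")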

  17. Wheel life prediction model - an alternative to the FASTSIM algorithm for RCF

    NASA Astrophysics Data System (ADS)

    Hossein-Nia, Saeed; Sichani, Matin Sh.; Stichel, Sebastian; Casanueva, Carlos

    2018-07-01

    In this article, a wheel life prediction model considering wear and rolling contact fatigue (RCF) is developed and applied to a heavy-haul locomotive. For wear calculations, a methodology based on Archard's wear calculation theory is used. The simulated wear depth is compared with profile measurements within 100,000 km. For RCF, a shakedown-based theory is applied locally, using the FaStrip algorithm to estimate the tangential stresses instead of FASTSIM. The effect of the differences between the two algorithms on the damage prediction models is studied. The running distance between two reprofilings due to RCF is estimated based on a Wöhler-like relationship developed from laboratory test results from the literature and the Palmgren-Miner rule. The simulated crack locations and their angles are compared with a five-year field study. Calculations to study the effects of electro-dynamic braking, track gauge, harder wheel material and the increase of axle load on the wheel life are also carried out.

  18. Turbine Vane External Heat Transfer. Volume 2. Numerical Solutions of the Navier-stokes Equations for Two- and Three-dimensional Turbine Cascades with Heat Transfer

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Weinberg, B. C.; Shamroth, S. J.; Mcdonald, H.

    1985-01-01

    The application of the time-dependent ensemble-averaged Navier-Stokes equations to transonic turbine cascade flow fields was examined. In particular, efforts focused on an assessment of the procedure in conjunction with a suitable turbulence model to calculate steady turbine flow fields using an O-type coordinate system. Three cascade configurations were considered. Comparisons were made between the predicted and measured surface pressures and heat transfer distributions wherever available. In general, the pressure predictions were in good agreement with the data. Heat transfer calculations also showed good agreement when an empirical transition model was used. However, further work in the development of laminar-turbulent transitional models is indicated. The calculations showed most of the known features associated with turbine cascade flow fields. These results indicate the ability of the Navier-Stokes analysis to predict, in reasonable amounts of computation time, the surface pressure distribution, heat transfer rates, and viscous flow development for turbine cascades operating at realistic conditions.

  19. lazar: a modular predictive toxicology framework

    PubMed Central

    Maunz, Andreas; Gütlein, Martin; Rautenberg, Micha; Vorgrimmler, David; Gebele, Denis; Helma, Christoph

    2013-01-01

    lazar (lazy structure–activity relationships) is a modular framework for predictive toxicology. Similar to the read across procedure in toxicological risk assessment, lazar creates local QSAR (quantitative structure–activity relationship) models for each compound to be predicted. Model developers can choose between a large variety of algorithms for descriptor calculation and selection, chemical similarity indices, and model building. This paper presents a high level description of the lazar framework and discusses the performance of example classification and regression models. PMID:23761761

  20. A prediction model for lift-fan simulator performance. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Yuska, J. A.

    1972-01-01

    The performance characteristics of a model VTOL lift-fan simulator installed in a two-dimensional wing are presented. The lift-fan simulator consisted of a 15-inch diameter fan driven by a turbine contained in the fan hub. The performance of the lift-fan simulator was measured in two ways: (1) the calculated momentum thrust of the fan and turbine (total thrust loading), and (2) the axial-force measured on a load cell force balance (axial-force loading). Tests were conducted over a wide range of crossflow velocities, corrected tip speeds, and wing angle of attack. A prediction modeling technique was developed to help in analyzing the performance characteristics of lift-fan simulators. A multiple linear regression analysis technique is presented which calculates prediction model equations for the dependent variables.

  1. Ionosonde-based indices for improved representation of solar cycle variation in the International Reference Ionosphere model

    NASA Astrophysics Data System (ADS)

    Brown, Steven; Bilitza, Dieter; Yiǧit, Erdal

    2018-06-01

    A new monthly ionospheric index, IGNS, is presented to improve the representation of the solar cycle variation of the ionospheric F2 peak plasma frequency, foF2. IGNS is calculated using a methodology similar to the construction of the "global effective sunspot number", IG, given by Liu et al. (1983), but selects ionosonde observations by hemisphere. We incorporated the updated index into the International Reference Ionosphere (IRI) model and compared the foF2 model predictions with global ionospheric observations. We also investigated the influence of the underlying foF2 model on the IG index. IRI has two options for foF2 specification, the CCIR-66 and URSI-88 foF2 models. For the first time, we have calculated IG using URSI-88 and assessed the impact on model predictions. Through a retrospective model-data comparison, results show that the inclusion of the new monthly IGNS index in place of the current 12-month smoothed IG index reduces the foF2 model prediction errors by nearly a factor of two. These results apply to both daytime and nighttime predictions. This is due to an overall improved prediction of foF2 seasonal and solar cycle variations in the different hemispheres.

  2. Molecular Dynamic Simulation and Inhibitor Prediction of Cysteine Synthase Structured Model as a Potential Drug Target for Trichomoniasis

    PubMed Central

    Singh, Satendra; Singh, Atul Kumar; Gautam, Budhayash

    2013-01-01

    In the present research, we attempted to predict the 3D structure of cysteine synthase (A2GMG5_TRIVA) using homology-modeling approaches. To investigate the predicted structure more deeply, we further performed a 10 ns molecular dynamics simulation and calculated several supporting analyses of structural properties, such as RMSF, radius of gyration, and total energy, to support the predicted model of cysteine synthase. The present findings led us to conclude that the proposed model is stereochemically stable. The overall PROCHECK G factor for the homology-modeled structure was −0.04. On the basis of virtual screening of cysteine synthase against the NCI subset II molecules, we present the molecule 1-N,4-N-bis[3-(1H-benzimidazol-2-yl)phenyl]benzene-1,4-dicarboxamide (ZINC01690699), having the minimum energy score (−13.0 kcal/mol) and a log P value of 6, as a potential inhibitor of T. vaginalis infection. PMID:24073401

  3. A battery power model for the EUVE spacecraft

    NASA Technical Reports Server (NTRS)

    Yen, Wen L.; Littlefield, Ronald G.; Mclean, David R.; Tuchman, Alan; Broseghini, Todd A.; Page, Brenda J.

    1993-01-01

    This paper describes a battery power model that has been developed to simulate and predict the behavior of the 50 ampere-hour nickel-cadmium battery that supports the Extreme Ultraviolet Explorer (EUVE) spacecraft in its low Earth orbit. First, for given orbit, attitude, solar array panel and spacecraft load data, the model calculates minute-by-minute values for the net power available for charging the battery for a user-specified time period (usually about two weeks). Next, the model is used to calculate minute-by-minute values for the battery voltage, current and state-of-charge for the time period. The model's calculations are explained for its three phases: sunrise charging phase, constant voltage phase, and discharge phase. A comparison of predicted model values for voltage, current and state-of-charge with telemetry data for a complete charge-discharge cycle shows good correlation. This C-based computer model will be used by the EUVE Flight Operations Team for various 'what-if' scheduling analyses.
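
    A minimal sketch of the minute-by-minute bookkeeping such a model performs is shown below; the bus voltage, charge efficiency, and power profile are invented for illustration and do not reproduce the EUVE model's actual three-phase logic (sunrise charging, constant voltage, discharge).

    ```python
    import numpy as np

    # Assumed, illustrative battery parameters (not EUVE values).
    CAPACITY_AH = 50.0          # nameplate capacity, Ah
    BUS_VOLTAGE = 28.0          # simplified constant bus voltage, V
    CHARGE_EFF  = 0.85          # fraction of charge current actually stored

    # Net power available to the battery for each minute of one orbit:
    # positive while sunlit (array output minus loads), negative in eclipse.
    net_power_w = np.concatenate([np.full(60, 300.0),     # sunlit charging
                                  np.full(35, -250.0)])   # eclipse discharge

    soc_ah = 40.0               # initial state of charge, Ah
    history = []
    for p in net_power_w:                      # one step per minute
        current_a = p / BUS_VOLTAGE            # + charging, - discharging
        if current_a > 0.0:
            soc_ah += CHARGE_EFF * current_a / 60.0
        else:
            soc_ah += current_a / 60.0
        soc_ah = min(soc_ah, CAPACITY_AH)      # crude limit at full charge
        history.append(soc_ah)

    print(f"state of charge after one orbit: {history[-1]:.1f} Ah")
    ```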

  4. Reflexion on linear regression trip production modelling method for ensuring good model quality

    NASA Astrophysics Data System (ADS)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, and a good trip production model is essential to it. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be able to represent the population characteristics and must produce an acceptable error at a certain confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The research results are as follows. Statistics provides a method for calculating the span of a predicted value at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that the sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must use random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
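
    For reference, the confidence interval of the predicted (mean) value for a simple linear regression can be computed as sketched below; the trip-production data are made up, and scipy's t-distribution supplies the critical value.

    ```python
    import numpy as np
    from scipy import stats

    # Made-up trip-production data: x = household size, y = trips per day.
    x = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 6], dtype=float)
    y = np.array([2.1, 3.0, 2.8, 3.9, 4.2, 4.8, 5.1, 5.9, 6.3, 6.8])

    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                 # slope, intercept
    y_hat = b0 + b1 * x
    s_err = np.sqrt(np.sum((y - y_hat) ** 2) / (n - 2))   # residual standard error

    x0 = 3.5                                     # point at which we predict
    y0 = b0 + b1 * x0
    # Standard error of the mean predicted value at x0.
    se_pred = s_err * np.sqrt(1.0 / n + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
    t_crit = stats.t.ppf(0.975, df=n - 2)        # 95% confidence level

    print(f"predicted trips at x0={x0}: {y0:.2f} +/- {t_crit * se_pred:.2f}")
    ```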

  5. Predictive accuracy of a model of volatile anesthetic uptake.

    PubMed

    Kennedy, R Ross; French, Richard A; Spencer, Christopher

    2002-12-01

    A computer program that models anesthetic uptake and distribution has been in use in our department for 20 yr as a teaching tool. New anesthesia machines that electronically measure fresh gas flow rates and vaporizer settings allowed us to assess the performance of this model during clinical anesthesia. Gas flow, vaporizer settings, and end-tidal concentrations were collected from the anesthesia machine (Datex S/5 ADU) at 10-s intervals during 30 elective anesthetics. These were entered into the uptake model. Expired anesthetic vapor concentrations were calculated and compared with actual values as measured by the patient monitor (Datex AS/3). Sevoflurane was used in 16 patients and isoflurane in 14 patients. For all patients, the median performance error was -0.24%, the median absolute performance error was 13.7%, divergence was 2.3%/h, and wobble was 3.1%. There was no significant difference between sevoflurane and isoflurane. This model predicted expired concentrations well in these patients. These results are similar to those seen when comparing calculated and actual propofol concentrations in propofol infusion systems and meet published guidelines for the accuracy of models used in target-controlled anesthesia systems. This model may be useful for predicting responses to changes in fresh gas and vapor settings. We compared measured inhaled anesthetic concentrations with those predicted by a model. The method used for comparison has been used to study models of propofol administration. Our model predicts expired isoflurane and sevoflurane concentrations at least as well as common propofol models predict arterial propofol concentrations.
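
    The performance-error statistics quoted above follow the definitions commonly used for target-controlled infusion models (performance error as a percentage of the predicted value, with its median, median absolute value, and wobble); a minimal sketch with invented concentrations:

    ```python
    import numpy as np

    # Made-up measured and model-predicted end-tidal concentrations (vol%)
    # for one patient, sampled at successive time points.
    measured  = np.array([1.10, 1.25, 1.32, 1.20, 1.15, 1.05])
    predicted = np.array([1.00, 1.20, 1.40, 1.25, 1.10, 1.00])

    pe = 100.0 * (measured - predicted) / predicted   # performance error, %
    mdpe  = np.median(pe)                             # bias
    mdape = np.median(np.abs(pe))                     # inaccuracy
    wobble = np.median(np.abs(pe - mdpe))             # intra-patient variability

    print(f"MDPE = {mdpe:.1f}%, MDAPE = {mdape:.1f}%, wobble = {wobble:.1f}%")
    ```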

  6. Lung Cancer Survival Prediction using Ensemble Data Mining on Seer Data

    DOE PAGES

    Agrawal, Ankit; Misra, Sanchit; Narayanan, Ramanathan; ...

    2012-01-01

    We analyze the lung cancer data available from the SEER program with the aim of developing accurate survival prediction models for lung cancer. Carefully designed preprocessing steps resulted in removal/modification/splitting of several attributes, and 2 of the 11 derived attributes were found to have significant predictive power. Several supervised classification methods were used on the preprocessed data along with various data mining optimizations and validations. In our experiments, ensemble voting of five decision tree based classifiers and meta-classifiers was found to result in the best prediction performance in terms of accuracy and area under the ROC curve. We have developed an on-line lung cancer outcome calculator for estimating the risk of mortality after 6 months, 9 months, 1 year, 2 years and 5 years of diagnosis, for which a smaller non-redundant subset of 13 attributes was carefully selected using attribute selection techniques, while trying to retain the predictive power of the original set of attributes. Further, ensemble voting models were also created for predicting conditional survival outcome for lung cancer (estimating risk of mortality after 5 years of diagnosis, given that the patient has already survived for a period of time), and included in the calculator. The on-line lung cancer outcome calculator developed as a result of this study is available at http://info.eecs.northwestern.edu:8080/LungCancerOutcomeCalculator/.

  7. Assessment of quantitative structure-activity relationship of toxicity prediction models for Korean chemical substance control legislation

    PubMed Central

    Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai

    2015-01-01

    Objectives For successful adoption of legislation controlling the registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. In particular, methods for predicting the toxicities of chemical substances during the acquisition of required data ultimately become an economical way of dealing with new substances in the future. Although the need for such methods is gradually increasing, the required information about their reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured 10 representative QSAR-based prediction models, together with their accompanying information, that can make predictions about substances expected to be regulated. We used the models to predict and confirm the usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and the relevant data, we prepared methods for quantifying the scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared the adequacies of the models using the Alternative non-testing method assessed for Registration, Evaluation, Authorization, and Restriction of Chemicals Substances scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared the inclusion rates of substances expected to be regulated, to confirm the applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database. Based on these data, we aimed to construct a system that can be used with predicted toxicity results. Furthermore, by presenting the suitability of individual predicted results, we aimed to provide a foundation that could be used in actual assessments and regulations. PMID:26206368

  8. Wear Calculation Approach for Sliding - Friction Pairs

    NASA Astrophysics Data System (ADS)

    Springis, G.; Rudzitis, J.; Lungevics, J.; Berzins, K.

    2017-05-01

    One of the most important aspects of predicting the service life of different products is the choice of an adequate method. With the development of production technologies and ever more precise measuring devices, the data required for analytic calculations can now be obtained. Historically, several theoretical wear calculation methods have been proposed, but there is still no exact wear calculation model that can be applied to all wear processes, because of the difficulties connected with the variety of parameters involved in the wear of two or more contacting surfaces. Analysing the wear prediction theories, which can be classified into distinct groups, shows that each of them has shortcomings that may affect the results and render the theoretical calculations of little use. The offered wear calculation method is based on theories from different branches of science. It describes 3D surface micro-topography using standardized roughness parameters, explains the regularities of particle separation from the material during the wear process using fatigue theory, and takes into account the material's physical and mechanical characteristics and the specific operating conditions over the product's working time. The proposed wear calculation model could be of value for predicting the exploitation time of sliding friction pairs, thus allowing the best technologies to be chosen for many mechanical details.

  9. Experimental and theoretical investigations on the validity of the geometrical optics model for calculating the stability of optical traps.

    PubMed

    Schut, T C; Hesselink, G; de Grooth, B G; Greve, J

    1991-01-01

    We have developed a computer program based on the geometrical optics approach proposed by Roosen to calculate the forces on dielectric spheres in focused laser beams. We have explicitly taken into account the polarization of the laser light and the divergence of the laser beam. The model can be used to evaluate the stability of optical traps in a variety of different optical configurations. Our calculations explain the experimental observation by Ashkin that a stable single-beam optical trap, without the help of the gravitation force, can be obtained with a strongly divergent laser beam. Our calculations also predict a different trap stability in the directions orthogonal and parallel to the polarization direction of the incident light. Different experimental methods were used to test the predictions of the model for the gravity trap. A new method for measuring the radiation force along the beam axis in both the stable and unstable regions is presented. Measurements of the radiation force on polystyrene spheres with diameters of 7.5 and 32 microns in a TEM00-mode laser beam showed a good qualitative correlation with the predictions and a slight quantitative difference. The validity of the geometrical approximations involved in the model is discussed for spheres of different sizes and refractive indices.

  10. Calculating moisture content for 1000-hour timelag fuels in western Washington and western Oregon.

    Treesearch

    Roger D. Ottmar; David V. Sandberg

    1985-01-01

    A predictive model is presented to calculate moisture content of 1000-hour timelag fuels in Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) and western hemlock (Tsuga heterophylla (Raf.) Sarg.) logging slash in western Washington and western Oregon. The model is a modification of the 1000-hour fuel moisture model of the...

  11. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model

    NASA Astrophysics Data System (ADS)

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M.; Phifer, Jeremy R.; Paluch, Andrew S.

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2 ± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient (R) was 0.6 ± 0.1 (ranking 35), and 72 ± 6% of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.

  12. Validation of a Clinical Scoring System for Outcome Prediction in Dogs with Acute Kidney Injury Managed by Hemodialysis.

    PubMed

    Segev, G; Langston, C; Takada, K; Kass, P H; Cowgill, L D

    2016-05-01

    A scoring system for outcome prediction in dogs with acute kidney injury (AKI) recently has been developed but has not been validated. The hypothesis was that the scoring system previously developed for outcome prediction would accurately predict outcome in a validation cohort of dogs with AKI managed with hemodialysis. The study included one hundred fifteen client-owned dogs with AKI. Medical records of dogs with AKI treated by hemodialysis between 2011 and 2015 were reviewed. Dogs were included only if all variables required to calculate the final predictive score were available, and the 30-day outcome was known. A predictive score for 3 models was calculated for each dog. Logistic regression was used to evaluate the association of the final predictive score with each model's outcome. Receiver operating characteristic (ROC) analyses were performed to determine sensitivity and specificity for each model based on previously established cut-off values. Higher scores for each model were associated with decreased survival probability (P < .001). Based on previously established cut-off values, the 3 models (models A, B, C) had sensitivities/specificities of 73/75%, 71/80%, and 75/86%, respectively, and correctly classified 74-80% of the dogs. All models were simple to apply and allowed outcome prediction that closely corresponded with actual outcome in an independent cohort. As expected, accuracies were slightly lower compared with those from the previously reported cohort used initially to develop the models. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
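
    How sensitivity and specificity at a fixed cut-off (and the area under the ROC curve) are obtained from a predictive score can be sketched as follows; the scores, outcomes, and cut-off are invented, not the study's data.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical predictive scores and 30-day outcomes (1 = non-survival).
    score   = np.array([2.1, 4.5, 1.2, 5.8, 3.3, 6.1, 0.9, 4.9, 2.8, 5.2])
    outcome = np.array([0,   1,   0,   1,   0,   1,   0,   1,   0,   1  ])

    cutoff = 4.0                                # previously established cut-off
    pred_positive = score >= cutoff

    tp = np.sum(pred_positive & (outcome == 1))
    fn = np.sum(~pred_positive & (outcome == 1))
    tn = np.sum(~pred_positive & (outcome == 0))
    fp = np.sum(pred_positive & (outcome == 0))

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(outcome)

    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
          f"accuracy={accuracy:.2f}, AUC={roc_auc_score(outcome, score):.2f}")
    ```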

  13. Simulation of the effect of hydrogen bonds on water activity of glucose and dextran using the Veytsman model.

    PubMed

    De Vito, Francesca; Veytsman, Boris; Painter, Paul; Kokini, Jozef L

    2015-03-06

    Carbohydrates exhibit either van der Waals and ionic interactions or strong hydrogen bonding interactions. The prominence and large number of hydrogen bonds result in major contributions to phase behavior. A thermodynamic framework that accounts for hydrogen bonding interactions is therefore necessary. We have developed an extension of the thermodynamic model based on the Veytsman association theory to predict the contribution of hydrogen bonds to the behavior of glucose-water and dextran-water systems, and we have calculated the free energy of mixing and its derivative, leading to the chemical potential and water activity. We compared our calculations with experimental data of water activity for glucose and dextran and found excellent agreement, far superior to that of the Flory-Huggins theory. The validation of our calculations using experimental data demonstrated the validity of the Veytsman model in properly accounting for the hydrogen bonding interactions and successfully predicting water activity of glucose and dextran. Our calculations of the concentration of hydrogen bonds using the Veytsman model were instrumental in our ability to explain the difference between glucose and dextran and the role that hydrogen bonds play in contributing to these differences. The miscibility predictions showed that the Veytsman model is also able to correctly describe the phase behavior of glucose and dextran. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Resuspension and redistribution of radionuclides during grassland and forest fires in the Chernobyl exclusion zone: part II. Modeling the transport process.

    PubMed

    Yoschenko, V I; Kashparov, V A; Levchuk, S E; Glukhovskiy, A S; Khomutinin, Yu V; Protsak, V P; Lundin, S M; Tschiersch, J

    2006-01-01

    To predict parameters of radionuclide resuspension, transport and deposition during forest and grassland fires, several model modules were developed and adapted. Experimental data of controlled burning of prepared experimental plots in the Chernobyl exclusion zone have been used to evaluate the prognostic power of the models. The predicted trajectories and elevations of the plume match those visually observed during the fire experiments in the grassland and forest sites. Experimentally determined parameters could be successfully used for the calculation of the initial plume parameters, which provide the tools for the description of various fire scenarios and enable prognostic calculations. In summary, the model predicts a release of a few parts per thousand of the radionuclide inventory of the fuel material by grassland fires. During a forest fire, up to 4% of (137)Cs and (90)Sr and up to 1% of the Pu isotopes can be released from the forest litter according to the model calculations. However, these results depend on the parameters of the fire events. In general, the modeling results are in good accordance with the experimental data. Therefore, the considered models were successfully validated and can be recommended for the assessment of the resuspension and redistribution of radionuclides during grassland and forest fires in contaminated territories.

  15. Shock tube and chemical kinetic modeling study of the oxidation of 2,5-dimethylfuran.

    PubMed

    Sirjean, Baptiste; Fournet, René; Glaude, Pierre-Alexandre; Battin-Leclerc, Frédérique; Wang, Weijing; Oehlschlaeger, Matthew A

    2013-02-21

    A detailed kinetic model describing the oxidation of 2,5-dimethylfuran (DMF), a potential second-generation biofuel, is proposed. The kinetic model is based upon quantum chemical calculations for the initial DMF consumption reactions and important reactions of intermediates. The model is validated by comparison to new DMF shock tube ignition delay time measurements (over the temperature range 1300-1831 K and at nominal pressures of 1 and 4 bar) and the DMF pyrolysis speciation measurements of Lifshitz et al. [ J. Phys. Chem. A 1998 , 102 ( 52 ), 10655 - 10670 ]. Globally, modeling predictions are in good agreement with the considered experimental targets. In particular, ignition delay times are predicted well by the new model, with model-experiment deviations of at most a factor of 2, and DMF pyrolysis conversion is predicted well, to within experimental scatter of the Lifshitz et al. data. Additionally, comparisons of measured and model predicted pyrolysis speciation provides validation of theoretically calculated channels for the oxidation of DMF. Sensitivity and reaction flux analyses highlight important reactions as well as the primary reaction pathways responsible for the decomposition of DMF and formation and destruction of key intermediate and product species.

  16. Predicting Flutter and Forced Response in Turbomachinery

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Adamczyk, John J.; Srivastava, Rakesh; Bakhle, Milind A.; Shabbir, Aamir; Chen, Jen-Ping; Janus, J. Mark; To, Wai-Ming; Barter, John

    2005-01-01

    TURBO-AE is a computer code that enables detailed, high-fidelity modeling of aeroelastic and unsteady aerodynamic characteristics for prediction of flutter, forced response, and blade-row interaction effects in turbomachinery. Flow regimes that can be modeled include subsonic, transonic, and supersonic, with attached and/or separated flow fields. The three-dimensional Reynolds-averaged Navier-Stokes equations are solved numerically to obtain extremely accurate descriptions of unsteady flow fields in multistage turbomachinery configurations. Blade vibration is simulated by use of a dynamic-grid-deformation technique to calculate the energy exchange for determining the aerodynamic damping of vibrations of blades. The aerodynamic damping can be used to assess the stability of a blade row. TURBO-AE also calculates the unsteady blade loading attributable to such external sources of excitation as incoming gusts and blade-row interactions. These blade loadings, along with aerodynamic damping, are used to calculate the forced responses of blades to predict their fatigue lives. Phase-lagged boundary conditions based on the direct-store method are used to calculate nonzero interblade phase-angle oscillations; this practice eliminates the need to model multiple blade passages, and, hence, enables large savings in computational resources.

  17. Model Prediction Results for 2007 Ultrasonic Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Joon; Song, Sung-Jin

    2008-02-01

    The World Federation of NDE Centers (WFNDEC) has addressed two types of problems for the 2007 ultrasonic benchmark problems: prediction of side-drilled hole responses with 45° and 60° refracted shear waves, and effects of surface curvatures on the ultrasonic responses of flat-bottomed holes. To solve this year's ultrasonic benchmark problems, we applied multi-Gaussian beam models for the calculation of ultrasonic beam fields, and the Kirchhoff approximation and the separation of variables method for the calculation of the far-field scattering amplitudes of flat-bottomed holes and side-drilled holes, respectively. In this paper, we present comparisons of model predictions to experiments for side-drilled holes and discuss the effect of interface curvature on ultrasonic responses by comparing peak-to-peak amplitudes of flat-bottomed hole responses with different sizes and interface curvatures.

  18. TK Modeler version 1.0, a Microsoft® Excel®-based modeling software for the prediction of diurnal blood/plasma concentration for toxicokinetic use.

    PubMed

    McCoy, Alene T; Bartels, Michael J; Rick, David L; Saghir, Shakil A

    2012-07-01

    TK Modeler 1.0 is a Microsoft® Excel®-based pharmacokinetic (PK) modeling program created to aid in the design of toxicokinetic (TK) studies. TK Modeler 1.0 predicts the diurnal blood/plasma concentrations of a test material after single, multiple bolus or dietary dosing using known PK information. Fluctuations in blood/plasma concentrations based on test material kinetics are calculated using one- or two-compartment PK model equations and the principle of superposition. This information can be utilized for the determination of appropriate dosing regimens based on reaching a specific desired C(max), maintaining steady-state blood/plasma concentrations, or other exposure target. This program can also aid in the selection of sampling times for accurate calculation of AUC(24h) (diurnal area under the blood concentration time curve) using sparse-sampling methodologies (one, two or three samples). This paper describes the construction, use and validation of TK Modeler. TK Modeler accurately predicted blood/plasma concentrations of test materials and provided optimal sampling times for the calculation of AUC(24h) with improved accuracy using sparse-sampling methods. TK Modeler is therefore a validated, unique and simple modeling program that can aid in the design of toxicokinetic studies. Copyright © 2012 Elsevier Inc. All rights reserved.
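
    The superposition step can be illustrated with a one-compartment bolus model: each dose contributes an independent exponential term, and the diurnal profile is their sum. The parameters below are illustrative only, not TK Modeler defaults.

    ```python
    import numpy as np

    # Illustrative one-compartment parameters and a three-times-daily bolus schedule.
    dose_mg    = 10.0
    V_d        = 5.0         # volume of distribution, L
    k_el       = 0.20        # elimination rate constant, 1/h
    dose_times = [0.0, 8.0, 16.0]                 # dosing times, h

    t = np.linspace(0.0, 24.0, 24 * 60 + 1)       # minute resolution over one day
    conc = np.zeros_like(t)
    for td in dose_times:
        after = t >= td
        # Each dose decays independently; summing the contributions is the
        # principle-of-superposition step.
        conc[after] += (dose_mg / V_d) * np.exp(-k_el * (t[after] - td))

    # Diurnal AUC by trapezoidal integration (the full-sampling reference value
    # against which sparse-sampling estimates would be compared).
    auc_24h = np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(t))
    print(f"Cmax = {conc.max():.2f} mg/L, AUC24h = {auc_24h:.1f} mg*h/L")
    ```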

  19. An Intercomparison of Lidar Ozone and Temperature Measurements From the SOLVE Mission With Predicted Model Values

    NASA Technical Reports Server (NTRS)

    Burris, John; McGee, Thomas J.; Hoegy, Walt; Lait, Leslie; Sumnicht, Grant; Twigg, Larry; Heaps, William

    2000-01-01

    Temperature profiles acquired by Goddard Space Flight Center's AROTEL lidar during the SOLVE mission onboard NASA's DC-8 are compared with predicted values from several atmospheric models (DAO, NCEP and UKMO). The variability in the differences between measured and calculated temperature fields was approximately 5 K. Retrieved temperatures within the polar vortex showed large regions that were significantly colder than predicted by the atmospheric models.

  20. Study on the medical meteorological forecast of the number of hypertension inpatient based on SVR

    NASA Astrophysics Data System (ADS)

    Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng

    2017-06-01

    The purpose of this study is to build a hypertension prediction model by examining the meteorological factors related to hypertension incidence. Standardized data on relative humidity, air temperature, visibility, wind speed, and air pressure in Lanzhou from 2010 to 2012 (the maximum, minimum, and average values calculated over 5-day units) were selected as the input variables of support vector regression (SVR), and standardized hypertension incidence data for the same period as the output variables. The optimal prediction parameters were obtained by a cross-validation algorithm, and then, through SVR learning and training, an SVR forecast model for hypertension incidence was built. The results show that the hypertension prediction model consists of 15 input variables, the training accuracy is 0.005, and the final error is 0.0026389. The forecast accuracy of the SVR model is 97.1429%, which is higher than that of a statistical forecast equation and a neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction, with simple calculation, small error, good fitting of historical samples, and good independent-sample forecast capability.
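
    A sketch of this workflow with scikit-learn is shown below, using synthetic stand-in data for the 15 meteorological inputs and the admission counts; the study's actual data and tuned parameters are not reproduced here.

    ```python
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Synthetic stand-ins: 15 meteorological features per 5-day unit and the
    # corresponding hypertension admission counts.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 15))
    y = 50 + 5 * X[:, 0] - 3 * X[:, 3] + rng.normal(scale=2.0, size=200)

    # Cross-validation over the SVR hyperparameters stands in for the paper's
    # "optimal prediction parameters by cross validation" step.
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    grid = GridSearchCV(model,
                        {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1, 1.0]},
                        cv=5, scoring="neg_mean_absolute_error")
    grid.fit(X[:160], y[:160])

    pred = grid.predict(X[160:])
    mae = np.mean(np.abs(pred - y[160:]))
    print("best params:", grid.best_params_, f"hold-out MAE = {mae:.2f}")
    ```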

  1. How rare is complex life in the Milky Way?

    PubMed

    Bounama, Christine; von Bloh, Werner; Franck, Siegfried

    2007-10-01

    An integrated Earth system model was applied to calculate the number of habitable Earth-analog planets that are likely to have developed primitive (unicellular) and complex (multicellular) life in extrasolar planetary systems. The model is based on the global carbon cycle mediated by life and driven by increasing stellar luminosity and plate tectonics. We assumed that the hypothetical primitive and complex life forms differed in their temperature limits and CO2 tolerances. Though complex life would be more vulnerable to environmental stress, its presence would amplify weathering processes on a terrestrial planet. The model allowed us to calculate the average number of Earth-analog planets that may harbor such life by using the formation rate of Earth-like planets in the Milky Way as well as the size of a habitable zone that could support primitive and complex life forms. The number of planets predicted to bear complex life was found to be approximately 2 orders of magnitude lower than the number predicted for primitive life forms. Our model predicted a maximum abundance of such planets around 1.8 Ga ago and allowed us to calculate the average distance between potentially habitable planets in the Milky Way. If the model predictions are accurate, the future missions DARWIN (up to a probability of 65%) and TPF (up to 20%) are likely to detect at least one planet with a biosphere composed of complex life.

  2. Measurements and Modeling of Soot Formation and Radiation in Microgravity Jet Diffusion Flames. Volume 4

    NASA Technical Reports Server (NTRS)

    Ku, Jerry C.; Tong, Li; Greenberg, Paul S.

    1996-01-01

    This is a computational and experimental study for soot formation and radiative heat transfer in jet diffusion flames under normal gravity (1-g) and microgravity (0-g) conditions. Instantaneous soot volume fraction maps are measured using a full-field imaging absorption technique developed by the authors. A compact, self-contained drop rig is used for microgravity experiments in the 2.2-second drop tower facility at NASA Lewis Research Center. On modeling, we have coupled flame structure and soot formation models with detailed radiation transfer calculations. Favre-averaged boundary layer equations with a k-e-g turbulence model are used to predict the flow field, and a conserved scalar approach with an assumed Beta-pdf are used to predict gaseous species mole fraction. Scalar transport equations are used to describe soot volume fraction and number density distributions, with formation and oxidation terms modeled by one-step rate equations and thermophoretic effects included. An energy equation is included to couple flame structure and radiation analyses through iterations, neglecting turbulence-radiation interactions. The YIX solution for a finite cylindrical enclosure is used for radiative heat transfer calculations. The spectral absorption coefficient for soot aggregates is calculated from the Rayleigh solution using complex refractive index data from a Drude-Lorentz model. The exponential-wide-band model is used to calculate the spectral absorption coefficient for H2O and CO2. It is shown that when compared to results from true spectral integration, the Rosseland mean absorption coefficient can provide reasonably accurate predictions for the type of flames studied. The soot formation model proposed by Moss, Syed, and Stewart seems to produce better fits to experimental data and to be more physically sound than the simpler model by Khan et al. Predicted soot volume fraction and temperature results agree well with published data for normal-gravity co-flow laminar flames and turbulent jet flames. Predicted soot volume fraction results also agree with our data for 1-g and 0-g laminar jet flames as well as 1-g turbulent jet flames.

  3. Application of binding free energy calculations to prediction of binding modes and affinities of MDM2 and MDMX inhibitors.

    PubMed

    Lee, Hui Sun; Jo, Sunhwan; Lim, Hyun-Suk; Im, Wonpil

    2012-07-23

    Molecular docking is widely used to obtain binding modes and binding affinities of a molecule to a given target protein. Despite considerable efforts, however, prediction of both properties by docking remains challenging mainly due to protein's structural flexibility and inaccuracy of scoring functions. Here, an integrated approach has been developed to improve the accuracy of binding mode and affinity prediction and tested for small molecule MDM2 and MDMX antagonists. In this approach, initial candidate models selected from docking are subjected to equilibration MD simulations to further filter the models. Free energy perturbation molecular dynamics (FEP/MD) simulations are then applied to the filtered ligand models to enhance the ability in predicting the near-native ligand conformation. The calculated binding free energies for MDM2 complexes are overestimated compared to experimental measurements mainly due to the difficulties in sampling highly flexible apo-MDM2. Nonetheless, the FEP/MD binding free energy calculations are more promising for discriminating binders from nonbinders than docking scores. In particular, the comparison between the MDM2 and MDMX results suggests that apo-MDMX has lower flexibility than apo-MDM2. In addition, the FEP/MD calculations provide detailed information on the different energetic contributions to ligand binding, leading to a better understanding of the sensitivity and specificity of protein-ligand interactions.

  4. Prediction of Isoelectric Point of Manganese and Cobalt Lamellar Oxides: Application to Controlled Synthesis of Mixed Oxides.

    PubMed

    Tang, Céline; Giaume, Domitille; Guerlou-Demourgues, Liliane; Lefèvre, Grégory; Barboux, Philippe

    2018-05-30

    To design novel layered materials, a bottom-up strategy is very promising. It consists of (1) synthesizing various layered oxides, (2) exfoliating them, then (3) restacking them in a controlled way. The last step is based on electrostatic interactions between different layered oxides and is difficult to control. The aim of this study is to facilitate this step by predicting the isoelectric point (IEP) of exfoliated materials. The Multisite Complexation (MUSIC) model was used for this objective and was shown to be able to predict the IEP from the mean oxidation state of the metal in the (hydr)oxides as the main parameter. Moreover, the effect of exfoliation on the IEP has also been calculated. Starting from platelets with a high ratio of basal surface area to total surface area, we show that the exfoliation process has no impact on the calculated IEP value, as verified experimentally. Moreover, the restacked materials containing different monometallic (hydr)oxide layers also have an IEP consistent with values calculated with the model. This study proves that the MUSIC model is a useful tool to predict the IEP of various complex metal oxides and hydroxides.

  5. Prediction of friction pressure drop for low pressure two-phase flows on the basis of approximate analytical models

    NASA Astrophysics Data System (ADS)

    Zubov, N. O.; Kaban'kov, O. N.; Yagov, V. V.; Sukomel, L. A.

    2017-12-01

    Wide use of natural circulation loops operating at low reduced pressures generates a real need to develop reliable methods for predicting flow regimes and friction pressure drop for two-phase flows in this region of parameters. Although water-air flows at close-to-atmospheric pressures are the most widely studied subject in the field of two-phase hydrodynamics, the problem of reliably calculating friction pressure drop can hardly be regarded as fully solved. The specific volumes of liquid differ very much from those of steam (gas) under such conditions, due to which even a small change in flow quality may cause the flow pattern to alter very significantly. Frequently made attempts to use one or another universal approach to calculating friction pressure drop in a wide range of steam quality values do not seem to be justified and yield predicted values that are poorly consistent with experimentally measured data. The article analyzes the existing methods used to calculate friction pressure drop for two-phase flows at low pressures by comparing their results with the experimentally obtained data. The advisability of elaborating calculation procedures for determining the friction pressure drop and void fraction for two-phase flows taking their pattern (flow regime) into account is demonstrated. It is shown that, for flows characterized by low reduced pressures, satisfactory results are obtained from using a homogeneous model for quasi-homogeneous flows, whereas satisfactory results are obtained from using an annular flow model for flows characterized by high values of void fraction. Recommendations for making a shift from one model to another in carrying out engineering calculations are formulated and tested. By using the modified annular flow model, it is possible to obtain reliable predictions for not only the pressure gradient but also for the liquid film thickness; the consideration of droplet entrainment and deposition phenomena allows reasonable corrections to be introduced into calculations. To the best of the authors' knowledge, it is for the first time that the entrainment of droplets from the film surface is taken into consideration in the dispersed-annular flow model.

  6. A simple rain attenuation model for earth-space radio links operating at 10-35 GHz

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Yon, K. M.

    1986-01-01

    The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These together with rain rate statistics (either measured or predicted) can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
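
    Models of this class reduce to a power-law specific attenuation integrated over an effective path; the sketch below uses a generic formulation with placeholder coefficients and geometry, not the paper's fitted model or its polarization correction.

    ```python
    # Generic rain-attenuation sketch: specific attenuation gamma = a * R**b (dB/km)
    # times an effective path length. Coefficients, slant path, and reduction factor
    # are illustrative placeholders only.
    def rain_attenuation_db(rain_rate_mm_h, a=0.0188, b=1.217,
                            slant_path_km=6.0, path_reduction=0.7):
        gamma = a * rain_rate_mm_h ** b          # specific attenuation, dB/km
        effective_path = slant_path_km * path_reduction
        return gamma * effective_path

    for R in (5, 12.5, 25, 50):                  # average rain rates, mm/h
        print(f"R = {R:5.1f} mm/h -> attenuation ~ {rain_attenuation_db(R):5.1f} dB")
    ```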

  7. Development of a New Model for Accurate Prediction of Cloud Water Deposition on Vegetation

    NASA Astrophysics Data System (ADS)

    Katata, G.; Nagai, H.; Wrzesinsky, T.; Klemm, O.; Eugster, W.; Burkard, R.

    2006-12-01

    Scarcity of water resources in arid and semi-arid areas is of great concern in the light of population growth and food shortages. Several experiments focusing on cloud (fog) water deposition on the land surface suggest that cloud water plays an important role in the water resources of such regions. A one-dimensional vegetation model including the process of cloud water deposition on vegetation has been developed to better predict cloud water deposition on the vegetation. New schemes to calculate the capture efficiency of leaves, the cloud droplet size distribution, and the gravitational flux of cloud water were incorporated in the model. Model calculations were compared with the data acquired at the Norway spruce forest at the Waldstein site, Germany. The high performance of the model was confirmed by comparisons of calculated net radiation, sensible and latent heat, and cloud water fluxes over the forest with measurements. The present model provided a better prediction of measured turbulent and gravitational fluxes of cloud water over the canopy than the Lovett model, which is a commonly used cloud water deposition model. Detailed calculations of evapotranspiration and of the turbulent exchange of heat and water vapor within the canopy, together with the model modifications, are necessary for accurate prediction of cloud water deposition. Numerical experiments to examine the dependence of cloud water deposition on the vegetation species (coniferous and broad-leaved trees, flat and cylindrical grasses) and structures (leaf area index (LAI) and canopy height) are performed using the presented model. The results indicate that differences in leaf shape and size have a large impact on cloud water deposition. Cloud water deposition also varies with the growth of vegetation and the seasonal change of LAI. We found that coniferous trees whose height and LAI are 24 m and 2.0 m2 m-2, respectively, produce the largest amount of cloud water deposition among all combinations of vegetation species and structures in the experiments.

  8. Modification of the MML turbulence model for adverse pressure gradient flows. M.S. Thesis - Akron Univ., 1993

    NASA Technical Reports Server (NTRS)

    Conley, Julianne M.

    1994-01-01

    Computational fluid dynamics is being used increasingly to predict flows for aerospace propulsion applications, yet there is still a need for an easy to use, computationally inexpensive turbulence model capable of accurately predicting a wide range of turbulent flows. The Baldwin-Lomax model is the most widely used algebraic model, even though it has known difficulties calculating flows with strong adverse pressure gradients and large regions of separation. The modified mixing length model (MML) was developed specifically to handle the separation which occurs on airfoils and has given significantly better results than the Baldwin-Lomax model. The success of these calculations warrants further evaluation and development of MML. The objective of this work was to evaluate the performance of MML for zero and adverse pressure gradient flows, and modify it as needed. The Proteus Navier-Stokes code was used for this study and all results were compared with experimental data and with calculations made using the Baldwin-Lomax algebraic model, which is currently available in Proteus. The MML model was first evaluated for zero pressure gradient flow over a flat plate, then modified to produce the proper boundary layer growth. Additional modifications, based on experimental data for three adverse pressure gradient flows, were also implemented. The adapted model, called MMLPG (modified mixing length model for pressure gradient flows), was then evaluated for a typical propulsion flow problem, flow through a transonic diffuser. Three cases were examined: flow with no shock, a weak shock and a strong shock. The results of these calculations indicate that the objectives of this study have been met. Overall, MMLPG is capable of accurately predicting the adverse pressure gradient flows examined in this study, giving generally better agreement with experimental data than the Baldwin-Lomax model.

  9. Model for the prediction of subsurface strata movement due to underground mining

    NASA Astrophysics Data System (ADS)

    Cheng, Jianwei; Liu, Fangyuan; Li, Siyuan

    2017-12-01

    The problem of ground control stability due to large underground mining operations is often associated with large movements and deformations of strata. It is a complicated problem, and can induce severe safety or environmental hazards either at the surface or in strata. Hence, knowing the subsurface strata movement characteristics, and making any subsidence predictions in advance, are desirable for mining engineers to estimate any damage likely to affect the ground surface or subsurface strata. Based on previous research findings, this paper broadly applies a surface subsidence prediction model based on the influence function method to subsurface strata, in order to predict subsurface stratum movement. A step-wise prediction model is proposed, to investigate the movement of underground strata. The model involves a dynamic iteration calculation process to derive the movements and deformations for each stratum layer; modifications to the influence method function are also made for more precise calculations. The critical subsidence parameters, incorporating stratum mechanical properties and the spatial relationship of interest at the mining level, are thoroughly considered, with the purpose of improving the reliability of input parameters. Such research efforts can be very helpful to mining engineers’ understanding of the moving behavior of all strata over underground excavations, and assist in making any damage mitigation plan. In order to check the reliability of the model, two methods are carried out and cross-validation applied. One is to use a borehole TV monitor recording to identify the progress of subsurface stratum bedding and caving in a coal mine, the other is to conduct physical modelling of the subsidence in underground strata. The results of these two methods are used to compare with theoretical results calculated by the proposed mathematical model. The testing results agree well with each other, and the acceptable accuracy and reliability of the proposed prediction model are thus validated.

  10. Three-dimensional viscous rotor flow calculations using a viscous-inviscid interaction approach

    NASA Technical Reports Server (NTRS)

    Chen, Ching S.; Bridgeman, John O.

    1990-01-01

    A three-dimensional viscous-inviscid interaction analysis was developed to predict the performance of rotors in hover and in forward flight at subsonic and transonic tip speeds. The analysis solves the full-potential and boundary-layer equations by finite-difference numerical procedures. Calculations were made for several different model rotor configurations. The results were compared with predictions from a two-dimensional integral method and with experimental data. The comparisons show good agreement between predictions and test data.

  11. Investigation of the Jet Noise Prediction Theory and Application Utilizing the PAO Formulation. [mathematical model for calculating noise radiation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Application of the Phillips theory to engineering calculations of rocket and high speed jet noise radiation is reported. Presented are a detailed derivation of the theory, the composition of the numerical scheme, and discussions of the practical problems arising in the application of the present noise prediction method. The present method still contains some empirical elements, yet it provides a unified approach in the prediction of sound power, spectrum, and directivity.

  12. Prediction of Radical Scavenging Activities of Anthocyanins Applying Adaptive Neuro-Fuzzy Inference System (ANFIS) with Quantum Chemical Descriptors

    PubMed Central

    Jhin, Changho; Hwang, Keum Taek

    2014-01-01

    Radical scavenging activity of anthocyanins is well known, but only a few studies have been conducted by quantum chemical approach. The adaptive neuro-fuzzy inference system (ANFIS) is an effective technique for solving problems with uncertainty. The purpose of this study was to construct and evaluate quantitative structure-activity relationship (QSAR) models for predicting radical scavenging activities of anthocyanins with good prediction efficiency. ANFIS-applied QSAR models were developed by using quantum chemical descriptors of anthocyanins calculated by semi-empirical PM6 and PM7 methods. Electron affinity (A) and electronegativity (χ) of flavylium cation, and ionization potential (I) of quinoidal base were significantly correlated with radical scavenging activities of anthocyanins. These descriptors were used as independent variables for QSAR models. ANFIS models with two triangular-shaped input fuzzy functions for each independent variable were constructed and optimized by 100 learning epochs. The constructed models using descriptors calculated by both PM6 and PM7 had good prediction efficiency with Q-square of 0.82 and 0.86, respectively. PMID:25153627

  13. A method for grounding grid corrosion rate prediction

    NASA Astrophysics Data System (ADS)

    Han, Juan; Du, Jingyi

    2017-06-01

    Because a variety of factors are involved, prediction of grounding grid corrosion is complex, and there is uncertainty in the data acquisition process. We therefore propose a grounding grid corrosion rate prediction model that combines extended AHP (EAHP) with the fuzzy nearness degree. EAHP is used to establish the judgment matrix and calculate the weight of each factor influencing grounding grid corrosion; samples in different classes contribute differently to the corrosion rate, and the principle of fuzzy nearness is combined with these weights to predict the corrosion rate. The application results show that the model captures data variation better, thus improving the validity of the model and achieving higher prediction precision.
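
    The AHP weighting step can be sketched as follows: factor weights are taken from the principal eigenvector of a pairwise-comparison judgment matrix. The comparison values, factor names, and consistency check below are illustrative; the paper's extended AHP adds its own refinements.

    ```python
    import numpy as np

    # Illustrative pairwise-comparison judgment matrix for three corrosion factors
    # (e.g., soil resistivity vs. moisture vs. salinity -- hypothetical labels).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # Weights from the principal eigenvector, normalized to sum to one.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    weights = w / w.sum()

    # Consistency ratio check (random index RI = 0.58 for a 3x3 matrix).
    ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
    print("weights:", np.round(weights, 3), f"CR = {ci / 0.58:.3f}")
    ```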

  14. Optical and biometric relationships of the isolated pig crystalline lens.

    PubMed

    Vilupuru, A S; Glasser, A

    2001-07-01

    To investigate the interrelationships between optical and biometric properties of the porcine crystalline lens, to compare these findings with similar relationships found for the human lens and to attempt to fit these data to a geometric model of the optical and biometric properties of the pig lens. Weight, focal length, spherical aberration, surface curvatures, thickness and diameters of 20 isolated pig lenses were measured and the equivalent refractive index was calculated. These parameters were compared and used to geometrically model the pig lens. Linear relationships were identified between many of the lens biometric and optical properties. The existence of these relationships allowed a simple geometrical model of the pig lens to be calculated, which offers predictions of the optical properties. The linear relationships found and the agreement observed between measured and modeled results suggest that the pig lens conforms to a predictable, preset developmental pattern and that the optical and biometric properties are predictably interrelated.

  15. The prediction of speech intelligibility in classrooms using computer models

    NASA Astrophysics Data System (ADS)

    Dance, Stephen; Dentoni, Roger

    2005-04-01

    Two classrooms were measured and modeled using the industry-standard CATT model and the Web-based model CISM. Sound levels, reverberation times, and speech intelligibility were predicted in these rooms using data for 7 octave bands. It was found that overall sound levels could be predicted to within 2 dB by both models. Overall reverberation time, however, was accurately predicted by CATT (14% prediction error) but not by CISM (41% prediction error); this compares with a 30% prediction error using classical theory. As for STI, CATT predicted within 11%, CISM within 3%, and Sabine within 28% of the measured value. It should be noted that CISM took approximately 15 seconds to calculate, while CATT took 15 minutes. CISM is freely available on-line at www.whyverne.co.uk/acoustics/Pages/cism/cism.html
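
    For comparison, the classical (Sabine) estimate referred to above is a one-line formula; the room volume and absorption coefficients below are invented for illustration.

    ```python
    # Classical Sabine reverberation-time estimate, the "classical theory" baseline
    # against which the computer models are compared.
    def sabine_rt60(volume_m3, surfaces):
        """surfaces: list of (area_m2, absorption_coefficient) pairs."""
        total_absorption = sum(area * alpha for area, alpha in surfaces)
        return 0.161 * volume_m3 / total_absorption

    classroom = [(60.0, 0.02),    # floor
                 (60.0, 0.15),    # suspended ceiling
                 (100.0, 0.05)]   # walls, windows, door (lumped)
    print(f"Sabine RT60 ~ {sabine_rt60(180.0, classroom):.2f} s")
    ```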

  16. Progress towards a more predictive model for hohlraum radiation drive and symmetry

    NASA Astrophysics Data System (ADS)

    Jones, O. S.; Suter, L. J.; Scott, H. A.; Barrios, M. A.; Farmer, W. A.; Hansen, S. B.; Liedahl, D. A.; Mauche, C. W.; Moore, A. S.; Rosen, M. D.; Salmonson, J. D.; Strozzi, D. J.; Thomas, C. A.; Turnbull, D. P.

    2017-05-01

    For several years, we have been calculating the radiation drive in laser-heated gold hohlraums using flux-limited heat transport with a limiter of 0.15, tabulated values of local-thermodynamic-equilibrium gold opacity, and an approximate non-local-thermodynamic-equilibrium (NLTE) model for gold emissivity (DCA_2010). This model has been successful in predicting the radiation drive in vacuum hohlraums, but for gas-filled hohlraums used to drive capsule implosions, the model consistently predicts too much drive and capsule bang times earlier than measured. In this work, we introduce a new model that brings the calculated bang time into better agreement with the measured bang time. The new model employs (1) a numerical grid that is fully converged in space, energy, and time, (2) a modified approximate NLTE model that includes more physics and is in better agreement with more detailed offline emissivity models, and (3) a reduced flux limiter value of 0.03. We applied this model to gas-filled hohlraum experiments using high density carbon and plastic ablator capsules that had hohlraum He fill gas densities ranging from 0.06 to 1.6 mg/cc and hohlraum diameters of 5.75 or 6.72 mm. The new model predicts bang times to within ±100 ps for most experiments with low to intermediate fill densities (up to 0.85 mg/cc). This model predicts higher temperatures in the plasma than the old model and also predicts that at higher gas fill densities, a significant amount of inner beam laser energy escapes the hohlraum through the opposite laser entrance hole.

  17. Predictions from a flavour GUT model combined with a SUSY breaking sector

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Hohl, Christian

    2017-10-01

    We discuss how flavour GUT models in the context of supergravity can be completed with a simple SUSY breaking sector, such that the flavour-dependent (non-universal) soft breaking terms can be calculated. As an example, we discuss a model based on an SU(5) GUT symmetry and an A4 family symmetry, plus additional discrete "shaping symmetries" and a ℤ4R symmetry. We calculate the soft terms and identify the relevant high scale input parameters, and investigate the resulting predictions for the low scale observables, such as flavour violating processes, the sparticle spectrum and the dark matter relic density.

  18. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE PAGES

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-12-28

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.

  19. Model predictive control of P-time event graphs

    NASA Astrophysics Data System (ADS)

    Hamri, H.; Kara, R.; Amari, S.

    2016-12-01

    This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained by using the dater evolution model written in the standard algebra. Then, for the control law, we use finite-horizon model predictive control. For the closed-loop control, we use infinite-horizon model predictive control (IH-MPC). The latter is an approach that calculates static feedback gains that ensure the stability of the closed-loop system while respecting the constraints on the control vector. The IH-MPC problem is formulated as a linear convex program subject to a linear matrix inequality. Finally, the proposed methodology is applied to a transportation system.

  1. Radiation risk predictions for Space Station Freedom orbits

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Atwell, William; Weyland, Mark; Hardy, Alva C.; Wilson, John W.; Townsend, Lawrence W.; Shinn, Judy L.; Katz, Robert

    1991-01-01

    Risk assessment calculations are presented for the preliminary proposed solar minimum and solar maximum orbits for Space Station Freedom (SSF). Integral linear energy transfer (LET) fluence spectra are calculated for the trapped proton and GCR environments. Organ dose calculations are discussed using the computerized anatomical man model. The cellular track model of Katz is applied to calculate cell survival, transformation, and mutation rates for various aluminum shields. Comparisons between relative biological effectiveness (RBE) and quality factor (QF) values for SSF orbits are made.

  2. Path-Dependent Travel Time Prediction Variance and Covariance for a Global Tomographic P- and S-Velocity Model

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.

    2015-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes a prior model covariance constraint) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for a single path.
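
    A toy, in-memory version of this covariance propagation (the production workflow uses out-of-core blocked Cholesky factorization on roughly half a million nodes) might look like the following; the matrix sizes, data variance, and ray sensitivities are made up for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      n_data, n_nodes = 200, 50
      G = rng.standard_normal((n_data, n_nodes))      # tomography matrix incl. prior constraint
      sigma_d = 0.1                                   # assumed data standard deviation (s)

      GtG = G.T @ G
      L = np.linalg.cholesky(GtG)                     # Cholesky factor, GtG = L L^T
      GtG_inv = np.linalg.inv(L.T) @ np.linalg.inv(L)
      C_m = sigma_d**2 * GtG_inv                      # model covariance for C_d = sigma_d^2 I

      # Travel-time prediction variance for one ray: sum the model covariance along the path.
      g_ray = rng.random(n_nodes)                     # ray sensitivity (path length per node)
      tt_sigma = np.sqrt(g_ray @ C_m @ g_ray)
      print(f"travel-time prediction uncertainty: {tt_sigma:.3f} s")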

  3. Predicting the Shifts of Absorption Maxima of Azulene Derivatives Using Molecular Modeling and ZINDO CI Calculations of UV-Vis Spectra

    ERIC Educational Resources Information Center

    Patalinghug, Wyona C.; Chang, Maharlika; Solis, Joanne

    2007-01-01

    The deep blue color of azulene is drastically changed by the addition of substituents such as CH3, F, or CHO. Computational semiempirical methods using ZINDO CI are used to model azulene and azulene derivatives and to calculate their UV-vis spectra. The calculated spectra are used to show the trends in absorption band shifts upon…

  4. Variability of pCO2 in surface waters and development of prediction model.

    PubMed

    Chung, Sewoong; Park, Hyungseok; Yoo, Jisu

    2018-05-01

    Inland waters are substantial sources of atmospheric carbon, but relevant data are rare in Asian monsoon regions including Korea. Emissions of CO2 to the atmosphere depend largely on the partial pressure of CO2 (pCO2) in water; however, measured pCO2 data are scarce and calculated pCO2 can show large uncertainty. This study had three objectives: 1) to examine the spatial variability of pCO2 in diverse surface water systems in Korea; 2) to compare pCO2 calculated using pH-total alkalinity (Alk) and pH-dissolved inorganic carbon (DIC) with pCO2 measured by an in situ submersible nondispersive infrared detector; and 3) to characterize the major environmental variables determining the variation of pCO2 based on physical, chemical, and biological data collected concomitantly. Of 30 samples, 80% were found supersaturated in CO2 with respect to the overlying atmosphere. Calculated pCO2 using pH-Alk and pH-DIC showed weak prediction capability and large variations with respect to measured pCO2. Error analysis indicated that calculated pCO2 is highly sensitive to the accuracy of pH measurements, particularly at low pH. Stepwise multiple linear regression (MLR) and random forest (RF) techniques were implemented to develop the most parsimonious model based on 10 potential predictor variables (pH, Alk, DIC, Uw, Cond, Turb, COD, DOC, TOC, Chla) by optimizing model performance. The RF model showed better performance than the MLR model, and the most parsimonious RF model (pH, Turb, Uw, Chla) improved pCO2 prediction capability considerably compared with the simple calculation approach, reducing the RMSE from 527-544 to 105 μatm at the study sites. Copyright © 2017 Elsevier B.V. All rights reserved.
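
    A minimal sketch of the MLR-versus-RF comparison using the four predictors retained in the final model (pH, turbidity, wind speed Uw, and chlorophyll-a); the measurements below are synthetic, so the RMSE values are only illustrative.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(2)
      n = 30
      X = np.column_stack([
          rng.uniform(6.5, 9.0, n),     # pH
          rng.uniform(1, 50, n),        # turbidity (NTU)
          rng.uniform(0, 8, n),         # wind speed Uw (m/s)
          rng.uniform(1, 80, n),        # Chl-a (ug/L)
      ])
      pco2 = 3000 - 320 * X[:, 0] + 5 * X[:, 1] + 30 * rng.standard_normal(n)   # uatm, synthetic

      for name, model in [("MLR", LinearRegression()),
                          ("RF", RandomForestRegressor(n_estimators=300, random_state=0))]:
          pred = cross_val_predict(model, X, pco2, cv=5)
          rmse = mean_squared_error(pco2, pred) ** 0.5
          print(f"{name}: cross-validated RMSE = {rmse:.0f} uatm")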

  5. Dose-Response Calculator for ArcGIS

    USGS Publications Warehouse

    Hanser, Steven E.; Aldridge, Cameron L.; Leu, Matthias; Nielsen, Scott E.

    2011-01-01

    The Dose-Response Calculator for ArcGIS is a tool that extends the Environmental Systems Research Institute (ESRI) ArcGIS 10 Desktop application to aid with the visualization of relationships between two raster GIS datasets. A dose-response curve is a line graph commonly used in medical research to examine the effects of different dosage rates of a drug or chemical (for example, carcinogen) on an outcome of interest (for example, cell mutations) (Russell and others, 1982). Dose-response curves have recently been used in ecological studies to examine the influence of an explanatory dose variable (for example, percentage of habitat cover, distance to disturbance) on a predicted response (for example, survival, probability of occurrence, abundance) (Aldridge and others, 2008). These dose curves have been created by calculating the predicted response value from a statistical model at different levels of the explanatory dose variable while holding values of other explanatory variables constant. Curves (plots) developed using the Dose-Response Calculator overcome the need to hold variables constant by using values extracted from the predicted response surface of a spatially explicit statistical model fit in a GIS, which include the variation of all explanatory variables, to visualize the univariate response to the dose variable. Application of the Dose-Response Calculator can be extended beyond the assessment of statistical model predictions and may be used to visualize the relationship between any two raster GIS datasets (see example in tool instructions). This tool generates tabular data for use in further exploration of dose-response relationships and a graph of the dose-response curve.

  6. Tectonic predictions with mantle convection models

    NASA Astrophysics Data System (ADS)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough for an accurate prediction of instantaneous flow, but not for a prediction after 10 My of evolution. Therefore, inverse methods (sequential or data assimilation methods) using short-term fully dynamic evolution that predict surface kinematics are promising tools for a better understanding of the state of the Earth's mantle.

  7. Numerical description of cavitation on axisymmetric bodies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hickox, C.E.; Hailey, C.E.; Wolfe, W.P.

    1988-01-01

    This paper reports on ongoing studies which are directed toward the development of predictive techniques for the modeling of steady cavitation on axisymmetric bodies. The primary goal of the modeling effort is the prediction of cavity shape and pressure distribution from which forces and moments can be calculated. Here we present an overview of the modeling techniques developed and compare predictions with experimental data obtained from water tunnel tests for both limited and supercavitation. 14 refs., 4 figs.

  8. A statistical rain attenuation prediction model with application to the advanced communication technology satellite project. 1: Theoretical development and application to yearly predictions for selected cities in the United States

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1986-01-01

    A rain attenuation prediction model is described for use in calculating satellite communication link availability for any specific location in the world that is characterized by an extended record of rainfall. Such a formalism is necessary for the accurate assessment of such availability predictions in the case of the small user-terminal concept of the Advanced Communication Technology Satellite (ACTS) Project. The model employs the theory of extreme value statistics to generate the necessary statistical rainrate parameters from rain data in the form compiled by the National Weather Service. These location dependent rain statistics are then applied to a rain attenuation model to obtain a yearly prediction of the occurrence of attenuation on any satellite link at that location. The predictions of this model are compared to those of the Crane Two-Component Rain Model and some empirical data and found to be very good. The model is then used to calculate rain attenuation statistics at 59 locations in the United States (including Alaska and Hawaii) for the 20 GHz downlinks and 30 GHz uplinks of the proposed ACTS system. The flexibility of this modeling formalism is such that it allows a complete and unified treatment of the temporal aspects of rain attenuation that leads to the design of an optimum stochastic power control algorithm, the purpose of which is to efficiently counter such rain fades on a satellite link.

  9. Nonlinear ARMA models for the D(st) index and their physical interpretation

    NASA Technical Reports Server (NTRS)

    Vassiliadis, D.; Klimas, A. J.; Baker, D. N.

    1996-01-01

    Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model to the nonlinear damped oscillator physical model. The oscillator parameters, the growth and decay, the oscillation frequencies and the coupling strength to the input are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
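
    For the simplest case, the mapping from filter coefficients to oscillator parameters can be illustrated with an AR(2) model, whose characteristic roots give the decay rate and oscillation frequency; the coefficients below are hypothetical, not values fitted to Dst data.

      import numpy as np

      a1, a2 = 1.6, -0.81          # AR(2) coefficients in x_t = a1*x_{t-1} + a2*x_{t-2} + input
      dt = 1.0                     # sampling interval (hours)

      # Characteristic equation z^2 - a1*z - a2 = 0; complex roots imply damped oscillation.
      z = np.roots([1.0, -a1, -a2])[0]
      decay_rate = -np.log(np.abs(z)) / dt        # 1/e decay rate (1/hour)
      omega = np.abs(np.angle(z)) / dt            # angular frequency (rad/hour)
      print(f"decay rate {decay_rate:.3f} /h, angular frequency {omega:.3f} rad/h")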

  10. Analysis and prediction of Multiple-Site Damage (MSD) fatigue crack growth

    NASA Technical Reports Server (NTRS)

    Dawicke, D. S.; Newman, J. C., Jr.

    1992-01-01

    A technique was developed to calculate the stress intensity factor for multiple interacting cracks. The analysis was verified through comparison with accepted methods of calculating stress intensity factors. The technique was incorporated into a fatigue crack growth prediction model and used to predict the fatigue crack growth life for multiple-site damage (MSD). The analysis was verified through comparison with experiments conducted on uniaxially loaded flat panels with multiple cracks. Configurations with nearly equal and unequal crack distributions were examined. The fatigue crack growth predictions agreed within 20 percent of the experimental lives for all crack configurations considered.
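
    The life-prediction step can be illustrated with a generic Paris-law integration for a single through crack; the interaction-corrected stress intensity factors of the MSD analysis are not reproduced here, and the Paris constants, stress range, and geometry factor are assumed values.

      import numpy as np

      C, m = 1.0e-11, 3.0          # assumed Paris constants (m/cycle, MPa*sqrt(m) units)
      dsigma = 100.0               # stress range (MPa)
      beta = 1.0                   # geometry/interaction factor (would come from the SIF analysis)
      a, a_final = 0.002, 0.020    # initial and final half crack lengths (m)

      cycles, da = 0.0, 1.0e-5
      while a < a_final:
          dK = beta * dsigma * np.sqrt(np.pi * a)     # stress intensity factor range
          cycles += da / (C * dK**m)                  # invert da/dN = C*(dK)^m
          a += da
      print(f"predicted life ~ {cycles:,.0f} cycles")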

  11. Configuration and validation of an analytical model predicting secondary neutron radiation in proton therapy using Monte Carlo simulations and experimental measurements.

    PubMed

    Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I

    2015-05-01

    This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contribution of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity in predicting out-of-field doses at any position not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical direction. Testing the analytical model in clinical configurations proved the need to separate the contribution of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  12. Calculation of Debye-Scherrer diffraction patterns from highly stressed polycrystalline materials

    DOE PAGES

    MacDonald, M. J.; Vorberger, J.; Gamboa, E. J.; ...

    2016-06-07

    Calculations of Debye-Scherrer diffraction patterns from polycrystalline materials have typically been done in the limit of small deviatoric stresses. Although these methods are well suited for experiments conducted near hydrostatic conditions, more robust models are required to diagnose the large strain anisotropies present in dynamic compression experiments. A method to predict Debye-Scherrer diffraction patterns for arbitrary strains has been presented in the Voigt (iso-strain) limit. Here, we present a method to calculate Debye-Scherrer diffraction patterns from highly stressed polycrystalline samples in the Reuss (iso-stress) limit. This analysis uses elastic constants to calculate lattice strains for all initial crystallite orientations, enabling elastic anisotropy and sample texture effects to be modeled directly. Furthermore, the effects of probing geometry, deviatoric stresses, and sample texture are demonstrated and compared to Voigt limit predictions. An example of shock-compressed polycrystalline diamond is presented to illustrate how this model can be applied and demonstrates the importance of including material strength when interpreting diffraction in dynamic compression experiments.

  13. Development of a CSP plant energy yield calculation tool applying predictive models to analyze plant performance sensitivities

    NASA Astrophysics Data System (ADS)

    Haack, Lukas; Peniche, Ricardo; Sommer, Lutz; Kather, Alfons

    2017-06-01

    At early project stages, the main CSP plant design parameters such as turbine capacity, solar field size, and thermal storage capacity are varied during the techno-economic optimization to determine the most suitable plant configurations. In general, a typical meteorological year with at least hourly time resolution is used to analyze each plant configuration. Different software tools are available to simulate the annual energy yield. Software tools offering a thermodynamic modeling approach of the power block and the CSP thermal cycle, such as EBSILONProfessional®, allow a flexible definition of plant topologies. In EBSILON, the thermodynamic equilibrium for each time step is calculated iteratively (quasi steady state), which requires approximately 45 minutes to process one year with hourly time resolution. For better representation of gradients, 10 min time resolution is recommended, which increases processing time by a factor of 5. Therefore, when analyzing a large number of plant sensitivities, as required during the techno-economic optimization procedure, the detailed thermodynamic simulation approach becomes impracticable. Suntrace has developed an in-house CSP simulation tool (CSPsim), based on EBSILON and applying predictive models, to approximate the CSP plant performance for central receiver and parabolic trough technology. CSPsim significantly increases the speed of energy yield calculations, by a factor of 35 or more, and has automated the simulation runs of all predefined design configurations in sequential order during the optimization procedure. To develop the predictive models, multiple linear regression techniques and Design of Experiments methods are applied. The annual energy yield and derived LCOE calculated by the predictive models deviate by less than ±1.5 % from the thermodynamic simulation in EBSILON and effectively identify the optimal range of the main design parameters for further, more specific analysis.
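
    The surrogate idea can be sketched as fitting a simple regression to a small design-of-experiments sample of detailed simulation results and then sweeping it over the full design space; the design variables, sample size, and the stand-in yield function below are invented for illustration.

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(3)
      # Design variables: solar multiple, storage hours, turbine capacity (MW)
      doe = np.column_stack([rng.uniform(1.5, 3.5, 40),
                             rng.uniform(4, 16, 40),
                             rng.uniform(50, 150, 40)])
      # Stand-in for the detailed thermodynamic simulation result (GWh/yr)
      yield_sim = 0.9 * doe[:, 0] * doe[:, 2] + 3.0 * doe[:, 1] + rng.normal(0, 5, 40)

      surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      surrogate.fit(doe, yield_sim)

      # Evaluate the cheap surrogate over the full design grid instead of re-running the simulator.
      grid = np.column_stack([g.ravel() for g in np.meshgrid(
          np.linspace(1.5, 3.5, 20), np.linspace(4, 16, 20), np.linspace(50, 150, 20))])
      best = grid[np.argmax(surrogate.predict(grid))]
      print("surrogate-optimal design (SM, storage h, MW):", best.round(2))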

  14. Modified creep and shrinkage prediction model B3 for serviceability limit state analysis of composite slabs

    NASA Astrophysics Data System (ADS)

    Gholamhoseini, Alireza

    2016-03-01

    Relatively little research has been reported on the time-dependent in-service behavior of composite concrete slabs with profiled steel decking as permanent formwork and little guidance is available for calculating long-term deflections. The drying shrinkage profile through the thickness of a composite slab is greatly affected by the impermeable steel deck at the slab soffit, and this has only recently been quantified. This paper presents the results of long-term laboratory tests on composite slabs subjected to both drying shrinkage and sustained loads. Based on laboratory measurements, a design model for the shrinkage strain profile through the thickness of a slab is proposed. The design model is based on some modifications to an existing creep and shrinkage prediction model B3. In addition, an analytical model is developed to calculate the time-dependent deflection of composite slabs taking into account the time-dependent effects of creep and shrinkage. The calculated deflections are shown to be in good agreement with the experimental measurements.

  15. Personalized Prediction of Psychosis: External validation of the NAPLS2 Psychosis Risk Calculator with the EDIPPP project

    PubMed Central

    Carrión, Ricardo E.; Cornblatt, Barbara A.; Burton, Cynthia Z.; Tso, Ivy F; Auther, Andrea; Adelsheim, Steven; Calkins, Roderick; Carter, Cameron S.; Niendam, Tara; Taylor, Stephan F.; McFarlane, William R.

    2016-01-01

    Objective In the current issue, Cannon and colleagues, as part of the second phase of the North American Prodrome Longitudinal Study (NAPLS2), report on a risk calculator for the individualized prediction of developing a psychotic disorder in a 2-year period. The present study represents an external validation of the NAPLS2 psychosis risk calculator using an independent sample of subjects at clinical high risk for psychosis collected as part of the Early Detection, Intervention, and Prevention of Psychosis Program (EDIPPP). Methods 176 subjects with follow-up (from the total EDIPPP sample of 210) rated as clinical high-risk (CHR) based on the Structured Interview for Prodromal Syndromes were used to construct a new prediction model with the 6 significant predictor variables in the NAPLS2 psychosis risk calculator (unusual thoughts, suspiciousness, Symbol Coding, verbal learning, social functioning decline, baseline age, and family history). Discrimination performance was assessed with the area under the receiver operating curve (AUC). The NAPLS2 risk calculator was then used to generate a psychosis risk estimate for each case in the external validation sample. Results The external validation model showed good discrimination, with an AUC of 79% (95% CI 0.644–0.937). In addition, the personalized risk generated by the NAPLS calculator provided a solid estimation of the actual conversion outcome in the validation sample. Conclusions In the companion papers in this issue, two independent samples of CHR subjects converge to validate the NAPLS2 psychosis risk calculator. This prediction calculator represents a meaningful step towards early intervention and personalized treatment of psychotic disorders. PMID:27363511
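
    A generic sketch of this kind of external-validation check: apply a previously fitted logistic risk model to a new cohort and measure discrimination with the AUC. The coefficients, predictors, and outcomes below are hypothetical and are not the published NAPLS2 calculator.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(4)
      n = 176
      # Hypothetical standardized predictors (symptom ratings, cognition, functioning, age, family history)
      X = rng.standard_normal((n, 7))
      coef = np.array([0.6, 0.5, -0.4, -0.3, 0.4, -0.1, 0.3])   # assumed model weights
      intercept = -1.8

      risk = 1.0 / (1.0 + np.exp(-(intercept + X @ coef)))      # predicted 2-year risk
      converted = rng.random(n) < risk                          # stand-in conversion outcomes
      print("external-validation AUC:", round(roc_auc_score(converted, risk), 3))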

  16. Testing and Life Prediction for Composite Rotor Hub Flexbeams

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.

    2004-01-01

    A summary of several studies of delamination in tapered composite laminates with internal ply-drops is presented. Initial studies used 2D FE models to calculate interlaminar stresses at the ply-ending locations in linear tapered laminates under tension loading. Strain energy release rates for delamination in these laminates indicated that delamination would likely start at the juncture of the tapered and thin regions and grow unstably in both directions. Tests of glass/epoxy and graphite/epoxy linear tapered laminates under axial tension delaminated as predicted. Nonlinear tapered specimens were cut from a full-size helicopter rotor hub and were tested under combined constant axial tension and cyclic transverse bending loading to simulate the loading experienced by a rotorhub flexbeam in flight. For all the tested specimens, delamination began at the tip of the outermost dropped ply group and grew first toward the tapered region. A 2D FE model was created that duplicated the test flexbeam layup, geometry, and loading. Surface strains calculated by the model agreed very closely with the measured surface strains in the specimens. The delamination patterns observed in the tests were simulated in the model by releasing pairs of MPCs along those interfaces. Strain energy release rates associated with the delamination growth were calculated for several configurations and using two different FE analysis codes. Calculations from the codes agreed very closely. The strain energy release rate results were used with material characterization data to predict fatigue delamination onset lives for nonlinear tapered flexbeams with two different ply-dropping schemes. The predicted curves agreed well with the test data for each case studied.

  17. Development of burnup dependent fuel rod model in COBRA-TF

    NASA Astrophysics Data System (ADS)

    Yilmaz, Mine Ozdemir

    The purpose of this research was to develop a burnup dependent fuel thermal conductivity model within the Pennsylvania State University Reactor Dynamics and Fuel Management Group (RDFMG) version of the subchannel thermal-hydraulics code COBRA-TF (CTF). The model takes into account, first, the degradation of fuel thermal conductivity at high burnup and, second, the dependence of fuel thermal conductivity on gadolinium content for both UO2 and MOX fuel rods. The modified Nuclear Fuel Industries (NFI) model for UO2 fuel rods and the Duriez/modified NFI model for MOX fuel rods were incorporated into CTF, and fuel centerline predictions were compared against Halden experimental test data and FRAPCON-3.4 predictions to validate the burnup dependent fuel thermal conductivity model in CTF. Experimental test cases from Halden reactor fuel rods, covering UO2 fuel rods at Beginning of Life (BOL), through lifetime without Gd2O3, and through lifetime with Gd2O3, as well as a MOX fuel rod, were simulated with CTF. Since the test fuel rod and FRAPCON-3.4 results were based on single rod measurements, CTF was run for a single fuel rod surrounded by a single channel configuration. Input decks for CTF were developed for one fuel rod located at the center of a subchannel (rod-centered subchannel approach). Fuel centerline temperatures predicted by CTF were compared against the measurements from Halden experimental test data and the predictions from FRAPCON-3.4. After implementing the new fuel thermal conductivity model in CTF and validating the model with experimental data, the CTF model was applied to steady-state and transient calculations. A 4x4 PWR fuel bundle configuration from the Purdue MOX benchmark was used to apply the new model to steady-state and transient calculations. First, one high-burnup UO2 fuel rod and one high-burnup MOX fuel rod from the 4x4 matrix were selected for single fuel rod calculations, and fuel centerline temperatures predicted by CTF/TORT-TD were compared against CTF/TORT-TD/FRAPTRAN predictions. After confirming that the new fuel thermal conductivity model in CTF worked and provided results consistent with FRAPTRAN predictions for a single fuel rod configuration, the same type of analysis was carried out for a larger system, the 4x4 PWR bundle consisting of 15 fuel pins and one control guide tube. Steady-state calculations at Hot Full Power (HFP) conditions with the control guide tube out (unrodded) were performed for the 4x4 PWR array with the CTF/TORT-TD coupled code system. Fuel centerline, surface, and average temperatures predicted by CTF/TORT-TD with and without the new fuel thermal conductivity model were compared against CTF/TORT-TD/FRAPTRAN predictions to demonstrate the improvement in fuel centerline predictions when the new model was used. In addition, constant and CTF dynamic gap conductance models were used with the new thermal conductivity model to show the performance of the CTF dynamic gap conductance model and its impact on fuel centerline and surface temperatures. Finally, a Rod Ejection Accident (REA) scenario using the same 4x4 PWR array was run at both Hot Zero Power (HZP) and Hot Full Power (HFP) conditions, starting at a position where half of the control rod is inserted. This scenario was run using the CTF/TORT-TD coupled code system with and without the new fuel thermal conductivity model. The purpose of this transient analysis was to show the impact of thermal conductivity degradation (TCD) on feedback effects, specifically the Doppler Reactivity Coefficient (DRC) and, eventually, total core reactivity.

  18. Effect of PVA fiber content on creep property of fiber reinforced high-strength concrete columns

    NASA Astrophysics Data System (ADS)

    Xu, Zongnan; Wang, Tao; Wang, Weilun

    2018-04-01

    The effect of PVA (polyvinyl alcohol) fiber content on the creep property of fiber reinforced high-strength concrete columns was investigated. A correction factor for PVA fiber content was proposed and the ACI209 creep prediction model was modified. With the concrete strength controlled at C80 and the PVA fiber content varied (volume fractions of 0%, 0.25%, 0.5%, and 1%), creep experiments on PVA fiber reinforced concrete columns were carried out, and the creep coefficient of each specimen was calculated to characterize the creep property. The influence of PVA fiber content on the creep property was analyzed based on the creep coefficients and the calculation results of several frequently used creep prediction models, and the resulting correction factor was used to modify the ACI209 creep prediction model.
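
    A sketch of how such a correction enters the ACI 209 formulation: the standard time-development curve is scaled by an extra fiber-content factor. The gamma_fiber function below is purely illustrative and is not the correction factor proposed in the paper.

      import numpy as np

      def creep_coefficient_aci209(t_days, t0_days=7.0, phi_ultimate=2.35, gamma=1.0):
          """ACI 209 time-development curve scaled by correction factors."""
          dt = np.asarray(t_days, dtype=float) - t0_days
          return dt**0.6 / (10.0 + dt**0.6) * phi_ultimate * gamma

      def gamma_fiber(vf_percent):
          """Hypothetical PVA-fiber correction factor (reduces creep with fiber content)."""
          return 1.0 - 0.12 * vf_percent

      t = np.array([28, 90, 180, 365])
      for vf in (0.0, 0.25, 0.5, 1.0):
          phi = creep_coefficient_aci209(t, gamma=gamma_fiber(vf))
          print(f"Vf = {vf:.2f}%: creep coefficient at {list(t)} d =", phi.round(2))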

  19. Development and external multicenter validation of Chinese Prostate Cancer Consortium prostate cancer risk calculator for initial prostate biopsy.

    PubMed

    Chen, Rui; Xie, Liping; Xue, Wei; Ye, Zhangqun; Ma, Lulin; Gao, Xu; Ren, Shancheng; Wang, Fubo; Zhao, Lin; Xu, Chuanliang; Sun, Yinghao

    2016-09-01

    Substantial differences exist in the relationship of prostate cancer (PCa) detection rate to prostate-specific antigen (PSA) level between Western and Asian populations. Classic Western risk calculators, the European Randomized Study of Screening for Prostate Cancer Risk Calculator and the Prostate Cancer Prevention Trial Risk Calculator, were shown to be not applicable to Asian populations. We aimed to develop and validate a risk calculator for predicting the probability of PCa and high-grade PCa (defined as Gleason score sum 7 or higher) at initial prostate biopsy in Chinese men. Urology outpatients who underwent initial prostate biopsy according to the inclusion criteria were included. The multivariate logistic regression-based Chinese Prostate Cancer Consortium Risk Calculator (CPCC-RC) was constructed with cases from 2 hospitals in Shanghai. Discriminative ability, calibration, and decision curve analysis were externally validated in 3 CPCC member hospitals. Of the 1,835 patients involved, PCa was identified in 338/924 (36.6%) and 294/911 (32.3%) men in the development and validation cohorts, respectively. Multivariate logistic regression analyses showed that 5 predictors (age, logPSA, logPV, free PSA ratio, and digital rectal examination) were associated with PCa (Model 1) or high-grade PCa (Model 2), respectively. The area under the curve of Model 1 and Model 2 was 0.801 (95% CI: 0.771-0.831) and 0.826 (95% CI: 0.796-0.857), respectively. Both models showed good calibration and substantial improvement in decision curve analyses over any single predictor at all threshold probabilities. Higher predictive accuracy, better calibration, and greater clinical benefit were achieved by CPCC-RC in predicting PCa, compared with the European Randomized Study of Screening for Prostate Cancer Risk Calculator and the Prostate Cancer Prevention Trial Risk Calculator. CPCC-RC performed well in discrimination, calibration, and decision curve analysis in external validation compared with Western risk calculators. CPCC-RC may aid decision-making on prostate biopsy in Chinese men or in other Asian populations with similar genetic and environmental backgrounds. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. A study on the influence of corona on currents and electromagnetic fields predicted by a nonlinear lightning return-stroke model

    NASA Astrophysics Data System (ADS)

    De Conti, Alberto; Silveira, Fernando H.; Visacro, Silvério

    2014-05-01

    This paper investigates the influence of corona on currents and electromagnetic fields predicted by a return-stroke model that represents the lightning channel as a nonuniform transmission line with time-varying (nonlinear) resistance. The corona model used in this paper allows the calculation of corona currents as a function of the radial electric field in the vicinity of the channel. A parametric study is presented to investigate the influence of corona parameters, such as the breakdown electric field and the critical electric field for the stable propagation of streamers, on predicted currents and electromagnetic fields. The results show that, regardless of the assumed corona parameters, the incorporation of corona into the nonuniform and nonlinear transmission line model under investigation modifies the model predictions so that they consistently reproduce most of the typical features of experimentally observed lightning electromagnetic fields and return-stroke speed profiles. In particular, it is shown that the proposed model leads to close vertical electric fields presenting waveforms, amplitudes, and decay with distance in good agreement with dart leader electric field changes measured in triggered lightning experiments. A comparison with popular engineering return-stroke models further confirms the model's ability to predict consistent electric field waveforms in the close vicinity of the channel. Some differences observed in the field amplitudes calculated with the different models can be related to the fact that current distortion, while present in the proposed model, is ultimately neglected in the considered engineering return-stroke models.

  1. A cluster expansion model for predicting activation barrier of atomic processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rehman, Tafizur; Jaipal, M.; Chatterjee, Abhijit, E-mail: achatter@iitk.ac.in

    2013-06-15

    We introduce a procedure based on cluster expansion models for predicting the activation barrier of atomic processes encountered while studying the dynamics of a material system using the kinetic Monte Carlo (KMC) method. Starting with an interatomic potential description, a mathematical derivation is presented to show that the local environment dependence of the activation barrier can be captured using cluster interaction models. Next, we develop a systematic procedure for training the cluster interaction model on-the-fly, which involves: (i) obtaining activation barriers for a handful of local environments using nudged elastic band (NEB) calculations, (ii) identifying the local environment by analyzing the NEB results, and (iii) estimating the cluster interaction model parameters from the activation barrier data. Once a cluster expansion model has been trained, it is used to predict activation barriers without requiring any additional NEB calculations. Numerical studies are performed to validate the cluster expansion model by studying hop processes in Ag/Ag(100). We show that the use of the cluster expansion model with KMC enables efficient generation of an accurate process rate catalog.
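
    A minimal version of the training step: treat the barrier as a linear function of local-environment occupation features, fit the cluster interaction parameters to a handful of NEB barriers by least squares, and reuse the fit for new environments. The features, interaction values, and barriers below are synthetic.

      import numpy as np

      rng = np.random.default_rng(5)
      # Occupation of, e.g., first/second/third neighbor shells around the hopping atom
      env_train = rng.integers(0, 5, size=(12, 3)).astype(float)   # 12 NEB-characterized cases
      true_J = np.array([0.08, 0.03, 0.01])                        # "true" cluster interactions (eV)
      E0 = 0.45                                                     # bare hop barrier (eV)
      barriers = E0 + env_train @ true_J + 0.005 * rng.standard_normal(12)

      # Least-squares fit of the cluster interaction parameters (with intercept).
      A = np.column_stack([np.ones(len(env_train)), env_train])
      coef, *_ = np.linalg.lstsq(A, barriers, rcond=None)

      new_env = np.array([1.0, 3.0, 2.0])                           # environment not in training set
      predicted = coef[0] + new_env @ coef[1:]
      print(f"predicted barrier: {predicted:.3f} eV (no extra NEB calculation needed)")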

  2. Ab Initio Molecular Dynamics and Lattice Dynamics-Based Force Field for Modeling Hexagonal Boron Nitride in Mechanical and Interfacial Applications.

    PubMed

    Govind Rajan, Ananth; Strano, Michael S; Blankschtein, Daniel

    2018-04-05

    Hexagonal boron nitride (hBN) is an up-and-coming two-dimensional material, with applications in electronic devices, tribology, and separation membranes. Herein, we utilize density-functional-theory-based ab initio molecular dynamics (MD) simulations and lattice dynamics calculations to develop a classical force field (FF) for modeling hBN. The FF predicts the crystal structure, elastic constants, and phonon dispersion relation of hBN with good accuracy and exhibits remarkable agreement with the interlayer binding energy predicted by random phase approximation calculations. We demonstrate the importance of including Coulombic interactions but excluding 1-4 intrasheet interactions to obtain the correct phonon dispersion relation. We find that improper dihedrals do not modify the bulk mechanical properties and the extent of thermal vibrations in hBN, although they impact its flexural rigidity. Combining the FF with the accurate TIP4P/Ice water model yields excellent agreement with interaction energies predicted by quantum Monte Carlo calculations. Our FF should enable an accurate description of hBN interfaces in classical MD simulations.

  3. A Mass-balance nitrate model for predicting the effects of land use on ground-water quality in municipal wellhead-protection areas

    USGS Publications Warehouse

    Frimpter, M.H.; Donohue, J.J.; Rapacz, M.V.; Beye, H.G.

    1990-01-01

    A mass-balance accounting model can be used to guide the management of septic systems and fertilizers to control the degradation of groundwater quality in zones of an aquifer that contributes water to public supply wells. The nitrate nitrogen concentration of the mixture in the well can be predicted for steady-state conditions by calculating the concentration that results from the total weight of nitrogen and total volume of water entering the zone of contribution to the well. These calculations will allow water-quality managers to predict the nitrate concentrations that would be produced by different types and levels of development, and to plan development accordingly. Computations for different development schemes provide a technical basis for planners and managers to compare water quality effects and to select alternatives that limit nitrate concentration in wells. Appendix A contains tables of nitrate loads and water volumes from common sources for use with the accounting model. Appendix B describes the preparation of a spreadsheet for the nitrate loading calculations with a software package generally available for desktop computers. (USGS)
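
    A worked example of the steady-state mass balance: the predicted nitrate-nitrogen concentration at the well is the total nitrogen load divided by the total water volume entering the zone of contribution. The loads and recharge volume below are illustrative, not the report's tabulated values.

      # Hypothetical nitrogen sources in the zone of contribution (kg N per year).
      septic_n = 150 * 2.0        # 150 septic systems at ~2 kg N each
      fertilizer_n = 40 * 1.5     # 40 fertilized hectares at ~1.5 kg N/ha reaching recharge
      total_n_kg = septic_n + fertilizer_n

      recharge_m3 = 1.2e5         # assumed annual recharge volume, m^3

      conc_mg_per_L = total_n_kg * 1e6 / (recharge_m3 * 1e3)   # kg -> mg, m^3 -> L
      print(f"predicted nitrate-N at the well: {conc_mg_per_L:.1f} mg/L "
            f"(U.S. drinking-water MCL is 10 mg/L)")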

  4. Comparison of different risk stratification systems in predicting short-term serious outcome of syncope patients.

    PubMed

    Safari, Saeed; Baratloo, Alireza; Hashemi, Behrooz; Rahmati, Farhad; Forouzanfar, Mohammad Mehdi; Motamedi, Maryam; Mirmohseni, Ladan

    2016-01-01

    Determining etiologic causes and prognosis can significantly improve the management of syncope patients. The present study aimed to compare the values of the San Francisco, Osservatorio Epidemiologico sulla Sincope nel Lazio (OESIL), Boston, and Risk Stratification of Syncope in the Emergency Department (ROSE) score clinical decision rules in predicting the short-term serious outcome of syncope patients. The present diagnostic accuracy study with 1-week follow-up was designed to evaluate the predictive values of the four mentioned clinical decision rules. Screening performance characteristics of each model in predicting mortality, myocardial infarction (MI), and cerebrovascular accidents (CVAs) were calculated and compared. To evaluate the value of each aforementioned model in predicting the outcome, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio were calculated, and receiver operating characteristic (ROC) curve analysis was performed. A total of 187 patients (mean age: 64.2 ± 17.2 years) were enrolled in the study. Mortality, MI, and CVA were seen in 19 (10.2%), 12 (6.4%), and 36 (19.2%) patients, respectively. The area under the ROC curve for the OESIL, San Francisco, Boston, and ROSE models in predicting the risk of 1-week mortality, MI, and CVA was in the 30-70% range, with no significant difference among models (P > 0.05). The pooled model did not show higher accuracy in predicting mortality, MI, and CVA compared to the others (P > 0.05). This study revealed the weakness of all four evaluated models in predicting the short-term serious outcome of syncope patients referred to the emergency department, without any significant advantage of one over the others.

  5. Acoustical transmission-line model of the middle-ear cavities and mastoid air cells.

    PubMed

    Keefe, Douglas H

    2015-04-01

    An acoustical transmission line model of the middle-ear cavities and mastoid air cell system (MACS) was constructed for the adult human middle ear with normal function. The air-filled cavities comprised the tympanic cavity, aditus, antrum, and MACS. A binary symmetrical airway branching model of the MACS was constructed using an optimization procedure to match the average total volume and surface area of human temporal bones. The acoustical input impedance of the MACS was calculated using a recursive procedure, and used to predict the input impedance of the middle-ear cavities at the location of the tympanic membrane. The model also calculated the ratio of the acoustical pressure in the antrum to the pressure in the middle-ear cavities at the location of the tympanic membrane. The predicted responses were sensitive to the magnitude of the viscothermal losses within the MACS. These predicted input impedance and pressure ratio functions explained the presence of multiple resonances reported in published data, which were not explained by existing MACS models.
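
    The recursive impedance calculation can be sketched for a lossless branching tube network: each segment transforms its terminating impedance, closed terminal cells contribute -j*Z0*cot(kL), and sibling branches combine in parallel. The published model adds viscothermal losses and uses anatomical dimensions; the radii, lengths, and frequency below are assumptions.

      import numpy as np

      RHO, C = 1.18, 346.0                 # air density (kg/m^3), sound speed (m/s) near body temperature

      def z_char(radius):
          return RHO * C / (np.pi * radius**2)        # characteristic acoustic impedance of a tube

      def z_closed(radius, length, f):
          """Input impedance of a rigidly terminated (closed) tube: -j*Z0*cot(kL)."""
          k = 2 * np.pi * f / C
          return -1j * z_char(radius) / np.tan(k * length)

      def z_input(z_load, radius, length, f):
          """Lossless transmission-line transformation of a terminating impedance."""
          z0, k = z_char(radius), 2 * np.pi * f / C
          return z0 * (z_load + 1j * z0 * np.tan(k * length)) / (z0 + 1j * z_load * np.tan(k * length))

      def parallel(z1, z2):
          return 1.0 / (1.0 / z1 + 1.0 / z2)

      f = 1000.0                                       # Hz
      cell = z_closed(1.5e-3, 3e-3, f)                 # one terminal mastoid air cell
      branch = parallel(cell, cell)                    # two symmetric daughter cells
      z_at_antrum = z_input(branch, 3e-3, 8e-3, f)     # transform through the aditus/antrum tube
      print(f"|Z| at the antrum entrance: {abs(z_at_antrum):.3e} Pa*s/m^3")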

  6. Assessing bioavailability levels of metals in effluent-affected rivers: effect of Fe(III) and chelating agents on the distribution of metal speciation.

    PubMed

    Han, Shuping; Naito, Wataru; Masunaga, Shigeki

    To assess the effects of Fe(III) and anthropogenic ligands on the bioavailability of Ni, Cu, Zn, and Pb, concentrations of bioavailable metals were measured by the DGT (diffusive gradients in thin films) method in some urban rivers, and were compared with concentrations calculated by a chemical equilibrium model (WHAM 7.0). Assuming that dissolved Fe(III) (<0.45 μm membrane filtered) was in equilibrium with colloidal iron oxide, the WHAM 7.0 model estimated that bioavailable concentrations of Ni, Cu, and Zn were slightly higher than the corresponding values estimated assuming that dissolved Fe(III) was absent. In contrast, lower levels of free Pb were predicted by the WHAM 7.0 model when dissolved Fe(III) was included. Estimates showed that most of the dissolved Pb was present as colloidal iron-Pb complex. Ethylene-diamine-tetra-acetic acid (EDTA) concentrations at sampling sites were predicted from the relationship between EDTA and the calculated bioavailable concentration of Zn. When both colloidal iron and predicted EDTA concentrations were included in the WHAM 7.0 calculations, dissolved metals showed a strong tendency to form EDTA complexes, in the order Ni > Cu > Zn > Pb. With the inclusion of EDTA, bioavailable concentrations of Ni, Cu, and Zn predicted by WHAM 7.0 were different from those predicted considering only humic substances and colloidal iron.

  7. Using a Programmable Calculator to Teach Theophylline Pharmacokinetics.

    ERIC Educational Resources Information Center

    Closson, Richard Grant

    1981-01-01

    A calculator program for a Texas Instruments Model 59 to predict serum theophylline concentrations is described. The program accommodates the input of multiple dose times at irregular intervals, clearance changes due to concurrent patient diseases and age less than 17 years. The calculations for five hypothetical patients are given. (Author/MLW)
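
    A modern sketch of the same kind of calculation the TI-59 program performed: one-compartment superposition of doses given at irregular times, with clearance as the patient-specific input. The doses, times, and pharmacokinetic constants below are illustrative, not the article's program.

      import numpy as np

      Vd = 0.5 * 70            # volume of distribution (L), ~0.5 L/kg for a 70 kg patient
      CL = 2.8                 # theophylline clearance (L/h); adjusted for disease and age in practice
      k = CL / Vd              # elimination rate constant (1/h)

      doses_mg = [300, 300, 200]          # doses given
      dose_times_h = [0.0, 7.5, 16.0]     # irregular dosing times

      def concentration(t):
          """Superpose the decaying contribution of every dose given before time t."""
          c = 0.0
          for dose, t_dose in zip(doses_mg, dose_times_h):
              if t >= t_dose:
                  c += (dose / Vd) * np.exp(-k * (t - t_dose))
          return c                        # mg/L

      for t in (6, 12, 24):
          print(f"predicted serum level at t = {t:2d} h: {concentration(t):.1f} mg/L")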

  8. Collaborative Physical Chemistry Projects Involving Computational Chemistry

    NASA Astrophysics Data System (ADS)

    Whisnant, David M.; Howe, Jerry J.; Lever, Lisa S.

    2000-02-01

    The physical chemistry classes from three colleges have collaborated on two computational chemistry projects using Quantum CAChe 3.0 and Gaussian 94W running on Pentium II PCs. Online communication by email and the World Wide Web was an important part of the collaboration. In the first project, students used molecular modeling to predict benzene derivatives that might be possible hair dyes. They used PM3 and ZINDO calculations to predict the electronic spectra of the molecules and tested the predicted spectra by comparing some with experimental measurements. They also did literature searches for real hair dyes and possible health effects. In the final phase of the project they proposed a synthetic pathway for one compound. In the second project the students were asked to predict which isomer of a small carbon cluster (C3, C4, or C5) was responsible for a series of IR lines observed in the spectrum of a carbon star. After preliminary PM3 calculations, they used ab initio calculations at the HF/6-31G(d) and MP2/6-31G(d) level to model the molecules and predict their vibrational frequencies and rotational constants. A comparison of the predictions with the experimental spectra suggested that the linear isomer of the C5 molecule was responsible for the lines.

  9. Comparison of experimental surface pressures with theoretical predictions on twin two-dimensional convergent-divergent nozzles

    NASA Technical Reports Server (NTRS)

    Carlson, J. R.; Pendergraft, O. C., Jr.; Burley, J. R., II

    1986-01-01

    A three-dimensional subsonic aerodynamic panel code (VSAERO) was used to predict the effects of upper and lower external nozzle flap geometry on the external afterbody/nozzle pressure coefficient distributions and external nozzle drag of nonaxisymmetric convergent-divergent exhaust nozzles having parallel external sidewalls installed on a generic twin-engine high performance aircraft model. Nozzle static pressure coefficient distributions along the upper and lower surfaces near the model centerline and near the outer edges (corner) of the two surfaces were calculated, and nozzle drag was predicted using these surface pressure distributions. A comparison between the theoretical predictions and experimental wind tunnel data is made to evaluate the utility of the code in calculating the flow about these types of non-axisymmetric afterbody configurations. For free-stream Mach numbers of 0.60 and 0.90, the conditions where the flows were attached on the boattails yielded the best comparison between the theoretical predictions and the experimental data. For the Boattail terminal angles of greater than 15 deg., the experimental data for M = 0.60 and 0.90 indicated areas of separated flow, so the theoretical predictions failed to match the experimental data. Even though calculations of regions of separated flows are within the capabilities of the theoretical method, acceptable solutions were not obtained.

  10. CFL3D, FUN3d, and NSU3D Contributions to the Fifth Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Laflin, Kelly R.; Chaffin, Mark S.; Powell, Nicholas; Levy, David W.

    2013-01-01

    Results presented at the Fifth Drag Prediction Workshop using CFL3D, FUN3D, and NSU3D are described. These are calculations on the workshop-provided grids and drag-adapted grids. The NSU3D results have been updated to reflect an improvement to the skin friction calculation on skewed grids. FUN3D results generated after the workshop are included for custom participant-generated grids and a grid from a previous workshop. Uniform grid refinement at the design condition shows a tight grouping in calculated drag, where the variation in the pressure component of drag is larger than that in the skin friction component. At this design condition, a fine-grid drag value was predicted with a smaller drag-adjoint adapted grid via tetrahedral adaptation to a metric and mixed-element subdivision. The buffet study produced larger variation than the design case, which is attributed to large differences in the predicted side-of-body separation extent. Various modeling and discretization approaches had a strong impact on the predicted side-of-body separation. This large wing root separation bubble was not observed in wind tunnel tests, indicating that more work is necessary in modeling wing root juncture flows to predict the experiments.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, A.; Chadwick, T.; Makhlouf, M.

    This paper deals with the effects of various solidification variables such as cooling rate, temperature gradient, solidification rate, etc. on the microstructure and shrinkage defects in aluminum alloy (A356) castings. The effects are first predicted using commercial solidification modeling software and then verified experimentally. For this work, the authors consider a rectangular bar cast in a sand mold. Simulation is performed using SIMULOR, a finite volume based casting simulation program. Microstructural variables such as dendritic arm spacing (DAS) and defects (percentage porosity) are calculated from the temperature fields, cooling rate, solidification time, etc. predicted by the software. The same variables are then determined experimentally in the foundry. The test piece is cast in a resin (sodium silicate) bonded sand mold, and the DAS and porosity variables are determined using scanning electron microscopy and image analysis. The predictions from the software are compared with the experimental results. The results are presented and critically analyzed to determine the quality of the predictions. The usefulness of commercial solidification modeling software as a tool for the foundry is also discussed.

  12. Calculating the dermal flux of chemicals with OELs based on their molecular structure: An attempt to assign the skin notation.

    PubMed

    Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir

    2010-09-01

    Our objectives included calculating the permeability coefficient and dermal penetration rates (flux value) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. There are many studies available where formulae for coefficients of permeability from saturated aqueous solutions (K(p)) have been related to physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms Database were used to calculate permeability coefficients. Dermal penetration rate was estimated as a ratio of permeability coefficient and concentration of chemical in saturated aqueous solution. Finally, estimated dermal penetration rates were used to assign the skin notation to chemicals. Defined critical fluxes defined from the literature were recommended as reference values for skin notation. The application of Abraham descriptors predicted from chemical structure and LFER analysis in calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of calculated K(p) values with data obtained earlier from other models showed that LFER predictions were comparable to those obtained by some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that skin should not be characterised as a simple lipophilic barrier alone. Both lipophilic and polar pathways of permeation exist across the stratum corneum. It is feasible to predict skin notation on the basis of the LFER and other published models; from among 112 chemicals 94 (84%) should have the skin notation in the OEL list based on the LFER calculations. The skin notation had been estimated by other published models for almost 94% of the chemicals. Twenty-nine (25.8%) chemicals were identified to have significant absorption and 65 (58%) the potential for dermal toxicity. We found major differences between alternative published analytical models and their ability to determine whether particular chemicals were potentially dermotoxic. Copyright © 2010 Elsevier B.V. All rights reserved.
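
    The workflow can be sketched as evaluating the solvation equation for log Kp from the five Abraham descriptors and then multiplying the permeability coefficient by the saturated aqueous concentration to obtain the flux; the regression coefficients, descriptors, and solubility below are placeholders rather than the published skin-permeation LFER.

      def log_kp(E, S, A, B, V, coeffs=(-5.2, -0.1, -0.5, -0.5, -3.0, 2.3)):
          """log10 permeability coefficient (cm/h) from the solvation equation (placeholder coefficients)."""
          c, e, s, a, b, v = coeffs
          return c + e * E + s * S + a * A + b * B + v * V

      # Hypothetical Abraham descriptors for one compound
      E, S, A, B, V = 0.61, 0.51, 0.37, 0.48, 0.716
      kp_cm_per_h = 10 ** log_kp(E, S, A, B, V)

      c_sat_mg_per_cm3 = 80.0                       # assumed aqueous solubility
      flux_mg_per_cm2_h = kp_cm_per_h * c_sat_mg_per_cm3
      print(f"Kp = {kp_cm_per_h:.2e} cm/h, dermal flux = {flux_mg_per_cm2_h:.2e} mg/cm^2/h")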

  13. FASTGRASS: A mechanistic model for the prediction of Xe, I, Cs, Te, Ba, and Sr release from nuclear fuel under normal and severe-accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.; Zawadzki, S.A.

    The primary physical/chemical models that form the basis of the FASTGRASS mechanistic computer model for calculating fission-product release from nuclear fuel are described. Calculated results are compared with test data and the major mechanisms affecting the transport of fission products during steady-state and accident conditions are identified.

  14. NASA Langley developments in response calculations needed for failure and life prediction

    NASA Technical Reports Server (NTRS)

    Housner, Jerrold M.

    1993-01-01

    NASA Langley developments in response calculations needed for failure and life predictions are discussed. Topics covered include: structural failure analysis in concurrent engineering; accuracy of independent regional modeling demonstrated on classical example; functional interface method accurately joins incompatible finite element models; interface method for insertion of local detail modeling extended to curve pressurized fuselage window panel; interface concept for joining structural regions; motivation for coupled 2D-3D analysis; compression panel with discontinuous stiffener coupled 2D-3D model and axial surface strains at the middle of the hat stiffener; use of adaptive refinement with multiple methods; adaptive mesh refinement; and studies on quantity effect of bow-type initial imperfections on reliability of stiffened panels.

  15. Comparisons of a Quantum Annealing and Classical Computer Neural Net Approach for Inferring Global Annual CO2 Fluxes over Land

    NASA Astrophysics Data System (ADS)

    Halem, M.; Radov, A.; Singh, D.

    2017-12-01

    Investigations of mid- to high-latitude atmospheric CO2 show growing amplitudes in seasonal variations over the past several decades. Recent high-resolution satellite measurements of CO2 concentration are now available for three years from the Orbiting Carbon Observatory-2. The Atmospheric Radiation Measurement (ARM) program of DOE has been making long-term CO2-flux measurements (in addition to CO2 concentration and an array of other meteorological quantities) at several towers and mobile sites located around the globe at half-hour frequencies. Recent papers have shown that CO2 fluxes inferred by assimilating CO2 observations into ecosystem models are largely inconsistent with station observations. An investigation of how the biosphere has reacted to changes in atmospheric CO2 is essential to our understanding of potential climate-vegetation feedbacks. Thus, new approaches for calculating CO2 flux for assimilation into land surface models are necessary for improving the prediction of annual carbon uptake. In this study, we calculate and compare predicted CO2 fluxes employing a feed-forward backpropagation neural network model on two architectures: (i) an IBM Minsky computer node and (ii) a hybrid version of the ARC D-Wave quantum annealing computer. We compare the neural net predictions of CO2 flux from ARM station data for three different DOE ecosystem sites: an arid plains site near Oklahoma City, a northern Arctic site at Barrow, Alaska, and a tropical rainforest site in the Amazon. Training times and predictive results for calculating annual CO2 flux on the two architectures for each of the three sites are presented. Comparative results of the predictions as measured by RMSE and MAE are discussed. Plots and correlations of observed versus predicted CO2 flux are also presented for all three sites. We show the estimated training times for quantum and classical calculations when extended to calculating global annual carbon uptake over land. We also examine the efficiency, dependability, and resilience of the quantum neural net approach relative to classical computer systems in predicting annual CO2 flux globally.
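
    A classical (non-quantum) baseline in the spirit of this comparison: a small feed-forward network trained on half-hourly tower variables to predict CO2 flux, scored with RMSE and MAE. The inputs and flux values are synthetic stand-ins for the ARM station data.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_absolute_error, mean_squared_error

      rng = np.random.default_rng(6)
      n = 5000
      X = np.column_stack([
          rng.uniform(0, 1000, n),     # shortwave radiation (W/m^2)
          rng.uniform(-10, 35, n),     # air temperature (C)
          rng.uniform(0.05, 0.4, n),   # soil moisture (m^3/m^3)
          rng.uniform(350, 420, n),    # CO2 concentration (ppm)
      ])
      flux = -0.02 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 2, n)    # umol m^-2 s^-1, synthetic

      X_tr, X_te, y_tr, y_te = train_test_split(X, flux, random_state=0)
      net = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
      net.fit(X_tr, y_tr)
      pred = net.predict(X_te)
      print("RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 2),
            "MAE:", round(mean_absolute_error(y_te, pred), 2))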

  16. Comparative Risk Predictions of Second Cancers After Carbon-Ion Therapy Versus Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eley, John G., E-mail: jeley@som.umaryland.edu; University of Texas Graduate School of Biomedical Sciences, Houston, Texas; Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland

    Purpose: This work proposes a theoretical framework that enables comparative risk predictions for second cancer incidence after particle beam therapy for different ion species for individual patients, accounting for differences in relative biological effectiveness (RBE) for the competing processes of tumor initiation and cell inactivation. Our working hypothesis was that use of carbon-ion therapy instead of proton therapy would show a difference in the predicted risk of second cancer incidence in the breast for a sample of Hodgkin lymphoma (HL) patients. Methods and Materials: We generated biologic treatment plans and calculated relative predicted risks of second cancer in the breast by using two proposed methods: a full model derived from the linear quadratic model and a simpler linear-no-threshold model. Results: For our reference calculation, we found the predicted risk ratio of breast cancer incidence for carbon-ion plans to proton plans to be 0.75 ± 0.07, but not significantly smaller than 1 (P=.180). Conclusions: Our findings suggest that second cancer risks are, on average, comparable between proton therapy and carbon-ion therapy.

  17. Comparative Risk Predictions of Second Cancers After Carbon-Ion Therapy Versus Proton Therapy.

    PubMed

    Eley, John G; Friedrich, Thomas; Homann, Kenneth L; Howell, Rebecca M; Scholz, Michael; Durante, Marco; Newhauser, Wayne D

    2016-05-01

    This work proposes a theoretical framework that enables comparative risk predictions for second cancer incidence after particle beam therapy with different ion species for individual patients, accounting for differences in relative biological effectiveness (RBE) for the competing processes of tumor initiation and cell inactivation. Our working hypothesis was that use of carbon-ion therapy instead of proton therapy would show a difference in the predicted risk of second cancer incidence in the breast for a sample of Hodgkin lymphoma (HL) patients. We generated biologic treatment plans and calculated relative predicted risks of second cancer in the breast by using two proposed methods: a full model derived from the linear quadratic model and a simpler linear-no-threshold model. For our reference calculation, we found the ratio of the predicted risk of breast cancer incidence for the carbon-ion plans to that for the proton plans to be 0.75 ± 0.07, which was not significantly smaller than 1 (P=.180). Our findings suggest that second cancer risks are, on average, comparable between proton therapy and carbon-ion therapy.

  18. Highly ionized atoms in cooling gas. [in model for cooling of hot Galactic corona

    NASA Technical Reports Server (NTRS)

    Edgar, Richard J.; Chevalier, Roger A.

    1986-01-01

    The ionization of low-density gas cooling from a high temperature was calculated. The evolution during the cooling is assumed to be isochoric, isobaric, or a combination of these cases. The calculations are used to predict the column densities and ultraviolet line luminosities of highly ionized atoms in cooling gas. In a model for cooling of a hot galactic corona, it is shown that the observed value of N(N V) can be produced in the cooling gas, while the predicted value of N(Si IV) falls short of the observed value by a factor of about 5. The same model predicts fluxes of ultraviolet emission lines that are a factor of 10 lower than the claimed detections of Feldman, Brune, and Henry. Predictions are made for ultraviolet lines in cooling flows in early-type galaxies and clusters of galaxies. It is shown that the column densities of interest vary over a fairly narrow range, while the emission line luminosities are simply proportional to the mass inflow rate.

  19. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and Analysis of Cosmic Ray Effects in Electronics) models. The Single Event Figure of Merit method was also revised to use the solar minimum galactic cosmic ray spectrum and extended to circular orbits down to 200 km at any inclination. More recently, a series of commercial codes was developed by TRAD (Test & Radiations), including the OMERE code, which calculates single event effects. There are other error rate prediction methods which use Monte Carlo techniques. In this chapter the analytic methods for estimating the environment within spacecraft will be discussed.

  20. A Lagrangian Transport Eulerian Reaction Spatial (LATERS) Markov Model for Prediction of Effective Bimolecular Reactive Transport

    NASA Astrophysics Data System (ADS)

    Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi

    2017-11-01

    Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. For some problems there can be benefits to using a Lagrangian framework, while for others an Eulerian framework might have advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify the closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.
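
    As an illustration of the Spatial Markov idea referenced above, the sketch below advances particles over fixed spatial steps while their travel time depends on a velocity class that evolves through a transition matrix, so successive steps are correlated. The two-state transition matrix and travel times are invented for the example and are not taken from the LATERS model.

```python
# Minimal sketch (assumptions, not the authors' LATERS code): a spatial Markov
# walk in which a particle's travel time over each fixed spatial step depends
# on a velocity "state" that evolves via a transition matrix, so successive
# steps are correlated. Transition probabilities and travel times are made up.
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.8, 0.2],     # hypothetical transition matrix between a
              [0.3, 0.7]])    # "fast" (0) and a "slow" (1) velocity class
dt_state = np.array([1.0, 5.0])   # travel time per spatial step in each class
n_particles, n_steps = 10_000, 50

states = rng.integers(0, 2, size=n_particles)
arrival_time = np.zeros(n_particles)
for _ in range(n_steps):
    arrival_time += dt_state[states]
    # draw the next state for every particle from row `states` of P
    u = rng.random(n_particles)
    states = (u > P[states, 0]).astype(int)

print("mean / median arrival time after 50 steps:",
      arrival_time.mean(), np.median(arrival_time))
```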

  1. Three-dimensional effects for radio frequency antenna modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, M.D.; Batchelor, D.B.; Stallings, D.C.

    1994-10-15

    Electromagnetic field calculations for radio frequency (rf) antennas in two dimensions (2-D) neglect finite antenna length effects as well as the feeders leading to the main current strap. The 2-D calculations predict that the return currents in the sidewalls of the antenna structure depend strongly on the plasma parameters, but this prediction is suspect because of experimental evidence. To study the validity of the 2-D approximation, the Multiple Antenna Implementation System (MAntIS) has been used to perform three-dimensional (3-D) modeling of the power spectrum, plasma loading, and inductance for a relevant loop antenna design. Effects on antenna performance caused by feeders to the main current strap and conducting sidewalls are considered. The modeling shows that the feeders affect the launched power spectrum in an indirect way by forcing the driven rf current to return in the antenna structure rather than the plasma, as in the 2-D model. It has also been found that poloidal dependencies in the plasma impedance matrix can reduce the predicted loading relative to that of the 2-D model. For some plasma parameters, the combined 3-D effects can lead to a reduction in the predicted loading by as much as a factor of 2 from that given by the 2-D model, even with end-effect corrections for the 2-D model.

  2. Three-dimensional effects for radio frequency antenna modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, M.D.; Batchelor, D.B.; Stallings, D.C.

    1993-12-31

    Electromagnetic field calculations for radio frequency (rf) antennas in two dimensions (2-D) neglect finite antenna length effects as well as the feeders leading to the main current strap. The 2-D calculations predict that the return currents in the sidewalls of the antenna structure depend strongly on the plasma parameters, but this prediction is suspect because of experimental evidence. To study the validity of the 2-D approximation, the Multiple Antenna Implementation System (MAntIS) has been used to perform three-dimensional (3-D) modeling of the power spectrum, plasma loading, and inductance for a relevant loop antenna design. Effects on antenna performance caused by feeders to the main current strap and conducting sidewalls are considered. The modeling shows that the feeders affect the launched power spectrum in an indirect way by forcing the driven rf current to return in the antenna structure rather than the plasma, as in the 2-D model. It has also been found that poloidal dependencies in the plasma impedance matrix can reduce the predicted loading relative to that of the 2-D model. For some plasma parameters, the combined 3-D effects can lead to a reduction in the predicted loading by as much as a factor of 2 from that given by the 2-D model, even with end-effect corrections for the 2-D model.

  3. Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions

    NASA Astrophysics Data System (ADS)

    van Hooidonk, R.; Huber, M.

    2012-03-01

    Future widespread coral bleaching and subsequent mortality have been projected using sea surface temperature (SST) data derived from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. Such weaknesses most likely reduce the accuracy of predicting coral bleaching, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends, and their propagation in predictions. To analyze the relative importance of various types of model errors and biases in predicting coral bleaching, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from the 20th-century simulations of 24 GCMs included in the Intergovernmental Panel on Climate Change (IPCC) 4th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate accuracy using an objective measure of forecast quality, the Peirce skill score (PSS). Major findings are that: (1) predictions are most sensitive to the seasonal cycle and to inter-annual variability in the ENSO 24-60 month frequency band, and (2) because models tend to understate the seasonal cycle at reef locations, they systematically underestimate future bleaching. The methodology we describe can be used to improve the accuracy of bleaching predictions by characterizing the errors and uncertainties involved in the predictions.

  4. Cosmic ray antiprotons in closed galaxy model

    NASA Technical Reports Server (NTRS)

    Protheroe, R.

    1981-01-01

    The flux of secondary antiprotons expected for the leaky-box model was calculated, as well as that for the closed galaxy model of Peters and Westergaard (1977). The antiproton/proton ratio observed at several GeV is a factor of 4 higher than the prediction for the leaky-box model but is consistent with that predicted for the closed galaxy model. New low-energy data are not consistent with either model. The possibility of a primary antiproton component is discussed.

  5. Predicting the stability of nanodevices

    NASA Astrophysics Data System (ADS)

    Lin, Z. Z.; Yu, W. F.; Wang, Y.; Ning, X. J.

    2011-05-01

    A simple model based on the statistics of single atoms is developed to predict the stability or lifetime of nanodevices without empirical parameters. Under certain conditions, the model reproduces the Arrhenius law and the Meyer-Neldel compensation rule. Compared with classical molecular-dynamics simulations for predicting the stability of a monatomic carbon chain at high temperature, the model proves to be much more accurate than transition state theory. Based on ab initio calculations of the static potential, the model gives a corrected lifetime for monatomic carbon and gold chains at higher temperatures and predicts that the monatomic chains are very stable at room temperature.
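
    For reference, the Arrhenius-law lifetime mentioned above has the form τ = ν⁻¹ exp(Ea / kB T); the snippet below evaluates it with a placeholder attempt frequency and barrier (not values from the paper) to show how strongly the estimated lifetime depends on temperature.

```python
# Minimal sketch (illustrative only): an Arrhenius-type lifetime estimate
# tau = (1/nu) * exp(Ea / (kB * T)). The attempt frequency and barrier below
# are hypothetical placeholders, not values from the paper.
import math

K_B = 8.617333262e-5      # Boltzmann constant, eV/K
nu = 1.0e13               # assumed attempt frequency, 1/s
E_a = 1.2                 # assumed activation barrier, eV

for T in (300.0, 600.0, 1000.0):
    tau = math.exp(E_a / (K_B * T)) / nu
    print(f"T = {T:6.0f} K  ->  estimated lifetime ~ {tau:.3e} s")
```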

  6. Predictions for partial and monolayer coverages of O2 on graphite

    NASA Technical Reports Server (NTRS)

    Pan, R. P.; Etters, R. D.; Kobashi, K.; Chandrasekharan, V.

    1982-01-01

    Monolayer properties of O2 on graphite are calculated using a pattern-recognition optimization scheme. Equilibrium monolayers are predicted at two different densities, with properties in agreement with recent X-ray diffraction, specific heat, and neutron scattering data. Properties of the extremely low-density regime are calculated using a model based upon a distribution of two-dimensional O2 clusters. The results are consistent with experimental evidence.

  7. Variation Among Internet Based Calculators in Predicting Spontaneous Resolution of Vesicoureteral Reflux

    PubMed Central

    Routh, Jonathan C.; Gong, Edward M.; Cannon, Glenn M.; Yu, Richard N.; Gargollo, Patricio C.; Nelson, Caleb P.

    2010-01-01

    Purpose An increasing number of parents and practitioners use the Internet for health related purposes, and an increasing number of models are available on the Internet for predicting spontaneous resolution rates for children with vesicoureteral reflux. We sought to determine whether currently available Internet based calculators for vesicoureteral reflux resolution produce systematically different results. Materials and Methods Following a systematic Internet search we identified 3 Internet based calculators of spontaneous resolution rates for children with vesicoureteral reflux, of which 2 were academic affiliated and 1 was industry affiliated. We generated a random cohort of 100 hypothetical patients with a wide range of clinical characteristics and entered the data on each patient into each calculator. We then compared the results from the calculators in terms of mean predicted resolution probability and number of cases deemed likely to resolve at various cutoff probabilities. Results Mean predicted resolution probabilities were 41% and 36% (range 31% to 41%) for the 2 academic affiliated calculators and 33% for the industry affiliated calculator (p = 0.02). For some patients the calculators produced markedly different probabilities of spontaneous resolution, in some instances ranging from 24% to 89% for the same patient. At thresholds greater than 5%, 10% and 25% probability of spontaneous resolution the calculators differed significantly regarding whether cases would resolve (all p < 0.0001). Conclusions Predicted probabilities of spontaneous resolution of vesicoureteral reflux differ significantly among Internet based calculators. For certain patients, particularly those with a lower probability of spontaneous resolution, these differences can significantly influence clinical decision making. PMID:20172550

  8. Novel prediction model of renal function after nephrectomy from automated renal volumetry with preoperative multidetector computed tomography (MDCT).

    PubMed

    Isotani, Shuji; Shimoyama, Hirofumi; Yokota, Isao; Noma, Yasuhiro; Kitamura, Kousuke; China, Toshiyuki; Saito, Keisuke; Hisasue, Shin-ichi; Ide, Hisamitsu; Muto, Satoru; Yamaguchi, Raizo; Ukimura, Osamu; Gill, Inderbir S; Horie, Shigeo

    2015-10-01

    The predictive model of postoperative renal function may impact the planning of nephrectomy. The aims were to develop a novel predictive model combining clinical indices with computer volumetry to measure the preserved renal cortex volume (RCV) using multidetector computed tomography (MDCT), and to prospectively validate the performance of the model. A total of 60 patients undergoing radical nephrectomy from 2011 to 2013 participated, including a development cohort of 39 patients and an external validation cohort of 21 patients. RCV was calculated by voxel count using software (Vincent, FUJIFILM). Renal function before and after radical nephrectomy was assessed via the estimated glomerular filtration rate (eGFR). Factors affecting postoperative eGFR were examined by regression analysis with a backward elimination method to develop the novel model for predicting postoperative eGFR. The predictive model was externally validated and its performance was compared with that of previously reported models. The postoperative eGFR value was associated with age, preoperative eGFR, preserved renal parenchymal volume (RPV), preserved RCV, % of RPV alteration, and % of RCV alteration (p < 0.01). The variables significantly correlated with %eGFR alteration were %RCV preservation (r = 0.58, p < 0.01) and %RPV preservation (r = 0.54, p < 0.01). We developed our regression model as follows: postoperative eGFR = 57.87 - 0.55(age) - 15.01(body surface area) + 0.30(preoperative eGFR) + 52.92(%RCV preservation). A strong correlation was seen between postoperative eGFR and the estimate from the model (r = 0.83; p < 0.001). In the external validation cohort (n = 21), our model outperformed previously reported models. Combining MDCT renal volumetry and clinical indices might yield an important tool for predicting postoperative renal function.
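
    The reported regression can be applied directly; the sketch below transcribes it, with a hypothetical example patient and with %RCV preservation assumed to be entered as a fraction (e.g. 0.80 for 80%), which is an interpretation rather than something stated in the abstract.

```python
# Direct transcription of the regression reported in the abstract; the example
# patient values below are hypothetical. %RCV preservation is assumed to be a
# fraction (e.g. 0.80 for 80%), which is an interpretation, not stated text.
def predicted_postop_egfr(age_years, bsa_m2, preop_egfr, rcv_preserved_fraction):
    return (57.87
            - 0.55 * age_years
            - 15.01 * bsa_m2
            + 0.30 * preop_egfr
            + 52.92 * rcv_preserved_fraction)

print(predicted_postop_egfr(age_years=62, bsa_m2=1.7, preop_egfr=75,
                            rcv_preserved_fraction=0.80))  # ~63 mL/min/1.73 m2
```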

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Bo, E-mail: luboufl@gmail.com; Park, Justin C.; Fan, Qiyong

    Purpose: Accurate lung tumor localization is essential for high-precision radiation therapy techniques such as stereotactic body radiation therapy (SBRT). Since direct monitoring of tumor motion is not always achievable due to the limitations of imaging modalities for treatment guidance, placement of fiducial markers on the patient’s body surface to act as a surrogate for tumor position prediction is a practical alternative for tracking lung tumor motion during SBRT treatments. In this work, the authors propose an innovative and robust model to solve the multimarker position optimization problem. The model is able to overcome the major drawbacks of the sparse optimization approach (SOA) model. Methods: The principal-component-analysis (PCA) method was employed as the framework to build the authors’ statistical prediction model. The method can be divided into two stages. The first stage is to build the surrogate tumor matrix and calculate its eigenvalues and associated eigenvectors. The second stage is to determine the “best represented” columns of the eigenvector matrix obtained from stage one and subsequently acquire the optimal marker positions as well as their number. Using 4-dimensional CT (4DCT) and breath-hold CT imaging data, the PCA method was compared to the SOA method with respect to calculation time, average prediction accuracy, prediction stability, noise resistance, marker position consistency, and marker distribution. Results: The PCA and SOA methods were both tested on all 11 patients for a total of 130 cases including 4DCT and breath-hold CT scenarios. The maximum calculation time for the PCA method was less than 1 s with 64 752 surface points, whereas the average calculation time for the SOA method was over 12 min with 400 surface points. Overall, the tumor center position prediction errors were comparable between the two methods, and all were less than 1.5 mm. However, for the extreme scenarios (breath hold), the prediction errors for the PCA method were not only smaller, but were also more stable than for the SOA method. Results obtained by imposing a series of random noises on the surrogates indicated that the PCA method was much more noise resistant than the SOA method. The marker position consistency tests using various combinations of 4DCT phases to construct the surrogates suggested that the marker position predictions of the PCA method were more consistent than those of the SOA method, regardless of surrogate construction. Marker distribution tests indicated that greater than 80% of the calculated marker positions fell into the high cross correlation and high motion magnitude regions for both of the algorithms. Conclusions: The PCA model is an accurate, efficient, robust, and practical model for solving the multimarker position optimization problem to predict lung tumor motion during SBRT treatments. Due to its generality, the PCA model can also be applied to other image guidance systems that use surface motion as the surrogate.
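
    The two-stage PCA idea described above can be illustrated as follows: decompose a matrix of surface-point displacement traces, keep the leading components, and rank surface points by how strongly they load on them. This is a simplified sketch with synthetic data, not the authors' algorithm; the phase count, point count, and 90% variance cutoff are assumptions.

```python
# Minimal sketch (assumption-laden illustration, not the authors' algorithm):
# PCA of a matrix of surface-point displacements across breathing phases, then
# ranking points by how strongly they load on the leading components.
import numpy as np

rng = np.random.default_rng(2)
n_phases, n_points = 10, 400                 # e.g. 10 4DCT phases, 400 surface points
X = rng.normal(size=(n_phases, n_points))    # placeholder displacement matrix

Xc = X - X.mean(axis=0)                      # centre each surface point's trace
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
k = np.searchsorted(np.cumsum(explained), 0.90) + 1   # components covering 90% variance

# score each surface point by its loading on the first k components
scores = np.sqrt((Vt[:k, :] ** 2).sum(axis=0))
best_points = np.argsort(scores)[::-1][:3]             # candidate marker locations
print("components kept:", k, " top-scoring surface points:", best_points)
```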

  10. Model predictions of latitude-dependent ozone depletion due to aerospace vehicle operations

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Whitten, R. C.; Watson, V. R.; Riegel, C. A.; Maples, A. L.; Capone, L. A.

    1976-01-01

    Results are presented from a two-dimensional model of the stratosphere that simulates the seasonal movement of ozone by both wind and eddy transport, and contains all the chemistry known to be important. The calculated reductions in ozone due to NO2 injection from a fleet of supersonic transports are compared with the zonally averaged results of a three-dimensional model for a similar episode of injection. The agreement is good in the northern hemisphere, but is not as good in the southern hemisphere. Both sets of calculations show a strong corridor effect in that the predicted ozone depletions are largest to the north of the flight corridor for aircraft operating in the northern hemisphere.

  11. The prediction of sea-surface temperature variations by means of an advective mixed-layer ocean model

    NASA Technical Reports Server (NTRS)

    Atlas, R. M.

    1976-01-01

    An advective mixed-layer ocean model was developed by eliminating the assumption of horizontal homogeneity in an existing mixed-layer model and then superimposing mean and anomalous wind-driven current fields. This model is based on the principle of conservation of heat and mechanical energy and utilizes a box grid for the advective part of the calculation. Three phases of experiments were conducted: evaluation of the model's ability to account for climatological sea surface temperature (SST) variations in the cooling and heating seasons, sensitivity tests in which the effect of hypothetical anomalous winds was evaluated, and a thirty-day synoptic calculation using the model. For the case studied, the accuracy of the predictions was improved by the inclusion of advection, although nonadvective effects appear to have dominated.

  12. SU-F-J-199: Predictive Models for Cone Beam CT-Based Online Verification of Pencil Beam Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, L; Lin, A; Ahn, P

    Purpose: To utilize online CBCT scans to develop models for predicting DVH metrics in proton therapy of head and neck tumors. Methods: Nine patients with locally advanced oropharyngeal cancer were retrospectively selected for this study. Deformable image registration was applied to map the simulation CT, target volumes, and organ-at-risk (OAR) contours onto each weekly CBCT scan. Intensity modulated proton therapy (IMPT) treatment plans were created on the simulation CT and forward calculated onto each corrected CBCT scan. Thirty-six potentially predictive metrics were extracted from each corrected CBCT. These features include minimum/maximum/mean over- and under-ranges at the proximal and distal surfaces of the PTV volumes, and geometrical and water equivalent distances between the PTV and each OAR. Principal component analysis (PCA) was used to reduce the dimension of the extracted features. Three principal components were found to account for over 90% of the variance in those features. Datasets from eight patients were used to train a machine learning model to fit these principal components to DVH metrics (dose to 95% and 5% of the PTV, mean dose or max dose to OARs) from the forward calculated dose on each corrected CBCT. The accuracy of this model was verified on the dataset from the 9th patient. Results: The predicted changes of DVH metrics from the model were in good agreement with the actual values calculated on corrected CBCT images. Median differences were within 1 Gy for most DVH metrics except for the larynx and constrictor mean doses. However, a large spread of the differences was observed, indicating that additional training datasets and predictive features are needed to improve the model. Conclusion: Intensity corrected CBCT scans hold the potential to be used for online verification of proton therapy and prediction of delivered dose distributions.

  13. The SAMPL4 host-guest blind prediction challenge: an overview.

    PubMed

    Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K

    2014-04-01

    Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allow higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems, based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as a consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly. Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each other's studies, and to systematically explore parameter options.

  14. Evaluation of in vitro models for predicting acidosis risk of barley grain in finishing beef cattle.

    PubMed

    Anele, U Y; Swift, M-L; McAllister, T A; Galyean, M L; Yang, W Z

    2015-10-01

    Our objective was to develop a model to predict the acidosis potential of barley based on in vitro batch culture incubation of 50 samples varying in bulk density, starch content, processing method, growing location, and agronomic practices. The model was an adaptation of the acidosis index (calculated from a combination of in situ and in vitro analyses and from several components of grain chemical composition) developed in Australia for use in the feed industry to estimate the potential for grains to increase the risk of ruminal acidosis. Of the independent variables considered, DM disappearance at 6 h of incubation (DMD6) using reduced-strength (20%) buffer in the batch culture accounted for 90.5% of the variation in the acidosis index with a root mean square error (RMSE) of 4.46%. To evaluate our model using independent datasets (derived from previous batch culture studies using full-strength [100%] buffer), we performed another batch culture study using full-strength buffer. The full-strength buffer model using in vitro DMD6 (DMD6-FS) accounted for 66.5% of the variation in the acidosis index with an RMSE of 8.30%. When the new full-strength buffer model was applied to 3 independent datasets to predict acidosis, it accounted for 20.1, 28.5, and 30.2% of the variation in the calculated acidosis index. Significant (P < 0.001) mean bias was evident in 2 of the datasets, for which the DMD6 model underpredicted the acidosis index by 46.9 and 5.73%. Ranking of samples from the most diverse independent dataset using the DMD6-FS model and the Black (2008) model (calculated using in situ starch degradation) indicated that the relationship between the rankings, assessed using Spearman's rank correlation, was negative (ρ = -0.30; P = 0.059). When the reduced-strength buffer model was used, however, there were similarities in the acidosis index rankings of barley samples by the two models, as shown by a correlation analysis between the calculated (using the Australian model) and predicted (using the reduced-strength buffer DMD6 model) acidosis index (ρ = 0.67; P < 0.001). Results suggest that our model, which is based on reduced-strength buffer in vitro DMD6, has the potential to predict acidosis risk and can rank barley samples based on their acidotic risk. Nonetheless, the model would benefit from further refinement by expanding the database.

  15. Micro Finite Element models of the vertebral body: Validation of local displacement predictions

    PubMed Central

    Costa, Maria Cristiana; Tozzi, Gianluca; Cristofolini, Luca; Danesi, Valentina; Viceconti, Marco

    2017-01-01

    The estimation of local and structural mechanical properties of bones with micro Finite Element (microFE) models based on micro computed tomography images depends on how well bone geometry is captured, reconstructed and modelled. The aim of this study was to validate microFE model predictions of local displacements for vertebral bodies and to evaluate the effect of the elastic tissue modulus on the models’ predictions of axial forces. Four porcine thoracic vertebrae were axially compressed in situ, in a step-wise fashion, and scanned at approximately 39 μm resolution in preloaded and loaded conditions. A global digital volume correlation (DVC) approach was used to compute the full-field displacements. Homogeneous, isotropic and linear elastic microFE models were generated with boundary conditions assigned from the interpolated displacement field measured from the DVC. Measured and predicted local displacements were compared for the cortical and trabecular compartments in the middle of the specimens. Models were run with two different tissue moduli defined from microindentation data (12.0 GPa) and a back-calculation procedure (4.6 GPa). The predicted sum of axial reaction forces was compared to the experimental values for each specimen. MicroFE models predicted more than 87% of the variation in the displacement measurements (R2 = 0.87–0.99). However, model predictions of axial forces were largely overestimated (80–369%) for a tissue modulus of 12.0 GPa, whereas differences in the range 10–80% were found for the back-calculated tissue modulus. The specimen with the lowest density showed a large number of elements strained beyond yield and the highest predictive errors. This study shows that the simplest microFE models can accurately predict the local displacements quantitatively and the strain distribution qualitatively within the vertebral body, independently of the bone type considered. PMID:28700618

  16. Flood Hazard Mapping Assessment for El-Awali River Catchment-Lebanon

    NASA Astrophysics Data System (ADS)

    Hdeib, Rouya; Abdallah, Chadi; Moussa, Roger; Hijazi, Samar

    2016-04-01

    River flooding prediction and flood forecasting have become an essential stage in major flood mitigation plans worldwide. Delineation of the floodplains resulting from a river flooding event requires coupling a hydrological rainfall-runoff model, to calculate the resulting outflows of the catchment, with a hydraulic model, to calculate the corresponding water surface profiles along the river main course. In this study several methods were applied to predict the flood discharge of El-Awali River using the available historical data and gauging records and by conducting several site visits. The HEC-HMS rainfall-runoff model was built and applied to calculate the flood hydrographs at several outlets on El-Awali River; it was calibrated using the storm of January 2013, which caused flooding of the major Lebanese rivers, and with additional site visits to survey river cross sections and record witness accounts from locals. The hydraulic HEC-RAS model was then applied to calculate the corresponding water surface profiles along the El-Awali River main reach. Floodplain delineation and hazard mapping for 10-, 50- and 100-year return periods were performed using the Watershed Modeling System (WMS). The results first show an underestimation of the flood discharge recorded by the operating gauge stations on El-Awali River; the discharge of the 100-year flood may reach up to 506 m3/s, compared with the lower values calculated using traditional discharge estimation methods. Second, any flooding of El-Awali River may be catastrophic, especially in the coastal part of the catchment, and can cause tragic losses of agricultural lands and properties. Last, a major floodplain, which can exceed 200 meters in width, was identified at Marj Bisri village. Overall, performance was good and the rainfall-runoff model can provide valuable information about flows, especially at ungauged points, and can be a great aid for floodplain delineation and flood prediction in poorly gauged basins, but further model updates and calibration are always required to compensate for weaknesses in such models and attain better results.

  17. In silico environmental chemical science: properties and processes from statistical and computational modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tratnyek, Paul G.; Bylaska, Eric J.; Weber, Eric J.

    2017-01-01

    Quantitative structure–activity relationships (QSARs) have long been used in the environmental sciences. More recently, molecular modeling and chemoinformatic methods have become widespread. These methods have the potential to expand and accelerate advances in environmental chemistry because they complement observational and experimental data with “in silico” results and analysis. The opportunities and challenges that arise at the intersection between statistical and theoretical in silico methods are most apparent in the context of properties that determine the environmental fate and effects of chemical contaminants (degradation rate constants, partition coefficients, toxicities, etc.). The main example of this is the calibration of QSARs using descriptor variable data calculated from molecular modeling, which can make QSARs more useful for predicting property data that are unavailable, but also can make them more powerful tools for diagnosis of fate determining pathways and mechanisms. Emerging opportunities for “in silico environmental chemical science” are to move beyond the calculation of specific chemical properties using statistical models and toward more fully in silico models, prediction of transformation pathways and products, incorporation of environmental factors into model predictions, integration of databases and predictive models into more comprehensive and efficient tools for exposure assessment, and extending the applicability of all the above from chemicals to biologicals and materials.

  18. Principal Component-Based Radiative Transfer Model (PCRTM) for Hyperspectral Sensors. Part I; Theoretical Concept

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Smith, William L.; Zhou, Daniel K.; Larar, Allen

    2005-01-01

    Modern infrared satellite sensors such as the Atmospheric Infrared Sounder (AIRS), the Cross-track Infrared Sounder (CrIS), the Tropospheric Emission Spectrometer (TES), the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and the Infrared Atmospheric Sounding Interferometer (IASI) are capable of providing high spatial and spectral resolution infrared spectra. To fully exploit the vast amount of spectral information from these instruments, super fast radiative transfer models are needed. This paper presents a novel radiative transfer model based on principal component analysis. Instead of predicting channel radiance or transmittance spectra directly, the Principal Component-based Radiative Transfer Model (PCRTM) predicts the principal component (PC) scores of these quantities. This prediction ability leads to significant savings in computational time. The parameterization of the PCRTM model is derived from properties of PC scores and instrument line shape functions. The PCRTM is very accurate and flexible. Due to its high speed and compressed spectral information format, it has great potential for super fast one-dimensional physical retrievals and for Numerical Weather Prediction (NWP) large volume radiance data assimilation applications. The model has been successfully developed for the National Polar-orbiting Operational Environmental Satellite System Airborne Sounder Testbed - Interferometer (NAST-I) and AIRS instruments. The PCRTM model performs monochromatic radiative transfer calculations and is able to include multiple scattering calculations to account for clouds and aerosols.
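
    The core PC-score idea can be illustrated with a toy example: represent thousands of correlated spectral channels by a handful of principal-component scores and reconstruct the full spectrum from them. The synthetic spectra, channel count, and number of retained components below are placeholders, not PCRTM's actual parameterization.

```python
# Minimal sketch of the PC-score idea (not the PCRTM code): represent many
# correlated spectral channels by a few principal-component scores, then
# reconstruct the full spectrum from those scores. Spectra here are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_train = 2000, 300
basis = rng.normal(size=(5, n_channels))               # 5 hidden spectral shapes
train = rng.normal(size=(n_train, 5)) @ basis          # synthetic training spectra

mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = Vt[:10]                                          # keep 10 leading PCs

new_spectrum = rng.normal(size=(1, 5)) @ basis         # a new spectrum to encode
scores = (new_spectrum - mean) @ pcs.T                 # 10 numbers instead of 2000
reconstructed = scores @ pcs + mean
print("max reconstruction error:", np.abs(reconstructed - new_spectrum).max())
```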

  19. Risk Factors Analysis and Death Prediction in Some Life-Threatening Ailments Using Chi-Square Case-Based Reasoning (χ2 CBR) Model.

    PubMed

    Adeniyi, D A; Wei, Z; Yang, Y

    2018-01-30

    A wealth of data is available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model that suitably combines the Chi-square distance measure with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death prediction in some life-threatening ailments using a Chi-square case-based reasoning (χ2 CBR) model. The proposed predictive engine is capable of reducing runtime and speeds up the execution process through the use of a critical χ2 distribution value. This work also showcases the development of a novel feature selection method referred to as the frequent item based rule (FIBR) method. This FIBR method is used for selecting the best features for the proposed χ2 CBR model at the preprocessing stage of the predictive procedure. The implementation of the proposed risk calculator is achieved through the use of an in-house developed PHP program, experimented with the XAMPP/Apache HTTP server as the hosting server. The process of data acquisition and case-base development is implemented using the MySQL application. Performance comparison between our system and the NBY, ED-KNN, ANN, SVM, Random Forest and traditional CBR techniques shows that the quality of predictions produced by our system outperformed the baseline methods studied. The results of our experiment show that the precision rate and predictive quality of our system in most cases are equal to or greater than 70%. Our results also show that the proposed system executes faster than the baseline methods studied. Therefore, the proposed risk calculator is capable of providing useful, consistent, fast, accurate and efficient risk level prediction to both patients and physicians at any time, online and on a real-time basis.
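
    A minimal sketch of the chi-square retrieval step is shown below: a chi-square distance between a query case and stored cases selects the nearest neighbour. The feature vectors are made-up non-negative counts, and the distance form used is one common convention, not necessarily the exact formulation of the paper.

```python
# Minimal sketch (assumed form): chi-square distance between a query case and
# stored cases, used to retrieve the nearest case. Feature vectors are
# hypothetical non-negative counts/frequencies, as chi-square distance expects.
import numpy as np

def chi_square_distance(x, y, eps=1e-12):
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

case_base = np.array([[12, 3, 40, 5],        # stored cases (made-up feature counts)
                      [30, 8, 22, 1],
                      [11, 2, 45, 6]], dtype=float)
query = np.array([13, 3, 38, 4], dtype=float)

distances = [chi_square_distance(query, c) for c in case_base]
print("nearest stored case:", int(np.argmin(distances)), "distances:", distances)
```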

  20. A Comparison Between The NORCAT Rover Test Results and the ISRU Excavation System Model Predictions Results

    NASA Technical Reports Server (NTRS)

    Gallo, Christopher A.; Agui, Juan H.; Creager, Colin M.; Oravec, Heather A.

    2012-01-01

    An Excavation System Model has been written to simulate the collection and transportation of regolith on the Moon. The calculations in this model include an estimation of the forces on the digging tool as a result of excavation into the regolith. Verification testing has been performed, and the forces recorded from this testing were compared to the calculated theoretical data. The Northern Centre for Advanced Technology Inc. rovers were tested at the NASA Glenn Research Center Simulated Lunar Operations facility. This testing was in support of the In-Situ Resource Utilization program's Innovative Partnership Program. Testing occurred in soils developed at the Glenn Research Center, which are a mixture of different types of sands and whose soil properties have been well characterized. This testing is part of an ongoing correlation of actual field test data to the blade forces calculated by the Excavation System Model. The results from this series of tests compared reasonably well with the predicted values from the code.

  1. A model of rotationally-sampled wind turbulence for predicting fatigue loads in wind turbines

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1995-01-01

    Empirical equations are presented with which to model rotationally-sampled (R-S) turbulence for input to structural-dynamic computer codes and the calculation of wind turbine fatigue loads. These equations are derived from R-S turbulence data measured at the vertical-plane array in Clayton, New Mexico. For validation, the equations are applied to the calculation of cyclic flapwise blade loads for the NASA/DOE Mod-2 2.5-MW experimental HAWTs (horizontal-axis wind turbines), and the results are compared to measured cyclic loads. Good correlation is achieved, indicating that the R-S turbulence model developed in this study contains the characteristics of the wind which produce many of the fatigue loads sustained by wind turbines. Empirical factors are included which permit the prediction of load levels at specified percentiles of occurrence, which is required for the generation of fatigue load spectra and the prediction of the fatigue lifetime of structures.

  2. Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.

    NASA Astrophysics Data System (ADS)

    Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin

    1998-11-01

    Numerous numerical models have been developed to predict long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is then used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast. The instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model predicts the measured puff evolution, in terms of shape and time of arrival, fairly well up to 60 h after the start of the release. The modeled puff is still too narrow in the advection direction.

  3. A Comparison Between Modeled and Measured Clear-Sky Radiative Shortwave Fluxes in Arctic Environments, with Special Emphasis on Diffuse Radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnard, James C.; Flynn, Donna M.

    2002-10-08

    The ability of the SBDART radiative transfer model to predict clear-sky diffuse and direct normal broadband shortwave irradiances is investigated. Model calculations of these quantities are compared with data from the Atmospheric Radiation Measurement (ARM) program’s Southern Great Plains (SGP) and North Slope of Alaska (NSA) sites. The model tends to consistently underestimate the direct normal irradiances at both sites by about 1%. With regard to clear-sky diffuse irradiance, the model overestimates this quantity at the SGP site in a manner similar to what has been observed in other studies (Halthore and Schwartz, 2000). The difference between the diffuse SBDART calculations and Halthore and Schwartz’s MODTRAN calculations is very small, demonstrating that SBDART performs similarly to MODTRAN. SBDART is then applied to the NSA site, and here it is found that the discrepancy between the model calculations and corrected diffuse measurements (corrected for daytime offsets, Dutton et al., 2001) is 0.4 W/m2 when averaged over the 12 cases considered here. Two cases of diffuse measurements from a shaded “black and white” pyranometer are also compared with the calculations, and the discrepancy is again minimal. Thus, it appears that the “diffuse discrepancy” that exists at the SGP site does not exist at the NSA site. We cannot yet explain why the model predicts diffuse radiation well at one site but not at the other.

  4. Evaluation of Thermodynamic Models for Predicting Phase Equilibria of CO2 + Impurity Binary Mixture

    NASA Astrophysics Data System (ADS)

    Shin, Byeong Soo; Rho, Won Gu; You, Seong-Sik; Kang, Jeong Won; Lee, Chul Soo

    2018-03-01

    For the design and operation of CO2 capture and storage (CCS) processes, equation of state (EoS) models are used for phase equilibrium calculations. The reliability of an EoS model plays a crucial role, and many variations of EoS models have been reported and continue to be published. The prediction of phase equilibria for CO2 mixtures containing SO2, N2, NO, H2, O2, CH4, H2S, Ar, and H2O is important for CO2 transportation because the captured gas normally contains small amounts of impurities even though it is purified in advance. For the design of pipelines in deep sea or arctic conditions, flow assurance and safety are considered priority issues, and highly reliable calculations are required. In this work, the predictive Soave-Redlich-Kwong, cubic plus association, Groupe Européen de Recherches Gazières (GERG-2008), perturbed-chain statistical associating fluid theory, and non-random lattice fluids hydrogen bond EoS models were compared against collected literature data with regard to their performance in calculating phase equilibria of CO2-impurity binary mixtures. No single EoS could cover the entire range of systems considered in this study. Weaknesses and strong points of each EoS model were analyzed, and recommendations are given as guidelines for safe design and operation of CCS processes.
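
    As a reminder of what such EoS calculations involve, the sketch below evaluates the Soave-Redlich-Kwong equation of state for pure CO2 at a given temperature and molar volume, using standard literature critical constants. It is an illustration only, not a reimplementation of any of the models compared in the study.

```python
# Minimal sketch (illustration only, not the cited model implementations):
# the Soave-Redlich-Kwong EoS evaluated for pure CO2, P(T, Vm).
# Critical constants and the acentric factor are standard literature values.
import math

R = 8.314462618          # J/(mol K)
Tc, Pc, omega = 304.13, 7.377e6, 0.225   # CO2

def srk_pressure(T, Vm):
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc)))**2
    a = 0.42748 * R**2 * Tc**2 / Pc * alpha
    b = 0.08664 * R * Tc / Pc
    return R * T / (Vm - b) - a / (Vm * (Vm + b))

# gas-like molar volume at 280 K
print(f"P = {srk_pressure(280.0, 5.0e-4) / 1e5:.1f} bar")
```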

  5. Use of Navier-Stokes methods for the calculation of high-speed nozzle flow fields

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Yoder, Dennis A.

    1994-01-01

    Flows through three reference nozzles have been calculated to determine the capabilities and limitations of the widely used Navier-Stokes solver, PARC. The nozzles examined have dominant flow characteristics similar to those of nozzles considered for supersonic transport programs. Flows from an inverted velocity profile (IVP) nozzle, an underexpanded nozzle, and an ejector nozzle were examined. PARC calculations were obtained with its standard algebraic turbulence model (Thomas) and the two-equation turbulence model (Chien k-epsilon). The Thomas model was run with the default coefficient of mixing set at both 0.09 and a larger value of 0.13 to improve the mixing prediction. Calculations using the default value substantially underpredicted the mixing for all three flows. The calculations obtained with the higher mixing coefficient better predicted mixing in the IVP and underexpanded nozzle flows but adversely affected PARC's convergence characteristics for the IVP nozzle case. The ejector nozzle case did not converge with the Thomas model and the higher mixing coefficient. The Chien k-epsilon results were in better overall agreement with the experimental data than were those of the Thomas model run with the default mixing coefficient, but the default boundary conditions for k and epsilon underestimated the levels of mixing near the nozzle exits.

  6. Comparison of binding energies of SrcSH2-phosphotyrosyl peptides with structure-based prediction using surface area based empirical parameterization.

    PubMed Central

    Henriques, D. A.; Ladbury, J. E.; Jackson, R. M.

    2000-01-01

    The prediction of binding energies from the three-dimensional (3D) structure of a protein-ligand complex is an important goal of biophysics and structural biology. Here, we critically assess the use of empirical, solvent-accessible surface area-based calculations for the prediction of the binding of the Src SH2 domain to a series of tyrosyl phosphopeptides based on the high-affinity ligand from the hamster middle T antigen (hmT), where the residue in the pY+3 position has been changed. Two other peptides, based on the C-terminal regulatory site of the Src protein and the platelet-derived growth factor receptor (PDGFR), are also investigated. Here, we take into account the effects of proton linkage on binding and test five different surface area-based models that include different treatments of the contributions from conformational change and protein solvation. These differences relate to the treatment of conformational flexibility in the peptide ligand and the inclusion of proximal ordered solvent molecules in the surface area calculations. This allowed the calculation of a range of thermodynamic state functions (ΔCp, ΔS, ΔH, and ΔG) directly from structure. Comparison with the experimentally derived data shows little agreement for the interaction of the Src SH2 domain with the range of tyrosyl phosphopeptides. Furthermore, the adoption of different models to treat conformational change and solvation has a dramatic effect on the calculated thermodynamic functions, making the predicted binding energies highly model dependent. While empirical, solvent-accessible surface area-based calculations are becoming widely adopted to interpret thermodynamic data, this study highlights potential problems with the application and interpretation of this type of approach. There is undoubtedly some agreement between predicted and experimentally determined thermodynamic parameters; however, the tolerance of this approach is not sufficient to make it ubiquitously applicable. PMID:11106171

  7. Electric Power Engineering Cost Predicting Model Based on the PCA-GA-BP

    NASA Astrophysics Data System (ADS)

    Wen, Lei; Yu, Jiake; Zhao, Xin

    2017-10-01

    In this paper a hybrid prediction algorithm, the PCA-GA-BP model, is proposed. The PCA algorithm is used to reduce the correlation between indicators in the original data and to decrease the difficulty of the BP neural network's high-dimensional calculation. The BP neural network is established to estimate the cost of power transmission projects. The results show that the PCA-GA-BP algorithm can improve the prediction of electric power engineering cost.
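
    A minimal sketch of the PCA-plus-backpropagation part of such a pipeline is given below (the GA hyper-parameter search is omitted): PCA decorrelates and compresses the cost indicators, and a small MLP regressor is trained on the component scores. The indicator data and cost values are random placeholders, not engineering cost data.

```python
# Minimal sketch (assumed pipeline; the GA stage is omitted): standardize the
# indicators, apply PCA to decorrelate/compress them, then train a small
# backpropagation (MLP) regressor on the principal-component scores.
# All data below are random placeholders for power-project cost indicators.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
latent = rng.normal(size=(200, 4))                       # a few underlying drivers
X = latent @ rng.normal(size=(4, 12)) + 0.05 * rng.normal(size=(200, 12))
y = 2.0 * latent[:, 0] + latent[:, 1] + 0.1 * rng.normal(size=200)  # synthetic cost

model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                   random_state=0))
model.fit(X[:150], y[:150])
print("held-out R^2:", round(model.score(X[150:], y[150:]), 3))
```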

  8. Modeling and Ab initio Calculations of Thermal Transport in Si-Based Clathrates and Solar Perovskites

    NASA Astrophysics Data System (ADS)

    He, Yuping

    2015-03-01

    We present calculations of the thermal transport coefficients of Si-based clathrates and solar perovskites, as obtained from ab initio calculations and models, with all input parameters derived from first principles. We elucidated the physical mechanisms responsible for the measured low thermal conductivity in Si-based clathrates and predicted their electronic properties and mobilities, which were later confirmed experimentally. We also predicted that by appropriately tuning the carrier concentration, the thermoelectric figure of merit of Sn- and Pb-based perovskites may reach values ranging between 1 and 2, which could possibly be further increased by optimizing the lattice thermal conductivity through engineering perovskite superlattices. Work done in collaboration with Prof. G. Galli, and supported by DOE/BES Grant No. DE-FG0206ER46262.

  9. Fast integration-based prediction bands for ordinary differential equation models.

    PubMed

    Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel

    2016-04-15

    To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties in the model's parameters and, in turn, to uncertainties in predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components and lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches for propagating parameter uncertainties to predictions are hardly feasible and insufficient. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands to assess the model's uncertainty on the whole time-course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis is performed to illustrate the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and is freely available at http://www.data2dynamics.org. Contact: helge.hass@fdm.uni-freiburg.de. Supplementary data are available at Bioinformatics online.

  10. Physical re-examination of parameters on a molecular collisions-based diffusion model for diffusivity prediction in polymers.

    PubMed

    Ohashi, Hidenori; Tamaki, Takanori; Yamaguchi, Takeo

    2011-12-29

    Molecular collisions, which are the microscopic origin of molecular diffusive motion, are affected by both the molecular surface area and the distance between molecules. Their product can be regarded as the free space around a penetrant molecule, defined as the "shell-like free volume", and can be taken as a characteristic of molecular collisions. On the basis of this notion, a new diffusion theory has been developed. The model can predict molecular diffusivity in polymeric systems using only well-defined single-component parameters: molecular volume, molecular surface area, free volume, and pre-exponential factors. The physical description of the model implies that the actual body moved, and the neighbor molecules collided with, correspond to the volume and the surface area of the penetrant molecular core. In the present study, a semiempirical quantum chemical calculation was used to calculate both of these parameters. The model and the newly developed parameters offer fairly good predictive ability.

  11. Numerical prediction of transitional features of turbulent forced gas flows in circular tubes with strong heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezato, K.; Shehata, A.M.; Kunugi, T.

    1999-08-01

    In order to treat strongly heated, forced gas flows at low Reynolds numbers in vertical circular tubes, the κ-ε turbulence model of Abe, Kondoh, and Nagano (1994), developed for forced turbulent flow between parallel plates with the constant property idealization, has been successfully applied. For thermal energy transport, the turbulent Prandtl number model of Kays and Crawford (1993) was adopted. The capability to handle these flows was assessed via calculations at the conditions of experiments by Shehata (1984), ranging from essentially turbulent to laminarizing due to the heating. Predictions forecast the development of turbulent transport quantities, Reynolds stress, and turbulent heat flux, as well as turbulent viscosity and turbulent kinetic energy. Overall agreement between the calculations and the measured velocity and temperature distributions is good, establishing confidence in the values of the forecast turbulence quantities, and in the model which produced them. Most importantly, the model yields predictions which compare well with the measured wall heat transfer parameters and the pressure drop.

  12. Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code

    NASA Astrophysics Data System (ADS)

    Sabotinov, Luben; Chevrier, Patrick

    The best estimate French thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for post-test analysis of the experiment “11% upper plenum break”, conducted at the large-scale test facility PSB-VVER in Russia. The PSB rig is a 1:300 scale model of a VVER-1000 NPP. A computer model has been developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: reactor model (lower plenum, core, bypass, upper plenum, downcomer), 4 separated loops, pressurizer, horizontal multitube steam generators, and break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations have been performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters such as pressures, temperatures, void fractions, loop seal clearance, etc. The experimental and calculated results are very sensitive with respect to the fuel cladding temperature, which shows a periodic behaviour. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up have been reasonably predicted.

  13. Analysing the accuracy of machine learning techniques to develop an integrated influent time series model: case study of a sewage treatment plant, Malaysia.

    PubMed

    Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed

    2018-04-01

    The function of a sewage treatment plant is to treat the sewage to acceptable standards before it is discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R²) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. The RE results also show that the SVM model's frequency of errors above 10% or below -10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow and NAR fit well for low and average inflow ranges. The integrated model developed includes the NAR model for low and average influent and the SVM model for peak inflow.
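
    The evaluation and combination steps above can be summarized in a short sketch. The RMSE, R², and RE formulas are standard; the rule for building the integrated forecast (switch to the SVM prediction above an assumed peak-flow threshold) is an illustrative reading of the abstract, and the PFC/LFC criteria are omitted because their definitions are not given here.

    ```python
    import numpy as np

    def rmse(obs, pred):
        return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

    def r_squared(obs, pred):
        obs, pred = np.asarray(obs), np.asarray(pred)
        ss_res = np.sum((obs - pred) ** 2)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        return float(1.0 - ss_res / ss_tot)

    def relative_error(obs, pred):
        # Fraction of over/under-prediction relative to the observed inflow.
        return (np.asarray(pred) - np.asarray(obs)) / np.asarray(obs)

    def integrated_forecast(flow_nar, flow_svm, peak_threshold):
        """Assumed integration rule: NAR for low/average inflow, SVM wherever the
        SVM prediction exceeds a peak-flow threshold."""
        flow_nar, flow_svm = np.asarray(flow_nar), np.asarray(flow_svm)
        return np.where(flow_svm > peak_threshold, flow_svm, flow_nar)
    ```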

  14. Blind predictions of protein interfaces by docking calculations in CAPRI.

    PubMed

    Lensink, Marc F; Wodak, Shoshana J

    2010-11-15

    Reliable prediction of the amino acid residues involved in protein-protein interfaces can provide valuable insight into protein function and can inform mutagenesis studies and drug design applications. A fast-growing number of methods are being proposed for predicting protein interfaces, using structural information, energetic criteria, or sequence conservation, or by integrating multiple criteria and approaches. Overall, however, their performance remains limited, especially when applied to nonobligate protein complexes, where the individual components are also stable on their own. Here, we evaluate interface predictions derived from protein-protein docking calculations. To this end we measure the overlap between the interfaces in models of protein complexes submitted by 76 participants in CAPRI (Critical Assessment of Predicted Interactions) and those of 46 observed interfaces in 20 CAPRI targets corresponding to nonobligate complexes. Our evaluation considers multiple models for each target interface, submitted by different participants using a variety of docking methods. Although this results in substantial variability in prediction performance across participants and targets, clear trends emerge. Docking methods that perform best in our evaluation predict interfaces with average recall and precision levels of about 60%, for a small majority (60%) of the analyzed interfaces. These levels are significantly higher than those obtained for nonobligate complexes by most extant interface prediction methods. We find furthermore that a sizable fraction (24%) of the interfaces in models ranked as incorrect in the CAPRI assessment are actually correctly predicted (recall and precision ≥50%), and that these models contribute 70% of the correct docking-based interface predictions overall. Our analysis shows that docking methods are much more successful in identifying interfaces than in predicting complexes, and suggests that these methods have excellent potential for addressing the interface prediction challenge. © 2010 Wiley-Liss, Inc.
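
    The overlap measure used in this evaluation reduces to recall and precision computed over sets of interface residues. A minimal sketch, assuming interfaces are given as sets of residue identifiers and using the ≥50% recall-and-precision threshold quoted above as the criterion for a correct interface prediction:

    ```python
    def interface_overlap(predicted_residues, observed_residues):
        """Recall and precision of a predicted interface (set of residue IDs)
        against the observed interface."""
        predicted, observed = set(predicted_residues), set(observed_residues)
        true_pos = len(predicted & observed)
        recall = true_pos / len(observed) if observed else 0.0
        precision = true_pos / len(predicted) if predicted else 0.0
        return recall, precision

    # Hypothetical residue sets; a prediction is counted as correct here when
    # both recall and precision reach 0.5.
    recall, precision = interface_overlap({"A45", "A46", "B12"}, {"A45", "B12", "B13"})
    correct = recall >= 0.5 and precision >= 0.5
    print(recall, precision, correct)
    ```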

  15. Maximal Predictability Approach for Identifying the Right Descriptors for Electrocatalytic Reactions.

    PubMed

    Krishnamurthy, Dilip; Sumaria, Vaidish; Viswanathan, Venkatasubramanian

    2018-02-01

    Density functional theory (DFT) calculations are being routinely used to identify new material candidates that approach activity near fundamental limits imposed by thermodynamics or scaling relations. DFT calculations are associated with inherent uncertainty, which limits the ability to delineate materials (distinguishability) that possess high activity. Development of error-estimation capabilities in DFT has enabled uncertainty propagation through activity-prediction models. In this work, we demonstrate an approach to propagating uncertainty through thermodynamic activity models, leading to a probability distribution of the computed activity and thereby its expectation value. A new metric, prediction efficiency, is defined, which provides a quantitative measure of the ability to distinguish the activity of materials and can be used to identify the optimal descriptor(s) ΔG_opt. We demonstrate the framework for four important electrochemical reactions: hydrogen evolution, chlorine evolution, oxygen reduction, and oxygen evolution. Future studies could utilize expected activity and prediction efficiency to significantly improve the prediction accuracy of highly active material candidates.
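
    The uncertainty-propagation step can be illustrated with a Monte Carlo sketch. The volcano-shaped activity model, its slope, and the Gaussian error width used below are toy assumptions for illustration; they are not the activity models or error estimates of the paper.

    ```python
    import numpy as np

    def expected_activity(dg_mean, dg_sigma, dg_opt=0.0, slope=1.0, n_draws=20000):
        """Monte Carlo propagation of a Gaussian DFT error on the descriptor dG
        through a toy volcano-type activity model A(dG) = -slope*|dG - dG_opt|.
        Returns the expectation value of the resulting activity distribution."""
        rng = np.random.default_rng(1)
        dg = rng.normal(dg_mean, dg_sigma, size=n_draws)
        activity = -slope * np.abs(dg - dg_opt)
        return float(activity.mean())

    # Two candidate materials with overlapping uncertainty: the expected activity,
    # rather than the point estimate alone, is used to rank them.
    print(expected_activity(0.10, 0.2), expected_activity(0.05, 0.2))
    ```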

  16. A new approach to complete aircraft landing gear noise prediction

    NASA Astrophysics Data System (ADS)

    Lopes, Leonard V.

    This thesis describes a new landing gear noise prediction system developed at The Pennsylvania State University, called Landing Gear Model and Acoustic Prediction code (LGMAP). LGMAP is used to predict the noise of an isolated or installed landing gear geometry. The predictions include several techniques to approximate the aeroacoustic and aerodynamic interactions of landing gear noise generation. These include (1) a method for approximating the shielding of noise caused by the landing gear geometry, (2) accounting for local flow variations due to the wing geometry, (3) the interaction of the landing gear wake with high-lift devices, and (4) a method for estimating the effect of gross landing gear design changes on local flow and acoustic radiation. The LGMAP aeroacoustic prediction system has been created to predict the noise generated by a given landing gear. The landing gear is modeled as a set of simple components that represent individual parts of the structure. Each component, ranging from large to small, is represented by a simple geometric shape and the unsteady flow on the component is modeled based on an individual characteristic length, local flow velocity, and the turbulent flow environment. A small set of universal models is developed and applied to a large range of similar components. These universal models, combined with the actual component geometry and local environment, give a unique loading spectrum and acoustic field for each component. Then, the sum of all the individual components in the complete configuration is used to model the high level of geometric complexity typical of current aircraft undercarriage designs. A line of sight shielding algorithm based on scattering by a two-dimensional cylinder approximates the effect of acoustic shielding caused by the landing gear. Using the scattering from a cylinder in two-dimensions at an observer position directly behind the cylinder, LGMAP is able to estimate the reduction in noise due to shielding by the landing gear geometry. This thesis compares predictions with data from a recent wind tunnel experiment conducted at NASA Langley Research Center, and demonstrates that including the acoustic scattering can improve the predictions by LGMAP at all observer positions. In this way, LGMAP provides more information about the actual noise propagation than simple empirical schemes. Two-dimensional FLUENT calculations of approximate wing cross-sections are used by LGMAP to compute the change in noise due to the change in local flow velocity in the vicinity of the landing gear due to circulation around the wing. By varying angle of attack and flap deflection angle in the CFD calculations, LGMAP is able to predict the noise level change due to the change in local flow velocity in the landing gear vicinity. A brief trade study is performed on the angle of attack of the wing and flap deflection angle of the flap system. It is shown that increasing the angle of attack or flap deflection angle reduces the flow velocity in the vicinity of the landing gear, and therefore the predicted noise. Predictions demonstrate the ability of the prediction system to quickly estimate the change in landing gear noise caused by a change in wing configuration. A three-dimensional immersed boundary CFD calculation of simplified landing gear geometries provides relatively quick estimates of the mean flow around the landing gear. 
The mean flow calculation provides the landing gear wake geometry for the prediction of trailing edge noise associated with the interaction of the landing gear wake with the high-lift devices. Using wind tunnel experiments that relate turbulent intensity to wake size, and the Ffowcs Williams and Hall trailing edge noise equation for the acoustic calculation, LGMAP is able to predict the trailing edge noise generated by the landing gear wake. In this manner, LGMAP includes the effect of the interaction of the landing gear's wake with the wing/flap system on the radiated noise. The final prediction technique implemented includes local flow calculations of a landing gear with various truck angles using the immersed boundary scheme. Using the mean flow calculation, LGMAP is able to predict noise changes caused by gross changes in landing gear design. Calculations of the mean flow around the landing gear show that the rear wheels of a six-wheel bogie experience significantly reduced mean flow velocity when the truck is placed in a toe-down configuration. This reduction in the mean flow results in a lower noise signature from the rear wheels. Since the noise from a six-wheel bogie at flyover observer positions is primarily composed of wheel noise, the reduced local flow velocity results in a reduced noise signature from the entire landing gear geometry. Comparisons with measurements show the accuracy of the predictions of landing gear noise levels and directivity. Airframe noise predictions for the landing gear of a complete aircraft are described, including all of the above-mentioned developments and prediction techniques. These show that the nose gear noise and the landing gear wake/flap interaction noise, while not significantly changing the overall shape of the radiated noise, do contribute to the overall noise from the installed landing gear.

  17. Comparison of different risk stratification systems in predicting short-term serious outcome of syncope patients

    PubMed Central

    Safari, Saeed; Baratloo, Alireza; Hashemi, Behrooz; Rahmati, Farhad; Forouzanfar, Mohammad Mehdi; Motamedi, Maryam; Mirmohseni, Ladan

    2016-01-01

    Background: Determining etiologic causes and prognosis can significantly improve the management of syncope patients. The present study aimed to compare the values of the San Francisco, Osservatorio Epidemiologico sulla Sincope nel Lazio (OESIL), Boston, and Risk Stratification of Syncope in the Emergency Department (ROSE) clinical decision rules in predicting the short-term serious outcome of syncope patients. Materials and Methods: The present diagnostic accuracy study with 1-week follow-up was designed to evaluate the predictive values of the four mentioned clinical decision rules. Screening performance characteristics of each model in predicting mortality, myocardial infarction (MI), and cerebrovascular accidents (CVAs) were calculated and compared. To evaluate the value of each aforementioned model in predicting the outcome, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio were calculated and receiver operating characteristic (ROC) curve analysis was done. Results: A total of 187 patients (mean age: 64.2 ± 17.2 years) were enrolled in the study. Mortality, MI, and CVA were seen in 19 (10.2%), 12 (6.4%), and 36 (19.2%) patients, respectively. The area under the ROC curve for the OESIL, San Francisco, Boston, and ROSE models in predicting the risk of 1-week mortality, MI, and CVA was in the 30–70% range, with no significant difference among models (P > 0.05). The pooled model did not show higher accuracy in the prediction of mortality, MI, and CVA compared to the others (P > 0.05). Conclusion: This study revealed the weakness of all four evaluated models in predicting the short-term serious outcome of syncope patients referred to the emergency department, without any significant advantage of one over the others. PMID:27904602
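
    The screening performance characteristics reported above can be computed directly from a 2x2 contingency table of rule-positive/rule-negative patients against outcome. A minimal sketch with hypothetical counts (the study's actual per-rule tables are not reproduced in the abstract):

    ```python
    def screening_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, and likelihood ratios from a 2x2 table of a
        risk score (positive = high risk) against a serious 1-week outcome."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
        lr_neg = (1 - sens) / spec if spec > 0 else float("inf")
        return {"sensitivity": sens, "specificity": spec,
                "LR+": lr_pos, "LR-": lr_neg}

    # Hypothetical counts for one decision rule (illustration only).
    print(screening_metrics(tp=14, fp=60, fn=5, tn=108))
    ```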

  18. Prediction of solvation enthalpy of gaseous organic compounds in propanol

    NASA Astrophysics Data System (ADS)

    Golmohammadi, Hassan; Dashtbozorgi, Zahra

    2016-09-01

    The purpose of this paper is to present a novel way of developing quantitative structure-property relationship (QSPR) models to predict the gas-to-propanol solvation enthalpy (ΔHsolv) of 95 organic compounds. Different kinds of descriptors were calculated for each compound using the Dragon software package. The replacement method (RM) variable selection technique was employed to select the optimal subset of solute descriptors. Our investigation reveals that the relationship between the solvation enthalpy and the physicochemical properties of the solution is nonlinear and that the RM method is unable to model the solvation enthalpy accurately; a nonlinear support vector machine (SVM) model was therefore developed. The results established that the ΔHsolv values calculated by SVM were in good agreement with the experimental ones, and the performance of the SVM models was superior to that obtained with the RM model.
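
    A generic QSPR workflow of this kind (descriptor matrix in, scaled features, support vector regression out) can be sketched as follows. The descriptor values, hyperparameters, and synthetic target data below are placeholders, not the paper's Dragon descriptors or results:

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # X: descriptor matrix (n_compounds x n_descriptors), y: solvation enthalpies.
    # Synthetic stand-ins with the dataset size quoted above (95 compounds).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(95, 6))
    y = X @ rng.normal(size=6) + 0.3 * np.sin(X[:, 0]) + rng.normal(scale=0.1, size=95)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X_tr, y_tr)
    print("test R^2:", model.score(X_te, y_te))
    ```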

  19. Development of an algebraic stress/two-layer model for calculating thrust chamber flow fields

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Shang, H. M.; Huang, J.

    1993-01-01

    Following the consensus of a workshop on Turbulence Modeling for Liquid Rocket Thrust Chambers, the current effort was undertaken to study the effects of second-order closure on the predictions of thermochemical flow fields. To reduce the instability and computational intensity of the full second-order Reynolds Stress Model, an Algebraic Stress Model (ASM) coupled with a two-layer near-wall treatment was developed. Various test problems, including the compressible boundary layer with adiabatic and cooled walls, recirculating flows, swirling flows, and the entire SSME nozzle flow, were studied to assess the performance of the current model. Detailed calculations for the SSME exit wall flow around the nozzle manifold were executed. For the overall flow predictions, the ASM removes a further modeling assumption by accounting for non-isotropic turbulence effects, allowing more appropriate comparison with experimental data.

  20. Geometric modeling of Plateau borders using the orthographic projection method for closed cell rigid polyurethane foam thermal conductivity prediction

    NASA Astrophysics Data System (ADS)

    Xu, Jie; Wu, Tao; Peng, Chuang; Adegbite, Stephen

    2017-09-01

    The geometric Plateau border model for closed-cell polyurethane foam was developed based on volume integrations of an approximated 3D four-cusp hypocycloid structure. The tetrahedral structure of convex struts was orthogonally projected onto a 2D three-cusp deltoid with three central cylinders. The idealized single unit strut was modeled by superposition. The volume of each component was calculated by geometric analysis. The strut solid fraction fs and foam porosity coefficient δ were calculated based on representative elementary volumes of the Kelvin and Weaire-Phelan structures. The specific surface areas Sv derived from the packing structures and from the deltoid approximation model were compared against the strut dimensional ratio ε. The characteristic foam parameters obtained from this semi-empirical model were further employed to predict the foam thermal conductivity.

  1. Elaboration of the α-model derived from the BCS theory of superconductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, David C.

    2013-10-14

    The single-band α-model of superconductivity (Padamsee et al 1973 J. Low Temp. Phys. 12 387) is a popular model that was adapted from the single-band Bardeen–Cooper–Schrieffer (BCS) theory of superconductivity mainly to allow fits to electronic heat capacity versus temperature T data that deviate from the BCS prediction. The model assumes that the normalized superconducting order parameter Δ(T)/Δ(0) and therefore the normalized London penetration depth λL(T)/λL(0) are the same as in BCS theory, calculated using the BCS value αBCS ≈ 1.764 of α ≡ Δ(0)/kBTc, where kB is Boltzmann's constant and Tc is the superconducting transition temperature. On the other hand, to calculate the electronic free energy, entropy, heat capacity and thermodynamic critical field versus T, the α-model takes α to be an adjustable parameter. Here we write the BCS equations and limiting behaviors for the superconducting state thermodynamic properties explicitly in terms of α, as needed for calculations within the α-model, and present plots of the results versus T and α that are compared with the respective BCS predictions. Mechanisms such as gap anisotropy and strong coupling that can cause deviations of the thermodynamics from the BCS predictions, especially the heat capacity jump at Tc, are considered. Extensions of the α-model that have appeared in the literature, such as the two-band model, are also discussed. Tables of values of Δ(T)/Δ(0), the normalized London parameter Λ(T)/Λ(0) and λL(T)/λL(0) calculated from the BCS theory using α = αBCS are provided, which are the same in the α-model by assumption. Tables of values of the entropy, heat capacity and thermodynamic critical field versus T for seven values of α, including αBCS, are also presented.
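
    For reference, the defining relation of the α-model and the familiar BCS weak-coupling value of the heat-capacity jump can be written compactly; these are standard results restated here, not additional content from the paper:

    ```latex
    % alpha-model defining ratio and its BCS value
    \alpha \equiv \frac{\Delta(0)}{k_B T_c}, \qquad \alpha_{\mathrm{BCS}} \approx 1.764 .
    % The normalized gap Delta(T)/Delta(0) and London penetration depth are taken
    % from BCS theory (evaluated at alpha = alpha_BCS), while alpha is adjustable
    % in the thermodynamic quantities. The BCS weak-coupling heat-capacity jump is
    \left. \frac{\Delta C}{\gamma T_c} \right|_{\mathrm{BCS}} \approx 1.43 ,
    % and fitted values alpha > alpha_BCS correspond to an enhanced jump.
    ```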

  2. Development and External Validation of the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer: Comparison with Two Western Risk Calculators in an Asian Cohort

    PubMed Central

    Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang

    2017-01-01

    Purpose: We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Materials and Methods: Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. Results: PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. Conclusions: KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed similar performance with ERSPCRC-HG in a Korean population. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings. PMID:28046017

  3. Development and External Validation of the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer: Comparison with Two Western Risk Calculators in an Asian Cohort.

    PubMed

    Park, Jae Young; Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang; Byun, Seok-Soo

    2017-01-01

    We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed similar performance with ERSPCRC-HG in a Korean population. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings.
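
    A logistic-regression risk calculator of this type, with its discrimination summarized by the AUC and a biopsy threshold applied to the predicted risks, can be sketched as follows. The synthetic predictors, coefficients, and cohort below are placeholders standing in for the development data, not the fitted KPCRC-HG model:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Hypothetical predictor matrix: PSA, DRE finding, TRUS finding, prostate volume.
    rng = np.random.default_rng(0)
    n = 602
    X = np.column_stack([
        rng.lognormal(1.5, 0.6, n),   # PSA (ng/mL)
        rng.integers(0, 2, n),        # abnormal DRE (0/1)
        rng.integers(0, 2, n),        # abnormal TRUS (0/1)
        rng.normal(40, 12, n),        # prostate volume (mL)
    ])
    logit = 0.04 * X[:, 0] + 0.9 * X[:, 1] + 0.6 * X[:, 2] - 0.03 * X[:, 3] - 0.5
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # high-grade PC (synthetic)

    risk_model = LogisticRegression(max_iter=1000).fit(X, y)
    risk = risk_model.predict_proba(X)[:, 1]
    print("apparent AUC:", roc_auc_score(y, risk))
    # A threshold analysis counts men below a 5% predicted risk who could be spared biopsy.
    print("below 5% cut-off:", int((risk < 0.05).sum()))
    ```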

  4. Deep Flare Net (DeFN) Model for Solar Flare Prediction

    NASA Astrophysics Data System (ADS)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Ishii, M.

    2018-05-01

    We developed a solar flare prediction model using a deep neural network (DNN) named Deep Flare Net (DeFN). This model can calculate the probability of flares occurring in the following 24 hr in each active region, which is used to determine the most likely maximum classes of flares via a binary classification (e.g., ≥M class versus <M class).

  5. Development of QSAR models using artificial neural network analysis for risk assessment of repeated-dose, reproductive, and developmental toxicities of cosmetic ingredients.

    PubMed

    Hisaki, Tomoka; Aiba Née Kaneko, Maki; Yamaguchi, Masahiko; Sasa, Hitoshi; Kouzuki, Hirokazu

    2015-04-01

    Use of laboratory animals for systemic toxicity testing is subject to strong ethical and regulatory constraints, but few alternatives are yet available. One possible approach to predict systemic toxicity of chemicals in the absence of experimental data is quantitative structure-activity relationship (QSAR) analysis. Here, we present QSAR models for prediction of maximum "no observed effect level" (NOEL) for repeated-dose, developmental and reproductive toxicities. NOEL values of 421 chemicals for repeated-dose toxicity, 315 for reproductive toxicity, and 156 for developmental toxicity were collected from Japan Existing Chemical Data Base (JECDB). Descriptors to predict toxicity were selected based on molecular orbital (MO) calculations, and QSAR models employing multiple independent descriptors as the input layer of an artificial neural network (ANN) were constructed to predict NOEL values. Robustness of the models was indicated by the root-mean-square (RMS) errors after 10-fold cross-validation (0.529 for repeated-dose, 0.508 for reproductive, and 0.558 for developmental toxicity). Evaluation of the models in terms of the percentages of predicted NOELs falling within factors of 2, 5 and 10 of the in-vivo-determined NOELs suggested that the model is applicable to both general chemicals and the subset of chemicals listed in International Nomenclature of Cosmetic Ingredients (INCI). Our results indicate that ANN models using in silico parameters have useful predictive performance, and should contribute to integrated risk assessment of systemic toxicity using a weight-of-evidence approach. Availability of predicted NOELs will allow calculation of the margin of safety, as recommended by the Scientific Committee on Consumer Safety (SCCS).
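
    The modeling loop described above (descriptors in, ANN regression, 10-fold cross-validated RMS error out) can be sketched generically. The descriptor matrix, network size, and NOEL values below are synthetic placeholders, not the JECDB data or the published models:

    ```python
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: molecular-orbital-derived descriptors, y: log-scaled NOEL values.
    # Synthetic stand-ins sized like the repeated-dose dataset (421 chemicals).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(421, 8))
    y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=421)

    residuals = []
    for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                         random_state=0))
        ann.fit(X[train_idx], y[train_idx])
        residuals.append(y[test_idx] - ann.predict(X[test_idx]))

    rms_error = float(np.sqrt(np.mean(np.concatenate(residuals) ** 2)))
    print("10-fold cross-validated RMS error:", round(rms_error, 3))
    ```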

  6. Predicting net joint moments during a weightlifting exercise with a neural network model.

    PubMed

    Kipp, Kristof; Giordanelli, Matthew; Geiser, Christopher

    2018-06-06

    The purpose of this study was to develop and train a Neural Network (NN) that uses barbell mass and motions to predict hip, knee, and ankle Net Joint Moments (NJM) during a weightlifting exercise. Seven weightlifters performed two cleans at 85% of their competition maximum while ground reaction forces and 3-D motion data were recorded. An inverse dynamics procedure was used to calculate hip, knee, and ankle NJM. Vertical and horizontal barbell motion data were extracted and, along with barbell mass, used as inputs to a NN. The NN was then trained to model the association between the mass and kinematics of the barbell and the calculated NJM for six weightlifters; the data from the remaining weightlifter were then used to test the performance of the NN. This was repeated seven times with a k-fold cross-validation procedure to assess the NN accuracy. Joint-specific predictions of NJM produced coefficients of determination (r²) that ranged from 0.79 to 0.95, and the percent difference between NN-predicted and inverse-dynamics-calculated peak NJM ranged between 5% and 16%. The NN was thus able to predict the spatiotemporal patterns and discrete peaks of the three NJM with reasonable accuracy, which suggests that it is feasible to predict lower extremity NJM from the mass and kinematics of the barbell. Future work is needed to determine whether combining a NN model with low-cost technology (e.g., digital video and free digitising software) can also be used to predict the NJM of weightlifters during field-testing situations, such as practice and competition, with comparable accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
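
    The leave-one-lifter-out cross-validation described above can be sketched with a small multi-output regressor. The synthetic kinematic inputs, the single linear mapping used to generate targets, and the network size are stand-ins for illustration only:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import r2_score

    # inputs[s]: per-sample barbell features (mass, vertical, horizontal kinematics);
    # targets[s]: hip, knee, ankle net joint moments. Synthetic stand-ins only.
    rng = np.random.default_rng(0)
    subjects = [f"lifter{i}" for i in range(7)]
    W = rng.normal(size=(3, 3))
    inputs = {s: rng.normal(size=(200, 3)) for s in subjects}
    targets = {s: inputs[s] @ W + rng.normal(scale=0.1, size=(200, 3)) for s in subjects}

    for held_out in subjects:  # leave-one-lifter-out cross-validation
        X_tr = np.vstack([inputs[s] for s in subjects if s != held_out])
        y_tr = np.vstack([targets[s] for s in subjects if s != held_out])
        nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        nn.fit(X_tr, y_tr)
        r2 = r2_score(targets[held_out], nn.predict(inputs[held_out]))
        print(held_out, "average r^2:", round(float(r2), 2))
    ```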

  7. Investigation to advance prediction techniques of the low-speed aerodynamics of V/STOL aircraft

    NASA Technical Reports Server (NTRS)

    Maskew, B.; Strash, D.; Nathman, J.; Dvorak, F. A.

    1985-01-01

    A computer program, VSAERO, has been applied to a number of V/STOL configurations with a view to advancing prediction techniques for the low-speed aerodynamic characteristics. The program couples a low-order panel method with surface streamline calculation and integral boundary layer procedures. The panel method, which uses piecewise constant source and doublet panels, includes an iterative procedure for wake shape and models the boundary layer displacement effect using the source transpiration technique. Certain improvements to a basic vortex tube jet model were installed in the code prior to evaluation. Very promising results were obtained for surface pressures near a jet issuing at 90 deg from a flat plate. A solid core model was used in the initial part of the jet with a simple entrainment model. Preliminary representation of the downstream separation zone significantly improved the correlation. The program accurately predicted the pressure distribution inside the inlet on the Grumman 698-411 design at a range of flight conditions. Furthermore, coupled viscous/potential flow calculations gave very close correlation with experimentally determined operational boundaries dictated by the onset of separation inside the inlet. Experimentally observed degradation of these operational boundaries between nacelle-alone tests and tests on the full configuration was also indicated by the calculation. Application of the program to the General Dynamics STOL fighter design was equally encouraging. Very close agreement was observed between experiment and calculation for the effects of power on pressure distribution, lift, and lift curve slope.

  8. Can the ACS-NSQIP surgical risk calculator predict post-operative complications in patients undergoing flap reconstruction following soft tissue sarcoma resection?

    PubMed

    Slump, Jelena; Ferguson, Peter C; Wunder, Jay S; Griffin, Anthony; Hoekstra, Harald J; Bagher, Shaghayegh; Zhong, Toni; Hofer, Stefan O P; O'Neill, Anne C

    2016-10-01

    The ACS-NSQIP surgical risk calculator is an open-access online tool that estimates the risk of adverse post-operative outcomes for a wide range of surgical procedures. Wide surgical resection of soft tissue sarcoma (STS) often requires complex reconstructive procedures that can be associated with relatively high rates of complications. This study evaluates the ability of this calculator to identify patients with STS at risk for post-operative complications following flap reconstruction. Clinical details of 265 patients who underwent flap reconstruction following STS resection were entered into the online calculator. The predicted rates of complications were compared to the observed rates. The calculator model was validated using measures of prediction and discrimination. The mean predicted rate of any complication was 15.35 ± 5.6%, which differed significantly from the observed rate of 32.5% (P = 0.009). The c-statistic was relatively low at 0.626, indicating poor discrimination between patients who are at risk of complications and those who are not. The Brier score of 0.242 was significantly different from 0 (P < 0.001), indicating poor correlation between the predicted and actual probability of complications. The ACS-NSQIP universal risk calculator did not maintain its predictive value in patients undergoing flap reconstruction following STS resection. J. Surg. Oncol. 2016;114:570-575. © 2016 Wiley Periodicals, Inc.
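
    The two validation measures used above, discrimination (c-statistic) and calibration (Brier score), are straightforward to compute once per-patient predicted probabilities and observed outcomes are available. A minimal sketch with synthetic stand-in data matching the cohort size and rates quoted above:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, brier_score_loss

    # predicted: per-patient complication probabilities from the online calculator;
    # observed: 1 if any post-operative complication occurred. Synthetic stand-ins.
    rng = np.random.default_rng(0)
    predicted = np.clip(rng.normal(0.15, 0.056, 265), 0.01, 0.99)
    observed = rng.binomial(1, 0.325, 265)

    c_statistic = roc_auc_score(observed, predicted)   # discrimination
    brier = brier_score_loss(observed, predicted)      # calibration + sharpness
    print(f"c-statistic = {c_statistic:.3f}, Brier score = {brier:.3f}")
    print(f"mean predicted = {predicted.mean():.3f}, observed rate = {observed.mean():.3f}")
    ```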

  9. Modeling Alkyl p-Methoxy Cinnamate (APMC) as UV absorber based on electronic transition using semiempirical quantum mechanics ZINDO/s calculation

    NASA Astrophysics Data System (ADS)

    Salmahaminati; Azis, Muhlas Abdul; Purwiandono, Gani; Arsyik Kurniawan, Muhammad; Rubiyanto, Dwiarso; Darmawan, Arif

    2017-11-01

    In this research, several alkyl p-methoxy cinnamates (APMC) are modeled on the basis of their electronic transitions using semiempirical quantum mechanical ZINDO/s calculations. Alkyl cinnamates from the C1 (methyl) to the C7 (heptyl) homolog, with 1-5 example structures of each homolog, are used as materials. The quantum chemistry package Hyperchem 8.0 is used to draw the structures, optimize the geometries with the semiempirical Austin Model 1 algorithm, and perform single-point calculations with the semiempirical ZINDO/s technique. The ZINDO/s calculations use singly excited configuration interaction (CI) with defined criteria, in which the HOMO-LUMO energy-gap transition and the maximum degeneracy level are set to 7 and 2, respectively. Moreover, analysis of the theoretical spectra is focused on the UV-B (290-320 nm) and UV-C (200-290 nm) regions. The results show that modeling of these compounds can be used to predict the type of UV protection activity, which depends on the electronic transition in the UV region. Modification of the alkyl homolog does not appreciably change the absorption wavelength that indicates the UV protection activity. The alkyl cinnamate compounds are predicted to act as UV-B and UV-C sunscreens.

  10. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  11. Calculation of Debye-Scherrer diffraction patterns from highly stressed polycrystalline materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, M. J., E-mail: macdonm@umich.edu; SLAC National Accelerator Laboratory, Menlo Park, California 94025; Vorberger, J.

    2016-06-07

    Calculations of Debye-Scherrer diffraction patterns from polycrystalline materials have typically been done in the limit of small deviatoric stresses. Although these methods are well suited for experiments conducted near hydrostatic conditions, more robust models are required to diagnose the large strain anisotropies present in dynamic compression experiments. A method to predict Debye-Scherrer diffraction patterns for arbitrary strains has been presented in the Voigt (iso-strain) limit [Higginbotham, J. Appl. Phys. 115, 174906 (2014)]. Here, we present a method to calculate Debye-Scherrer diffraction patterns from highly stressed polycrystalline samples in the Reuss (iso-stress) limit. This analysis uses elastic constants to calculate lattice strains for all initial crystallite orientations, enabling elastic anisotropy and sample texture effects to be modeled directly. The effects of probing geometry, deviatoric stresses, and sample texture are demonstrated and compared to Voigt limit predictions. An example of shock-compressed polycrystalline diamond is presented to illustrate how this model can be applied and demonstrates the importance of including material strength when interpreting diffraction in dynamic compression experiments.

  12. Non-LTE models of Titan's upper atmosphere

    NASA Technical Reports Server (NTRS)

    Yelle, Roger V.

    1991-01-01

    Models for the thermal structure of Titan's upper atmosphere, between 0.1 mbar and 0.01 nbar are presented. The calculations include non-LTE heating/cooling in the rotation-vibration bands of CH4, C2H2, and C2H6, absorption of solar IR radiation in the near-IR bands of CH4 and subsequent cascading to the nu-4 band of CH4, absorption of solar EUV and UV radiation, thermal conduction and cooling by HCN rotational lines. Unlike earlier models, the calculated exospheric temperature agrees well with observations, because of the importance of HCN cooling. The calculations predict a well-developed mesopause with a temperature of 135-140 K at an altitude of approximately 600 km and pressure of about 0.1 microbar. The mesopause is at a higher pressure than predicted by earlier calculations because non-LTE radiative transfer in the rotation-vibration bands of CH4, C2H2, and C2H6 is treated in an accurate manner. The accuracy of the LTE approximation for source functions and heating rates is discussed.

  13. Constitutive Modeling of Piezoelectric Polymer Composites

    NASA Technical Reports Server (NTRS)

    Odegard, Gregory M.; Gates, Tom (Technical Monitor)

    2003-01-01

    A new modeling approach is proposed for predicting the bulk electromechanical properties of piezoelectric composites. The proposed model offers the same level of convenience as the well-known Mori-Tanaka method. In addition, it is shown to yield predicted properties that are, in most cases, more accurate or equally as accurate as the Mori-Tanaka scheme. In particular, the proposed method is used to determine the electromechanical properties of four piezoelectric polymer composite materials as a function of inclusion volume fraction. The predicted properties are compared to those calculated using the Mori-Tanaka and finite element methods.

  14. Tank System Integrated Model: A Cryogenic Tank Performance Prediction Program

    NASA Technical Reports Server (NTRS)

    Bolshinskiy, L. G.; Hedayat, A.; Hastings, L. J.; Sutherlin, S. G.; Schnell, A. R.; Moder, J. P.

    2017-01-01

    Accurate predictions of the thermodynamic state of the cryogenic propellants, pressurization rate, and performance of pressure control techniques in cryogenic tanks are required for the development of cryogenic fluid long-duration storage technology and planning for future space exploration missions. This Technical Memorandum (TM) presents the analytical tool, Tank System Integrated Model (TankSIM), which can be used for modeling pressure control and predicting the behavior of cryogenic propellant for long-term storage for future space missions. Utilizing TankSIM, the following processes can be modeled: tank self-pressurization, boiloff, ullage venting, mixing, and condensation on the tank wall. This TM also includes comparisons of TankSIM program predictions with test data and examples of multiphase mission calculations.

  15. Estimation of the Viscosities of Liquid Sn-Based Binary Lead-Free Solder Alloys

    NASA Astrophysics Data System (ADS)

    Wu, Min; Li, Jinquan

    2018-01-01

    The viscosity of binary Sn-based lead-free solder alloys was calculated by combining a prediction model with the Miedema model. A viscosity factor was proposed, and the relationship between viscosity and surface tension was analyzed as well. The results show that the viscosities of Sn-based lead-free solders obtained from the prediction model are in excellent agreement with reported values. The viscosity factor is determined by three physical parameters: atomic volume, electronic density, and electronegativity. In addition, an apparent correlation between the surface tension and viscosity of the binary Sn-based Pb-free solders was obtained based on the prediction model.

  16. Vibration analysis of the SA349/2 helicopter

    NASA Technical Reports Server (NTRS)

    Heffernan, Ruth; Precetti, Dominique; Johnson, Wayne

    1991-01-01

    Helicopter airframe vibration is examined using calculations and measurements for the SA349/2 research helicopter. The hub loads, which transmit excitations to the fuselage, are predicted using a comprehensive rotorcraft analysis and correlated with measured hub loads. The predicted and measured hub loads are then coupled with finite element models representing the SA349/2 fuselage. The resulting vertical acceleration at the pilot seat is examined. Adjustments are made to the airframe structural models to examine the sensitivity of the predicted vertical acceleration to the model. Changes of a few percent to the damping and frequency of specific modes lead to large reductions in predicted vibration, and to major improvements in the correlations with measured pilot-seat vertical acceleration.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lafontaine Rivera, Jimmy G.; Theisen, Matthew K.; Chen, Po-Wei

    The product formation yield (product formed per unit substrate consumed) is often the most important performance indicator in metabolic engineering. Until now, the actual yield could not be predicted; it could only be bounded by its maximum theoretical value. The maximum theoretical yield is calculated by considering the stoichiometry of the pathways and the cofactor regeneration involved. Here we found that in many cases, dynamic stability becomes an issue when excessive pathway flux is drawn to a product. This constraint reduces the yield and renders the maximum theoretical yield too loose to be predictive. We propose a more realistic quantity, defined as the kinetically accessible yield (KAY), to predict the maximum accessible yield for a given flux alteration. KAY is either determined by the point of instability, beyond which steady states become unstable and disappear, or by a local maximum reached before becoming unstable. Thus, KAY is the maximum flux that can be redirected for a given metabolic engineering strategy without losing stability. Strictly speaking, calculation of KAY requires complete kinetic information. With limited or no kinetic information, an Ensemble Modeling strategy can be used to determine a range of likely values for KAY, including an average prediction. We first apply the KAY concept to a toy model to demonstrate the principle of kinetic limitations on yield. We then used a full-scale E. coli model (193 reactions, 153 metabolites), and the approach was successful in predicting isobutanol production in E. coli: the calculated KAY values are consistent with experimental data for three genotypes previously published.

  18. Evaluation of relative response factor methodology for demonstrating attainment of ozone in Houston, Texas.

    PubMed

    Vizuete, William; Biton, Leiran; Jeffries, Harvey E; Couzo, Evan

    2010-07-01

    In 2007, the U.S. Environmental Protection Agency (EPA) released guidance on demonstrating attainment of the federal ozone (O3) standard. This guidance recommended a change in the use of air quality model (AQM) predictions from an absolute to a relative sense. This is accomplished by using the ratio, rather than the absolute difference, of AQM O3 predictions from a historical year to an attainment year. This ratio of O3 concentrations, labeled the relative response factor (RRF), is multiplied by an average of observed concentrations at every monitor. In this analysis, we investigated whether the methodology used to calculate RRFs severs the source-receptor relationship for a given monitor. Model predictions were generated with a regulatory AQM system used to support the 2004 Houston-Galveston-Brazoria State Implementation Plan. Following the procedures in the EPA guidance, an attainment demonstration was completed using regulatory AQM predictions and measurements from the Houston ground-monitoring network. Results show that the model predictions used for the RRF calculation were often based on model conditions that were geographically remote from observations and counter to wind flow. Many of the monitors used the same model predictions for an RRF, even if that O3 plume did not impact them. The RRF methodology thus severed the true source-receptor relationship for a monitor. This analysis also showed that model performance could influence RRF values, and values at monitoring sites appear to be sensitive to model bias. Results indicate an inverse linear correlation of RRFs with model bias at each monitor (R² = 0.47), resulting in changes in future O3 design values of up to 5 parts per billion (ppb). These results suggest that the application of the RRF methodology in Houston, TX, should be changed from using all model predictions above 85 ppb to a method that removes any predictions that are not relevant to the observed source-receptor relationship.
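
    The RRF arithmetic itself is simple; a simplified sketch is given below. The day-selection rule (base-case daily peaks at or above an 85 ppb threshold), the example concentrations, and the base design value are illustrative assumptions, not values from the Houston analysis:

    ```python
    import numpy as np

    def relative_response_factor(base_model_o3, future_model_o3, threshold=85.0):
        """Simplified RRF: ratio of mean future-case to mean base-case modeled daily
        peak O3 (ppb) over days whose base-case prediction meets a threshold."""
        base = np.asarray(base_model_o3, dtype=float)
        future = np.asarray(future_model_o3, dtype=float)
        mask = base >= threshold
        return future[mask].mean() / base[mask].mean()

    base_days = [92.0, 88.5, 110.2, 79.0, 95.4]     # illustrative modeled daily peaks
    future_days = [83.1, 80.0, 101.5, 75.2, 88.0]
    rrf = relative_response_factor(base_days, future_days)
    design_value_future = rrf * 96.0  # DVF = RRF x observed base design value (ppb)
    print(round(rrf, 3), round(design_value_future, 1))
    ```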

  19. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906
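
    The CA and IA predictions discussed above can be written down compactly for chemicals that follow Hill-type concentration-response curves with a common maximal effect of 1. The sketch below uses that assumption together with made-up parameters; it does not reproduce the GCA model or the H295R data:

    ```python
    import numpy as np

    def hill_effect(c, ec50, n):
        """Single-chemical Hill curve scaled to a maximal effect of 1."""
        return c**n / (ec50**n + c**n)

    def ca_mixture_ecx(x, fractions, ec50s, hills):
        """Concentration addition: total mixture concentration producing effect x
        for a fixed mixture ratio (fractions sum to 1)."""
        ecx = np.asarray(ec50s) * (x / (1 - x)) ** (1 / np.asarray(hills))
        return 1.0 / np.sum(np.asarray(fractions) / ecx)

    def ia_mixture_effect(concs, ec50s, hills):
        """Independent action: combined effect from the single-chemical effects."""
        effects = hill_effect(np.asarray(concs), np.asarray(ec50s), np.asarray(hills))
        return 1.0 - np.prod(1.0 - effects)

    # Hypothetical two-chemical mixture (equal fractions); parameters are illustrative.
    print(ca_mixture_ecx(0.5, [0.5, 0.5], [1.0, 4.0], [1.5, 1.0]))
    print(ia_mixture_effect([0.8, 3.2], [1.0, 4.0], [1.5, 1.0]))
    ```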

  20. Increased Fidelity in Prediction Methods For Landing Gear Noise

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard V.; Brentner, Kenneth S.; Morris, Philip J.; Lockard, David P.

    2006-01-01

    An aeroacoustic prediction scheme has been developed for landing gear noise. The method is designed to handle the complex landing gear geometry of current and future aircraft. The gear is represented by a collection of subassemblies and simple components that are modeled using acoustic elements. These acoustic elements are generic, but generate noise representative of the physical components on a landing gear. The method sums the noise radiation from each component of the undercarriage in isolation, accounting for interference with adjacent components through an estimate of the local upstream and downstream flows and turbulence intensities. The acoustic calculations are made in the code LGMAP, which computes the sound pressure levels at various observer locations. The method can calculate the noise from the undercarriage in isolation or installed on an aircraft, for both main and nose landing gear. Comparisons with wind tunnel and flight data are used to initially calibrate the method; then it may be used to predict the noise of any landing gear. In this paper, noise predictions are compared with wind tunnel data for model landing gears of various scales and levels of fidelity, as well as with flight data on full-scale undercarriages. The present agreement between the calculations and measurements suggests the method has promise for future application in the prediction of airframe noise.

  1. A two-component rain model for the prediction of attenuation and diversity improvement

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1982-01-01

    A new model was developed to predict attenuation statistics for a single Earth-satellite or terrestrial propagation path. The model was extended to provide predictions of the joint occurrences of specified or higher attenuation values on two closely spaced Earth-satellite paths. The joint statistics provide the information required to obtain diversity gain or diversity advantage estimates. The new model is meteorologically based. It was tested against available Earth-satellite beacon observations and terrestrial path measurements. The model employs the rain climate region descriptions of the Global rain model. The rms deviation between the predicted and observed attenuation values for the terrestrial path data was 35 percent, a result consistent with the expectations of the Global model when the rain rate distribution for the path is not used in the calculation. Within the United States the rms deviation between measurement and prediction was 36 percent but worldwide it was 79 percent.

  2. Direct calculation of 1-octanol-water partition coefficients from adaptive biasing force molecular dynamics simulations.

    PubMed

    Bhatnagar, Navendu; Kamath, Ganesh; Chelst, Issac; Potoff, Jeffrey J

    2012-07-07

    The 1-octanol-water partition coefficient log Kow of a solute is a key parameter used in the prediction of a wide variety of complex phenomena such as drug availability and the bioaccumulation potential of trace contaminants. In this work, adaptive biasing force molecular dynamics simulations are used to determine absolute free energies of hydration, solvation, and 1-octanol-water partition coefficients for n-alkanes from methane to octane. Two approaches are evaluated: the direct transfer of the solute from the 1-octanol to the water phase, and separate transfers of the solute from the water or 1-octanol phase to vacuum, with both methods yielding statistically indistinguishable results. Calculations performed with the TIP4P and SPC/E water models and the TraPPE united-atom force field for n-alkanes show that the choice of water model has a negligible effect on predicted free energies of transfer and partition coefficients for n-alkanes. A comparison of calculations using wet and dry octanol phases shows that the predictions for log Kow using wet octanol are 0.2-0.4 log units lower than for dry octanol, although this is within the statistical uncertainty of the calculation.
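
    The conversion from solvation free energies to log Kow used in this kind of study follows a standard thermodynamic relation; a minimal sketch (ignoring standard-state and wet/dry-octanol corrections, with illustrative numbers rather than the paper's results):

    ```python
    import math

    def log_kow(dG_solv_water, dG_solv_octanol, temperature=298.15):
        """Standard relation between the 1-octanol/water partition coefficient and
        the solvation free energies (kJ/mol) of the solute in each phase:
            log10 K_ow = (dG_solv,water - dG_solv,octanol) / (2.303 R T)."""
        R = 8.314462618e-3  # gas constant in kJ/(mol K)
        return (dG_solv_water - dG_solv_octanol) / (2.303 * R * temperature)

    # Illustrative values only: a solute solvated 10 kJ/mol more favorably in
    # octanol than in water partitions with log K_ow of about 1.75.
    print(round(log_kow(dG_solv_water=2.0, dG_solv_octanol=-8.0), 2))
    ```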

  3. Multifactorial disease risk calculator: Risk prediction for multifactorial disease pedigrees.

    PubMed

    Campbell, Desmond D; Li, Yiming; Sham, Pak C

    2018-03-01

    Construction of multifactorial disease models from epidemiological findings and their application to disease pedigrees for risk prediction is nontrivial for all but the simplest of cases. Multifactorial Disease Risk Calculator is a web tool facilitating this. It provides a user-friendly interface, extending a reported methodology based on a liability-threshold model. Multifactorial disease models incorporating all the following features in combination are handled: quantitative risk factors (including polygenic scores), categorical risk factors (including major genetic risk loci), stratified age of onset curves, and the partition of the population variance in disease liability into genetic, shared, and unique environment effects. It allows the application of such models to disease pedigrees. Pedigree-related outputs are (i) individual disease risk for pedigree members, (ii) n year risk for unaffected pedigree members, and (iii) the disease pedigree's joint liability distribution. Risk prediction for each pedigree member is based on using the constructed disease model to appropriately weigh evidence on disease risk available from personal attributes and family history. Evidence is used to construct the disease pedigree's joint liability distribution. From this, lifetime and n year risk can be predicted. Example disease models and pedigrees are provided at the website and are used in accompanying tutorials to illustrate the features available. The website is built on an R package which provides the functionality for pedigree validation, disease model construction, and risk prediction. Website: http://grass.cgs.hku.hk:3838/mdrc/current. © 2017 WILEY PERIODICALS, INC.
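
    The liability-threshold machinery underlying this kind of risk calculator can be illustrated in a few lines. The sketch below assumes a standard-normal population liability, a threshold set by disease prevalence, and a user-supplied split between explained and residual liability; it is a conceptual illustration, not the Multifactorial Disease Risk Calculator's algorithm:

    ```python
    from scipy.stats import norm

    def liability_threshold_risk(prevalence, liability_score, residual_sd):
        """Disease risk for an individual under a liability-threshold model.
        `liability_score` is the part of liability explained by known factors
        (e.g., polygenic score plus measured risk factors) on the standard-normal
        liability scale; `residual_sd` is the SD of the unexplained liability."""
        threshold = norm.isf(prevalence)   # population threshold on N(0, 1)
        return norm.sf((threshold - liability_score) / residual_sd)

    # Example: 1% prevalence; an individual one unit above average on explained
    # liability, with residual SD 0.9, has a markedly elevated lifetime risk.
    print(round(liability_threshold_risk(0.01, 1.0, 0.9), 3))
    ```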

  4. NIS, IR and Raman spectra with quantum mechanical calculations for analyzing the force field of hypericin model compounds

    NASA Astrophysics Data System (ADS)

    Ulicny, Jozef; Leulliot, Nicolas; Grajcar, Lydie; Baron, Marie-Hélène; Jobic, Hervé; Ghomi, Mahmoud

    1999-06-01

    Geometry optimization as well as harmonic force field calculations at HF and DFT levels of theory have been performed in order to elucidate the ground state properties of anthrone and emodin, two polycyclic conjugated molecules considered as hypericin model compounds. NIS, IR and FT-Raman spectra of these compounds have been recorded to validate the calculated results (geometry and vibrational modes). Calculated NIS spectra using the lowest energy conformers are in agreement with experiment. In addition, the intramolecular H-bonds in emodin predicted by the calculations are evidenced by IR spectra recorded as a function of temperature.

  5. NIS, IR and Raman spectra with quantum mechanical calculations for analyzing the force field of hypericin model compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulicny, Jozef (Department of Biophysics, Safarik University, Jesenna 5, 04154 Kosice); Leulliot, Nicolas

    1999-06-15

    Geometry optimization as well as harmonic force field calculations at HF and DFT levels of theory have been performed in order to elucidate the ground state properties of anthrone and emodin, two polycyclic conjugated molecules considered as hypericin model compounds. NIS, IR and FT-Raman spectra of these compounds have been recorded to validate the calculated results (geometry and vibrational modes). Calculated NIS spectra using the lowest energy conformers are in agreement with experiment. In addition, the intramolecular H-bonds in emodin predicted by the calculations are evidenced by IR spectra recorded as a function of temperature.

  6. Group additivity calculations of the thermodynamic properties of unfolded proteins in aqueous solution: a critical comparison of peptide-based and HKF models.

    PubMed

    Hakin, A W; Hedwig, G R

    2001-02-15

    A recent paper in this journal [Amend and Helgeson, Biophys. Chem. 84 (2000) 105] presented a new group additivity model to calculate various thermodynamic properties of unfolded proteins in aqueous solution. The parameters given for the revised Helgeson-Kirkham-Flowers (HKF) equations of state for all the constituent groups of unfolded proteins can be used, in principle, to calculate the partial molar heat capacity, C°p,2, and volume, V°2, at infinite dilution of any polypeptide. Calculations of the values of C°p,2 and V°2 for several polypeptides have been carried out to test the predictive utility of the HKF group additivity model. The results obtained are in very poor agreement with experimental data, and also with results calculated using a peptide-based group additivity model. A critical assessment of these two additivity models is presented.
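
    Whatever the parameter set, a group additivity estimate is just a stoichiometric sum of group contributions; the sketch below shows that arithmetic with invented group values, not the published HKF or peptide-based parameters.

```python
# Group additivity: property(polypeptide) = sum over groups of n_i * g_i.
# The contributions below are invented placeholders for illustration only
# (units: J K^-1 mol^-1 for the partial molar heat capacity at 25 degC).
group_cp = {
    "backbone_unit": 60.0,   # -CH(R)CO-NH- peptide backbone unit
    "terminal":      90.0,   # combined end-group contribution
    "CH3_sidechain": 130.0,  # e.g. alanine side chain
    "H_sidechain":   30.0,   # e.g. glycine side chain
}

def additive_property(group_counts, contributions):
    """Return the additive estimate for a molecule described by its group counts."""
    return sum(n * contributions[g] for g, n in group_counts.items())

# A hypothetical penta-alanine-like peptide: 5 backbone units, 5 CH3 side chains, one set of termini.
counts = {"backbone_unit": 5, "CH3_sidechain": 5, "terminal": 1}
print(f"estimated Cp_2 = {additive_property(counts, group_cp):.0f} J K^-1 mol^-1")
```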

  7. Radiation model predictions and validation using LDEF satellite data

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1993-01-01

    Predictions were made and compared with the radiation dose measurements obtained by thermoluminescent dosimeters on the Long Duration Exposure Facility (LDEF) in order to evaluate the accuracy of models currently used in defining the ionizing radiation environment for low Earth orbit missions. The calculations include a detailed simulation of the radiation exposure (altitude and solar cycle variations, directional dependence) and of shielding effects (three-dimensional LDEF geometry model), so that differences between the predicted and observed doses can be attributed to environment model uncertainties. The LDEF dose data are utilized to assess the accuracy of models describing the trapped proton flux, the trapped proton directionality, and the trapped electron flux.

  8. Prediction of wastewater treatment plants performance based on artificial fish school neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Ruicheng; Li, Chong

    2011-10-01

    A reliable model of a wastewater treatment plant is essential for predicting its performance and for forming a basis for controlling the operation of the process. This would minimize operation costs and help assess the stability of the environmental balance. To address the multi-variable, uncertain, and non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operation data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The calculation results show that the predicted values match the measured values well, demonstrating the model's ability to simulate and predict plant performance and to support optimization of the operating status. The prediction model provides a simple and practical approach to operation and management in wastewater treatment plants, and has good research and engineering practical value.

  9. Ablation Predictions for Carbonaceous Materials Using Two Databases for Species Thermodynamics

    NASA Technical Reports Server (NTRS)

    Milos, F. S.; Chen, Y.-K.

    2013-01-01

    During previous work at NASA Ames Research Center, most ablation predictions were obtained using a species thermodynamics database derived primarily from the JANAF thermochemical tables. However, the chemical equilibrium with applications thermodynamics database, also used by NASA, is considered more up to date. In this work, ablation analyses were performed for carbon and carbon phenolic materials using both sets of species thermodynamics. The ablation predictions are comparable at low and moderate heat fluxes, where the dominant mechanism is carbon oxidation. For high heat fluxes where sublimation is important, the predictions differ, with the chemical equilibrium with applications model predicting a lower ablation rate. The disagreement is greater for carbon phenolic than for carbon, and this difference is attributed to hydrocarbon species that may contribute to the ablation rate. Sample calculations for representative Orion and Stardust environments show significant differences only in the sublimation regime. For Stardust, if the calculations include a nominal environmental uncertainty for aeroheating, then the chemical equilibrium with applications model predicts a range of recession that is consistent with measurements for both heatshield cores.

  10. Predictions of a Large Magnetocaloric Effect in Co- and Cr-Substituted Heusler Alloys Using First-Principles and Monte Carlo Approaches

    NASA Astrophysics Data System (ADS)

    Sokolovskiy, Vladimir V.; Buchelnikov, Vasiliy D.; Zagrebin, Mikhail A.; Grünebohm, Anna; Entel, Peter

    The effect of Co- and Cr-doping on the magnetic and magnetocaloric properties of Ni-Mn-(In, Ga, Sn, and Al) Heusler alloys has been studied theoretically by combining first-principles and Monte Carlo approaches. The magnetic and magnetocaloric properties are obtained as a function of temperature and magnetic field using a mixed Potts and Blume-Emery-Griffiths model, where the model parameters are obtained from ab initio calculations. The Monte Carlo calculations allowed us to predict a giant inverse magnetocaloric effect across the martensitic transformation in partially new, hypothetical magnetic Heusler alloys.

  11. Semimicroscopic, Lane-consistent nucleon-nucleus optical model potential up to 200 MeV

    NASA Astrophysics Data System (ADS)

    Bauge, Eric; Delaroche, Jean-Paul; Girod, Michel

    2000-10-01

    Our semimicroscopic optical model potential [E. Bauge et al., Phys. Rev. C 58, 1118 (1998)] is re-evaluated in order to obtain a Lane-consistent description of (p,p), (n,n) and (p,n IAS) elastic scattering and reaction observables. The re-assessed nuclear matter interaction (which includes sizable renormalizations of the isovector potentials) is folded with microscopic HFB nuclear densities, producing OMPs that are free of adjustable parameters for nuclei with A >= 40. With the Lane consistency of the interaction and the predictive nature of our HFB calculations, this scheme can be used to calculate observables for nuclei far from the stability line with good predictive power.

  12. Fluorescence quenching near small metal nanoparticles.

    PubMed

    Pustovit, V N; Shahbazyan, T V

    2012-05-28

    We develop a microscopic model for fluorescence of a molecule (or semiconductor quantum dot) near a small metal nanoparticle. When a molecule is situated close to the metal surface, its fluorescence is quenched due to energy transfer to the metal. We perform quantum-mechanical calculations of energy transfer rates for nanometer-sized Au nanoparticles and find that nonlocal and quantum-size effects significantly enhance dissipation in the metal as compared to that predicted by semiclassical electromagnetic models. However, the dependence of the transfer rates on the molecule's distance to the metal nanoparticle surface, d, is significantly weaker than the d^(-4) behavior for a flat metal surface with a sharp boundary predicted by previous calculations within the random phase approximation.

  13. Numerical modeling of oil shale fragmentation experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuszmaul, J.S.

    The economic development of modified in situ oil shale retorting will benefit from the ability to design a blasting scheme that creates a rubble bed of uniform permeability. Preparing such a design depends upon successfully predicting how a given explosive charge and firing sequence will fracture the oil shale. Numerical models are used to predict the extent of damage caused by a particular explosive charge. Recent single-blastwell cratering tests provided experimental measurements of the extent of damage induced by an explosion. Measuring rock damage involved crater excavation, rubble screening, crater elevation surveys, and posttest extraction of cores. These measurements were compared to the damage calculated by the numerical model. Core analyses showed that the damage varied greatly from layer to layer. The numerical results also show this effect, indicating that rock damage is highly dependent on oil shale grade. The computer simulation also calculated particle velocities and dynamic stress amplitudes in the rock; predicted values agree with experimental measurements. Calculated rock fragmentation compared favorably with fragmentation measured by crater excavation and by core analysis. Because coring provides direct inspection of rock fragmentation, the use of posttest coring in future experiments is recommended.

  14. Model Predictive Control considering Reachable Range of Wheels for Leg / Wheel Mobile Robots

    NASA Astrophysics Data System (ADS)

    Suzuki, Naito; Nonaka, Kenichiro; Sekiguchi, Kazuma

    2016-09-01

    Obstacle avoidance is one of the important tasks for mobile robots. In this paper, we study obstacle avoidance control for mobile robots equipped with four legs comprised of a three-DoF SCARA leg/wheel mechanism, which enables the robot to change its shape to adapt to the environment. Our previous method achieves obstacle avoidance by model predictive control (MPC) considering obstacle size and lateral wheel positions. However, this method does not ensure the existence of joint angles that achieve the reference wheel positions calculated by the MPC. In this study, we propose a model predictive control that considers the reachable ranges of the wheel positions by combining multiple linear constraints, where each reachable range is approximated as a convex trapezoid. Thus, we formulate the MPC as a quadratic problem with linear constraints for the nonlinear problem of longitudinal and lateral wheel position control. The MPC optimization yields the reference wheel positions, while each joint angle is determined by inverse kinematics. By considering the reachable ranges explicitly, the optimal joint angles are calculated, which enables the wheels to reach the reference wheel positions. We verify the advantages of the proposed method by comparing it with the previous method through numerical simulations.
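
    A minimal sketch of the core step: track a reference wheel position subject to a convex (trapezoidal) reachable set expressed as linear inequalities. SciPy's SLSQP is used here as a stand-in for a dedicated QP solver, and the reference point and trapezoid geometry are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Decision variable: planar position of one wheel, x = (px, py).
x_ref = np.array([0.35, 0.10])   # reference position from a hypothetical MPC layer

# Reachable range approximated as a convex trapezoid: A @ x <= b (invented geometry).
A = np.array([[-1.0, 0.0],    # px >= 0.05
              [ 1.0, 0.0],    # px <= 0.30
              [ 0.0, -1.0],   # py >= -0.10
              [ 0.5,  1.0]])  # 0.5*px + py <= 0.25 (upper edge slopes down)
b = np.array([-0.05, 0.30, 0.10, 0.25])

cost = lambda x: float(np.sum((x - x_ref) ** 2))      # quadratic tracking cost
cons = {"type": "ineq", "fun": lambda x: b - A @ x}   # SLSQP expects g(x) >= 0

res = minimize(cost, x0=np.array([0.15, 0.05]), method="SLSQP", constraints=[cons])
print("closest reachable wheel position:", np.round(res.x, 3))
```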

  15. Evaluation of the National Surgical Quality Improvement Program Universal Surgical Risk Calculator for a gynecologic oncology service.

    PubMed

    Szender, J Brian; Frederick, Peter J; Eng, Kevin H; Akers, Stacey N; Lele, Shashikant B; Odunsi, Kunle

    2015-03-01

    The National Surgical Quality Improvement Program is aimed at preventing perioperative complications. An online calculator was recently published, but the primary studies used limited gynecologic surgery data. The purpose of this study was to evaluate the performance of the National Surgical Quality Improvement Program Universal Surgical Risk Calculator (URC) on the patients of a gynecologic oncology service. We reviewed 628 consecutive surgeries performed by our gynecologic oncology service between July 2012 and June 2013. Demographic data including diagnosis and cancer stage, if applicable, were collected. Charts were reviewed to determine complication rates. Specific complications were as follows: death, pneumonia, cardiac complications, surgical site infection (SSI) or urinary tract infection, renal failure, or venous thromboembolic event. Data were compared with modeled outcomes using Brier scores and receiver operating characteristic curves. Significance was declared based on P < 0.05. The model accurately predicted death and venous thromboembolic event, with Brier scores of 0.004 and 0.003, respectively. Predicted risk was 50% greater than experienced for urinary tract infection; the experienced SSI and pneumonia rates were 43% and 36% greater than predicted. For any complication, a Brier score of 0.023 indicates poor performance of the model. In this study of gynecologic surgeries, we could not verify the predictive value of the URC for cardiac complications, SSI, and pneumonia. One disadvantage of applying a URC to multiple subspecialties is that with some categories, complications are not accurately estimated. Our data demonstrate that some predicted risks reported by the calculator need to be interpreted with reservation.
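
    The Brier score used here is simply the mean squared difference between predicted probabilities and observed 0/1 outcomes; a minimal sketch with made-up numbers follows.

```python
import numpy as np

def brier_score(predicted_prob, observed):
    """Mean squared difference between predicted probabilities and 0/1 outcomes."""
    predicted_prob = np.asarray(predicted_prob, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean((predicted_prob - observed) ** 2))

# Illustrative data: risk-calculator probabilities and whether the event occurred.
p = [0.02, 0.10, 0.01, 0.30, 0.05]
y = [0,    0,    0,    1,    0]
print(f"Brier score = {brier_score(p, y):.3f}")   # lower is better; 0 is perfect
```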

  16. Prediction of visual evoked potentials at any surface location from a set of three recording electrodes.

    PubMed

    Mazinani, Babac A E; Waberski, Till D; van Ooyen, Andre; Walter, Peter

    2008-05-01

    The purpose of this study was to introduce a mathematical model that calculates a source dipole, as the origin of the evoked activity, from three simultaneously recorded VEPs at different locations on the scalp surface, in order to predict field potentials at any neighboring location, and to validate this model by comparison with actual recordings. In 10 healthy subjects (25-38, mean 29 years) continuous VEPs were recorded via 96 channels. On the basis of the recordings at the positions POz', O1' and O2', a source dipole vector was calculated for each time point of the recordings and VEP responses were back-projected for any of the 96 electrode positions. Differences between the calculated and the actually recorded responses were quantified by coefficients of variation (CV). The prediction precision and response size depended on the distance between the electrode of the predicted response and the recording electrodes. After compensating for this relationship using a polynomial function, the CV of the mean difference between calculated and recorded responses of the 10 subjects was 2.8 +/- 1.2%. In conclusion, the "Mini-Brainmapping" model can provide precise topographical information with minimal additional recording effort and good reliability. The implementation of this method in a routine diagnostic setting as an "easy-to-do" procedure would make it possible to examine a large number of patients and normal subjects in a short time, and thus a solid database could be created to correlate well-defined pathologies with topographical VEP changes.

  17. An analytical method to predict efficiency of aircraft gearboxes

    NASA Technical Reports Server (NTRS)

    Anderson, N. E.; Loewenthal, S. H.; Black, J. D.

    1984-01-01

    A spur gear efficiency prediction method previously developed by the authors was extended to include the power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This, combined with the recent capability of predicting losses in spur gears of nonstandard proportions, allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large-scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.

  18. High-fidelity modeling and impact footprint prediction for vehicle breakup analysis

    NASA Astrophysics Data System (ADS)

    Ling, Lisa

    For decades, vehicle breakup analysis had been performed for space missions that used nuclear heater or power units in order to assess aerospace nuclear safety for potential launch failures leading to inadvertent atmospheric reentry. Such pre-launch risk analysis is imperative to assess possible environmental impacts, obtain launch approval, and for launch contingency planning. In order to accurately perform a vehicle breakup analysis, the analysis tool should include a trajectory propagation algorithm coupled with thermal and structural analyses and influences. Since such a software tool was not available commercially or in the public domain, a basic analysis tool was developed by Dr. Angus McRonald prior to this study. This legacy software consisted of low-fidelity modeling and had the capability to predict vehicle breakup, but did not predict the surface impact point of the nuclear component. Thus the main thrust of this study was to develop and verify the additional dynamics modeling and capabilities for the analysis tool with the objectives to (1) have the capability to predict impact point and footprint, (2) increase the fidelity in the prediction of vehicle breakup, and (3) reduce the effort and time required to complete an analysis. The new functions developed for predicting the impact point and footprint included 3-degrees-of-freedom trajectory propagation, the generation of non-arbitrary entry conditions, sensitivity analysis, and the calculation of impact footprint. The functions to increase the fidelity in the prediction of vehicle breakup included a panel code to calculate the hypersonic aerodynamic coefficients for an arbitrary-shaped body and the modeling of local winds. The function to reduce the effort and time required to complete an analysis included the calculation of node failure criteria. The derivation and development of these new functions are presented in this dissertation, and examples are given to demonstrate the new capabilities and the improvements made, with comparisons between the results obtained from the upgraded analysis tool and the legacy software wherever applicable.

  19. Evaluation of trace metals bioavailability in Japanese river waters using DGT and a chemical equilibrium model.

    PubMed

    Han, Shuping; Naito, Wataru; Hanai, Yoshimichi; Masunaga, Shigeki

    2013-09-15

    To develop efficient and effective methods of assessing and managing the risk posed by metals to aquatic life, it is important to determine the effects of water chemistry on the bioavailability of metals in surface water. In this study, we employed the diffusive gradients in thin-films (DGT) technique to determine the bioavailability of metals (Ni, Cu, Zn, and Pb) in Japanese water systems. The DGT results were compared with a chemical equilibrium model (WHAM 7.0) calculation to examine its robustness and utility to predict dynamic metal speciation. The DGT measurements showed that biologically available fractions of metals in the rivers impacted by mine drainage and metal industries were relatively high compared with those in urban rivers. Comparison between the DGT results and the model calculation indicated good agreement for Zn. The concentrations calculated by the model for Ni and Cu were higher than the DGT concentrations at most sites. As for Pb, the model calculation depended on whether the precipitated iron(III) hydroxide or precipitated aluminum(III) hydroxide was assumed to have an active surface. Our results suggest that the use of WHAM 7.0 combined with the DGT method can predict bioavailable concentrations of most metals (except for Pb) with reasonable accuracy. Copyright © 2013. Published by Elsevier Ltd.

  20. A patient-specific EMG-driven neuromuscular model for the potential use of human-inspired gait rehabilitation robots.

    PubMed

    Ma, Ye; Xie, Shengquan; Zhang, Yanxin

    2016-03-01

    A patient-specific electromyography (EMG)-driven neuromuscular model (PENm) is developed for the potential use of human-inspired gait rehabilitation robots. The PENm is modified based on the current EMG-driven models by decreasing the calculation time and ensuring good prediction accuracy. To ensure the calculation efficiency, the PENm is simplified into two EMG channels around one joint with minimal physiological parameters. In addition, a dynamic computation model is developed to achieve real-time calculation. To ensure the calculation accuracy, patient-specific muscle kinematics information, such as the musculotendon lengths and the muscle moment arms during the entire gait cycle, are employed based on the patient-specific musculoskeletal model. Moreover, an improved force-length-velocity relationship is implemented to generate accurate muscle forces. Gait analysis data including kinematics, ground reaction forces, and raw EMG signals from six adolescents at three different speeds were used to evaluate the PENm. The simulation results show that the PENm has the potential to predict accurate joint moments in real time. The design of advanced human-robot interaction control strategies and human-inspired gait rehabilitation robots can benefit from the application of the human internal state provided by the PENm. Copyright © 2016 Elsevier Ltd. All rights reserved.
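
    A generic Hill-type force computation of the kind such models build on combines activation with force-length, force-velocity, and passive terms; the curve shapes and constants below are illustrative only and are not the PENm's calibrated relationships. The joint moment then follows by summing each muscle force times its moment arm.

```python
import math

def hill_muscle_force(a, l_norm, v_norm, f_max=1000.0):
    """Generic Hill-type muscle force (N) from activation a in [0, 1],
    normalized fiber length l_norm (1 = optimal length) and normalized
    velocity v_norm (-1 = maximum shortening velocity, 0 = isometric)."""
    # Active force-length: bell-shaped curve centred on the optimal fiber length.
    f_l = math.exp(-((l_norm - 1.0) / 0.45) ** 2)
    # Force-velocity: Hill-like drop while shortening, plateau while lengthening.
    if v_norm < 0.0:
        f_v = max(0.0, (1.0 + v_norm) / (1.0 - 3.0 * v_norm))
    else:
        f_v = min(1.4, 1.0 + 0.8 * v_norm)
    # Passive elastic force, engaging beyond the optimal length.
    f_p = 0.02 * (math.exp(5.0 * max(l_norm - 1.0, 0.0)) - 1.0)
    return f_max * (a * f_l * f_v + f_p)

print(f"isometric, fully active : {hill_muscle_force(1.0, 1.0, 0.0):.0f} N")
print(f"shortening, half v_max  : {hill_muscle_force(1.0, 1.0, -0.5):.0f} N")
```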

  1. Advanced earth observation spacecraft computer-aided design software: Technical, user and programmer guide

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.; Krauze, L. D.

    1983-01-01

    The IDEAS computer program of NASA is a tool for interactive preliminary design and analysis of LSS (Large Space Systems). Nine analysis modules were either modified or created. These modules include the capabilities of automatic model generation, model mass properties calculation, model area calculation, nonkinematic deployment modeling, rigid-body controls analysis, RF performance prediction, subsystem properties definition, and EOS science sensor selection. For each module, a section is provided that contains technical information, user instructions, and programmer documentation.

  2. Lower-extremity musculoskeletal geometry affects the calculation of patellofemoral forces in vertical jumping and weightlifting.

    PubMed

    Cleather, D I; Bull, A M J

    2010-01-01

    The calculation of the patellofemoral joint contact force using three-dimensional (3D) modelling techniques requires a description of the musculoskeletal geometry of the lower limb. In this study, the influence of the complexity of the muscle model was studied by considering two different muscle models, the Delp and Horsman models. Both models were used to calculate the patellofemoral force during standing, vertical jumping, and Olympic-style weightlifting. The patellofemoral forces predicted by the Horsman model were markedly lower than those predicted by the Delp model in all activities and represented more realistic values when compared with previous work. This was found to be a result of a lower level of redundancy in the Delp model, which forced a higher level of muscular activation in order to allow a viable solution. The higher level of complexity in the Horsman model resulted in a greater degree of redundancy and consequently lower activation and patellofemoral forces. The results of this work demonstrate that a well-posed muscle model must have an adequate degree of complexity to create a sufficient independence, variability, and number of moment arms in order to ensure adequate redundancy of the force-sharing problem such that muscle forces are not overstated.

  3. Precipitating Condensation Clouds in Substellar Atmospheres

    NASA Technical Reports Server (NTRS)

    Ackerman, Andrew S.; Marley, Mark S.; Gore, Warren J. (Technical Monitor)

    2000-01-01

    We present a method to calculate vertical profiles of particle size distributions in condensation clouds of giant planets and brown dwarfs. The method assumes a balance between turbulent diffusion and precipitation in horizontally uniform cloud decks. Calculations for the Jovian ammonia cloud are compared with previous methods. An adjustable parameter describing the efficiency of precipitation allows the new model to span the range of predictions from previous models. Calculations for the Jovian ammonia cloud are found to be consistent with observational constraints. Example calculations are provided for water, silicate, and iron clouds on brown dwarfs and on a cool extrasolar giant planet.
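
    The core of this class of model is a balance between upward turbulent mixing of condensate plus vapor and downward transport of condensate by precipitation, often written schematically as below, with K the eddy diffusion coefficient, w_* a convective velocity scale, q_t and q_c the total and condensed mixing ratios, and f_sed the adjustable precipitation-efficiency parameter. This is a schematic form for orientation rather than a transcription of the paper's equations.

```latex
% Schematic mixing/precipitation balance in a horizontally uniform cloud deck
-K\,\frac{\partial q_t}{\partial z} \;-\; f_{\mathrm{sed}}\, w_{*}\, q_c \;=\; 0
```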

  4. Bayesian model checking: A comparison of tests

    NASA Astrophysics Data System (ADS)

    Lucy, L. B.

    2018-06-01

    Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
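
    How a posterior predictive p-value is obtained by simulation can be sketched with a toy Gaussian-mean problem: draw from the posterior, simulate replicated data, and compare a discrepancy statistic with its observed value. Everything below (model, statistic, data) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and posterior: unknown mean mu with known unit variance and a flat
# prior, so the posterior for mu is N(ybar, 1/n).
y = rng.normal(0.3, 1.0, size=50)
n, ybar = y.size, y.mean()
mu_draws = rng.normal(ybar, 1.0 / np.sqrt(n), size=4000)   # posterior draws

# Discrepancy statistic T: here simply the sample maximum.
T_obs = y.max()

# For each posterior draw, simulate a replicated data set and compare T(y_rep) to T(y_obs).
T_rep = np.array([rng.normal(mu, 1.0, size=n).max() for mu in mu_draws])
p_ppp = np.mean(T_rep >= T_obs)
print(f"posterior predictive p-value = {p_ppp:.2f}")   # values near 0 or 1 flag misfit
```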

  5. Wine grape cultivar influence on the performance of models that predict the lower threshold canopy temperature of a water stress index

    USDA-ARS?s Scientific Manuscript database

    The calculation of a thermal-based Crop Water Stress Index (CWSI) requires an estimate of canopy temperature under non-water stressed conditions. The objective of this study was to assess the influence of different wine grape cultivars on the performance of models that predict canopy temperature under non-water stressed conditions.

  6. Failure Criteria for FRP Laminates in Plane Stress

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.

    2003-01-01

    A new set of six failure criteria for fiber reinforced polymer laminates is described. Derived from Dvorak's fracture mechanics analyses of cracked plies and from Puck's action plane concept, the physically-based criteria, denoted LaRC03, predict matrix and fiber failure accurately without requiring curve-fitting parameters. For matrix failure under transverse compression, the fracture plane is calculated by maximizing the Mohr-Coulomb effective stresses. A criterion for fiber kinking is obtained by calculating the fiber misalignment under load, and applying the matrix failure criterion in the coordinate frame of the misalignment. Fracture mechanics models of matrix cracks are used to develop a criterion for matrix in tension and to calculate the associated in-situ strengths. The LaRC03 criteria are applied to a few examples to predict failure load envelopes and to predict the failure mode for each region of the envelope. The analysis results are compared to the predictions using other available failure criteria and with experimental results. Predictions obtained with LaRC03 correlate well with the experimental results.

  7. Flow studies in canine artery bifurcations using a numerical simulation method.

    PubMed

    Xu, X Y; Collins, M W; Jones, C J

    1992-11-01

    Three-dimensional flows through canine femoral bifurcation models were predicted under physiological flow conditions by numerically solving the time-dependent three-dimensional Navier-Stokes equations. In the calculations, two models were assumed for the blood: (a) a Newtonian fluid, and (b) a non-Newtonian fluid obeying the power law. The blood vessel wall was assumed to be rigid, this being the only approximation in the prediction model. The numerical procedure utilized a finite volume approach on a finite element mesh to discretize the equations, and the code used (ASTEC) incorporated the SIMPLE velocity-pressure algorithm in performing the calculations. The predicted velocity profiles were in good qualitative agreement with the in vivo measurements recently obtained by Jones et al. The non-Newtonian effects on the bifurcation flow field were also investigated, and no great differences in velocity profiles were observed. This indicates that the non-Newtonian characteristics of the blood might not be an important factor in determining the general flow patterns for these bifurcations, but could have local significance. Current work involves modeling wall distensibility in an empirically valid manner. Predictions accommodating this will permit a true quantitative comparison with experiment.
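
    The power-law (Ostwald-de Waele) description of blood used as the non-Newtonian option is a one-line constitutive relation; the consistency index and exponent below are typical blood-like placeholder values, not those used in the study.

```python
def power_law_viscosity(shear_rate, k=0.017, n=0.708):
    """Apparent viscosity (Pa*s) of a power-law fluid: mu = k * gamma_dot**(n-1).
    k (consistency index) and n (power-law exponent, n < 1 for shear thinning)
    are illustrative blood-like values, not those of the cited study."""
    return k * shear_rate ** (n - 1.0)

for gamma_dot in (1.0, 10.0, 100.0, 1000.0):   # shear rate, 1/s
    print(f"shear rate {gamma_dot:7.1f} 1/s -> mu = {power_law_viscosity(gamma_dot)*1e3:6.2f} mPa*s")
```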

  8. Fragment-based 13C nuclear magnetic resonance chemical shift predictions in molecular crystals: An alternative to planewave methods

    NASA Astrophysics Data System (ADS)

    Hartman, Joshua D.; Monaco, Stephen; Schatschneider, Bohdan; Beran, Gregory J. O.

    2015-09-01

    We assess the quality of fragment-based ab initio isotropic 13C chemical shift predictions for a collection of 25 molecular crystals with eight different density functionals. We explore the relative performance of cluster, two-body fragment, combined cluster/fragment, and the planewave gauge-including projector augmented wave (GIPAW) models relative to experiment. When electrostatic embedding is employed to capture many-body polarization effects, the simple and computationally inexpensive two-body fragment model predicts both isotropic 13C chemical shifts and the chemical shielding tensors as well as both cluster models and the GIPAW approach. Unlike the GIPAW approach, hybrid density functionals can be used readily in a fragment model, and all four hybrid functionals tested here (PBE0, B3LYP, B3PW91, and B97-2) predict chemical shifts in noticeably better agreement with experiment than the four generalized gradient approximation (GGA) functionals considered (PBE, OPBE, BLYP, and BP86). A set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts is provided based on these benchmark calculations. Statistical cross-validation procedures are used to demonstrate the robustness of these fits.
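
    The mapping from computed shieldings to observed shifts is an ordinary linear regression, delta ≈ A*sigma + B, with a slope expected to be close to -1; the data points below are synthetic and the fitted numbers are not the recommended parameters of the paper.

```python
import numpy as np

# Synthetic example: calculated isotropic 13C shieldings (ppm) vs. experimental shifts (ppm).
sigma_calc = np.array([160.2, 140.5, 120.8, 95.3, 60.1, 30.4])
delta_exp  = np.array([ 20.1,  39.0,  58.5, 83.2, 118.0, 147.5])

# Fit delta = A * sigma + B.
A, B = np.polyfit(sigma_calc, delta_exp, 1)
rmsd = np.sqrt(np.mean((A * sigma_calc + B - delta_exp) ** 2))
print(f"delta = {A:.3f} * sigma + {B:.1f}   (RMSD = {rmsd:.2f} ppm)")
```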

  9. Fragment-based (13)C nuclear magnetic resonance chemical shift predictions in molecular crystals: An alternative to planewave methods.

    PubMed

    Hartman, Joshua D; Monaco, Stephen; Schatschneider, Bohdan; Beran, Gregory J O

    2015-09-14

    We assess the quality of fragment-based ab initio isotropic (13)C chemical shift predictions for a collection of 25 molecular crystals with eight different density functionals. We explore the relative performance of cluster, two-body fragment, combined cluster/fragment, and the planewave gauge-including projector augmented wave (GIPAW) models relative to experiment. When electrostatic embedding is employed to capture many-body polarization effects, the simple and computationally inexpensive two-body fragment model predicts both isotropic (13)C chemical shifts and the chemical shielding tensors as well as both cluster models and the GIPAW approach. Unlike the GIPAW approach, hybrid density functionals can be used readily in a fragment model, and all four hybrid functionals tested here (PBE0, B3LYP, B3PW91, and B97-2) predict chemical shifts in noticeably better agreement with experiment than the four generalized gradient approximation (GGA) functionals considered (PBE, OPBE, BLYP, and BP86). A set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts is provided based on these benchmark calculations. Statistical cross-validation procedures are used to demonstrate the robustness of these fits.

  10. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction.

    PubMed

    Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo

    2017-01-01

    To predict the output power of photovoltaic systems, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
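
    A minimal, runnable sketch of the decompose-predict-recombine idea, assuming a crude moving-average split as a stand-in for EMD and a grid-searched scikit-learn SVR as a stand-in for the ABC-optimized SVM; the synthetic series, lag count, and hyperparameter grid are all invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def decompose_crude(signal, window=8):
    """Crude stand-in for EMD: split the series into a smooth trend (moving average)
    plus the residual detail. A real EMD would return several IMFs and a trend;
    like those, the two components here sum back to the original signal."""
    trend = np.convolve(signal, np.ones(window) / window, mode="same")
    return [signal - trend, trend]

def lagged_matrix(series, n_lags=4):
    """Supervised pairs: predict each value from its n_lags predecessors."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

def forecast_next(power_series, n_lags=4):
    """One-step-ahead forecast: model each component separately, then sum."""
    grid = {"C": [1, 10, 100], "gamma": ["scale", 0.1]}
    forecast = 0.0
    for comp in decompose_crude(power_series):
        X, y = lagged_matrix(comp, n_lags)
        model = GridSearchCV(SVR(kernel="rbf"), grid, cv=3).fit(X, y)
        forecast += model.predict(comp[-n_lags:].reshape(1, -1))[0]
    return forecast

# Synthetic "PV output" with a daily-looking cycle plus noise, 96 15-minute samples.
t = np.arange(96)
power = np.clip(np.sin(2 * np.pi * t / 96) * 50, 0, None) + np.random.default_rng(1).normal(0, 2, 96)
print(f"predicted next-interval output: {forecast_next(power):.1f} (arbitrary units)")
```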

  11. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction

    PubMed Central

    2017-01-01

    To predict the output power of photovoltaic systems, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization. PMID:28912803

  12. Prediction of cavity growth by solution of salt around boreholes. (Report No. IITRI-C--6313-14)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snow, R.H.; Chang, D.S.

    1975-06-30

    A mathematical model is developed to simulate the process of salt dissolution in a salt formation. The calibration of this model using Detroit Mine data is done systematically by the method of nonlinear regression. The brine concentrations calculated from the regression fit the measured data from the Detroit Mine experiment within 10 percent. Because the Detroit data include periods when the inlet flow is shut off, the agreement with the Detroit data indicates that the model adequately represents natural convection effects and can predict cavity growth at very slow feed rates. Predictions have been made of the cavity growth at feed rates of one gal/h and one gal/day over a period of 10,000 y. Results show that the cavity growth is of a wide-flaring type and that significant growth of the cavity occurs only at the top layer. The prediction involves a very great extrapolation in time from the Detroit data, but it will be valid if the mechanism of solution does not change.

  13. External validation of the Garvan nomograms for predicting absolute fracture risk: the Tromsø study.

    PubMed

    Ahmed, Luai A; Nguyen, Nguyen D; Bjørnerem, Åshild; Joakimsen, Ragnar M; Jørgensen, Lone; Størmer, Jan; Bliuc, Dana; Center, Jacqueline R; Eisman, John A; Nguyen, Tuan V; Emaus, Nina

    2014-01-01

    Absolute risk estimation is a preferred approach for assessing fracture risk and making treatment decisions. This study aimed to evaluate and validate the predictive performance of the Garvan Fracture Risk Calculator in a Norwegian cohort. The analysis included 1637 women and 1355 men aged 60+ years from the Tromsø study. All incident fragility fractures between 2001 and 2009 were registered. The predicted probabilities of non-vertebral osteoporotic and hip fractures were determined using models with and without BMD. The discrimination and calibration of the models were assessed. Reclassification analysis was used to compare the models' performance. The incidence of osteoporotic and hip fracture was 31.5 and 8.6 per 1000 population in women, respectively; in men the corresponding incidence was 12.2 and 5.1. The predicted 5-year and 10-year probability of fractures was consistently higher in the fracture group than in the non-fracture group for all models. The 10-year predicted probability of hip fracture in those with fracture was 2.8 (women) to 3.1 (men) times higher than in those without fracture. There was close agreement between predicted and observed risk in both sexes up to the fifth quintile. Among those in the highest quintile of risk, the models over-estimated the risk of fracture. Models with BMD performed better than models with body weight in correctly classifying risk in individuals with and without fracture. The overall net decrease in reclassification of the model with weight compared to the model with BMD was 10.6% (p = 0.008) in women and 17.2% (p = 0.001) in men for osteoporotic fractures, and 13.3% (p = 0.07) in women and 17.5% (p = 0.09) in men for hip fracture. The Garvan Fracture Risk Calculator is valid and clinically useful in identifying individuals at high risk of fracture. The models with BMD performed better than those with body weight in fracture risk prediction.

  14. Analytical approach to calculation of response spectra from seismological models of ground motion

    USGS Publications Warehouse

    Safak, Erdal

    1988-01-01

    An analytical approach to calculate response spectra from seismological models of ground motion is presented. Seismological models have three major advantages over empirical models: (1) they help in an understanding of the physics of earthquake mechanisms, (2) they can be used to predict ground motions for future earthquakes and (3) they can be extrapolated to cases where there are no data available. As shown with this study, these models also present a convenient form for the calculation of response spectra, by using the methods of random vibration theory, for a given magnitude and site conditions. The first part of the paper reviews the past models for ground motion description, and introduces the available seismological models. Then, the random vibration equations for the spectral response are presented. The nonstationarity, spectral bandwidth and the correlation of the peaks are considered in the calculation of the peak response.

  15. A statistical learning approach to the modeling of chromatographic retention of oligonucleotides incorporating sequence and secondary structure data

    PubMed Central

    Sturm, Marc; Quinten, Sascha; Huber, Christian G.; Kohlbacher, Oliver

    2007-01-01

    We propose a new model for predicting the retention time of oligonucleotides. The model is based on ν support vector regression using features derived from base sequence and predicted secondary structure of oligonucleotides. Because of the secondary structure information, the model is applicable even at relatively low temperatures where the secondary structure is not suppressed by thermal denaturing. This makes the prediction of oligonucleotide retention time for arbitrary temperatures possible, provided that the target temperature lies within the temperature range of the training data. We describe different possibilities of feature calculation from base sequence and secondary structure, present the results and compare our model to existing models. PMID:17567619
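
    A minimal sketch of the feature-then-regress idea using scikit-learn's NuSVR with simple base-composition features; the sequences, retention times, and hyperparameters are invented, and the predicted secondary-structure features used in the paper are omitted.

```python
import numpy as np
from sklearn.svm import NuSVR

def composition_features(seq):
    """Simple sequence features: length and mononucleotide fractions (A, C, G, T).
    The published model additionally uses predicted secondary-structure features."""
    seq = seq.upper()
    return [len(seq)] + [seq.count(b) / len(seq) for b in "ACGT"]

# Toy training set: oligonucleotide sequences with invented retention times (min).
train_seqs = ["ATCGATCG", "GGGGCCCCAT", "ATATATATATAT", "GCGCGCGCGCGCGC", "TTTTAAACCGG"]
train_rt   = [4.1,        5.3,          5.9,            7.4,              5.0]

X = np.array([composition_features(s) for s in train_seqs])
model = NuSVR(nu=0.5, C=10.0, kernel="rbf", gamma="scale").fit(X, train_rt)

query = "ATGCATGCATGC"
rt_pred = model.predict(np.array([composition_features(query)]))[0]
print(f"predicted retention time for {query}: {rt_pred:.1f} min")
```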

  16. Development of Web tools to predict axillary lymph node metastasis and pathological response to neoadjuvant chemotherapy in breast cancer patients.

    PubMed

    Sugimoto, Masahiro; Takada, Masahiro; Toi, Masakazu

    2014-12-09

    Nomograms are a standard computational tool to predict the likelihood of an outcome using multiple available patient features. We have developed a more powerful data mining methodology, to predict axillary lymph node (AxLN) metastasis and response to neoadjuvant chemotherapy (NAC) in primary breast cancer patients. We developed websites to use these tools. The tools calculate the probability of AxLN metastasis (AxLN model) and pathological complete response to NAC (NAC model). As a calculation algorithm, we employed a decision tree-based prediction model known as the alternative decision tree (ADTree), which is an analog development of if-then type decision trees. An ensemble technique was used to combine multiple ADTree predictions, resulting in higher generalization abilities and robustness against missing values. The AxLN model was developed with training datasets (n=148) and test datasets (n=143), and validated using an independent cohort (n=174), yielding an area under the receiver operating characteristic curve (AUC) of 0.768. The NAC model was developed and validated with n=150 and n=173 datasets from a randomized controlled trial, yielding an AUC of 0.787. AxLN and NAC models require users to input up to 17 and 16 variables, respectively. These include pathological features, including human epidermal growth factor receptor 2 (HER2) status and imaging findings. Each input variable has an option of "unknown," to facilitate prediction for cases with missing values. The websites developed facilitate the use of these tools, and serve as a database for accumulating new datasets.

  17. Computational biomechanics to simulate the femoropopliteal intersection during knee flexion: a preliminary study.

    PubMed

    Diehm, Nicolas; Sin, Sangmun; Hoppe, Hanno; Baumgartner, Iris; Büchler, Philippe

    2011-06-01

    To assess if finite element (FE) models can be used to predict deformation of the femoropopliteal segment during knee flexion. Magnetic resonance angiography (MRA) images were acquired on the lower limbs of 8 healthy volunteers (5 men; mean age 28 ± 4 years). Images were taken in 2 natural positions, with the lower limb fully extended and with the knee bent at ~ 40°. Patient-specific FE models were developed and used to simulate the experimental situation. The displacements of the artery during knee bending as predicted by the numerical model were compared to the corresponding positions measured on the MRA images. The numerical predictions showed a good overall agreement between the calculated displacements of the motion measures from MRA images. The average position error comparing the calculated vs. actual displacements of the femoropopliteal intersection measured on the MRA was 8 ± 4 mm. Two of the 8 subjects showed large prediction errors (average 13 ± 5 mm); these 2 volunteers were the tallest subjects involved in the study and had a low body mass index (20.5 kg/m²). The present computational model is able to capture the gross mechanical environment of the femoropopliteal intersection during knee bending and provide a better understanding of the complex biomechanical behavior. However, results suggest that patient-specific mechanical properties and detailed muscle modeling are required to provide accurate patient-specific numerical predictions of arterial displacement. Further adaptation of this model is expected to provide an improved ability to predict the multiaxial deformation of this arterial segment during leg movements and to optimize future stent designs.

  18. Global hot-star wind models for stars from Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Krtička, J.; Kubát, J.

    2018-04-01

    We provide mass-loss rate predictions for O stars from the Large and Small Magellanic Clouds. We calculate global (unified, hydrodynamic) model atmospheres of main sequence, giant, and supergiant stars for chemical compositions corresponding to the Magellanic Clouds. The models solve the radiative transfer equation in the comoving frame, the kinetic equilibrium equations (also known as NLTE equations), and the hydrodynamical equations from the (quasi-)hydrostatic atmosphere to the expanding stellar wind. The models allow us to predict wind density, velocity, and temperature (consequently also the terminal wind velocity and the mass-loss rate) just from basic global stellar parameters. As a result of their lower metallicity, the line radiative driving is weaker, leading to lower wind mass-loss rates than for Galactic stars. We provide a formula that fits the mass-loss rate predicted by our models as a function of stellar luminosity and metallicity. On average, the mass-loss rate scales with metallicity as Ṁ ∝ Z^0.59. The predicted mass-loss rates are lower than mass-loss rates derived from Hα diagnostics and can be reconciled with observational results assuming a clumping factor Cc = 9. On the other hand, the predicted mass-loss rates either agree with or are slightly higher than the mass-loss rates derived from ultraviolet wind line profiles. The calculated P V ionization fractions also agree with values derived from observations for LMC stars with Teff ≤ 40 000 K. Taken together, our theoretical predictions provide reasonable models with consistent mass-loss rate determination, which can be used for quantitative study of stars from the Magellanic Clouds.
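
    The quoted metallicity dependence amounts to a one-line power-law scaling; only the exponent below comes from the abstract, while the reference mass-loss rate and metallicity ratios are illustrative placeholders.

```python
def mass_loss_rate(Z_over_Zsun, mdot_ref=1.0e-6):
    """Scale a reference mass-loss rate (Msun/yr, placeholder value) with
    metallicity using the average dependence quoted in the abstract, Mdot ~ Z**0.59."""
    return mdot_ref * Z_over_Zsun ** 0.59

for z, label in [(1.0, "Galactic"), (0.5, "LMC-like"), (0.2, "SMC-like")]:
    print(f"{label:9s} Z/Zsun={z:>3}: Mdot = {mass_loss_rate(z):.2e} Msun/yr")
```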

  19. Allometric Convergence in Savanna Trees and Implications for the Use of Plant Scaling Models in Variable Ecosystems

    PubMed Central

    Tredennick, Andrew T.; Bentley, Lisa Patrick; Hanan, Niall P.

    2013-01-01

    Theoretical models of allometric scaling provide frameworks for understanding and predicting how and why the morphology and function of organisms vary with scale. It remains unclear, however, if the predictions of ‘universal’ scaling models for vascular plants hold across diverse species in variable environments. Phenomena such as competition and disturbance may drive allometric scaling relationships away from theoretical predictions based on an optimized tree. Here, we use a hierarchical Bayesian approach to calculate tree-specific, species-specific, and ‘global’ (i.e. interspecific) scaling exponents for several allometric relationships using tree- and branch-level data harvested from three savanna sites across a rainfall gradient in Mali, West Africa. We use these exponents to provide a rigorous test of three plant scaling models (Metabolic Scaling Theory (MST), Geometric Similarity, and Stress Similarity) in savanna systems. For the allometric relationships we evaluated (diameter vs. length, aboveground mass, stem mass, and leaf mass) the empirically calculated exponents broadly overlapped among species from diverse environments, except for the scaling exponents for length, which increased with tree cover and density. When we compare empirical scaling exponents to the theoretical predictions from the three models we find MST predictions are most consistent with our observed allometries. In those situations where observations are inconsistent with MST we find that departure from theory corresponds with expected tradeoffs related to disturbance and competitive interactions. We hypothesize savanna trees have greater length-scaling exponents than predicted by MST due to an evolutionary tradeoff between fire escape and optimization of mechanical stability and internal resource transport. Future research on the drivers of systematic allometric variation could reconcile the differences between observed scaling relationships in variable ecosystems and those predicted by ideal models such as MST. PMID:23484003

  20. Building the ensemble flood prediction system by using numerical weather prediction data: Case study in Kinu river basin, Japan

    NASA Astrophysics Data System (ADS)

    Ishitsuka, Y.; Yoshimura, K.

    2016-12-01

    Floods are potentially a major source of the economic and human damage caused by natural disasters. Flood prediction systems have been developed all over the world, and ensemble simulation is commonly adopted to treat the uncertainty of the predictions. In this study, an ensemble flood prediction system using a global-scale land surface and hydrodynamic model was developed. The system takes surface atmospheric forcing as input, and the land surface model MATSIRO calculates runoff. The generated runoff is then input to the hydrodynamic model CaMa-Flood to calculate discharge and flood inundation. CaMa-Flood can simulate the flooded area and its fraction by introducing a floodplain connected to the river channel. The forecast lead time was set to 39 hours according to the forcing data. As a case study, the flood that occurred in the Kinu river basin, Japan, in 2015 was hindcasted. In the 1761 km² Kinu river basin, the 3-day accumulated average rainfall was 384 mm and over 4000 people were left in the inundated area. The ensemble numerical weather prediction data available at that time were input to the system at a resolution of 0.05 degrees and a 1-hour time step. As a result, the system predicted the flood occurrence with probabilities of 45% and 84% at 23 and 11 hours, respectively, before the water level exceeded the evacuation threshold. These prediction lead times may provide the chance for early flood preparations such as levee reinforcement or evacuation. In addition to discharge, the predictability of the flooded area was also analyzed. Although the models were applied to the Japan region, the system can easily be applied to other regions or even at the global scale. Areal flood prediction at meso to global scales would be useful for detecting hot zones or vulnerable areas in each region.

  1. Modelling of plasma-based dry reforming: how do uncertainties in the input data affect the calculation results?

    NASA Astrophysics Data System (ADS)

    Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie

    2018-05-01

    One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
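
    The sampling step of such an uncertainty analysis can be sketched as follows: each rate coefficient is perturbed by a log-normal factor around its nominal value and the spread of a model output is collected. The rates, uncertainty factors, and the toy output function below are placeholders, not the plasma chemistry model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Nominal rate coefficients (arbitrary units) and their uncertainty factors:
# an uncertainty factor f means roughly 68% of samples fall within [k/f, k*f].
k_nominal = np.array([1.0e-9, 3.0e-10, 5.0e-11])
factor    = np.array([2.0,    1.5,     3.0])

def toy_model_output(k):
    """Placeholder for the plasma model: some nonlinear function of the rates."""
    return k[0] * k[1] / (k[1] + 10.0 * k[2])

samples = np.empty(2000)
for i in range(samples.size):
    k = k_nominal * np.exp(rng.normal(0.0, np.log(factor)))  # log-normal perturbation
    samples[i] = toy_model_output(k)

lo, hi = np.percentile(samples, [15.9, 84.1])   # roughly 1-sigma interval
print(f"output: median {np.median(samples):.2e}, 68% interval [{lo:.2e}, {hi:.2e}]")
```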

  2. BiPPred: Combined sequence- and structure-based prediction of peptide binding to the Hsp70 chaperone BiP.

    PubMed

    Schneider, Markus; Rosam, Mathias; Glaser, Manuel; Patronov, Atanas; Shah, Harpreet; Back, Katrin Christiane; Daake, Marina Angelika; Buchner, Johannes; Antes, Iris

    2016-10-01

    Substrate binding to Hsp70 chaperones is involved in many biological processes, and the identification of potential substrates is important for a comprehensive understanding of these events. We present a multi-scale pipeline for an accurate, yet efficient prediction of peptides binding to the Hsp70 chaperone BiP by combining sequence-based prediction with molecular docking and MMPBSA calculations. First, we measured the binding of 15mer peptides from known substrate proteins of BiP by peptide array (PA) experiments and performed an accuracy assessment of the PA data by fluorescence anisotropy studies. Several sequence-based prediction models were fitted using this and other peptide binding data. A structure-based position-specific scoring matrix (SB-PSSM) derived solely from structural modeling data forms the core of all models. The matrix elements are based on a combination of binding energy estimations, molecular dynamics simulations, and analysis of the BiP binding site, which led to new insights into the peptide binding specificities of the chaperone. Using this SB-PSSM, peptide binders could be predicted with high selectivity even without training of the model on experimental data. Additional training further increased the prediction accuracies. Subsequent molecular docking (DynaDock) and MMGBSA/MMPBSA-based binding affinity estimations for predicted binders allowed the identification of the correct binding mode of the peptides as well as the calculation of nearly quantitative binding affinities. The general concept behind the developed multi-scale pipeline can readily be applied to other protein-peptide complexes with linearly bound peptides, for which sufficient experimental binding data for the training of classical sequence-based prediction models is not available. Proteins 2016; 84:1390-1407. © 2016 Wiley Periodicals, Inc.
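
    A sketch of how a position-specific scoring matrix is used to score candidate binding windows along a sequence; the 7-residue window length and the matrix values are invented stand-ins, not the SB-PSSM of the paper.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
WINDOW = 7   # assumed core-binding window length for this sketch

# Invented position-specific scoring matrix: pssm[position][amino_acid].
random.seed(0)
pssm = [{aa: random.uniform(-1.0, 1.0) for aa in AMINO_ACIDS} for _ in range(WINDOW)]

def score_window(window):
    """Sum of per-position scores for one candidate binding window."""
    return sum(pssm[i][aa] for i, aa in enumerate(window))

def best_binding_window(sequence):
    """Slide the window along the sequence and return the top-scoring site."""
    windows = [sequence[i:i + WINDOW] for i in range(len(sequence) - WINDOW + 1)]
    return max(((w, score_window(w)) for w in windows), key=lambda x: x[1])

peptide = "MKTLLLTLVVVTIVCLDLGHT"   # arbitrary example sequence
site, score = best_binding_window(peptide)
print(f"best candidate window: {site}  (score {score:+.2f})")
```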

  3. Optimizing finite element predictions of local subchondral bone structural stiffness using neural network-derived density-modulus relationships for proximal tibial subchondral cortical and trabecular bone.

    PubMed

    Nazemi, S Majid; Amini, Morteza; Kontulainen, Saija A; Milner, Jaques S; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D

    2017-01-01

    Quantitative computed tomography based subject-specific finite element modeling has potential to clarify the role of subchondral bone alterations in knee osteoarthritis initiation, progression, and pain. However, it is unclear what density-modulus equation(s) should be applied with subchondral cortical and subchondral trabecular bone when constructing finite element models of the tibia. Using a novel approach applying neural networks, optimization, and back-calculation against in situ experimental testing results, the objective of this study was to identify subchondral-specific equations that optimized finite element predictions of local structural stiffness at the proximal tibial subchondral surface. Thirteen proximal tibial compartments were imaged via quantitative computed tomography. Imaged bone mineral density was converted to elastic moduli using multiple density-modulus equations (93 total variations) then mapped to corresponding finite element models. For each variation, root mean squared error was calculated between finite element prediction and in situ measured stiffness at 47 indentation sites. Resulting errors were used to train an artificial neural network, which provided an unlimited number of model variations, with corresponding error, for predicting stiffness at the subchondral bone surface. Nelder-Mead optimization was used to identify optimum density-modulus equations for predicting stiffness. Finite element modeling predicted 81% of experimental stiffness variance (with 10.5% error) using optimized equations for subchondral cortical and trabecular bone differentiated at a density of 0.5 g/cm³. In comparison with published density-modulus relationships, optimized equations offered improved predictions of local subchondral structural stiffness. Further research is needed with anisotropy inclusion, a smaller voxel size and de-blurring algorithms to improve predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.
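
    The general form of such relationships is a power law E = a*rho**b applied piecewise to trabecular and cortical bone; the sketch below illustrates the form only, with placeholder coefficients, while the 0.5 g/cm³ split is the value quoted in the abstract.

```python
def elastic_modulus(density_g_cm3):
    """Map QCT-derived bone density (g/cm^3) to elastic modulus (MPa) with a
    piecewise power law, E = a * rho**b, split at 0.5 g/cm^3 into trabecular
    and cortical relationships. Coefficients are illustrative placeholders."""
    if density_g_cm3 < 0.5:                  # subchondral trabecular bone
        a, b = 2000.0, 1.8
    else:                                    # subchondral cortical bone
        a, b = 6000.0, 2.2
    return a * density_g_cm3 ** b

for rho in (0.2, 0.4, 0.6, 1.0, 1.6):
    print(f"rho = {rho:.1f} g/cm^3 -> E = {elastic_modulus(rho):8.0f} MPa")
```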

  4. Calculating Path-Dependent Travel Time Prediction Variance and Covariance for the SALSA3D Global Tomographic P-Velocity Model with a Distributed Parallel Multi-Core Computer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.

    2011-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix G (which includes Tikhonov regularization terms) is multiplied by its transpose to form G^TG and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^TG)^-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of G^TG, which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (G^TG)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by integrating the model covariance along both ray paths; setting the two paths equal gives the variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
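
    The final step above, obtaining travel-time variance by integrating the model covariance along ray paths, reduces in discrete form to projecting the covariance matrix onto per-node path-length (sensitivity) vectors. The sketch below assumes a small placeholder covariance matrix and random ray weights purely for illustration.

```python
# Minimal sketch of the last step described above: once the model covariance
# matrix C is available, the covariance between the predicted travel times of
# two rays is obtained by projecting C onto the rays' path-length
# (sensitivity) vectors; setting the two rays equal gives the variance.
# Sizes and values are illustrative only.
import numpy as np

n_nodes = 50
rng = np.random.default_rng(1)
A = rng.normal(size=(n_nodes, n_nodes))
C = A @ A.T / n_nodes                    # placeholder SPD model covariance

w1 = rng.random(n_nodes)                 # ray 1: path-length weights per node
w2 = rng.random(n_nodes)                 # ray 2: path-length weights per node

tt_cov = w1 @ C @ w2                     # travel-time covariance of the two rays
tt_var = w1 @ C @ w1                     # variance: both paths set equal
print(tt_cov, tt_var, np.sqrt(tt_var))
```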

  5. Time series analysis as input for clinical predictive modeling: Modeling cardiac arrest in a pediatric ICU

    PubMed Central

    2011-01-01

    Background Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778
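
    Step 6 of the process above (time series features as latent variables) can be sketched as summarising each observation window by simple trend and variability features that a standard classifier can consume. The window length, the chosen features and the toy heart-rate values below are illustrative assumptions, not the authors' final feature set.

```python
# Minimal sketch of turning a vital-sign window into time series features
# (latent variables) that can feed a clinical prediction model. The feature
# set and values are illustrative placeholders.
import numpy as np

def window_features(values, times):
    """Return slope, mean and standard deviation of one observation window."""
    slope = np.polyfit(times, values, deg=1)[0]   # linear trend over the window
    return {"slope": slope, "mean": np.mean(values), "sd": np.std(values)}

heart_rate = np.array([112, 118, 125, 131, 140, 146.0])   # toy 6-sample window
minutes = np.arange(len(heart_rate), dtype=float)
print(window_features(heart_rate, minutes))
```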

  6. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    PubMed

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.

  7. Statistical time-dependent model for the interstellar gas

    NASA Technical Reports Server (NTRS)

    Gerola, H.; Kafatos, M.; Mccray, R.

    1974-01-01

    We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.

  8. An evaluation of the Johnson-Cook model to simulate puncture of 7075 aluminum plates.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corona, Edmundo; Orient, George Edgar

    The objective of this project was to evaluate the use of the Johnson-Cook strength and failure models in an adiabatic finite element model to simulate the puncture of 7075-T651 aluminum plates that were studied as part of an ASC L2 milestone by Corona et al (2012). The Johnson-Cook model parameters were determined from material test data. The results show a marked improvement, in particular in the calculated threshold velocity between no puncture and puncture, over those obtained in 2012. The threshold velocity calculated using a baseline model is just 4% higher than the mean value determined from experiment, in contrast to 60% in the 2012 predictions. Sensitivity studies showed that the threshold velocity predictions were improved by calibrating the relations between the equivalent plastic strain at failure and stress triaxiality, strain rate and temperature, as well as by the inclusion of adiabatic heating.

  9. Galileo probe forebody thermal protection - Benchmark heating environment calculations

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A.; Nicolet, W. E.

    1981-01-01

    Solutions are presented for the aerothermal heating environment for the forebody heatshield of the candidate Galileo probe. Entry into both the nominal and cool-heavy model atmospheres was considered. Solutions were obtained for the candidate heavy probe with a weight of 310 kg and a lighter probe with a weight of 290 kg. In the flowfield analysis, a finite difference procedure was employed to obtain benchmark predictions of pressure, radiative and convective heating rates, and the steady-state wall blowing rates. Calculated heating rates for entry into the cool-heavy model atmosphere were about 60 percent higher than those predicted for the entry into the nominal atmosphere. The total mass lost for entry into the cool-heavy model atmosphere was about 146 kg and the mass lost for entry into the nominal model atmosphere was about 101 kg.

  10. Problems With Risk Reclassification Methods for Evaluating Prediction Models

    PubMed Central

    Pepe, Margaret S.

    2011-01-01

    For comparing the performance of a baseline risk prediction model with one that includes an additional predictor, a risk reclassification analysis strategy has been proposed. The first step is to cross-classify risks calculated according to the 2 models for all study subjects. Summary measures including the percentage of reclassification and the percentage of correct reclassification are calculated, along with 2 reclassification calibration statistics. The author shows that interpretations of the proposed summary measures and P values are problematic. The author's recommendation is to display the reclassification table, because it shows interesting information, but to use alternative methods for summarizing and comparing model performance. The Net Reclassification Index has been suggested as one alternative method. The author argues for reporting components of the Net Reclassification Index because they are more clinically relevant than is the single numerical summary measure. PMID:21555714
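
    The recommendation above, reporting the components of the Net Reclassification Index separately, can be sketched as computing the event and non-event components from paired risk predictions. The risk values and event labels below are invented for illustration.

```python
# Minimal sketch of the point made above: report the event and non-event
# components of the Net Reclassification Index separately rather than only
# their sum. Risk values below are invented for illustration.
import numpy as np

def nri_components(risk_old, risk_new, event):
    """Return (event NRI, non-event NRI) for paired risk predictions."""
    up = risk_new > risk_old
    down = risk_new < risk_old
    ev, ne = event == 1, event == 0
    nri_event = up[ev].mean() - down[ev].mean()
    nri_nonevent = down[ne].mean() - up[ne].mean()
    return nri_event, nri_nonevent

risk_old = np.array([0.10, 0.20, 0.40, 0.15, 0.30, 0.05])
risk_new = np.array([0.20, 0.15, 0.55, 0.18, 0.45, 0.02])
event    = np.array([1,    0,    1,    0,    1,    0])
print(nri_components(risk_old, risk_new, event))
```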

  11. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

    Determining thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always practical, as the calculations are time-consuming and the specialized software and equipment can be costly. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387

  12. Temperature-dependent infrared optical properties of 3C-, 4H- and 6H-SiC

    NASA Astrophysics Data System (ADS)

    Tong, Zhen; Liu, Linhua; Li, Liangsheng; Bao, Hua

    2018-05-01

    The temperature-dependent optical properties of cubic (3C) and hexagonal (4H and 6H) silicon carbide are investigated in the infrared range of 2-16 μm by both experimental measurements and numerical simulations. The experimental measurements extend up to 593 K, while the numerical method can predict the optical properties at higher temperatures. To investigate the temperature effect, the temperature-dependent damping parameter in the Lorentz model is calculated with an anharmonic lattice dynamics method, in which the harmonic and anharmonic interatomic force constants are determined from first-principles calculations. The infrared phonon modes of silicon carbide are likewise obtained from first principles, so the Lorentz model is parameterized without any experimental fitting data while accounting for the temperature effect. We find that increasing temperature induces a small reduction of the reflectivity in the range of 10-13 μm. More importantly, the results show that first-principles calculations can effectively predict infrared optical properties at high temperatures, which are not easily obtained through experimental measurements.
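
    The Lorentz model referred to above can be sketched as a single-oscillator dielectric function with a temperature-dependent phonon damping, from which the normal-incidence reflectivity follows. The parameter values below are rough, literature-style numbers for SiC used purely for illustration; the paper derives the damping from first-principles anharmonic lattice dynamics rather than assuming it.

```python
# Minimal sketch of a single-oscillator Lorentz model for a polar crystal:
# dielectric function with phonon damping gamma, and the resulting
# normal-incidence reflectivity. Parameter values are approximate,
# illustration-only numbers for SiC.
import numpy as np

eps_inf, w_TO, w_LO = 6.6, 797.0, 972.0      # cm^-1, approximate SiC values

def reflectivity(w, gamma):
    eps = eps_inf * (1 + (w_LO**2 - w_TO**2) / (w_TO**2 - w**2 - 1j * gamma * w))
    n = np.sqrt(eps)                          # complex refractive index
    return np.abs((n - 1) / (n + 1)) ** 2

w = np.linspace(600, 1200, 5)                # wavenumbers in cm^-1
print(reflectivity(w, gamma=5.0))            # small damping (low temperature)
print(reflectivity(w, gamma=15.0))           # larger damping (higher temperature)
```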

  13. An implementation of an aeroacoustic prediction model for broadband noise from a vertical axis wind turbine using a CFD informed methodology

    NASA Astrophysics Data System (ADS)

    Botha, J. D. M.; Shahroki, A.; Rice, H.

    2017-12-01

    This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions are dependent on aerodynamic input data, and time-dependent Computational Fluid Dynamics (CFD) calculations are carried out to obtain the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD informed noise prediction results to quantify errors in the former approach. Comparisons to experimental noise measurements for an existing turbine are encouraging. A parameter study is performed and shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. Noise sources are characterised and the location and mechanism of the primary sources are determined; inflow-turbulence noise is seen to be the dominant source. The use of CFD calculations is seen to improve the accuracy of noise predictions when compared to the analytic flow solution as well as showing that, for inflow-turbulence noise sources, blade generated turbulence dominates the atmospheric inflow turbulence.

  14. Wind turbine rotor simulation using the actuator disk and actuator line methods

    NASA Astrophysics Data System (ADS)

    Tzimas, M.; Prospathopoulos, J.

    2016-09-01

    The present paper focuses on wind turbine rotor modeling for loads and wake flow prediction. Two steady-state models based on the actuator disk approach are considered, using either a uniform thrust or a blade element momentum calculation of the wind turbine loads. A third model is based on the unsteady-state actuator line approach. Predictions are compared with measurements in wind tunnel experiments and in atmospheric environment and the capabilities and weaknesses of the different models are addressed.

  15. Investigation of Iso-octane Ignition and Validation of a Multizone Modeling Method in an Ignition Quality Tester

    DOE PAGES

    Osecky, Eric M.; Bogin, Gregory E.; Villano, Stephanie M.; ...

    2016-08-18

    An ignition quality tester was used to characterize the autoignition delay times of iso-octane. The experimental data were characterized between temperatures of 653 and 996 K, pressures of 1.0 and 1.5 MPa, and global equivalence ratios of 0.7 and 1.05. A clear negative temperature coefficient behavior was seen at both pressures in the experimental data. These data were used to characterize the effectiveness of three modeling methods: a single-zone homogeneous batch reactor, a multizone engine model, and a three-dimensional computational fluid dynamics (CFD) model. A detailed 874 species iso-octane ignition mechanism (Mehl, M.; Curran, H. J.; Pitz, W. J.; Westbrook, C. K. Chemical kinetic modeling of component mixtures relevant to gasoline. Proceedings of the European Combustion Meeting; Vienna, Austria, April 14-17, 2009) was reduced to 89 species for use in these models, and the predictions of the reduced mechanism were consistent with ignition delay times predicted by the detailed chemical mechanism across a broad range of temperatures, pressures, and equivalence ratios. The CFD model was also run without chemistry to characterize the extent of mixing of fuel and air in the chamber. The calculations predicted that the main part of the combustion chamber was fairly well-mixed at longer times (> ~30 ms), suggesting that the simpler models might be applicable in this quasi-homogeneous region. The multizone predictions, where the combustion chamber was divided into 20 zones of temperature and equivalence ratio, were quite close to the coupled CFD-kinetics results, but the calculation was ~11 times faster than the coupled CFD-kinetics model. Although the coupled CFD-kinetics model captured the observed negative temperature coefficient behavior and pressure dependence, discrepancies remain between the predictions and the observed ignition time delays, suggesting improvements are still needed in the kinetic mechanism and/or the CFD model. These results suggest a combined modeling approach, wherein the CFD calculations (without chemistry) can be used to examine the sensitivity of various model inputs to in-cylinder temperature and equivalence ratios. These values can then be used as inputs to the multizone model to examine the impact on ignition delay. Additionally, the speed of the multizone model makes it feasible to quickly test more detailed kinetic mechanisms for comparison to experimental data and sensitivity analysis.

  16. Investigation of Iso-octane Ignition and Validation of a Multizone Modeling Method in an Ignition Quality Tester

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osecky, Eric M.; Bogin, Gregory E.; Villano, Stephanie M.

    An ignition quality tester was used to characterize the autoignition delay times of iso-octane. The experimental data were characterized between temperatures of 653 and 996 K, pressures of 1.0 and 1.5 MPa, and global equivalence ratios of 0.7 and 1.05. A clear negative temperature coefficient behavior was seen at both pressures in the experimental data. These data were used to characterize the effectiveness of three modeling methods: a single-zone homogeneous batch reactor, a multizone engine model, and a three-dimensional computational fluid dynamics (CFD) model. A detailed 874 species iso-octane ignition mechanism (Mehl, M.; Curran, H. J.; Pitz, W. J.; Westbrook, C. K. Chemical kinetic modeling of component mixtures relevant to gasoline. Proceedings of the European Combustion Meeting; Vienna, Austria, April 14-17, 2009) was reduced to 89 species for use in these models, and the predictions of the reduced mechanism were consistent with ignition delay times predicted by the detailed chemical mechanism across a broad range of temperatures, pressures, and equivalence ratios. The CFD model was also run without chemistry to characterize the extent of mixing of fuel and air in the chamber. The calculations predicted that the main part of the combustion chamber was fairly well-mixed at longer times (> ~30 ms), suggesting that the simpler models might be applicable in this quasi-homogeneous region. The multizone predictions, where the combustion chamber was divided into 20 zones of temperature and equivalence ratio, were quite close to the coupled CFD-kinetics results, but the calculation was ~11 times faster than the coupled CFD-kinetics model. Although the coupled CFD-kinetics model captured the observed negative temperature coefficient behavior and pressure dependence, discrepancies remain between the predictions and the observed ignition time delays, suggesting improvements are still needed in the kinetic mechanism and/or the CFD model. These results suggest a combined modeling approach, wherein the CFD calculations (without chemistry) can be used to examine the sensitivity of various model inputs to in-cylinder temperature and equivalence ratios. These values can then be used as inputs to the multizone model to examine the impact on ignition delay. Additionally, the speed of the multizone model makes it feasible to quickly test more detailed kinetic mechanisms for comparison to experimental data and sensitivity analysis.

  17. Statistical mechanics of light elements at high pressure. V Three-dimensional Thomas-Fermi-Dirac theory. [relevant to Jovian planetary interiors

    NASA Technical Reports Server (NTRS)

    Macfarlane, J. J.; Hubbard, W. B.

    1983-01-01

    A numerical technique for solving the Thomas-Fermi-Dirac (TFD) equation in three dimensions, for an array of ions obeying periodic boundary conditions, is presented. The technique is then used to calculate deviations from ideal mixing for an alloy of hydrogen and helium at zero temperature and high pressures. Results are compared with alternative models which apply perturbation theory to the calculation of the electron distribution, based upon the assumption of weak response of the electron gas to the ions. The TFD theory, which permits strong electron response, always predicts smaller deviations from ideal mixing than would be predicted by perturbation theory. The results indicate that predicted phase separation curves for hydrogen-helium alloys under conditions prevailing in the metallic zones of Jupiter and Saturn are very model dependent.

  18. First Principle and Experimental Study for Site Preferences of Formability Improved Alloying Elements in Mg Crystal

    NASA Astrophysics Data System (ADS)

    Zeng, Ying; Jiang, Bin; Shi, Ouling; Quan, Gaofen; Al-Ezzi, Salih; Pan, FuSheng

    2018-07-01

    Some alloying elements (Al, Er, Gd, Li, Mn, Sn, Y, Zn) have recently been shown by calculations or experiments to improve the formability of Mg alloys, but their site preferences in Mg crystals were ignored in those calculations. A crystallographic model was built via first-principles calculations to predict the site preferences of these elements, and regularities between the doping elements and their site preferences were summarized. On the basis of the crystallographic model, a series of formulas was derived by combining it with the diffraction law, predicting that a crystal plane of an Mg-based solid solution showing abnormal XRD peak intensity, compared to that of pure Mg, preferentially hosts solute atoms. Three single-phase solid solution alloys were then prepared through an original In-situ Solution Treatment and their XRD patterns were compared. The experiments confirmed the site preferences of these solute atoms in the Mg crystal, verifying the calculation results.

  19. First Principle and Experimental Study for Site Preferences of Formability Improved Alloying Elements in Mg Crystal

    NASA Astrophysics Data System (ADS)

    Zeng, Ying; Jiang, Bin; Shi, Ouling; Quan, Gaofen; Al-Ezzi, Salih; Pan, FuSheng

    2018-03-01

    Some alloying elements (Al, Er, Gd, Li, Mn, Sn, Y, Zn) have recently been shown by calculations or experiments to improve the formability of Mg alloys, but their site preferences in Mg crystals were ignored in those calculations. A crystallographic model was built via first-principles calculations to predict the site preferences of these elements, and regularities between the doping elements and their site preferences were summarized. On the basis of the crystallographic model, a series of formulas was derived by combining it with the diffraction law, predicting that a crystal plane of an Mg-based solid solution showing abnormal XRD peak intensity, compared to that of pure Mg, preferentially hosts solute atoms. Three single-phase solid solution alloys were then prepared through an original In-situ Solution Treatment and their XRD patterns were compared. The experiments confirmed the site preferences of these solute atoms in the Mg crystal, verifying the calculation results.

  20. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models that are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer frame, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models in each level. To account for uncertainty, we employ chance constrained (CC) programming for stochastic remediation design. Chance constrained programming has traditionally been implemented to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, the HBMA-CC approach was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances for two reasons. First, considering only the single best model ignores variances that stem from uncertainty in the model structure. Second, considering the best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desirable reliability. However, considering the single best model, the calculated reliability will differ from the desirable reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed the chance constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels in the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. Also, the prediction variance changed with the extraction rate: using a very high extraction rate will cause prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA model is used.

  1. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation from -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (kc) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate rather than adding multiple phases or input parameters.
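
    A minimal sketch of the single-phase first-order decay idea discussed above, in the spirit of LandGEM: annual methane generation is the sum of the exponentially decaying contributions from each year's accepted waste. The k, L0 and tonnage values below are placeholders, not fitted parameters from the study.

```python
# Minimal sketch of a single-phase first-order decay (FOD) methane model:
# each year's accepted waste contributes k * L0 * M * exp(-k * age) to the
# methane generated in a later year. Parameter values are assumed here.
import numpy as np

k = 0.05          # methane generation rate constant, 1/yr (assumed)
L0 = 100.0        # methane generation potential, m^3 CH4 / Mg waste (assumed)
waste = np.array([50_000.0] * 10)   # Mg of waste accepted in years 0..9

def methane_generation(year):
    """Methane generated in a given year (m^3/yr) from all earlier waste."""
    ages = year - np.arange(len(waste))
    active = ages > 0                         # only waste already in place decays
    return np.sum(k * L0 * waste[active] * np.exp(-k * ages[active]))

for yr in (5, 10, 20, 40):
    print(yr, round(methane_generation(yr)))
```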

  2. Application of Taguchi L32 orthogonal array design to optimize copper biosorption by using Spaghnum moss.

    PubMed

    Ozdemir, Utkan; Ozbay, Bilge; Ozbay, Ismail; Veli, Sevil

    2014-09-01

    In this work, Taguchi L32 experimental design was applied to optimize biosorption of Cu(2+) ions by an easily available biosorbent, Spaghnum moss. With this aim, batch biosorption tests were performed to achieve the targeted experimental design with five factors (concentration, pH, biosorbent dosage, temperature and agitation time) at two different levels. Optimal experimental conditions were determined by calculated signal-to-noise ratios. A "higher is better" approach was followed to calculate signal-to-noise ratios, as the aim was to obtain high metal removal efficiencies. The impact ratios of the factors were determined by the model. Within the study, Cu(2+) biosorption efficiencies were also predicted by using the Taguchi method. Results of the model showed that experimental and predicted values were close to each other, demonstrating the success of the Taguchi approach. Furthermore, thermodynamic, isotherm and kinetic studies were performed to explain the biosorption mechanism. Calculated thermodynamic parameters were in good accordance with the results of the Taguchi model. Copyright © 2014 Elsevier Inc. All rights reserved.
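
    The "higher is better" signal-to-noise ratio used above is a standard Taguchi quantity, S/N = -10·log10(mean(1/y²)); a minimal sketch follows, with invented replicate removal efficiencies for one experimental run.

```python
# Minimal sketch of the Taguchi "higher is better" signal-to-noise ratio for
# one experimental run. The replicate removal efficiencies are invented.
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) for a larger-is-better response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

removal_efficiency = [78.2, 81.5, 79.9]      # % Cu(2+) removal, replicates
print(sn_larger_is_better(removal_efficiency))
```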

  3. Theoretical study of hull-rotor aerodynamic interference on semibuoyant vehicles

    NASA Technical Reports Server (NTRS)

    Spangler, S. B.; Smith, C. A.

    1978-01-01

    Analytical methods are developed to predict the pressure distribution and overall loads on the hulls of airships which have close coupled, relatively large and/or high disk loading propulsors for attitude control, station keeping, and partial support of total weight as well as provision of thrust in cruise. The methods comprise a surface-singularity, potential-flow model for the hull and lifting surfaces (such as tails) and a rotor model which calculates the velocity induced by the rotor and its wake at points adjacent to the wake. Use of these two models provides an inviscid pressure distribution on the hull with rotor interference. A boundary layer separation prediction method is used to locate separation on the hull, and a wake pressure is imposed on the separated region for purposes of calculating hull loads. Results of calculations are shown to illustrate various cases of rotor-hull interference and comparisons with small scale data are made to evaluate the method.

  4. Continuum Lowering and Fermi-Surface Rising in Strongly Coupled and Degenerate Plasmas

    NASA Astrophysics Data System (ADS)

    Hu, S. X.

    2017-08-01

    Continuum lowering is a well-known and important physics concept that describes the ionization potential depression (IPD) in plasmas caused by thermal- or pressure-induced ionization of outer-shell electrons. The existing IPD models are often used to characterize plasma conditions and to gauge opacity calculations. Recent precision measurements have revealed deficits in our understanding of continuum lowering in dense hot plasmas. However, these investigations have so far been limited to IPD in strongly coupled but nondegenerate plasmas. Here, we report a first-principles study of the K-edge shifting in both strongly coupled and fully degenerate carbon plasmas, with quantum molecular dynamics calculations based on all-electron density-functional theory. The resulting K-edge shifting versus plasma density, as a probe of the continuum lowering and the Fermi-surface rising, is found to be significantly different from predictions of existing IPD models. In contrast, a simple "single-atom-in-box" model, developed in this work, accurately predicts the K-edge locations given by the ab initio calculations.

  5. Application of Artificial Neural Network and Support Vector Machines in Predicting Metabolizable Energy in Compound Feeds for Pigs.

    PubMed

    Ahmadi, Hamed; Rodehutscord, Markus

    2017-01-01

    In the nutrition literature, there are several reports on the use of artificial neural network (ANN) and multiple linear regression (MLR) approaches for predicting feed composition and nutritive value, while the use of the support vector machines (SVM) method as a new alternative approach to MLR and ANN models is still not fully investigated. MLR, ANN, and SVM models were developed to predict the metabolizable energy (ME) content of compound feeds for pigs based on the German energy evaluation system from analyzed contents of crude protein (CP), ether extract (EE), crude fiber (CF), and starch. A total of 290 datasets from standardized digestibility studies with compound feeds was provided by several institutions and published papers, and ME was calculated thereon. Accuracy and precision of the developed models were evaluated based on their prediction values. The results revealed that the developed ANN [R² = 0.95; root mean square error (RMSE) = 0.19 MJ/kg of dry matter] and SVM (R² = 0.95; RMSE = 0.21 MJ/kg of dry matter) models produced better predictions of ME in compound feed than conventional MLR (R² = 0.89; RMSE = 0.27 MJ/kg of dry matter); however, there were no obvious differences between the performance of the ANN and SVM models. Thus, the SVM model may also be considered a promising tool for modeling the relationship between chemical composition and ME of compound feeds for pigs. To provide readers and nutritionists with an easy and rapid tool, an Excel® calculator, SVM_ME_pig, was created to predict metabolizable energy values in compound feeds for pigs using the developed support vector machine model.
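
    A minimal sketch of the SVM approach described above: a support vector regression model is fitted to predict ME from analyzed CP, EE, CF and starch contents. The handful of training rows below are invented stand-ins; the study used 290 datasets and a tuned model.

```python
# Minimal sketch of fitting a support vector regression model to predict
# metabolizable energy (ME) from CP, EE, CF and starch contents. The training
# rows and hyperparameters are illustrative placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: CP, EE, CF, starch (g/kg DM); target: ME (MJ/kg DM)
X = np.array([[170, 40, 45, 430],
              [200, 55, 60, 380],
              [150, 30, 80, 460],
              [220, 70, 40, 350],
              [185, 45, 55, 410]], dtype=float)
y = np.array([13.9, 13.5, 13.1, 13.8, 13.7])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
print(model.predict([[190, 50, 50, 400]]))   # ME estimate for a new feed
```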

  6. Spatial Resolution Effect on Forest Road Gradient Calculation and Erosion Modelling

    NASA Astrophysics Data System (ADS)

    Cao, L.; Elliot, W.

    2017-12-01

    Road erosion is one of the main sediment sources in a forest watershed and should be properly evaluated. With the help of GIS technology, road topography can be determined and soil loss can be predicted at a watershed scale. As a vector geographical feature, the road gradient should be calculated following the road direction rather than the hillslope direction. This calculation might be difficult with a coarse (30-m) DEM, which provides only the underlying topography information. This study was designed to explore the effect of road segmentation and DEM resolution on the road gradient calculation and erosion prediction at a watershed scale. The Water Erosion Prediction Project (WEPP) model was run on road segments of 9 lengths ranging from 40m to 200m. Road gradient was calculated from three DEM data sets: 1m LiDAR, and 10m and 30m USGS DEMs. The 1m LiDAR DEM calculated gradients were very close to the field observed road gradients, so we assumed the 1m LiDAR DEM predicted the true road gradient. The results revealed that longer road segments skipped detailed topographical undulations and resulted in lower road gradients. Coarser DEMs computed steeper road gradients as larger grid cells covered more adjacent area outside the road, resulting in larger elevation differences. Field surveyed results also revealed that a coarser DEM might result in more gradient deviation in a curved road segment when it passes through a convex or concave slope. As road segment length increased, the gradient difference between the three DEMs was reduced. There were no significant differences between road gradients of different segment lengths and DEM resolutions when segments were longer than 100m. For long segments, the 10m DEM calculated road gradient was similar to the 1m LiDAR gradient. When evaluating the effects of road segment length, the predicted erosion rate decreased with increasing length when the road gradient was less than 3%. In cases where the road gradient exceeded 3% and rill erosion dominated, predicted erosion rates increased exponentially with segment length. At the watershed scale, most of the predicted soil loss occurred on segments with gradients ranging from 3% to 9%. Based on the road gradient calculated with the 10-m and 30-m DEMs, soil loss was overestimated when compared to the 1m LiDAR DEM. Both the 10m and 30m DEMs resulted in similar total road soil loss.

  7. [Risk factor analysis of the patients with solitary pulmonary nodules and establishment of a prediction model for the probability of malignancy].

    PubMed

    Wang, X; Xu, Y H; Du, Z Y; Qian, Y J; Xu, Z H; Chen, R; Shi, M H

    2018-02-23

    Objective: This study aims to analyze the relationship among the clinical features, radiologic characteristics and pathological diagnosis in patients with solitary pulmonary nodules, and to establish a prediction model for the probability of malignancy. Methods: Clinical data of 372 patients with solitary pulmonary nodules who underwent surgical resection with definite postoperative pathological diagnosis were retrospectively analyzed. In these cases, we collected clinical and radiologic features including gender, age, smoking history, history of tumor, family history of cancer, location of the lesion, ground-glass opacity, maximum diameter, calcification, vessel convergence sign, vacuole sign, pleural indentation, spiculation and lobulation. The cases were divided into a modeling group (268 cases) and a validation group (104 cases). A new prediction model was established by logistic regression analysis of the data from the modeling group. The validation group data were then used to validate the efficiency of the new model, which was compared with three classical models (Mayo model, VA model and LiYun model). With the calculated probability values for each model from the validation group, SPSS 22.0 was used to draw the receiver operating characteristic curve to assess the predictive value of the new model. Results: 112 benign SPNs and 156 malignant SPNs were included in the modeling group. Multivariable logistic regression analysis showed that gender, age, history of tumor, ground-glass opacity, maximum diameter, and spiculation were independent predictors of malignancy in patients with SPN (P<0.05). We calculated a prediction model for the probability of malignancy as follows: p = e^x/(1 + e^x), where x = -4.8029 - 0.743×gender + 0.057×age + 1.306×history of tumor + 1.305×ground-glass opacity + 0.051×maximum diameter + 1.043×spiculation. When the validation group data were applied to the four prediction models, the area under the curve of our model was 0.742, which is greater than that of the other models (Mayo 0.696, VA 0.634, LiYun 0.681), while the differences between any two of the four models were not significant (P>0.05). Conclusions: Age, gender, history of tumor, ground-glass opacity, maximum diameter and spiculation are independent predictors of malignancy in patients with a solitary pulmonary nodule. This logistic regression prediction model is not inferior to the classical models in estimating the probability of malignancy of SPNs.
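
    The quoted logistic model can be applied directly, as in the sketch below. The coefficients are taken from the abstract, while the 0/1 coding of the categorical predictors (including which gender is coded as 1) is an assumption made here for illustration, since the abstract does not spell it out.

```python
# Direct implementation of the logistic model quoted above. Coefficients come
# from the abstract; the 0/1 coding of the categorical predictors (and the
# gender coding) is an assumption for illustration.
import math

def malignancy_probability(gender, age, tumor_history, ggo, max_diameter_mm, spiculation):
    x = (-4.8029
         - 0.743 * gender
         + 0.057 * age
         + 1.306 * tumor_history
         + 1.305 * ggo
         + 0.051 * max_diameter_mm
         + 1.043 * spiculation)
    return math.exp(x) / (1.0 + math.exp(x))

# 62-year-old with an 18 mm, ground-glass, spiculated nodule and no prior tumor
print(malignancy_probability(gender=1, age=62, tumor_history=0,
                             ggo=1, max_diameter_mm=18, spiculation=1))
```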

  8. Theoretical Tools and Software for Modeling, Simulation and Control Design of Rocket Test Facilities

    NASA Technical Reports Server (NTRS)

    Richter, Hanz

    2004-01-01

    A rocket test stand and associated subsystems are complex devices whose operation requires that certain preparatory calculations be carried out before a test. In addition, real-time control calculations must be performed during the test, and further calculations are carried out after a test is completed. The latter may be required in order to evaluate if a particular test conformed to specifications. These calculations are used to set valve positions, pressure setpoints, control gains and other operating parameters so that a desired system behavior is obtained and the test can be successfully carried out. Currently, calculations are made in an ad-hoc fashion and involve trial-and-error procedures that may involve activating the system with the sole purpose of finding the correct parameter settings. The goals of this project are to develop mathematical models, control methodologies and associated simulation environments to provide a systematic and comprehensive prediction and real-time control capability. The models and controller designs are expected to be useful in two respects: 1) As a design tool, a model is the only way to determine the effects of design choices without building a prototype, which is, in the context of rocket test stands, impracticable; 2) As a prediction and tuning tool, a good model allows to set system parameters off-line, so that the expected system response conforms to specifications. This includes the setting of physical parameters, such as valve positions, and the configuration and tuning of any feedback controllers in the loop.

  9. Planning, creating and documenting a NASTRAN finite element model of a modern helicopter

    NASA Technical Reports Server (NTRS)

    Gabal, R.; Reed, D.; Ricks, R.; Kesack, W.

    1985-01-01

    Mathematical models based on the finite element method of structural analysis as embodied in the NASTRAN computer code are widely used by the helicopter industry to calculate static internal loads and vibration of airframe structure. The internal loads are routinely used for sizing structural members. The vibration predictions are not yet relied on during design. NASA's Langley Research Center sponsored a program to conduct an application of the finite element method with emphasis on predicting structural vibration. The Army/Boeing CH-47D helicopter was used as the modeling subject. The objective was to engender the needed trust in vibration predictions using these models and establish a body of modeling guides which would enable confident future prediction of airframe vibration as part of the regular design process.

  10. Collisional-radiative modeling of tungsten at temperatures of 1200–2400 eV

    DOE PAGES

    Colgan, James; Fontes, Christopher; Zhang, Honglin; ...

    2015-04-30

    We discuss new collisional-radiative modeling calculations of tungsten at moderate temperatures of 1200 to 2400 eV. Such plasma conditions are relevant to ongoing experimental work at ASDEX Upgrade and are expected to be relevant for ITER. Our calculations are made using the Los Alamos National Laboratory (LANL) collisional-radiative modeling ATOMIC code. These calculations formed part of a submission to the recent NLTE-8 workshop that was held in November 2013. This series of workshops provides a forum for detailed comparison of plasma and spectral quantities from NLTE collisional-radiative modeling codes. We focus on the LANL ATOMIC calculations for tungsten that were submitted to the NLTE-8 workshop and discuss different models that were constructed to predict the tungsten emission. In particular, we discuss comparisons between semi-relativistic configuration-average and fully relativistic configuration-average calculations. We also present semi-relativistic calculations that include fine-structure detail, and discuss the difficult problem of ensuring completeness with respect to the number of configurations included in a CR calculation.

  11. Molecular simulation of the thermophysical properties and phase behaviour of impure CO2 relevant to CCS.

    PubMed

    Cresswell, Alexander J; Wheatley, Richard J; Wilkinson, Richard D; Graham, Richard S

    2016-10-20

    Impurities from the CCS chain can greatly influence the physical properties of CO2. This has important design, safety and cost implications for the compression, transport and storage of CO2. There is an urgent need to understand and predict the properties of impure CO2 to assist with CCS implementation. However, CCS presents demanding modelling requirements. A suitable model must both accurately and robustly predict CO2 phase behaviour over a wide range of temperatures and pressures, and maintain that predictive power for CO2 mixtures with numerous, mutually interacting chemical species. A promising technique to address this task is molecular simulation. It offers a molecular approach, with foundations in firmly established physical principles, along with the potential to predict the wide range of physical properties required for CCS. The quality of predictions from molecular simulation depends on accurate force-fields to describe the interactions between CO2 and other molecules. Unfortunately, there is currently no universally applicable method to obtain force-fields suitable for molecular simulation. In this paper we present two methods of obtaining force-fields: the first being semi-empirical and the second using ab initio quantum-chemical calculations. In the first approach we optimise the impurity force-field against measurements of the phase and pressure-volume behaviour of CO2 binary mixtures with N2, O2, Ar and H2. A gradient-free optimiser allows us to use the simulation itself as the underlying model. This leads to accurate and robust predictions under conditions relevant to CCS. In the second approach we use quantum-chemical calculations to produce ab initio evaluations of the interactions between CO2 and relevant impurities, taking N2 as an exemplar. We use a modest number of these calculations to train a machine-learning algorithm, known as a Gaussian process, to describe these data. The resulting model is then able to accurately predict a much broader set of ab initio force-field calculations at comparatively low numerical cost. Although our method is not yet ready to be implemented in a molecular simulation, we outline the necessary steps here. Such simulations have the potential to deliver first-principles simulation of the thermodynamic properties of impure CO2, without fitting to experimental data.
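
    The second approach above, training a Gaussian process on a modest number of ab initio interaction energies so that the potential can be interpolated cheaply, can be sketched as follows. The training "energies" come from a toy Lennard-Jones-like curve standing in for the quantum-chemical data, and the kernel choice is an assumption.

```python
# Minimal sketch of interpolating pair interaction energies with a Gaussian
# process. A toy Lennard-Jones-like curve stands in for the ab initio data;
# kernel and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

r_train = np.linspace(3.2, 8.0, 12).reshape(-1, 1)        # separations, Angstrom
e_train = 4 * 0.8 * ((3.6 / r_train) ** 12 - (3.6 / r_train) ** 6).ravel()  # toy energies

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-6)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(r_train, e_train)

r_new = np.array([[3.4], [4.5], [6.0]])
e_pred, e_std = gp.predict(r_new, return_std=True)
print(e_pred, e_std)                      # interpolated energies with uncertainty
```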

  12. Modeling of indoor radon concentration from radon exhalation rates of building materials and validation through measurements.

    PubMed

    Kumar, Amit; Chauhan, R P; Joshi, Manish; Sahoo, B K

    2014-01-01

    Building materials are the second major source of indoor radon after soil. The contribution of building materials towards indoor radon depends upon the radium content and exhalation rates and can be used as a primary index for radon levels in dwellings. Radon flux data from building materials have been used by many researchers to calculate indoor radon concentrations and doses with the one- and two-dimensional models suggested in the literature. In addition to the radium content, the radon flux from a wall surface strongly depends upon the radon diffusion length (L) and the thickness of the wall (2d). In the present work, indoor radon concentrations were calculated from the measured radon exhalation rates of building materials using different models available in the literature, and the models were validated through measurement. The variation in the radon flux predicted by the different models was compared with the d/L value for walls and roofs of different dwellings. The results showed that the radon concentrations predicted by the models agree with the experimental values. The applicability of the different models with respect to the d/L ratio is discussed. The work aims to select the most appropriate and general model among those available in the literature for the prediction of indoor radon. Copyright © 2013 Elsevier Ltd. All rights reserved.
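
    One commonly used steady-state mass-balance form of the kind of indoor model compared above balances radon supplied by wall and roof exhalation against removal by ventilation and radioactive decay. The sketch below uses illustrative room geometry, exhalation and ventilation values, not the paper's data.

```python
# Minimal sketch of a steady-state indoor radon mass balance: exhalation from
# building surfaces is balanced by ventilation and radioactive decay. All
# input values are illustrative assumptions.
LAMBDA_RN = 7.55e-3          # radon decay constant, 1/h

def indoor_radon(exhalation, surface, volume, ventilation):
    """Steady-state indoor radon concentration (Bq/m^3).

    exhalation  -- surface exhalation rate of walls/roof, Bq m^-2 h^-1
    surface     -- emitting surface area, m^2
    volume      -- room volume, m^3
    ventilation -- air exchange rate, 1/h
    """
    return exhalation * surface / (volume * (ventilation + LAMBDA_RN))

print(indoor_radon(exhalation=0.8, surface=90.0, volume=60.0, ventilation=0.5))
```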

  13. Analysis of an unswept propfan blade with a semiempirical dynamic stall model

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Kaza, K. R. V.

    1989-01-01

    The time history response of a propfan wind tunnel model with dynamic stall is studied analytically. The response obtained from the analysis is compared with available experimental data. The governing equations of motion are formulated in terms of blade normal modes which are calculated using the COSMIC-NASTRAN computer code. The response analysis considered the blade plunging and pitching motions. The lift, drag and moment coefficients for angles of attack below the static stall angle are obtained from a quasi-steady theory. For angles above static stall angles, a semiempirical dynamic stall model based on a correction to angle of attack is used to obtain lift, drag and moment coefficients. Using these coefficients, the aerodynamic forces are calculated at a selected number of strips, and integrated to obtain the total generalized forces. The combined momentum-blade element theory is used to calculate the induced velocity. The semiempirical stall model predicted a limit cycle oscillation near the setting angle at which large vibratory stresses were observed in an experiment. The predicted mode and frequency of oscillation also agreed with those measured in the experiment near the setting angle.

  14. Model Verification and Validation Concepts for a Probabilistic Fracture Assessment Model to Predict Cracking of Knife Edge Seals in the Space Shuttle Main Engine High Pressure Oxidizer

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Riha, David S.

    2013-01-01

    Physics-based models are routinely used to predict the performance of engineered systems to make decisions such as when to retire system components, how to extend the life of an aging system, or if a new design will be safe or available. Model verification and validation (V&V) is a process to establish credibility in model predictions. Ideally, carefully controlled validation experiments will be designed and performed to validate models or submodels. In reality, time and cost constraints limit experiments and even model development. This paper describes elements of model V&V during the development and application of a probabilistic fracture assessment model to predict cracking in space shuttle main engine high-pressure oxidizer turbopump knife-edge seals. The objective of this effort was to assess the probability of initiating and growing a crack to a specified failure length in specific flight units for different usage and inspection scenarios. The probabilistic fracture assessment model developed in this investigation combined a series of submodels describing the usage, temperature history, flutter tendencies, tooth stresses and numbers of cycles, fatigue cracking, nondestructive inspection, and finally the probability of failure. The analysis accounted for unit-to-unit variations in temperature, flutter limit state, flutter stress magnitude, and fatigue life properties. The investigation focused on the calculation of relative risk rather than absolute risk between the usage scenarios. Verification predictions were first performed for three units with known usage and cracking histories to establish credibility in the model predictions. Then, numerous predictions were performed for an assortment of operating units that had flown recently or that were projected for future flights. Calculations were performed using two NASA-developed software tools: NESSUS(Registered Trademark) for the probabilistic analysis, and NASGRO(Registered Trademark) for the fracture mechanics analysis. The goal of these predictions was to provide additional information to guide decisions on the potential of reusing existing and installed units prior to the new design certification.

  15. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict the hydrogen distribution in the cladding, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  16. TRAP/SEE Code Users Manual for Predicting Trapped Radiation Environments

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    TRAP/SEE is a PC-based computer code with a user-friendly interface which predicts the ionizing radiation exposure of spacecraft having orbits in the Earth's trapped radiation belts. The code incorporates the standard AP8 and AE8 trapped proton and electron models but also allows application of an improved database interpolation method. The code treats low-Earth as well as highly-elliptical Earth orbits, taking into account trajectory perturbations due to gravitational forces from the Moon and Sun, atmospheric drag, and solar radiation pressure. Orbit-average spectra, peak spectra per orbit, and instantaneous spectra at points along the orbit trajectory are calculated. Described in this report are the features, models, model limitations and uncertainties, input and output descriptions, and example calculations and applications for the TRAP/SEE code.

  17. QSPR models for various physical properties of carbohydrates based on molecular mechanics and quantum chemical calculations.

    PubMed

    Dyekjaer, Jane Dannow; Jónsdóttir, Svava Osk

    2004-01-22

    Quantitative Structure-Property Relationships (QSPR) have been developed for a series of monosaccharides, including the physical properties of partial molar heat capacity, heat of solution, melting point, heat of fusion, glass-transition temperature, and solid state density. The models were based on molecular descriptors obtained from molecular mechanics and quantum chemical calculations, combined with other types of descriptors. Saccharides exhibit a large degree of conformational flexibility; therefore, a methodology for selecting the energetically most favorable conformers was developed and used for the development of the QSPR models. In most cases good correlations were obtained for monosaccharides. For five of the properties, predictions were made for disaccharides, and the predicted values for the partial molar heat capacities were in excellent agreement with experimental values.

  18. Calibration and validation of a voxel phantom for use in the Monte Carlo modeling and optimization of x-ray imaging systems

    NASA Astrophysics Data System (ADS)

    Dance, David R.; McVey, Graham; Sandborg, Michael P.; Persliden, Jan; Carlsson, Gudrun A.

    1999-05-01

    A Monte Carlo program has been developed to model X-ray imaging systems. It incorporates an adult voxel phantom and includes anti-scatter grid, radiographic screen and film. The program can calculate contrast and noise for a series of anatomical details. The use of measured H and D curves allows the absolute calculation of the patient entrance air kerma for a given film optical density (or vice versa). Effective dose can also be estimated. In an initial validation, the program was used to predict the optical density for exposures with plastic slabs of various thicknesses. The agreement between measurement and calculation was on average within 5%. In a second validation, a comparison was made between computer simulations and measurements for chest and lumbar spine patient radiographs. The predictions of entrance air kerma mostly fell within the range of measured values (e.g. chest PA calculated 0.15 mGy, measured 0.12 - 0.17 mGy). Good agreement was also obtained for the calculated and measured contrasts for selected anatomical details and acceptable agreement for dynamic range. It is concluded that the program provides a realistic model of the patient and imaging system. It can thus form the basis of a detailed study and optimization of X-ray imaging systems.

  19. Working memory components that predict word problem solving: Is it merely a function of reading, calculation, and fluid intelligence?

    PubMed

    Fung, Wenson; Swanson, H Lee

    2017-07-01

    The purpose of this study was to assess whether the differential effects of working memory (WM) components (the central executive, phonological loop, and visual-spatial sketchpad) on math word problem-solving accuracy in children (N = 413, ages 6-10) are completely mediated by reading, calculation, and fluid intelligence. The results indicated that all three WM components predicted word problem solving in the nonmediated model, but only the storage component of WM yielded a significant direct path to word problem-solving accuracy in the fully mediated model. Fluid intelligence was found to moderate the relationship between WM and word problem solving, whereas reading, calculation, and related skills (naming speed, domain-specific knowledge) completely mediated the influence of the executive system on problem-solving accuracy. Our results are consistent with findings suggesting that storage eliminates the predictive contribution of executive WM to various measures (Colom, Rebollo, Abad, & Shih, Memory & Cognition, 34: 158-171, 2006). The findings suggest that the storage component of WM, rather than the executive component, has a direct path to higher-order processing in children.

  20. Fatigue-Life Prediction Methodology Using Small-Crack Theory

    NASA Technical Reports Server (NTRS)

    Newman, James C., Jr.; Phillips, Edward P.; Swain, M. H.

    1997-01-01

    This paper reviews the capabilities of a plasticity-induced crack-closure model to predict fatigue lives of metallic materials using 'small-crack theory' for various materials and loading conditions. Crack-tip constraint factors, to account for three-dimensional state-of-stress effects, were selected to correlate large-crack growth rate data as a function of the effective-stress-intensity factor range (delta K(eff)) under constant-amplitude loading. Some modifications to the delta K(eff)-rate relations were needed in the near-threshold regime to fit measured small-crack growth rate behavior and fatigue endurance limits. The model was then used to calculate small- and large-crack growth rates, and to predict total fatigue lives, for notched and un-notched specimens made of two aluminum alloys and a steel under constant-amplitude and spectrum loading. Fatigue lives were calculated using the crack-growth relations and microstructural features like those that initiated cracks in the aluminum alloys and steel for edge-notched specimens. An equivalent-initial-flaw-size concept was used to calculate fatigue lives in other cases. Results from the tests and analyses agreed well.
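
    As an illustration of the kind of crack-growth integration underlying such a life prediction (not the calibrated model of the paper), the sketch below integrates a Paris-type rate law da/dN = C*(delta K(eff))^m from an assumed initial flaw size to a failure crack length; the constants, stress range, and geometry factor are placeholder values chosen only to make the example run.

      import math

      # Illustrative Paris-law integration of fatigue crack growth driven by an
      # effective stress-intensity factor range (placeholder constants, not the
      # values fitted in the paper).
      C, m = 1.0e-11, 3.0          # crack-growth constants (m/cycle, MPa*sqrt(m))
      delta_sigma = 120.0          # effective stress range, MPa
      a0, a_crit = 20e-6, 5e-3     # initial flaw size and failure crack length, m
      Y = 1.0                      # geometry correction factor (assumed constant)

      a, cycles, da = a0, 0.0, 1e-6      # march the crack in small length increments
      while a < a_crit:
          delta_K_eff = Y * delta_sigma * math.sqrt(math.pi * a)   # MPa*sqrt(m)
          dadN = C * delta_K_eff ** m                              # m/cycle
          cycles += da / dadN                                      # cycles spent on this increment
          a += da

      print(f"Predicted life: {cycles:,.0f} cycles "
            f"({a0 * 1e6:.0f} um initial flaw to {a_crit * 1e3:.1f} mm)")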

  1. Estimating the Octanol/Water Partition Coefficient for Aliphatic Organic Compounds Using Semi-Empirical Electrotopological Index

    PubMed Central

    Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca

    2011-01-01

    A new possibility for estimating the octanol/water coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (ISET). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the ISET in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P. PMID:22072945

  2. Estimating the octanol/water partition coefficient for aliphatic organic compounds using semi-empirical electrotopological index.

    PubMed

    Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca

    2011-01-01

    A new possibility for estimating the octanol/water coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (I(SET)). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the I(SET) in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P.
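
    A single-descriptor QSPR of this kind reduces to an ordinary least-squares fit of log P against I(SET). The sketch below shows that fit and the usual quality statistics (r, s) on invented descriptor/log P pairs standing in for the published training set.

      import numpy as np

      # Hypothetical (ISET, log P) pairs; placeholders for the real training data.
      iset = np.array([1.2, 1.8, 2.4, 3.1, 3.9, 4.6, 5.2])
      logp = np.array([0.35, 0.95, 1.50, 2.20, 3.00, 3.65, 4.20])

      # Ordinary least-squares fit: log P = a * ISET + b
      a, b = np.polyfit(iset, logp, 1)
      pred = a * iset + b

      r = np.corrcoef(logp, pred)[0, 1]                          # correlation coefficient
      s = np.sqrt(np.sum((logp - pred) ** 2) / (len(logp) - 2))  # standard error of estimate
      print(f"log P = {a:.3f} * ISET + {b:.3f}   (r = {r:.3f}, s = {s:.3f})")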

  3. A viable method to predict acoustic streaming in presence of cavitation.

    PubMed

    Louisnard, O

    2017-03-01

    The steady liquid flow observed under ultrasonic emitters generating acoustic cavitation can be successfully predicted by a standard turbulent flow calculation. The flow is driven by the classical averaged volumetric force density calculated from the acoustic field, but the inertial term in the Navier-Stokes equations must be kept, and a turbulent solution must be sought. The acoustic field must be computed with a realistic model, properly accounting for dissipation by the cavitation bubbles [Louisnard, Ultrason. Sonochem., 19, (2012) 56-65]. Comparison with 20 kHz experiments, involving the combination of acoustic streaming and a perpendicular forced flow in a duct, shows reasonably good agreement. Moreover, the persistence of the cavitation effects on the wall facing the emitter, in spite of the deflection of the streaming jet, is correctly reproduced by the model. It is also shown that predictions based either on linear acoustics with the correct turbulent solution, or on Louisnard's model with Eckart-Nyborg's theory, yield unrealistic results. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Comparison of UWCC MOX fuel measurements to MCNP-REN calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.; Baker, M.; Jie, R.

    1998-12-31

    The development of neutron coincidence counting has greatly improved the accuracy and versatility of neutron-based techniques to assay fissile materials. Today, the shift register analyzer connected to either a passive or active neutron detector is widely used by both domestic and international safeguards organizations. The continued development of these techniques and detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model, as it is currently used, fails to accurately predict detector response in highly multiplying mediums such as mixed-oxide (MOX) light water reactor fuel assemblies. For this reason, efforts have been made to modify the currently used Monte Carlo codes and to develop new analytical methods so that this model is not required to predict detector response. The authors describe their efforts to modify a widely used Monte Carlo code for this purpose and also compare calculational results with experimental measurements.

  5. Growth/reflectance model interface for wheat and corresponding model

    NASA Technical Reports Server (NTRS)

    Suits, G. H.; Sieron, R.; Odenweller, J.

    1984-01-01

    The use of modeling to explore the possibility of discovering new and useful crop condition indicators which might be available from the Thematic Mapper and to connect these symptoms to the biological causes in the crop is discussed. A crop growth model was used to predict the day to day growth features of the crop as it responds biologically to the various environmental factors. A reflectance model was used to predict the character of the interaction of daylight with the predicted growth features. An atmospheric path radiance was added to the reflected daylight to simulate the radiance appearing at the sensor. Finally, the digitized data sent to a ground station were calculated. The crop under investigation is wheat.

  6. A Model to Predict Final Cost Growth in a Weapon System Development Program

    DTIC Science & Technology

    1975-08-01

    [Abstract not available. The indexed excerpt contains only fragments of the report's table of contents and of a sample 2 x 2 manual calculation table listing outcome probabilities of 0.5/0.5 and 0.3/0.7 for two aspects, with the corresponding computer program and results referenced in the report's figures.]

  7. Kinetically accessible yield (KAY) for redirection of metabolism to produce exo-metabolites

    DOE PAGES

    Lafontaine Rivera, Jimmy G.; Theisen, Matthew K.; Chen, Po-Wei; ...

    2017-04-05

    The product formation yield (product formed per unit substrate consumed) is often the most important performance indicator in metabolic engineering. Until now, the actual yield could not be predicted; it could only be bounded by its maximum theoretical value. The maximum theoretical yield is calculated by considering the stoichiometry of the pathways and the cofactor regeneration involved. Here we found that in many cases dynamic stability becomes an issue when excessive pathway flux is drawn to a product. This constraint reduces the yield and renders the maximum theoretical yield too loose to be predictive. We propose a more realistic quantity, defined as the kinetically accessible yield (KAY), to predict the maximum accessible yield for a given flux alteration. KAY is either determined by the point of instability, beyond which steady states become unstable and disappear, or by a local maximum reached before becoming unstable. Thus, KAY is the maximum flux that can be redirected for a given metabolic engineering strategy without losing stability. Strictly speaking, calculation of KAY requires complete kinetic information. With limited or no kinetic information, an Ensemble Modeling strategy can be used to determine a range of likely values for KAY, including an average prediction. We first apply the KAY concept to a toy model to demonstrate the principle of kinetic limitations on yield. We then used a full-scale E. coli model (193 reactions, 153 metabolites), and the approach was successful in predicting isobutanol production: the calculated KAY values are consistent with experimental data for three previously published genotypes.
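
    In spirit, finding KAY means increasing the redirected flux until the steady state of the kinetic model loses local stability, i.e. until an eigenvalue of the Jacobian crosses into the right half-plane. The sketch below applies that test to a generic two-metabolite toy network (a Brusselator-type model, not the E. coli model of the paper), treating one parameter as the redirected flux.

      import numpy as np

      # Generic two-metabolite toy network used only to illustrate the stability
      # test behind KAY: raise the "redirected flux" parameter b and find where
      # the steady state (x* = a, y* = b/a) loses local stability.
      a = 1.0  # fixed feed flux (arbitrary units)

      def jacobian(b):
          """Analytical Jacobian of the toy model evaluated at its steady state."""
          x, y = a, b / a
          return np.array([[-(b + 1) + 2 * x * y, x ** 2],
                           [b - 2 * x * y, -x ** 2]])

      accessible = None
      for b in np.arange(0.1, 4.0, 0.01):
          if np.all(np.linalg.eigvals(jacobian(b)).real < 0):
              accessible = b          # largest redirection that is still dynamically stable
          else:
              break

      print(f"stability lost near b = {b:.2f}; accessible value ~ {accessible:.2f}")
      print(f"(analytical threshold for this toy model: b = 1 + a^2 = {1 + a ** 2:.2f})")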

  8. FPGA implementation of predictive degradation model for engine oil lifetime

    NASA Astrophysics Data System (ADS)

    Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.

    2018-03-01

    This paper presents the implementation of a linear regression model for degradation prediction on Register Transfer Logic (RTL) using Quartus II. A stationary model had been identified in the degradation trend of the engine oil in a vehicle using a time series method. For the RTL implementation, the degradation model is written in Verilog HDL and the input data are sampled at fixed times. A clock divider was designed to support the timing sequence of the input data. For every five data points, a regression analysis is applied to determine the slope variation and to calculate the prediction. Only negative slopes are considered for prediction purposes, which reduces the number of logic gates required. The least squares method is used to obtain the best linear model based on the mean values of the time series data. The coded algorithm has been implemented on an FPGA for validation purposes. The result gives the predicted time to change the engine oil.
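
    For reference, the five-sample least-squares slope that such an RTL block computes can be prototyped in a few lines before committing it to Verilog; the readings below are arbitrary stand-ins for the oil-degradation indicator.

      import numpy as np

      def window_slope(samples):
          """Least-squares slope of a short window of equally spaced samples."""
          y = np.asarray(samples, dtype=float)
          t = np.arange(len(y), dtype=float)          # sample index as the time axis
          return np.sum((t - t.mean()) * (y - y.mean())) / np.sum((t - t.mean()) ** 2)

      # Arbitrary degradation-indicator readings, processed five at a time.
      readings = [0.95, 0.93, 0.92, 0.90, 0.87, 0.86, 0.84, 0.81, 0.80, 0.78]
      for i in range(0, len(readings) - 4, 5):
          slope = window_slope(readings[i:i + 5])
          if slope < 0:                               # only negative trends matter here
              remaining = readings[i + 4] / -slope    # samples until the indicator reaches zero
              print(f"window {i // 5}: slope {slope:+.4f}, ~{remaining:.0f} samples to end of life")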

  9. Dispersion model studies for Space Shuttle environmental effects activities

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The NASA/MSFC REED computer code was developed for predicting concentrations, dosage, and deposition downwind from rocket vehicle launches. The calculation procedures and results of nine studies using the code are presented. Topics include plume expansion, hydrazine concentrations, and hazard calculations for postulated fuel spills.

  10. [Reevaluation of the time course of the effect of propofol described with the Schnider pharmacokinetic model].

    PubMed

    Sepúlveda, P O; Mora, X

    2012-12-01

    The first order plasma-effect-site equilibration rate constant (k(e0)) links the pharmacokinetics (PK) and pharmacodynamics (PD) of a given drug. This constant, calculated for each specific PK drug model, allows us to predict the time course of the effect in a target controlled infusion (TCI). The PK-PD model of propofol published by Schnider et al. calculated a k(e0) value of 0.456 min(-1) and a corresponding time to peak effect (t peak) of 1.6 min. The aim of this study was to reevaluate the k(e0) value for the Schnider PK model of propofol, with data from a complete effect curve obtained by monitoring the bispectral index (BIS). The study included 35 healthy adult patients (18-90 years) scheduled for elective surgery with standard monitoring and using the BIS XP(®) (Aspect), who received a propofol infusion to reach a plasma target of 12 μg/ml in 4 min. The infusion was then stopped, obtaining a complete effect curve when the patient woke up. The Anestfusor™ (University of Chile) software was used to control the infusion pumps, calculate the plasma concentration predicted by the Schnider PK model, and store the BIS data every second. Loss (LOC) and recovery (ROC) of consciousness were assessed and recorded. Using a traditional parametric method based on the "k(e0) objective function" of the PK-PD tools for Excel, the individual and population k(e0) were calculated. Predictive Smith tests (Pk) and Student's t test were used for statistical analysis. A P<.05 indicated significance. The evaluation included 21 male and 14 female patients (18 to 90 years). We obtained 1,001 (±182) EEG data points and the corresponding calculated plasma concentration for each case. The population k(e0) obtained was 0.144 min(-1) (SD±0.048), very different from the original model (P<.001). This value corresponds to a t peak of 2.45 min. The predictive performance (Pk) for the new model was 0.9 (SD±0.03), but only 0.78 (SD±0.06) for the original (P<.001). With a baseline BIS of 95.8 (SD±2.34), the BIS at LOC was 77.48 (SD±9.6) and 74.65 (SD±6.3) at ROC (P=.027). The calculated Ce in the original model at LOC and ROC were 5.9 (SD±1.35)/1.08 μg/ml (SD±0.32) (P<.001), respectively, and 2.3 (SD±0.63)/2.0 μg/ml (SD±0.65) (NS) for the new model. The values between LOC/ROC were significantly different between the 2 models (P<.001). No differences in k(e0) value were found between males and females, but in the new model the k(e0) was affected by age as a covariable (0.26-[age×0.0022]) (P<.05). The dynamic relationship between propofol plasma concentrations predicted by Schnider's pharmacokinetic model and its hypnotic effect measured with BIS was better characterized with a smaller k(e0) value (slower t½k(e0)) than that in the original model, with an age effect also not described before. Copyright © 2011 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España. All rights reserved.
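
    The role of k(e0) can be made concrete with the standard first-order effect-site equation dCe/dt = k(e0)·(Cp − Ce). The sketch below integrates it for a simplified plasma-concentration profile (a 4-min ramp followed by an exponential decay, not actual Schnider-model output) and compares the lag produced by the original and the re-estimated rate constants.

      import numpy as np

      def effect_site(cp, ke0, dt):
          """Integrate dCe/dt = ke0 * (Cp - Ce) with a simple Euler scheme."""
          ce = np.zeros_like(cp)
          for i in range(1, len(cp)):
              ce[i] = ce[i - 1] + ke0 * (cp[i - 1] - ce[i - 1]) * dt
          return ce

      dt = 1.0 / 60.0                   # 1-second steps, expressed in minutes
      t = np.arange(0.0, 10.0, dt)      # 10-minute window
      # Simplified plasma profile: ramp to 12 ug/ml over 4 min, then exponential decay.
      cp = np.where(t < 4.0, 3.0 * t, 12.0 * np.exp(-0.5 * (t - 4.0)))

      for ke0 in (0.456, 0.144):        # original vs re-estimated values, min^-1
          ce = effect_site(cp, ke0, dt)
          print(f"ke0 = {ke0:.3f} 1/min -> effect-site peak of "
                f"{ce.max():.1f} ug/ml at {t[np.argmax(ce)]:.2f} min")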

  11. An attempt for modeling the atmospheric transport of 3H around Kakrapar Atomic Power Station.

    PubMed

    Patra, A K; Nankar, D P; Joshi, C P; Venkataraman, S; Sundar, D; Hegde, A G

    2008-01-01

    Prediction of downwind tritium air concentrations in the environment around Kakrapar Atomic Power Station (KAPS) was studied on the basis of a Gaussian plume dispersion model. The tritium air concentrations measured in the field in the areas adjacent to KAPS were compared with the theoretically calculated (predicted) values to validate the model. This approach will be useful in evaluating environmental radiological impacts due to pressurised heavy water reactors.
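
    A minimal ground-level Gaussian plume calculation of the kind used for such a comparison is sketched below; the release rate, effective height, and dispersion-coefficient expressions are illustrative placeholders (site- and stability-specific Pasquill-Gifford coefficients would be used in practice).

      import math

      def ground_level_concentration(Q, u, H, x, y):
          """Ground-level air concentration from a continuous elevated point source.

          Q: release rate (Bq/s), u: wind speed (m/s), H: effective release height (m),
          x, y: downwind and crosswind distances (m). The sigma expressions are crude
          stand-ins for Pasquill-Gifford dispersion coefficients.
          """
          sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)
          sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
          return (Q / (2 * math.pi * u * sigma_y * sigma_z)
                  * math.exp(-y ** 2 / (2 * sigma_y ** 2))
                  * 2 * math.exp(-H ** 2 / (2 * sigma_z ** 2)))   # full ground reflection

      # Example: 1e9 Bq/s tritium release, 3 m/s wind, 100 m effective height.
      for x in (500, 1000, 2000, 5000):
          c = ground_level_concentration(Q=1e9, u=3.0, H=100.0, x=x, y=0.0)
          print(f"x = {x:5d} m : {c:.3e} Bq/m^3")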

  12. Highly ionized atoms in cooling gas

    NASA Technical Reports Server (NTRS)

    Edgar, R. J.; Chevalier, R. A.

    1986-01-01

    The ionization of low density gas cooling from a high temperature was calculated. The evolution during the cooling is assumed to be isochoric, isobaric, or a combination of these cases. The calculations are used to predict the column densities and ultraviolet line luminosities of highly ionized atoms in cooling gas. In a model for cooling of a hot galactic corona, it is shown that the observed value of N(N V) can be produced in the cooling gas, while the predicted value of N(Si IV) falls short of the observed value by a factor of about 5. The same model predicts fluxes of ultraviolet emission lines that are a factor of 10 lower than the claimed detections of Feldman, Brune, and Henry. Predictions are made for ultraviolet lines in cooling flows in early-type galaxies and clusters of galaxies. It is shown that the column densities of interest vary over a fairly narrow range, while the emission line luminosities are simply proportional to the mass inflow rate.

  13. Finite element validation of stress intensity factor calculation models for thru-thickness and thumb-nail cracks in double edge notch specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beres, W.; Koul, A.K.

    1994-09-01

    Stress intensity factors for thru-thickness and thumb-nail cracks in the double edge notch specimens, containing two different notch radius (R) to specimen width (W) ratios (R/W = 1/8 and 1/16), are calculated through finite element analysis. The finite element results are compared with predictions based on existing empirical models for SIF calculations. The effects of a change in R/W ratio on SIF of thru-thickness and thumb-nail cracks are also discussed. 34 refs.

  14. The solid-state terahertz spectrum of MDMA (Ecstasy) - A unique test for molecular modeling assignments

    NASA Astrophysics Data System (ADS)

    Allis, Damian G.; Hakey, Patrick M.; Korter, Timothy M.

    2008-10-01

    The terahertz (THz, far-infrared) spectrum of 3,4-methylene-dioxymethamphetamine hydrochloride (Ecstasy) is simulated using solid-state density functional theory. While a previously reported isolated-molecule calculation is noteworthy for the precision of its solid-state THz reproduction, the solid-state calculation predicts that the isolated-molecule modes account for only half of the spectral features in the THz region, with the remaining structure arising from lattice vibrations that cannot be predicted without solid-state molecular modeling. The molecular origins of the internal mode contributions to the solid-state THz spectrum, as well as the proper consideration of the protonation state of the molecule, are also considered.

  15. Sakurai Prize: The Future of Higgs Physics

    NASA Astrophysics Data System (ADS)

    Dawson, Sally

    2017-01-01

    The discovery of the Higgs boson relied critically on precision calculations. The quantum contributions from the Higgs boson to the W and top quark masses suggested long before the Higgs discovery that a Standard Model Higgs boson should have a mass in the 100-200 GeV range. The experimental extraction of Higgs properties requires normalization to the predicted Higgs production and decay rates, for which higher order corrections are also essential. As Higgs physics becomes a mature subject, more and more precise calculations will be required. If there is new physics at high scales, it will contribute to the predictions and precision Higgs physics will be a window to beyond the Standard Model physics.

  16. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    USGS Publications Warehouse

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
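
    The essential bookkeeping in the exact calculation is a weighted combination, over causal earthquakes and GMPMs, of each scenario's conditional mean. A minimal sketch of that step is given below (the conditional standard deviation follows the same pattern with an additional between-scenario variance term); the weights, medians, sigmas, and correlation are placeholders, not hazard-deaggregation output.

      import math

      # Illustrative scenarios from a hypothetical deaggregation: each entry carries
      # a weight, the GMPM median and sigma at the conditioning period T* and at
      # another period T, and the correlation rho(T, T*). All numbers are placeholders.
      ln_sa_target = math.log(0.5)   # conditioning spectral acceleration at T* (g)
      scenarios = [
          # weight, mu_lnSa(T*), sigma(T*), mu_lnSa(T), sigma(T), rho(T, T*)
          (0.6, math.log(0.30), 0.60, math.log(0.20), 0.65, 0.75),
          (0.3, math.log(0.45), 0.55, math.log(0.32), 0.60, 0.75),
          (0.1, math.log(0.60), 0.50, math.log(0.45), 0.55, 0.75),
      ]

      # Conditional mean of ln Sa(T) given Sa(T*) = target, combined over scenarios.
      cs_mean = 0.0
      for w, mu_star, sig_star, mu_t, sig_t, rho in scenarios:
          eps = (ln_sa_target - mu_star) / sig_star     # epsilon of the target at T*
          cs_mean += w * (mu_t + rho * eps * sig_t)     # single-scenario conditional mean

      print(f"Conditional mean Sa(T) = {math.exp(cs_mean):.3f} g")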

  17. 3D gain modeling of LMJ and NIF amplifiers

    NASA Astrophysics Data System (ADS)

    LeTouze, Geoffroy; Cabourdin, Olivier; Mengue, J. F.; Guenet, Mireille; Grebot, Eric; Seznec, Stephane E.; Jancaitis, Kenneth S.; Marshall, Christopher D.; Zapata, Luis E.; Erlandson, A. E.

    1999-07-01

    A 3D ray-trace model has been developed to predict the performance of flashlamp-pumped laser amplifiers. The computer program, written in C++, includes a graphical display option using the Open Inventor library, as well as a parser and a loader allowing the user to easily model complex multi-segment amplifier systems. It runs both on a workstation cluster at LLNL and on the T3E Cray at CEA. We will discuss how we have reduced the required computation time without changing precision by optimizing the parameters which set the discretization level of the calculation. As an example, the sample of calculation points is chosen to fit the pumping profile through the thickness of the amplifier slabs. We will show the differences between the pump rates predicted by our latest model and those produced by our earlier 2.5D code AmpModel. We will also present the results of calculations which model surfaces and other 3D effects, such as top and bottom reflector positions and reflectivity, which could not be included in the 2.5D model. This new computer model also includes a full 3D calculation of the amplified spontaneous emission rate in the laser slab, as opposed to the 2.5D model which tracked only the variation in the gain across the transverse dimensions of the slab. We will present the impact of this evolution of the model on the predicted stimulated decay rate and the resulting gain distribution. Comparisons with the most recent AmpLab experimental results will be presented for the different typical NIF and LMJ configurations.

  18. Musculoskeletal Modeling of the Lumbar Spine to Explore Functional Interactions between Back Muscle Loads and Intervertebral Disk Multiphysics

    PubMed Central

    Toumanidou, Themis; Noailly, Jérôme

    2015-01-01

    During daily activities, complex biomechanical interactions influence the biophysical regulation of intervertebral disks (IVDs), and transfers of mechanical loads are largely controlled by the stabilizing action of spine muscles. Muscle and other internal forces cannot be easily measured directly in the lumbar spine. Hence, biomechanical models are important tools for the evaluation of the loads in those tissues involved in low-back disorders. Muscle force estimations in most musculoskeletal models mainly rely, however, on inverse calculations and static optimizations that limit the predictive power of the numerical calculations. In order to contribute to the development of predictive systems, we coupled a predictive muscle model with the passive resistance of the spine tissues, in a L3–S1 musculoskeletal finite element model with osmo-poromechanical IVD descriptions. The model included 46 fascicles of the major back muscles that act on the lower spine. The muscle model interacted with activity-related loads imposed on the osteoligamentous structure, as standing position and night rest were simulated through distributed upper body mass and free IVD swelling, respectively. Calculations led to intradiscal pressure values within the range of values measured in vivo. Disk swelling led to muscle activation and muscle force distributions that seemed particularly appropriate to counterbalance the anterior body mass effect in standing. Our simulations pointed out the likely existence of a functional balance between stretch-induced muscle activation and IVD multiphysics, contributing to an improved understanding of the mechanical stability of the lumbar spine. This balance suggests that proper night rest contributes to mechanically strengthen the spine during day activity. PMID:26301218

  19. Predicted phototoxicities of carbon nano-material by quantum mechanical calculations.

    PubMed

    Betowski, Don

    2017-08-01

    The purpose of this research was to develop a predictive model for the phototoxicity potential of carbon nanomaterials (fullerenols and single-walled carbon nanotubes). This model is based on quantum mechanical (ab initio) calculations on these carbon-based materials and comparison of their triplet excited states to published work relating the phototoxicity of polynuclear aromatic hydrocarbons (PAH) to their predicted triplet excited state energies. A successful outcome will add another tool to the arsenal of predictive methods for the U.S. EPA program offices as they assess the toxicity of compounds in use or coming into commerce. The basis of this research was obtaining the best quantum mechanical structure of the carbon nanomaterial, which was fundamental in determining the triplet excited state energy. The triplet excited state, in turn, is associated with the phototoxicity of the material. This project relies heavily on the interaction of the predictive results (physical chemistry) and the experimental results obtained by biologists and toxicologists. The results of the experiments (toxicity testing) will help refine the predictive model, while the predictions will alert the scientists to red-flag compounds. It is hoped that a guidance document for the U.S. EPA will be forthcoming to help determine the toxicity of compounds. This can be a screening tool that would rely on further testing for those compounds found by these predictions to be a phototoxic danger to health and the environment. Copyright © 2017. Published by Elsevier Inc.

  20. Predicting the melting temperature of ice-Ih with only electronic structure information as input.

    PubMed

    Pinnick, Eric R; Erramilli, Shyamsunder; Wang, Feng

    2012-07-07

    The melting temperature of ice-Ih was calculated with only electronic structure information as input by creating a problem-specific force field. The force field, Water model by AFM for Ice and Liquid (WAIL), was developed with the adaptive force matching (AFM) method by fitting to post-Hartree-Fock quality forces obtained in quantum mechanics/molecular mechanics calculations. WAIL predicts the ice-Ih melting temperature to be 270 K. The model also predicts the densities of ice and water, the temperature of maximum density of water, the heats of vaporization, and the radial distribution functions for both ice and water in good agreement with experimental measurements. The non-dissociative WAIL model is very similar to a flexible version of the popular TIP4P potential and has comparable computational cost. By customizing to problem-specific configurations with the AFM approach, the resulting model is remarkably more accurate than any variant of TIP4P for simulating ice-Ih and water in the temperature range from 253 K to 293 K under ambient pressure.

  1. A near-wall four-equation turbulence model for compressible boundary layers

    NASA Technical Reports Server (NTRS)

    Sommer, T. P.; So, R. M. C.; Zhang, H. S.

    1992-01-01

    A near-wall four-equation turbulence model is developed for the calculation of high-speed compressible turbulent boundary layers. The four equations used are the k-epsilon equations and the theta(exp 2)-epsilon(sub theta) equations. These equations are used to define the turbulent diffusivities for momentum and heat fluxes, thus allowing the assumption of dynamic similarity between momentum and heat transport to be relaxed. The Favre-averaged equations of motion are solved in conjunction with the four transport equations. Calculations are compared with measurements and with another model's predictions where the assumption of the constant turbulent Prandtl number is invoked. Compressible flat plate turbulent boundary layers with both adiabatic and constant temperature wall boundary conditions are considered. Results for the range of low Mach numbers and temperature ratios investigated are essentially the same as those obtained using an identical near-wall k-epsilon model. In general, the numerical predictions are in very good agreement with measurements and there are significant improvements in the predictions of mean flow properties at high Mach numbers.

  2. Predictive landslide susceptibility mapping using spatial information in the Pechabun area of Thailand

    NASA Astrophysics Data System (ADS)

    Oh, Hyun-Joo; Lee, Saro; Chotikasathien, Wisut; Kim, Chang Hwan; Kwon, Ju Hyoung

    2009-04-01

    For predictive landslide susceptibility mapping, this study applied and verified a probability model (the frequency ratio) and a statistical model (logistic regression) at Pechabun, Thailand, using a geographic information system (GIS) and remote sensing. Landslide locations were identified in the study area from interpretation of aerial photographs and field surveys, and maps of the topography, geology and land cover were constructed into a spatial database. The factors that influence landslide occurrence, such as slope gradient, slope aspect, curvature of topography and distance from drainage, were calculated from the topographic database. Lithology and distance from faults were extracted and calculated from the geology database. Land cover was classified from a Landsat TM satellite image. The frequency ratios and logistic regression coefficients were assigned as ratings for each factor and overlaid to produce the landslide susceptibility maps. The landslide susceptibility maps were then verified and compared using the known landslide locations. In the verification, the frequency ratio model showed 76.39% prediction accuracy and the logistic regression model showed 70.42%. The method can be used to reduce hazards associated with landslides and to plan land cover.
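
    The frequency ratio for a factor class is simply the proportion of landslide cells in that class divided by the proportion of all cells in that class, with values above 1 indicating a positive association; the susceptibility index of a cell is the sum of the ratios of the classes it falls in. A minimal sketch with invented class counts:

      # Invented cell counts per slope-gradient class (not the Pechabun data).
      classes = {
          "0-10 deg":  {"landslide": 5,  "total": 40000},
          "10-20 deg": {"landslide": 35, "total": 30000},
          "20-30 deg": {"landslide": 60, "total": 20000},
          ">30 deg":   {"landslide": 40, "total": 10000},
      }

      total_landslides = sum(c["landslide"] for c in classes.values())
      total_cells = sum(c["total"] for c in classes.values())

      for name, c in classes.items():
          # Frequency ratio = (% of landslide cells in class) / (% of all cells in class)
          fr = (c["landslide"] / total_landslides) / (c["total"] / total_cells)
          print(f"{name:>10}: FR = {fr:.2f}")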

  3. Steady and Transient Performance Prediction of Gas Turbine Engines Held in Cambridge, Massachusetts on 27-28 May 1992; in Neubiberg, Germany on 9-10 June 1992; and in Chatillon/Bagneux, France on 11-12 June 1992 (Prediction des Performances des Moteurs a Turbine a Gaz en Regimes Etabli et Transitoire)

    DTIC Science & Technology

    1992-05-01

    [Abstract not available. The indexed excerpt contains only disjoint fragments of the proceedings, including a paper on a steady and transient performance calculation method for prediction, analysis and identification by J.-P. Duponchel et al., remarks on gas generator speed, centrifugal stress and turbine inlet temperature, thrust changes without over-temperature or flame-out, and comprehensive mathematical models of the complete power plant (intake, gas generator, exhaust).]

  4. Prediction of SA 349/2 GV blade loads in high speed flight using several rotor analyses

    NASA Technical Reports Server (NTRS)

    Gaubert, Michel; Yamauchi, Gloria K.

    1987-01-01

    The influence of blade dynamics, dynamic stall, and transonic aerodynamics on the predictions of rotor loads in high-speed flight are presented. Data were obtained from an Aerospatiale Gazelle SA 349/2 helicopter with three Grande Vitesse blades. Several analyses are used for this investigation. First, blade dynamics effects on the correlation are studied using three rotor analyses which differ mainly in the method of calculating the blade elastic response. Next, an ONERA dynamic stall model is used to predict retreating blade stall. Finally, advancing blade aerodynamic loads are calculated using a NASA-developed rotorcraft analysis coupled with two transonic finite-difference analyses.

  5. A CFD Study on the Prediction of Cyclone Collection Efficiency

    NASA Astrophysics Data System (ADS)

    Gimbun, Jolius; Chuah, T. G.; Choong, Thomas S. Y.; Fakhru'L-Razi, A.

    2005-09-01

    This work presents a Computational Fluid Dynamics calculation to predict and to evaluate the effects of temperature, operating pressure and inlet velocity on the collection efficiency of gas cyclones. The numerical solutions were carried out using a spreadsheet and the commercial CFD code FLUENT 6.0. This paper also reviews four empirical models for the prediction of cyclone collection efficiency, namely Lapple [1], Koch and Licht [2], Li and Wang [3], and Iozia and Leith [4]. All the predictions proved to be satisfactory when compared with the presented experimental data. The CFD simulations predict the cyclone cut-off size for all operating conditions with a deviation of 3.7% from the experimental data. Specifically, results obtained from the computer modelling exercise have demonstrated that the CFD model is the best method of modelling the cyclone collection efficiency.
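
    Of the reviewed empirical models, the Lapple approach is the simplest: a cut-point diameter d50 is estimated from the inlet geometry and gas properties, and the fractional efficiency for each particle size follows from it. The sketch below evaluates both expressions for an invented operating point (the other three models replace these correlations with their own).

      import math

      def lapple_cut_diameter(mu, W, Ne, v_in, rho_p, rho_g):
          """Lapple cut-point diameter d50 (m): the size collected with 50% efficiency."""
          return math.sqrt(9.0 * mu * W / (2.0 * math.pi * Ne * v_in * (rho_p - rho_g)))

      def lapple_efficiency(dp, d50):
          """Lapple fractional collection efficiency for particle diameter dp."""
          return 1.0 / (1.0 + (d50 / dp) ** 2)

      # Illustrative operating point (not the conditions studied in the paper).
      d50 = lapple_cut_diameter(mu=1.8e-5,    # gas viscosity, Pa*s
                                W=0.05,       # inlet width, m
                                Ne=6,         # effective number of gas turns
                                v_in=15.0,    # inlet velocity, m/s
                                rho_p=2000.0, # particle density, kg/m^3
                                rho_g=1.2)    # gas density, kg/m^3

      print(f"d50 = {d50 * 1e6:.2f} um")
      for dp in (1e-6, 2e-6, 5e-6, 10e-6):
          print(f"dp = {dp * 1e6:4.1f} um -> efficiency = {lapple_efficiency(dp, d50):.2f}")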

  6. Predicting the Structure of the Solar Corona for the Total Solar Eclipse of March 29,2006

    NASA Technical Reports Server (NTRS)

    Mikic, Z.; Linker, J. a.; Lionello, R.; Riley, P.; TItov, V.

    2007-01-01

    We describe the use of a three-dimensional MHD model to predict the structure of the corona prior to the total solar eclipse of March 29, 2006. The calculation uses the observed photospheric radial magnetic field as a boundary condition. We use a new version of our model that has an improved description of energy transport in the corona. The model allows us to predict the emission of X-ray and EUV radiation in the corona. We compare the predicted polarization brightness in the corona with four observations of the eclipse from Greece, Egypt, and Libya, and we demonstrate that the model accurately predicts the large-scale structure of the corona. We also compare X-ray emission from the model with GOES/SXI images.

  7. Prediction of the fate of p,p'-DDE in sediment on the Palos Verdes shelf, California, USA

    USGS Publications Warehouse

    Sherwood, C.R.; Drake, D.E.; Wiberg, P.L.; Wheatcroft, R.A.

    2002-01-01

    Long-term (60-yr) predictions of vertical profiles of p,p'-DDE concentrations in contaminated bottom sediments on the Palos Verdes shelf were calculated for three locations along the 60-m isobath using a numerical solution of the one-dimensional advection-diffusion equation. The calculations incorporated the following processes: sediment deposition (or erosion), depth-dependent solid-phase biodiffusive mixing, in situ diagenetic transformation, and loss of p,p'-DDE across the sediment-water interface by two mechanisms (resuspension of sediments by wave action and subsequent loss of p,p'-DDE to the water column by desorption, and desorption from sediments to porewater and subsequent molecular diffusion to the water column). A combination of field measurements, laboratory analyses, and calculations with supporting models was used to set parameters for the model. The model explains significant features observed in measurements made every 2 years from 1981 to 1997 by the County Sanitation Districts of Los Angeles (LACSD). Analyses of available data suggest that two sites northwest of the Whites Point sewage outfalls will remain depositional, even as particulate supply from the sewage-treatment plant and nearby Portuguese Bend Landslide decreases. At these sites, model predictions for 1991-2050 indicate that most of the existing inventory of p,p'-DDE will remain buried and that surface concentrations will gradually decrease. Analyses of data southeast of the outfalls suggest that erosion is likely to occur somewhere on the southeast edge of the existing effluent-affected deposit, and model predictions for such a site showed that erosion and biodiffusion will reintroduce the p,p'-DDE to the upper layer of sediments, with subsequent increases in surface concentrations and loss to the overlying water column.
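
    The core of such a prediction is a numerical solution of the one-dimensional advection-diffusion equation for the solid-phase concentration, here written as ∂C/∂t = ∂/∂z(Db ∂C/∂z) − w ∂C/∂z − λC, with depth-dependent biodiffusivity Db, burial velocity w, and a lumped first-order loss term. The explicit finite-difference sketch below uses invented parameters and simplified boundary conditions purely to illustrate the discretization; it is not the calibrated Palos Verdes model.

      import numpy as np

      # Grid and (illustrative) parameters for solid-phase p,p'-DDE in sediment.
      nz, dz = 60, 0.005                  # 60 cells of 0.5 cm -> 30 cm column
      dt, years = 0.002, 60.0             # time step (yr) and simulation length
      z = (np.arange(nz) + 0.5) * dz
      Db = 1e-5 * np.exp(-z / 0.05)       # biodiffusivity decaying with depth, m^2/yr
      w = 0.002                           # burial velocity, m/yr
      lam = 0.01                          # first-order loss/degradation rate, 1/yr

      C = np.exp(-((z - 0.10) ** 2) / (2 * 0.03 ** 2))   # initial buried peak (relative units)

      for _ in range(int(years / dt)):
          # Diffusive flux at interior cell faces (zero-flux at top and bottom for simplicity).
          flux = np.zeros(nz + 1)
          Dface = 0.5 * (Db[:-1] + Db[1:])
          flux[1:-1] = -Dface * (C[1:] - C[:-1]) / dz
          diff = -(flux[1:] - flux[:-1]) / dz
          # Upwind advection: burial moves material downward (toward larger z).
          adv = -w * np.diff(np.concatenate(([C[0]], C))) / dz
          C = C + dt * (diff + adv - lam * C)

      print(f"surface concentration after {years:.0f} yr: {C[0]:.3f} (relative units)")
      print(f"peak depth: {z[np.argmax(C)] * 100:.1f} cm")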

  8. Accommodation and age-dependent eye model based on in vivo measurements.

    PubMed

    Zapata-Díaz, Juan F; Radhakrishnan, Hema; Charman, W Neil; López-Gil, Norberto

    2018-03-21

    To develop a flexible model of the average eye that incorporates changes with age and accommodation in all optical parameters, including entrance pupil diameter, under photopic, natural, environmental conditions. We collated retrospective in vivo measurements of all optical parameters, including entrance pupil diameter. Ray-tracing was used to calculate the wavefront aberrations of the eye model as a function of age, stimulus vergence and pupil diameter. These aberrations were used to calculate objective refraction using paraxial curvature matching. This was also done for several stimulus positions to calculate the accommodation response/stimulus curve. The model predicts a hyperopic change in distance refraction as the eye ages (+0.22 D every 10 years) between 20 and 65 years. The slope of the accommodation response/stimulus curve was 0.72 for a 25-year-old subject, with little change between 20 and 45 years. A trend to a more negative value of primary spherical aberration as the eye accommodates is predicted for all ages (20-50 years). When accommodation is relaxed, a slight increase in primary spherical aberration (0.008 μm every 10 years) between 20 and 65 years is predicted, for an age-dependent entrance pupil diameter ranging between 3.58 mm (20 years) and 3.05 mm (65 years). Results match reasonably well with studies performed in real eyes, except that spherical aberration is systematically slightly negative as compared with the practical data. The proposed eye model is able to predict changes in objective refraction and accommodation response. It has the potential to be a useful design and testing tool for devices (e.g. intraocular lenses or contact lenses) designed to correct the eye's optical errors. Copyright © 2018 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  9. Comparison between phenomenological and ab-initio reaction and relaxation models in DSMC

    NASA Astrophysics Data System (ADS)

    Sebastião, Israel B.; Kulakhmetov, Marat; Alexeenko, Alina

    2016-11-01

    New state-specific vibrational-translational energy exchange and dissociation models, based on ab-initio data, are implemented in the direct simulation Monte Carlo (DSMC) method and compared to the established Larsen-Borgnakke (LB) and total collision energy (TCE) phenomenological models. For consistency, both the LB and TCE models are calibrated with QCT-calculated O2+O data. The model comparison test cases include 0-D thermochemical relaxation under adiabatic conditions and 1-D normal shockwave calculations. The results show that both the ME-QCT-VT and LB models can reproduce vibrational relaxation accurately, but the TCE model is unable to reproduce nonequilibrium rates even when it is calibrated to accurate equilibrium rates. The new reaction model does capture QCT-calculated nonequilibrium rates. For all investigated cases, we discuss the prediction differences based on the new model features.

  10. Phase-field based Multiscale Modeling of Heterogeneous Solid Electrolytes: Applications to Nanoporous Li3PS4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jia-Mian; Wang, Bo; Ji, Yanzhou

    Modeling the effective ion conductivities of heterogeneous solid electrolytes typically involves the use of a computer-generated microstructure consisting of randomly or uniformly oriented fillers in a matrix. However, the structural features of the filler/matrix interface, which critically determine the interface ion conductivity and the microstructure morphology, have not been considered during the microstructure generation. Using nanoporous β-Li3PS4 electrolyte as an example, we develop a phase-field model that enables generating nanoporous microstructures of different porosities and connectivity patterns based on the depth and the energy of the surface (pore/electrolyte interface), both of which are predicted through density functional theory (DFT) calculations. Room-temperature effective ion conductivities of the generated microstructures are then calculated numerically, using the DFT-estimated surface Li-ion conductivity (3.14×10^-3 S/cm) and the experimentally measured bulk Li-ion conductivity (8.93×10^-7 S/cm) of β-Li3PS4 as the inputs. We also use the generated microstructures to inform effective medium theories to rapidly predict the effective ion conductivity via analytical calculations. Furthermore, when the porosity approaches the percolation threshold, both the numerical and analytical methods predict a significantly enhanced Li-ion conductivity (1.74×10^-4 S/cm) that is in good agreement with experimental data (1.64×10^-4 S/cm). The present phase-field based multiscale model is generally applicable to predict both the microstructure patterns and the effective properties of heterogeneous solid electrolytes.

  11. Phase-field based Multiscale Modeling of Heterogeneous Solid Electrolytes: Applications to Nanoporous Li3PS4

    DOE PAGES

    Hu, Jia-Mian; Wang, Bo; Ji, Yanzhou; ...

    2017-09-07

    Modeling the effective ion conductivities of heterogeneous solid electrolytes typically involves the use of a computer-generated microstructure consisting of randomly or uniformly oriented fillers in a matrix. However, the structural features of the filler/matrix interface, which critically determine the interface ion conductivity and the microstructure morphology, have not been considered during the microstructure generation. Using nanoporous β-Li3PS4 electrolyte as an example, we develop a phase-field model that enables generating nanoporous microstructures of different porosities and connectivity patterns based on the depth and the energy of the surface (pore/electrolyte interface), both of which are predicted through density functional theory (DFT) calculations. Room-temperature effective ion conductivities of the generated microstructures are then calculated numerically, using the DFT-estimated surface Li-ion conductivity (3.14×10^-3 S/cm) and the experimentally measured bulk Li-ion conductivity (8.93×10^-7 S/cm) of β-Li3PS4 as the inputs. We also use the generated microstructures to inform effective medium theories to rapidly predict the effective ion conductivity via analytical calculations. Furthermore, when the porosity approaches the percolation threshold, both the numerical and analytical methods predict a significantly enhanced Li-ion conductivity (1.74×10^-4 S/cm) that is in good agreement with experimental data (1.64×10^-4 S/cm). The present phase-field based multiscale model is generally applicable to predict both the microstructure patterns and the effective properties of heterogeneous solid electrolytes.

  12. TU-D-207B-01: A Prediction Model for Distinguishing Radiation Necrosis From Tumor Progression After Gamma Knife Radiosurgery Based On Radiomics Features From MR Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Z; MD Anderson Cancer Center, Houston, TX; Ho, A

    Purpose: To develop and validate a prediction model using radiomics features extracted from MR images to distinguish radiation necrosis from tumor progression for brain metastases treated with Gamma Knife radiosurgery. Methods: The images used to develop the model were T1 post-contrast MR scans from 71 patients who had had pathologic confirmation of necrosis or progression; 1 lesion was identified per patient (17 necrosis and 54 progression). Radiomics features were extracted from 2 images at 2 time points per patient, both obtained prior to resection. Each lesion was manually contoured on each image, and 282 radiomics features were calculated for each lesion. The correlation for each radiomics feature between the two time points was calculated within each group to identify a subset of features with distinct values between the two groups. The delta of this subset of radiomics features, characterizing changes from the earlier time to the later one, was included as a covariate to build a prediction model using support vector machines with a cubic polynomial kernel function. The model was evaluated with a 10-fold cross-validation. Results: Forty radiomics features were selected based on consistent correlation values of approximately 0 for the necrosis group and >0.2 for the progression group. In performing the 10-fold cross-validation, we narrowed this number down to 11 delta radiomics features for the model. This 11-delta-feature model showed an overall prediction accuracy of 83.1%, with a true positive rate of 58.8% in predicting necrosis and 90.7% for predicting tumor progression. The area under the curve for the prediction model was 0.79. Conclusion: These delta radiomics features extracted from MR scans showed potential for distinguishing radiation necrosis from tumor progression. This tool may be a useful, noninvasive means of determining the status of an enlarging lesion after radiosurgery, aiding decision-making regarding surgical resection versus conservative medical management.
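
    The classification step described above (a support vector machine with a cubic polynomial kernel, evaluated by 10-fold cross-validation) can be reproduced in outline with scikit-learn; the feature matrix below is random stand-in data with roughly the reported class balance, not the study's delta-radiomics features.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Stand-in data: 71 lesions x 11 delta features; labels 0 = necrosis (17),
      # 1 = progression (54), loosely mirroring the abstract's class balance.
      X = rng.normal(size=(71, 11))
      y = np.array([0] * 17 + [1] * 54)
      X[y == 1] += 0.8          # inject some separation so the example is not pure noise

      clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
      scores = cross_val_score(clf, X, y, cv=10)
      print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")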

  13. Mathematical modeling of high and low temperature heat pipes

    NASA Technical Reports Server (NTRS)

    Chi, S. W.

    1971-01-01

    Mathematical models are developed for calculating heat-transfer limitations of high-temperature heat pipes and heat-transfer limitations and temperature gradient of low temperature heat pipes. Calculated results are compared with the available experimental data from various sources to increase confidence in the present math models. Complete listings of two computer programs for high- and low-temperature heat pipes respectively are appended. These programs enable the performance of heat pipes with wrapped-screen, rectangular-groove or screen-covered rectangular-groove wick to be predicted.

  14. [Study of building quantitative analysis model for chlorophyll in winter wheat with reflective spectrum using MSC-ANN algorithm].

    PubMed

    Liang, Xue; Ji, Hai-yan; Wang, Peng-xin; Rao, Zhen-hong; Shen, Bing-hui

    2010-01-01

    Multiplicative scatter correction (MSC) preprocessing was used to effectively reject noise in the original spectra produced by environmental physical factors. The principal components of the near-infrared spectra were then calculated by nonlinear iterative partial least squares (NIPALS) before building the back-propagation artificial neural network (BP-ANN) model, and the number of principal components was determined by cross validation. The calculated principal components were used as the inputs of the artificial neural network model, which was used to find the relation between the chlorophyll content of winter wheat and the reflectance spectrum and thereby predict the chlorophyll content. The correlation coefficient (r) of the calibration set was 0.9604, while the standard deviation (SD) and relative standard deviation (RSD) were 0.187 and 5.18%, respectively. The correlation coefficient (r) of the prediction set was 0.9600, and the standard deviation (SD) and relative standard deviation (RSD) were 0.145 and 4.21%, respectively. These results indicate that the MSC-ANN algorithm can effectively reject noise in the original spectra produced by environmental physical factors and establish an accurate model for predicting the chlorophyll content of living leaves, offering a replacement for the classical method and meeting the needs of fast analysis of agricultural products.
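
    Multiplicative scatter correction itself is a small calculation: each spectrum is regressed against a reference (usually the mean spectrum) and corrected with the fitted offset and slope. A minimal sketch on synthetic spectra (not the winter-wheat data) is shown below; the corrected spectra would then feed the NIPALS and BP-ANN steps described above.

      import numpy as np

      def msc(spectra, reference=None):
          """Multiplicative scatter correction of a (samples x wavelengths) matrix."""
          ref = spectra.mean(axis=0) if reference is None else reference
          corrected = np.empty_like(spectra)
          for i, spec in enumerate(spectra):
              slope, offset = np.polyfit(ref, spec, 1)   # regress spectrum on the reference
              corrected[i] = (spec - offset) / slope     # remove additive and multiplicative scatter
          return corrected

      # Synthetic demo: one underlying spectrum distorted by random offsets and gains.
      rng = np.random.default_rng(1)
      wavelengths = np.linspace(0, 1, 200)
      base = np.exp(-((wavelengths - 0.5) ** 2) / 0.02)
      raw = np.array([g * base + o for g, o in zip(rng.uniform(0.8, 1.2, 10),
                                                   rng.uniform(-0.1, 0.1, 10))])
      print("scatter before:", raw.std(axis=0).mean().round(4),
            "after:", msc(raw).std(axis=0).mean().round(4))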

  15. Modeling and analysis of the TF30-P-3 compressor system with inlet pressure distortion

    NASA Technical Reports Server (NTRS)

    Mazzawy, R. S.; Banks, G. A.

    1976-01-01

    Circumferential inlet distortion testing of a TF30-P-3 afterburning turbofan engine was conducted at NASA-Lewis Research Center. Pratt and Whitney Aircraft analyzed the data using its multiple segment parallel compressor model and classical compressor theory. Distortion attenuation analysis resulted in a detailed flow field calculation with good agreement between multiple segment model predictions and the test data. Sensitivity of the engine stall line to circumferential inlet distortion was calculated on the basis of parallel compressor theory to be more severe than indicated by the data. However, the calculated stall site location was in agreement with high response instrumentation measurements.

  16. Ionizing radiation calculations and comparisons with LDEF data

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.; Watts, J. W., Jr.

    1992-01-01

    In conjunction with the analysis of LDEF ionizing radiation dosimetry data, a calculational program is in progress to aid in data interpretation and to assess the accuracy of current radiation models for future mission applications. To estimate the ionizing radiation environment at the LDEF dosimeter locations, scoping calculations for a simplified (one dimensional) LDEF mass model were made of the primary and secondary radiations produced as a function of shielding thickness due to trapped proton, galactic proton, and atmospheric (neutron and proton cosmic ray albedo) exposures. Preliminary comparisons of predictions with LDEF induced radioactivity and dose measurements were made to test a recently developed model of trapped proton anisotropy.

  17. Nuclear matrix elements for 0νβ−β− decays: Comparative analysis of the QRPA, shell model and IBM predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Civitarese, Osvaldo; Suhonen, Jouni

    In this work we report on general properties of the nuclear matrix elements (NMEs) involved in the neutrinoless double β− decays (0νβ−β− decays) of several nuclei. A summary of the values of the NMEs calculated over the years by the Jyväskylä-La Plata collaboration is presented. These NMEs, calculated in the framework of the quasiparticle random phase approximation (QRPA), are compared with those of the other available calculations, such as the interacting shell model (ISM) and the interacting boson model (IBA-2).

  18. Comparative Protein Structure Modeling Using MODELLER

    PubMed Central

    Webb, Benjamin; Sali, Andrej

    2016-01-01

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
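
    For orientation, a basic comparative-modeling run with MODELLER is driven by a short Python script of the form sketched below, following the pattern of the TvLDH example mentioned above; the alignment and template file names are those used in the MODELLER basic tutorial and stand in for the user's own files.

      # Minimal MODELLER automodel script (tutorial-style file and sequence names).
      from modeller import environ
      from modeller.automodel import automodel

      env = environ()
      env.io.atom_files_directory = ['.']          # where the template PDB file lives

      a = automodel(env,
                    alnfile='TvLDH-1bdmA.ali',     # target-template alignment (PIR format)
                    knowns='1bdmA',                # template structure code in the alignment
                    sequence='TvLDH')              # target sequence name in the alignment
      a.starting_model = 1
      a.ending_model = 5                           # build five candidate models
      a.make()                                     # run comparative modeling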

  19. Using ‘particle in a box’ models to calculate energy levels in semiconductor quantum well structures

    NASA Astrophysics Data System (ADS)

    Ebbens, A. T.

    2018-07-01

    Although infinite potential ‘particle in a box’ models are widely used to introduce quantised energy levels, their predictions cannot be quantitatively compared with atomic emission spectra. Here, this problem is overcome by describing how both infinite and finite potential well models can be used to calculate the confined energy levels of semiconductor quantum wells. This is done using physics and mathematics concepts that are accessible to pre-university students. The results of the models are compared with experimental data and their accuracy is discussed.
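
    As a concrete illustration of the infinite-well case discussed above, the sketch below evaluates E_n = n²π²ħ²/(2m*L²) for the lowest confined levels; the 10 nm well width and the GaAs-like effective mass of 0.067 m_e are illustrative assumptions, not values taken from the paper.

    ```python
    import numpy as np

    HBAR = 1.054571817e-34   # reduced Planck constant, J s
    M_E = 9.1093837015e-31   # electron rest mass, kg
    EV = 1.602176634e-19     # J per eV

    def infinite_well_levels(width_m, m_eff_ratio, n_max=3):
        """Confined levels E_n = n^2 * pi^2 * hbar^2 / (2 m* L^2) of an infinite
        square well, returned in eV."""
        m_star = m_eff_ratio * M_E
        n = np.arange(1, n_max + 1)
        return n**2 * np.pi**2 * HBAR**2 / (2.0 * m_star * width_m**2) / EV

    # Illustrative parameters: a 10 nm well with effective mass 0.067 m_e (GaAs-like)
    print(np.round(infinite_well_levels(10e-9, 0.067), 3))   # roughly 0.06, 0.22, 0.51 eV
    ```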

  20. Validation of a Solid Rocket Motor Internal Environment Model

    NASA Technical Reports Server (NTRS)

    Martin, Heath T.

    2017-01-01

    In a prior effort, a thermal/fluid model of the interior of Penn State University's laboratory-scale Insulation Test Motor (ITM) was constructed to predict both the convective and radiative heat transfer to the interior walls of the ITM with a minimum of empiricism. These predictions were then compared to values of total and radiative heat flux measured in a previous series of ITM test firings to assess the capabilities and shortcomings of the chosen modeling approach. Though the calculated fluxes reasonably agreed with those measured during testing, this exercise revealed means of improving the fidelity of the model to, in the case of the thermal radiation, enable direct comparison of the measured and calculated fluxes and, for the total heat flux, compute a value indicative of the average measured condition. By replacing the P1-Approximation with the discrete ordinates (DO) model for the solution of the gray radiative transfer equation, the radiation intensity field in the optically thin region near the radiometer is accurately estimated, allowing the thermal radiation flux to be calculated on the heat-flux sensor itself, which was then compared directly to the measured values. Though fully coupling the wall thermal response with the flow model was not attempted due to the excessive computational time required, a separate wall thermal response model was used to better estimate the average temperature of the graphite surfaces upstream of the heat flux gauges and improve the accuracy of both the total and radiative heat flux computations. The success of this modeling approach increases confidence in the ability of state-of-the-art thermal and fluid modeling to accurately predict SRM internal environments, offers corrections to older methods, and supplies a tool for further studies of the dynamics of SRM interiors.

  1. Creep Damage Analysis of a Lattice Truss Panel Structure

    NASA Astrophysics Data System (ADS)

    Jiang, Wenchun; Li, Shaohua; Luo, Yun; Xu, Shugen

    2017-01-01

    The creep failure of a lattice truss sandwich panel structure has been predicted by the finite element method (FEM). The creep damage is calculated for three kinds of stresses: as-brazed residual stress, operating thermal stress and mechanical load. The creep damage under tensile and compressive loads has been calculated and compared. The creep rates calculated by FEM and by the Gibson-Ashby and Hodge-Dunand models have also been compared. The results show that the creep failure is located at the fillet under both tensile and compressive loads. The damage rate at the fillet under tensile load is 50 times that under compressive load. The lattice truss panel structure has a better creep resistance under compressive load than under tensile load, because the creep and the stress triaxiality at the fillet are reduced under compressive load. The maximum creep strain at the fillet and the equivalent creep strain of the panel structure increase with increasing applied load. Of the Gibson-Ashby and Hodge-Dunand models, the modified Gibson-Ashby model agrees best with the FEM predictions. However, a more accurate model considering the size effect of the structure still needs to be developed.

  2. Supercontinent cycles and the calculation of absolute palaeolongitude in deep time.

    PubMed

    Mitchell, Ross N; Kilian, Taylor M; Evans, David A D

    2012-02-08

    Traditional models of the supercontinent cycle predict that the next supercontinent--'Amasia'--will form either where Pangaea rifted (the 'introversion' model) or on the opposite side of the world (the 'extroversion' models). Here, by contrast, we develop an 'orthoversion' model whereby a succeeding supercontinent forms 90° away, within the great circle of subduction encircling its relict predecessor. A supercontinent aggregates over a mantle downwelling but then influences global-scale mantle convection to create an upwelling under the landmass. We calculate the minimum moment of inertia about which oscillatory true polar wander occurs owing to the prolate shape of the non-hydrostatic Earth. By fitting great circles to each supercontinent's true polar wander legacy, we determine that the arc distances between successive supercontinent centres (the axes of the respective minimum moments of inertia) are 88° for Nuna to Rodinia and 87° for Rodinia to Pangaea--as predicted by the orthoversion model. Supercontinent centres can be located back into Precambrian time, providing fixed points for the calculation of absolute palaeolongitude over billion-year timescales. Palaeogeographic reconstructions additionally constrained in palaeolongitude will provide increasingly accurate estimates of ancient plate motions and palaeobiogeographic affinities.
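
    The arc-distance test at the heart of the orthoversion argument reduces to a great-circle separation between two minimum-inertia axes. The sketch below computes that angular separation from latitude/longitude pairs; the coordinates are hypothetical placeholders, not the reconstructed supercontinent centres from the study.

    ```python
    import numpy as np

    def arc_distance_deg(lat1, lon1, lat2, lon2):
        """Great-circle angular separation (degrees) between two points/axes on a sphere."""
        p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
        cos_d = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(l2 - l1)
        return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

    # Hypothetical centre coordinates for two successive supercontinents (illustrative only);
    # an orthoversion-style succession would give a separation near 90 degrees.
    print(round(arc_distance_deg(10.0, 100.0, 5.0, 190.0), 1))
    ```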

  3. Dilution physics modeling: Dissolution/precipitation chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onishi, Y.; Reid, H.C.; Trent, D.S.

    This report documents progress made to date on integrating dilution/precipitation chemistry and new physical models into the TEMPEST thermal-hydraulics computer code. Implementation of dissolution/precipitation chemistry models is necessary for predicting nonhomogeneous, time-dependent, physical/chemical behavior of tank wastes with and without a variety of possible engineered remediation and mitigation activities. Such behavior includes chemical reactions, gas retention, solids resuspension, solids dissolution and generation, solids settling/rising, and convective motion of physical and chemical species. Thus this model development is important from the standpoint of predicting the consequences of various engineered activities, such as mitigation by dilution, retrieval, or pretreatment, that can affect safe operations. The integration of a dissolution/precipitation chemistry module allows the various phase species concentrations to enter into the physical calculations that affect the TEMPEST hydrodynamic flow calculations. The yield strength model of non-Newtonian sludge correlates yield to a power function of solids concentration. Likewise, shear stress is concentration-dependent, and the dissolution/precipitation chemistry calculations develop the species concentration evolution that produces fluid flow resistance changes. Dilution of waste with pure water, molar concentrations of sodium hydroxide, and other chemical streams can be analyzed for the reactive species changes and hydrodynamic flow characteristics.

  4. Statistical models for predicting pair dispersion and particle clustering in isotropic turbulence and their applications

    NASA Astrophysics Data System (ADS)

    Zaichik, Leonid I.; Alipchenkov, Vladimir M.

    2009-10-01

    The purpose of this paper is twofold: (i) to advance and extend the statistical two-point models of pair dispersion and particle clustering in isotropic turbulence that were previously proposed by Zaichik and Alipchenkov (2003 Phys. Fluids 15 1776-87; 2007 Phys. Fluids 19 113308) and (ii) to present some applications of these models. The models developed are based on a kinetic equation for the two-point probability density function of the relative velocity distribution of two particles. These models predict the pair relative velocity statistics and the preferential accumulation of heavy particles in stationary and decaying homogeneous isotropic turbulent flows. Moreover, the models are applied to predict the effect of particle clustering on turbulent collisions, sedimentation and intensity of microwave radiation as well as to calculate the mean filtered subgrid stress of the particulate phase. Model predictions are compared with direct numerical simulations and experimental measurements.

  5. A nonlinear fracture mechanics approach to the growth of small cracks

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.

    1983-01-01

    An analytical model of crack closure is used to study the crack growth and closure behavior of small cracks in plates and at notches. The calculated crack opening stresses for small and large cracks, together with elastic and elastic plastic fracture mechanics analyses, are used to correlate crack growth rate data. At equivalent elastic stress intensity factor levels, calculations predict that small cracks in plates and at notches should grow faster than large cracks because the applied stress needed to open a small crack is less than that needed to open a large crack. These predictions agree with observed trends in test data. The calculations from the model also imply that many of the stress intensity factor thresholds that are developed in tests with large cracks and with load reduction schemes do not apply to the growth of small cracks. The current calculations are based upon continuum mechanics principles and, thus, some crack size and grain structure exist where the underlying fracture mechanics assumptions become invalid because of material inhomogeneity (grains, inclusions, etc.). Admittedly, much more effort is needed to develop the mechanics of a noncontinuum. Nevertheless, these results indicate the importance of crack closure in predicting the growth of small cracks from large crack data.

  6. Importance of Ambipolar Electric Field in the Ion Loss from Mars - Results from a Multi-fluid MHD Model with the Electron Pressure Equation Included

    NASA Astrophysics Data System (ADS)

    Ma, Y.; Dong, C.; van der Holst, B.; Nagy, A. F.; Bougher, S. W.; Toth, G.; Cravens, T.; Yelle, R. V.; Jakosky, B. M.

    2017-12-01

    The multi-fluid (MF) magnetohydrodynamic (MHD) model of Mars is further improved by solving an additional electron pressure equation. Through the electron pressure equation, the electron temperature is calculated based on the effects of various electron-related heating and cooling processes (e.g. photo-electron heating, electron-neutral collisions and electron-ion collisions), and thus the improved model is able to calculate the electron temperature and the electron pressure force self-consistently. Electron thermal conductivity is also considered in the calculation. Model results of a normal case with the electron pressure equation included (MFPe) are compared in detail to an identical case using the regular MF model to identify the effect of the improved physics. We found that when the electron pressure equation is included, the general interaction patterns are similar to those of the case with no electron pressure equation. The model with the electron pressure equation predicts that the electron temperature is much higher than the ion temperature in the ionosphere, consistent with both Viking and MAVEN observations. The inclusion of the electron pressure equation significantly increases the total escape fluxes predicted by the model, indicating the importance of the ambipolar electric field (electron pressure gradient) in driving the ion loss from Mars.

  7. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    PubMed

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
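
    A minimal sketch of the PLS-plus-RMSEP workflow described above is given below, using scikit-learn; the random matrices stand in for NIR spectra and HPLC reference potencies, and the number of latent variables is an arbitrary assumption rather than the optimised value from the study.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Stand-in data: rows are tablets, columns are NIR absorbances; y is the HPLC potency.
    rng = np.random.default_rng(0)
    X_cal, y_cal = rng.normal(size=(60, 200)), rng.normal(loc=100.0, scale=3.0, size=60)
    X_test, y_test = rng.normal(size=(20, 200)), rng.normal(loc=100.0, scale=3.0, size=20)

    pls = PLSRegression(n_components=5).fit(X_cal, y_cal)   # latent variables: arbitrary choice
    y_pred = pls.predict(X_test).ravel()

    # Root mean square error of prediction on the external test set
    rmsep = np.sqrt(np.mean((y_pred - y_test) ** 2))
    print(f"RMSEP = {rmsep:.2f} (same units as the reference potency)")
    ```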

  8. A narrow-band k-distribution model with single mixture gas assumption for radiative flows

    NASA Astrophysics Data System (ADS)

    Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon

    2018-06-01

    In the present study, the narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed by utilizing the line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems including the spectral radiance distribution emitted from one-dimensional slabs and the radiative heat transfer in a truncated conical enclosure. It was shown that the results are accurate and physically reliable by comparing with available data. To examine the applicability of the methods to realistic multi-dimensional problems in non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and then the infrared signature emitted from an aircraft exhaust plume was predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. It was shown that the predicted radiative cooling for the combustion chamber is physically more accurate than other predictions, and is as accurate as that by the LBL calculations. It was found that the infrared signature of aircraft exhaust plume can also be obtained accurately, equivalent to the LBL calculations, by using the present narrow-band approach with a much improved numerical efficiency.

  9. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs

    2014-11-15

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the proposed structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period L_eq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise using the originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to any other statistical method in traffic noise level prediction. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed, user-friendly software package. • The results are compared with classical statistical methods. • The ANN model shows much better predictive capability than the classical methods.
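
    To make the comparison concrete, the sketch below fits both a small feed-forward network and an ordinary linear regression to synthetic traffic data; the input columns (vehicle counts per class and mean speed) and the noise-generating formula are fabricated stand-ins for the measured L_eq data used in the paper.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in data: light vehicles/h, heavy vehicles/h, buses/h, mean speed (km/h)
    rng = np.random.default_rng(1)
    X = rng.uniform([0, 0, 0, 20], [1000, 200, 50, 90], size=(300, 4))
    leq = 40 + 10 * np.log10(X[:, 0] + 5 * X[:, 1] + 10 * X[:, 2] + 1) \
          - 0.02 * X[:, 3] + rng.normal(scale=1.0, size=300)

    X_tr, X_te, y_tr, y_te = train_test_split(X, leq, random_state=0)
    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
    ann.fit(X_tr, y_tr)
    lin = LinearRegression().fit(X_tr, y_tr)
    print("ANN R^2:", round(ann.score(X_te, y_te), 3),
          "| linear R^2:", round(lin.score(X_te, y_te), 3))
    ```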

  10. Decay properties of 256-339Ds superheavy nuclei

    NASA Astrophysics Data System (ADS)

    Santhosh, K. P.; Nithya, C.

    2017-09-01

    The decay properties of 84 isotopes of the superheavy element darmstadtium (Z = 110) have been studied using various theoretical models. The proton emission half-lives, the alpha decay half-lives, the spontaneous fission half-lives and the cluster decay half-lives of all the isotopes are evaluated. The one-proton emission half-lives and the alpha decay half-lives are predicted using the Coulomb and proximity potential model for deformed nuclei (CPPMDN). The calculated alpha half-lives are compared with the available experimental results as well as with the predictions of other theoretical models. The predicted half-lives match well with the experimental results. The one-proton half-lives are also compared with the predictions of other formalisms. The shell-effect-dependent formula of Santhosh et al. has been employed for calculating the spontaneous fission half-lives. A theoretical comparison of spontaneous fission half-lives with four different formalisms is performed. By comparing the one-proton emission half-lives, the alpha decay half-lives and the spontaneous fission half-lives, decay modes are predicted for all the isotopes of Ds. It is seen that the isotopes within the range 256 ≤ A ≤ 263 and 279 ≤ A ≤ 339 decay through spontaneous fission and the isotopes 264 ≤ A ≤ 278 exhibit alpha decay. Cluster decay half-lives are calculated using different models, including the Coulomb and proximity potential model (CPPM), for determining the magicities in the superheavy region. The effect of magicity at N = 184 and N = 202 was confirmed from the plot of log₁₀T₁/₂ versus the neutron number of the daughter nuclei for the emission of different clusters. We hope that the systematic and detailed study of all the possible decay modes of 256-339Ds using various theoretical models will be helpful in the experimental identification of isotopes of the element in the future.

  11. Calculations on the Back of an Envelope Model: Applying Seasonal Fecundity Models to Species’ Range Limits

    EPA Science Inventory

    Most predictions of the effect of climate change on species’ ranges are based on correlations between climate and current species’ distributions. These so-called envelope models may be a good first approximation, but we need demographically mechanistic models to incorporate the ...

  12. Optical photon transport in powdered-phosphor scintillators. Part II. Calculation of single-scattering transport parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poludniowski, Gavin G.; Evans, Philip M.

    2013-04-15

    Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd₂O₂S:Tb screens; (ii) investigate the impact of size-distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical optics-based models for these parameters and compare to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd₂O₂S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size-distribution (σ = 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.). Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine-structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.

  13. THREE-DIMENSIONAL MODEL FOR HYPERTHERMIA CALCULATIONS

    EPA Science Inventory

    Realistic three-dimensional models that predict temperature distributions with a high degree of spatial resolution in bodies exposed to electromagnetic (EM) fields are required in the application of hyperthermia for cancer treatment. To ascertain the thermophysiologic response of...

  14. Rotor Wake/Stator Interaction Noise Prediction Code Technical Documentation and User's Manual

    NASA Technical Reports Server (NTRS)

    Topol, David A.; Mathews, Douglas C.

    2010-01-01

    This report documents the improvements and enhancements made by Pratt & Whitney to two NASA programs which together calculate noise from a rotor wake/stator interaction. The code is a combination of subroutines from two NASA programs with many new features added by Pratt & Whitney. To do a calculation, V072 first uses a semi-empirical wake prediction to calculate the rotor wake characteristics at the stator leading edge. Results from the wake model are then automatically input into a rotor wake/stator interaction analytical noise prediction routine which calculates inlet and aft sound power levels for the blade-passage-frequency tones and their harmonics, along with the complex radial mode amplitudes. The code allows for a noise calculation to be performed for a compressor rotor wake/stator interaction, a fan wake/FEGV interaction, or a fan wake/core stator interaction. This report is split into two parts: the first part discusses the technical documentation of the program as improved by Pratt & Whitney; the second part is a user's manual which describes how input files are created and how the code is run.

  15. Direct model-based predictive control scheme without cost function for voltage source inverters with reduced common-mode voltage

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin

    2018-04-01

    This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without repetitive calculation of a cost function. To adjust the output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses voltage vectors, as finite control resources, excluding the zero voltage vectors, which produce CMVs of ±Vdc/2 in the VSI. In model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the effort of repeatedly evaluating a cost function. The two non-zero voltage vectors are optimally allocated so that the output current approaches the reference current as closely as possible. Thus, low CMV, rapid current-following capability and sufficient output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.

  16. Sex-Specific Prediction Models for Sleep Apnea From the Hispanic Community Health Study/Study of Latinos.

    PubMed

    Shah, Neomi; Hanna, David B; Teng, Yanping; Sotres-Alvarez, Daniela; Hall, Martica; Loredo, Jose S; Zee, Phyllis; Kim, Mimi; Yaggi, H Klar; Redline, Susan; Kaplan, Robert C

    2016-06-01

    We developed and validated the first-ever sleep apnea (SA) risk calculator in a large population-based cohort of Hispanic/Latino subjects. Cross-sectional data on adults from the Hispanic Community Health Study/Study of Latinos (2008-2011) were analyzed. Subjective and objective sleep measurements were obtained. Clinically significant SA was defined as an apnea-hypopnea index ≥ 15 events per hour. Using logistic regression, four prediction models were created: three sex-specific models (female-only, male-only, and a sex × covariate interaction model to allow differential predictor effects), and one overall model with sex included as a main effect only. Models underwent 10-fold cross-validation and were assessed by using the C statistic. Candidate predictors of SA were assessed; a total of 17 variables were considered. A total of 12,158 participants had complete sleep data available; 7,363 (61%) were women. The population-weighted prevalence of SA (apnea-hypopnea index ≥ 15 events per hour) was 6.1% in female subjects and 13.5% in male subjects. Male-only (C statistic, 0.808) and female-only (C statistic, 0.836) prediction models had the same predictor variables (ie, age, BMI, self-reported snoring). The sex-interaction model (C statistic, 0.836) contained sex, age, age × sex, BMI, BMI × sex, and self-reported snoring. The final overall model (C statistic, 0.832) contained age, BMI, snoring, and sex. We developed two websites for our SA risk calculator: one in English (https://www.montefiore.org/sleepapneariskcalc.html) and another in Spanish (http://www.montefiore.org/sleepapneariskcalc-es.html). We created an internally validated, highly discriminating, well-calibrated, and parsimonious prediction model for SA. Contrary to the study hypothesis, the variables did not have different predictive magnitudes in male and female subjects. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
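
    The modelling step itself is a standard logistic regression scored by the C statistic (area under the ROC curve). The sketch below reproduces that step on simulated data with the predictors named in the abstract (age, BMI, snoring, sex); the coefficients and prevalence are invented, so the resulting C statistic is not comparable to the published values.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n = 2000
    df = pd.DataFrame({
        "age": rng.integers(18, 75, n),
        "bmi": rng.normal(29.0, 5.0, n),
        "snoring": rng.integers(0, 2, n),
        "male": rng.integers(0, 2, n),
    })
    # Simulated outcome: sleep apnea status drawn from an assumed logistic model
    logit = -9.0 + 0.05 * df["age"] + 0.12 * df["bmi"] + 0.8 * df["snoring"] + 0.9 * df["male"]
    df["sa"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    features = ["age", "bmi", "snoring", "male"]
    model = LogisticRegression(max_iter=1000).fit(df[features], df["sa"])
    c_stat = roc_auc_score(df["sa"], model.predict_proba(df[features])[:, 1])
    print(f"In-sample C statistic: {c_stat:.3f}")
    ```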

  17. Modeling the Hydration Layer around Proteins: Applications to Small- and Wide-Angle X-Ray Scattering

    PubMed Central

    Virtanen, Jouko Juhani; Makowski, Lee; Sosnick, Tobin R.; Freed, Karl F.

    2011-01-01

    Small-/wide-angle x-ray scattering (SWAXS) experiments can aid in determining the structures of proteins and protein complexes, but success requires accurate computational treatment of solvation. We compare two methods by which to calculate SWAXS patterns. The first approach uses all-atom explicit-solvent molecular dynamics (MD) simulations. The second, far less computationally expensive method involves prediction of the hydration density around a protein using our new HyPred solvation model, which is applied without the need for additional MD simulations. The SWAXS patterns obtained from the HyPred model compare well to both experimental data and the patterns predicted by the MD simulations. Both approaches exhibit advantages over existing methods for analyzing SWAXS data. The close correspondence between calculated and observed SWAXS patterns provides strong experimental support for the description of hydration implicit in the HyPred model. PMID:22004761

  18. Overview of the 1986--1987 atomic mass predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haustein, P.E.

    1988-07-01

    The need for a comprehensive update of earlier sets of atomic mass predictions is documented. A project that grew from this need and which resulted in the preparation of the 1986--1987 Atomic Mass Predictions is summarized. Ten sets of new mass predictions and expository text from a variety of types of mass models are combined with the latest evaluation of experimentally determined atomic masses. The methodology employed in constructing these mass predictions is outlined. The models are compared with regard to their reproduction of the experimental mass surface and their use of varying numbers of adjustable parameters. Plots are presented, for each set of predictions, of differences between model calculations and the measured masses. These plots may be used to estimate the reliability of the new mass predictions in unmeasured regions that border the experimentally known mass surface. copyright 1988 Academic Press, Inc.

  19. Quality assessment of butter cookies applying multispectral imaging

    PubMed Central

    Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne

    2013-01-01

    A method for characterization of butter cookie quality by assessing the surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time. It is presented as a quadratic response surface. The investigated process window was the intervals 4–16 min and 160–200°C in a forced convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning predictions were in the interval 400–700 nm and the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to correctly estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center. PMID:24804036
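
    The quadratic response surface mentioned above can be fitted by ordinary least squares on the terms 1, t, T, tT, t² and T². The sketch below does this on synthetic scores; the coefficients and the noise model are placeholders, not the fitted surface from the study.

    ```python
    import numpy as np

    def design_matrix(t, T):
        """Quadratic response-surface terms for baking time t (min) and temperature T (degC)."""
        return np.column_stack([np.ones_like(t), t, T, t * T, t**2, T**2])

    # Synthetic browning scores over the 4-16 min / 160-200 degC process window
    rng = np.random.default_rng(3)
    t = rng.uniform(4.0, 16.0, 40)
    T = rng.uniform(160.0, 200.0, 40)
    browning = 0.02 * (t - 4.0) * (T - 160.0) + rng.normal(scale=0.3, size=40)

    beta, *_ = np.linalg.lstsq(design_matrix(t, T), browning, rcond=None)
    print("coefficients for [1, t, T, t*T, t^2, T^2]:", np.round(beta, 4))
    ```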

  20. A prescribed wake rotor inflow and flow field prediction analysis, user's manual and technical approach

    NASA Technical Reports Server (NTRS)

    Egolf, T. A.; Landgrebe, A. J.

    1982-01-01

    A user's manual is provided which includes the technical approach for the Prescribed Wake Rotor Inflow and Flow Field Prediction Analysis. The analysis is used to provide the rotor wake induced velocities at the rotor blades for use in blade airloads and response analyses and to provide induced velocities at arbitrary field points such as at a tail surface. This analysis calculates the distribution of rotor wake induced velocities based on a prescribed wake model. Section operating conditions are prescribed from blade motion and controls determined by a separate blade response analysis. The analysis represents each blade by a segmented lifting line, and the rotor wake by discrete segmented trailing vortex filaments. Blade loading and circulation distributions are calculated based on blade element strip theory including the local induced velocity predicted by the numerical integration of the Biot-Savart Law applied to the vortex wake model.
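
    The kernel of such a prescribed-wake calculation is the Biot-Savart induction of a straight vortex segment. The sketch below implements the standard finite-segment form (as given, e.g., by Katz and Plotkin) with a small core cutoff; it is a generic illustration, not the discretisation used in the analysis itself.

    ```python
    import numpy as np

    def segment_induced_velocity(p, a, b, gamma, core=1e-6):
        """Velocity induced at point p by a straight vortex segment a->b with circulation
        gamma, from the Biot-Savart law; a small core cutoff avoids the on-axis singularity."""
        r1, r2 = p - a, p - b
        cross = np.cross(r1, r2)
        denom = np.dot(cross, cross)
        if denom < core**2:
            return np.zeros(3)
        r0 = b - a
        k = np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
        return gamma / (4.0 * np.pi) * cross / denom * k

    # Sanity check: a very long segment approaches the infinite-filament result gamma/(2*pi*h)
    p = np.array([0.0, 1.0, 0.0])
    v = segment_induced_velocity(p, np.array([-1e4, 0.0, 0.0]), np.array([1e4, 0.0, 0.0]), 1.0)
    print(v[2], 1.0 / (2.0 * np.pi))
    ```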

  1. Fatigue life and crack growth prediction methodology

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.; Phillips, E. P.; Everett, R. A., Jr.

    1993-01-01

    The capabilities of a plasticity-induced crack-closure model and life-prediction code to predict fatigue crack growth and fatigue lives of metallic materials are reviewed. Crack-tip constraint factors, to account for three-dimensional effects, were selected to correlate large-crack growth rate data as a function of the effective-stress-intensity factor range (delta(K(sub eff))) under constant-amplitude loading. Some modifications to the delta(K(sub eff))-rate relations were needed in the near threshold regime to fit small-crack growth rate behavior and endurance limits. The model was then used to calculate small- and large-crack growth rates, and in some cases total fatigue lives, for several aluminum and titanium alloys under constant-amplitude, variable-amplitude, and spectrum loading. Fatigue lives were calculated using the crack growth relations and microstructural features like those that initiated cracks. Results from the tests and analyses agreed well.
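
    The life-prediction step amounts to integrating a crack-growth-rate relation expressed in delta(K(sub eff)) from an initial to a final crack length. The sketch below does this for a simple Paris-type law and a centre crack with a constant crack-opening ratio; the constants U, C and m are illustrative, not the calibrated effective-stress-intensity-rate relations used in the closure model.

    ```python
    import numpy as np

    def cycles_to_grow(a0, af, delta_S, U=0.75, C=1e-11, m=3.0, da=1e-6):
        """Fatigue life N from a Paris-type law da/dN = C * (dK_eff)^m, with
        dK_eff = U * dS * sqrt(pi * a) for a centre crack (geometry factor taken as 1).
        Units: a in m, dS in MPa, dK_eff in MPa*sqrt(m), da/dN in m/cycle."""
        a = np.arange(a0, af, da)
        dK_eff = U * delta_S * np.sqrt(np.pi * a)
        return float(np.sum(da / (C * dK_eff**m)))

    # Illustrative: grow a 0.5 mm crack to 10 mm under a 100 MPa stress range
    print(f"{cycles_to_grow(0.5e-3, 10e-3, 100.0):,.0f} cycles")
    ```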

  2. Aeroacoustic Analysis of a Simplified Landing Gear

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Khorrami, Mehdi, R.; Li, Fei

    2004-01-01

    A hybrid approach is used to investigate the noise generated by a simplified landing gear without small scale parts such as hydraulic lines and fasteners. The Ffowcs Williams and Hawkings equation is used to predict the noise at far-field observer locations from flow data provided by an unsteady computational fluid dynamics calculation. A simulation with 13 million grid points has been completed, and comparisons are made between calculations with different turbulence models. Results indicate that the turbulence model has a profound effect on the levels and character of the unsteadiness. Flow data on solid surfaces and a set of permeable surfaces surrounding the gear have been collected. Noise predictions using the porous surfaces appear to be contaminated by errors caused by large wake fluctuations passing through the surfaces. However, comparisons between predictions using the solid surfaces with the near-field CFD solution are in good agreement giving confidence in the far-field results.

  3. Numerical analysis of hypersonic turbulent film cooling flows

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Chen, C. P.; Wei, H.

    1992-01-01

    As a building block, numerical capabilities for predicting heat flux and turbulent flowfields of hypersonic vehicles require extensive model validations. Computational procedures for calculating turbulent flows and heat fluxes for supersonic film cooling with parallel slot injections are described in this study. Two injectant mass flow rates with matched and unmatched pressure conditions using the database of Holden et al. (1990) are considered. To avoid uncertainties associated with the boundary conditions in testing turbulence models, detailed three-dimensional flowfields of the injection nozzle were calculated. Two computational fluid dynamics codes, GASP and FDNS, with the algebraic Baldwin-Lomax and k-epsilon models with compressibility corrections were used. It was found that the B-L model, which resolves the near-wall viscous sublayer, is very sensitive to the inlet boundary conditions at the nozzle exit face. The k-epsilon models with improved wall functions are less sensitive to the inlet boundary conditions. The tests show that compressibility corrections are necessary for the k-epsilon model to realistically predict the heat fluxes of the hypersonic film cooling problems.

  4. Preoperative predictive model of cervical lymph node metastasis combining fluorine-18 fluorodeoxyglucose positron-emission tomography/computerized tomography findings and clinical factors in patients with oral or oropharyngeal squamous cell carcinoma.

    PubMed

    Mochizuki, Yumi; Omura, Ken; Nakamura, Shin; Harada, Hiroyuki; Shibuya, Hitoshi; Kurabayashi, Toru

    2012-02-01

    This study aimed to construct a preoperative predictive model of cervical lymph node metastasis using fluorine-18 fluorodeoxyglucose positron-emission tomography/computerized tomography ((18)F-FDG PET/CT) findings in patients with oral or oropharyngeal squamous cell carcinoma (SCC). Forty-nine such patients undergoing preoperative (18)F-FDG PET/CT and neck dissection or lymph node biopsy were enrolled. Retrospective comparisons with spatial correlation between PET/CT and the anatomical sites based on histopathological examinations of surgical specimens were performed. We calculated a logistic regression model, including the SUVmax-related variable. When using the optimal cutoff point criterion of probabilities calculated from the model that included either clinical factors and delayed-phase SUVmax ≥ 0.087 or clinical factors and maximum standardized uptake (SUV) increasing rate (SUV-IR) ≥ 0.100, it significantly increased the sensitivity, specificity, and accuracy (87.5%, 65.7%, and 75.2%, respectively). The use of predictive models that include clinical factors and delayed-phase SUVmax and SUV-IR improves preoperative nodal diagnosis. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Estimation of groundwater recharge to chalk and sandstone aquifers using simple soil models

    NASA Astrophysics Data System (ADS)

    Ragab, R.; Finch, J.; Harding, R.

    1997-03-01

    On the assumption that the water draining below the root zone is potentially available for groundwater recharge, two current UK methods for estimating annual groundwater recharge have been compared with a new soil model using data from four sites under permanent grass in the UK: two sites representative of the Chalk aquifer at Bridgest Farm (Hampshire) and Fleam Dyke (Cambridgeshire), and two sites on the Triassic sandstone at Bicton College (Devon) and Bacon Hall (Shropshire). A Four Root Layers Model (FRLM), the Penman-Grindley model and the UK Meteorological Office Rainfall and Evaporation Calculation System (MORECS) were used. The new soil model was run with potential evaporation as input both from the MORECS and from the Penman-Monteith equation. The models were run for the Chalk sites both with and without a bypass flow of 15% of rainfall. Bypass was not considered for the sandstone sites. The performance of the models was tested against neutron probe measurements of soil moisture deficits. In addition, the annual groundwater recharge estimated from the models was compared with the published values obtained from the 'zero flux plane' method. Generally, the Penman-Grindley model was more successful in predicting the time for soil to return to its field capacity than in predicting the magnitude of the soil moisture deficit. The annual groundwater recharge was predicted with reasonable accuracy. The MORECS tended to overestimate the soil moisture deficits and to delay the time at which the soil returns to its field capacity. The consequences were underestimates of annual groundwater recharge, owing either to the higher values of potential evaporation calculated from the MORECS or to the high available water capacity values associated with the soils under consideration. The new soil model (FRLM) predicts the soil moisture deficits successfully and hence is reliable in estimating the annual groundwater recharge. The model is capable of doing this with potential evaporation input calculated either from the MORECS or from the Penman-Monteith equation. The model also demonstrated that the inclusion of 15% of rainfall as bypass flow is viable for the Chalk sites.
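
    A heavily simplified, single-store sketch of the drainage-below-the-root-zone idea is given below (it is not the four-layer FRLM itself): daily rainfall and potential evaporation drive a soil moisture deficit, drainage is counted as potential recharge once the deficit returns to zero, and an optional fixed fraction of rainfall bypasses the soil store as done for the Chalk sites. Evaporation is taken at the potential rate regardless of deficit, and all parameter values are illustrative.

    ```python
    import numpy as np

    def bucket_recharge(rain, pet, capacity_mm=120.0, bypass_frac=0.0):
        """One-layer soil-water bucket: returns annual potential recharge (mm) from daily
        rainfall and potential evaporation series (mm/day)."""
        deficit, recharge = 0.0, 0.0
        for r, e in zip(rain, pet):
            bypass = bypass_frac * r                 # fraction of rain routed straight to recharge
            deficit += e - (r - bypass)              # PET deepens the deficit; infiltrating rain reduces it
            if deficit < 0.0:                        # soil back at field capacity: excess drains
                recharge += -deficit
                deficit = 0.0
            deficit = min(deficit, capacity_mm)      # deficit capped at the available water capacity
            recharge += bypass
        return recharge

    rng = np.random.default_rng(0)
    rain = rng.gamma(0.7, 4.0, 365)                                    # synthetic daily rainfall, mm
    pet = np.clip(2.0 + 2.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 365)), 0.2, None)
    print(f"annual recharge ~ {bucket_recharge(rain, pet, bypass_frac=0.15):.0f} mm")
    ```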

  6. Calculating the Responses of Self-Powered Radiation Detectors.

    NASA Astrophysics Data System (ADS)

    Thornton, D. A.

    Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self -Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. A simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It is a development of the work of several previous authors and has been incorporated into a computer program (called GENSPD), the predictions of which have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where there is poor agreement explanations have been sought and presented. Two major limitations of analytic models have been identified; neglect of current generation in insulators and over-simplified electron transport treatments. Both of these are developed in the current work. A second model based on the Explicit Representation of Radiation Sources and Transport (ERRST) is presented and evaluated for several SPDs in a PWR at beginning of life. The model incorporates simulation of the production and subsequent transport of neutrons, gamma rays and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion and point-kernel methods. The electron flux associated with the reactor gamma field as well as the internal charge deposition effects of the transport of photons and electrons have been calculated with coupled Monte Carlo calculations of photon and electron transport. The predicted response of a SPD is evaluated as the sum of contributions from individual response mechanisms.

  7. Evaluation and Enhancement of Calibration in the American College of Surgeons NSQIP Surgical Risk Calculator.

    PubMed

    Liu, Yaoming; Cohen, Mark E; Hall, Bruce L; Ko, Clifford Y; Bilimoria, Karl Y

    2016-08-01

    The American College of Surgeons (ACS) NSQIP Surgical Risk Calculator has been widely adopted as a decision aid and informed consent tool by surgeons and patients. Previous evaluations showed excellent discrimination and combined discrimination and calibration, but model calibration alone, and potential benefits of recalibration, were not explored. Because lack of calibration can lead to systematic errors in assessing surgical risk, our objective was to assess calibration and determine whether spline-based adjustments could improve it. We evaluated Surgical Risk Calculator model calibration, as well as discrimination, for each of 11 outcomes modeled from nearly 3 million patients (2010 to 2014). Using independent random subsets of data, we evaluated model performance for the Development (60% of records), Validation (20%), and Test (20%) datasets, where prediction equations from the Development dataset were recalibrated using restricted cubic splines estimated from the Validation dataset. We also evaluated performance on data subsets composed of higher-risk operations. The nonrecalibrated Surgical Risk Calculator performed well, but there was a slight tendency for predicted risk to be overestimated for lowest- and highest-risk patients and underestimated for moderate-risk patients. After recalibration, this distortion was eliminated, and p values for miscalibration were most often nonsignificant. Calibration was also excellent for subsets of higher-risk operations, though observed calibration was reduced due to instability associated with smaller sample sizes. Performance of NSQIP Surgical Risk Calculator models was shown to be excellent and improved with recalibration. Surgeons and patients can rely on the calculator to provide accurate estimates of surgical risk. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
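
    The recalibration idea can be sketched as refitting the observed outcome against a spline basis of the logit of the originally predicted risk, using held-out validation data. The snippet below uses scikit-learn's B-spline transformer as a stand-in for the restricted cubic splines used in the paper, and the simulated risks and outcomes are fabricated purely to illustrate the mechanics.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer

    def logit(p):
        p = np.clip(p, 1e-6, 1.0 - 1e-6)
        return np.log(p / (1.0 - p))

    # Simulated development-model risks and validation outcomes with deliberate miscalibration
    rng = np.random.default_rng(0)
    p_dev = rng.beta(1.0, 15.0, 5000)
    y_val = (rng.random(5000) < np.clip(1.3 * p_dev, 0.0, 1.0)).astype(int)

    recal = make_pipeline(SplineTransformer(n_knots=5, degree=3),
                          LogisticRegression(C=1e6, max_iter=1000))
    recal.fit(logit(p_dev).reshape(-1, 1), y_val)
    p_recal = recal.predict_proba(logit(p_dev).reshape(-1, 1))[:, 1]
    print("mean risk before/after recalibration vs observed:",
          round(p_dev.mean(), 3), round(p_recal.mean(), 3), round(y_val.mean(), 3))
    ```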

  8. Predicting flight delay based on multiple linear regression

    NASA Astrophysics Data System (ADS)

    Ding, Yi

    2017-08-01

    Delay of flights has been regarded as one of the toughest difficulties in aviation control. How to establish an effective model to handle the delay prediction problem is a significant task. To address the difficulty of predicting flight delay, this study proposes a method to model arriving flights and a multiple linear regression algorithm to predict delay, compared with the Naive-Bayes and C4.5 approaches. Experiments based on a realistic dataset of domestic airports show that the accuracy of the proposed model approximates 80%, which is an improvement over the Naive-Bayes and C4.5 approaches. The results show that this method is convenient to compute and can predict flight delays effectively. It can provide a decision basis for airport authorities.
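
    A minimal multiple-linear-regression sketch of the kind described above is shown below; the feature columns (scheduled hour, encoded airline, weather index, previous-leg delay) and the delay-generating formula are invented stand-ins, since the study's airport dataset is not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 1000
    X = np.column_stack([
        rng.integers(0, 24, n),        # scheduled arrival hour
        rng.integers(0, 5, n),         # airline (integer-encoded)
        rng.uniform(0.0, 1.0, n),      # weather severity index
        rng.exponential(15.0, n),      # previous-leg delay, minutes
    ])
    delay = 5.0 + 0.5 * X[:, 0] + 20.0 * X[:, 2] + 0.6 * X[:, 3] + rng.normal(0.0, 8.0, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, delay, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    print("held-out R^2:", round(model.score(X_te, y_te), 3))
    ```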

  9. Predictive power of the GRACE score in population with diabetes.

    PubMed

    Baeza-Román, Anna; de Miguel-Balsa, Eva; Latour-Pérez, Jaime; Carrillo-López, Andrés

    2017-12-01

    Current clinical practice guidelines recommend risk stratification in patients with acute coronary syndrome (ACS) upon admission to hospital. Diabetes mellitus (DM) is widely recognized as an independent predictor of mortality in these patients, although it is not included in the GRACE risk score. The objective of this study is to validate the GRACE risk score in a contemporary population and particularly in the subgroup of patients with diabetes, and to test the effects of including the DM variable in the model. Retrospective cohort study in patients included in the ARIAM-SEMICYUC registry, with a diagnosis of ACS and with available in-hospital mortality data. We tested the predictive power of the GRACE score, calculating the area under the ROC curve. We assessed the calibration of the score and the predictive ability based on type of ACS and the presence of DM. Finally, we evaluated the effect of including the DM variable in the model by calculating the net reclassification improvement. The GRACE score shows good predictive power for hospital mortality in the study population, with a moderate degree of calibration and no significant differences based on ACS type or the presence of DM. Including DM as a variable did not add any predictive value to the GRACE model. The GRACE score has an appropriate predictive power, with good calibration and clinical applicability in the subgroup of diabetic patients. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  10. Acceleration of Relativistic Electrons: A Comparison of Two Models

    NASA Astrophysics Data System (ADS)

    Green, J. C.; Kivelson, M. G.

    2001-12-01

    Observations of relativistic electron fluxes show order of magnitude increases during some geomagnetic storms. Many electron acceleration models have been proposed to explain the flux enhancements, but attempts to validate these models have yielded ambiguous results. Here we examine two models of electron acceleration: radial diffusion via enhanced ULF wave activity [Elkington et al., 1999] and acceleration by resonant interaction with whistler waves [Summers, 1998; Roth et al., 1999]. Two methods are used to compare observations with features predicted by the models. First, the evolution of phase space density as a function of L during flux enhancement events is evaluated. The phase space density (PSD) is calculated at constant first, second and third adiabatic invariants using data obtained by the CEPPAD-HIST instrument and the MFE instrument onboard the Polar spacecraft. Liouville's theorem states that PSD calculated at constant adiabatic invariants does not change with time unless some mechanism violates one of the invariants. The radial diffusion model predicts that only the flux invariant will be violated during the acceleration process, while acceleration by whistler waves violates the first invariant. Therefore, the two models predict a different evolution of the PSD as a function of time and L. Previous examinations of the evolution of PSD have yielded ambiguous results because PSD calculations are highly dependent on the global accuracy of magnetic field models. We examine the PSD versus L profiles for a series of geomagnetic storms and in addition determine how errors in the Tsyganenko 96 field model affect the results by comparing the measured magnetic field to the model magnetic field used in the calculations. Second, the evolution of the relativistic electron pitch angle distributions is evaluated. Previous studies of pitch angle distributions were limited because few spacecraft have the necessary instrumentation and global coverage. The CEPPAD-HIST instrument measures 16 look directions and, along with measurements from the MFE experiment, allows calculation of complete pitch angle distributions. The evolving orbit of the Polar spacecraft over the 6-year mission has given measurements over a wide range of L and local time. Using data extending over the entire mission, we use superposed epoch analysis to examine the evolution of pitch angle distributions during flux enhancement events as a function of L, magnetic local time, and storm phase.

  11. Monitoring Cosmic Radiation Risk: Comparisons between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-01-01

    proton PARMA PHITS-based Analytical Radiation Model in the Atmosphere PCAIRE Predictive Code for Aircrew Radiation Exposure PHITS Particle and...radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the...same dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6

  12. Monitoring Cosmic Radiation Risk: Comparisons Between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-07-05

    proton PARMA PHITS-based Analytical Radiation Model in the Atmosphere PCAIRE Predictive Code for Aircrew Radiation Exposure PHITS Particle and Heavy...transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the input...dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA

  13. Assessment of a numerical model to reproduce event‐scale erosion and deposition distributions in a braided river

    PubMed Central

    Measures, R.; Hicks, D. M.; Brasington, J.

    2016-01-01

    Numerical morphological modeling of braided rivers, using a physics-based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth-averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high-flow event. Evaluation of model performance primarily focused upon using high-resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach-scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers. PMID:27708477

  14. Assessment of a numerical model to reproduce event-scale erosion and deposition distributions in a braided river.

    PubMed

    Williams, R D; Measures, R; Hicks, D M; Brasington, J

    2016-08-01

    Numerical morphological modeling of braided rivers, using a physics-based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth-averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high-flow event. Evaluation of model performance primarily focused upon using high-resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach-scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers.
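
    The DEM-of-Difference bookkeeping used to evaluate the model reduces to thresholding elevation change and summing positive and negative volumes. The sketch below illustrates that step on random surfaces; the grids, cell size and level-of-detection threshold are placeholders, not the Rees River survey data.

    ```python
    import numpy as np

    def dod_budget(dem_pre, dem_post, cell_area, lod=0.10):
        """Erosion and deposition volumes from a DEM of Difference, ignoring elevation
        changes smaller than the level-of-detection threshold `lod` (same length units)."""
        dz = dem_post - dem_pre
        dz = np.where(np.abs(dz) < lod, 0.0, dz)
        deposition = dz[dz > 0].sum() * cell_area
        erosion = -dz[dz < 0].sum() * cell_area
        return erosion, deposition, deposition - erosion   # net = deposition - erosion

    rng = np.random.default_rng(0)
    pre = rng.normal(0.0, 0.3, (200, 300))                 # placeholder pre-event DEM (m)
    post = pre + rng.normal(0.0, 0.15, (200, 300))         # placeholder post-event DEM (m)
    print(dod_budget(pre, post, cell_area=1.0))            # volumes in m^3 for 1 m^2 cells
    ```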

  15. Impact of microbial efficiency to predict MP supply when estimating protein requirements of growing beef cattle from performance.

    PubMed

    Watson, A K; Klopfenstein, T J; Erickson, G E; MacDonald, J C; Wilkerson, V A

    2017-07-01

    Data from 16 trials were compiled to calculate microbial CP (MCP) production and MP requirements of growing cattle on high-forage diets. All cattle were individually fed diets with 28% to 72% corn cobs in addition to either alfalfa, corn silage, or sorghum silage at 18% to 60% of the diet (DM basis). The remainder of the diet consisted of protein supplement. Source of protein within the supplement varied and included urea, blood meal, corn gluten meal, dry distillers grains, feather meal, meat and bone meal, poultry by-product meal, soybean meal, and wet distillers grains. All trials included a urea-only treatment. Intake of all cattle within an experiment was held constant, as a percentage of BW, established by the urea-supplemented group. In each trial the base diet (forage and urea supplement) was MP deficient. Treatments consisted of increasing amounts of test protein replacing the urea supplement. As protein in the diet increased, ADG plateaued. Among experiments, ADG ranged from 0.11 to 0.73 kg. Three methods of calculating microbial efficiency were used to determine MP supply. Gain was then regressed against calculated MP supply to determine MP requirements for maintenance and gain. Method 1 (based on a constant 13% microbial efficiency as used by the beef NRC model) predicted an MP maintenance requirement of 3.8 g/kg BW and 385 g MP/kg gain. Method 2 calculated microbial efficiency using low-quality forage diets and predicted MP requirements of 3.2 g/kg BW for maintenance and 448 g/kg for gain. Method 3 (based on an equation predicting MCP yield from TDN intake, proposed by the Beef Cattle Nutrient Requirements Model [BCNRM]) predicted MP requirements of 3.1 g/kg BW for maintenance and 342 g/kg for gain. The factorial method of calculating MP maintenance requirements accounts for scurf, endogenous urinary, and metabolic fecal protein losses and averaged 4.2 g/kg BW. Cattle performance data demonstrate that formulating diets to meet the beef NRC model recommended MP maintenance requirement (3.8 g/kg BW) works well when using 13% microbial efficiency. Therefore, a change in how microbial efficiency is calculated necessitates a change in the proposed MP maintenance requirement so as not to oversupply or undersupply RUP. Using the 2016 BCNRM to predict MCP production and formulate diets to meet MP requirements also requires changing the MP maintenance requirement to 3.1 g/kg BW.
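
    The regression step described above, gain against calculated MP supply, can be sketched as a simple linear fit whose slope estimates the MP needed per kilogram of gain and whose intercept reflects the maintenance portion. The numbers below are fabricated for illustration and are not the trial data or the published requirement estimates.

    ```python
    import numpy as np

    # Hypothetical treatment means: average daily gain (kg/d) and calculated MP supply (g/d)
    adg = np.array([0.11, 0.25, 0.38, 0.52, 0.65, 0.73])
    mp_supply = np.array([290.0, 340.0, 390.0, 445.0, 495.0, 520.0])

    slope, intercept = np.polyfit(adg, mp_supply, 1)
    print(f"MP for gain ~ {slope:.0f} g per kg of gain; "
          f"MP at zero gain (maintenance portion) ~ {intercept:.0f} g/d for this hypothetical group")
    ```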

  16. Fragment-based 13C nuclear magnetic resonance chemical shift predictions in molecular crystals: An alternative to planewave methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, Joshua D.; Beran, Gregory J. O., E-mail: gregory.beran@ucr.edu; Monaco, Stephen

    2015-09-14

    We assess the quality of fragment-based ab initio isotropic 13C chemical shift predictions for a collection of 25 molecular crystals with eight different density functionals. We explore the relative performance of cluster, two-body fragment, combined cluster/fragment, and the planewave gauge-including projector augmented wave (GIPAW) models relative to experiment. When electrostatic embedding is employed to capture many-body polarization effects, the simple and computationally inexpensive two-body fragment model predicts both isotropic 13C chemical shifts and the chemical shielding tensors as well as both cluster models and the GIPAW approach. Unlike the GIPAW approach, hybrid density functionals can be used readily in a fragment model, and all four hybrid functionals tested here (PBE0, B3LYP, B3PW91, and B97-2) predict chemical shifts in noticeably better agreement with experiment than the four generalized gradient approximation (GGA) functionals considered (PBE, OPBE, BLYP, and BP86). A set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts is provided based on these benchmark calculations. Statistical cross-validation procedures are used to demonstrate the robustness of these fits.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blijderveen, Maarten van; University of Twente, Department of Thermal Engineering, Drienerlolaan 5, 7522 NB Enschede; Bramer, Eddy A.

    Highlights: ► We model piloted ignition times of wood and plastics. ► The model is applied to a packed bed. ► When the air flow is above a critical level, no ignition can take place. - Abstract: To gain insight into the startup of an incinerator, this article deals with piloted ignition. A newly developed model is described to predict the piloted ignition times of wood, PMMA and PVC. The model is based on the lower flammability limit and the adiabatic flame temperature at this limit. The incoming radiative heat flux, sample thickness and moisture content are some of the variables used. Not only the ignition time can be calculated with the model, but also the mass flux and surface temperature at ignition. The ignition times for softwoods and PMMA are mainly under-predicted. For hardwoods and PVC the predicted ignition times agree well with experimental results. Due to significant scatter in the experimental data, the mass flux and surface temperature calculated with the model are hard to validate. The model is applied to the startup of a municipal waste incineration plant. For this process a maximum allowable primary air flow is derived. When the primary air flow is above this maximum air flow, no ignition can be obtained.

  18. A Wall-Distance-Free k-ω SST Turbulence Model

    NASA Astrophysics Data System (ADS)

    Gleize, Vincent; Burnley, Victor

    2001-11-01

    In the calculation of flows around aircraft and aerodynamic bodies, the Shear-Stress Transport (SST) model by Menter has been used extensively due to its good prediction of flows with adverse pressure gradients. One main drawback of this model is the need to calculate the distance from the wall. While this is not a serious drawback for steady state calculations on non-moving grids, this calculation can become very cumbersome and expensive for unsteady simulations, especially when using unstructured grids. In this case, the wall-distance needs to be determined after each iteration. To avoid this problem, a new model is proposed which provides the benefits of the SST correction and avoids the freestream dependency of the solution, while not requiring the wall-distance. The first results for a wide range of test cases show that this model produces very good agreement with experimental data for flows with adverse pressure gradients, separation zones and shock-boundary layer interactions, closely matching the results obtained with the original SST model. This model should be very useful for unsteady calculations, such as store separation, grid adaptation, and other practical flows.

  19. Predicting the velocity and azimuth of fragments generated by the range destruction or random failure of rocket casings and tankage

    NASA Technical Reports Server (NTRS)

    Eck, Marshall; Mukunda, Meera

    1988-01-01

    A calculational method is described which provides a powerful tool for predicting solid rocket motor (SRM) casing and liquid rocket tankage fragmentation response. The approach properly partitions the available impulse to each major system-mass component. It uses the Pisces code developed by Physics International to couple the forces generated by an Eulerian-modeled gas flow field to a Lagrangian-modeled fuel and casing system. The details of the predictive analytical modeling process and the development of normalized relations for momentum partition as a function of SRM burn time and initial geometry are discussed. Methods for applying similar modeling techniques to liquid-tankage-overpressure failures are also discussed. Good agreement between predictions and observations is obtained for five specific events.

  20. Standard Model and New physics for ε'_K/ε_K

    NASA Astrophysics Data System (ADS)

    Kitahara, Teppei

    2018-05-01

    The first result of the lattice simulation and improved perturbative calculations have pointed to a discrepancy between data on ε'_K/ε_K and the standard-model (SM) prediction. Several new physics (NP) models can explain this discrepancy, and such NP models are likely to predict deviations of ℬ(K → πνν̄) from the SM predictions, which can be probed precisely in the near future by the NA62 and KOTO experiments. We present correlations between ε'_K/ε_K and ℬ(K → πνν̄) in two types of NP scenarios: a box-dominated scenario and a Z-penguin-dominated one. It is shown that different correlations are predicted and that future precision measurements of K → πνν̄ can distinguish the two scenarios.

  1. Time-dependent Ionization in a Steady Flow in an MHD Model of the Solar Corona and Wind

    NASA Astrophysics Data System (ADS)

    Shen, Chengcai; Raymond, John C.; Mikić, Zoran; Linker, Jon A.; Reeves, Katharine K.; Murphy, Nicholas A.

    2017-11-01

    Time-dependent ionization is important for diagnostics of coronal streamers and pseudostreamers. We describe time-dependent ionization calculations for a three-dimensional magnetohydrodynamic (MHD) model of the solar corona and inner heliosphere. We analyze how non-equilibrium ionization (NEI) influences emission from a pseudostreamer during the Whole Sun Month interval (Carrington rotation CR1913, 1996 August 22 to September 18). We use a time-dependent code to calculate NEI states, based on the plasma temperature, density, velocity, and magnetic field in the MHD model, to obtain the synthetic emissivities and predict the intensities of the Lyα, O VI, Mg X, and Si XII emission lines observed by the SOHO/Ultraviolet Coronagraph Spectrometer (UVCS). At low coronal heights, the predicted intensity profiles of both the Lyα and O VI lines match UVCS observations well, but the Mg X and Si XII emission is predicted to be too bright. At larger heights, the O VI and Mg X lines are predicted to be brighter for NEI than for equilibrium ionization around this pseudostreamer, and Si XII is predicted to be fainter for NEI cases. The differences of predicted UVCS intensities between NEI and equilibrium ionization are around a factor of 2, but neither matches the observed intensity distributions along the full length of the UVCS slit. Variations in elemental abundances in closed field regions due to gravitational settling and the FIP effect may significantly contribute to the predicted uncertainty. The assumption of Maxwellian electron distributions and errors in the magnetic field on the solar surface may also have notable effects on the mismatch between observations and model predictions.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jordan N.; Hinderliter, Paul M.; Timchalk, Charles

    Sensitivity to chemicals in animals and humans is known to vary with age. Age-related changes in sensitivity to chlorpyrifos (CPF) have been reported in animal models. A life-stage physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model was developed to computationally predict disposition of CPF and its metabolites, chlorpyrifos-oxon (the ultimate toxicant) and 3,5,6-trichloro-2-pyridinol (TCPy), as well as B-esterase inhibition by chlorpyrifos-oxon in humans. In this model, age-dependent body weight was calculated from a generalized Gompertz function, and compartments (liver, brain, fat, blood, diaphragm, rapid, and slow) were scaled based on body weight from polynomial functions on a fractional body weight basis. Blood flows among compartments were calculated as a constant flow per compartment volume. The life-stage PBPK/PD model was calibrated and tested against controlled adult human exposure studies. Model simulations suggest age-dependent pharmacokinetics and response may exist. At oral doses ≥ 0.55 mg/kg of chlorpyrifos (significantly higher than environmental exposure levels), 6-month-old children are predicted to have higher levels of chlorpyrifos-oxon in blood and higher levels of red blood cell cholinesterase inhibition compared to adults given equivalent oral doses of chlorpyrifos. At lower doses that are more relevant to environmental exposures, the model predicts that adults will have slightly higher levels of chlorpyrifos-oxon in blood and greater cholinesterase inhibition. This model provides a computational framework for age-comparative simulations that can be utilized to predict CPF disposition and biological response over various postnatal life-stages.
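
    As a rough, generic illustration of the age-scaling idea described above (not the published PBPK/PD parameterization; the growth-curve parameters, the polynomial, and the flow coefficient below are placeholders), body weight can be generated from a Gompertz-type growth curve and compartment volumes and blood flows scaled from it:

```python
import numpy as np

def body_weight(age_yr, bw_adult=70.0, b=2.3, k=0.25):
    """Gompertz-type growth curve for body weight (placeholder parameters)."""
    return bw_adult * np.exp(-b * np.exp(-k * age_yr))

def liver_fraction(bw):
    """Hypothetical polynomial for liver volume as a fraction of body weight."""
    return 0.04 - 2.0e-4 * bw + 1.0e-6 * bw**2

ages = np.array([0.5, 2.0, 10.0, 25.0])      # years
bw = body_weight(ages)                        # kg
v_liver = liver_fraction(bw) * bw             # L (assuming tissue density ~1 kg/L)
q_liver = 1.2 * v_liver                       # blood flow as a constant flow per unit volume (assumed L/min per L)

for a, w, v, q in zip(ages, bw, v_liver, q_liver):
    print(f"age {a:4.1f} y: BW {w:5.1f} kg, liver {v:4.2f} L, liver flow {q:4.2f} L/min")
```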

  3. Thermal barrier coating life prediction model development, phase 2

    NASA Technical Reports Server (NTRS)

    Meier, Susan Manning; Sheffler, Keith D.; Nissley, David M.

    1991-01-01

    The objective of this program was to generate a life prediction model for electron-beam-physical vapor deposited (EB-PVD) zirconia thermal barrier coating (TBC) on gas turbine engine components. Specific activities involved in development of the EB-PVD life prediction model included measurement of EB-PVD ceramic physical and mechanical properties and adherence strength, measurement of the thermally grown oxide (TGO) growth kinetics, generation of quantitative cyclic thermal spallation life data, and development of a spallation life prediction model. Life data useful for model development was obtained by exposing instrumented, EB-PVD ceramic coated cylindrical specimens in a jet fueled burner rig. Monotonic compression and tensile mechanical tests and physical property tests were conducted to obtain the EB-PVD ceramic behavior required for burner rig specimen analysis. As part of that effort, a nonlinear constitutive model was developed for the EB-PVD ceramic. Spallation failure of the EB-PVD TBC system consistently occurred at the TGO-metal interface. Calculated out-of-plane stresses were a small fraction of that required to statically fail the TGO. Thus, EB-PVD spallation was attributed to the interfacial cracking caused by in-plane TGO strains. Since TGO mechanical properties were not measured in this program, calculation of the burner rig specimen TGO in-plane strains was performed by using alumina properties. A life model based on maximum in-plane TGO tensile mechanical strain and TGO thickness correlated the burner rig specimen EB-PVD ceramic spallation lives within a factor of about plus or minus 2X.

  4. Application of new nuclear de-excitation model of PHITS for prediction of isomer yield and prompt gamma-ray production

    NASA Astrophysics Data System (ADS)

    Ogawa, Tatsuhiko; Hashimoto, Shintaro; Sato, Tatsuhiko; Niita, Koji

    2014-06-01

    A new nuclear de-excitation model, intended for accurate simulation of isomeric transition of excited nuclei, was incorporated into PHITS and applied to various situations to clarify the impact of the model. The case studies show that precise treatment of gamma de-excitation and consideration for isomer production are important for various applications such as detector performance prediction, radiation shielding calculations and the estimation of radioactive inventory including isomers.

  5. Cell-model prediction of the melting of a Lennard-Jones solid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holian, B.L.

    The classical free energy of the Lennard-Jones 6-12 solid is computed from a single-particle anharmonic cell model with a correction to the entropy given by the classical correlational entropy of quasiharmonic lattice dynamics. The free energy of the fluid is obtained from the Hansen-Ree analytic fit to Monte Carlo equation-of-state calculations. The resulting predictions of the solid-fluid coexistence curves by this corrected cell model of the solid are in excellent agreement with the computer experiments.

  6. Quantitative structure-activation barrier relationship modeling for Diels-Alder ligations utilizing quantum chemical structural descriptors.

    PubMed

    Nandi, Sisir; Monesi, Alessandro; Drgan, Viktor; Merzel, Franci; Novič, Marjana

    2013-10-30

    In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions was subjected to quantitative structure-activation barrier relationship (QSABR) modelling under the framework of theoretical quantum chemical descriptors calculated solely from the structures of the diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed at the Hartree-Fock level with the 6-31G(d) basis set as implemented in the Gaussian 09 software. Variable selection and model development were carried out by stepwise multiple linear regression methodology. Predictive performance of the QSABR model was assessed using the training and test set concept and by calculating leave-one-out cross-validated Q² and predictive R² values. The QSABR model can explain and predict 86.5% and 80% of the variances, respectively, in the activation energy barrier training data. Alternatively, a neural network model based on back propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. A reasonable predictability for the activation barriers of the test set reactions was obtained, which enabled an exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, studies in the direction of QSABR modelling that provide efficient and fast prediction of activation barriers of Diels-Alder reactions turn out to be a meaningful alternative to transition state theory based computation.
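
    A minimal sketch of the leave-one-out cross-validated Q² statistic mentioned above (illustrative only; the descriptor matrix and barrier values here are synthetic stand-ins, not the study's data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 4))                  # 4 selected descriptors, 72 reactions (synthetic)
y = X @ np.array([3.0, -1.5, 0.8, 2.2]) + rng.normal(scale=1.0, size=72)  # activation barriers

model = LinearRegression()
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())   # leave-one-out predictions

press = np.sum((y - y_loo) ** 2)              # predictive residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)
q2 = 1.0 - press / ss_tot                     # cross-validated Q²
print(f"LOO Q² = {q2:.3f}")
```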

  7. Electroexcitation of the Δ⁺(1232) at low momentum transfer

    DOE PAGES

    Blomberg, A.; Anez, D.; Sparveris, N.; ...

    2016-07-05

    We report on new p(e,e′p)π⁰ measurements at the Δ⁺(1232) resonance in the low momentum transfer region. The mesonic cloud dynamics is predicted to be dominant and rapidly changing in this kinematic region, offering a test bed for chiral effective field theory calculations. The new data explore the low Q² dependence of the resonant quadrupole amplitudes while extending the measurements of the Coulomb quadrupole amplitude to the lowest momentum transfer ever reached. The results disagree with predictions of constituent quark models and are in reasonable agreement with dynamical calculations that include pion cloud effects, chiral effective field theory and lattice calculations. The reported measurements suggest that improvement is required in the theoretical calculations and provide valuable input that will allow their refinement.

  8. Moist air state above counterflow wet-cooling tower fill based on Merkel, generalised Merkel and Klimanek & Białecky models

    NASA Astrophysics Data System (ADS)

    Hyhlík, Tomáš

    2017-09-01

    The article deals with an evaluation of the moist air state above counterflow wet-cooling tower fill. The results based on the Klimanek & Białecky model are compared with results of the Merkel model and the generalised Merkel model. Based on the numerical simulation it is shown that temperature is predicted correctly by the generalised Merkel model in the case of saturated or super-saturated air above the fill, but the temperature is underpredicted in the case of unsaturated moist air above the fill. The classical Merkel model always underpredicts the temperature above the fill. The density of moist air above the fill, which is calculated using the generalised Merkel model, is strongly overpredicted in the case of unsaturated moist air above the fill.

  9. A General and Efficient Method for Incorporating Precise Spike Times in Globally Time-Driven Simulations

    PubMed Central

    Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus

    2010-01-01

    Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
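
    The retrospective threshold-crossing idea can be illustrated with a minimal time-driven leaky integrate-and-fire loop (a sketch, not the authors' implementation; parameters and units are arbitrary): after each global time step, a crossing detected in the just-completed interval is located by interpolation between the previous and current membrane potentials.

```python
import numpy as np

def lif_time_driven(I, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Time-driven leaky integrate-and-fire neuron (exact exponential update per step).

    After each step, a threshold crossing in the recent past is detected
    retrospectively and its time estimated by linear interpolation.
    """
    decay = np.exp(-dt / tau)
    v, t, spikes = 0.0, 0.0, []
    for i_ext in I:
        v_prev = v
        v = v * decay + (1.0 - decay) * tau * i_ext   # propagate one step
        t += dt
        if v_prev < v_th <= v:                        # crossing happened within the last step
            frac = (v_th - v_prev) / (v - v_prev)     # linear interpolation inside the step
            spikes.append(t - dt + frac * dt)
            v = v_reset
    return spikes

# Constant suprathreshold drive (arbitrary units): spike times fall between grid points.
print(lif_time_driven(np.full(1000, 0.15)))
```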

  10. An orientation sensitive approach in biomolecule interaction quantitative structure-activity relationship modeling and its application in ion-exchange chromatography.

    PubMed

    Kittelmann, Jörg; Lang, Katharina M H; Ottens, Marcel; Hubbuch, Jürgen

    2017-01-27

    Quantitative structure-activity relationship (QSAR) modeling for prediction of biomolecule parameters has become an established technique in chromatographic purification process design. Unfortunately, available descriptor sets fail to describe the orientation of biomolecules and the effects of ionic strength in the mobile phase on the interaction with the stationary phase. The literature describes several special descriptors used for chromatographic retention modeling, but none of these describes the screening of electrostatic potential by the mobile phase in use. In this work we introduce two new approaches to descriptor calculation, namely surface patches and plane projection, which capture oriented binding to charged surfaces and steric hindrance of the interaction with chromatographic ligands with regard to electrostatic potential screening by mobile phase ions. We present the use of the developed descriptor sets for predictive modeling of Langmuir isotherms for proteins at different pH values between pH 5 and 10 and varying ionic strength in the range of 10-100 mM. The resulting model has a high correlation of calculated descriptors and experimental results, with a coefficient of determination of 0.82 and a predictive coefficient of determination of 0.92 for unknown molecular structures and conditions. The agreement of calculated molecular interaction orientations with both experimental results and molecular dynamics simulations from the literature is shown. The developed descriptors provide the means for improved QSAR models of chromatographic processes, as they reflect the complex interactions of biomolecules with chromatographic phases. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Perspectives on multifield models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, S.

    1997-07-01

    Multifield models for prediction of nuclear reactor thermalhydraulics are reviewed from the viewpoint of their structure and requirements for closure relationships. Their strengths and weaknesses are illustrated with examples, indicating that they are effective in predicting separated and distributed flow regimes, but have problems for flows with large oscillations. Needs for multifield models are also discussed in the context of reactor operations and accident simulations. The highest priorities for future developments appear to relate to closure relationships for three-dimensional multifield models with emphasis on those needed for calculations of phase separation and entrainment/de-entrainment in complex geometries.

  12. Predicting the activity of drugs for a group of imidazopyridine anticoccidial compounds.

    PubMed

    Si, Hongzong; Lian, Ning; Yuan, Shuping; Fu, Aiping; Duan, Yun-Bo; Zhang, Kejun; Yao, Xiaojun

    2009-10-01

    Gene expression programming (GEP) is a novel machine learning technique. GEP is used to build a nonlinear quantitative structure-activity relationship model for the prediction of the IC50 of the imidazopyridine anticoccidial compounds. This model is based on descriptors calculated from the molecular structure. Four descriptors are selected from the descriptor pool by the heuristic method (HM) to build a multivariable linear model. The GEP method produced a nonlinear quantitative model with a correlation coefficient and a mean error of 0.96 and 0.24 for the training set, and 0.91 and 0.52 for the test set, respectively. It is shown that the GEP-predicted results are in good agreement with experimental ones.

  13. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  14. Computational modeling for prediction of the shear stress of three-dimensional isotropic and aligned fiber networks.

    PubMed

    Park, Seungman

    2017-09-01

    Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis, cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of ECM or shear stress distribution on the cells, but less is known about the prediction of shear stress on individual fibers or fiber networks despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for differently structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving equations for mass and momentum conservation for all models. Through the flow solutions, I estimated permeability using Darcy's law. Average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. In terms of permeability, the present computational models matched previous models well, which justifies the computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability remained almost unchanged. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). The present computational models will provide new tools for predicting accurate functional properties and designing fibrous porous materials, thereby significantly advancing tissue engineering. Copyright © 2017 Elsevier B.V. All rights reserved.
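
    The permeability estimate referenced above follows directly from Darcy's law; a minimal sketch (Python, with illustrative stand-ins for quantities that would be extracted from the CFD flow solution) might look like this:

```python
import numpy as np

# Illustrative CFD outputs (placeholders, SI units)
mu      = 1.0e-3        # dynamic viscosity of the fluid, Pa·s
u_inlet = 1.0e-6        # superficial (Darcy) inlet velocity, m/s
length  = 1.0e-4        # domain length along the flow, m
dp      = 5.0           # pressure drop across the domain, Pa

# Darcy's law: u = -(k/mu) * dP/dx  ->  k = mu * u * L / dP
permeability = mu * u_inlet * length / dp      # m^2
print(f"estimated permeability: {permeability:.3e} m^2")

# Average shear stress (ASS): area-weighted mean of wall shear stress over fiber surfaces
tau_wall   = np.array([0.8e-3, 1.1e-3, 0.9e-3, 1.3e-3])      # Pa, per surface facet (placeholder)
facet_area = np.array([2.0e-12, 1.5e-12, 1.8e-12, 1.2e-12])  # m^2
ass = np.sum(tau_wall * facet_area) / np.sum(facet_area)
print(f"average shear stress on fibers: {ass:.3e} Pa")
```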

  15. Calculation of the Intensity of Physical Time Fluctuations Using the Standard Solar Model and its Comparison with the Results of Experimental Measurements

    NASA Astrophysics Data System (ADS)

    Morozov, A. N.

    2017-11-01

    The article reviews the possibility of describing physical time as a random Poisson process. An equation allowing the intensity of physical time fluctuations to be calculated as a function of the entropy production density within irreversible natural processes has been proposed. Based on the standard solar model, the work calculates the entropy production density inside the Sun and the dependence of the intensity of physical time fluctuations on the distance to the centre of the Sun. A free model parameter has been established, and a method for its evaluation has been suggested. The calculations of the entropy production density inside the Sun showed that it differs by 2-3 orders of magnitude in different parts of the Sun. The intensity of physical time fluctuations on the Earth's surface, depending on the entropy production density during the conversion of sunlight to the Earth's thermal radiation, has been theoretically predicted. A method for evaluating Kullback's measure of voltage fluctuations in small amounts of electrolyte has been proposed. Using a simple model of the Earth's surface heat transfer to the upper atmosphere, the effective Earth's thermal radiation temperature has been determined. A comparison between the theoretical values of Kullback's measure derived from the fluctuating physical time model and the experimentally measured values of this measure for two independent electrolytic cells showed good qualitative and quantitative agreement between the theoretical predictions and the experimental data.

  16. CALCULATION OF PHYSICOCHEMICAL PROPERTIES FOR ENVIRONMENTAL MODELING

    EPA Science Inventory

    Recent trends in environmental regulatory strategies dictate that EPA will rely heavily on predictive modeling to carry out the increasingly complex array of exposure and risk assessments necessary to develop scientifically defensible regulations. In response to this need, resea...

  17. Performance of a block detector PET scanner in imaging non-pure positron emitters—modelling and experimental validation with 124I

    NASA Astrophysics Data System (ADS)

    Robinson, S.; Julyan, P. J.; Hastings, D. L.; Zweit, J.

    2004-12-01

    The key performance measures of resolution, count rate, sensitivity and scatter fraction are predicted for a dedicated BGO block detector patient PET scanner (GE Advance) in 2D mode for imaging with the non-pure positron-emitting radionuclides 124I, 55Co, 61Cu, 62Cu, 64Cu and 76Br. Model calculations including parameters of the scanner, decay characteristics of the radionuclides and measured parameters in imaging the pure positron-emitter 18F are used to predict performance according to the National Electrical Manufacturers Association (NEMA) NU 2-1994 criteria. Predictions are tested with measurements made using 124I and show that, in comparison with 18F, resolution degrades by 1.2 mm radially and tangentially throughout the field-of-view (prediction: 1.2 mm), count-rate performance reduces considerably and in close accordance with calculations, sensitivity decreases to 23.4% of that with 18F (prediction: 22.9%) and measured scatter fraction increases from 10.0% to 14.5% (prediction: 14.7%). Model predictions are expected to be equally accurate for other radionuclides and may be extended to similar scanners. Although performance is worse with 124I than 18F, imaging is not precluded in 2D mode. The viability of 124I imaging and performance in a clinical context compared with 18F is illustrated with images of a patient with recurrent thyroid cancer acquired using both [124I]-sodium iodide and [18F]-2-fluoro-2-deoxyglucose.

  18. Leak-off mechanism and pressure prediction for shallow sediments in deepwater drilling

    NASA Astrophysics Data System (ADS)

    Tan, Qiang; Deng, Jingen; Sun, Jin; Liu, Wei; Yu, Baohua

    2018-02-01

    Deepwater sediments are prone to lost circulation in drilling due to a low overburden gradient. How to predict the magnitude of leak-off pressure more accurately is an important issue in the protection of drilling safety and the reduction of drilling cost in deep water. Starting from the mechanical properties of a shallow formation and based on the basic theory of rock-soil mechanics, the stress distribution around a borehole was analyzed. It was found that the rock or soil at the borehole wall is in the plastic yield state before the effective tensile stress is generated, and the effective tangential and vertical stresses increase as the drilling fluid density increases; thus, tensile failure will not occur on the borehole wall. Based on the results of the stress calculation, two mechanisms and leak-off pressure prediction models for shallow sediments in deepwater drilling were put forward, and the calculated values of these models were compared with the measured values of shallow leak-off pressure in actual drilling. The results show that the MHPS (minimum horizontal principal stress) model and the FIF (fracturing in formation) model can predict the lower and upper limits of leak-off pressure. The PLC (permeable lost circulation) model can comprehensively analyze the factors influencing permeable leakage and provide a theoretical basis for leak-off prevention and plugging in deepwater drilling.

  19. Prediction of energy balance and utilization for solar electric cars

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    Solar irradiation and ambient temperature vary with region, season and time of day, which directly affects the performance of a solar-energy-based car system. In this paper, the solar electric car model is based on conditions in Xi'an. First, the meteorological data are modelled to simulate the change of solar irradiation and ambient temperature, and then the temperature change of the solar cell is calculated using the thermal equilibrium relation. This is combined with the driving resistance and solar cell power generation models, which are simulated under the varying radiation conditions over a day. The daily power generation and solar electric car cruise mileage can be predicted by calculating solar cell efficiency and power. The theoretical approach and research results can be used for future solar electric car program design and optimization.

  20. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
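
    The coupled temperature-efficiency iteration described above can be sketched with a simple lumped energy balance (a sketch under assumed parameter values, not the paper's ray-trace model): heat input is the absorbed concentrated flux minus the electrical output, efficiency falls linearly with temperature, and the two are iterated to a fixed point.

```python
def cell_operating_point(intensity, t_env=300.0, eta_ref=0.18, t_ref=298.0,
                         beta=0.0045, absorptance=0.9, h_loss=25.0):
    """Fixed-point iteration for concentrated-cell temperature and efficiency.

    intensity : incident concentrated flux on the cell, W/m^2
    h_loss    : lumped radiative/conductive heat loss coefficient, W/m^2/K (assumed)
    beta      : relative efficiency loss per kelvin (assumed)
    Returns (cell temperature K, efficiency, electrical power density W/m^2).
    """
    t_cell = t_env
    for _ in range(100):
        eta = max(eta_ref * (1.0 - beta * (t_cell - t_ref)), 0.0)
        p_elec = eta * intensity
        # Energy balance: absorbed flux minus electrical output is rejected as heat.
        t_new = t_env + (absorptance * intensity - p_elec) / h_loss
        if abs(t_new - t_cell) < 1e-3:
            break
        t_cell = t_new
    return t_cell, eta, p_elec

for conc in (1.0, 2.0, 3.0):                         # concentration ratios
    t, eta, p = cell_operating_point(1353.0 * conc)  # ~AM0 solar constant times concentration
    print(f"C={conc:.0f}: T={t:.0f} K, eta={eta:.3f}, P={p:.0f} W/m^2")
```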

  1. Additional strange hadrons from QCD thermodynamics and strangeness freezeout in heavy ion collisions.

    PubMed

    Bazavov, A; Ding, H-T; Hegde, P; Kaczmarek, O; Karsch, F; Laermann, E; Maezawa, Y; Mukherjee, Swagato; Ohno, H; Petreczky, P; Schmidt, C; Sharma, S; Soeldner, W; Wagner, M

    2014-08-15

    We compare lattice QCD results for appropriate combinations of net strangeness fluctuations and their correlations with net baryon number fluctuations with predictions from two hadron resonance gas (HRG) models having different strange hadron content. The conventionally used HRG model based on experimentally established strange hadrons fails to describe the lattice QCD results in the hadronic phase close to the QCD crossover. Supplementing the conventional HRG with additional, experimentally uncharted strange hadrons predicted by quark model calculations and observed in lattice QCD spectrum calculations leads to good descriptions of strange hadron thermodynamics below the QCD crossover. We show that the thermodynamic presence of these additional states gets imprinted in the yields of the ground-state strange hadrons leading to a systematic 5-8 MeV decrease of the chemical freeze-out temperatures of ground-state strange baryons.

  2. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to simulate an entire chip in realistic time, a compact resist model is commonly used; the model is built for fast calculation. To obtain an accurate compact resist model, it is necessary to specify a complicated non-linear model function. However, it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using a convolutional neural network (CNN), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  3. Pretest Predictions for Phase II Ventilation Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yiming Sun

    The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, and concrete pipe walls that will develop during the Phase II ventilation tests involving various test conditions. The results will be used as inputs to validating the numerical approach for modeling continuous ventilation, and to support the repository subsurface design. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the Phase II ventilation tests, and to describe numerical methods that are used to calculate the effects of continuous ventilation. The calculation is limited to thermal effects only. This engineering work activity is conducted in accordance with the ''Technical Work Plan for: Subsurface Performance Testing for License Application (LA) for Fiscal Year 2001'' (CRWMS M&O 2000d). This technical work plan (TWP) includes an AP-2.21Q, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', activity evaluation (CRWMS M&O 2000d, Addendum A) that has determined this activity is subject to the YMP quality assurance (QA) program. The calculation is developed in accordance with the AP-3.12Q procedure, ''Calculations''. Additional background information regarding this activity is contained in the ''Development Plan for Ventilation Pretest Predictive Calculation'' (DP) (CRWMS M&O 2000a).

  4. An empirical model for calculation of the collimator contamination dose in therapeutic proton beams

    NASA Astrophysics Data System (ADS)

    Vidal, M.; De Marzi, L.; Szymanowski, H.; Guinement, L.; Nauraye, C.; Hierso, E.; Freud, N.; Ferrand, R.; François, P.; Sarrut, D.

    2016-02-01

    Collimators are used as lateral beam-shaping devices in proton therapy with passive scattering beam lines. The dose contamination due to collimator scattering can be as high as 10% of the maximum dose and influences calculation of the output factor or monitor units (MU). To date, commercial treatment planning systems generally use a zero-thickness collimator approximation ignoring edge scattering in the aperture collimator, and few analytical models have been proposed to take scattering effects into account, mainly limited to the inner collimator face component. The aim of this study was to characterize and model aperture contamination by means of a fast and accurate analytical model. The entrance face collimator scatter distribution was modeled as a 3D secondary dose source. Predicted dose contaminations were compared to measurements and Monte Carlo simulations. Measurements were performed on two different proton beam lines (a fixed horizontal beam line and a gantry beam line) with divergent apertures and for several field sizes and energies. Discrepancies between the analytical algorithm dose prediction and measurements were reduced from 10% to 2% using the proposed model. The gamma-index criterion (2%/1 mm) was satisfied for more than 90% of pixels. The proposed analytical algorithm increases the accuracy of analytical dose calculations with reasonable computation times.

  5. An ab initio chemical reaction model for the direct simulation Monte Carlo study of non-equilibrium nitrogen flows.

    PubMed

    Mankodi, T K; Bhandarkar, U V; Puranik, B P

    2017-08-28

    A new ab initio based chemical model for a Direct Simulation Monte Carlo (DSMC) study suitable for simulating rarefied flows with a high degree of non-equilibrium is presented. To this end, Collision Induced Dissociation (CID) cross sections for N₂ + N₂ → N₂ + 2N are calculated and published using a global complete active space self-consistent field-complete active space second order perturbation theory N₄ potential energy surface and a quasi-classical trajectory algorithm for high energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients that compare well with existing shock tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.

  6. A model of the atmospheric metal deposition by cosmic dust particles

    NASA Astrophysics Data System (ADS)

    McNeil, W. J.

    1993-11-01

    We have developed a model of the deposition of meteoric metals in Earth's atmosphere. The model takes as input the total mass influx of material to the Earth and calculates the deposition rate at all altitudes through solution of the drag and sublimation equations in a Monte Carlo-type computation. The diffusion equation is then solved to give the steady-state concentration of complexes of specific metal species, and kinetics are added to calculate the concentrations of individual complexes. Concentrating on sodium, we calculate the Na(D) nightglow predicted by the model, and by introducing seasonal variations in lower tropospheric ozone based on experimental results, we are able to duplicate the seasonal variation of mid-latitude nightglow data.

  7. Using experimental data to test an n -body dynamical model coupled with an energy-based clusterization algorithm at low incident energies

    NASA Astrophysics Data System (ADS)

    Kumar, Rohit; Puri, Rajeev K.

    2018-03-01

    Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of the energy-based clusterization algorithm, i.e., the simulated annealing clusterization algorithm (SACA), to describe the experimental data of charge distribution and various event-by-event correlations among fragments. The calculations are constrained to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of colliding partners, and shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with other available calculations, which use one-body models, statistical models, and/or hybrid models.

  8. Predicting the Rotor-Stator Interaction Acoustics of a Ducted Fan Engine

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Rumsey, Christopher L.; Podboy, Gary G.; Dunn, M. H.

    2001-01-01

    A Navier-Stokes computation is performed for a ducted-fan configuration with the goal of predicting rotor-stator noise generation without having to resort to heuristic modeling. The calculated pressure field in the inlet region is decomposed into classical infinite-duct modes, which are then used in either a hybrid finite-element/Kirchhoff surface method or boundary integral equation method to calculate the far field noise. Comparisons with experimental data are presented, including rotor wake surveys and far field sound pressure levels for two blade passage frequency (BPF) tones.

  9. Multivariate Formation Pressure Prediction with Seismic-derived Petrophysical Properties from Prestack AVO inversion and Poststack Seismic Motion Inversion

    NASA Astrophysics Data System (ADS)

    Yu, H.; Gu, H.

    2017-12-01

    A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion, and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and utilizes a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters in order to make accurate predictions with higher resolution in both vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain a high-resolution, higher-frequency seismic velocity to be used as the velocity input for seismic pressure prediction, and a density dataset to calculate an accurate overburden pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation. Both structural variability and similarity of seismic waveform are used to incorporate well log data to characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs, and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and then the coefficients in the multivariate prediction model are determined in a trace-by-trace multivariate regression analysis on the petrophysical data. The coefficients are used to convert the velocity, porosity and shale volume datasets to effective stress and then to calculate formation pressure with the OBP. Application of the proposed methodology to a research area in the East China Sea has shown that the method can bridge the gap between seismic and well log pressure prediction and give predicted pressure values close to pressure measurements from well testing.
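
    The abstract does not give the functional form of the multivariate effective stress model, but the calibrate-and-apply workflow can be sketched as follows (Python; the linear form, the synthetic calibration data, and the variable names are assumptions for illustration): effective stress at calibration points is taken as OBP minus measured pore pressure and regressed on velocity, porosity and shale volume, and the fitted coefficients are then applied trace by trace with pore pressure recovered as OBP minus predicted effective stress.

```python
import numpy as np

# Calibration samples from offset wells (synthetic placeholders)
vel = np.array([2200., 2500., 2800., 3100., 3400.])    # velocity, m/s
phi = np.array([0.30, 0.26, 0.22, 0.19, 0.16])         # porosity, fraction
vsh = np.array([0.45, 0.40, 0.38, 0.30, 0.25])         # shale volume, fraction
obp = np.array([38., 45., 52., 60., 68.])              # overburden pressure, MPa
pp  = np.array([20., 23., 25., 27., 29.])              # measured pore pressure, MPa

sigma_eff = obp - pp                                   # effective stress at calibration points

# Assumed model form: sigma_eff = a0 + a1*v + a2*phi + a3*vsh
A = np.column_stack([np.ones_like(vel), vel, phi, vsh])
coeffs, *_ = np.linalg.lstsq(A, sigma_eff, rcond=None)

def predict_pore_pressure(v, porosity, shale, overburden):
    """Trace-by-trace prediction: pore pressure = OBP - predicted effective stress."""
    s = coeffs @ np.array([1.0, v, porosity, shale])
    return overburden - s

print(predict_pore_pressure(3000., 0.20, 0.33, 57.))   # MPa, illustrative trace sample
```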

  10. Summary of the BIOMOVS A4 scenario: Testing models of the air-pasture-cow milk pathway using Chernobyl fallout data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, S.R.; Hoffman, F.O.; Koehler, H.

    1996-08-01

    A unique opportunity to test dose assessment models arose after the Chernobyl reactor accident. During the passage of the contaminated plume, concentrations of 131I and 137Cs in air, pasture, and cow's milk were measured at various sites in the northern hemisphere. Afterwards, contaminated pasture and milk samples were analyzed over time. Under the auspices of the Biospheric Model Validation Study (BIOMOVS), data from 13 sites for 131I and 10 sites for 137Cs were used to test model predictions for the air-pasture-cow milk pathway. Calculations were submitted for 23 models, 10 of which were quasi-steady state. The others were time-dependent. Daily predictions and predictions of time-integrated concentration of 131I and 137Cs in pasture grass and milk for six months post-accident were calculated and compared with observed data. Testing against data from several locations over time for several steps in the air-to-milk pathway resulted in a better understanding of important processes and how they should be modeled. This model testing exercise showed both the strengths and weaknesses of the models and revealed the importance of testing all parts of dose assessment models whenever possible. 19 refs., 14 figs., 4 tabs.

  11. Simulation of breaking waves using the high-order spectral method with laboratory experiments: Wave-breaking onset

    NASA Astrophysics Data System (ADS)

    Seiffert, Betsy R.; Ducrozet, Guillaume; Bonnefoy, Félicien

    2017-11-01

    This study investigates a wave-breaking onset criterion to be implemented in the non-linear potential flow solver HOS-NWT. The model is a computationally efficient, open source code, which solves for the free surface in a numerical wave tank using the High-Order Spectral (HOS) method. The goal of this study is to determine the best method to identify the onset of random single and multiple breaking waves over a large domain at the exact time they occur. To identify breaking waves, a breaking onset criterion based on the ratio of local energy flux velocity to the local crest velocity, introduced by Barthelemy et al. (2017), is selected. The breaking parameter is uniquely applied in the numerical model in that calculations of the breaking onset criterion ratio are not made only at the location of the wave crest, but at every point in the domain and at every time step. This allows the model to calculate the onset of a breaking wave the moment it happens, without knowing anything about the wave a priori. The application of the breaking criterion at every point in the domain and at every time step requires the phase velocity to be calculated instantaneously everywhere in the domain and at every time step. This is achieved by calculating the instantaneous phase velocity using the Hilbert transform and dispersion relation. A comparison with more traditional crest-tracking techniques shows that the calculation of phase velocity using the Hilbert transform at the location of the breaking wave crest provides a good approximation of crest velocity. The ability of the selected wave-breaking criterion to predict single and multiple breaking events in two dimensions is validated by a series of large-scale experiments. Breaking waves are generated by energy focusing and modulational instability methods, with a wide range of primary frequencies. Steep irregular waves which lead to breaking waves, and irregular waves with an energy focusing wave superimposed, are also generated. This set of waves provides a wide range of breaking-wave strengths, types and scales for validation of the model. A comparison of calculations made using HOS-NWT with experimental measurements shows that the model is successful at predicting the occurrence of wave breaking, as well as accurately calculating breaking onset time and location. Although the current study is limited to a unidirectional wave field, the success of the wave-breaking model presented provides the basis for application of the model in a multidirectional wave field. By including wave breaking onset with the addition of an appropriate energy dissipation model into HOS-NWT, we can increase the application range of the model, as well as decrease the occurrence of numerical instabilities that are associated with breaking waves in a potential flow solver. An accurate description of the wave field is useful for predicting the dynamic response of offshore vessels and marine renewable energy devices, predicting loads on marine structures and the general physics of ocean waves, for example.
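
    A minimal sketch of the instantaneous-phase-velocity step described above (Python; a sketch under the assumptions of linear finite-depth dispersion and a single spatial free-surface snapshot, with the energy-flux velocity left as a placeholder field and the ~0.85 threshold quoted from Barthelemy et al. rather than taken from this abstract):

```python
import numpy as np
from scipy.signal import hilbert

g, depth = 9.81, 5.0                       # gravity (m/s^2), water depth (m, assumed)
x = np.linspace(0.0, 200.0, 2048)          # spatial grid of the wave tank (m)
eta = 0.4 * np.cos(2 * np.pi * x / 25.0)   # free-surface elevation snapshot (placeholder)

# Instantaneous local wavenumber from the spatial analytic signal of the elevation
analytic = hilbert(eta)
phase = np.unwrap(np.angle(analytic))
k_local = np.abs(np.gradient(phase, x))    # rad/m

# Local phase (crest) velocity from the linear dispersion relation: c = sqrt(g*tanh(k*h)/k)
k_safe = np.maximum(k_local, 1e-6)
c_local = np.sqrt(g * np.tanh(k_safe * depth) / k_safe)

# Breaking onset indicator: ratio of local energy flux velocity u_f to crest velocity c.
# u_f would come from the HOS solution; here it is a placeholder field.
u_flux = 0.1 * np.ones_like(x)
B = u_flux / c_local
print("breaking predicted anywhere:", np.any(B > 0.85))   # ~0.85 threshold (Barthelemy et al.)
```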

  12. On the Prediction of α-Martensite Temperatures in Medium Manganese Steels

    NASA Astrophysics Data System (ADS)

    Field, Daniel M.; Baker, Daniel S.; Van Aken, David C.

    2017-05-01

    A new composition-based method for calculating the α-martensite start temperature in medium manganese steel is presented and uses a regular solution model to accurately calculate the chemical driving force for α-martensite formation, $\Delta G_{\mathrm{Chem}}^{\gamma \to \alpha}$. In addition, a compositional relationship for the strain energy contribution during martensitic transformation was developed using measured Young's moduli ($E$) reported in literature and measured values for steels produced during this investigation. An empirical relationship was developed to calculate Young's modulus using alloy composition and was used where dilatometry literature did not report Young's moduli. A comparison of the $\Delta G_{\mathrm{Chem}}^{\gamma \to \alpha}$ normalized by dividing by the product of Young's modulus, unconstrained lattice misfit squared ($\delta^{2}$), and molar volume ($\Omega$) with respect to the measured α-martensite start temperatures, $M_{\mathrm{S}}^{\alpha}$, produced a single linear relationship for 42 alloys exhibiting either lath or plate martensite. A temperature-dependent strain energy term was then formulated as $\Delta G_{\mathrm{str}}^{\gamma \to \alpha}\,(\mathrm{J/mol}) = E\Omega\delta^{2}(14.8 - 0.013T)$, which opposed the chemical driving force for α-martensite formation. $M_{\mathrm{S}}^{\alpha}$ was determined at a temperature where $\Delta G_{\mathrm{Chem}}^{\gamma \to \alpha} + \Delta G_{\mathrm{str}}^{\gamma \to \alpha} = 0$. The proposed $M_{\mathrm{S}}^{\alpha}$ model shows an extended temperature range of prediction from 170 K to 820 K (-103 °C to 547 °C). The model is then shown to corroborate alloy chemistries that exhibit two-stage athermal martensitic transformations and two-stage TRIP behavior in three previously reported medium manganese steels. In addition, the model can be used to predict the retained γ-austenite in twelve alloys, containing ε-martensite, using the difference between the calculated $M_{\mathrm{S}}^{\varepsilon}$ and $M_{\mathrm{S}}^{\alpha}$.
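
    The condition $\Delta G_{\mathrm{Chem}}^{\gamma \to \alpha} + \Delta G_{\mathrm{str}}^{\gamma \to \alpha} = 0$ lends itself to a one-line root search; the sketch below assumes a placeholder chemical driving force function (the paper's composition-dependent regular-solution expression is not reproduced in the abstract) and illustrative values of $E$, $\Omega$ and $\delta$.

```python
from scipy.optimize import brentq

E     = 200.0e9      # Young's modulus, Pa (illustrative)
omega = 7.1e-6       # molar volume, m^3/mol (illustrative)
delta = 0.012        # unconstrained lattice misfit (illustrative)

def dG_chem(T):
    """Placeholder chemical driving force for gamma->alpha, J/mol (negative favours alpha).
    The real model uses a composition-dependent regular solution expression."""
    return -3500.0 + 5.0 * T

def dG_str(T):
    """Strain energy term from the abstract: E*Omega*delta^2*(14.8 - 0.013*T), J/mol."""
    return E * omega * delta**2 * (14.8 - 0.013 * T)

# M_S^alpha is the temperature where the two contributions cancel.
ms_alpha = brentq(lambda T: dG_chem(T) + dG_str(T), 100.0, 1000.0)
print(f"M_S(alpha) ≈ {ms_alpha:.0f} K")
```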

  13. Proxy-SU(3) symmetry in heavy deformed nuclei

    NASA Astrophysics Data System (ADS)

    Bonatsos, Dennis; Assimakis, I. E.; Minkov, N.; Martinou, Andriana; Cakirli, R. B.; Casten, R. F.; Blaum, K.

    2017-06-01

    Background: Microscopic calculations of heavy nuclei face considerable difficulties due to the sizes of the matrices that need to be solved. Various approximation schemes have been invoked, for example by truncating the spaces, imposing seniority limits, or appealing to various symmetry schemes such as pseudo-SU(3). This paper proposes a new symmetry scheme also based on SU(3). This proxy-SU(3) can be applied to well-deformed nuclei, is simple to use, and can yield analytic predictions. Purpose: To present the new scheme and its microscopic motivation, and to test it using a Nilsson model calculation with the original shell model orbits and with the new proxy set. Method: We invoke an approximate, analytic, treatment of the Nilsson model, that allows the above vetting and yet is also transparent in understanding the approximations involved in the new proxy-SU(3). Results: It is found that the new scheme yields a Nilsson diagram for well-deformed nuclei that is very close to the original Nilsson diagram. The specific levels of approximation in the new scheme are also shown, for each major shell. Conclusions: The new proxy-SU(3) scheme is a good approximation to the full set of orbits in a major shell. Being able to replace a complex shell model calculation with a symmetry-based description now opens up the possibility to predict many properties of nuclei analytically and often in a parameter-free way. The new scheme works best for heavier nuclei, precisely where full microscopic calculations are most challenged. Some cases in which the new scheme can be used, often analytically, to make specific predictions, are shown in a subsequent paper.

  14. Longitudinal changes in young children’s 0–100 to 0–1000 number-line error signatures

    PubMed Central

    Reeve, Robert A.; Paul, Jacob M.; Butterworth, Brian

    2015-01-01

    We use a latent difference score (LDS) model to examine changes in young children's number-line (NL) error signatures (errors marking numbers on a NL) over 18 months. An LDS model (1) overcomes some of the inference limitations of analytic models used in previous research, and in particular (2) provides a more reliable test of hypotheses about the meaning and significance of changes in NL error signatures over time and task. The NL error signatures of 217 6-year-olds (at test occasion one) were assessed three times over 18 months, along with their math ability on two occasions. On the first occasion (T1) children completed a 0–100 NL task; on the second (T2) a 0–100 NL and a 0–1000 NL task; on the third occasion (T3) a 0–1000 NL task. On the third and fourth occasions (T3 and T4), children completed mental calculation tasks. Although NL error signatures changed over time, these were predictable from other NL task error signatures, and predicted calculation accuracy at T3, as well as changes in calculation between T3 and T4. Multiple indirect effects (change parameters) showed that associations between initial NL error signatures (0–100 NL) and later mental calculation ability were mediated by error signatures on the 0–1000 NL task. The pattern of findings from the LDS model highlights the value of identifying direct and indirect effects in characterizing changing relationships in cognitive representations over task and time. Substantively, they support the claim that children's NL error signatures generalize over task and time and thus can be used to predict math ability. PMID:26029152

  15. Predicting mosaics and wildlife diversity resulting from fire disturbance to a forest ecosystem

    NASA Astrophysics Data System (ADS)

    Potter, Meredith W.; Kessell, Stephen R.

    1980-05-01

    A model for predicting community mosaics and wildlife diversity resulting from fire disturbance to a forest ecosystem is presented. It applies an algorithm that delineates the size and shape of each patch from grid-based input data and calculates standard diversity measures for the entire mosaic of community patches and their included animal species. The user can print these diversity calculations, maps of the current community-type-age-class mosaic, and maps of habitat utilization by each animal species. Furthermore, the user can print estimates of changes in each resulting from natural disturbance. Although data and resolution level independent, the model is demonstrated and tested with data from the Lewis and Clark National Forest in Montana.
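
    As a rough illustration of the patch-delineation and diversity step described above, the following Python sketch labels contiguous patches of each community type on a grid and computes a Shannon diversity index over patch areas. The grid, function names, and choice of 4-connectivity are assumptions for illustration, not the model's actual algorithm.

    ```python
    import numpy as np
    from scipy import ndimage

    def shannon_diversity(proportions):
        """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero proportions."""
        p = np.asarray(proportions, dtype=float)
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    def patch_mosaic_diversity(type_grid):
        """Delineate patches of each community type on a grid and return
        (list of (type, patch area) pairs, Shannon diversity over patch areas)."""
        patches = []
        for t in np.unique(type_grid):
            labels, n = ndimage.label(type_grid == t)        # 4-connected patches
            sizes = np.bincount(labels.ravel())[1:]          # label 0 = background
            patches.extend((int(t), int(s)) for s in sizes)
        areas = np.array([s for _, s in patches], dtype=float)
        return patches, shannon_diversity(areas / areas.sum())

    # Illustrative 5x5 community-type map (0, 1, 2 = three community types)
    grid = np.array([[0, 0, 1, 1, 1],
                     [0, 0, 1, 2, 2],
                     [0, 2, 2, 2, 2],
                     [1, 1, 2, 2, 0],
                     [1, 1, 2, 0, 0]])
    patches, H = patch_mosaic_diversity(grid)
    print(patches, f"H' = {H:.3f}")
    ```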

  16. An Improvement of the Anisotropy and Formability Predictions of Aluminum Alloy Sheets

    NASA Astrophysics Data System (ADS)

    Banabic, D.; Comsa, D. S.; Jurco, P.; Wagner, S.; Vos, M.

    2004-06-01

    The paper presents a yield criterion for orthotropic sheet metals and its implementation in a theoretical model in order to calculate the Forming Limit Curves. The proposed yield criterion has been validated for two aluminum alloys: AA3103-0 and AA5182-0, respectively. The biaxial tensile test of cross specimens has been used for the determination of the experimental yield locus. The new yield criterion has been implemented in the Marciniak-Kuczynski model for the calculation of limit strains. The calculated Forming Limit Curves have been compared with the experimental ones, determined by frictionless tests: the bulge test, plane strain test and uniaxial tensile test. The predicted Forming Limit Curves using the new yield criterion are in good agreement with the experimental ones.

  17. Sulphur hexafluoride: low energy (e,2e) experiments and molecular three-body distorted wave theory

    NASA Astrophysics Data System (ADS)

    Nixon, Kate L.; Murray, Andrew J.; Chaluvadi, H.; Ning, C. G.; Colgan, James; Madison, Don H.

    2016-10-01

    Experimental and theoretical triple differential ionisation cross-sections (TDCSs) are presented for the highest occupied molecular orbital of sulphur hexafluoride. These measurements were performed in the low energy regime, with outgoing electron energies ranging from 5 to 40 eV in a coplanar geometry, and with energies of 10 and 20 eV in a perpendicular geometry. Complementary theoretical predictions of the TDCS were calculated using the molecular three-body distorted wave formalism. Calculations were performed using a proper average over molecular orientations as well as the orientation-averaged molecular orbital approximation. The more sophisticated model, using the proper orientation average, was found to be in closer agreement with the experimental data; however, neither model accurately predicts the TDCS over all geometries and energies.

  18. External Validation of the Garvan Nomograms for Predicting Absolute Fracture Risk: The Tromsø Study

    PubMed Central

    Ahmed, Luai A.; Nguyen, Nguyen D.; Bjørnerem, Åshild; Joakimsen, Ragnar M.; Jørgensen, Lone; Størmer, Jan; Bliuc, Dana; Center, Jacqueline R.; Eisman, John A.; Nguyen, Tuan V.; Emaus, Nina

    2014-01-01

    Background Absolute risk estimation is a preferred approach for assessing fracture risk and treatment decision making. This study aimed to evaluate and validate the predictive performance of the Garvan Fracture Risk Calculator in a Norwegian cohort. Methods The analysis included 1637 women and 1355 men aged 60+ years from the Tromsø study. All incident fragility fractures between 2001 and 2009 were registered. The predicted probabilities of non-vertebral osteoporotic and hip fractures were determined using models with and without BMD. The discrimination and calibration of the models were assessed. Reclassification analysis was used to compare the models' performance. Results The incidence of osteoporotic and hip fracture was 31.5 and 8.6 per 1000 population in women, respectively; in men the corresponding incidence was 12.2 and 5.1. The predicted 5-year and 10-year probability of fractures was consistently higher in the fracture group than the non-fracture group for all models. The 10-year predicted probabilities of hip fracture in those with fracture were 2.8 (women) to 3.1 (men) times higher than in those without fracture. There was a close agreement between predicted and observed risk in both sexes and up to the fifth quintile. Among those in the highest quintile of risk, the models over-estimated the risk of fracture. Models with BMD performed better than models with body weight in correct classification of risk in individuals with and without fracture. The overall net decrease in reclassification of the model with weight compared to the model with BMD was 10.6% (p = 0.008) in women and 17.2% (p = 0.001) in men for osteoporotic fractures, and 13.3% (p = 0.07) in women and 17.5% (p = 0.09) in men for hip fracture. Conclusions The Garvan Fracture Risk Calculator is valid and clinically useful in identifying individuals at high risk of fracture. The models with BMD performed better than those with body weight in fracture risk prediction. PMID:25255221
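
    The agreement between predicted and observed risk by quintile can be checked with a simple calibration table. The sketch below is a generic illustration of that kind of check on synthetic data; it does not reproduce the Garvan calculator or the study's analysis.

    ```python
    import numpy as np
    import pandas as pd

    def calibration_by_quintile(predicted_risk, observed_fracture):
        """Compare mean predicted risk with observed fracture incidence
        within quintiles of predicted risk (a simple calibration check)."""
        df = pd.DataFrame({"pred": predicted_risk, "obs": observed_fracture})
        df["quintile"] = pd.qcut(df["pred"], 5, labels=False) + 1
        return df.groupby("quintile").agg(
            mean_predicted=("pred", "mean"),
            observed_rate=("obs", "mean"),
            n=("obs", "size"),
        )

    # Illustrative data: 5-year predicted probabilities and 0/1 fracture outcomes
    rng = np.random.default_rng(0)
    pred = rng.beta(2, 20, size=1000)          # skewed risk distribution
    obs = rng.binomial(1, pred)                # outcomes consistent with the risk
    print(calibration_by_quintile(pred, obs))
    ```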

  19. An Improved Radiative Transfer Model for Climate Calculations

    NASA Technical Reports Server (NTRS)

    Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.

    1998-01-01

    This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models. To assess the accuracy of the model, the results are compared to other more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focusing on the question of absorption of solar radiation by gases and aerosols.

  20. Computation of Turbulent Heat Transfer on the Walls of a 180 Degree Turn Channel With a Low Reynolds Number Reynolds Stress Model

    NASA Technical Reports Server (NTRS)

    Ameri, A. A.; Rigby, D. L.; Steinthorsson, E.; Gaugler, Raymond (Technical Monitor)

    2002-01-01

    The Low Reynolds number version of the Stress-omega model and the two equation k-omega model of Wilcox were used for the calculation of turbulent heat transfer in a 180 degree turn simulating an internal coolant passage. The Stress-omega model was chosen for its robustness. The turbulent thermal fluxes were calculated by modifying and using the Generalized Gradient Diffusion Hypothesis. The results showed that using this Reynolds Stress model allowed better prediction of heat transfer compared to the k-omega two equation model. This improvement however required a finer grid and commensurately more CPU time.

  1. Phenomenology of soft hadron interactions and the relevant EAS data

    NASA Technical Reports Server (NTRS)

    Kalmykov, N. N.; Khristiansen, G. B.; Motova, M. V.

    1984-01-01

    The interpretation of experimental data on superhigh-energy cosmic rays requires calculations using various models of elementary hadron interactions. One should prefer models that are justified by accelerator data and give definite predictions for superhigh energies. The model of quark-gluon pomeron strings (the QGPS model) satisfies this requirement.

  2. Prediction of Ras-effector interactions using position energy matrices.

    PubMed

    Kiel, Christina; Serrano, Luis

    2007-09-01

    One of the more challenging problems in biology is to determine the cellular protein interaction network. Progress has been made to predict protein-protein interactions based on structural information, assuming that structurally similar proteins interact in a similar way. In a previous publication, we have determined a genome-wide Ras-effector interaction network based on homology models, with a high accuracy of predicting binding and non-binding domains. However, for a prediction on a genome-wide scale, homology modelling is a time-consuming process. Therefore, we here successfully developed a faster method using position energy matrices, where based on different Ras-effector X-ray template structures, all amino acids in the effector binding domain are sequentially mutated to all other amino acid residues and the effect on binding energy is calculated. Those pre-calculated matrices can then be used to score any Ras or effector sequence for binding. Based on position energy matrices, the sequences of putative Ras-binding domains can be scanned quickly to calculate an energy sum value. By calibrating energy sum values using quantitative experimental binding data, thresholds can be defined and thus non-binding domains can be excluded quickly. Sequences which have energy sum values above this threshold are considered to be potential binding domains, and could be further analysed using homology modelling. This prediction method could be applied to other protein families sharing conserved interaction types, in order to determine in a fast way large-scale cellular protein interaction networks. Thus, it could have an important impact on future in silico structural genomics approaches, in particular with regard to increasing structural proteomics efforts, aiming to determine all possible domain folds and interaction types. All matrices are deposited in the ADAN database (http://adan-embl.ibmc.umh.es/). Supplementary data are available at Bioinformatics online.
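
    The scoring step lends itself to a very small sketch: align a candidate binding-domain sequence to a pre-calculated position energy matrix, sum the per-position energies, and compare the total against a calibrated threshold. The matrix values, threshold, and sequences below are purely illustrative, not data from the ADAN database.

    ```python
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def energy_sum(sequence, position_energy_matrix):
        """Sum per-position interaction energies (arbitrary units here)
        for a sequence aligned to the binding-domain positions of the matrix."""
        assert len(sequence) == len(position_energy_matrix)
        return sum(row[aa] for aa, row in zip(sequence, position_energy_matrix))

    def is_candidate_binder(sequence, matrix, threshold):
        """Domains scoring above the calibrated threshold are kept for
        follow-up homology modelling; the rest are excluded quickly."""
        return energy_sum(sequence, matrix) >= threshold

    # Toy 4-position matrix: one {amino acid: energy} dict per binding position
    matrix = [{aa: 0.0 for aa in AMINO_ACIDS} for _ in range(4)]
    matrix[0]["K"] = 1.2   # e.g. lysine favoured at position 1
    matrix[1]["D"] = 0.8
    matrix[2]["L"] = 0.5
    matrix[3]["R"] = 1.0

    print(is_candidate_binder("KDLR", matrix, threshold=2.0))   # True
    print(is_candidate_binder("AAAA", matrix, threshold=2.0))   # False
    ```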

  3. Predicting growth of graphene nanostructures using high-fidelity atomistic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarty, Keven F.; Zhou, Xiaowang; Ward, Donald K.

    2015-09-01

    In this project we developed the atomistic models needed to predict how graphene grows when carbon is deposited on metal and semiconductor surfaces. We first calculated energies of many carbon configurations using first-principles electronic structure calculations and then used these energies to construct an empirical bond order potential that enables comprehensive molecular dynamics simulation of growth. We validated our approach by comparing our predictions to experiments of graphene growth on Ir, Cu and Ge. The robustness of our understanding of graphene growth will enable high-quality graphene to be grown on novel substrates, which will expand the number of potential types of graphene electronic devices.

  4. Grain growth prediction based on data assimilation by implementing 4DVar on multi-phase-field model

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Nagao, Hiromichi; Kasuya, Tadashi; Inoue, Junya

    2017-12-01

    We propose a method to predict grain growth based on data assimilation by using a four-dimensional variational method (4DVar). When implemented on a multi-phase-field model, the proposed method allows us to calculate the predicted grain structures and uncertainties in them that depend on the quality and quantity of the observational data. We confirm through numerical tests involving synthetic data that the proposed method correctly reproduces the true phase-field assumed in advance. Furthermore, it successfully quantifies uncertainties in the predicted grain structures, where such uncertainty quantifications provide valuable information to optimize the experimental design.
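
    The essence of 4DVar is the minimization of a cost function combining a background (prior) misfit with the misfit between observations and the model trajectory over a time window. The sketch below applies this idea to a toy scalar growth model rather than the multi-phase-field model; the forward model, error covariances, and synthetic observations are all assumptions for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy forward model standing in for the (much larger) phase-field model:
    # scalar "grain size" growing as dx/dt = k * (1 - x), integrated with Euler.
    def forward(x0, k, n_steps, dt=0.1):
        xs = [x0]
        for _ in range(n_steps):
            xs.append(xs[-1] + dt * k * (1.0 - xs[-1]))
        return np.array(xs)

    def fourdvar_cost(params, obs, obs_times, x_bg, B_inv, R_inv):
        """4DVar cost: background misfit + observation misfit along the trajectory."""
        x0, k = params
        traj = forward(x0, k, n_steps=int(max(obs_times)))
        diff_bg = np.array([x0, k]) - x_bg
        innov = traj[obs_times] - obs
        return 0.5 * diff_bg @ B_inv @ diff_bg + 0.5 * innov @ R_inv @ innov

    # Synthetic "observations" from a true state, as in the paper's numerical tests
    true = forward(0.1, 0.5, 40)
    obs_times = np.array([10, 20, 30, 40])
    obs = true[obs_times] + np.random.default_rng(1).normal(0, 0.01, size=4)

    x_bg = np.array([0.2, 0.3])                      # prior (background) guess
    B_inv = np.linalg.inv(np.diag([0.1**2, 0.2**2]))
    R_inv = np.linalg.inv(np.diag([0.01**2] * 4))

    result = minimize(fourdvar_cost, x_bg, args=(obs, obs_times, x_bg, B_inv, R_inv))
    print("estimated (x0, k):", result.x)
    ```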

  5. A creep cavity growth model for creep-fatigue life prediction of a unidirectional W/Cu composite

    NASA Astrophysics Data System (ADS)

    Kim, Young-Suk; Verrilli, Michael J.; Halford, Gary R.

    1992-05-01

    A microstructural model was developed to predict creep-fatigue life in a [0]4, 9 volume percent tungsten fiber-reinforced copper matrix composite at the temperature of 833 K. The mechanism of failure of the composite is assumed to be governed by the growth of quasi-equilibrium cavities in the copper matrix of the composite, based on the microscopically observed failure mechanisms. The methodology uses a cavity growth model developed for prediction of creep fracture. Instantaneous values of strain rate and stress in the copper matrix during fatigue cycles were calculated and incorporated in the model to predict cyclic life. The stress in the copper matrix was determined by use of a simple two-bar model for the fiber and matrix during cyclic loading. The model successfully predicted the composite creep-fatigue life under tension-tension cyclic loading through the use of this instantaneous matrix stress level. Inclusion of additional mechanisms such as cavity nucleation, grain boundary sliding, and the effect of fibers on matrix-stress level would result in more generalized predictions of creep-fatigue life.

  6. Evaluation of a computational model to predict elbow range of motion

    PubMed Central

    Nishiwaki, Masao; Johnson, James A.; King, Graham J. W.; Athwal, George S.

    2014-01-01

    Computer models capable of predicting elbow flexion and extension range of motion (ROM) limits would be useful for assisting surgeons in improving the outcomes of surgical treatment of patients with elbow contractures. A simple and robust computer-based model was developed that predicts elbow joint ROM using bone geometries calculated from computed tomography image data. The model assumes a hinge-like flexion-extension axis, and that elbow passive ROM limits can be based on terminal bony impingement. The model was validated against experimental results with a cadaveric specimen, and was able to predict the flexion and extension limits of the intact joint to 0° and 3°, respectively. The model was also able to predict the flexion and extension limits to 1° and 2°, respectively, when simulated osteophytes were inserted into the joint. Future studies based on this approach will be used for the prediction of elbow flexion-extension ROM in patients with primary osteoarthritis to help identify motion-limiting hypertrophic osteophytes, and will eventually permit real-time computer-assisted navigated excisions. PMID:24841799

  7. A creep cavity growth model for creep-fatigue life prediction of a unidirectional W/Cu composite

    NASA Technical Reports Server (NTRS)

    Kim, Young-Suk; Verrilli, Michael J.; Halford, Gary R.

    1992-01-01

    A microstructural model was developed to predict creep-fatigue life in a [0]4, 9 volume percent tungsten fiber-reinforced copper matrix composite at the temperature of 833 K. The mechanism of failure of the composite is assumed to be governed by the growth of quasi-equilibrium cavities in the copper matrix of the composite, based on the microscopically observed failure mechanisms. The methodology uses a cavity growth model developed for prediction of creep fracture. Instantaneous values of strain rate and stress in the copper matrix during fatigue cycles were calculated and incorporated in the model to predict cyclic life. The stress in the copper matrix was determined by use of a simple two-bar model for the fiber and matrix during cyclic loading. The model successfully predicted the composite creep-fatigue life under tension-tension cyclic loading through the use of this instantaneous matrix stress level. Inclusion of additional mechanisms such as cavity nucleation, grain boundary sliding, and the effect of fibers on matrix-stress level would result in more generalized predictions of creep-fatigue life.

  8. Estimating habitat volume of living resources using three-dimensional circulation and biogeochemical models

    NASA Astrophysics Data System (ADS)

    Smith, Katharine A.; Schlag, Zachary; North, Elizabeth W.

    2018-07-01

    Coupled three-dimensional circulation and biogeochemical models predict changes in water properties that can be used to define fish habitat, including physiologically important parameters such as temperature, salinity, and dissolved oxygen. However, methods for calculating the volume of habitat defined by the intersection of multiple water properties are not well established for coupled three-dimensional models. The objectives of this research were to examine multiple methods for calculating habitat volume from three-dimensional model predictions, select the most robust approach, and provide an example application of the technique. Three methods were assessed: the "Step," "Ruled Surface", and "Pentahedron" methods, the latter of which was developed as part of this research. Results indicate that the analytical Pentahedron method is exact, computationally efficient, and preserves continuity in water properties between adjacent grid cells. As an example application, the Pentahedron method was implemented within the Habitat Volume Model (HabVol) using output from a circulation model with an Arakawa C-grid and physiological tolerances of juvenile striped bass (Morone saxatilis). This application demonstrates that the analytical Pentahedron method can be successfully applied to calculate habitat volume using output from coupled three-dimensional circulation and biogeochemical models, and it indicates that the Pentahedron method has wide application to aquatic and marine systems for which these models exist and physiological tolerances of organisms are known.
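
    For contrast with the Pentahedron method, the simpler "Step" approach can be sketched in a few lines: a grid cell contributes its whole volume if all water properties lie inside the physiological limits, and nothing otherwise. The arrays, limits, and cell volumes below are illustrative, not output from HabVol or the circulation model.

    ```python
    import numpy as np

    def step_habitat_volume(temp, salt, do, cell_volume, limits):
        """'Step'-style habitat volume: sum the volumes of grid cells whose
        temperature, salinity and dissolved oxygen all fall inside the limits.
        (The Pentahedron method instead interpolates the habitat boundary
        inside partially suitable cells.)"""
        suitable = np.ones(temp.shape, dtype=bool)
        for name, field in (("temp", temp), ("salt", salt), ("do", do)):
            lo, hi = limits[name]
            suitable &= (field >= lo) & (field <= hi)
        return float(np.sum(cell_volume[suitable]))

    # Illustrative 3-D model output (nz, ny, nx) and juvenile-fish-style limits
    shape = (10, 20, 20)
    rng = np.random.default_rng(2)
    temp = rng.uniform(5, 30, shape)         # deg C
    salt = rng.uniform(0, 35, shape)         # psu
    do = rng.uniform(0, 10, shape)           # mg/L
    cell_volume = np.full(shape, 1.0e6)      # m^3 per grid cell

    limits = {"temp": (10.0, 27.0), "salt": (0.0, 25.0), "do": (3.0, np.inf)}
    print(f"habitat volume = {step_habitat_volume(temp, salt, do, cell_volume, limits):.3e} m^3")
    ```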

  9. Oxygen Pickup Ions Measured by MAVEN Outside the Martian Bow Shock

    NASA Astrophysics Data System (ADS)

    Rahmati, A.; Cravens, T.; Larson, D. E.; Lillis, R. J.; Dunn, P.; Halekas, J. S.; Connerney, J. E. P.; Eparvier, F. G.; Thiemann, E.; Mitchell, D. L.; Jakosky, B. M.

    2015-12-01

    The MAVEN (Mars Atmosphere and Volatile EvolutioN) spacecraft entered orbit around Mars on September 21, 2014 and has since been detecting energetic oxygen pickup ions by its SEP (Solar Energetic Particles) and SWIA (Solar Wind Ion Analyzer) instruments. The oxygen pickup ions detected outside the Martian bowshock and in the upstream solar wind are associated with the extended hot oxygen exosphere of Mars, which is created mainly by the dissociative recombination of molecular oxygen ions with electrons in the ionosphere. We use analytic solutions to the equations of motion of pickup ions moving in the undisturbed upstream solar wind magnetic and motional electric fields and calculate the flux of oxygen pickup ions at the location of MAVEN. Our model calculates the ionization rate of oxygen atoms in the exosphere based on the hot oxygen densities predicted by Rahmati et al. (2014), and the sources of ionization include photo-ionization, charge exchange, and electron impact ionization. The photo-ionization frequency is calculated using the FISM (Flare Irradiance Spectral Model) solar flux model, based on MAVEN EUVM (Extreme Ultra-Violet Monitor) measurements. The frequency of charge exchange between a solar wind proton and an oxygen atom is calculated using MAVEN SWIA solar wind proton flux measurements, and the electron impact ionization frequency is calculated based on MAVEN SWEA (Solar Wind Electron Analyzer) solar wind electron flux measurements. The solar wind magnetic field used in the model is from the measurements taken by MAVEN MAG (magnetometer) in the upstream solar wind. The good agreement between our predicted pickup oxygen fluxes and the MAVEN SEP and SWIA measured ones confirms detection of oxygen pickup ions and these model-data comparisons can be used to constrain models of hot oxygen densities and photochemical escape flux.

  10. NUCLEAR AND HEAVY ION PHYSICS: α-decay half-lives of superheavy nuclei and general predictions

    NASA Astrophysics Data System (ADS)

    Dong, Jian-Min; Zhang, Hong-Fei; Wang, Yan-Zhao; Zuo, Wei; Su, Xin-Ning; Li, Jun-Qing

    2009-08-01

    The generalized liquid drop model (GLDM) and the cluster model have been employed to calculate the α-decay half-lives of superheavy nuclei (SHN) using the experimental α-decay Q values. The results of the cluster model are slightly poorer than those from the GLDM if experimental Q values are used. The prediction powers of these two models with theoretical Q values from Audi et al. (QAudi) and Muntian et al. (QM) have been tested to find that the cluster model with QAudi and QM could provide reliable results for Z > 112 but the GLDM with QAudi for Z <= 112. The half-lives of some still unknown nuclei are predicted by these two models and these results may be useful for future experimental assignment and identification.

  11. The accuracy of ab initio calculations without ab initio calculations for charged systems: Kriging predictions of atomistic properties for ions in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Di Pasquale, Nicodemo; Davie, Stuart J.; Popelier, Paul L. A.

    2018-06-01

    Using the machine learning method kriging, we predict the energies of atoms in ion-water clusters, consisting of either Cl- or Na+ surrounded by a number of water molecules (i.e., without Na+Cl- interaction). These atomic energies are calculated following the topological energy partitioning method called Interacting Quantum Atoms (IQAs). Kriging predicts atomic properties (in this case IQA energies) by a model that has been trained over a small set of geometries with known property values. The results presented here are part of the development of an advanced type of force field, called FFLUX, which offers quantum mechanical information to molecular dynamics simulations without the limiting computational cost of ab initio calculations. The results reported for the prediction of the IQA components of the energy in the test set exhibit an accuracy of a few kJ/mol, corresponding to an average error of less than 5%, even when a large cluster of water molecules surrounding an ion is considered. Ions represent an important chemical system and this work shows that they can be correctly taken into account in the framework of the FFLUX force field.
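
    As a stand-in for the kriging models used here, a Gaussian process regressor (kriging under another name) can be trained on geometric features and queried for atomic energies. The sketch below uses scikit-learn on synthetic data; the features, kernel, and noise level are assumptions, not the FFLUX training setup.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Illustrative training set: geometric descriptors of an ion-water cluster
    # (here just random features) mapped to an atomic IQA-style energy in kJ/mol.
    rng = np.random.default_rng(3)
    X_train = rng.uniform(-1, 1, size=(200, 6))       # 6 geometric features
    y_train = 50.0 * np.sin(X_train).sum(axis=1) + rng.normal(0, 1.0, 200)

    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(6))
    gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True)
    gpr.fit(X_train, y_train)

    X_test = rng.uniform(-1, 1, size=(50, 6))
    y_true = 50.0 * np.sin(X_test).sum(axis=1)
    y_pred = gpr.predict(X_test)
    print(f"mean absolute error: {np.mean(np.abs(y_pred - y_true)):.2f} kJ/mol")
    ```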

  12. Tree value system: description and assumptions.

    Treesearch

    D.G. Briggs

    1989-01-01

    TREEVAL is a microcomputer model that calculates tree or stand values and volumes based on product prices, manufacturing costs, and predicted product recovery. It was designed as an aid in evaluating management regimes. TREEVAL calculates values in either of two ways, one based on optimized tree bucking using dynamic programming and one simulating the results of user-...

  13. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  14. A first-principles examination of the asymmetric induction model in the binap/Rh(I)-catalysed 1,4-addition of phenylboronic acid to cyclic enones by density functional theory calculations.

    PubMed

    Qin, Hua-Li; Chen, Xiao-Qing; Huang, Yi-Zhen; Kantchev, Eric Assen B

    2014-09-26

    First-principles modelling of the diastereomeric transition states in the enantiodiscrimination stage of the catalytic cycle can reveal intimate details about the mechanism of enantioselection. This information can be invaluable for further improvement of the catalytic protocols by rational design. Herein, we present a density functional theory (IEFPCM/PBE0/DGDZVP level of theory) modelling of the carborhodation step for the asymmetric 1,4-arylation of cyclic α,β-unsaturated ketones mediated by a [(binap)Rh(I)] catalyst. The calculations completely support the older, qualitative, pictorial model predicting the sense of the asymmetric induction for both the chelating diphosphane (binap) and the more recent chiral diene (Phbod) ligands, while also permitting quantification of the enantiomeric excess (ee). The effect of dispersion interaction correction and basis sets has been also investigated. Dispersion-corrected functionals and solvation models significantly improve the predicted ee values. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.

  16. Prediction of Material Properties of Nanostructured Polymer Composites Using Atomistic Simulations

    NASA Technical Reports Server (NTRS)

    Hinkley, J.A.; Clancy, T.C.; Frankland, S.J.V.

    2009-01-01

    Atomistic models of epoxy polymers were built in order to assess the effect of structure at the nanometer scale on the resulting bulk properties such as elastic modulus and thermal conductivity. Atomistic models of both bulk polymer and carbon nanotube polymer composites were built. For the bulk models, the effect of moisture content and temperature on the resulting elastic constants was calculated. A relatively consistent decrease in modulus was seen with increasing temperature. The dependence of modulus on moisture content was less consistent. This behavior was seen for two different epoxy systems, one containing a difunctional epoxy molecule and the other a tetrafunctional epoxy molecule. Both epoxy structures were crosslinked with diamine curing agents. Multifunctional properties were calculated with the nanocomposite models. Molecular dynamics simulation was used to estimate the interfacial thermal (Kapitza) resistance between the carbon nanotube and the surrounding epoxy matrix. These estimated values were used in a multiscale model in order to predict the thermal conductivity of a nanocomposite as a function of the nanometer scaled molecular structure.

  17. Mass and heat transfer model of Tubular Solar Still

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahsan, Amimul; Fukuhara, Teruyuki

    2010-07-15

    In this paper, a new mass and heat transfer model of a Tubular Solar Still (TSS) was proposed incorporating various mass and heat transfer coefficients taking account of the humid air properties inside the still. The heat balance of the humid air and the mass balance of the water vapor in the humid air were formulated for the first time. As a result, the proposed model made it possible to calculate the diurnal variations of the temperature, water vapor density and relative humidity of the humid air, and to predict the hourly condensation flux besides the temperatures of the water, cover and trough, and the hourly evaporation flux. The validity of the proposed model was verified using the field experimental results carried out in Fukui, Japan and Muscat, Oman in 2008. The diurnal variations of the calculated temperatures and water vapor densities had a good agreement with the observed ones. Furthermore, the proposed model can predict the daily and hourly production flux precisely.

  18. Perspective - Systematic study of Reynolds stress closure models in the computations of plane channel flows

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Sarkar, S.

    1993-01-01

    The roles of pressure-strain and turbulent diffusion models in the numerical calculation of turbulent plane channel flows with second-moment closure models are investigated. Three turbulent diffusion and five pressure-strain models are utilized in the computations. The main characteristics of the mean flow and the turbulent fields are compared against experimental data. All the features of the mean flow are correctly predicted by all but one of the Reynolds stress closure models. The Reynolds stress anisotropies in the log layer are predicted to varying degrees of accuracy (good to fair) by the models. None of the models could predict correctly the extent of relaxation towards isotropy in the wake region near the center of the channel. Results from the direct numerical simulation are used to further clarify this behavior of the models.

  19. Systematic study of Reynolds stress closure models in the computations of plane channel flows

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Sarkar, S.

    1992-01-01

    The roles of pressure-strain and turbulent diffusion models in the numerical calculation of turbulent plane channel flows with second-moment closure models are investigated. Three turbulent diffusion and five pressure-strain models are utilized in the computations. The main characteristics of the mean flow and the turbulent fields are compared against experimental data. All the features of the mean flow are correctly predicted by all but one of the Reynolds stress closure models. The Reynolds stress anisotropies in the log layer are predicted to varying degrees of accuracy (good to fair) by the models. None of the models could predict correctly the extent of relaxation towards isotropy in the wake region near the center of the channel. Results from the direct numerical simulation are used to further clarify this behavior of the models.

  20. Influence of yield surface curvature on the macroscopic yielding and ductile failure of isotropic porous plastic materials

    NASA Astrophysics Data System (ADS)

    Dæhli, Lars Edvard Bryhni; Morin, David; Børvik, Tore; Hopperstad, Odd Sture

    2017-10-01

    Numerical unit cell models of an approximate representative volume element for a porous ductile solid are utilized to investigate differences in the mechanical response between a quadratic and a non-quadratic matrix yield surface. A Hershey equivalent stress measure with two distinct values of the yield surface exponent is employed as the matrix description. Results from the unit cell calculations are further used to calibrate a heuristic extension of the Gurson model which incorporates effects of the third deviatoric stress invariant. An assessment of the porous plasticity model reveals that it describes the unit cell response to some extent; however, compared to the unit cell simulations it underestimates the effect of the Lode parameter for the lower triaxiality ratios imposed in this study. Ductile failure predictions by means of finite element simulations using a unit cell model that resembles an imperfection band are then conducted to examine how the non-quadratic matrix yield surface influences the failure strain as compared to the quadratic matrix yield surface. Further, strain localization predictions based on bifurcation analyses and imperfection band analyses are undertaken using the calibrated porous plasticity model. These simulations are then compared to the unit cell calculations in order to elucidate the differences between the various modelling strategies. The current study reveals that strain localization analyses using an imperfection band model and a spatially discretized unit cell are in reasonable agreement, while the bifurcation analyses predict higher strain levels at localization. Imperfection band analyses are finally used to calculate failure loci for the quadratic and the non-quadratic matrix yield surface under a wide range of loading conditions. The underlying matrix yield surface is demonstrated to have a pronounced influence on the onset of strain localization.
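
    The Hershey equivalent stress referred to above has a compact closed form in terms of the principal stresses, with the exponent a controlling the departure from the quadratic (von Mises) surface. A minimal sketch, with an illustrative stress state:

    ```python
    import numpy as np

    def hershey_equivalent_stress(principal_stresses, a):
        """Hershey (Hosford-type) equivalent stress from the principal stresses.
        a = 2 recovers the quadratic (von Mises) surface; higher exponents give
        the non-quadratic surfaces considered for the matrix material."""
        s1, s2, s3 = principal_stresses
        return (0.5 * (abs(s1 - s2)**a + abs(s2 - s3)**a + abs(s3 - s1)**a))**(1.0 / a)

    # Illustrative stress state (MPa): uniaxial tension plus a hydrostatic part
    sig = (300.0, 100.0, 100.0)
    for a in (2, 6, 8):
        print(f"a = {a}: sigma_eq = {hershey_equivalent_stress(sig, a):.1f} MPa")
    ```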

  1. Modeling The Shock Initiation of PBX-9501 in ALE3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leininger, L; Springer, H K; Mace, J

    The SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has determined the 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive, PBX 9501. A series of finite element impact calculations have been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate the code predictions. The SMIS tests use a powder gun to shoot scaled NATO standard fragments at a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. The SMIS real-world shot scenario creates a unique test-bed because many of the fragments arrive at the impact plate off-center and at an angle of impact. The goal of these model validation experiments is to demonstrate the predictive capability of the Tarver-Lee Ignition and Growth (I&G) reactive flow model [2] in this fully 3-dimensional regime of Shock to Detonation Transition (SDT). The 3-dimensional Arbitrary Lagrange Eulerian hydrodynamic model in ALE3D applies the Ignition and Growth (I&G) reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle of impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations accurately reproduce the 'Go/No-Go' threshold of the Shock to Detonation Transition (SDT) reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied in a predictive fashion for the response of heterogeneous high explosives in the SDT regime.

  2. Method to determine the optimal constitutive model from spherical indentation tests

    NASA Astrophysics Data System (ADS)

    Zhang, Tairui; Wang, Shang; Wang, Weiqiang

    2018-03-01

    The limitation of current indentation theories was investigated and a method to determine the optimal constitutive model through spherical indentation tests was proposed. Two constitutive models, the Power-law and the Linear-law, were used in Finite Element (FE) calculations, and then a set of indentation governing equations was established for each model. The load-depth data from the normal indentation depth was used to fit the best parameters in each constitutive model while the data from the further loading part was compared with those from FE calculations, and the model that better predicted the further deformation was considered the optimal one. Moreover, a Young's modulus calculation model which took the previous plastic deformation and the phenomenon of pile-up (or sink-in) into consideration was also proposed to revise the original Sneddon-Pharr-Oliver model. The indentation results on six materials, 304, 321, SA508, SA533, 15CrMoR, and Fv520B, were compared with tensile ones, which validated the reliability of the revised Young's modulus calculation model and the optimal constitutive model determination method in this study.
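
    The selection logic can be sketched generically: fit each candidate response function to the load-depth data from the normal indentation depth, then keep the model that better predicts the held-out further-loading data. The two response forms and the synthetic curve below are placeholders, not the paper's FE-derived governing equations.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Two placeholder load-depth forms standing in for the paper's FE-derived
    # governing equations (one per candidate constitutive model).
    def model_quadratic(h, c):          # Kick's-law-like response
        return c * h**2

    def model_power(h, c, m):           # general power-law response
        return c * h**m

    def select_model(h_fit, p_fit, h_check, p_check):
        """Fit each candidate on the 'normal depth' data, then keep the one that
        better predicts the held-out 'further loading' data."""
        best_name, best_err = None, np.inf
        for name, f, p0 in [("quadratic", model_quadratic, (1.0,)),
                            ("power", model_power, (1.0, 2.0))]:
            popt, _ = curve_fit(f, h_fit, p_fit, p0=p0, maxfev=10000)
            err = np.mean((f(h_check, *popt) - p_check)**2)
            if err < best_err:
                best_name, best_err = name, err
        return best_name, best_err

    # Synthetic indentation curve: P ~ h^1.8 with noise
    h = np.linspace(0.01, 0.30, 60)                                          # mm
    p = 900.0 * h**1.8 + np.random.default_rng(4).normal(0, 0.5, h.size)     # N
    name, err = select_model(h[:40], p[:40], h[40:], p[40:])
    print(f"selected form: {name} (check-set MSE = {err:.3f})")
    ```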

  3. A Hybrid RANS/LES Approach for Predicting Jet Noise

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.

  4. Continuum Lowering and Fermi-Surface Rising in Strongly Coupled and Degenerate Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, S. X.

    Here, continuum lowering is a well-known and important physics concept that describes the ionization potential depression (IPD) in plasmas caused by thermal-/pressure-induced ionization of outer-shell electrons. The existing IPD models are often used to characterize plasma conditions and to gauge opacity calculations. Recent precision measurements have revealed deficits in our understanding of continuum lowering in dense hot plasmas. However, these investigations have so far been limited to IPD in strongly coupled but nondegenerate plasmas. Here, we report a first-principles study of the K-edge shifting in both strongly coupled and fully degenerate carbon plasmas, with quantum molecular dynamics (QMD) calculations based on the all-electron density-functional theory (DFT). The resulting K-edge shift versus plasma density, as a probe of continuum lowering and Fermi-surface rising, is found to be significantly different from predictions of existing IPD models. In contrast, a simple model of “single atom in box” (SAIB), developed in this work, accurately predicts K-edge locations in agreement with the ab initio calculations.

  5. Energy balance in the core of the Saturn plasma sheet: H2O chemistry

    NASA Astrophysics Data System (ADS)

    Shemansky, D. E.; Yoshii, J.; Liu, X.

    2011-10-01

    A model of the weakly ionized plasma at Saturn has been developed to investigate the properties of the system. Energy balance is a critical consideration. The present model is based on two sources of mass, H2O, and HI. H2O is a variable. HI is a significant volume of gas flowing through the plasma imposed by the source at Saturn [1,2,3]. The energy sources are solar radiation and heterogeneous magnetosphere electrons. The model calculations produce energy rates, species partitioning, and relaxation lifetimes. For the first time the state of the ambient plasma sheet electrons is directly connected to the energy forcing functions. Within limits of knowledge, the predicted state of the core region of the plasma sheet in neutral and ionized gas corresponds satisfactorily to observation. The dominant ions in these calculations are H2O+ and H3O+ with lifetimes of several days. The lifetime of H2O is roughly 60 days. In calculations carried out so far the predicted source rate for H2O is lower than the rates quoted from the Enceladus encounters.

  6. New calculations of gross β-decay properties for astrophysical applications: Speeding-up the classical r process

    NASA Astrophysics Data System (ADS)

    Möller, Peter; Pfeiffer, Bernd; Kratz, Karl-Ludwig

    2003-05-01

    Recent compilations of experimental gross β-decay properties, i.e., half-lives (T1/2) and neutron-emission probabilities (Pn), are compared to improved global macroscopic-microscopic model predictions. The model combines calculations within the quasiparticle (QP) random-phase approximation for the Gamow-Teller (GT) part with an empirical spreading of the QP strength and the gross theory for the first-forbidden part of β⁻ decay. Nuclear masses are either taken from the 1995 data compilation of Audi et al., when available, otherwise from the finite-range droplet model. Especially for spherical and neutron-(sub-)magic isotopes a considerable improvement compared to our earlier predictions for pure GT decay (ADNDT, 1997) is observed. T1/2 and Pn values up to the neutron drip line have been used in r-process calculations within the classical “waiting-point” approximation. With the new nuclear-physics input, a considerable speeding-up of the r-matter flow is observed, in particular at those r-abundance peaks which are related to magic neutron-shell closures.

  7. Exploratory Studies in Generalized Predictive Control for Active Gust Load Alleviation

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.; Eure, Kenneth W.; Juang, Jer-Nan

    2006-01-01

    The results of numerical simulations aimed at assessing the efficacy of Generalized Predictive Control (GPC) for active gust load alleviation using trailing- and leading-edge control surfaces are presented. The equations underlying the method are presented and discussed, including system identification, calculation of control law matrices, and calculation of commands applied to the control effectors. Both embedded and explicit feedforward paths for inclusion of disturbance effects are addressed. Results from two types of simulations are shown. The first used a 3-DOF math model of a mass-spring-dashpot system subject to user-defined external disturbances. The second used open-loop data from a wind-tunnel test in which a wing model was excited by sinusoidal vertical gusts; closed-loop behavior was simulated in post-test calculations. Results obtained from these simulations have been decidedly positive. In particular, results of closed-loop simulations for the wing model showed reductions in root moments by factors as high as 1000, depending on whether the excitation is from a constant- or variable-frequency gust and on the direction of the response.
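
    A minimal receding-horizon predictive controller in the spirit of GPC (closer, strictly, to dynamic matrix control) can be sketched as: obtain a step-response model, predict the free response, and solve a regularized least-squares problem for the future control increments. The plant, horizons, and weighting below are toy assumptions, not the wind-tunnel system or the paper's control law matrices.

    ```python
    import numpy as np

    def simulate_plant(u, y_prev):
        """Toy first-order plant y(k+1) = 0.9 y(k) + 0.2 u(k)."""
        return 0.9 * y_prev + 0.2 * u

    def step_response(n):
        """Step response of the plant (identified from data in a real GPC setup)."""
        y, s = 0.0, []
        for _ in range(n):
            y = simulate_plant(1.0, y)
            s.append(y)
        return np.array(s)

    def predictive_control(y_now, u_prev, r, s, Np=20, Nu=5, lam=0.1):
        """One receding-horizon step: solve for future control increments."""
        # Free response: output if u stays at u_prev (first-order plant assumed)
        f = np.array([0.9**(k + 1) * y_now
                      + 0.2 * u_prev * (1 - 0.9**(k + 1)) / (1 - 0.9)
                      for k in range(Np)])
        # Dynamic matrix G: effect of future increments on future outputs
        G = np.zeros((Np, Nu))
        for i in range(Np):
            for j in range(Nu):
                if i >= j:
                    G[i, j] = s[i - j]
        du = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T @ (r - f))
        return u_prev + du[0]          # apply only the first increment

    s = step_response(40)
    y, u = 0.0, 0.0
    for k in range(30):
        u = predictive_control(y, u, r=np.ones(20), s=s)
        y = simulate_plant(u, y)
    print(f"output after 30 steps: {y:.3f} (reference 1.0)")
    ```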

  8. DSMC simulations of shock tube experiments for the dissociation rate of nitrogen

    NASA Astrophysics Data System (ADS)

    Bird, G. A.

    2012-11-01

    The DSMC method has been used to simulate the flow associated with several experiments that led to predictions of the dissociation rate in nitrogen. One involved optical interferometry to determine the density behind a strong shock wave and the other involved the measurement of the shock tube end-wall pressure after the reflection of a similar shock wave. DSMC calculations for the un-reflected shock wave were made with the older TCE model that converts rate coefficients to reaction cross-sections, with the newer Q-K model that predicts the rates, and with a set of reaction cross-sections for nitrogen dissociation from QCT calculations. A comparison of the resulting density profiles with the measured profile provides a test of the validity of the DSMC chemistry models. The DSMC reaction rates were sampled directly in the DSMC calculation, both far downstream where the flow is in equilibrium and in the non-equilibrium region immediately behind the shock. This permits a critical evaluation of data reduction procedures that were employed to deduce the dissociation rate from the measured quantities.

  9. Monte Carlo approach in assessing damage in higher order structures of DNA

    NASA Technical Reports Server (NTRS)

    Chatterjee, A.; Schmidt, J. B.; Holley, W. R.

    1994-01-01

    We have developed a computer model of nuclear DNA in the form of chromatin fibre. The fibres are modeled as an ideal solenoid consisting of twenty helical turns with six nucleosomes per turn. The chromatin model, in combination with a Monte Carlo theory of radiation damage induced by charged particles, based on general features of track structure and stopping power theory, has been used to evaluate the influence of DNA structure on initial damage. An interesting result has emerged from our calculations. Our calculated results predict the existence of strong spatial correlations in damage sites associated with the symmetries in the solenoidal model. We have calculated spectra of short fragments of double stranded DNA produced by multiple double strand breaks induced by both high and low LET radiation. The spectra exhibit peaks at multiples of approximately 85 base pairs (the nucleosome periodicity), and approximately 1000 base pairs (solenoid periodicity). Preliminary experiments to investigate the fragment distributions from irradiated DNA, made by B. Rydberg at Lawrence Berkeley Laboratory, confirm the existence of short DNA fragments and are in substantial agreement with the predictions of our theory.

  10. Continuum Lowering and Fermi-Surface Rising in Strongly Coupled and Degenerate Plasmas

    DOE PAGES

    Hu, S. X.

    2017-08-10

    Here, continuum lowering is a well-known and important physics concept that describes the ionization potential depression (IPD) in plasmas caused by thermal-/pressure-induced ionization of outer-shell electrons. The existing IPD models are often used to characterize plasma conditions and to gauge opacity calculations. Recent precision measurements have revealed deficits in our understanding of continuum lowering in dense hot plasmas. However, these investigations have so far been limited to IPD in strongly coupled but nondegenerate plasmas. Here, we report a first-principles study of the K-edge shifting in both strongly coupled and fully degenerate carbon plasmas, with quantum molecular dynamics (QMD) calculations based on the all-electron density-functional theory (DFT). The resulting K-edge shift versus plasma density, as a probe of continuum lowering and Fermi-surface rising, is found to be significantly different from predictions of existing IPD models. In contrast, a simple model of “single atom in box” (SAIB), developed in this work, accurately predicts K-edge locations in agreement with the ab initio calculations.

  11. Progress in Earth System Modeling since the ENIAC Calculation

    NASA Astrophysics Data System (ADS)

    Fung, I.

    2009-05-01

    The success of the first numerical weather prediction experiment on the ENIAC computer in 1950 was hinged on the expansion of the meteorological observing network, which led to theoretical advances in atmospheric dynamics and subsequently the implementation of the simplified equations on the computer. This paper briefly reviews the progress in Earth System Modeling and climate observations, and suggests a strategy to sustain and expand the observations needed to advance climate science and prediction.

  12. Joule-Thomson effect and internal convection heat transfer in turbulent He II flow

    NASA Technical Reports Server (NTRS)

    Walstrom, P. L.

    1988-01-01

    The temperature rise in highly turbulent He II flowing in tubing was measured in the temperature range 1.6-2.1 K. The effect of internal convection heat transport on the predicted temperature profiles is calculated from the two-fluid model with mutual friction. The model predictions are in good agreement with the measurements, provided that the pressure gradient term is retained in the expression for internal convection heat flow.

  13. A Diagnostic Calculator for Detecting Glaucoma on the Basis of Retinal Nerve Fiber Layer, Optic Disc, and Retinal Ganglion Cell Analysis by Optical Coherence Tomography.

    PubMed

    Larrosa, José Manuel; Moreno-Montañés, Javier; Martinez-de-la-Casa, José María; Polo, Vicente; Velázquez-Villoria, Álvaro; Berrozpe, Clara; García-Granero, Marta

    2015-10-01

    The purpose of this study was to develop and validate a multivariate predictive model to detect glaucoma by using a combination of retinal nerve fiber layer (RNFL), retinal ganglion cell-inner plexiform (GCIPL), and optic disc parameters measured using spectral-domain optical coherence tomography (OCT). Five hundred eyes from 500 participants and 187 eyes of another 187 participants were included in the study and validation groups, respectively. Patients with glaucoma were classified in five groups based on visual field damage. Sensitivity and specificity of all glaucoma OCT parameters were analyzed. Receiver operating characteristic curves (ROC) and areas under the ROC (AUC) were compared. Three predictive multivariate models (quantitative, qualitative, and combined) that used a combination of the best OCT parameters were constructed. A diagnostic calculator was created using the combined multivariate model. The best AUC parameters were: inferior RNFL, average RNFL, vertical cup/disc ratio, minimal GCIPL, and inferior-temporal GCIPL. Comparisons among the parameters did not show that the GCIPL parameters were better than those of the RNFL in early and advanced glaucoma. The highest AUC was in the combined predictive model (0.937; 95% confidence interval, 0.911-0.957) and was significantly (P = 0.0001) higher than the other isolated parameters considered in early and advanced glaucoma. The validation group displayed similar results to those of the study group. Best GCIPL, RNFL, and optic disc parameters showed a similar ability to detect glaucoma. The combined predictive formula improved the glaucoma detection compared to the best isolated parameters evaluated. The diagnostic calculator obtained good classification from participants in both the study and validation groups.
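
    A combined multivariate model of this kind is commonly a logistic regression over the selected parameters, with discrimination summarized by the AUC. The sketch below illustrates that construction on synthetic OCT-style data; the coefficients, features, and labels are not those of the published calculator.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Illustrative combined model: logistic regression on a few OCT-style
    # predictors (feature names and data are synthetic, not the published nomogram).
    rng = np.random.default_rng(5)
    n = 500
    X = np.column_stack([
        rng.normal(90, 12, n),     # average RNFL thickness (um)
        rng.normal(70, 10, n),     # minimum GCIPL thickness (um)
        rng.normal(0.55, 0.15, n)  # vertical cup/disc ratio
    ])
    # Synthetic glaucoma labels loosely driven by thinner RNFL/GCIPL and larger C/D
    logit = -0.08 * (X[:, 0] - 90) - 0.10 * (X[:, 1] - 70) + 4.0 * (X[:, 2] - 0.55)
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    model = LogisticRegression().fit(X, y)
    combined_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    single_auc = roc_auc_score(y, -X[:, 0])          # thinner RNFL => higher risk
    print(f"combined AUC = {combined_auc:.3f}, average-RNFL-only AUC = {single_auc:.3f}")
    ```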

  14. Aerodynamic Drag Scoping Work.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voskuilen, Tyler; Erickson, Lindsay Crowl; Knaus, Robert C.

    This memo summarizes the aerodynamic drag scoping work done for Goodyear in early FY18. The work is to evaluate the feasibility of using Sierra/Low-Mach (Fuego) for drag predictions of rolling tires, particularly focused on the effects of tire features such as lettering, sidewall geometry, rim geometry, and interaction with the vehicle body. The work is broken into two parts. Part 1 consisted of investigation of a canonical validation problem (turbulent flow over a cylinder) using existing tools with different meshes and turbulence models. Part 2 involved calculating drag differences over plate geometries with simple features (ridges and grooves) defined by Goodyear of approximately the size of interest for a tire. The results of part 1 show the level of noise to be expected in a drag calculation and highlight the sensitivity of absolute predictions to model parameters such as mesh size and turbulence model. There is 20-30% noise in the experimental measurements on the canonical cylinder problem, and a similar level of variation between different meshes and turbulence models. Part 2 shows that there is a notable difference in the predicted drag on the sample plate geometries; however, the computational cost of extending the LES model to a full tire would be significant. This cost could be reduced by implementation of more sophisticated wall and turbulence models (e.g. detached eddy simulations - DES) and by focusing the mesh refinement on feature subsets with the goal of comparing configurations rather than absolute predictivity for the whole tire.

  15. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.

  16. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE PAGES

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.; ...

    2017-10-17

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.

  17. Finite-element model of the active organ of Corti

    PubMed Central

    Elliott, Stephen J.; Baumgart, Johannes

    2016-01-01

    The cochlear amplifier that provides our hearing with its extraordinary sensitivity and selectivity is thought to be the result of an active biomechanical process within the sensory auditory organ, the organ of Corti. Although imaging techniques are developing rapidly, it is not currently possible, in a fully active cochlea, to obtain detailed measurements of the motion of individual elements within a cross section of the organ of Corti. This motion is predicted using a two-dimensional finite-element model. The various solid components are modelled using elastic elements, the outer hair cells (OHCs) as piezoelectric elements and the perilymph and endolymph as viscous and nearly incompressible fluid elements. The model is validated by comparison with existing measurements of the motions within the passive organ of Corti, calculated when it is driven either acoustically, by the fluid pressure or electrically, by excitation of the OHCs. The transverse basilar membrane (BM) motion and the shearing motion between the tectorial membrane and the reticular lamina are calculated for these two excitation modes. The fully active response of the BM to acoustic excitation is predicted using a linear superposition of the calculated responses and an assumed frequency response for the OHC feedback. PMID:26888950

  18. Polarizabilities and van der Waals C₆ coefficients of fullerenes from an atomistic electrodynamics model: Anomalous scaling with number of carbon atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saidi, Wissam A., E-mail: alsaidi@pitt.edu; Norman, Patrick

    2016-07-14

    The van der Waals C₆ coefficients of fullerenes are shown to exhibit an anomalous dependence on the number of carbon atoms N such that C₆ ∝ N^2.2 as predicted using state-of-the-art quantum mechanical calculations based on fullerenes with small sizes, and N^2.75 as predicted using a classical-metallic spherical-shell approximation of the fullerenes. We use an atomistic electrodynamics model where each carbon atom is described by a polarizable object to extend the quantum mechanical calculations to larger fullerenes. The parameters of this model are optimized to describe accurately the static and complex polarizabilities of the fullerenes by fitting against accurate ab initio calculations. This model shows that C₆ ∝ N^2.8, which is supportive of the classical-metallic spherical-shell approximation. Additionally, we show that the anomalous dependence of the polarizability on N is attributed to the electric charge term, while the dipole–dipole term scales almost linearly with the number of carbon atoms.
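
    Scaling exponents of the kind quoted above can be extracted from model data by a straight-line fit in log-log space. The short sketch below shows the idea on synthetic numbers; the data and names are made up for illustration and are not the paper's values.

```python
# Fit a power law C6 ≈ prefactor * N**alpha to (N, C6) data via a log-log linear fit.
import numpy as np

def fit_power_law(n_atoms, c6_values):
    """Return (alpha, prefactor) for C6 ≈ prefactor * N**alpha."""
    alpha, log_prefactor = np.polyfit(np.log(n_atoms), np.log(c6_values), 1)
    return alpha, np.exp(log_prefactor)

rng = np.random.default_rng(0)
n = np.array([60, 70, 84, 180, 240, 540], dtype=float)               # fullerene sizes (illustrative)
c6 = 100.0 * n ** 2.8 * (1.0 + 0.02 * rng.standard_normal(n.size))   # synthetic C6 data
alpha, prefactor = fit_power_law(n, c6)
print(f"fitted exponent alpha ≈ {alpha:.2f}")                        # close to the generating value 2.8
```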

  19. The New Performance Calculation Method of Fouled Axial Flow Compressor

    PubMed Central

    Xu, Hong

    2014-01-01

    Fouling is the most important performance degradation factor, so it is necessary to accurately predict its effect on engine performance. Previous research has found it very difficult to model a fouled axial flow compressor accurately. This paper develops a new performance calculation method for fouled multistage axial flow compressors based on experimental results and operating data. The whole compressor is decomposed into two sections. The first section comprises the first 50% of the stages, which reflect the fouling level; the second section comprises the last 50% of the stages, which are treated as clean because they accumulate fewer deposits. In this model, the performance of the first section is obtained by combining the scaling law method and a linear progression model with the traditional stage stacking method, while ambient conditions and engine configuration are also taken into account. The performance of the second section is calculated by an averaged infinitesimal stage method based on Reynolds' law of similarity. Finally, the model is successfully applied to an 8-stage axial flow compressor and the 16-stage LM2500-30 compressor. The variation of thermodynamic parameters such as pressure ratio and efficiency with operating time and stage number is analyzed in detail. PMID:25197717
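
    A much-simplified stage-stacking sketch of the idea in this abstract is given below: fouling degrades the pressure ratio and efficiency of the front stages, with a severity that falls off linearly to zero at mid-compressor, while the rear stages are stacked clean. The degradation coefficients, stage values, and all names are hypothetical placeholders, not the paper's model.

```python
# Simplified stage stacking with linearly decreasing fouling over the first 50% of stages.
GAMMA = 1.4  # ratio of specific heats for air

def stack_stages(stage_pr, stage_eta, fouling_severity=0.0,
                 k_pr=0.05, k_eta=0.03, t_inlet=288.15):
    """Return overall pressure ratio and isentropic efficiency of the stacked compressor."""
    n = len(stage_pr)
    n_fouled = n // 2                      # first 50% of stages carry the deposits
    t = t_inlet
    pr_total = 1.0
    for i, (pr, eta) in enumerate(zip(stage_pr, stage_eta)):
        if i < n_fouled:
            # Linear progression of fouling: worst at stage 1, zero at mid-compressor.
            f = fouling_severity * (1.0 - i / n_fouled)
            pr = pr * (1.0 - k_pr * f)
            eta = eta * (1.0 - k_eta * f)
        dt_ideal = t * (pr ** ((GAMMA - 1.0) / GAMMA) - 1.0)
        t += dt_ideal / eta                # actual temperature rise across this stage
        pr_total *= pr
    dt_ideal_overall = t_inlet * (pr_total ** ((GAMMA - 1.0) / GAMMA) - 1.0)
    eta_overall = dt_ideal_overall / (t - t_inlet)
    return pr_total, eta_overall

# Eight identical stages, clean versus moderately fouled.
clean = stack_stages([1.25] * 8, [0.88] * 8, fouling_severity=0.0)
fouled = stack_stages([1.25] * 8, [0.88] * 8, fouling_severity=1.0)
```

    Comparing the clean and fouled calls gives the kind of overall pressure-ratio and efficiency deficit that the full model tracks against operating time.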

  20. Elastic properties of graphene: A pseudo-beam model with modified internal bending moment and its application

    NASA Astrophysics Data System (ADS)

    Xia, Z. M.; Wang, C. G.; Tan, H. F.

    2018-04-01

    A pseudo-beam model with a modified internal bending moment is presented to predict the elastic properties of graphene, including the Young's modulus and Poisson's ratio. To overcome a drawback of existing molecular structural mechanics models, which account only for pure bending (a constant bending moment), the present model accounts for linear bending moments deduced from the balance equations. Based on this pseudo-beam model, analytical predictions of the Young's modulus and Poisson's ratio of graphene are derived from the strain energies using Castigliano's second theorem. The elastic properties of graphene are then calculated and compared with results available in the literature, which verifies the feasibility of the pseudo-beam model. Finally, the pseudo-beam model is used to study the twisting wrinkling characteristics of annular graphene. Owing to the modification of the internal bending moment, the wrinkling behavior of the graphene sheet is predicted accurately. The obtained results show that the pseudo-beam model predicts the elastic properties of graphene accurately, especially the out-of-plane deformation behavior.
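
    Castigliano's second theorem, which the abstract applies to the pseudo-beam lattice, states that the deflection under a load equals the derivative of the stored strain energy with respect to that load. The sketch below demonstrates the theorem on a single cantilever beam rather than on the graphene lattice; the numbers are illustrative only.

```python
# Castigliano's second theorem on a cantilever: delta = dU/dP, checked against the
# textbook deflection P*L**3 / (3*E*I). Bending energy only; numbers are illustrative.
def strain_energy_cantilever(p, length, e_mod, inertia):
    """U = integral of M(x)^2 / (2*E*I) dx with M(x) = p*x (x measured from the free end)."""
    return p ** 2 * length ** 3 / (6.0 * e_mod * inertia)

def tip_deflection_castigliano(p, length, e_mod, inertia, dp=1e-6):
    """delta = dU/dP, evaluated by a central finite difference."""
    u_plus = strain_energy_cantilever(p + dp, length, e_mod, inertia)
    u_minus = strain_energy_cantilever(p - dp, length, e_mod, inertia)
    return (u_plus - u_minus) / (2.0 * dp)

P, L, E, I = 10.0, 0.142e-9, 1.0e12, 1.0e-38   # hypothetical load, length, modulus, inertia
delta_numeric = tip_deflection_castigliano(P, L, E, I)
delta_exact = P * L ** 3 / (3.0 * E * I)       # the two agree to finite-difference accuracy
```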
