Sample records for key parameters prediction

  1. Prediction of Geomagnetic Activity and Key Parameters in High-Latitude Ionosphere-Basic Elements

    NASA Technical Reports Server (NTRS)

    Lyatsky, W.; Khazanov, G. V.

    2007-01-01

    Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the Space Weather program. Prediction reliability depends on the prediction method and on the elements included in the prediction scheme. Two main elements are a suitable geomagnetic activity index and a coupling function -- the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity. The appropriate choice of these two elements is imperative for any reliable prediction model. The purpose of this work was to elaborate on these two elements -- the appropriate geomagnetic activity index and the coupling function -- and to investigate the possibility of improving the reliability of predicting geomagnetic activity and other events in the Earth's magnetosphere. The new polar magnetic index of geomagnetic activity and the new version of the coupling function lead to a significant increase in the reliability of predicting geomagnetic activity and some key parameters, such as the cross-polar cap voltage and the total Joule heating in the high-latitude ionosphere, which play a very important role in the development of geomagnetic and other activity in the Earth's magnetosphere and are widely used as key input parameters in modeling magnetospheric, ionospheric, and thermospheric processes.

  2. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    NASA Astrophysics Data System (ADS)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
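As a rough illustration of the evaluation pipeline this record describes, the sketch below fits a linear regression from display parameters to quality ratings and scores it with RMSE and Pearson correlation. The data, weights, and parameter count are invented assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 3))            # 3 normalized display parameters
true_w = np.array([0.6, 0.3, 0.1])                 # invented ground-truth weights
y = X @ true_w + rng.normal(0.0, 0.02, size=40)    # synthetic quality ratings

Xb = np.column_stack([X, np.ones(len(X))])         # add intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w

rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
pearson = float(np.corrcoef(pred, y)[0, 1])
print(rmse, pearson)
```

The same RMSE/correlation scoring applies unchanged when the linear model is swapped for an MLP, RBF, or SVM regressor, which is how the paper compares its models.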

  3. End-of-Discharge and End-of-Life Prediction in Lithium-Ion Batteries with Electrochemistry-Based Aging Models

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Kulkarni, Chetan S.

    2016-01-01

    As batteries become increasingly prevalent in complex systems such as aircraft and electric cars, monitoring and predicting battery state of charge and state of health becomes critical. In order to accurately predict the remaining battery power to support system operations for informed operational decision-making, age-dependent changes in dynamics must be accounted for. Using an electrochemistry-based model, we investigate how key parameters of the battery change as aging occurs, and develop models to describe aging through these key parameters. Using these models, we demonstrate how we can (i) accurately predict end-of-discharge for aged batteries, and (ii) predict the end-of-life of a battery as a function of anticipated usage. The approach is validated through an experimental set of randomized discharge profiles.
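A minimal numeric sketch of the end-of-life idea in this record: track a single aging-dependent parameter (here, a linearly fading capacity, with an invented fade rate and threshold rather than the paper's electrochemistry model) and predict the cycle at which it crosses an end-of-life criterion.

```python
# single aging parameter: usable capacity fades with cycle count
q0 = 2.0                  # fresh capacity in Ah (assumed)
fade = 0.001              # capacity lost per cycle in Ah (assumed)
eol_threshold = 0.8 * q0  # common 80%-of-nominal end-of-life criterion

def capacity(cycle):
    return q0 - fade * cycle

# invert the aging model to predict the end-of-life cycle
eol_cycle = (q0 - eol_threshold) / fade
print(eol_cycle)
```

In the paper the aging law is estimated from an electrochemistry-based model rather than assumed linear; the inversion step, predicting when a tracked parameter crosses a threshold, is the common idea.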

  4. Parameter Selection Methods in Inverse Problem Formulation

    DTIC Science & Technology

    2010-11-03

    Parameter selection methods in inverse problem formulation are illustrated with examples including a recently developed in-host model for HIV dynamics, which has been successfully validated with clinical data and used for prediction, and a model for the reaction of the cardiovascular system to an ergometric workload. Key Words: Parameter selection.

  5. Integrating machine learning to achieve an automatic parameter prediction for practical continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Liu, Weiqi; Huang, Peng; Peng, Jinye; Fan, Jianping; Zeng, Guihua

    2018-02-01

    For supporting practical quantum key distribution (QKD), it is critical to stabilize the physical parameters of signals, e.g., the intensity, phase, and polarization of the laser signals, so that such QKD systems can achieve better performance and practical security. In this paper, an approach is developed by integrating a support vector regression (SVR) model to optimize the performance and practical security of the QKD system. First, a SVR model is learned to precisely predict the time-along evolutions of the physical parameters of signals. Second, such predicted time-along evolutions are employed as feedback to control the QKD system for achieving the optimal performance and practical security. Finally, our proposed approach is exemplified by using the intensity evolution of laser light and a local oscillator pulse in the Gaussian modulated coherent state QKD system. Our experimental results have demonstrated three significant benefits of our SVR-based approach: (1) it can allow the QKD system to achieve optimal performance and practical security, (2) it does not require any additional resources and any real-time monitoring module to support automatic prediction of the time-along evolutions of the physical parameters of signals, and (3) it is applicable to any measurable physical parameter of signals in the practical QKD system.
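The parameter-drift prediction step can be sketched as follows, using a simple RBF-kernel ridge regression as a stand-in for the paper's SVR; the drift signal, kernel width, and ridge strength are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(50, dtype=float)
intensity = 1.0 + 0.002 * t + 0.01 * np.sin(0.3 * t)   # synthetic slow drift

def rbf(a, b, gamma=0.05):
    # Gaussian (RBF) kernel between two 1-D sample-time arrays
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

K = rbf(t, t)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(t)), intensity)  # ridge fit

fit = K @ alpha
rmse = float(np.sqrt(np.mean((fit - intensity) ** 2)))
next_pred = float(rbf(np.array([50.0]), t) @ alpha)            # one step ahead
print(rmse, next_pred)
```

In the paper's scheme, such a one-step-ahead prediction of a physical parameter (e.g. pulse intensity) is fed back to the control loop of the QKD system.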

  6. Neuroeconomics and public health

    PubMed Central

    Larsen, Torben

    2010-01-01

    Objective: To design an economic evaluation strategy for general health promotion projects. Method: Identification of key parameters of behavioral health from neuroeconomic studies. Results: The Frontal Power of Concentration (C) is a quadripartite executive integrator depending on four key parameters: 1) the Limbic system originating ambivalent emotions (L); 2) volition in the Prefrontal Cortex (c) controlling cognitive prediction and emotions with a view on Frontopolar long-term goals; 3) semantic memories in the Temporal lobe (R); 4) an intuitive visuospatial sketchpad in the Parietal lobe (I). C, aiming to minimize error between preferences and predictions, is directly determined by the following equation, which includes I as a stochastic knowledge component: C = Rc^2/L + εI → 1. Discussion: All of the parameters of C are subject to improvement by training: cognitive predictions are improved by open-mindedness towards feedback (R); the effect of emotional regrets is reinforced by an appropriate level of fitness (c, L); our imagination may be unfolded by in-depth relaxation procedures and visualization (I). Conclusion: Economic evaluation of general public health should focus on the subset of separate and integrated interventions that directly affect the parameters of Formula C in individuals.

  7. Enhancing performance of next generation FSO communication systems using soft computing-based predictions.

    PubMed

    Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori

    2006-06-12

    The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce link availability and may introduce burst errors, thus degrading system performance. We investigate the suitability of soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communications systems. The SC based tools are used to predict key parameters of an FSO communications system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the accuracy with which the antenna tracks data beams, particularly during periods of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with the original measurements.
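A minimal sketch of a multi-layer neural network predictor of the kind described, trained by plain gradient descent to predict the next sample of a synthetic signal-level series; the architecture, window length, and data are assumptions, not the experimental FSO measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic received-signal-level series (stand-in for FSO measurements)
series = np.sin(0.2 * np.arange(120)) + 0.05 * rng.normal(size=120)

win = 5  # predict the next sample from the last 5
X = np.array([series[i:i + win] for i in range(len(series) - win)])
y = series[win:]

# one hidden layer of 8 tanh units, trained by full-batch gradient descent
W1 = rng.normal(0.0, 0.5, (win, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1));  b2 = np.zeros(1)
lr = 0.1
losses = []
for _ in range(1000):
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g_out = (2.0 / len(y)) * err[:, None]          # dLoss/dpred
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h;  b1 -= lr * g_h.sum(axis=0)

print(losses[0], losses[-1])
```

The training loss should fall substantially from its initial value, showing the predictor learning the series' short-term structure.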

  8. Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.

    2004-01-01

    This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, material, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in the updated values and/or model structure errors. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are documented.

  9. User manual of the CATSS system (version 1.0) communication analysis tool for space station

    NASA Technical Reports Server (NTRS)

    Tsang, C. S.; Su, Y. T.; Lindsey, W. C.

    1983-01-01

    The Communication Analysis Tool for the Space Station (CATSS) is a FORTRAN language software package capable of predicting communications link performance for the Space Station (SS) communication and tracking (C & T) system. The interactive software package was developed to run on DEC/VAX computers. CATSS models and evaluates the various C & T links of the SS, covering modulation schemes such as Binary Phase-Shift Keying (BPSK), BPSK with Direct Sequence Spread Spectrum (PN/BPSK), and M-ary Frequency-Shift Keying with Frequency Hopping (FH/MFSK). An optical space communication link is also included. CATSS is a C & T system engineering tool used to predict and analyze system performance in different link environments. Identification of system weaknesses is achieved through evaluation of performance with varying system parameters. System tradeoffs for different values of system parameters are made based on the performance prediction.
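One concrete calculation a link-analysis tool of this kind performs is the theoretical bit-error rate of coherent BPSK versus Eb/N0, via the standard relation BER = 0.5*erfc(sqrt(Eb/N0)). The sketch below is generic link-budget arithmetic, not CATSS itself.

```python
import math

def bpsk_ber(ebn0_db):
    # theoretical BER of coherent BPSK in AWGN: 0.5 * erfc(sqrt(Eb/N0))
    ebn0 = 10.0 ** (ebn0_db / 10.0)   # convert dB to linear ratio
    return 0.5 * math.erfc(math.sqrt(ebn0))

for db in (0, 4, 8, 10):
    print(db, bpsk_ber(db))
```

Sweeping Eb/N0 like this is how such a tool exposes the link margin needed to hit a target error rate.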

  10. Prediction of Breakthrough Curves for Conservative and Reactive Transport from the Structural Parameters of Highly Heterogeneous Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Scott; Haslauer, Claus P.; Cirpka, Olaf A.

    2017-01-05

    The key points of this presentation were to approach the problem of linking breakthrough curve shape (the RP-CTRW transition distribution) to structural parameters via a Monte Carlo approach, and to use the Monte Carlo analysis to determine any empirical error.

  11. Prediction of Geomagnetic Activity and Key Parameters in High-latitude Ionosphere

    NASA Technical Reports Server (NTRS)

    Khazanov, George V.; Lyatsky, Wladislaw; Tan, Arjun; Ridley, Aaron

    2007-01-01

    Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the US Space Weather Program. Prediction reliability depends on the prediction method and on the elements included in the prediction scheme. Two of the main elements of such a prediction scheme are an appropriate geomagnetic activity index and an appropriate coupling function (the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity). We have developed a new index of geomagnetic activity, the Polar Magnetic (PM) index, and an improved version of the solar wind coupling function. The PM index is similar to the existing polar cap PC index, but it shows much better correlation with upstream solar wind/IMF data and other events in the magnetosphere and ionosphere. We investigate the correlation of the PM index with upstream solar wind/IMF data for 10 years (1995-2004) that include both low and high solar activity. We have also introduced a new prediction function for predicting the cross-polar-cap voltage and Joule heating, based on both the PM index and upstream solar wind/IMF data. As we show, this prediction function significantly increases the reliability of prediction of these important parameters. The correlation coefficients between the actual and predicted values of these parameters are approximately 0.9 and higher.
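The coupling-function evaluation can be sketched as a correlation test: build a candidate function of upstream solar wind parameters (here the classic v*Bs form, used purely as an illustration, not the paper's improved function) and correlate it with an activity index. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
v = rng.normal(450.0, 80.0, n)          # solar wind speed, km/s (synthetic)
bz = rng.normal(0.0, 3.0, n)            # IMF Bz, nT (synthetic)
bs = np.where(bz < 0.0, -bz, 0.0)       # southward IMF component only
coupling = v * bs                       # classic v*Bs coupling function

# synthetic activity "index" constructed to correlate with the coupling
index = 0.01 * coupling + rng.normal(0.0, 5.0, n)
r = float(np.corrcoef(coupling, index)[0, 1])
print(r)
```

Ranking candidate coupling functions by such correlation coefficients against an activity index is the selection criterion the record describes.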

  12. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    PubMed

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and the autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
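The deterministic core of the dual change score model named here can be sketched in a few lines: each step's latent change combines a constant-change (slope) term and an autoproportion term. The parameter values below are invented for illustration.

```python
slope = 1.0      # constant-change factor mean (invented)
beta = -0.1      # autoproportion coefficient (invented)
y = [0.0]        # starting score
for t in range(10):
    delta = slope + beta * y[-1]     # latent change score at step t
    y.append(y[-1] + delta)
print(y[-1])
```

With a negative autoproportion coefficient the trajectory decelerates toward an asymptote (here -slope/beta = 10), which is the kind of mean-level curve the simulation study manipulates.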

  13. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used for system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
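The minimum-entropy idea can be illustrated with a toy version: among candidate model parameters, prefer the one whose modeling-error histogram has the lowest entropy, i.e. errors concentrated in a sharp peak rather than spread out. The data, candidates, and binning below are invented assumptions, not the plant model.

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.uniform(0.0, 1.0, 500)
y = 3.0 * x + rng.normal(0.0, 0.1, 500)      # synthetic "plant" input-output data

edges = np.linspace(-2.0, 2.0, 81)           # fixed bins so entropies are comparable

def error_entropy(w):
    err = y - w * x                          # modeling error for parameter w
    counts, _ = np.histogram(err, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())     # histogram-estimated entropy

candidates = [2.0, 2.5, 3.0, 3.5]
best = min(candidates, key=error_entropy)
print(best)
```

The correct gain (3.0) leaves only the narrow Gaussian noise in the error, giving the most peaked histogram and hence the lowest entropy.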

  14. A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks

    NASA Astrophysics Data System (ADS)

    Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan

    2018-01-01

    Breakdown occurrence on expressways is considered to be related to various factors. Therefore, to investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 parameters observed before breakdowns are extracted, including dynamic environment conditions aggregated over 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdown. Results indicate that models using data from 5-10 min prior to breakdown perform best, with prediction accuracies higher than 73%. Moreover, one unified model for all bottlenecks is also built and shows reasonably good prediction performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model parameter input, a random forests (RF) model is adopted to identify the key variables. Modeling with the selected 7 parameters, the refined BN model can predict breakdown with adequate accuracy.
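As a simplified stand-in for the record's Bayesian-network predictor, the sketch below trains a naive Bayes classifier on discretized synthetic traffic features to predict breakdown; the features, thresholds, and generating rule are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
occupancy = rng.uniform(0.0, 1.0, n)
speed_drop = rng.uniform(0.0, 1.0, n)
# synthetic ground truth: breakdown is likelier at high occupancy and
# large speed drops (logistic rule, invented for illustration)
p_break = 1.0 / (1.0 + np.exp(-(6.0 * occupancy + 4.0 * speed_drop - 6.0)))
breakdown = rng.uniform(size=n) < p_break

def discretize(v):                      # 3 levels: low / medium / high
    return np.minimum((v * 3).astype(int), 2)

feats = np.stack([discretize(occupancy), discretize(speed_drop)], axis=1)

# class priors and per-feature conditional tables (Laplace smoothing)
priors = np.array([np.mean(~breakdown), np.mean(breakdown)])
cond = np.ones((2, 2, 3))               # class x feature x level
for c in (0, 1):
    sub = feats[breakdown == bool(c)]
    for j in range(2):
        for v in range(3):
            cond[c, j, v] += np.sum(sub[:, j] == v)
    cond[c] /= cond[c].sum(axis=1, keepdims=True)

def predict(row):
    logp = np.log(priors).copy()
    for j, v in enumerate(row):
        logp += np.log(cond[:, j, v])
    return int(logp[1] > logp[0])

accuracy = float(np.mean([predict(r) for r in feats] == breakdown))
print(accuracy)
```

A full Bayesian network additionally models dependencies between the features, which naive Bayes deliberately ignores; the conditional-probability-table machinery is the shared ingredient.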

  15. Application of a data assimilation method via an ensemble Kalman filter to reactive urea hydrolysis transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juxiu Tong; Bill X. Hu; Hai Huang

    2014-03-01

    With the growing importance of water resources in the world, remediation of anthropogenic contamination caused by reactive solute transport becomes ever more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters due to the complexity of chemical reactions and limited available data. To obtain the reactive rate parameters for reactive urea hydrolysis transport modeling and more accurate predictions of the chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the modeling prediction. We applied a constrained EnKF method to impose constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that the data assimilation method via the EnKF can efficiently improve the chemical reactive rate parameters and, at the same time, improve the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
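The EnKF analysis step at the heart of this approach can be sketched for a single reaction-rate parameter and one observation; the toy first-order-kinetics model, ensemble size, and noise levels are assumptions, not the urea-hydrolysis model.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(k):                       # toy forward model: fraction reacted
    return 1.0 - np.exp(-k * 2.0)   # after 2 h of first-order kinetics

k_true = 0.8
ens = rng.normal(0.5, 0.2, size=100)           # prior rate-parameter ensemble
obs = model(k_true) + rng.normal(0.0, 0.01)    # one noisy concentration datum
obs_var = 0.01 ** 2

pred = model(ens)                              # ensemble-predicted observations
cov_ky = np.cov(ens, pred)[0, 1]               # parameter-observation covariance
gain = cov_ky / (np.var(pred, ddof=1) + obs_var)   # scalar Kalman gain
perturbed_obs = obs + rng.normal(0.0, 0.01, size=ens.size)
ens_post = ens + gain * (perturbed_obs - pred)     # EnKF analysis step
ens_post = np.clip(ens_post, 0.0, None)            # physical constraint: k >= 0

print(ens.mean(), ens_post.mean())
```

The posterior ensemble mean moves toward the true rate and its spread shrinks; the final `clip` mirrors, in miniature, the physically motivated constraints of the paper's constrained EnKF.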

  16. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
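The error-driven familiarity update underlying the predictive-coding account can be sketched with a simple delta rule; the learning rate and outcome coding are illustrative assumptions, not the paper's fitted computational model.

```python
familiarity = 0.0
alpha = 0.3                      # learning rate (assumed)
trace = []
for trial in range(10):
    outcome = 1.0                # the same face appears again
    prediction_error = outcome - familiarity
    familiarity += alpha * prediction_error   # error-driven update
    trace.append(familiarity)
print(trace[0], trace[-1])
```

Familiarity rises with each exposure while the prediction error shrinks, the signature the fMRI analysis links to fusiform face area activity.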

  17. TWT transmitter fault prediction based on ANFIS

    NASA Astrophysics Data System (ADS)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

    Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of the cathode performance of the TWT is a common transmitter fault. In this work, a model based on a set of key parameters of the TWT is proposed. By choosing proper parameters and applying an adaptive neural network training model, this method, combined with the analytic hierarchy process (AHP), has a certain reference value for the overall health judgment of TWT transmitters.

  18. Estimation of Quasi-Stiffness and Propulsive Work of the Human Ankle in the Stance Phase of Walking

    PubMed Central

    Shamaei, Kamran; Sawicki, Gregory S.; Dollar, Aaron M.

    2013-01-01

    Characterizing the quasi-stiffness and work of lower extremity joints is critical for evaluating human locomotion and designing assistive devices such as prostheses and orthoses intended to emulate the biological behavior of human legs. This work aims to establish statistical models that allow us to predict the ankle quasi-stiffness and net mechanical work for adults walking on level ground. During the stance phase of walking, the ankle joint propels the body through three distinctive phases of nearly constant stiffness known as the quasi-stiffness of each phase. Using a generic equation for the ankle moment obtained through an inverse dynamics analysis, we identify key independent parameters needed to predict ankle quasi-stiffness and propulsive work and also the functional form of each correlation. These parameters include gait speed, ankle excursion, and subject height and weight. Based on the identified form of the correlation and key variables, we applied linear regression on experimental walking data for 216 gait trials across 26 subjects (speeds from 0.75–2.63 m/s) to obtain statistical models of varying complexity. The most general forms of the statistical models include all the key parameters and have an R2 of 75% to 81% in the prediction of the ankle quasi-stiffnesses and propulsive work. The most specific models include only subject height and weight and could predict the ankle quasi-stiffnesses and work for optimal walking speed with average error of 13% to 30%. We discuss how these models provide a useful framework and foundation for designing subject- and gait-specific prosthetic and exoskeletal devices designed to emulate biological ankle function during level ground walking. PMID:23555839
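The form of the statistical models described here can be sketched as an ordinary least squares fit from gait speed, height, and weight to ankle quasi-stiffness, scored by R^2; the synthetic data and coefficients below are invented, not the study's 216 gait trials.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 216                                      # same trial count as the study
speed = rng.uniform(0.75, 2.63, n)           # gait speed, m/s (study's range)
height = rng.normal(1.75, 0.08, n)           # subject height, m (assumed)
weight = rng.normal(75.0, 10.0, n)           # subject weight, kg (assumed)
# invented linear ground truth standing in for the reported correlations
stiffness = 2.0 * speed + 1.5 * height + 0.05 * weight + rng.normal(0.0, 0.3, n)

X = np.column_stack([speed, height, weight, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, stiffness, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((stiffness - pred) ** 2) / np.sum((stiffness - stiffness.mean()) ** 2)
print(r2)
```

Dropping columns from `X` (e.g. keeping only height and weight) gives the paper's "most specific" model variants, at the cost of a lower R^2.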

  19. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.

  20. Prediction of Tensile Strength of Friction Stir Weld Joints with Adaptive Neuro-Fuzzy Inference System (ANFIS) and Neural Network

    NASA Technical Reports Server (NTRS)

    Dewan, Mohammad W.; Huggett, Daniel J.; Liao, T. Warren; Wahab, Muhammad A.; Okeil, Ayman M.

    2015-01-01

    Friction-stir-welding (FSW) is a solid-state joining process where joint properties are dependent on welding process parameters. In the current study, three critical process parameters, spindle speed, plunge force, and welding speed, are considered key factors in the determination of ultimate tensile strength (UTS) of welded aluminum alloy joints. A total of 73 weld schedules were welded and tensile properties were subsequently obtained experimentally. It is observed that all three process parameters have a direct influence on the UTS of the welded joints. Utilizing the experimental data, an optimized adaptive neuro-fuzzy inference system (ANFIS) model has been developed to predict the UTS of FSW joints. A total of 1200 models were developed by varying the number of membership functions (MFs), the type of MFs, and the combination of the four input variables (the three process parameters and EFI) on a MATLAB platform. Note that EFI denotes an empirical force index derived from the three process parameters. For comparison, optimized artificial neural network (ANN) models were also developed to predict UTS from the FSW process parameters. By comparing the ANFIS and ANN predicted results, it was found that the optimized ANFIS models provide better results than ANN. This newly developed best ANFIS model could be utilized for prediction of the UTS of FSW joints.

  1. Estimation of the viscosities of liquid binary alloys

    NASA Astrophysics Data System (ADS)

    Wu, Min; Su, Xiang-Yu

    2018-01-01

    As one of the most important physical and chemical properties, viscosity plays a critical role in physics and materials science as a key parameter for quantitatively understanding fluid transport processes and reaction kinetics in metallurgical process design. Experimental and theoretical studies of the viscosity of liquid metals remain challenging. Today, many empirical and semi-empirical models are available with which to evaluate the viscosity of liquid metals and alloys. However, the mixing-energy parameter in these models is not easily determined, and most predictive models have limited applicability. In the present study, a new thermodynamic parameter ΔG is proposed to predict liquid alloy viscosity. The prediction equation depends on basic physical and thermodynamic parameters, namely density, melting temperature, absolute atomic mass, electronegativity, electron density, molar volume, Pauling radius, and mixing enthalpy. Our results show that the liquid alloy viscosity predicted using the proposed model is closely in line with experimental values. In addition, if the component radius difference is greater than 0.03 nm at a given temperature, the atomic size factor has a significant effect on the interaction of the binary liquid metal atoms. The proposed thermodynamic parameter ΔG also facilitates the study of other physical properties of liquid metals.

  2. Assessing the performance of community-available global MHD models using key system parameters and empirical relationships

    NASA Astrophysics Data System (ADS)

    Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.

    2015-12-01

    Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still developing global MHD (GMHD) models that are publicly available via Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows the users to simulate the magnetospheric response to different solar wind conditions including extraordinary events, like geomagnetic storms. Systematic validation of GMHD models against observations still continues to be a challenge, as well as comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is explored/tested, which are produced by (ii) specially designed set of computer runs to simulate realistic statistical distributions of critical solar wind parameters and are compared to (iii) observation-based empirical relationships for these parameters. Being tested in approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) which are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which the MHD is supposed to be a valid approach. At the same time, the models have systematic differences in other parameters, being especially different in predicting the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after the north-south interplanetary magnetic field turning. According to validation results, none of the models emerges as an absolute leader. 
The new approach suggested for evaluating the models' performance against reality may be used by model users when planning their investigations, as well as by model developers and those interested in quantitatively evaluating progress in magnetospheric modeling.

  3. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    The effect of traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods are mainly linear, nonlinear, neural network, and time series analysis methods; however, all of them have shortcomings. This paper analyzes existing traffic flow prediction algorithms and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. The method first analyzes the transfer probability upstream of the target road and then predicts the traffic flow at the next time step using the traffic flow equation. The Newton interior-point method is used to obtain the optimal parameter values. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to perform well: it obtains the optimal parameter values faster and has higher prediction accuracy, making it usable for real-time traffic flow prediction. PMID:27872637
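The transfer-probability step can be sketched directly: the predicted flow on a target link is the sum of upstream flows weighted by their estimated transfer probabilities. The link names and numbers below are invented for illustration.

```python
# upstream link flows (veh/h) and estimated transfer probabilities toward
# the target road; all values are invented for illustration
upstream_flow = {"link_A": 1200.0, "link_B": 800.0, "link_C": 600.0}
transfer_prob = {"link_A": 0.5, "link_B": 0.25, "link_C": 0.1}

# predicted target flow = sum of upstream flows weighted by transfer probability
predicted_flow = sum(upstream_flow[k] * transfer_prob[k] for k in upstream_flow)
print(predicted_flow)   # 1200*0.5 + 800*0.25 + 600*0.1 veh/h
```

In the paper, the transfer probabilities themselves are not fixed constants but are estimated from data, with the Newton interior-point method used for the optimization.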

  4. Using Predictive Uncertainty Analysis to Assess Hydrologic Model Performance for a Watershed in Oregon

    NASA Astrophysics Data System (ADS)

    Brannan, K. M.; Somor, A.

    2016-12-01

    A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody while the waterbody still meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. To reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for predictive uncertainty analysis: we set aside flow data from days on which bacteria samples were collected and did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on 1,000 model runs. We also used several methods to visualize results, with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
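    The correlated-parameter Monte Carlo step can be illustrated with a toy example (this is not the PEST/HSPF workflow itself; the parameter names, covariance values, and stand-in flow model are all assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed means and covariance for two hypothetical hydrologic parameters;
# the off-diagonal term encodes the relationship between them, so random
# draws respect the parameter correlation rather than varying independently.
mean = np.array([0.5, 2.0])                # e.g. a recession and a storage parameter
cov = np.array([[0.01, 0.008],
                [0.008, 0.04]])

params = rng.multivariate_normal(mean, cov, size=1000)

def toy_flow_model(p, rain):
    """Toy stand-in for a watershed model: flow = storage * rain ** recession."""
    recession, storage = p
    return storage * rain ** recession

rain = np.array([5.0, 12.0, 30.0])         # three hypothetical held-out sample days
flows = np.array([toy_flow_model(p, rain) for p in params])   # shape (1000, 3)

# Percent uncertainty per observation: half the 2.5-97.5 percentile width,
# expressed relative to the median prediction.
lo, med, hi = np.percentile(flows, [2.5, 50, 97.5], axis=0)
pct_uncertainty = 100.0 * (hi - lo) / (2.0 * med)
```

    The real analysis would replace the toy model with HSPF runs and derive the covariance from the regularized PEST calibration.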

  5. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell; Schifer, Nicholas

    2011-01-01

    This presentation describes test hardware used to validate net heat input prediction models. The problem: net heat input cannot be measured directly during operation, yet it is a key parameter in predicting convertor efficiency, where efficiency = electrical power output (measured) / net heat input (calculated). Efficiency is used to compare convertor designs and to trade technology advantages for mission planning.

  6. A risk-based approach to management of leachables utilizing statistical analysis of extractables.

    PubMed

    Stults, Cheryl L M; Mikl, Jaromir; Whelehan, Oliver; Morrical, Bradley; Duffield, William; Nagao, Lee M

    2015-04-01

    To incorporate quality by design concepts into the management of leachables, an emphasis is often put on understanding the extractable profile for the materials of construction for manufacturing disposables, container-closure, or delivery systems. Component manufacturing processes may also impact the extractable profile. An approach was developed to (1) identify critical components that may be sources of leachables, (2) enable an understanding of manufacturing process factors that affect extractable profiles, (3) determine if quantitative models can be developed that predict the effect of those key factors, and (4) evaluate the practical impact of the key factors on the product. A risk evaluation for an inhalation product identified injection molding as a key process. Designed experiments were performed to evaluate the impact of molding process parameters on the extractable profile from an ABS inhaler component. Statistical analysis of the resulting GC chromatographic profiles identified processing factors that were correlated with peak levels in the extractable profiles. The combination of statistically significant molding process parameters was different for different types of extractable compounds. ANOVA models were used to obtain optimal process settings and predict extractable levels for a selected number of compounds. The proposed paradigm may be applied to evaluate the impact of material composition and processing parameters on extractable profiles and utilized to manage product leachables early in the development process and throughout the product lifecycle.

  7. [Parameters modification and evaluation of two evapotranspiration models based on Penman-Monteith model for summer maize].

    PubMed

    Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing

    2017-06-18

    The dynamic variations of evapotranspiration (ET) and weather data during the summer maize growing seasons of 2013-2015 were monitored with an eddy covariance system, and the applicability of two operational models (the FAO-PM model and the KP-PM model), both based on the Penman-Monteith model, was analyzed. First, the key parameters of the two models were calibrated with the measured data from 2013 and 2014; second, the daily ET for 2015 calculated by each model was compared to the observed ET. Finally, the coefficients of the KP-PM model were further revised using coefficients calculated for the different growth stages, and the performance of the revised KP-PM model was also evaluated. The statistics indicated that the daily ET for 2015 calculated by the FAO-PM model was closer to the observed ET than that by the KP-PM model, while the daily ET calculated from the revised KP-PM model was more accurate than that from the FAO-PM model. It was also found that the key parameters of the two models were correlated with weather conditions, so calibration is necessary before using the models to predict ET. These results provide guidance for predicting ET with the two models.

  8. Stochastic Analysis of Orbital Lifetimes of Spacecraft

    NASA Technical Reports Server (NTRS)

    Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David

    2008-01-01

    A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC), a Fortran computer program consisting of a previously developed long-term orbit propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for a user-specified number of cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and the number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are the primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.
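    The Monte Carlo workflow can be illustrated with a deliberately crude decay model. Every constant below is a hypothetical placeholder, not OLMC's physics: uncertain inputs (insertion altitude and a solar-flux scale factor) are sampled from distributions, propagated, and the resulting lifetimes collected for a histogram:

```python
import numpy as np

rng = np.random.default_rng(7)
n_cases = 200

alt0 = rng.normal(400.0, 10.0, n_cases)    # insertion altitude, km (assumed spread)
flux = rng.lognormal(0.0, 0.3, n_cases)    # solar-flux scale factor (assumed)

def lifetime_years(alt_km, flux_scale, min_alt=200.0, dt=0.02, t_max=30.0):
    """Years until altitude drops below min_alt; drag loss grows as altitude falls."""
    t = 0.0
    while alt_km > min_alt and t < t_max:
        # Toy drag-driven decay rate in km/yr, scaled by solar flux.
        decay_rate = 50.0 * flux_scale * np.exp(-(alt_km - min_alt) / 80.0)
        alt_km -= decay_rate * dt
        t += dt
    return t

lifetimes = np.array([lifetime_years(a, f) for a, f in zip(alt0, flux)])

# Fraction of cases meeting a hypothetical 5-year lifetime requirement.
frac_meeting_req = float(np.mean(lifetimes >= 5.0))
```

    A histogram of `lifetimes` is the kind of product the document describes; the solar-flux factor dominates the spread here, echoing the test-case finding.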

  9. Parameter and prediction uncertainty in an optimized terrestrial carbon cycle model: Effects of constraining variables and data record length

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.

    2011-03-01

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are shorter than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  10. Estimation of Filling and Afterload Conditions by Pump Intrinsic Parameters in a Pulsatile Total Artificial Heart.

    PubMed

    Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich

    2016-07-01

    A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key requirement is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method for deriving estimation models that predict hemodynamic parameters (pump chamber filling and afterload) for both the left and right circulations. The estimation models are linear regression models that correlate filling and afterload values with pump-intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) show an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable for estimating hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
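    The regression step can be sketched as follows, using synthetic data in place of the ReinHeart measurements (all signal ranges and coefficients are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical pump-intrinsic signals derived from motor current and piston position.
motor_current = rng.uniform(1.0, 3.0, n)   # A
piston_pos = rng.uniform(5.0, 15.0, n)     # mm

# Synthetic "true" mean afterload with a known linear dependence plus noise.
afterload = 40.0 + 12.0 * motor_current + 1.5 * piston_pos + rng.normal(0, 2.0, n)

# Least-squares fit: afterload ~ b0 + b1 * current + b2 * position.
X = np.column_stack([np.ones(n), motor_current, piston_pos])
coef, *_ = np.linalg.lstsq(X, afterload, rcond=None)

# Average percent deviation of predictions from "actual" values.
pred = X @ coef
mean_pct_error = 100.0 * np.mean(np.abs(pred - afterload) / afterload)
```

    The fitted coefficients recover the assumed dependence, and the percent error plays the role of the 5-14% deviations reported in the study.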

  11. Analysis of the economic impact of the national unified carbon trading market mechanism: Hebei province as an example

    NASA Astrophysics Data System (ADS)

    Sun, Yuxing

    2018-05-01

    In this paper, a grey prediction model is used to predict carbon emissions in Hebei province, and an impact analysis model based on TermCo2 is established. We also review the CGE literature and study how to construct scenarios, select key parameters, and perform sensitivity analysis of application scenarios, for the reference of industry.

  12. Impact of Hydrogeological Uncertainty on Estimation of Environmental Risks Posed by Hydrocarbon Transportation Networks

    NASA Astrophysics Data System (ADS)

    Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.

    2017-11-01

    Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of the temporal evolution of two key risk indices, the volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify the key flow and transport parameters affecting the risk indices, to elucidate the texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the corresponding remediation costs.
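    The second, polynomial-surrogate model can be sketched as follows. The stand-in "expensive" model and the parameter ranges are hypothetical, but the pattern is the one described above: fit a polynomial in the random inputs on a few runs, then screen many scenarios at negligible cost:

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_model(logK, porosity):
    """Toy stand-in for the physics-based flow/transport model (hypothetical)."""
    return 100.0 * porosity * np.exp(-0.5 * logK) + 5.0 * logK**2

# A small training design over the two uncertain inputs.
logK = rng.uniform(-2.0, 2.0, 50)          # log hydraulic conductivity (assumed range)
phi = rng.uniform(0.2, 0.4, 50)            # porosity (assumed range)
y = expensive_model(logK, phi)

def basis(logK, phi):
    """Quadratic polynomial basis in the two random inputs."""
    return np.column_stack([np.ones_like(logK), logK, phi,
                            logK**2, phi**2, logK * phi])

coef, *_ = np.linalg.lstsq(basis(logK, phi), y, rcond=None)

# The surrogate now screens 100,000 spill scenarios almost for free.
logK_mc = rng.uniform(-2.0, 2.0, 100_000)
phi_mc = rng.uniform(0.2, 0.4, 100_000)
vol_mc = basis(logK_mc, phi_mc) @ coef
p95_volume = float(np.percentile(vol_mc, 95))   # decision-maker's 95% bound
```

    In practice the polynomial would be built more carefully (e.g. via polynomial chaos with the stochastic collocation runs), but the cost asymmetry is the point.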

  13. Investigation into the influence of build parameters on failure of 3D printed parts

    NASA Astrophysics Data System (ADS)

    Fornasini, Giacomo

    Additive manufacturing, including fused deposition modeling (FDM), is transforming the built world and engineering education. Deep understanding of parts created through FDM technology has lagged behind its adoption in home, work, and academic environments. Properties of parts created from bulk materials through traditional manufacturing are understood well enough to accurately predict their behavior through analytical models. Unfortunately, additive manufacturing (AM) process parameters create anisotropy on a scale that fundamentally affects the part properties. Understanding AM process parameters (implemented by program algorithms called slicers) is necessary to predict part behavior. Investigating the algorithms controlling print parameters revealed stark differences in how part layers are generated. In this work, tensile testing experiments, including a full factorial design, determined that three key factors (width, thickness, and infill density), as well as their interactions, significantly affect the tensile properties of 3D printed test samples.
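    A minimal sketch of a two-level full factorial design over the three significant factors, with a hypothetical response standing in for measured tensile strength:

```python
import itertools
import numpy as np

# Hypothetical low/high settings for the three factors found significant above.
levels = {
    "width_mm": (10.0, 20.0),
    "thickness_mm": (2.0, 4.0),
    "infill_pct": (20.0, 80.0),
}

# 2^3 = 8 runs covering every combination of low/high levels.
runs = list(itertools.product(*levels.values()))

def toy_tensile_strength(width, thickness, infill):
    """Hypothetical response: strength rises linearly with each factor."""
    return 0.5 * width + 3.0 * thickness + 0.2 * infill

y = np.array([toy_tensile_strength(*r) for r in runs])

def main_effect(factor_index):
    """Mean response at a factor's high level minus at its low level."""
    arr = np.array(runs)
    high = arr[:, factor_index] == max(r[factor_index] for r in runs)
    return float(y[high].mean() - y[~high].mean())

effects = {name: main_effect(i) for i, name in enumerate(levels)}
```

    With replicated physical tests, an ANOVA on these runs would also expose the interaction terms the study reports.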

  14. A Probabilistic Approach to Model Update

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.

    2001-01-01

    Finite element models are often developed for load validation, structural certification, response prediction, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with an in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem, and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while predictions of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.

  15. Using Chemical Structure Information to Predict In Vitro Pharmacokinetic Parameters (SOT)

    EPA Science Inventory

    Toxicokinetic data are key for relating exposure and internal dose when building in vitro-based risk assessment models. However, conducting in vivo toxicokinetic studies has time and cost limitations, and in vitro toxicokinetic data is available only for a limited set of chemical...

  16. Development of a Harmonized Database of Reported and Predicted Consumer Product Ingredient Information

    EPA Science Inventory

    Near-field exposure to chemicals in consumer products has been identified as a significant source of exposure for many chemicals. Quantitative data on product chemical composition and weight fraction is a key parameter for characterizing this exposure. While data on product compo...

  17. Separating foliar physiology from morphology reveals the relative roles of vertically structured transpiration factors within red maple crowns and limitations of larger scale models

    PubMed Central

    Bauerle, William L.; Bowden, Joseph D.

    2011-01-01

    A spatially explicit mechanistic model, MAESTRA, was used to separate key parameters affecting transpiration to provide insights into the most influential parameters for accurate predictions of within-crown and within-canopy transpiration. Once validated among Acer rubrum L. genotypes, model responses to different parameterization scenarios were scaled up to stand transpiration (expressed per unit leaf area) to assess how transpiration might be affected by the spatial distribution of foliage properties. For example, when physiological differences were accounted for, differences in leaf width among A. rubrum L. genotypes resulted in a 25% difference in transpiration. An in silico within-canopy sensitivity analysis was conducted over the range of genotype parameter variation observed and under different climate forcing conditions. The analysis revealed that seven of 16 leaf traits had a ≥5% impact on transpiration predictions. Under sparse foliage conditions, comparisons of the present findings with previous studies were in agreement that parameters such as the maximum Rubisco-limited rate of photosynthesis can explain ∼20% of the variability in predicted transpiration. However, the spatial analysis shows how such parameters can decrease or change in importance below the uppermost canopy layer. Alternatively, model sensitivity to leaf width and minimum stomatal conductance was continuous along a vertical canopy depth profile. Foremost, transpiration sensitivity to an observed range of morphological and physiological parameters is examined and the spatial sensitivity of transpiration model predictions to vertical variations in microclimate and foliage density is identified to reduce the uncertainty of current transpiration predictions. PMID:21617246

  18. Optimization and uncertainty assessment of strongly nonlinear groundwater models with high parameter dimensionality

    NASA Astrophysics Data System (ADS)

    Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun

    2010-10-01

    Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, Wilfred M; King, Anthony Wayne; Dragoni, Danilo

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  20. Experimental determination of solvent-water partition coefficients and Abraham parameters for munition constituents.

    PubMed

    Liang, Yuzhen; Kuo, Dave T F; Allen, Herbert E; Di Toro, Dominic M

    2016-10-01

    There is concern about the environmental fate and effects of munition constituents (MCs). Polyparameter linear free energy relationships (pp-LFERs) that employ Abraham solute parameters can aid in evaluating the risk of MCs to the environment. However, poor predictions using pp-LFERs and ABSOLV estimated Abraham solute parameters are found for some key physico-chemical properties. In this work, the Abraham solute parameters are determined using experimental partition coefficients in various solvent-water systems. The compounds investigated include hexahydro-1,3,5-trinitro-1,3,5-triazacyclohexane (RDX), octahydro-1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane (HMX), hexahydro-1-nitroso-3,5-dinitro-1,3,5-triazine (MNX), hexahydro-1,3,5-trinitroso-1,3,5-triazine (TNX), hexahydro-1,3-dinitroso-5- nitro-1,3,5-triazine (DNX), 2,4,6-trinitrotoluene (TNT), 1,3,5-trinitrobenzene (TNB), and 4-nitroanisole. The solvents in the solvent-water systems are hexane, dichloromethane, trichloromethane, octanol, and toluene. The only available reported solvent-water partition coefficients are for octanol-water for some of the investigated compounds and they are in good agreement with the experimental measurements from this study. Solvent-water partition coefficients fitted using experimentally derived solute parameters from this study have significantly smaller root mean square errors (RMSE = 0.38) than predictions using ABSOLV estimated solute parameters (RMSE = 3.56) for the investigated compounds. Additionally, the predictions for various physico-chemical properties using the experimentally derived solute parameters agree with available literature reported values with prediction errors within 0.79 log units except for water solubility of RDX and HMX with errors of 1.48 and 2.16 log units respectively. However, predictions using ABSOLV estimated solute parameters have larger prediction errors of up to 7.68 log units. 
This large discrepancy is probably due to the absence of the R2NNO2 (nitramine) and R2NNO (nitrosamine) functional groups from the ABSOLV fragment database. Copyright © 2016. Published by Elsevier Ltd.
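    For readers unfamiliar with the pp-LFER form, the Abraham model evaluates as a simple linear combination of solute descriptors (E, S, A, B, V) and system coefficients (c, e, s, a, b, v). The numeric values below are illustrative placeholders, not the fitted values from this study:

```python
def abraham_log_k(solute, system):
    """log10 solvent-water partition coefficient from Abraham descriptors:
    log K = c + e*E + s*S + a*A + b*B + v*V."""
    return (system["c"]
            + system["e"] * solute["E"]
            + system["s"] * solute["S"]
            + system["a"] * solute["A"]
            + system["b"] * solute["B"]
            + system["v"] * solute["V"])

# Hypothetical descriptor set for a nitroaromatic-like solute.
solute = {"E": 1.0, "S": 1.5, "A": 0.0, "B": 0.6, "V": 1.2}

# Hypothetical coefficients for one solvent-water system.
system = {"c": 0.1, "e": 0.5, "s": -1.0, "a": -3.0, "b": -4.5, "v": 4.0}

log_k = abraham_log_k(solute, system)
```

    The study's contribution is precisely the source of the descriptors: experimentally derived values in place of ABSOLV fragment estimates.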

  1. Guiding automated left ventricular chamber segmentation in cardiac imaging using the concept of conserved myocardial volume.

    PubMed

    Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A

    2008-06-01

    The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.
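    The conserved-volume metric can be sketched directly. The per-frame volumes below are synthetic; the idea is that true myocardial volume is nearly constant over the cardiac cycle, so a parameter set whose segmentations yield low volume variation is likely more accurate:

```python
import numpy as np

def volume_variation(volumes):
    """Coefficient of variation of per-frame segmented myocardial volume;
    lower values indicate better conservation of volume."""
    v = np.asarray(volumes, dtype=float)
    return float(v.std() / v.mean())

# Hypothetical per-frame volumes (mL) produced by two candidate parameter sets.
score_good = volume_variation([101.0, 100.2, 99.5, 100.8, 100.1])
score_bad = volume_variation([101.0, 92.0, 110.0, 88.0, 104.0])
# The parameter set with the lower score would be preferred.
```

    A parameter sweep would compute this score for each candidate value of, e.g., the gradient vector field weighting, and pick the minimum.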

  2. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling.

    PubMed

    Sumner, T; Shephard, E; Bogle, I D L

    2012-09-07

    One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes and a number of key features of the system are identified.
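    The combination of functional PCA with a sensitivity measure can be sketched in plain numpy. The two-parameter decaying signal below stands in for the insulin-signalling model, and squared correlation is used as a deliberately simple stand-in for the established global SA indices:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_t = 300, 50
t = np.linspace(0.0, 10.0, n_t)

# Two uncertain parameters of a toy time-dependent model (assumed ranges).
k = rng.uniform(0.1, 1.0, n_samples)       # decay-rate parameter
A = rng.uniform(0.5, 2.0, n_samples)       # amplitude parameter
outputs = A[:, None] * np.exp(-k[:, None] * t[None, :])   # (samples, time)

# Functional PCA via SVD of the centered output curves: each time course is
# reduced to a few principal-component scores.
centered = outputs - outputs.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T               # first two PC scores per sample

def r2(x, y):
    """Squared correlation: crude fraction of score variance tied to a parameter."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

sensitivity = {"k": [r2(k, scores[:, 0]), r2(k, scores[:, 1])],
               "A": [r2(A, scores[:, 0]), r2(A, scores[:, 1])]}
```

    The paper's methodology would replace the squared correlation with proper variance-based global SA indices computed on the PC scores, which also captures parameter interactions.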

  3. Review of concrete biodeterioration in relation to nuclear waste.

    PubMed

    Turick, Charles E; Berry, Christopher J

    2016-01-01

    Storage of radioactive waste in concrete structures is a means of containing wastes and related radionuclides generated from nuclear operations in many countries. Previous efforts related to microbial impacts on concrete structures used to contain radioactive waste showed that microbial activity can play a significant role in concrete degradation and, ultimately, structural deterioration. This literature review examines the research in this field and is focused on specific parameters that are applicable to modeling and predicting the fate of concrete structures used to store or dispose of radioactive waste. Rates of concrete biodegradation vary with environmental conditions, illustrating a need to understand the bioavailability of key compounds involved in microbial activity. Microbial growth requires pH and osmotic pressure within a certain range, as well as the availability and abundance of energy sources such as the compounds involved in sulfur, iron, and nitrogen oxidation. Carbon flow and availability are also factors to consider in predicting concrete biodegradation. The microbial contribution to degradation of concrete structures containing radioactive waste is a constant possibility, and the rate and degree of concrete biodegradation depend on numerous physical, chemical, and biological parameters. Parameters to focus on for modeling activities, and possible mitigation options that would minimize concrete biodegradation, are discussed, including key conditions that drive microbial activity on concrete surfaces. Copyright © 2015. Published by Elsevier Ltd.

  4. Analysis of helicopter flight dynamics through modeling and simulation of primary flight control actuation system

    NASA Astrophysics Data System (ADS)

    Nelson, Hunter Barton

    A simplified second-order transfer-function actuator model used in most flight dynamics applications cannot easily capture the effects of different actuator parameters. The present work integrates a nonlinear actuator model into a nonlinear state-space rotorcraft model to determine the effect of actuator parameters on key flight dynamics. The actuator model was integrated with a swashplate kinematics model, and step responses were generated over a range of key hydraulic parameters. The actuator-swashplate system was then introduced into a nonlinear state-space rotorcraft simulation, where flight dynamics quantities such as bandwidth and phase delay were analyzed. Frequency sweeps were simulated for unique actuator configurations using the coupled nonlinear actuator-rotorcraft system. The software package CIFER was used for system identification, and the results were compared directly to the linearized models. As the actuator became rate saturated, the effects on bandwidth and phase delay were apparent in the predicted handling qualities specifications.

  5. Sensitivity of corneal biomechanical and optical behavior to material parameters using design of experiments method.

    PubMed

    Xu, Mengchen; Lerner, Amy L; Funkenbusch, Paul D; Richhariya, Ashutosh; Yoon, Geunyoung

    2018-02-01

    The optical performance of the human cornea under intraocular pressure (IOP) is the result of complex material properties and their interactions. The measurement of the numerous material parameters that define this material behavior may be key in the refinement of patient-specific models. The goal of this study was to investigate the relative contribution of these parameters to the biomechanical and optical responses of human cornea predicted by a widely accepted anisotropic hyperelastic finite element model, with regional variations in the alignment of fibers. Design of experiments methods were used to quantify the relative importance of material properties including matrix stiffness, fiber stiffness, fiber nonlinearity and fiber dispersion under physiological IOP. Our sensitivity results showed that corneal apical displacement was influenced nearly evenly by matrix stiffness, fiber stiffness and nonlinearity. However, the variations in corneal optical aberrations (refractive power and spherical aberration) were primarily dependent on the value of the matrix stiffness. The optical aberrations predicted by variations in this material parameter were sufficiently large to predict clinically important changes in retinal image quality. Therefore, well-characterized individual variations in matrix stiffness could be critical in cornea modeling in order to reliably predict optical behavior under different IOPs or after corneal surgery.

  6. Homogenization Theory for the Prediction of Obstructed Solute Diffusivity in Macromolecular Solutions.

    PubMed

    Donovan, Preston; Chehreghanianzabi, Yasaman; Rathinam, Muruhan; Zustiak, Silviya Petrova

    2016-01-01

    The study of diffusion in macromolecular solutions is important in many biomedical applications such as separations, drug delivery, and cell encapsulation, and key for many biological processes such as protein assembly and interstitial transport. Not surprisingly, multiple models for the a priori prediction of diffusion in macromolecular environments have been proposed. However, most models include parameters that are not readily measurable, are specific to the polymer-solute-solvent system, or are fitted and do not have a physical meaning. Here, for the first time, we develop a homogenization theory framework for the prediction of effective solute diffusivity in macromolecular environments based on physical parameters that are easily measurable and not specific to the macromolecule-solute-solvent system. Homogenization theory is useful for situations where knowledge of fine-scale parameters is used to predict bulk system behavior. As a first approximation, we focus on a model where the solute is subjected to obstructed diffusion via stationary spherical obstacles. We find that the homogenization theory results agree well with computationally more expensive Monte Carlo simulations. Moreover, the homogenization theory agrees with effective diffusivities of a solute in dilute and semi-dilute polymer solutions measured using fluorescence correlation spectroscopy. Lastly, we provide a mathematical formula for the effective diffusivity in terms of a non-dimensional and easily measurable geometric system parameter.
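
    The paper's closed-form result is not reproduced here, but the qualitative idea can be sketched with a classical stand-in: Maxwell's relation for diffusion among stationary impermeable spherical obstacles, which likewise needs only the obstacle volume fraction. The formula choice and the numbers below are illustrative assumptions, not the paper's derived expression.

```python
# Stand-in for the paper's formula: Maxwell's classical relation for a solute
# diffusing among stationary impermeable spheres, D_eff/D0 = 2*(1-phi)/(2+phi),
# where phi is the obstacle volume fraction (an easily measurable parameter).

def maxwell_deff(d0, phi):
    """Effective diffusivity given free diffusivity d0 and volume fraction phi."""
    if not 0.0 <= phi < 1.0:
        raise ValueError("volume fraction must be in [0, 1)")
    return d0 * 2.0 * (1.0 - phi) / (2.0 + phi)

d0 = 1.0e-10  # free solute diffusivity, m^2/s (hypothetical)
ratios = [maxwell_deff(d0, phi) / d0 for phi in (0.0, 0.1, 0.3)]
```

    As expected, the obstructed diffusivity falls monotonically as the obstacle volume fraction rises, with no fitted or system-specific parameters involved.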

  7. A Toolkit to Study Sensitivity of the Geant4 Predictions to the Variations of the Physics Model Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fields, Laura; Genser, Krzysztof; Hatcher, Robert

    Geant4 is the leading detector simulation toolkit used in high energy physics to design detectors and to optimize calibration and reconstruction software. It employs a set of carefully validated physics models to simulate interactions of particles with matter across a wide range of interaction energies. These models, especially the hadronic ones, rely largely on directly measured cross-sections and phenomenological predictions with physically motivated parameters estimated by theoretical calculation or measurement. Because these models are tuned to cover a very wide range of possible simulation tasks, they may not always be optimized for a given process or a given material. This raises several critical questions, e.g. how sensitive Geant4 predictions are to variations of the model parameters, what uncertainties are associated with a particular tune of a Geant4 physics model or a group of models, and how to consistently derive guidance for Geant4 model development and improvement from the wide range of available experimental data. We have designed and implemented a comprehensive, modular, user-friendly software toolkit to study and address such questions. It allows one to easily modify parameters of one or several Geant4 physics models involved in the simulation, and to perform collective analysis of multiple variants of the resulting physics observables of interest, with comparison against a variety of corresponding experimental data. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. a flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented and illustrated with results obtained with key Geant4 hadronic models.

  8. A Novel Approach for Constructing One-Way Hash Function Based on a Message Block Controlled 8D Hyperchaotic Map

    NASA Astrophysics Data System (ADS)

    Lin, Zhuosheng; Yu, Simin; Lü, Jinhu

    2017-06-01

    In this paper, a novel approach for constructing a one-way hash function based on an 8D hyperchaotic map is presented. First, two nominal matrices, one with constant and one with variable parameters, are adopted for designing 8D discrete-time hyperchaotic systems. Then each input plaintext message block is transformed into an 8 × 8 matrix, following the order of left to right and top to bottom, which is used as a control matrix for switching between the constant-parameter and variable-parameter elements of the nominal matrices. Through this switching control, a new nominal matrix mixing the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash function is constructed from multiple low 8-bit outputs of the iterated hyperchaotic system after rounding down, and security analysis results are given, validating the feasibility and reliability of the proposed approach. Compared with existing schemes, the main feature of the proposed method is its large number of key parameters with an avalanche effect, making it difficult to estimate or predict the key parameters via various attacks.
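
    The two ingredients named in the abstract, message-controlled parameter switching and a digest built from low 8-bit chaotic iterates, can be illustrated with a one-dimensional toy (a logistic map), which is far simpler than the authors' 8D scheme and is used here only to show how the avalanche effect is measured. The map, its parameter coupling, and the test messages are all invented for illustration.

```python
# Toy chaotic hash (NOT the authors' 8D construction): each message byte
# perturbs the logistic-map parameter, and the digest is assembled from the
# low 8 bits of successive iterates after rounding down.

def toy_chaos_hash(msg: bytes, n_bytes: int = 8) -> int:
    x = 0.5
    r = 3.9
    for b in msg:
        r = 3.9 + 0.0003 * b            # message-byte-controlled parameter switch
        for _ in range(16):             # mix the state under the switched map
            x = r * x * (1.0 - x)
    digest = 0
    for _ in range(n_bytes):
        x = r * x * (1.0 - x)
        digest = (digest << 8) | (int(x * 256.0) & 0xFF)  # low 8 bits, rounded down
    return digest

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

h1 = toy_chaos_hash(b"key parameters")
h2 = toy_chaos_hash(b"key parameterr")   # single-character change
flipped = hamming(h1, h2)                # near half of 64 bits indicates avalanche
```

    Because the map is chaotic, a one-character change in the message yields a very different digest; counting flipped bits is the standard avalanche check.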

  9. Understanding which parameters control shallow ascent of silicic effusive magma

    NASA Astrophysics Data System (ADS)

    Thomas, Mark E.; Neuberg, Jurgen W.

    2014-11-01

    The estimation of the magma ascent rate is key to predicting volcanic activity and relies on understanding how strongly the ascent rate is controlled by different magmatic parameters. Linking potential changes of such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters, such as the magmatic water content, temperature or bulk magma composition, on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. We show that variability in the rate of low frequency seismicity, assumed to correlate directly with the rate of magma movement, can be used as an indicator for changes in ascent rate and, therefore, eruptive activity. The results indicate that conduit diameter and excess pressure in the magma chamber are amongst the dominant controlling variables, but the single most important parameter is the volatile content (assumed to be water only). Varying this parameter across the range of reported values changes the calculated ascent velocities by up to 800%.

  10. Application of artificial neural networks to assess pesticide contamination in shallow groundwater

    USGS Publications Warehouse

    Sahoo, G.B.; Ray, C.; Mehnert, E.; Keefer, D.A.

    2006-01-01

    In this study, a feed-forward back-propagation neural network (BPNN) was developed and applied to predict pesticide concentrations in groundwater monitoring wells. Pesticide concentration data are challenging to analyze because they tend to be highly censored. Input data to the neural network included the categorical indices of depth to aquifer material, pesticide leaching class, aquifer sensitivity to pesticide contamination, time (month) of sample collection, well depth, depth to water from land surface, and additional travel distance in the saturated zone (i.e., distance from land surface to midpoint of well screen). The output of the neural network was the total pesticide concentration detected in the well. The model prediction results produced good agreements with observed data in terms of correlation coefficient (R = 0.87) and pesticide detection efficiency (E = 89%), as well as good match between the observed and predicted "class" groups. The relative importance of input parameters to pesticide occurrence in groundwater was examined in terms of R, E, mean error (ME), root mean square error (RMSE), and pesticide occurrence "class" groups by eliminating some key input parameters to the model. Well depth and time of sample collection were the most sensitive input parameters for predicting the pesticide contamination potential of a well. This suggests that wells tapping shallow aquifers are more vulnerable to pesticide contamination than those tapping deeper aquifers. Pesticide occurrences during post-application months (June through October) were found to be 2.5 to 3 times higher than pesticide occurrences during other months (November through April). The BPNN was used to rank the input parameters with highest potential to contaminate groundwater, including two original and five ancillary parameters. The two original parameters are depth to aquifer material and pesticide leaching class.
When these two parameters were the only input parameters for the BPNN, they were not able to predict contamination potential. However, when they were used with other parameters, the predictive performance efficiency of the BPNN in terms of R, E, ME, RMSE, and pesticide occurrence "class" groups increased. Ancillary data include data collected during the study, such as well depth and time of sample collection. The BPNN indicated that the ancillary data had more predictive power than the original data. The BPNN results will help researchers identify parameters to improve maps of aquifer sensitivity to pesticide contamination. © 2006 Elsevier B.V. All rights reserved.
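
    A feed-forward back-propagation network of the kind described can be written in pure Python in a few dozen lines. The sketch below trains a tiny 2-4-1 sigmoid network on fabricated "well" data (two scaled features standing in for well depth and sampling month); the data, architecture, and learning rate are all invented for illustration and are unrelated to the study's dataset.

```python
# Minimal feed-forward back-propagation network (2 inputs, 4 hidden sigmoid
# units, 1 sigmoid output) trained by stochastic gradient descent on toy data.
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# synthetic rule: shallow wells (small d) sampled late in the season (large m)
# are more likely to show a "detection"
data = []
for _ in range(40):
    d, m = random.random(), random.random()
    data.append(((d, m), 1.0 if (1.0 - d) * m > 0.35 else 0.0))

n_hidden = 4
w1 = [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [random.uniform(-1.0, 1.0) for _ in range(n_hidden)]
b2 = 0.0

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w1, b1)]
    return h, sigmoid(sum(wi * hi for wi, hi in zip(w2, h)) + b2)

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
lr = 0.5
for _ in range(500):                            # epochs of plain backprop
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1.0 - out)   # output-layer delta (MSE loss)
        for j in range(n_hidden):
            d_h = d_out * w2[j] * h[j] * (1.0 - h[j])   # hidden-layer delta
            w2[j] -= lr * d_out * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_out
loss_after = mse()
```

    The parameter-importance analysis in the abstract then amounts to retraining with selected inputs removed and comparing the resulting R, E, and RMSE.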

  11. Predicting acidification recovery at the Hubbard Brook Experimental Forest, New Hampshire: evaluation of four models.

    PubMed

    Tominaga, Koji; Aherne, Julian; Watmough, Shaun A; Alveteg, Mattias; Cosby, Bernard J; Driscoll, Charles T; Posch, Maximilian; Pourmokhtarian, Afshin

    2010-12-01

    The performance and prediction uncertainty (owing to parameter and structural uncertainties) of four dynamic watershed acidification models (MAGIC, PnET-BGC, SAFE, and VSD) were assessed by systematically applying them to data from the Hubbard Brook Experimental Forest (HBEF), New Hampshire, where long-term records of precipitation and stream chemistry were available. In order to facilitate systematic evaluation, Monte Carlo simulation was used to randomly generate common model input data sets (n = 10,000) from parameter distributions; input data were subsequently translated among models to retain consistency. The model simulations were objectively calibrated against observed data (streamwater: 1963-2004, soil: 1983). The ensemble of calibrated models was used to assess future response of soil and stream chemistry to reduced sulfur deposition at the HBEF. Although both hindcast (1850-1962) and forecast (2005-2100) predictions were qualitatively similar across the four models, the temporal pattern of key indicators of acidification recovery (stream acid neutralizing capacity and soil base saturation) differed substantially. The range in predictions resulted from differences in model structure and their associated posterior parameter distributions. These differences can be accommodated by employing multiple models (ensemble analysis) but have implications for individual model applications.
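
    The Monte Carlo ensemble idea can be shown in miniature: draw many parameter sets from prior ranges, run a model for each, and report the spread of a forecast. The exponential "ANC recovery" model and all ranges below are invented stand-ins for the four watershed models, purely to illustrate the workflow.

```python
# Monte Carlo ensemble sketch: sample n parameter sets, run a toy recovery
# model, and summarize the forecast spread. Parameter ranges are hypothetical.
import math, random

random.seed(42)

def toy_recovery(t, anc0, anc_inf, tau):
    """Toy stream-ANC trajectory: exponential relaxation toward anc_inf."""
    return anc_inf - (anc_inf - anc0) * math.exp(-t / tau)

n = 10000
predictions_2100 = []
for _ in range(n):
    anc0 = random.uniform(-20.0, 0.0)      # ANC in 2005, ueq/L (hypothetical)
    anc_inf = random.uniform(10.0, 40.0)   # fully recovered ANC (hypothetical)
    tau = random.uniform(20.0, 80.0)       # recovery time constant, years
    predictions_2100.append(toy_recovery(95.0, anc0, anc_inf, tau))

predictions_2100.sort()
p05 = predictions_2100[int(0.05 * n)]
median = predictions_2100[n // 2]
p95 = predictions_2100[int(0.95 * n)]
```

    In the study, the same sampled parameter sets are translated among the four models so that the prediction spread reflects structural as well as parameter uncertainty.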

  12. A Detailed Data-Driven Network Model of Prefrontal Cortex Reproduces Key Features of In Vivo Activity

    PubMed Central

    Hass, Joachim; Hertäg, Loreen; Durstewitz, Daniel

    2016-01-01

    The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition. PMID:27203563

  13. Remote health monitoring: predicting outcome success based on contextual features for cardiovascular disease.

    PubMed

    Alshurafa, Nabil; Eastwood, Jo-Ann; Pourhomayoun, Mohammad; Liu, Jason J; Sarrafzadeh, Majid

    2014-01-01

    Current studies have produced a plethora of remote health monitoring (RHM) systems designed to enhance the care of patients with chronic diseases. Many RHM systems are designed to improve patient risk factors for cardiovascular disease, including physiological parameters such as body mass index (BMI) and waist circumference, and lipid profiles such as low density lipoprotein (LDL) and high density lipoprotein (HDL). There are several patient characteristics that could be determining factors for a patient's RHM outcome success, but these characteristics have been largely unidentified. In this paper, we analyze results from an RHM system deployed in a six month Women's Heart Health study of 90 patients, and apply advanced feature selection and machine learning algorithms to identify patients' key baseline contextual features and build effective prediction models that help determine RHM outcome success. We introduce Wanda-CVD, a smartphone-based RHM system designed to help participants with cardiovascular disease risk factors by motivating participants through wireless coaching using feedback and prompts as social support. We analyze key contextual features that secure positive patient outcomes in both physiological parameters and lipid profiles. Results from the Women's Heart Health study show that health threat of heart disease, quality of life, family history, stress factors, social support, and anxiety at baseline all help predict patient RHM outcome success.
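
    A very simple version of the feature-screening step is ranking baseline features by the magnitude of their correlation with outcome success. The three feature names follow the abstract, but the twelve "participants" and all values below are fabricated toy data, and plain Pearson correlation stands in for the paper's feature-selection machinery.

```python
# Rank baseline contextual features by |Pearson correlation| with a binary
# outcome-success label. All data are fabricated for illustration.
import math

features = ["quality_of_life", "stress", "social_support"]
# rows: (feature values..., outcome success 0/1) -- toy data
rows = [
    (7.0, 2.0, 8.0, 1), (6.5, 3.0, 7.0, 1), (8.0, 1.5, 9.0, 1),
    (4.0, 6.0, 3.0, 0), (3.5, 7.0, 2.0, 0), (5.0, 5.5, 4.0, 0),
    (7.5, 2.5, 6.0, 1), (4.5, 6.5, 3.5, 0), (6.0, 3.5, 7.5, 1),
    (3.0, 7.5, 2.5, 0), (5.5, 4.0, 5.0, 1), (4.2, 5.8, 3.8, 0),
]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

outcome = [r[-1] for r in rows]
ranking = sorted(
    ((abs(pearson([r[i] for r in rows], outcome)), name)
     for i, name in enumerate(features)),
    reverse=True,
)
```

    Real feature selection would also handle redundancy among features and validate on held-out participants rather than rank on the training set.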

  14. Pesticide Environmental Fate Research for the 21st Century: Building Bridges Between Laboratory and Field Studies at Varying Scales

    USDA-ARS?s Scientific Manuscript database

    Accurate determination of predicted environmental concentrations (PECs) is a continuing and often elusive goal of pesticide risk assessment. PECs are typically derived using simulation models that depend on laboratory generated data for key input parameters (t1/2, Koc, etc.). Model flexibility in ...

  16. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Treesearch

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...
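
    A crude variance-based sketch conveys the spirit of a global sensitivity analysis: for an additive model, the share of output variance removed by freezing one input at its nominal value approximates that input's first-order index. The toy model y = 4a + 2b + 0.5c and its uniform inputs are invented for illustration (DRAINMOD-FOREST itself has many more parameters and nonlinear interactions, for which full Sobol or Morris methods are needed).

```python
# Crude variance-based sensitivity: freeze one input at its mean and measure
# the drop in output variance (valid as a first-order index for additive models).
import random

random.seed(1)

def model(a, b, c):
    return 4.0 * a + 2.0 * b + 0.5 * c   # invented additive toy model

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

n = 20000
draws = [(random.random(), random.random(), random.random()) for _ in range(n)]
v_total = variance([model(*d) for d in draws])

indices = {}
for i, name in enumerate("abc"):
    frozen = [model(*(0.5 if j == i else x for j, x in enumerate(d)))
              for d in draws]
    indices[name] = 1.0 - variance(frozen) / v_total
```

    For this additive model the three indices sum to about one and rank the inputs by their coefficients, which is exactly the "key parameter" ranking a global analysis produces.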

  17. Photovoltaic Module Soiling Map | Photovoltaic Research | NREL

    Science.gov Websites

    Related method publications referenced on this page (titles truncated in the source):
    M. Deceglie, L. Micheli, and M. Muller, "Quantifying soiling loss directly from PV yield ..."
    L. Micheli and M. Muller, "An investigation of the key parameters for predicting PV ..."
    M. Muller, L. Micheli, and A.A. Martinez-Morales, "A Method to Extract Soiling Loss Data from ..."

  18. An investigation of the key parameters for predicting PV soiling losses

    DOE PAGES

    Micheli, Leonardo; Muller, Matthew

    2017-01-25

    One hundred and two environmental and meteorological parameters have been investigated and compared with the performance of 20 soiling stations installed in the USA, in order to determine their ability to predict the soiling losses occurring on PV systems. The results of this investigation showed that the annual average of the daily mean particulate matter values recorded by monitoring stations deployed near the PV systems is the best soiling predictor, with coefficients of determination (R²) as high as 0.82. The precipitation pattern was also found to be relevant: among the different meteorological parameters, the average length of dry periods had the best correlation with the soiling ratio. Lastly, a preliminary investigation of two-variable regressions was attempted and resulted in an adjusted R² of 0.90 when a combination of PM2.5 and a binary classification for the average length of the dry period was introduced.
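
    The two-variable regression described above can be sketched with ordinary least squares on an intercept, a PM2.5-like reading, and a binary dry-period flag, reporting both R² and adjusted R². The twelve observations below are fabricated for illustration; they are not the soiling-station data.

```python
# OLS with two predictors via the normal equations, plus adjusted R^2.
# columns: intercept, pm25-like reading, binary long-dry-period flag (toy data)
X = [
    (1.0, 5.0, 0), (1.0, 8.0, 0), (1.0, 12.0, 1), (1.0, 15.0, 1),
    (1.0, 6.0, 0), (1.0, 9.0, 1), (1.0, 11.0, 0), (1.0, 14.0, 1),
    (1.0, 7.0, 0), (1.0, 10.0, 1), (1.0, 13.0, 1), (1.0, 16.0, 1),
]
y = [0.21, 0.33, 0.68, 0.80, 0.25, 0.55, 0.45, 0.76, 0.29, 0.59, 0.71, 0.85]

def solve3(a, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# normal equations: (X^T X) beta = X^T y
xtx = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
beta = solve3(xtx, xty)

pred = [sum(b * v for b, v in zip(beta, r)) for r in X]
ybar = sum(y) / len(y)
ss_res = sum((p - yi) ** 2 for p, yi in zip(pred, y))
ss_tot = sum((yi - ybar) ** 2 for yi in y)
r2 = 1.0 - ss_res / ss_tot
n, k = len(y), 2
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
```

    Adjusted R² penalizes the extra regressor, which is why the abstract reports it for the two-variable model rather than plain R².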

  19. Mining manufacturing data for discovery of high productivity process characteristics.

    PubMed

    Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou

    2010-06-01

    Modern manufacturing facilities for bioproducts are highly automated, with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource for comprehending the complex characteristics of bioprocesses and enhancing production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run constitutes over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum margin-based support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate real-time decision making and thereby improve the robustness of large scale bioprocesses. © 2010 Elsevier B.V. All rights reserved.

  20. In-depth analysis and characterization of a dual damascene process with respect to different CD

    NASA Astrophysics Data System (ADS)

    Krause, Gerd; Hofmann, Detlef; Habets, Boris; Buhl, Stefan; Gutsch, Manuela; Lopez-Gomez, Alberto; Kim, Wan-Soo; Thrun, Xaver

    2018-03-01

    In a 200 mm high volume environment, we studied data from a dual damascene process. Dual damascene is a combination of lithography, etch, and CMP that is used to create copper lines and contacts in a single step. During these process steps, different metal CDs are measured by different measurement methods. In this study, we analyze the key metrics of the different measurements after each process step and develop simple models to predict the electrical behavior. In addition, radial profiles of both inline measurement parameters and electrical parameters have been analyzed. A matching method was developed based on inline and electrical data. Finally, correlation analysis for radial signatures is presented that can be used to predict excursions in electrical signatures.

  1. Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2006-01-01

    The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient and non-gradient based optimization algorithms. Results show significant improvements in model predictions after parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainties quantification are presented.

  2. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  3. A square-force cohesion model and its extraction from bulk measurements

    NASA Astrophysics Data System (ADS)

    Liu, Peiyuan; Lamarche, Casey; Kellogg, Kevin; Hrenya, Christine

    2017-11-01

    Cohesive particles remain poorly understood, with order-of-magnitude differences exhibited by prior physical predictions of agglomerate size. A major obstacle lies in the absence of robust models of particle-particle cohesion, thereby precluding accurate prediction of the behavior of cohesive particles. Rigorous cohesion models commonly contain parameters related to surface roughness, to which cohesion shows extreme sensitivity. However, both roughness measurement and its distillation into these model parameters are challenging. Accordingly, we propose a "square-force" model, where cohesive force remains constant until a cut-off separation. Via DEM simulations, we demonstrate validity of the square-force model as a surrogate for more rigorous models, when its two parameters are selected to match the two key quantities governing dense and dilute granular flows, namely maximum cohesive force and critical cohesive energy, respectively. Perhaps more importantly, we establish a method to extract the parameters of the square-force model via defluidization, due to its ability to isolate the effects of the two parameters. Thus, instead of relying on complicated scans of individual grains, determination of particle-particle cohesion from simple bulk measurements becomes feasible. Dow Corning Corporation.
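
    The square-force law itself is simple enough to state in code: a constant force F0 out to a cut-off separation d_cut, zero beyond. The maximum cohesive force is then F0 and the critical cohesive energy is F0 * d_cut, so matching those two bulk-relevant quantities fixes both parameters. The numerical values below are hypothetical, not fitted to any material.

```python
# Two-parameter square-force cohesion law: constant force F0 for separations
# below d_cut, zero beyond. Numbers are hypothetical.

def square_force(d, f0, d_cut):
    """Cohesive force at surface separation d."""
    return f0 if 0.0 <= d < d_cut else 0.0

def cohesive_energy(f0, d_cut):
    """Work to separate a contact from d = 0 to beyond d_cut."""
    return f0 * d_cut

# match a (hypothetical) rigorous model with a given max force and
# critical cohesive energy:
f_max = 80e-9            # maximum cohesive force, N
e_crit = 4.0e-15         # critical cohesive energy, J
f0 = f_max               # max force pins F0 directly
d_cut = e_crit / f0      # energy match then fixes the cut-off separation
```

    With only these two parameters, both can be recovered from bulk defluidization measurements rather than from scans of individual grains, which is the abstract's central point.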

  4. Bayes Analysis and Reliability Implications of Stress-Rupture Testing a Kevlar/Epoxy COPV Using Temperature and Pressure Acceleration

    NASA Technical Reports Server (NTRS)

    Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.

    2009-01-01

    Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Flight certification is dependent on the reliability analysis to quantify the risk of stress rupture failure in existing flight vessels. Full certification of this reliability model would require a statistically significant number of lifetime tests to be performed and is impractical given the cost and limited flight hardware for certification testing purposes. One approach to confirm the reliability model is to perform a stress rupture test on a flight COPV. Currently, testing of such a Kevlar 49 (DuPont)/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test, the latter due to major differences between COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely it is that the more optimistic stress ratio model is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine," that is, reducing the predicted probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty in the Weibull shape parameter for lifetime, since testing several vessels would be necessary.
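
    The "longer it survives, the more likely the optimistic model" logic is a standard Bayesian update and can be sketched directly: two candidate stress-ratio interpretations, each implying a Weibull lifetime distribution, have their priors multiplied by their survival probabilities at the observed test time. Every shape, scale, and prior value below is hypothetical, not the COPV analysis numbers.

```python
# Bayesian update between two Weibull lifetime models given survival to time t.
# All parameter values are hypothetical illustrations.
import math

def weibull_survival(t, shape, scale):
    return math.exp(-((t / scale) ** shape))

prior_opt, prior_pes = 0.5, 0.5   # equal prior belief in the two models
shape = 1.2                       # common Weibull shape (hypothetical)
scale_opt, scale_pes = 50.0, 5.0  # characteristic life, years (hypothetical)

def posterior_optimistic(t):
    """P(optimistic model | vessel survived to t)."""
    lo = prior_opt * weibull_survival(t, shape, scale_opt)
    lp = prior_pes * weibull_survival(t, shape, scale_pes)
    return lo / (lo + lp)

p1 = posterior_optimistic(1.0)
p3 = posterior_optimistic(3.0)
```

    Because the optimistic model's survival function decays more slowly, its posterior probability grows monotonically with the survival time of the test vessel, exactly the qualitative behavior the abstract describes.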

  5. Parameterization and prediction of nanoparticle transport in porous media: A reanalysis using artificial neural network

    NASA Astrophysics Data System (ADS)

    Babakhani, Peyman; Bridge, Jonathan; Doong, Ruey-an; Phenrat, Tanapon

    2017-06-01

    The continuing rapid expansion of industrial and consumer processes based on nanoparticles (NP) necessitates a robust model for delineating their fate and transport in groundwater. An ability to reliably specify the full parameter set for prediction of NP transport using continuum models is crucial. In this paper we report the reanalysis of a data set of 493 published column experiment outcomes together with their continuum modeling results. Experimental properties were parameterized into 20 factors which are commonly available. They were then used to predict five key continuum model parameters as well as the effluent concentration via artificial neural network (ANN)-based correlations. The Partial Derivatives (PaD) technique and Monte Carlo method were used for the analysis of sensitivities and model-produced uncertainties, respectively. The outcomes shed light on several controversial relationships between the parameters, e.g., it was revealed that the trend of K_att with average pore water velocity was positive. The resulting correlations, despite being developed based on a "black-box" technique (ANN), were able to explain the effects of theoretical parameters such as critical deposition concentration (CDC), even though these parameters were not explicitly considered in the model. Porous media heterogeneity was considered as a parameter for the first time and showed sensitivities higher than those of dispersivity. The model performance was validated well against subsets of the experimental data and was compared with current models. The robustness of the correlation matrices was not completely satisfactory, since they failed to predict the experimental breakthrough curves (BTCs) at extreme values of ionic strengths.

  6. Evaluation of the predictive capability of coupled thermo-hydro-mechanical models for a heated bentonite/clay system (HE-E) in the Mont Terri Rock Laboratory

    DOE PAGES

    Garitte, B.; Shao, H.; Wang, X. R.; ...

    2017-01-09

    Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating an in-situ heater experiment HE-E in the Opalinus Clay in the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, derived from modelling of another in-situ experiment (HE-D), and on modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.

  8. Systems Analysis of the Hydrogen Transition with HyTrans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leiby, Paul Newsome; Greene, David L; Bowman, David Charles

    2007-01-01

    The U.S. Federal government is carefully considering the merits and long-term prospects of hydrogen-fueled vehicles. NAS (1) has called for the careful application of systems analysis tools to structure the complex assessment required. Others, raising cautionary notes, question whether a consistent and plausible transition to hydrogen light-duty vehicles can be identified (2) and whether that transition would, on balance, be environmentally preferred. Modeling the market transition to hydrogen-powered vehicles is an inherently complex process, encompassing hydrogen production, delivery and retailing, vehicle manufacturing, and vehicle choice and use. We describe the integration of key technological and market factors in a dynamic transition model, HyTrans. The usefulness of HyTrans and its predictions depends on three key factors: (1) the validity of the economic theories that underpin the model, (2) the authenticity with which the key processes are represented, and (3) the accuracy of specific parameter values used in the process representations. This paper summarizes the theoretical basis of HyTrans, and highlights the implications of key parameter specifications with sensitivity analysis.

  9. Psychoacoustical evaluation of natural and urban sounds in soundscapes.

    PubMed

    Yang, Ming; Kang, Jian

    2013-07-01

    Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.

  10. Combining quantitative trait loci analysis with physiological models to predict genotype-specific transpiration rates.

    PubMed

    Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K

    2015-04-01

    Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype-by-environment interaction and polygenic inheritance complicate the application of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.

  11. Spatial Patterns and Temperature Predictions of Tuna Fatty Acids: Tracing Essential Nutrients and Changes in Primary Producers

    PubMed Central

    Pethybridge, Heidi R.; Parrish, Christopher C.; Morrongiello, John; Young, Jock W.; Farley, Jessica H.; Gunasekera, Rasanthi M.; Nichols, Peter D.

    2015-01-01

    Fatty acids are among the least understood nutrients in marine environments, despite being key energy components of food webs and essential to all life forms. Presented here is a novel approach to predict the spatial-temporal distributions of fatty acids in marine resources using generalized additive mixed models. Fatty acid tracers (FAT) of key primary producers, nutritional condition indices and concentrations of two essential long-chain (≥C20) omega-3 fatty acids (EFA) measured in muscle of albacore tuna, Thunnus alalunga, sampled in the south-west Pacific Ocean were the response variables. Predictive variables were location, time, sea surface temperature (SST), chlorophyll-a (Chla) and phytoplankton biomass at time of catch, and curved fork length. The best model fit for all fatty acid parameters included fish length and SST. The first oceanographic contour maps of EFA and FAT (FATscapes) were produced and demonstrated clear geographical gradients in the study region. Predicted changes in all fatty acid parameters reflected shifts in the size structure of dominant primary producers. Model projections show that the supply and availability of EFA are likely to be negatively affected by increases in SST, especially in temperate waters, where a 12% reduction in both total fatty acid content and EFA proportions is predicted. Such changes will have large implications for the availability of energy and associated health benefits to high-order consumers. The results convey new concerns about the impacts of projected climate change on fish-derived EFA in marine systems. PMID:26135308

  12. Homogenization Theory for the Prediction of Obstructed Solute Diffusivity in Macromolecular Solutions

    PubMed Central

    Donovan, Preston; Chehreghanianzabi, Yasaman; Rathinam, Muruhan; Zustiak, Silviya Petrova

    2016-01-01

    The study of diffusion in macromolecular solutions is important in many biomedical applications such as separations, drug delivery, and cell encapsulation, and key for many biological processes such as protein assembly and interstitial transport. Not surprisingly, multiple models for the a priori prediction of diffusion in macromolecular environments have been proposed. However, most models include parameters that are not readily measurable, are specific to the polymer-solute-solvent system, or are fitted and do not have a physical meaning. Here, for the first time, we develop a homogenization theory framework for the prediction of effective solute diffusivity in macromolecular environments based on physical parameters that are easily measurable and not specific to the macromolecule-solute-solvent system. Homogenization theory is useful for situations where knowledge of fine-scale parameters is used to predict bulk system behavior. As a first approximation, we focus on a model where the solute is subjected to obstructed diffusion via stationary spherical obstacles. We find that the homogenization theory results agree well with computationally more expensive Monte Carlo simulations. Moreover, the homogenization theory agrees with effective diffusivities of a solute in dilute and semi-dilute polymer solutions measured using fluorescence correlation spectroscopy. Lastly, we provide a mathematical formula for the effective diffusivity in terms of a non-dimensional and easily measurable geometric system parameter. PMID:26731550
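    For orientation, the obstructed-diffusion setting above can be contrasted with a classical closed-form estimate. The sketch below uses the Maxwell dilute-sphere result for impermeable spherical obstacles as a simple point of comparison; it is not the homogenization formula derived in the paper.

```python
def maxwell_obstructed(phi):
    """Classical Maxwell dilute-limit estimate of the relative effective
    diffusivity D_eff/D0 of a solute among impermeable spherical
    obstacles occupying volume fraction phi. Valid only for small phi;
    shown as a reference curve, not the paper's result."""
    if not 0.0 <= phi < 1.0:
        raise ValueError("phi must be a volume fraction in [0, 1)")
    return 2.0 * (1.0 - phi) / (2.0 + phi)
```

    As expected, the estimate equals 1 with no obstacles and decreases monotonically as the obstacle fraction grows.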

  13. A simple physiologically based pharmacokinetic model evaluating the effect of anti-nicotine antibodies on nicotine disposition in the brains of rats and humans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saylor, Kyle, E-mail: saylor@vt.edu; Zhang, Chenmi

    Physiologically based pharmacokinetic (PBPK) modeling was applied to investigate the effects of anti-nicotine antibodies on nicotine disposition in the brains of rats and humans. Successful construction of both rat and human models was achieved by fitting model outputs to published nicotine concentration time course data in the blood and in the brain. Key parameters presumed to have the most effect on the ability of these antibodies to prevent nicotine from entering the brain were selected for investigation using the human model. These parameters, which included antibody affinity for nicotine, antibody cross-reactivity with cotinine, and antibody concentration, were broken down into different, clinically-derived in silico treatment levels and fed into the human PBPK model. Model predictions suggested that all three parameters, in addition to smoking status, have a sizable impact on anti-nicotine antibodies' ability to prevent nicotine from entering the brain, and that the antibodies elicited by current human vaccines do not have sufficient binding characteristics to reduce brain nicotine concentrations. If the antibody binding characteristics achieved in animal studies can similarly be achieved in human studies, however, nicotine vaccine efficacy in terms of brain nicotine concentration reduction is predicted to meet threshold values for alleviating nicotine dependence. Highlights: • Modelling of nicotine disposition in the presence of anti-nicotine antibodies. • Key vaccine efficacy factors are evaluated in silico in rats and in humans. • Model predicts insufficient antibody binding in past human nicotine vaccines. • Improving immunogenicity and antibody specificity may lead to vaccine success.

  14. Development of Design Rules for Reliable Antisense RNA Behavior in E. coli.

    PubMed

    Hoynes-O'Connor, Allison; Moon, Tae Seok

    2016-12-16

    A key driver of synthetic biology is the development of designable genetic parts with predictable behaviors that can be quickly implemented in complex genetic systems. However, the intrinsic complexity of gene regulation can make the rational design of genetic parts challenging. This challenge is apparent in the design of antisense RNA (asRNA) regulators. Though asRNAs are well-known regulators, the literature governing their design is conflicting and leaves the synthetic biology community without clear asRNA design rules. The goal of this study is to perform a comprehensive experimental characterization and statistical analysis of 121 unique asRNA regulators in order to resolve the conflicts that currently exist in the literature. asRNAs usually consist of two regions, the Hfq binding site and the target binding region (TBR). First, the behaviors of several high-performing Hfq binding sites were compared, in terms of their ability to improve repression efficiencies and their orthogonality. Next, a large-scale analysis of TBR design parameters identified asRNA length, the thermodynamics of asRNA-mRNA complex formation, and the percent of target mismatch as key parameters for TBR design. These parameters were used to develop simple asRNA design rules. Finally, these design rules were applied to construct both a simple and a complex genetic circuit containing different asRNAs, and predictable behavior was observed in both circuits. The results presented in this study will drive synthetic biology forward by providing useful design guidelines for the construction of asRNA regulators with predictable behaviors.

  15. Evaluating the Controls on Magma Ascent Rates Through Numerical Modelling

    NASA Astrophysics Data System (ADS)

    Thomas, M. E.; Neuberg, J. W.

    2015-12-01

    The estimation of the magma ascent rate is a key factor in predicting styles of volcanic activity, and relies on understanding how strongly the ascent rate is controlled by different magmatic parameters. The ability to link potential changes in such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters, such as the magmatic water content, temperature or bulk magma composition, on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance for changes in ascent rate. The results indicate that potential changes to conduit geometry and excess pressure in the magma chamber are amongst the dominant variables that affect the ascent rate, but the single most important parameter is the volatile content (assumed here to be water only). Varying this parameter across the range of reported values changes the calculated ascent velocities by up to 800%, triggering fluctuations in ascent rates that span the potential threshold between effusive and explosive eruptions.

  16. The treatment of uncertainties in reactive pollution dispersion models at urban scales.

    PubMed

    Tomlin, A S; Ziehn, T; Goodman, P; Tate, J E; Dixon, N S

    2016-07-18

    The ability to predict NO2 concentrations ([NO2]) within urban street networks is important for the evaluation of strategies to reduce exposure to NO2. However, models aiming to make such predictions involve the coupling of several complex processes: traffic emissions under different levels of congestion; dispersion via turbulent mixing; chemical processes of relevance at the street-scale. Parameterisations of these processes are challenging to quantify with precision. Predictions are therefore subject to uncertainties which should be taken into account when using models within decision making. This paper presents an analysis of mean [NO2] predictions from such a complex modelling system applied to a street canyon within the city of York, UK including the treatment of model uncertainties and their causes. The model system consists of a micro-scale traffic simulation and emissions model, and a Reynolds averaged turbulent flow model coupled to a reactive Lagrangian particle dispersion model. The analysis focuses on the sensitivity of predicted in-street increments of [NO2] at different locations in the street to uncertainties in the model inputs. These include physical characteristics such as background wind direction, temperature and background ozone concentrations; traffic parameters such as overall demand and primary NO2 fraction; as well as model parameterisations such as roughness lengths, turbulent time- and length-scales and chemical reaction rate coefficients. Predicted [NO2] is shown to be relatively robust with respect to model parameterisations, although there are significant sensitivities to the activation energy for the reaction NO + O3 as well as the canyon wall roughness length. Under off-peak traffic conditions, demand is the key traffic parameter. Under peak conditions where the network saturates, road-side [NO2] is relatively insensitive to changes in demand and more sensitive to the primary NO2 fraction. 
The most important physical parameter was found to be the background wind direction. The study highlights the key parameters required for reliable [NO2] estimations suggesting that accurate reference measurements for wind direction should be a critical part of air quality assessments for in-street locations. It also highlights the importance of street scale chemical processes in forming road-side [NO2], particularly for regions of high NOx emissions such as close to traffic queues.
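    The parameter screening described above can be illustrated with a generic one-at-a-time sensitivity sketch. The model and parameter names below are placeholders, not the coupled traffic-dispersion-chemistry system of the paper; the function merely shows the normalised "fractional output change per fractional input change" idea.

```python
def oat_sensitivity(model, base, rel_step=0.1):
    """One-at-a-time normalised sensitivity of model(base) to each
    parameter: (relative output change) / (relative input change).
    A generic screening sketch, not the paper's exact method."""
    y0 = model(base)
    sens = {}
    for name in base:
        perturbed = dict(base)
        perturbed[name] = base[name] * (1.0 + rel_step)  # bump one input
        sens[name] = ((model(perturbed) - y0) / y0) / rel_step
    return sens
```

    Applied to a toy model y = a·b², the routine recovers the expected first-order sensitivities (1 for a, about 2 for b), which is the kind of ranking used to identify wind direction and primary NO2 fraction as dominant inputs.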

  17. Satellite irrigation management support with the terrestrial observation and prediction system: A framework for integration of satellite & surface observations to support improvements in agricultural water resource management

    USDA-ARS?s Scientific Manuscript database

    In California and other regions vulnerable to water shortages, satellite-derived estimates of key hydrologic parameters can support agricultural producers and water managers in maximizing the benefits of available water supplies. The Satellite Irrigation Management Support (SIMS) project combines N...

  18. Technical Note: Field-observed angles of repose for stored grain in the United States

    USDA-ARS?s Scientific Manuscript database

    Bulk grain angle of repose (AoR) is a key parameter for inventorying grain, predicting flow characteristics, and designing bins and grain handling systems. The AoR is defined for two cases, piling (dynamic) or emptying (static), and usually varies with grain type. The objective of this study was to ...

  19. An empirical-statistical model for laser cladding of Ti-6Al-4V powder on Ti-6Al-4V substrate

    NASA Astrophysics Data System (ADS)

    Nabhani, Mohammad; Razavi, Reza Shoja; Barekat, Masoud

    2018-03-01

    In this article, Ti-6Al-4V powder alloy was directly deposited on a Ti-6Al-4V substrate using the laser cladding process. In this process, key parameters such as laser power (P), laser scanning rate (V) and powder feeding rate (F) play important roles. Using linear regression analysis, this paper develops empirical-statistical relations between these key parameters and the geometrical characteristics of single clad tracks (i.e. clad height, clad width, penetration depth, wetting angle, and dilution) as a combined parameter (P^α·V^β·F^γ). The results indicated that the clad width depended linearly on P·V^(-1/3), and the powder feeding rate had no effect on it. The dilution was controlled by the combined parameter V·F^(-1/2), and laser power was a dispensable factor. However, laser power was the dominant factor for the clad height, penetration depth, and wetting angle, which were proportional to P·V^(-1)·F^(1/4), P·V·F^(-1/8), and P^(3/4)·V^(-1)·F^(-1/4), respectively. Based on the correlation coefficients (R > 0.9) and analysis of residuals, it was confirmed that these empirical-statistical relations were in good agreement with the measured values of single clad tracks. Finally, these relations led to the design of a processing map that can predict the geometrical characteristics of single clad tracks from the key parameters.
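    The reported combined-parameter relations can be sketched as a small geometry predictor. All proportionality constants k_* below are hypothetical placeholders, not the paper's fitted coefficients; only the exponents follow the relations quoted above.

```python
def clad_geometry(P, V, F, k_w=1.0, k_h=1.0, k_p=1.0, k_d=1.0, k_a=1.0):
    """Single-track clad geometry from laser power P, scan rate V and
    powder feed rate F, using the combined-parameter exponents reported
    above. The k_* prefactors are illustrative placeholders."""
    width = k_w * P * V ** (-1.0 / 3.0)                 # ~ P*V^(-1/3), no F dependence
    height = k_h * P * V ** -1.0 * F ** (1.0 / 4.0)     # ~ P*V^(-1)*F^(1/4)
    depth = k_p * P * V * F ** (-1.0 / 8.0)             # ~ P*V*F^(-1/8)
    dilution = k_d * V * F ** (-1.0 / 2.0)              # ~ V*F^(-1/2), P negligible
    angle = k_a * P ** (3.0 / 4.0) * V ** -1.0 * F ** (-1.0 / 4.0)  # wetting angle
    return {"width": width, "height": height, "depth": depth,
            "dilution": dilution, "wetting_angle": angle}
```

    With such a function, a processing map is just the predictor evaluated over a grid of (P, V, F) values; note that width is unchanged when only F varies, mirroring the reported independence.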

  20. Unifying mechanical and thermodynamic descriptions across the thioredoxin protein family.

    PubMed

    Mottonen, James M; Xu, Minli; Jacobs, Donald J; Livesay, Dennis R

    2009-05-15

    We compare various predicted mechanical and thermodynamic properties of nine oxidized thioredoxins (TRX) using a Distance Constraint Model (DCM). The DCM is based on a nonadditive free energy decomposition scheme, where entropic contributions are determined from rigidity and flexibility of structure based on distance constraints. We perform averages over an ensemble of constraint topologies to calculate several thermodynamic and mechanical response functions that together yield quantitative stability/flexibility relationships (QSFR). Applied to the TRX protein family, QSFR metrics display a rich variety of similarities and differences. In particular, backbone flexibility is well conserved across the family, whereas cooperativity correlation describing mechanical and thermodynamic couplings between the residue pairs exhibit distinctive features that readily standout. The diversity in predicted QSFR metrics that describe cooperativity correlation between pairs of residues is largely explained by a global flexibility order parameter describing the amount of intrinsic flexibility within the protein. A free energy landscape is calculated as a function of the flexibility order parameter, and key values are determined where the native-state, transition-state, and unfolded-state are located. Another key value identifies a mechanical transition where the global nature of the protein changes from flexible to rigid. The key values of the flexibility order parameter help characterize how mechanical and thermodynamic response is linked. Variation in QSFR metrics and key characteristics of global flexibility are related to the native state X-ray crystal structure primarily through the hydrogen bond network. 
Furthermore, comparison of three TRX redox pairs reveals differences in thermodynamic response (i.e., relative melting point) and mechanical properties (i.e., backbone flexibility and cooperativity correlation) that are consistent with experimental data on thermal stabilities and NMR dynamical profiles. The results taken together demonstrate that small-scale structural variations are amplified into discernible global differences by propagating mechanical couplings through the H-bond network.

  1. Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Chunhua; Wang, Jiang; Yi, Guosheng

    2017-03-01

    Estimation of ion channel parameters is crucial to the spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models. We therefore choose three parameters featuring adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm tends to fall into local optima and suffers from premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic mapping to adjust the inertia weights according to the fitness value, effectively improving the global convergence ability of the algorithm. The accurately predicted firing trajectories of the rebuilt model, using the estimated parameters, demonstrate that estimating only a few important ion channel parameters can establish the model well and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared with the improved PSO to verify that the proposed algorithm can avoid local optima and quickly converge to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
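    The inertia-weight idea described above, a concave decay mixed with a logistic chaotic map inside an otherwise standard PSO loop, can be sketched as follows. The exact mixing rule, coefficients and bounds are assumptions for illustration, not the paper's formulation.

```python
import random

def inertia_weight(t, t_max, z, w_max=0.9, w_min=0.4, r=4.0):
    """Concave decay of the inertia weight, perturbed by a logistic
    chaotic map z -> r*z*(1-z). Illustrative mixing rule."""
    z = r * z * (1.0 - z)                                   # chaotic term
    w = w_min + (w_max - w_min) * (1.0 - (t / t_max) ** 2)  # concave decay
    return 0.9 * w + 0.1 * (w_max - w_min) * z, z

def pso(f, dim, n=20, iters=100, lo=-5.0, hi=5.0, seed=1):
    """Minimise f over [lo, hi]^dim with a basic PSO using the chaotic
    inertia weight above; a sketch, not the paper's tuned algorithm."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    gi = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    z, vmax = 0.7, 0.5 * (hi - lo)
    for t in range(iters):
        w, z = inertia_weight(t, iters, z)
        for i in range(n):
            for d in range(dim):
                v = (w * vs[i][d]
                     + 2.0 * rng.random() * (pbest[i][d] - xs[i][d])
                     + 2.0 * rng.random() * (gbest[d] - xs[i][d]))
                vs[i][d] = max(-vmax, min(vmax, v))  # clamp velocity
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pval[i]:
                pval[i], pbest[i] = fi, xs[i][:]
                if fi < gval:
                    gval, gbest = fi, xs[i][:]
    return gbest, gval
```

    In a parameter-estimation setting, f would be the mismatch between the model's firing trajectory and the target trajectory as a function of the three adaptation parameters.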

  2. Bayes Analysis and Reliability Implications of Stress-Rupture Testing a Kevlar/Epoxy COPV using Temperature and Pressure Acceleration

    NASA Technical Reports Server (NTRS)

    Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.

    2009-01-01

    Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Sometimes lifetime testing is performed on an actual COPV in service in an effort to validate the reliability model that is the basis for certifying the continued flightworthiness of its sister vessels. Currently, testing of such a Kevlar 49®/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test, and the latter due to major differences between COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely the more optimistic stress ratio is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one 'nine', that is, reducing the probability of failure by an order of magnitude. However, testing one vessel does not reduce the uncertainty in the Weibull shape parameter for lifetime, since testing several would be necessary.
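    The core Bayesian argument, that each additional survived hour shifts belief toward the more optimistic stress ratio, can be illustrated with a two-hypothesis Weibull survival update. The scale lives, shape parameter and prior below are invented for illustration and are not the paper's values.

```python
import math

def posterior_optimistic(t_survived, eta_opt=1.0e6, eta_pess=1.0e4,
                         beta=1.2, prior_opt=0.5):
    """Posterior probability of the optimistic stress-ratio hypothesis
    after observing survival to time t_survived, where each hypothesis
    implies a Weibull lifetime with shape beta and scale eta. All
    numeric defaults are illustrative placeholders."""
    s_opt = math.exp(-((t_survived / eta_opt) ** beta))    # P(survive | optimistic)
    s_pess = math.exp(-((t_survived / eta_pess) ** beta))  # P(survive | pessimistic)
    num = prior_opt * s_opt
    return num / (num + (1.0 - prior_opt) * s_pess)
```

    The posterior starts at the prior and grows monotonically with survival time, which is exactly the "longer the vessel survives, the more likely the optimistic ratio" effect described above.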

  3. Understanding controls of hydrologic processes across two monolithological catchments using model-data integration

    NASA Astrophysics Data System (ADS)

    Xiao, D.; Shi, Y.; Li, L.

    2016-12-01

    Field measurements are important for understanding the fluxes of water, energy, sediment, and solute in the Critical Zone; however, they are expensive in time, money, and labor. This study aims to assess the model predictability of hydrological processes in a watershed using information from another, intensively measured watershed. We compare two watersheds of different lithology using national datasets, field measurements, and the physics-based model Flux-PIHM. We focus on two monolithological, forested watersheds under the same climate in the Shale Hills Susquehanna CZO in central Pennsylvania: the shale-based Shale Hills (SSH, 0.08 km²) and the sandstone-based Garner Run (GR, 1.34 km²). We first tested the transferability of calibration coefficients from SSH to GR. We found that without any calibration the model can successfully predict seasonal average soil moisture and discharge, which shows the advantage of a physics-based model; however, it cannot precisely capture some peaks or the summer runoff. The model reproduces the GR field data better after calibrating the soil hydrology parameters. In particular, the percentage of sand turns out to be a critical parameter in reproducing the data. With sandstone being the dominant lithology, GR has a much higher sand percentage than SSH (48.02% vs. 29.01%), leading to higher hydraulic conductivity, lower overall water storage capacity, and in general lower soil moisture. This is consistent with area-averaged soil moisture observations using the cosmic-ray soil moisture observing system (COSMOS) at the two sites. This work indicates that some parameters, including evapotranspiration parameters, are transferrable due to similar climatic and land cover conditions. However, the key parameters that control soil moisture, including the sand percentage, need to be recalibrated, reflecting the key role of soil hydrological properties.

  4. A Simplified Model of Moisture Transport in Hydrophilic Porous Media With Applications to Pharmaceutical Tablets.

    PubMed

    Klinzing, Gerard R; Zavaliangos, Antonios

    2016-08-01

    This work establishes a predictive model that explicitly recognizes microstructural parameters in the description of the overall mass uptake and local gradients of moisture into tablets. Model equations were formulated based on local tablet geometry to describe the transient uptake of moisture. An analytical solution to a simplified set of model equations was solved to predict the overall mass uptake and moisture gradients with the tablets. The analytical solution takes into account individual diffusion mechanisms in different scales of porosity and diffusion into the solid phase. The time constant of mass uptake was found to be a function of several key material properties, such as tablet relative density, pore tortuosity, and equilibrium moisture content of the material. The predictions of the model are in excellent agreement with experimental results for microcrystalline cellulose tablets without the need for parameter fitting. The model presented provides a new method to analyze the transient uptake of moisture into hydrophilic materials with the knowledge of only a few fundamental material and microstructural parameters. In addition, the model allows for quick and insightful predictions of moisture diffusion for a variety of practical applications including pharmaceutical tablets, porous polymer systems, or cementitious materials. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
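    The single-time-constant behaviour described above can be sketched as an exponential approach to equilibrium moisture content. The functional form assumed here for the time constant (growing with path length squared and tortuosity, shrinking with porosity-weighted diffusivity) is an illustrative scaling, not the paper's derivation.

```python
import math

def mass_uptake(t, m_eq, tau):
    """Overall moisture mass gained by time t for a single effective
    time constant tau; a minimal sketch of the transient-uptake
    solution, not the full microstructural model."""
    return m_eq * (1.0 - math.exp(-t / tau))

def time_constant(relative_density, tortuosity, d_eff, half_thickness):
    """Assumed scaling for the uptake time constant from tablet relative
    density, pore tortuosity, an effective pore diffusivity d_eff and
    the tablet half-thickness. Illustrative only."""
    porosity = 1.0 - relative_density
    return half_thickness ** 2 * tortuosity / (porosity * d_eff)
```

    Consistent with the abstract, a denser tablet (lower porosity) or a more tortuous pore network yields a longer time constant and hence slower moisture uptake.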

  5. Subject-specific finite element modelling of the human foot complex during walking: sensitivity analysis of material properties, boundary and loading conditions.

    PubMed

    Akrami, Mohammad; Qian, Zhihui; Zou, Zhemin; Howard, David; Nester, Chris J; Ren, Lei

    2018-04-01

    The objective of this study was to develop and validate a subject-specific framework for modelling the human foot. This was achieved by integrating medical image-based finite element modelling, individualised multi-body musculoskeletal modelling and 3D gait measurements. A 3D ankle-foot finite element model comprising all major foot structures was constructed based on MRI of one individual. A multi-body musculoskeletal model and 3D gait measurements for the same subject were used to define loading and boundary conditions. Sensitivity analyses were used to investigate the effects of key modelling parameters on model predictions. Prediction errors of average and peak plantar pressures were below 10% in all ten plantar regions at five key gait events with only one exception (lateral heel, in early stance, error of 14.44%). The sensitivity analyses results suggest that predictions of peak plantar pressures are moderately sensitive to material properties, ground reaction forces and muscle forces, and significantly sensitive to foot orientation. The maximum region-specific percentage change ratios (peak stress percentage change over parameter percentage change) were 1.935-2.258 for ground reaction forces, 1.528-2.727 for plantar flexor muscles and 4.84-11.37 for foot orientations. This strongly suggests that loading and boundary conditions need to be very carefully defined based on personalised measurement data.

  6. A strategy to establish Food Safety Model Repositories.

    PubMed

    Plaza-Rodríguez, C; Thoens, C; Falenski, A; Weiser, A A; Appel, B; Kaesbohrer, A; Filter, M

    2015-07-02

    Transferring the knowledge of predictive microbiology into real world food manufacturing applications is still a major challenge for the whole food safety modelling community. To facilitate this process, a strategy for creating open, community driven and web-based predictive microbial model repositories is proposed. These collaborative model resources could significantly improve the transfer of knowledge from research into commercial and governmental applications and also increase efficiency, transparency and usability of predictive models. To demonstrate the feasibility, predictive models of Salmonella in beef previously published in the scientific literature were re-implemented using an open source software tool called PMM-Lab. The models were made publicly available in a Food Safety Model Repository within the OpenML for Predictive Modelling in Food community project. Three different approaches were used to create new models in the model repositories: (1) all information relevant for model re-implementation is available in a scientific publication, (2) model parameters can be imported from tabular parameter collections and (3) models have to be generated from experimental data or primary model parameters. All three approaches were demonstrated in the paper. The sample Food Safety Model Repository is available via: http://sourceforge.net/projects/microbialmodelingexchange/files/models and the PMM-Lab software can be downloaded from http://sourceforge.net/projects/pmmlab/. This work also illustrates that a standardized information exchange format for predictive microbial models, as the key component of this strategy, could be established by adoption of resources from the Systems Biology domain. Copyright © 2015. Published by Elsevier B.V.

  7. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE PAGES

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    2016-11-08

We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.
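The abstract does not give the model equations; a minimal sketch of a one-compartment uptake/efflux model with a phase switch, of the general form described (rate constants and switch time are illustrative, not the paper's):

```python
def simulate_accumulation(k_in, k_out, t_switch, k_in2, k_out2,
                          dt=0.01, t_end=10.0):
    """Euler integration of dC/dt = k_in - k_out*C, with import/export
    rates switching at t_switch to mimic a two-phase response. A generic
    sketch of the model class, not the paper's exact equations."""
    C, t, trace = 0.0, 0.0, []
    while t < t_end:
        ki, ko = (k_in, k_out) if t < t_switch else (k_in2, k_out2)
        C += dt * (ki - ko * C)
        t += dt
        trace.append((t, C))
    return trace

# phase 2 with stronger import and weaker efflux: accumulation jumps
trace = simulate_accumulation(1.0, 1.0, 2.0, 2.0, 0.5)
```

The long-run level approaches the active phase's k_in/k_out ratio, which is how import and export rates can be read off a measured plateau.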

  9. The Compositional Dependence of the Microstructure and Properties of CMSX-4 Superalloys

    NASA Astrophysics Data System (ADS)

    Yu, Hao; Xu, Wei; Van Der Zwaag, Sybrand

    2018-01-01

The degradation of creep resistance in Ni-based single-crystal superalloys is essentially ascribed to their microstructural evolution. Yet there is a lack of work that manages to predict (even qualitatively) the effect of alloying element concentrations on the rate of microstructural degradation. In this research, a computational model is presented to connect the rafting kinetics of Ni superalloys to their chemical composition by combining thermodynamic calculations with a modified microstructural model. To simulate the evolution of key microstructural parameters during creep, the isotropic coarsening rate and γ/γ' misfit stress are defined as composition-related parameters, and the effects of service temperature, time, and applied stress are taken into consideration. Two commercial superalloys are selected as reference alloys for the rafting kinetics, and the corresponding microstructural parameters are simulated and compared with experimental observations reported in the literature. The results confirm that our physical model, which requires no fitting parameters, manages to predict (semiquantitatively) the microstructural parameters for different service conditions, as well as the effects of alloying element concentrations. The model can contribute to the computational design of new Ni-based superalloys.
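A common starting point for the isotropic coarsening kinetics mentioned above is an LSW-type cube law; a sketch (in the paper the rate constant is composition- and temperature-dependent and computed from thermodynamics, which is not reproduced here):

```python
def rafted_size(r0, K, t):
    """LSW-type coarsening law r(t)^3 = r0^3 + K*t. K is the coarsening
    rate constant (here simply an input; the paper derives it from
    thermodynamic data as a function of alloy composition)."""
    return (r0 ** 3 + K * t) ** (1.0 / 3.0)
```

Comparing K across compositions is what lets such a model rank alloys by their rate of microstructural degradation.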

  10. Methodology for calculating shear stress in a meandering channel

    Treesearch

    Kyung-Seop Sin

    2010-01-01

    Shear stress in meandering channels is the key parameter to predict bank erosion and bend migration. A representative study reach of the Rio Grande River in central New Mexico has been modeled in the Hydraulics Laboratory at CSU. To determine the shear stress distribution in a meandering channel, the large scale (1:12) physical modeling study was conducted in the...

  11. Modeling temperature and moisture state effects on acoustic velocity in wood

    Treesearch

    Shan Gao; X. Wang; L. Wang; R.B. Bruce

    2011-01-01

    Previous research has proved the concept of acoustic wave propagation methods for evaluating wood quality of trees and logs during forest operations. As commercial acoustic equipment is implemented in field for various purposes, one has to consider the influence of operating temperature on acoustic velocity — a key parameter for wood property prediction. Our field...

  12. Quantifying the Model-Related Variability of Biomass Stock and Change Estimates in the Norwegian National Forest Inventory

    Treesearch

    Johannes Breidenbach; Clara Antón-Fernández; Hans Petersson; Ronald E. McRoberts; Rasmus Astrup

    2014-01-01

    National Forest Inventories (NFIs) provide estimates of forest parameters for national and regional scales. Many key variables of interest, such as biomass and timber volume, cannot be measured directly in the field. Instead, models are used to predict those variables from measurements of other field variables. Therefore, the uncertainty or variability of NFI estimates...

  13. Event-based stormwater management pond runoff temperature model

    NASA Astrophysics Data System (ADS)

    Sabouri, F.; Gharabaghi, B.; Sattar, A. M. A.; Thompson, A. M.

    2016-09-01

Stormwater management wet ponds are generally very shallow and hence can significantly increase (about 5.4 °C on average in this study) runoff temperatures in summer months, which adversely affects receiving urban stream ecosystems. This study uses gene expression programming (GEP) and artificial neural network (ANN) modeling techniques to advance our knowledge of the key factors governing thermal enrichment effects of stormwater ponds. The models developed in this study build upon and complement the ANN model developed by Sabouri et al. (2013) that predicts the catchment event mean runoff temperature entering the pond as a function of event climatic and catchment characteristic parameters. The key factors that control pond outlet runoff temperature include: (1) Upland Catchment Parameters (catchment drainage area and event mean runoff temperature inflow to the pond); (2) Climatic Parameters (rainfall depth, event mean air temperature, and pond initial water temperature); and (3) Pond Design Parameters (pond length-to-width ratio, pond surface area, pond average depth, and pond outlet depth). We used monitoring data for three summers from 2009 to 2011 in four stormwater management ponds, located in the cities of Guelph and Kitchener, Ontario, Canada, to develop the models. The prediction uncertainties of the developed ANN and GEP models for the case study sites are around 0.4% and 1.7% of the median value. Sensitivity analysis of the trained models indicates that the thermal enrichment of the pond outlet runoff is inversely proportional to pond length-to-width ratio and pond outlet depth, and directly proportional to event runoff volume, event mean pond inflow runoff temperature, and pond initial water temperature.
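As a structural sketch of the ANN component only, here is a one-hidden-layer forward pass over the nine listed inputs; the weights, biases and input scaling below are placeholders, not the study's trained model:

```python
import math

def ann_forward(x, W1, b1, w2, b2):
    """Forward pass of a one-hidden-layer tanh network mapping the nine
    catchment, climatic and pond-design inputs to outlet temperature."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bh)
              for row, bh in zip(W1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# nine normalized inputs: catchment, climatic and pond-design parameters
x = [0.3, 0.8, 0.5, 0.6, 0.7, 0.4, 0.5, 0.2, 0.3]
# placeholder weights -- NOT the study's trained values
W1 = [[0.1] * 9, [-0.2] * 9]
b1 = [0.0, 0.1]
w2 = [0.5, -0.4]
b2 = 20.0
T_out = ann_forward(x, W1, b1, w2, b2)  # predicted outlet temperature (deg C)
```

GEP differs in that it evolves an explicit symbolic expression rather than fixed-topology weights, which is why the two techniques give complementary views of the same sensitivities.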

  14. Developing a tuberculosis transmission model that accounts for changes in population health.

    PubMed

    Oxlade, Olivia; Schwartzman, Kevin; Benedetti, Andrea; Pai, Madhukar; Heymann, Jody; Menzies, Dick

    2011-01-01

Simulation models are useful in policy planning for tuberculosis (TB) control. To accurately assess interventions, important modifiers of the epidemic should be accounted for in evaluative models. Improvements in population health were associated with the declining TB epidemic in the pre-antibiotic era and may be relevant today. The objective of this study was to develop and validate a TB transmission model that accounted for changes in population health. We developed a deterministic TB transmission model, using reported data from the pre-antibiotic era in England. Change in adjusted life expectancy, used as a proxy for general health, was used to determine the rate of change of key epidemiological parameters. Predicted outcomes included risk of TB infection and TB mortality. The model was validated in the setting of the Netherlands and then applied to modern Peru. The model, developed in the setting of England, predicted TB trends in the Netherlands very accurately. The R² values for the correlation between observed and predicted data were 0.97 and 0.95 for TB infection and mortality, respectively. In Peru, the predicted decline in incidence prior to the expansion of "Directly Observed Treatment Short Course" (the DOTS strategy) was 3.7% per year (observed = 3.9% per year). After DOTS expansion, the predicted decline was very similar to the observed decline of 5.8% per year. We successfully developed and validated a TB model, which uses a proxy for population health to estimate changes in key epidemiological parameters. Population health contributed significantly to improvement in TB outcomes observed in Peru. Changing population health should be incorporated into evaluative models for global TB control.
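A hedged sketch of the core idea, scaling a baseline epidemiological parameter by the change in life expectancy used as a health proxy; the geometric functional form and decline rate below are illustrative, not the paper's calibrated relationship:

```python
def tb_parameter_series(value0, decline_per_le_year, le_series):
    """Scale a baseline TB parameter (e.g. mortality per 100,000) by the
    change in life expectancy, used as a proxy for population health.
    Illustrative form only, not the paper's calibration."""
    le0 = le_series[0]
    return [value0 * (1.0 - decline_per_le_year) ** (le - le0)
            for le in le_series]

# mortality as life expectancy rises from 40 to 50 years (made-up inputs)
series = tb_parameter_series(300.0, 0.05, [40.0, 45.0, 50.0])
```

Feeding such time-varying parameters into a standard compartmental transmission model is what lets the epidemic decline without any treatment effect, as observed in the pre-antibiotic era.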

  15. Structure-activity correlation in transfection promoted by pyridinium cationic lipids.

    PubMed

    Parvizi-Bahktar, P; Mendez-Campos, J; Raju, L; Khalique, N A; Jubeli, E; Larsen, H; Nicholson, D; Pungente, M D; Fyles, T M

    2016-03-21

The efficiency of the transfection of a plasmid DNA encoding β-galactosidase promoted by a series of pyridinium lipids in mixtures with other cationic lipids and neutral lipids was assessed in CHO-K1 cells. We identify key molecular parameters of the lipids in the mixture - clog P, lipid length, partial molar volume - to predict the morphology of the lipid-DNA lipoplex and then correlate these same parameters with transfection efficiency in an in vitro assay. We define a Transfection Index that provides a linear correlation with normalized transfection efficiency over a series of 90 different lipoplex compositions. We also explore the influence of the same set of molecular parameters on the cytotoxicity of the formulations.

  16. New best estimates for radionuclide solid-liquid distribution coefficients in soils. Part 2: naturally occurring radionuclides.

    PubMed

    Vandenhove, H; Gil-García, C; Rigol, A; Vidal, M

    2009-09-01

Predicting the transfer of radionuclides in the environment for normal release, accidental, disposal or remediation scenarios in order to assess exposure requires the availability of a large number of generic parameter values. One of the key parameters in environmental assessment is the solid-liquid distribution coefficient, K(d), which is used to predict radionuclide-soil interaction and subsequent radionuclide transport in the soil column. This article presents a review of K(d) values for uranium, radium, lead, polonium and thorium based on an extensive literature survey, including recent publications. The K(d) estimates are presented per soil group defined by texture and organic matter content (Sand, Loam, Clay and Organic), although the texture class did not seem to significantly affect K(d). Where relevant, other K(d) classification systems are proposed and correlations with soil parameters are highlighted. The K(d) values obtained in this compilation are compared with earlier review data.
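The K(d) definition and its standard use in transport prediction can be sketched as follows; the retardation-factor formula is the textbook advection-sorption result, and the numbers are illustrative rather than best-estimate values from the review:

```python
def kd(c_sorbed, c_solution):
    """Solid-liquid distribution coefficient K_d = C_sorbed / C_solution
    (typically L/kg, with C_sorbed in Bq/kg and C_solution in Bq/L)."""
    return c_sorbed / c_solution

def retardation_factor(kd_value, bulk_density, porosity):
    """Standard advection-sorption retardation of transport in the soil
    column: R = 1 + (rho_b / theta) * K_d."""
    return 1.0 + bulk_density / porosity * kd_value

# illustrative soil: rho_b = 1.5 kg/L, theta = 0.3, K_d = 50 L/kg
R = retardation_factor(kd(500.0, 10.0), 1.5, 0.3)
```

A large R means the radionuclide front moves far more slowly than the groundwater, which is why K(d) dominates exposure assessments.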

  17. Parametric study of a concentric coaxial glass tube solar air collector: a theoretical approach

    NASA Astrophysics Data System (ADS)

    Dabra, Vishal; Yadav, Avadhesh

    2018-06-01

The concentric coaxial glass tube solar air collector (CCGTSAC) is an innovative development in the field of solar collectors. This type of collector is specially designed to produce hot air. A mathematical model based on the energy conservation equations for small control volumes along the axial direction of the concentric coaxial glass tube (CCGT) is developed in this paper. It is applied to predict the effect of thirteen different parameters on the exit air temperature rise; absorber tube size, length of the CCGT, absorptivity of the transparent glazing, transmissivity of the transparent glazing, absorptivity of the absorber coating, inlet or ambient air temperature, mass flow rate, variation of the thermo-physical properties of air, wind speed, solar intensity and the vacuum present between the transparent glazing and the absorber tube were found to be significant parameters. Results of the model were analysed to predict the effect of key parameters on the thermal performance of a CCGTSAC for an exit air temperature rise of about 43.9-58.4 °C.
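A minimal sketch of the control-volume marching described, with all losses neglected (so it bounds the temperature rise from above); every dimension, flux and property value below is illustrative:

```python
import math

def exit_air_temperature(T_in, solar_flux, absorptance, transmittance,
                         tube_diameter, tube_length, m_dot, cp=1007.0, n=100):
    """March n control volumes along the tube axis; each segment adds its
    absorbed solar power to the air enthalpy (losses neglected, so this
    bounds the exit temperature rise from above)."""
    dA = math.pi * tube_diameter * tube_length / n  # absorber area per segment
    T = T_in
    for _ in range(n):
        q_abs = solar_flux * transmittance * absorptance * dA  # W per segment
        T += q_abs / (m_dot * cp)  # energy balance: m_dot * cp * dT = q_abs
    return T

# illustrative values: 800 W/m^2 sun, 5 cm tube, 2 m long, 0.01 kg/s air
T_exit = exit_air_temperature(300.0, 800.0, 0.9, 0.9, 0.05, 2.0, 0.01)
```

The full model adds radiative and convective loss terms per segment, which is where wind speed and the glazing vacuum enter.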

  19. Quantitative Studies of the Optical and UV Spectra of Galactic Early B Supergiants

    NASA Technical Reports Server (NTRS)

    Searle, S. C.; Prinja, R. K.; Massa, D.; Ryans, R.

    2008-01-01

We undertake an optical and ultraviolet spectroscopic analysis of a sample of 20 Galactic B0-B5 supergiants of luminosity classes Ia, Ib, Iab, and II. Fundamental stellar parameters are obtained from optical diagnostics and a critical comparison of the model predictions to observed UV spectral features is made. Methods. Fundamental parameters (e.g., T(sub eff), log L(sub *), mass-loss rates and CNO abundances) are derived for individual stars using CMFGEN, a nLTE, line-blanketed model atmosphere code. The impact of these newly derived parameters on the Galactic B supergiant T(sub eff) scale, mass discrepancy, and wind-momentum luminosity relation is examined. Results. The B supergiant temperature scale derived here shows a reduction of about 1000-3000 K compared to previous results using unblanketed codes. Mass-loss rate estimates are in good agreement with predicted theoretical values, and all of the 20 B0-B5 supergiants analysed show evidence of CNO processing. A mass discrepancy still exists between spectroscopic and evolutionary masses, with the largest discrepancy occurring at log (L/L(sub solar)) approx. 5.4. The observed WLR values calculated for B0-B0.7 supergiants are higher than predicted values, whereas the reverse is true for B1-B5 supergiants. This means that the discrepancy between observed and theoretical values cannot be resolved by adopting clumped (i.e., lower) mass-loss rates as for O stars. The most surprising result is that, although CMFGEN succeeds in reproducing the optical stellar spectrum accurately, it fails to precisely reproduce key UV diagnostics, such as the N V and C IV P Cygni profiles. This problem arises because the models are not ionised enough and fail to reproduce the full extent of the observed absorption trough of the P Cygni profiles. Conclusions. Newly derived fundamental parameters for early B supergiants are in good agreement with similar work in the field.
The most significant discovery, however, is the failure of CMFGEN to predict the correct ionisation fraction for some ions. Such findings add further support to revising the current standard model of massive star winds, as our understanding of these winds is incomplete without a precise knowledge of the ionisation structure and distribution of clumping in the wind. Key words. techniques: spectroscopic - stars: mass-loss - stars: supergiants - stars: abundances - stars: atmospheres - stars: fundamental parameters
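The wind-momentum luminosity relation (WLR) discussed above correlates the log of the modified wind momentum with log L; a sketch of the left-hand-side quantity only (standard unit conversions, no fitted relation, and the input values are merely illustrative of an early-B supergiant):

```python
import math

def log_wind_momentum(mdot_msun_per_yr, v_inf_km_s, radius_rsun):
    """log10 of the modified wind momentum
    D_mom = Mdot * v_inf * sqrt(R / R_sun), evaluated in cgs units --
    the quantity the WLR correlates with log L."""
    MSUN_G = 1.989e33                                    # solar mass in grams
    YR_S = 3.156e7                                       # year in seconds
    mdot_cgs = mdot_msun_per_yr * MSUN_G / YR_S          # g/s
    v_cgs = v_inf_km_s * 1.0e5                           # cm/s
    return math.log10(mdot_cgs * v_cgs * math.sqrt(radius_rsun))

# illustrative values: Mdot = 1e-6 Msun/yr, v_inf = 1000 km/s, R = 25 Rsun
logD = log_wind_momentum(1.0e-6, 1000.0, 25.0)
```

Comparing observed logD against the theoretically predicted value at the same log L is how the over- and under-prediction for the two B-subtype groups is diagnosed.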

  20. Ball Bearing Analysis with the ORBIS Tool

    NASA Technical Reports Server (NTRS)

    Halpin, Jacob D.

    2016-01-01

Ball bearing design is critical to the success of aerospace mechanisms. Key bearing performance parameters, such as load capability, stiffness, torque, and life, all depend on accurate determination of the internal load distribution. Hence, a good analytical bearing tool that provides both comprehensive capabilities and reliable results becomes a significant asset to the engineer. This paper introduces the ORBIS bearing tool. A discussion of key modeling assumptions and a technical overview is provided. Numerous validation studies and case studies using the ORBIS tool are presented. All results suggest that the ORBIS code's predictions of bearing internal load distributions, stiffness, deflection and stresses correlate closely with reference results.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vrugt, Jasper A; Robinson, Bruce A; Ter Braak, Cajo J F

In recent years, a strong debate has emerged in the hydrologic literature regarding what constitutes an appropriate framework for uncertainty estimation. Particularly, there is strong disagreement whether an uncertainty framework should have its roots within a proper statistical (Bayesian) context, or whether such a framework should be based on a different philosophy and implement informal measures and weaker inference to summarize parameter and predictive distributions. In this paper, we compare a formal Bayesian approach using Markov Chain Monte Carlo (MCMC) with generalized likelihood uncertainty estimation (GLUE) for assessing uncertainty in conceptual watershed modeling. Our formal Bayesian approach is implemented using the recently developed differential evolution adaptive metropolis (DREAM) MCMC scheme with a likelihood function that explicitly considers model structural, input and parameter uncertainty. Our results demonstrate that DREAM and GLUE can generate very similar estimates of total streamflow uncertainty. This suggests that formal and informal Bayesian approaches have more common ground than the hydrologic literature and ongoing debate might suggest. The main advantage of formal approaches is, however, that they attempt to disentangle the effect of forcing, parameter and model structural error on total predictive uncertainty. This is key to improving hydrologic theory and to better understand and predict the flow of water through catchments.
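DREAM extends the basic Metropolis acceptance rule with adaptive multi-chain differential-evolution proposals; a minimal single-chain random-walk sketch of that core rule, applied to a toy Gaussian likelihood rather than a hydrologic model:

```python
import math
import random

def metropolis(log_like, theta0, n_steps, step=0.3, seed=1):
    """Random-walk Metropolis sampling: the accept/reject rule at the
    heart of DREAM (which adds adaptive multi-chain proposals)."""
    rng = random.Random(seed)
    chain, theta, ll = [], theta0, log_like(theta0)
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        ll_prop = log_like(prop)
        if math.log(rng.random()) < ll_prop - ll:  # Metropolis acceptance
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

# toy 'model error' likelihood: Gaussian centred on true parameter 2.0
chain = metropolis(lambda th: -0.5 * ((th - 2.0) / 0.3) ** 2, 0.0, 5000)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

GLUE, by contrast, weights many random parameter sets by an informal likelihood measure instead of accepting or rejecting moves against a formal one.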

  2. Parametric Bayesian priors and better choice of negative examples improve protein function prediction.

    PubMed

    Youngs, Noah; Penfold-Brown, Duncan; Drew, Kevin; Shasha, Dennis; Bonneau, Richard

    2013-05-01

    Computational biologists have demonstrated the utility of using machine learning methods to predict protein function from an integration of multiple genome-wide data types. Yet, even the best performing function prediction algorithms rely on heuristics for important components of the algorithm, such as choosing negative examples (proteins without a given function) or determining key parameters. The improper choice of negative examples, in particular, can hamper the accuracy of protein function prediction. We present a novel approach for choosing negative examples, using a parameterizable Bayesian prior computed from all observed annotation data, which also generates priors used during function prediction. We incorporate this new method into the GeneMANIA function prediction algorithm and demonstrate improved accuracy of our algorithm over current top-performing function prediction methods on the yeast and mouse proteomes across all metrics tested. Code and Data are available at: http://bonneaulab.bio.nyu.edu/funcprop.html
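A simplified stand-in for the annotation-frequency-based prior (the published prior is parameterizable and richer than this; the GO identifiers and counts below are hypothetical):

```python
def negative_example_prior(annotation_counts, total_proteins):
    """Per-function prior probability that an unannotated protein is a
    true negative, estimated from observed annotation frequency. A
    simplified stand-in for the paper's parameterizable Bayesian prior."""
    return {fn: 1.0 - count / total_proteins
            for fn, count in annotation_counts.items()}

# hypothetical GO-term annotation counts in a 1000-protein proteome
priors = negative_example_prior({"GO:0008150": 300, "GO:0003674": 50}, 1000)
```

The intuition: the rarer a function, the safer it is to treat an unannotated protein as a negative example for it when training the classifier.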

  3. Model predictions of ocular injury from 1315-nm laser light

    NASA Astrophysics Data System (ADS)

    Polhamus, Garrett D.; Zuclich, Joseph A.; Cain, Clarence P.; Thomas, Robert J.; Foltz, Michael

    2003-06-01

    With the advent of future weapons systems that employ high energy lasers, the 1315 nm wavelength will present a new laser safety hazard to the armed forces. Experiments in non-human primates using this wavelength have demonstrated a range of ocular injuries, including corneal, lenticular and retinal lesions, as a function of pulse duration and spot size at the cornea. To improve our understanding of this phenomena, there is a need for a mathematical model that properly predicts these injuries and their dependence on appropriate exposure parameters. This paper describes the use of a finite difference model of laser thermal injury in the cornea and retina. The model was originally developed for use with shorter wavelength laser irradiation, and as such, requires estimation of several key parameters used in the computations. The predictions from the model are compared to the experimental data, and conclusions are drawn regarding the ability of the model to properly follow the published observations at this wavelength.
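Thermal injury models of this kind typically accumulate an Arrhenius damage integral over the computed temperature history; a sketch using commonly cited Henriques skin coefficients as placeholders (the study estimates cornea- and retina-specific parameters, which differ):

```python
import math

def damage_integral(temps_K, dt, A=3.1e98, Ea=6.28e5, R=8.314):
    """Arrhenius (Henriques-type) damage integral:
    Omega = sum_t A * exp(-Ea / (R * T(t))) * dt.
    Omega >= 1 is the conventional threshold for irreversible injury.
    A (1/s) and Ea (J/mol) are commonly cited skin coefficients, used
    here only as placeholders for the ocular values the study estimates."""
    return sum(A * math.exp(-Ea / (R * T)) * dt for T in temps_K)
```

The finite difference model supplies the temperature history T(t) at each tissue depth; the damage integral then converts it into a predicted lesion threshold.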

  4. Bridging the gap between theoretical ecology and real ecosystems: modeling invertebrate community composition in streams.

    PubMed

    Schuwirth, Nele; Reichert, Peter

    2013-02-01

For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict the effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.
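The metabolic-theory ingredient can be sketched as the usual mass and temperature scaling of biological rates (the normalization b0 below is arbitrary, not a calibrated constant):

```python
import math

def metabolic_rate(mass_g, temp_K, b0=1.0, E_eV=0.65, k_eV=8.617e-5):
    """Metabolic theory of ecology: rate = b0 * M^(3/4) * exp(-E/(k*T)),
    with activation energy E around 0.65 eV and k the Boltzmann constant
    in eV/K. b0 is an arbitrary normalization here."""
    return b0 * mass_g ** 0.75 * math.exp(-E_eV / (k_eV * temp_K))
```

In a model like the one described, such rate laws set taxon-specific growth and respiration as functions of body mass and stream temperature, with traits supplying the remaining taxon differences.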

  5. A semi-supervised learning approach for RNA secondary structure prediction.

    PubMed

    Yonemoto, Haruka; Asai, Kiyoshi; Hamada, Michiaki

    2015-08-01

    RNA secondary structure prediction is a key technology in RNA bioinformatics. Most algorithms for RNA secondary structure prediction use probabilistic models, in which the model parameters are trained with reliable RNA secondary structures. Because of the difficulty of determining RNA secondary structures by experimental procedures, such as NMR or X-ray crystal structural analyses, there are still many RNA sequences that could be useful for training whose secondary structures have not been experimentally determined. In this paper, we introduce a novel semi-supervised learning approach for training parameters in a probabilistic model of RNA secondary structures in which we employ not only RNA sequences with annotated secondary structures but also ones with unknown secondary structures. Our model is based on a hybrid of generative (stochastic context-free grammars) and discriminative models (conditional random fields) that has been successfully applied to natural language processing. Computational experiments indicate that the accuracy of secondary structure prediction is improved by incorporating RNA sequences with unknown secondary structures into training. To our knowledge, this is the first study of a semi-supervised learning approach for RNA secondary structure prediction. This technique will be useful when the number of reliable structures is limited. Copyright © 2015 Elsevier Ltd. All rights reserved.
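The paper's SCFG/CRF hybrid is beyond a short sketch, but the much simpler Nussinov dynamic program illustrates the kind of computation a secondary-structure predictor performs (maximizing base pairs rather than scoring with trained probabilistic parameters):

```python
def nussinov_pairs(seq, min_loop=3):
    """Maximum number of nested base pairs by the Nussinov dynamic
    program -- a far simpler scoring scheme than the SCFG/CRF hybrid
    described above, shown only as an illustration."""
    pair = {("A", "U"), ("U", "A"), ("G", "C"),
            ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    M = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):         # subsequence lengths
        for i in range(n - span):
            j = i + span
            best = M[i + 1][j]                  # case: i left unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in pair:    # case: i pairs with k
                    right = M[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + M[i + 1][k - 1] + right)
            M[i][j] = best
    return M[0][n - 1] if n else 0

hairpin = nussinov_pairs("GGAAACC")  # two nested G-C pairs around an AAA loop
```

Trained probabilistic models replace the +1 pair score with learned parameters, which is exactly what the semi-supervised procedure estimates from both annotated and unannotated sequences.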

  6. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and biodegrades in the long term. In industrialized countries, the number of such contaminated sites is so high that ranking them from most to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups in which we vary the physical and stochastic dependencies of the input parameters and simulated processes.
Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.

  7. Advanced multivariate data analysis to determine the root cause of trisulfide bond formation in a novel antibody–peptide fusion

    PubMed Central

    Goldrick, Stephen; Holmes, William; Bond, Nicholas J.; Lewis, Gareth; Kuiper, Marcel; Turner, Richard

    2017-01-01

Product quality heterogeneities, such as trisulfide bond (TSB) formation, can be influenced by multiple interacting process parameters. Identifying their root cause is a major challenge in biopharmaceutical production. To address this issue, this paper describes the novel application of advanced multivariate data analysis (MVDA) techniques to identify the process parameters influencing TSB formation in a novel recombinant antibody-peptide fusion expressed in mammalian cell culture. The screening dataset was generated with a high-throughput (HT) micro-bioreactor system (Ambr™ 15) using a design of experiments (DoE) approach. The complex dataset was first analyzed through the development of a multiple linear regression model focusing solely on the DoE inputs, which identified the temperature, pH and initial nutrient feed day as important process parameters influencing this quality attribute. To further scrutinize the dataset, a partial least squares model was subsequently built incorporating both on-line and off-line process parameters, which enabled accurate predictions of the TSB concentration at harvest. Process parameters identified by the models to promote and suppress TSB formation were implemented on five 7 L bioreactors, and the resultant TSB concentrations were comparable to the model predictions. This study demonstrates the ability of MVDA to enable predictions of the key performance drivers influencing TSB formation that remain valid upon scale-up. Biotechnol. Bioeng. 2017;114:2222-2234. © 2017 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:28500668

  8. Contrasting cue-density effects in causal and prediction judgments.

    PubMed

    Vadillo, Miguel A; Musca, Serban C; Blanco, Fernando; Matute, Helena

    2011-02-01

    Many theories of contingency learning assume (either explicitly or implicitly) that predicting whether an outcome will occur should be easier than making a causal judgment. Previous research suggests that outcome predictions would depart from normative standards less often than causal judgments, which is consistent with the idea that the latter are based on more numerous and complex processes. However, only indirect evidence exists for this view. The experiment presented here specifically addresses this issue by allowing for a fair comparison of causal judgments and outcome predictions, both collected at the same stage with identical rating scales. Cue density, a parameter known to affect judgments, is manipulated in a contingency learning paradigm. The results show that, if anything, the cue-density bias is stronger in outcome predictions than in causal judgments. These results contradict key assumptions of many influential theories of contingency learning.
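The normative standard against which such judgments are compared is typically ΔP; a sketch showing two conditions with equal ΔP but different cue density, the manipulation described (the cell counts are illustrative):

```python
def delta_p(a, b, c, d):
    """Normative contingency DeltaP = P(O|C) - P(O|~C) from a 2x2 table:
    a = cue & outcome, b = cue & no outcome,
    c = no cue & outcome, d = no cue & no outcome."""
    return a / (a + b) - c / (c + d)

# equal (zero) contingency under low and high cue density:
low_density = delta_p(10, 10, 10, 10)   # → 0.0
high_density = delta_p(30, 30, 10, 10)  # → 0.0
```

A judgment tracking ΔP should be identical in both conditions; any systematic difference is the cue-density bias whose size the study compares across causal and predictive questions.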

  9. The use of least squares methods in functional optimization of energy use prediction models

    NASA Astrophysics Data System (ADS)

    Bourisli, Raed I.; Al-Shammeri, Basma S.; AlAnzi, Adnan A.

    2012-06-01

The least squares method (LSM) is used to optimize the coefficients of a closed-form correlation that predicts the annual energy use of buildings based on key envelope design and thermal parameters. Specifically, annual energy use is related to a number of parameters, such as the overall heat transfer coefficients of the wall, roof and glazing, the glazing percentage, and the building surface area. The building used as a case study is a previously energy-audited mosque in a suburb of Kuwait City, Kuwait. Energy audit results are used to fine-tune the base case mosque model in the VisualDOE™ software. Subsequently, 1625 different cases of mosques with varying parameters were developed and simulated in order to provide the training data sets for the LSM optimizer. Coefficients of the proposed correlation are then optimized using multivariate least squares analysis. The objective is to minimize the difference between the correlation-predicted results and the VisualDOE-simulation results. The optimization yields coefficients that reduce the difference between the simulated and predicted results to about 0.81%. In terms of the effects of the various parameters, the newly-defined weighted surface area parameter was found to have the greatest effect on the normalized annual energy use. Insulating the roofs and walls also had a major effect on the building energy use. The proposed correlation and methodology can be used during preliminary design stages to inexpensively assess the impacts of various design variables on the expected energy use. On the other hand, the method can also be used by municipality officials and planners as a tool for recommending energy conservation measures and fine-tuning energy codes.
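A minimal least-squares fit via the normal equations, the core of the procedure described (the actual correlation form and the 1625 training cases are not reproduced; the toy design matrix and targets below are made up):

```python
def lstsq_fit(X, y):
    """Ordinary least squares via the normal equations (X^T X) beta = X^T y,
    solved by Gaussian elimination with partial pivoting."""
    m, n = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(m)) for i in range(n)]
    for i in range(n):                       # forward elimination
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [arj - f * aij for arj, aij in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        s = sum(A[i][j] * beta[j] for j in range(i + 1, n))
        beta[i] = (b[i] - s) / A[i][i]
    return beta

# toy fit: energy = beta0 + beta1 * wall_U (columns: intercept, wall U-value)
beta = lstsq_fit([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]], [2.0, 5.0, 8.0])
```

With the real training set, each row would hold the envelope parameters of one simulated mosque case and y its simulated annual energy use.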

  10. Predicting perturbation patterns from the topology of biological networks.

    PubMed

    Santolini, Marc; Barabási, Albert-László

    2018-06-20

    High-throughput technologies, offering an unprecedented wealth of quantitative data underlying the makeup of living systems, are changing biology. Notably, the systematic mapping of the relationships between biochemical entities has fueled the rapid development of network biology, offering a suitable framework to describe disease phenotypes and predict potential drug targets. However, our ability to develop accurate dynamical models remains limited, due in part to the limited knowledge of the kinetic parameters underlying these interactions. Here, we explore the degree to which we can make reasonably accurate predictions in the absence of the kinetic parameters. We find that simple dynamically agnostic models are sufficient to recover the strength and sign of the biochemical perturbation patterns observed in 87 biological models for which the underlying kinetics are known. Surprisingly, a simple distance-based model achieves 65% accuracy. We show that this predictive power is robust to topological and kinetic parameter perturbations, and we identify key network properties that can increase the recovery rate of the true perturbation patterns to up to 80%. We validate our approach using experimental data on the chemotactic pathway in bacteria, finding that a network model of perturbation spreading predicts with ∼80% accuracy the directionality of gene expression and phenotype changes in knock-out and overproduction experiments. These findings show that the steady advances in mapping out the topology of biochemical interaction networks open avenues for accurate perturbation-spread modeling, with direct implications for medicine and drug development.
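    A minimal sketch in the spirit of a distance-based perturbation model: perturbation strength at a node is taken to decay with network (hop) distance from the perturbed node. The toy network, node names, and the 1/(1+d) decay law are assumptions for illustration, not the authors' actual model.

```python
from collections import deque

def bfs_distances(graph, source):
    """Shortest-path (hop) distances from source in an unweighted network."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def predicted_impact(graph, perturbed):
    """Toy distance model: predicted impact decays as 1 / (1 + d)."""
    d = bfs_distances(graph, perturbed)
    return {node: 1.0 / (1.0 + dn) for node, dn in d.items()}

# Hypothetical signalling chain A - B - C - D (undirected adjacency lists)
net = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
impact = predicted_impact(net, "A")   # impact ranks B above C above D
```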

  11. INDIVIDUALIZED FETAL GROWTH ASSESSMENT: CRITICAL EVALUATION OF KEY CONCEPTS IN THE SPECIFICATION OF THIRD TRIMESTER GROWTH TRAJECTORIES

    PubMed Central

    Deter, Russell L.; Lee, Wesley; Yeo, Lami; Romero, Roberto

    2012-01-01

    Objectives To characterize 2nd and 3rd trimester fetal growth using Individualized Growth Assessment in a large cohort of fetuses with normal growth outcomes. Methods A prospective longitudinal study of 119 pregnancies was carried out from 18 weeks, MA, to delivery. Measurements of eleven fetal growth parameters were obtained from 3D scans at 3–4 week intervals. Regression analyses were used to determine Start Points [SP] and Rossavik model [P = c t^(k + s·t)] coefficients c, k and s for each parameter in each fetus. Second trimester growth model specification functions were re-established. These functions were used to generate individual growth models and determine predicted s and s-residual [s = pred s + s-resid] values. Actual measurements were compared to predicted growth trajectories obtained from the growth models and Percent Deviations [% Dev = [(actual − predicted)/predicted] × 100] calculated. Age-specific reference standards for this statistic were defined using 2-level statistical modeling for the nine directly measured parameters and estimated weight. Results Rossavik models fit the data for all parameters very well [R2: 99%], with SPs and k values similar to those found in a much smaller cohort. The c values were strongly related to the 2nd trimester slope [R2: 97%], as was predicted s to estimated c [R2: 95%]. The latter was negative for skeletal parameters and positive for soft tissue parameters. The s-residuals were unrelated to estimated c's [R2: 0%] and had mean values of zero. Rossavik models predicted 3rd trimester growth with systematic errors close to 0% and random errors [95% range] of 5.7–10.9% and 20.0–24.3% for one- and three-dimensional parameters, respectively. Moderate changes in age-specific variability were seen in the 3rd trimester.
    Conclusions IGA procedures for evaluating 2nd and 3rd trimester growth are now established based on a large cohort [4–6 fold larger than those used previously], permitting more reliable growth assessment with each fetus acting as its own control. New, more rigorously defined, age-specific standards for the evaluation of 3rd trimester growth deviations are now available for 10 anatomical parameters. Our results are also consistent with the predicted s and s-residual being representative of growth controllers operating through the insulin-like growth factor [IGF] axis. PMID:23962305
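    The Rossavik model and the Percent Deviation statistic defined above can be sketched directly; the coefficient values and the 30-week evaluation point below are invented for illustration.

```python
def rossavik(c, k, s, t):
    """Rossavik growth model: P(t) = c * t**(k + s*t)."""
    return c * t ** (k + s * t)

def percent_deviation(actual, predicted):
    """% Dev = [(actual - predicted) / predicted] * 100."""
    return (actual - predicted) / predicted * 100.0

# Hypothetical coefficients for a single growth parameter
c, k, s = 2.0, 1.5, -0.01
predicted = rossavik(c, k, s, 30.0)       # model-predicted value at 30 weeks
dev = percent_deviation(120.0, predicted)  # deviation of an actual measurement
```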

  12. The detectability of radio emission from exoplanets

    NASA Astrophysics Data System (ADS)

    Lynch, C. R.; Murphy, Tara; Lenc, E.; Kaplan, D. L.

    2018-05-01

    Like the magnetised planets in our Solar System, magnetised exoplanets should emit strongly at radio wavelengths. Radio emission directly traces planetary magnetic fields, and radio detections can place constraints on the physical parameters of these features. Large comparative studies of predicted radio emission characteristics for the known population of exoplanets help to identify which physical parameters could be key for producing bright, observable radio emission. Since the last comparative study, many thousands of exoplanets have been discovered. We report new estimates for the radio flux densities and maximum emission frequencies for the current population of known exoplanets orbiting pre-main-sequence and main-sequence stars with spectral types F-M. The set of exoplanets predicted to produce observable radio emission are Hot Jupiters orbiting young stars. The youth of these systems implies strong stellar magnetic fields and/or dense winds, which are key for producing bright, observable radio emission. We use a new all-sky circular polarisation Murchison Widefield Array survey to place sensitive limits on 200 MHz emission from exoplanets, with 3σ values ranging from 4.0 to 45.0 mJy. Using a targeted Giant Metrewave Radio Telescope observing campaign, we also report a 3σ upper limit of 4.5 mJy on the radio emission from V830 Tau b, the first Hot Jupiter discovered orbiting a pre-main-sequence star. Our limit is the first to be reported for the low-frequency radio emission from this source.
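    The maximum emission frequency for planetary cyclotron-maser emission scales with the polar magnetic field as ν_c[MHz] ≈ 2.8 × B[G], which is what makes the field strength a key parameter. A sketch (the Jupiter-like 14 G field and the ~10 MHz ionospheric cutoff are illustrative values, not the paper's computed estimates):

```python
def max_cyclotron_freq_mhz(b_gauss):
    """Electron cyclotron frequency: nu_c [MHz] ~= 2.8 * B [gauss]."""
    return 2.8 * b_gauss

def observable_from_ground(b_gauss, ionospheric_cutoff_mhz=10.0):
    """Ground-based telescopes only see emission above the ~10 MHz cutoff."""
    return max_cyclotron_freq_mhz(b_gauss) > ionospheric_cutoff_mhz

# A Jupiter-like ~14 G polar field gives a cutoff near 40 MHz,
# comfortably above the ionospheric cutoff.
f_jup = max_cyclotron_freq_mhz(14.0)
```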

  13. An analytic formula for the supercluster mass function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seunghwan; Lee, Jounghun, E-mail: slim@astro.umass.edu, E-mail: jounghun@astro.snu.ac.kr

    2014-03-01

    We present an analytic formula for the supercluster mass function, which is constructed by modifying the extended Zel'dovich model for the halo mass function. The formula has two characteristic parameters whose best-fit values are determined by fitting to the numerical results from N-body simulations for the standard ΛCDM cosmology. The parameters are found to be independent of redshift and robust against variation of the key cosmological parameters. Under the assumption that the same formula for the supercluster mass function is valid for non-standard cosmological models, we show that the relative abundance of rich superclusters should be a powerful indicator of any deviation of the real universe from the prediction of the standard ΛCDM model.

  14. Drop rebound after impact: the role of the receding contact angle.

    PubMed

    Antonini, C; Villa, F; Bernagozzi, I; Amirfazli, A; Marengo, M

    2013-12-31

    Data from the literature suggest that the rebound of a drop from a surface can be achieved when the wettability is low, i.e., when contact angles, measured at the triple line (solid-liquid-air), are high. However, no clear criterion exists to predict when a drop will rebound from a surface, nor is there consensus on which wetting parameter governs rebound (e.g., the "equilibrium" contact angle, θeq, the advancing and the receding contact angles, θA and θR, respectively, the contact angle hysteresis, Δθ, or any combination of these parameters). To clarify the conditions for drop rebound, we conducted experimental tests on different dry solid surfaces with variable wettability, from hydrophobic to superhydrophobic surfaces, with advancing contact angles 108° < θA < 169° and receding contact angles 89° < θR < 161°. It was found that the receding contact angle is the key wetting parameter that influences drop rebound, along with surface hydrophobicity: for the investigated impact conditions (drop diameter 2.4 < D0 < 2.6 mm, impact speed 0.8 < V < 4.1 m/s, Weber number 25 < We < 585), rebound was observed only on surfaces with receding contact angles higher than 100°. Also, the drop rebound time decreased with increasing receding contact angle. It was also shown that, in general, care must be taken when using statically defined wetting parameters (such as advancing and receding contact angles) to predict the dynamic behavior of a liquid on a solid surface, because the dynamics of the phenomenon may affect surface wetting close to the impact point (e.g., as a result of the transition from the Cassie-Baxter to Wenzel state in the case of so-called superhydrophobic surfaces) and thus affect the drop rebound.
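    The study's empirical criterion and the Weber number that characterizes the impacts can be sketched as follows; the water properties and impact conditions are chosen to fall inside the ranges quoted above.

```python
def weber_number(rho, v, d, sigma):
    """We = rho * V**2 * D / sigma: drop inertia vs. surface tension."""
    return rho * v ** 2 * d / sigma

def predicts_rebound(theta_receding_deg, threshold_deg=100.0):
    """Empirical criterion from the study: rebound only for theta_R > ~100 deg."""
    return theta_receding_deg > threshold_deg

# Water drop: rho = 998 kg/m^3, sigma = 0.0728 N/m, D = 2.5 mm, V = 2 m/s
we = weber_number(rho=998.0, v=2.0, d=2.5e-3, sigma=0.0728)
# we falls inside the investigated range 25 < We < 585
```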

  15. Uncertainty quantification and propagation of errors of the Lennard-Jones 12-6 parameters for n-alkanes

    PubMed Central

    Knotts, Thomas A.

    2017-01-01

    Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness in molecular simulation is the difficulty to quantify the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455
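    A toy Monte Carlo propagation in the spirit of the study: sample an uncertain Lennard-Jones well depth and push it through a property model. The reduced critical temperature Tc* ≈ 1.31 for the LJ fluid is a standard literature value, but the 98 ± 2 K well-depth distribution is an invented stand-in, not the paper's actual parameter posterior.

```python
import math
import random

def critical_temperature(eps_over_kb):
    """LJ fluid estimate: Tc ~= 1.31 * (epsilon / kB), with Tc* ~ 1.31."""
    return 1.31 * eps_over_kb

random.seed(42)
# Hypothetical uncertainty in a united-atom well depth: eps/kB ~ N(98 K, 2 K)
samples = [critical_temperature(random.gauss(98.0, 2.0)) for _ in range(20000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))
# std quantifies how parameter uncertainty propagates into the prediction
```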

  16. Composite multi-parameter ranking of real and virtual compounds for design of MC4R agonists: renaissance of the Free-Wilson methodology.

    PubMed

    Nilsson, Ingemar; Polla, Magnus O

    2012-10-01

    Drug design is a multi-parameter task present in the analysis of experimental data for synthesized compounds and in the prediction of new compounds with desired properties. This article describes the implementation of a binned scoring and composite ranking scheme for 11 experimental parameters that were identified as key drivers in the MC4R project. The composite ranking scheme was implemented in an AstraZeneca tool for analysis of project data, thereby providing an immediate re-ranking as new experimental data was added. The automated ranking also highlighted compounds overlooked by the project team. The successful implementation of a composite ranking on experimental data led to the development of an equivalent virtual score, which was based on Free-Wilson models of the parameters from the experimental ranking. The individual Free-Wilson models showed good to high predictive power with a correlation coefficient between 0.45 and 0.97 based on the external test set. The virtual ranking adds value to the selection of compounds for synthesis but error propagation must be controlled. The experimental ranking approach adds significant value, is parameter independent and can be tuned and applied to any drug discovery project.
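    A minimal sketch of a binned scoring and composite ranking scheme: each parameter value maps to a bin score, and compounds are ranked by the score sum. The parameters, cutoffs, and compounds are invented placeholders; the project itself used 11 experimental parameters.

```python
def binned_score(value, cutoffs):
    """Score 1..len(cutoffs)+1, with higher bins for more desirable values."""
    return 1 + sum(value >= c for c in cutoffs)

def composite_rank(compounds, cutoffs_by_param):
    """Sum per-parameter bin scores; rank compounds by composite score."""
    scored = {name: sum(binned_score(params[p], cutoffs_by_param[p])
                        for p in cutoffs_by_param)
              for name, params in compounds.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical parameters and cutoffs (e.g. potency, solubility)
cutoffs = {"potency": [5.0, 7.0], "solubility": [10.0, 100.0]}
cmpds = {"cmpd1": {"potency": 7.5, "solubility": 150.0},
         "cmpd2": {"potency": 6.0, "solubility": 50.0},
         "cmpd3": {"potency": 4.0, "solubility": 5.0}}
ranking = composite_rank(cmpds, cutoffs)   # cmpd1 ranks first
```

    Because scores are recomputed from the raw values, adding new experimental data re-ranks the set immediately, which is the behaviour the abstract describes.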

  17. Evaluation and integration of existing methods for computational prediction of allergens

    PubMed Central

    2013-01-01

    Background Allergy involves a series of complex reactions and factors that contribute to the development of the disease and the triggering of symptoms, including rhinitis, asthma, atopic eczema, skin sensitivity, and even acute, fatal anaphylactic shock. Prediction and evaluation of potential allergenicity are important for the safety evaluation of foods and other environmental factors. Although several computational approaches for assessing the potential allergenicity of proteins have been developed, their performance and relative merits and shortcomings have not been compared systematically. Results To evaluate and improve the existing methods for allergen prediction, we collected an up-to-date definitive dataset consisting of 989 known allergens and massive putative non-allergens. The three most widely used computational allergen prediction approaches, sequence-, motif- and SVM-based (Support Vector Machine) methods, were systematically compared using the defined parameters, and we found that the SVM-based method outperformed the other two with higher accuracy and specificity. The sequence-based method with the criteria defined by FAO/WHO (FAO: Food and Agriculture Organization of the United Nations; WHO: World Health Organization) has a high sensitivity of over 98%, but low specificity. The advantage of the motif-based method is the ability to visualize the key motif within the allergen. Notably, the performance of the FAO/WHO sequence-based method and the motif-based strategy could be improved by parameter optimization. To facilitate allergen prediction, we integrated these three methods in a web-based application, proAP, which provides global search of the known allergens and a powerful tool for allergen prediction. Flexible parameter setting and batch prediction were also implemented. proAP can be accessed at http://gmobl.sjtu.edu.cn/proAP/main.html.
    Conclusions This study comprehensively evaluated sequence-, motif- and SVM-based computational prediction approaches for allergens and optimized their parameters to obtain better performance. These findings may provide helpful guidance for researchers in allergen prediction. Furthermore, we integrated these methods into the web application proAP, greatly facilitating customizable allergen search and prediction. PMID:23514097

  18. Evaluation and integration of existing methods for computational prediction of allergens.

    PubMed

    Wang, Jing; Yu, Yabin; Zhao, Yunan; Zhang, Dabing; Li, Jing

    2013-01-01

    Allergy involves a series of complex reactions and factors that contribute to the development of the disease and the triggering of symptoms, including rhinitis, asthma, atopic eczema, skin sensitivity, and even acute, fatal anaphylactic shock. Prediction and evaluation of potential allergenicity are important for the safety evaluation of foods and other environmental factors. Although several computational approaches for assessing the potential allergenicity of proteins have been developed, their performance and relative merits and shortcomings have not been compared systematically. To evaluate and improve the existing methods for allergen prediction, we collected an up-to-date definitive dataset consisting of 989 known allergens and massive putative non-allergens. The three most widely used computational allergen prediction approaches, sequence-, motif- and SVM-based (Support Vector Machine) methods, were systematically compared using the defined parameters, and we found that the SVM-based method outperformed the other two with higher accuracy and specificity. The sequence-based method with the criteria defined by FAO/WHO (FAO: Food and Agriculture Organization of the United Nations; WHO: World Health Organization) has a high sensitivity of over 98%, but low specificity. The advantage of the motif-based method is the ability to visualize the key motif within the allergen. Notably, the performance of the FAO/WHO sequence-based method and the motif-based strategy could be improved by parameter optimization. To facilitate allergen prediction, we integrated these three methods in a web-based application, proAP, which provides global search of the known allergens and a powerful tool for allergen prediction. Flexible parameter setting and batch prediction were also implemented. proAP can be accessed at http://gmobl.sjtu.edu.cn/proAP/main.html.
    This study comprehensively evaluated sequence-, motif- and SVM-based computational prediction approaches for allergens and optimized their parameters to obtain better performance. These findings may provide helpful guidance for researchers in allergen prediction. Furthermore, we integrated these methods into the web application proAP, greatly facilitating customizable allergen search and prediction.

  19. Universal structural parameter to quantitatively predict metallic glass properties

    DOE PAGES

    Ding, Jun; Cheng, Yong-Qiang; Sheng, Howard; ...

    2016-12-12

    Quantitatively correlating the amorphous structure in metallic glasses (MGs) with their physical properties has been a long-sought goal. Here we introduce 'flexibility volume' as a universal indicator, to bridge the structural state of an MG with its properties, on both the atomic and macroscopic levels. The flexibility volume combines static atomic volume with dynamics information via atomic vibrations that probe local configurational space and the interaction between neighbouring atoms. We demonstrate that flexibility volume is a physically appropriate parameter that can quantitatively predict the shear modulus, which is at the heart of many key properties of MGs. Moreover, the new parameter correlates strongly with atomic packing topology, and also with the activation energy for thermally activated relaxation and the propensity for stress-driven shear transformations. These correlations are expected to be robust across a very wide range of MG compositions, processing conditions and length scales.

  20. Compression-recovery model of absorptive glass mat (AGM) separator guided by X-ray micro-computed tomography analysis

    NASA Astrophysics Data System (ADS)

    Kameswara Rao, P. V.; Rawal, Amit; Kumar, Vijay; Rajput, Krishn Gopal

    2017-10-01

    Absorptive glass mat (AGM) separators play a key role in enhancing the cycle life of the valve regulated lead acid (VRLA) batteries by maintaining the elastic characteristics under a defined level of compression force with the plates of the electrodes. Inevitably, there are inherent challenges to maintain the required level of compression characteristics of AGM separators during the charge and discharge of the battery. Herein, we report a three-dimensional (3D) analytical model for predicting the compression-recovery behavior of AGM separators by formulating a direct relationship with the constituent fiber and structural parameters. The analytical model of compression-recovery behavior of AGM separators has successfully included the fiber slippage criterion and internal friction losses. The presented work uses, for the first time, 3D data of fiber orientation from X-ray micro-computed tomography, for predicting the compression-recovery behavior of AGM separators. A comparison has been made between the theoretical and experimental results of compression-recovery behavior of AGM samples with defined fiber orientation characteristics. In general, the theory agreed reasonably well with the experimental results of AGM samples in both dry and wet states. Through theoretical modeling, fiber volume fraction was established as one of the key structural parameters that modulates the compression hysteresis of an AGM separator.

  1. Constraining 3-PG with a new δ13C submodel: a test using the δ13C of tree rings.

    PubMed

    Wei, Liang; Marshall, John D; Link, Timothy E; Kavanagh, Kathleen L; DU, Enhao; Pangle, Robert E; Gag, Peter J; Ubierna, Nerea

    2014-01-01

    A semi-mechanistic forest growth model, 3-PG (Physiological Principles Predicting Growth), was extended to calculate δ(13)C in tree rings. The δ(13)C estimates were based on the model's existing description of carbon assimilation and canopy conductance. The model was tested in two ~80-year-old natural stands of Abies grandis (grand fir) in northern Idaho. We used as many independent measurements as possible to parameterize the model. Measured parameters included quantum yield, specific leaf area, soil water content and litterfall rate. Predictions were compared with measurements of transpiration by sap flux, stem biomass, tree diameter growth, leaf area index and δ(13)C. Sensitivity analysis showed that the model's predictions of δ(13)C were sensitive to key parameters controlling carbon assimilation and canopy conductance, which would have allowed it to fail had the model been parameterized or programmed incorrectly. Instead, the simulated δ(13)C of tree rings was no different from measurements (P > 0.05). The δ(13)C submodel provides a convenient means of constraining parameter space and avoiding model artefacts. This δ(13)C test may be applied to any forest growth model that includes realistic simulations of carbon assimilation and transpiration. © 2013 John Wiley & Sons Ltd.
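    The abstract does not spell out the δ(13)C submodel, but a common way to compute tissue δ(13)C from the assimilation/conductance balance is the Farquhar discrimination model for C3 plants, sketched below under the assumption that a relation of this kind is used (the -8.0‰ air value and ci/ca = 0.7 are illustrative inputs).

```python
def discrimination(ci_over_ca, a=4.4, b=27.0):
    """Farquhar C3 discrimination (per mil): Delta = a + (b - a) * ci/ca,
    with a = fractionation by diffusion and b = by carboxylation."""
    return a + (b - a) * ci_over_ca

def delta13c_plant(delta13c_air, ci_over_ca):
    """Tissue composition: delta_plant ~= delta_air - Delta."""
    return delta13c_air - discrimination(ci_over_ca)

# ci/ca reflects the balance of carbon assimilation and stomatal
# conductance, the two quantities a model like 3-PG already tracks.
d13c = delta13c_plant(-8.0, 0.7)   # a typical C3 leaf value near -28 per mil
```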

  2. A Probabilistic Approach to Predict Thermal Fatigue Life for Ball Grid Array Solder Joints

    NASA Astrophysics Data System (ADS)

    Wei, Helin; Wang, Kuisheng

    2011-11-01

    Numerous studies of the reliability of solder joints have been performed. Most life prediction models are limited to a deterministic approach. However, manufacturing induces uncertainty in the geometry parameters of solder joints, and the environmental temperature varies widely due to end-user diversity, creating uncertainties in the reliability of solder joints. In this study, a methodology for accounting for variation in the lifetime prediction for lead-free solder joints of ball grid array packages (PBGA) is demonstrated. The key aspects of the solder joint parameters and the cyclic temperature range related to reliability are involved. Probabilistic solutions of the inelastic strain range and thermal fatigue life based on the Engelmaier model are developed to determine the probability of solder joint failure. The results indicate that the standard deviation increases significantly when more random variations are involved. Using the probabilistic method, the influence of each variable on the thermal fatigue life is quantified. This information can be used to optimize product design and process validation acceptance criteria. The probabilistic approach creates the opportunity to identify the root causes of failed samples from product fatigue tests and field returns. The method can be applied to better understand how variation affects parameters of interest in an electronic package design with area array interconnections.

  3. Risk Assessment of Bone Fracture During Space Exploration Missions to the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Myers, Jerry G.; Nelson, Emily S.; Licatta, Angelo; Griffin, Devon

    2007-01-01

    The possibility of a traumatic bone fracture in space is a concern due to the observed decrease in astronaut bone mineral density (BMD) during spaceflight and because of the physical demands of the mission. The Bone Fracture Risk Module (BFxRM) was developed to quantify the probability of fracture at the femoral neck and lumbar spine during space exploration missions. The BFxRM is scenario-based, providing predictions for specific activities or events during a particular space mission. The key elements of the BFxRM are the mission parameters, the biomechanical loading models, the bone loss and fracture models and the incidence rate of the activity or event. Uncertainties in the model parameters arise due to variations within the population and unknowns associated with the effects of the space environment. Consequently, parameter distributions were used in Monte Carlo simulations to obtain an estimate of fracture probability under real mission scenarios. The model predicts an increase in the probability of fracture as the mission length increases and fracture is more likely in the higher gravitational field of Mars than on the moon. The resulting probability predictions and sensitivity analyses of the BFxRM can be used as an engineering tool for mission operation and resource planning in order to mitigate the risk of bone fracture in space.
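    The core BFxRM computation, a Monte Carlo estimate of the probability that an applied load exceeds bone strength, can be sketched as below. All distribution parameters are invented placeholders, not the module's actual inputs.

```python
import random

def fracture_probability(n, load_mu, load_sd, strength_mu, strength_sd, seed=0):
    """Monte Carlo estimate of P(applied load > bone strength)."""
    rng = random.Random(seed)
    failures = sum(rng.gauss(load_mu, load_sd) > rng.gauss(strength_mu, strength_sd)
                   for _ in range(n))
    return failures / n

# Hypothetical fall-load and femoral-neck-strength distributions (newtons);
# lower strength (e.g. after BMD loss on a long mission) raises the risk.
p = fracture_probability(100_000, load_mu=5000, load_sd=600,
                         strength_mu=7000, strength_sd=800)
```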

  4. Risk Assessment of Bone Fracture During Space Exploration Missions to the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Myers, Jerry G.; Nelson, Emily S.; Griffin, Devon

    2008-01-01

    The possibility of a traumatic bone fracture in space is a concern due to the observed decrease in astronaut bone mineral density (BMD) during spaceflight and because of the physical demands of the mission. The Bone Fracture Risk Module (BFxRM) was developed to quantify the probability of fracture at the femoral neck and lumbar spine during space exploration missions. The BFxRM is scenario-based, providing predictions for specific activities or events during a particular space mission. The key elements of the BFxRM are the mission parameters, the biomechanical loading models, the bone loss and fracture models and the incidence rate of the activity or event. Uncertainties in the model parameters arise due to variations within the population and unknowns associated with the effects of the space environment. Consequently, parameter distributions were used in Monte Carlo simulations to obtain an estimate of fracture probability under real mission scenarios. The model predicts an increase in the probability of fracture as the mission length increases and fracture is more likely in the higher gravitational field of Mars than on the moon. The resulting probability predictions and sensitivity analyses of the BFxRM can be used as an engineering tool for mission operation and resource planning in order to mitigate the risk of bone fracture in space.

  5. State and Parameter Estimation for a Coupled Ocean--Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Ghil, M.; Kondrashov, D.; Sun, C.

    2006-12-01

    The El-Nino/Southern-Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean--atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean--atmosphere GCMs will be discussed.
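    Joint state-and-parameter EKF estimation works by augmenting the state vector with the unknown parameters. A toy scalar illustration of that augmented-state technique is sketched below; the driven system, noise levels, and the parameter a are all invented (the ENSO model itself is far richer).

```python
import math
import random

def mat2mul(A, B):
    """2x2 matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

# Toy driven system x_{k+1} = a * x_k + u_k, observed with noise.
# Augmented state z = [x, a]: the EKF estimates both jointly.
random.seed(0)
true_a, x_true = 0.8, 0.0
xh, ah = 0.0, 0.5                    # initial state / parameter guesses
P = [[1.0, 0.0], [0.0, 1.0]]         # augmented covariance
Q = [[1e-6, 0.0], [0.0, 1e-6]]       # process noise
R = 1e-4                             # measurement noise variance

for k in range(300):
    u = math.sin(k / 3.0)
    x_true = true_a * x_true + u
    y = x_true + random.gauss(0.0, 0.01)
    # predict: f(z) = [a*x + u, a], Jacobian F = [[a, x], [0, 1]]
    xp, ap = ah * xh + u, ah
    F = [[ah, xh], [0.0, 1.0]]
    P = mat2mul(mat2mul(F, P), transpose(F))
    P = [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]
    # update with H = [1, 0]
    S = P[0][0] + R
    K = [P[0][0] / S, P[1][0] / S]
    innov = y - xp
    xh, ah = xp + K[0] * innov, ap + K[1] * innov
    P = [[(1 - K[0]) * P[0][j] for j in range(2)],
         [P[1][j] - K[1] * P[0][j] for j in range(2)]]
# ah converges toward the true parameter 0.8
```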

  6. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

    2017-07-01

    Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance the prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with the optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by the hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, the customer demand prediction of an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
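    The PSR step is a time-delay (Takens) embedding: each scalar observation is expanded into a vector of lagged values. A minimal sketch with an invented demand series (the embedding dimension and delay would in practice be chosen from the data):

```python
def delay_embed(series, dim, tau):
    """Phase space reconstruction: map x_t to
    [x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}]."""
    n = len(series) - (dim - 1) * tau
    return [[series[t + j * tau] for j in range(dim)] for t in range(n)]

# Hypothetical monthly demand history
demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]
vectors = delay_embed(demand, dim=3, tau=2)
# each row is one reconstructed state; these become the LSSVM inputs
```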

  7. Model inversion via multi-fidelity Bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond.

    PubMed

    Perdikaris, Paris; Karniadakis, George Em

    2016-05-01

    We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space, and the efficient pursuit to identify global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration versus exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. © 2016 The Author(s).
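    A tiny Gaussian-process surrogate illustrating variance-driven adaptive sampling: the next evaluation point is the one with the largest predictive posterior variance. The RBF kernel, its length-scale, and the sample locations are arbitrary choices; the paper's framework additionally fuses multiple fidelities and balances exploitation against this pure-exploration rule.

```python
import math

def rbf(x1, x2, ls=0.3):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-0.5 * ((x1 - x2) / ls) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_var(X, xstar, noise=1e-6):
    """GP predictive variance: k(x*,x*) - k*^T (K + noise I)^{-1} k*."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(X)] for i, xi in enumerate(X)]
    kstar = [rbf(xi, xstar) for xi in X]
    alpha = solve(K, kstar)
    return rbf(xstar, xstar) - sum(k * a for k, a in zip(kstar, alpha))

# Sampled design points; pick the next point where uncertainty is largest
X = [0.0, 0.5, 1.0]
grid = [i / 100 for i in range(101)]
nxt = max(grid, key=lambda x: gp_posterior_var(X, x))
# nxt lands midway between existing samples (near 0.25 or 0.75)
```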

  8. Model inversion via multi-fidelity Bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond

    PubMed Central

    Perdikaris, Paris; Karniadakis, George Em

    2016-01-01

    We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space, and the efficient pursuit to identify global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration versus exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. PMID:27194481

  9. Advanced multivariate data analysis to determine the root cause of trisulfide bond formation in a novel antibody-peptide fusion.

    PubMed

    Goldrick, Stephen; Holmes, William; Bond, Nicholas J; Lewis, Gareth; Kuiper, Marcel; Turner, Richard; Farid, Suzanne S

    2017-10-01

Product quality heterogeneities, such as trisulfide bond (TSB) formation, can be influenced by multiple interacting process parameters, and identifying their root cause is a major challenge in biopharmaceutical production. To address this issue, this paper describes the novel application of advanced multivariate data analysis (MVDA) techniques to identify the process parameters influencing TSB formation in a novel recombinant antibody-peptide fusion expressed in mammalian cell culture. The screening dataset was generated with a high-throughput (HT) micro-bioreactor system (Ambr™ 15) using a design of experiments (DoE) approach. The complex dataset was first analyzed through the development of a multiple linear regression model focusing solely on the DoE inputs, which identified temperature, pH and initial nutrient feed day as important process parameters influencing this quality attribute. To further scrutinize the dataset, a partial least squares model was subsequently built incorporating both on-line and off-line process parameters, enabling accurate predictions of the TSB concentration at harvest. Process parameters identified by the models to promote and suppress TSB formation were implemented on five 7 L bioreactors, and the resultant TSB concentrations were comparable to the model predictions. This study demonstrates the ability of MVDA to enable predictions of the key performance drivers influencing TSB formation that remain valid upon scale-up. Biotechnol. Bioeng. 2017;114:2222-2234. © 2017 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc.
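As a hedged illustration of the first modeling step described above, the following fits a multiple linear regression to a hypothetical coded 2^3 factorial design in temperature, pH and feed day; the design and coefficients are invented for the sketch and are not the paper's data:

```python
import numpy as np

# Hypothetical 2^3 full-factorial screening design in coded units (-1/+1);
# columns: temperature, pH, initial nutrient feed day.
X = np.array([[t, p, f] for t in (-1, 1) for p in (-1, 1) for f in (-1, 1)],
             dtype=float)

# Synthetic response standing in for TSB concentration; the coefficients
# below are invented for illustration only.
true_beta = np.array([0.8, -0.5, 0.2])
y = 2.0 + X @ true_beta

# Multiple linear regression on the DoE inputs, with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
effects = dict(zip(["intercept", "temperature", "pH", "feed_day"], coef))
```

Because the factorial design is orthogonal, the fitted effects recover the underlying coefficients exactly in this noiseless sketch; the paper's PLS step additionally folds in on-line and off-line process measurements.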

  10. Molecular Dynamics Simulations and Kinetic Measurements to Estimate and Predict Protein-Ligand Residence Times.

    PubMed

    Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea

    2016-08-11

    Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.
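The kinetic quantity at stake is simple to state: for single-exponential unbinding, residence time is the reciprocal of k_off, and its maximum-likelihood estimate from observed unbinding times is their mean. A toy illustration with synthetic event times (the rate is illustrative, not the paper's glucokinase data):

```python
import numpy as np

rng = np.random.default_rng(0)

# For a single-exponential unbinding process, residence time tau = 1 / k_off,
# and the maximum-likelihood estimate of tau from observed unbinding times is
# simply their mean. The rate below is an invented illustrative value.
k_off_true = 0.05                                   # 1/ns
unbind_times = rng.exponential(1.0 / k_off_true, size=5000)

tau_hat = unbind_times.mean()                       # estimated residence time
k_off_hat = 1.0 / tau_hat                           # estimated off-rate
```

In scaled MD the observed unbinding times come from accelerated simulations and must be extrapolated back to the unscaled ensemble, a step deliberately omitted from this sketch.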

  11. Fermentation of Saccharomyces cerevisiae - Combining kinetic modeling and optimization techniques points out avenues to effective process design.

    PubMed

    Scheiblauer, Johannes; Scheiner, Stefan; Joksch, Martin; Kavsek, Barbara

    2018-09-14

    A combined experimental/theoretical approach is presented, for improving the predictability of Saccharomyces cerevisiae fermentations. In particular, a mathematical model was developed explicitly taking into account the main mechanisms of the fermentation process, allowing for continuous computation of key process variables, including the biomass concentration and the respiratory quotient (RQ). For model calibration and experimental validation, batch and fed-batch fermentations were carried out. Comparison of the model-predicted biomass concentrations and RQ developments with the corresponding experimentally recorded values shows a remarkably good agreement for both batch and fed-batch processes, confirming the adequacy of the model. Furthermore, sensitivity studies were performed, in order to identify model parameters whose variations have significant effects on the model predictions: our model responds with significant sensitivity to the variations of only six parameters. These studies provide a valuable basis for model reduction, as also demonstrated in this paper. Finally, optimization-based parametric studies demonstrate how our model can be utilized for improving the efficiency of Saccharomyces cerevisiae fermentations. Copyright © 2018 Elsevier Ltd. All rights reserved.
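A minimal sketch of the kind of mechanistic model involved, using textbook Monod kinetics with invented parameters (not the paper's calibrated model, which additionally tracks the respiratory quotient):

```python
import numpy as np

# Textbook Monod kinetics for batch growth, integrated with explicit Euler;
# all parameter values are illustrative assumptions.
mu_max, Ks, Y_xs = 0.4, 0.5, 0.5      # 1/h, g/L, g biomass per g substrate
dt, t_end = 0.01, 20.0                # h
X, S = 0.1, 10.0                      # initial biomass and substrate (g/L)

for _ in range(int(t_end / dt)):
    mu = mu_max * S / (Ks + S)        # specific growth rate
    growth = mu * X
    X += dt * growth
    S = max(S - dt * growth / Y_xs, 0.0)
# Mass balance: final biomass -> X0 + Y_xs * S0 = 5.1 g/L once S is exhausted.
```

Sensitivity studies of the kind described in the abstract amount to perturbing mu_max, Ks, or Y_xs and observing the change in the simulated trajectories.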

  12. Post-cracking characteristics of high performance fiber reinforced cementitious composites

    NASA Astrophysics Data System (ADS)

    Suwannakarn, Supat W.

The application of high performance fiber reinforced cement composites (HPFRCC) in structural systems depends primarily on the material's tensile response, which is a direct function of fiber and matrix characteristics, the bond between them, and the fiber content or volume fraction. The objective of this dissertation is to evaluate and model the post-cracking behavior of HPFRCC. In particular, it focused on the influential parameters controlling tensile behavior and the variability associated with them. The key parameters considered include: the stress and strain at first cracking, the stress and strain at maximum post-cracking, the shape of the stress-strain or stress-elongation response, the multiple cracking process, the shape of the resistance curve after crack localization, the energy associated with the multiple cracking process, and the stress versus crack opening response of a single crack. Both steel fibers and polymeric fibers, perceived to have the greatest potential for current commercial applications, are considered. The main variables covered include fiber type (Torex, Hooked, PVA, and Spectra) and fiber volume fraction (ranging from 0.75% to 2.0%). An extensive experimental program is carried out using direct tensile tests and stress-versus-crack-opening-displacement tests on notched tensile prisms. The key experimental results were analyzed and modeled using simple prediction equations which, combined with a composite mechanics approach, allowed for predicting schematic simplified stress-strain and stress-displacement response curves for use in structural modeling. The experimental data show that specimens reinforced with Torex fibers performed best, followed by Hooked and Spectra fibers, then PVA fibers. Significant variability in key parameters was observed throughout, suggesting that variability must be studied further.
The new information obtained can be used as input for material models for finite element analysis and can provide greater confidence in using the HPFRC composites in structural applications. It also provides a good foundation to integrate these composites in conventional structural analysis and design.

  13. Elucidating Inherent Uncertainties in Data Assimilation for Predictions Incorporating Non-stationary Processes - Focus on Predictive Phenology

    NASA Astrophysics Data System (ADS)

    Lowman, L.; Barros, A. P.

    2017-12-01

Data assimilation (DA) is the widely accepted procedure for estimating parameters within predictive models because of the adaptability and uncertainty quantification offered by Bayesian methods. DA applications in phenology modeling offer critical insights into how extreme weather or changes in climate impact the vegetation life cycle. Changes in leaf onset and senescence, root phenology, and intermittent leaf shedding imply large changes in the surface radiative, water, and carbon budgets at multiple scales. Models of leaf phenology require concurrent atmospheric and soil conditions to determine how biophysical plant properties respond to changes in temperature, light and water demand. Presently, climatological records for fraction of photosynthetically active radiation (FPAR) and leaf area index (LAI), the modelled states indicative of plant phenology, are not available. Further, DA models are typically trained on short periods of record (e.g. less than 10 years). Using limited records within a DA framework imposes non-stationarity on estimated parameters and the resulting predicted model states. This talk discusses how uncertainty introduced by the inherent non-stationarity of the modeled processes propagates through a land-surface hydrology model coupled to a predictive phenology model. How water demand is accounted for in the upscaling of DA model inputs, together with the choice of analysis period, serves as a key source of uncertainty in the FPAR and LAI predictions. Parameters estimated from different DA periods effectively calibrate a plant water-use strategy within the land-surface hydrology model. For example, when extreme droughts are included in the DA period, the plants are trained to uptake water, transpire, and assimilate carbon under favorable conditions and quickly shut down at the onset of water stress.

  14. Project for Solar-Terrestrial Environment Prediction (PSTEP): Towards Predicting Next Solar Cycle

    NASA Astrophysics Data System (ADS)

    Imada, S.; Iijima, H.; Hotta, H.; Shiota, D.; Kanou, O.; Fujiyama, M.; Kusano, K.

    2016-10-01

It is believed that the longer-term variations of solar activity can affect the Earth's climate. Therefore, predicting the next solar cycle is crucial for forecasting the solar-terrestrial environment, and building prediction schemes for the activity level of the next solar cycle is key for long-term space weather studies. Although prediction three years ahead is nearly achievable, prediction of the next solar cycle remains very limited so far. We are developing a five-year prediction scheme that combines the Surface Flux Transport (SFT) model with the most accurate measurements of solar magnetic fields, as a part of PSTEP (Project for Solar-Terrestrial Environment Prediction). We estimate the meridional flow, differential rotation, and turbulent diffusivity from recent modern observations (Hinode and the Solar Dynamics Observatory). These parameters are used in the SFT models to predict the polar magnetic field strength at solar minimum. In this presentation, we will explain the outline of our strategy to predict the next solar cycle. We also report the present status and future perspective of our project.

  15. RANS computations for identification of 1-D cavitation model parameters: application to full load cavitation vortex rope

    NASA Astrophysics Data System (ADS)

    Alligné, S.; Decaix, J.; Müller, A.; Nicolet, C.; Avellan, F.; Münch, C.

    2017-04-01

Due to the massive penetration of alternative renewable energies, hydropower is a key energy conversion technology for stabilizing the electrical power network by using hydraulic machines at off-design operating conditions. At full load, the axisymmetric cavitation vortex rope developing in Francis turbines acts as an internal source of energy, leading to an instability commonly referred to as self-excited surge. 1-D models are developed to predict this phenomenon and to define the range of safe operating points for a hydropower plant. These models require the calibration of several parameters. The present work aims at identifying these parameters by using CFD results as objective functions for an optimization process. A 2-D Venturi and a 3-D Francis turbine are considered.

  16. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    DOE PAGES

    Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...

    2015-02-05

Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y = η(θ) + ε, where ε accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(·), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(·). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
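The inference step can be illustrated with a toy version: random-walk Metropolis sampling of θ under y = η(θ) + ε, where a cheap analytic function stands in for the emulator (all functions and values below are illustrative assumptions, not the paper's nuclear-physics model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Cheap analytic stand-in for the physics model eta(theta); in the paper's
# setting this role is played by an emulator fit to an ensemble of runs.
eta = lambda theta: theta + 0.2 * theta ** 2

theta_true, sigma = 1.5, 0.1
y_obs = eta(theta_true) + rng.normal(0.0, sigma, size=50)

def log_post(theta):
    # Flat prior; Gaussian errors in y = eta(theta) + eps.
    return -0.5 * np.sum((y_obs - eta(theta)) ** 2) / sigma ** 2

# Random-walk Metropolis.
theta, lp, chain = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
posterior = np.array(chain[5000:])    # discard burn-in
```

Each MCMC step here costs microseconds because eta is analytic; that is precisely the saving an emulator buys when the real model takes hours per run.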

  17. Challenges of model transferability to data-scarce regions (Invited)

    NASA Astrophysics Data System (ADS)

    Samaniego, L. E.

    2013-12-01

Developing the ability to globally predict the movement of water on the land surface at spatial scales from 1 to 5 km constitutes one of the grand challenges in land surface modelling. Coping with this grand challenge implies that land surface models (LSM) should be able to make reliable predictions across locations and/or scales other than those used for parameter estimation. In addition, data scarcity and quality impose further difficulties in attaining reliable predictions of water and energy fluxes at the scales of interest. Current computational limitations also seriously restrict exhaustive investigation of the parameter space of LSM over large domains (e.g. greater than half a million square kilometers). Addressing these challenges requires holistic approaches that integrate the best techniques available for parameter estimation, field measurements and remotely sensed data at their native resolutions. An attempt to systematically address these issues is the multiscale parameter regionalisation (MPR) technique, which links high-resolution land surface characteristics with effective model parameters. This technique requires a number of pedo-transfer functions and far fewer global parameters (i.e. coefficients) to be inferred by calibration in gauged basins. The key advantage of this technique is the quasi-scale independence of the global parameters, which makes it possible to estimate them at coarser spatial resolutions and then transfer them to (ungauged) areas and scales of interest. In this study we show the ability of this technique to reproduce the observed water fluxes and states over a wide range of climate and land surface conditions, ranging from humid to semiarid and from sparsely to densely forested regions. Results of transferability of global model parameters in space (from humid to semi-arid basins) and across scales (from coarser to finer) clearly indicate the robustness of this technique. Simulations with coarse data sets (e.g. EOBS forcing 25x25 km2, FAO soil map 1:5000000) using parameters obtained with high-resolution information (REGNIE forcing 1x1 km2, BUEK soil map 1:1000000) in different climatic regions indicate the potential of MPR for prediction in data-scarce regions. In this presentation, we will also discuss how the transferability of global model parameters across scales and locations helps to identify deficiencies in model structure and regionalization functions.
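The MPR idea of a pedo-transfer function plus an upscaling operator, with only a few global coefficients calibrated, can be sketched as follows; the transfer-function form, the harmonic-mean upscaling choice, and the coefficients are all illustrative assumptions, not the actual MPR implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def effective_parameter(sand_fraction, a, b):
    # Pedo-transfer function evaluated on the fine grid, then harmonic-mean
    # upscaling to a single effective value at the model resolution. Only the
    # global coefficients a and b would be calibrated, which is what makes
    # them transferable across scales and locations.
    fine = a * np.exp(b * sand_fraction)
    return fine.size / np.sum(1.0 / fine)

sand_fine = rng.uniform(0.1, 0.9, size=(100, 100))   # e.g. 1x1 km2 cells
k_eff = effective_parameter(sand_fine, a=2.0, b=1.5)
```

Because the calibrated quantities are the coefficients of the transfer function rather than the gridded parameter field itself, refining or coarsening the soil map changes only the upscaling input, not the calibration.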

  18. Competition for Shared Resources in the Cellular Chassis: Impact on Synthetic Circuits

    DTIC Science & Technology

    2013-11-01

Protein overexpression leads to growth inhibition and decreased expression of native proteins. This effect is mostly due ... glyceraldehyde dehydrogenase from E. coli, and it is very abundant in the bacterial cytoplasm. The control MBP-dRFP does not display an effect on GFP ... To discover how key parameters control the extent of coupling, we constructed an analytical model that predicts how the concentration of one protein (GFP) ...

  19. Computer Simulations: A Tool to Predict Experimental Parameters with Cold Atoms

    DTIC Science & Technology

    2013-04-01

... specifically designed to work with cold atom systems and atom chips, and is already able to compute their key properties. We simulate our experimental ... also allows one to choose different physics and define the interdependencies between them. It is not specifically designed for cold atom systems or ...

  20. Quantifying Uncertainty in Inverse Models of Geologic Data from Shear Zones

    NASA Astrophysics Data System (ADS)

    Davis, J. R.; Titus, S.

    2016-12-01

We use Bayesian Markov chain Monte Carlo simulation to quantify uncertainty in inverse models of geologic data. Although this approach can be applied to many tectonic settings, field areas, and mathematical models, we focus on transpressional shear zones. The underlying forward model, either kinematic or dynamic, produces a velocity field, which predicts the dikes, foliation-lineations, crystallographic preferred orientation (CPO), shape preferred orientation (SPO), and other geologic data that should arise in the shear zone. These predictions are compared to data using modern methods of geometric statistics, including the Watson (for lines such as dike poles), isotropic matrix Fisher (for orientations such as foliation-lineations and CPO), and multivariate normal (for log-ellipsoids such as SPO) distributions. The result of the comparison is a likelihood, which is a key ingredient in the Bayesian approach. The other key ingredient is a prior distribution, which reflects the geologist's knowledge of the parameters before seeing the data. For some parameters, such as shear zone strike and dip, we identify realistic informative priors. For other parameters, where the geologist has no prior knowledge, we identify useful uninformative priors. We investigate the performance of this approach through numerical experiments on synthetic data sets. A fundamental issue is that many models of deformation exhibit asymptotic behavior (e.g., flow apophyses, fabric attractors) or periodic behavior (e.g., SPO when the clasts are rigid), which causes the likelihood to be too uniform. Based on our experiments, we offer rules of thumb for how many data, of which types, are needed to constrain deformation.

  1. System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.

    2011-01-01

Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time-history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
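The harmonic-analysis step reduces to a linear least-squares fit of in-phase and out-of-phase components at the known forcing frequency, which also yields the standard errors mentioned above. A sketch on synthetic data (amplitudes, frequency, and noise level are invented for illustration, not the wind-tunnel data):

```python
import numpy as np

omega = 2.0                          # known forcing frequency (rad/s), assumed
t = np.linspace(0.0, 10.0, 500)
rng = np.random.default_rng(2)
truth = 0.3 + 1.2 * np.sin(omega * t) - 0.7 * np.cos(omega * t)
noisy = truth + rng.normal(0.0, 0.05, t.size)

# Regressors: bias, in-phase, and out-of-phase components.
A = np.column_stack([np.ones_like(t), np.sin(omega * t), np.cos(omega * t)])
coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)

# Standard errors of the estimates from residual variance and (A^T A)^{-1}.
resid = noisy - A @ coef
s2 = resid @ resid / (t.size - A.shape[1])
se = np.sqrt(s2 * np.diag(np.linalg.inv(A.T @ A)))
```

Comparing such coefficient estimates and standard errors between wind-tunnel and CFD-generated responses is the model-validation idea of the paper.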

  2. Aspects of metallic low-temperature transport in Mott-insulator/band-insulator superlattices: Optical conductivity and thermoelectricity

    NASA Astrophysics Data System (ADS)

    Rüegg, Andreas; Pilgram, Sebastian; Sigrist, Manfred

    2008-06-01

    We investigate the low-temperature electrical and thermal transport properties in atomically precise metallic heterostructures involving strongly correlated electron systems. The model of the Mott-insulator/band-insulator superlattice was discussed in the framework of the slave-boson mean-field approximation and transport quantities were derived by use of the Boltzmann transport equation in the relaxation-time approximation. The results for the optical conductivity are in good agreement with recently published experimental data on (LaTiO3)N/(SrTiO3)M superlattices and allow us to estimate the values of key parameters of the model. Furthermore, predictions for the thermoelectric response were made and the dependence of the Seebeck coefficient on model parameters was studied in detail. The width of the Mott-insulating material was identified as the most relevant parameter, in particular, this parameter provides a way to optimize the thermoelectric power factor at low temperatures.

  3. A global resource allocation strategy governs growth transition kinetics of Escherichia coli

    PubMed Central

    Erickson, David W; Schink, Severin J.; Patsalo, Vadim; Williamson, James R.; Gerland, Ulrich; Hwa, Terence

    2018-01-01

A grand challenge of systems biology is to predict the kinetic responses of living systems to perturbations starting from the underlying molecular interactions. Changes in the nutrient environment have long been used to study regulation and adaptation phenomena in microorganisms [1-3] and they remain a topic of active investigation [4-11]. Although much is known about the molecular interactions that govern the regulation of key metabolic processes in response to applied perturbations [12-17], they are insufficiently quantified for predictive bottom-up modelling. Here we develop a top-down approach, expanding the recently established coarse-grained proteome allocation models [15,18-20] from steady-state growth into the kinetic regime. Using only qualitative knowledge of the underlying regulatory processes and imposing the condition of flux balance, we derive a quantitative model of bacterial growth transitions that is independent of inaccessible kinetic parameters. The resulting flux-controlled regulation model accurately predicts the time course of gene expression and biomass accumulation in response to carbon upshifts and downshifts (for example, diauxic shifts) without adjustable parameters. As predicted by the model and validated by quantitative proteomics, cells exhibit suboptimal recovery kinetics in response to nutrient shifts owing to a rigid strategy of protein synthesis allocation, which is not directed towards alleviating specific metabolic bottlenecks. Our approach does not rely on kinetic parameters, and therefore points to a theoretical framework for describing a broad range of such kinetic processes without detailed knowledge of the underlying biochemical reactions. PMID:29072300

  4. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef

Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.

  5. Predicting plant biomass accumulation from image-derived parameters

    PubMed Central

    Chen, Dijun; Shi, Rongli; Pape, Jean-Michel; Neumann, Kerstin; Graner, Andreas; Chen, Ming; Klukas, Christian

    2018-01-01

Abstract Background Image-based high-throughput phenotyping technologies have been rapidly developed in plant science recently, and they provide a great potential to gain more valuable information than traditional destructive methods. Predicting plant biomass is regarded as a key purpose for plant breeders and ecologists. However, it is a great challenge to find a predictive biomass model across experiments. Results In the present study, we constructed 4 predictive models to examine the quantitative relationship between image-based features and plant biomass accumulation. Our methodology has been applied to 3 consecutive barley (Hordeum vulgare) experiments with control and stress treatments. The results proved that plant biomass can be accurately predicted from image-based parameters using a random forest model. The high prediction accuracy based on this model will contribute to relieving the phenotyping bottleneck in biomass measurement in breeding applications. The prediction performance is still relatively high across experiments under similar conditions. The relative contribution of individual features for predicting biomass was further quantified, revealing new insights into the phenotypic determinants of the plant biomass outcome. Furthermore, methods could also be used to determine the most important image-based features related to plant biomass accumulation, which would be promising for subsequent genetic mapping to uncover the genetic basis of biomass. Conclusions We have developed quantitative models to accurately predict plant biomass accumulation from image data. We anticipate that the analysis results will be useful to advance our views of the phenotypic determinants of plant biomass outcome, and the statistical methods can be broadly used for other plant species. PMID:29346559
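A hedged sketch of the modeling approach: a random forest regressor trained on synthetic stand-ins for image-derived features. The feature names and the biomass relation are invented for the sketch, and scikit-learn is assumed available; this is not the paper's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-ins for image-derived features; names and the biomass
# relation below are illustrative assumptions.
n = 400
area = rng.uniform(1.0, 10.0, n)        # projected plant area
height = rng.uniform(5.0, 50.0, n)      # plant height
greenness = rng.uniform(0.0, 1.0, n)    # colour index
biomass = 0.8 * area * height ** 0.7 + 2.0 * greenness + rng.normal(0.0, 1.0, n)
X = np.column_stack([area, height, greenness])

X_tr, X_te, y_tr, y_te = train_test_split(X, biomass, test_size=0.25,
                                          random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)                 # held-out prediction accuracy
importances = model.feature_importances_     # per-feature contribution
```

The feature_importances_ vector plays the role of the paper's "relative contribution of individual features", identifying which image-derived traits drive the biomass prediction.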

  6. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    PubMed

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including Gaussian process regression with automatic relevance determination (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information.
However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.

  7. Models and observations of Arctic melt ponds

    NASA Astrophysics Data System (ADS)

    Golden, K. M.

    2016-12-01

During the Arctic melt season, the sea ice surface undergoes a striking transformation from vast expanses of snow-covered ice to complex mosaics of ice and melt ponds. Sea ice albedo, a key parameter in climate modeling, is largely determined by the complex evolution of melt pond configurations. In fact, ice-albedo feedback has played a significant role in the recent declines of the summer Arctic sea ice pack. However, understanding melt pond evolution remains a challenge to improving climate projections. It has been found that as the ponds grow and coalesce, the fractal dimension of their boundaries undergoes a transition from 1 to about 2, at a critical pond area of roughly 100 square meters. As the ponds evolve they take complex, self-similar shapes with boundaries resembling space-filling curves. I will outline how mathematical models of composite materials and statistical physics, such as percolation and Ising models, are being used to describe this evolution and predict key geometrical parameters that agree very closely with observations.
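The fractal-dimension transition can be quantified through the perimeter-area relation P ∝ A^(D/2), so D is twice the slope of log P against log A. A sketch on idealized circular ponds, which should recover D = 1 (coalesced ponds with convoluted boundaries trend toward D = 2):

```python
import numpy as np

# Perimeter-area scaling: P ~ A**(D/2), so the fractal dimension D is twice
# the slope of log(P) versus log(A) across a family of ponds. Circles are
# used here as an idealized stand-in for small, smooth ponds.
radii = np.linspace(1.0, 30.0, 20)
A = np.pi * radii ** 2          # pond areas
P = 2.0 * np.pi * radii         # pond perimeters
D = 2.0 * np.polyfit(np.log(A), np.log(P), 1)[0]
```

Applied to segmented pond imagery, the same regression performed in sliding area bins reveals the reported transition from D near 1 to D near 2 around the critical area.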

  8. Performance of a block detector PET scanner in imaging non-pure positron emitters—modelling and experimental validation with 124I

    NASA Astrophysics Data System (ADS)

    Robinson, S.; Julyan, P. J.; Hastings, D. L.; Zweit, J.

    2004-12-01

    The key performance measures of resolution, count rate, sensitivity and scatter fraction are predicted for a dedicated BGO block detector patient PET scanner (GE Advance) in 2D mode for imaging with the non-pure positron-emitting radionuclides 124I, 55Co, 61Cu, 62Cu, 64Cu and 76Br. Model calculations including parameters of the scanner, decay characteristics of the radionuclides and measured parameters in imaging the pure positron-emitter 18F are used to predict performance according to the National Electrical Manufacturers Association (NEMA) NU 2-1994 criteria. Predictions are tested with measurements made using 124I and show that, in comparison with 18F, resolution degrades by 1.2 mm radially and tangentially throughout the field-of-view (prediction: 1.2 mm), count-rate performance reduces considerably and in close accordance with calculations, sensitivity decreases to 23.4% of that with 18F (prediction: 22.9%) and measured scatter fraction increases from 10.0% to 14.5% (prediction: 14.7%). Model predictions are expected to be equally accurate for other radionuclides and may be extended to similar scanners. Although performance is worse with 124I than 18F, imaging is not precluded in 2D mode. The viability of 124I imaging and performance in a clinical context compared with 18F is illustrated with images of a patient with recurrent thyroid cancer acquired using both [124I]-sodium iodide and [18F]-2-fluoro-2-deoxyglucose.
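The sensitivity figure invites a back-of-the-envelope check: to first order, 2D sensitivity relative to 18F scales with the radionuclide's positron branching fraction. The sketch below is an illustration of that reasoning only; the branching fractions are approximate literature values supplied here, not parameters from the paper's model.

```python
# Approximate positron branching fractions (illustrative values).
BRANCHING = {"F-18": 0.97, "I-124": 0.23, "Cu-64": 0.17, "Br-76": 0.55}

def relative_sensitivity(nuclide, reference="F-18"):
    # First-order estimate: fewer positrons per decay means
    # proportionally fewer detectable annihilation pairs.
    return BRANCHING[nuclide] / BRANCHING[reference]

for n in ("I-124", "Cu-64", "Br-76"):
    print(n, round(100 * relative_sensitivity(n), 1))
```

For 124I this simple ratio lands near the paper's predicted 22.9% of the 18F sensitivity; the full model additionally accounts for scanner geometry and the radionuclides' decay schemes.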

  9. A Four-parameter Budyko Equation for Mean Annual Water Balance

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Wang, D.

    2016-12-01

    In this study, a four-parameter Budyko equation for long-term water balance at the watershed scale is derived based on the proportionality relationships of the two-stage partitioning of precipitation. The four-parameter Budyko equation provides a practical solution to balance model simplicity and representation of the dominant hydrologic processes. Under the four-parameter Budyko framework, the key hydrologic processes related to the lower bound of the Budyko curve are determined; that is, the lower bound corresponds to the situation in which surface runoff and the initial evaporation not competing with base flow generation are zero. The derived model is applied to 166 MOPEX watersheds in the United States, and the dominant controlling factors on each parameter are determined. Then, four statistical models are proposed to predict the four model parameters based on the dominant controlling factors, e.g., saturated hydraulic conductivity, fraction of sand, time period between two storms, watershed slope, and Normalized Difference Vegetation Index. This study shows a potential application of the four-parameter Budyko equation to constrain land-surface parameterizations in ungauged watersheds or general circulation models.
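The four-parameter form itself is not reproduced in the abstract, so as a stand-in the sketch below uses the classic one-parameter Fu curve, which illustrates the water- and energy-limit behaviour any Budyko-type equation must respect (the parameter value 2.6 is illustrative, not from this study):

```python
def fu_budyko(aridity, omega):
    # Fu's one-parameter Budyko curve:
    #   E/P = 1 + PET/P - (1 + (PET/P)^omega)^(1/omega)
    # where aridity = PET/P (potential evapotranspiration / precipitation).
    return 1 + aridity - (1 + aridity ** omega) ** (1.0 / omega)

# The evaporation ratio must stay between 0 and min(1, aridity):
# bounded by available water (P) and available energy (PET).
for phi in (0.5, 1.0, 2.0, 5.0):
    er = fu_budyko(phi, 2.6)
    assert 0.0 <= er <= min(1.0, phi)

print(round(fu_budyko(1.0, 2.6), 3))
```

A four-parameter generalization follows the same template, with the extra parameters controlling the partitioning at each of the two stages described in the abstract.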

  10. On the influences of key modelling constants of large eddy simulations for large-scale compartment fires predictions

    NASA Astrophysics Data System (ADS)

    Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy

    2017-09-01

    An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components, including subgrid-scale turbulence, combustion, soot and radiation models, which are fully coupled. It is designed to simulate the temporal and fluid dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m long test hall facility. Several turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23, were investigated. It was found that the temperature and flow field predictions were most accurate with turbulent Prandtl and Schmidt numbers both set to 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
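The constants being tuned enter the subgrid model in a simple way: the Smagorinsky constant sets the eddy viscosity, and the turbulent Prandtl/Schmidt numbers convert it into heat and species diffusivities. A minimal sketch of those two relations (the numerical inputs are illustrative, not values from the experiment):

```python
def smagorinsky_viscosity(cs, delta, strain_rate_mag):
    # Smagorinsky subgrid eddy viscosity: nu_t = (Cs * Delta)^2 * |S|,
    # with Cs the Smagorinsky constant, Delta the filter width and
    # |S| the resolved strain-rate magnitude.
    return (cs * delta) ** 2 * strain_rate_mag

def turbulent_diffusivity(nu_t, pr_t):
    # The subgrid heat (Prandtl) or species (Schmidt) diffusivity
    # follows by analogy: alpha_t = nu_t / Pr_t.
    return nu_t / pr_t

# Illustrative values: Cs = 0.2, filter width 0.1 m, |S| = 50 1/s.
nu_t = smagorinsky_viscosity(cs=0.2, delta=0.1, strain_rate_mag=50.0)
print(nu_t)
print(turbulent_diffusivity(nu_t, pr_t=0.3))
```

Lowering the Prandtl/Schmidt numbers toward 0.3, as found optimal here, raises the subgrid diffusivities for a given eddy viscosity, strengthening the modelled subgrid mixing of heat and species.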

  11. In silico screening of drug-membrane thermodynamics reveals linear relations between bulk partitioning and the potential of mean force

    NASA Astrophysics Data System (ADS)

    Menichetti, Roberto; Kanekal, Kiran H.; Kremer, Kurt; Bereau, Tristan

    2017-09-01

    The partitioning of small molecules in cell membranes—a key parameter for pharmaceutical applications—typically relies on experimentally available bulk partitioning coefficients. Computer simulations provide a structural resolution of the insertion thermodynamics via the potential of mean force but require significant sampling at the atomistic level. Here, we introduce high-throughput coarse-grained molecular dynamics simulations to screen thermodynamic properties. This application of physics-based models in a large-scale study of small molecules establishes linear relationships between partitioning coefficients and key features of the potential of mean force. This allows us to predict the structure of the insertion from bulk experimental measurements for more than 400 000 compounds. The potential of mean force hereby becomes an easily accessible quantity—already recognized for its high predictability of certain properties, e.g., passive permeation. Further, we demonstrate how coarse graining helps reduce the size of chemical space, enabling a hierarchical approach to screening small molecules.

  12. Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling.

    PubMed

    Albaek, Mads O; Gernaey, Krist V; Hansen, Morten S; Stocks, Stuart M

    2012-04-01

    Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression of cellulosic enzymes by the filamentous fungus Trichoderma reesei and found excellent agreement with experimental data. The most influential factor was demonstrated to be viscosity and its influence on mass transfer. Not surprisingly, the biological model is also shown to have high influence on the model prediction. At different rates of agitation and aeration as well as headspace pressure, we can predict the energy efficiency of oxygen transfer, a key process parameter for economical production of industrial enzymes. An inverse relationship between the productivity and energy efficiency of the process was found. This modeling approach can be used by manufacturers to evaluate the enzyme fermentation process for a range of different process conditions with regard to energy efficiency. Copyright © 2011 Wiley Periodicals, Inc.

  13. Use the predictive models to explore the key factors affecting phytoplankton succession in Lake Erhai, China.

    PubMed

    Zhu, Rong; Wang, Huan; Chen, Jun; Shen, Hong; Deng, Xuwei

    2018-01-01

    Increasing algae in Lake Erhai has resulted in frequent blooms that have not only led to degeneration of the water ecosystem but also seriously influenced the quality of the water supply and caused extensive damage to the local people, as the lake is a water resource for Dali City. Exploring the key factors affecting phytoplankton succession and developing predictive models with easily detectable parameters for phytoplankton have been proven to be practical ways to improve water quality. To this end, a systematic survey focused on phytoplankton succession was conducted over 2 years in Lake Erhai. The data from the first study year were used to develop predictive models, and the data from the second year were used for model verification. The seasonal succession of phytoplankton in Lake Erhai was obvious: the dominant groups were Cyanobacteria in the summer, Chlorophyta in the autumn and Bacillariophyta in the winter. The development and verification of the predictive models indicated that, compared to phytoplankton biomass, phytoplankton density is more effective for estimating phytoplankton variation in Lake Erhai. CCA (canonical correlation analysis) indicated that TN (total nitrogen), TP (total phosphorus), DO (dissolved oxygen), SD (Secchi depth), Cond (conductivity), T (water temperature), and ORP (oxidation reduction potential) had significant influences (p < 0.05) on the phytoplankton community. The CCA of the dominant species found that Microcystis was significantly influenced by T, while the dominant Chlorophyta, Psephonema aenigmaticum and Mougeotia, were significantly influenced by TN. All results indicated that TN and T were the two key factors driving phytoplankton succession in Lake Erhai.

  14. An Efficient Scheme for Crystal Structure Prediction Based on Structural Motifs

    DOE PAGES

    Zhu, Zizhong; Wu, Ping; Wu, Shunqing; ...

    2017-05-15

    An efficient scheme based on structural motifs is proposed for the crystal structure prediction of materials. The key advantage of the present method is twofold: first, the degrees of freedom of the system are greatly reduced, since each structural motif, regardless of its size, can always be described by a set of parameters (R, θ, φ) with five degrees of freedom; second, the motifs could always appear in the predicted structures when the energies of the structures are relatively low. Both features make the present scheme a very efficient method for predicting desired materials. The method has been applied to the case of LiFePO4, an important cathode material for lithium-ion batteries. Numerous new structures of LiFePO4 have been found, compared to those currently available, demonstrating the reliability of the present methodology and illustrating the promise of the concept of structural motifs.

  15. Predicting Defects Using Information Intelligence Process Models in the Software Technology Project

    PubMed Central

    Selvaraj, Manjula Gandhi; Jayabal, Devi Shree; Srinivasan, Thenmozhi; Balasubramanie, Palanisamy

    2015-01-01

    A key differentiator in a competitive marketplace is customer satisfaction. As per a Gartner 2012 report, only 75%–80% of IT projects are successful. Customer satisfaction should be considered as a part of business strategy. The associated project parameters should be proactively managed and the project outcome needs to be predicted by a technical manager. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. Focus should be on proactively managing and shifting left in the software life cycle engineering model: identify the problem upfront in the project cycle rather than waiting for lessons to be learned and taking reactive steps. This paper gives the practical applicability of using predictive models and illustrates the use of these models in a project to predict system testing defects, thus helping to reduce residual defects. PMID:26495427

  16. Predicting Defects Using Information Intelligence Process Models in the Software Technology Project.

    PubMed

    Selvaraj, Manjula Gandhi; Jayabal, Devi Shree; Srinivasan, Thenmozhi; Balasubramanie, Palanisamy

    2015-01-01

    A key differentiator in a competitive marketplace is customer satisfaction. As per a Gartner 2012 report, only 75%-80% of IT projects are successful. Customer satisfaction should be considered as a part of business strategy. The associated project parameters should be proactively managed and the project outcome needs to be predicted by a technical manager. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. Focus should be on proactively managing and shifting left in the software life cycle engineering model: identify the problem upfront in the project cycle rather than waiting for lessons to be learned and taking reactive steps. This paper gives the practical applicability of using predictive models and illustrates the use of these models in a project to predict system testing defects, thus helping to reduce residual defects.

  17. An Efficient Scheme for Crystal Structure Prediction Based on Structural Motifs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Zizhong; Wu, Ping; Wu, Shunqing

    An efficient scheme based on structural motifs is proposed for the crystal structure prediction of materials. The key advantage of the present method is twofold: first, the degrees of freedom of the system are greatly reduced, since each structural motif, regardless of its size, can always be described by a set of parameters (R, θ, φ) with five degrees of freedom; second, the motifs could always appear in the predicted structures when the energies of the structures are relatively low. Both features make the present scheme a very efficient method for predicting desired materials. The method has been applied to the case of LiFePO4, an important cathode material for lithium-ion batteries. Numerous new structures of LiFePO4 have been found, compared to those currently available, demonstrating the reliability of the present methodology and illustrating the promise of the concept of structural motifs.

  18. Prediction of interface residue based on the features of residue interaction network.

    PubMed

    Jiao, Xiong; Ranganathan, Shoba

    2017-11-07

    Protein-protein interaction plays a crucial role in cellular biological processes. Interface prediction can improve our understanding of the molecular mechanisms of the related processes and functions. In this work, we propose a classification method to recognize interface residues based on the features of a weighted residue interaction network. The random forest algorithm is used for the prediction, with 16 network parameters and the B-factor acting as the elements of the input feature vector. Compared with similar work, the method is feasible and effective. The relative importance of these features was also analyzed to identify the key features for the prediction, and the biological meaning of the important features is explained. The results of this work can be used in related work on structure-function relationship analysis via a residue interaction network model. Copyright © 2017 Elsevier Ltd. All rights reserved.
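The feature-ranking step can be sketched with scikit-learn's random forest importances. The data below are synthetic: the paper's 17 inputs (16 network parameters plus the B-factor) are stood in for by 4 toy features, of which only the first determines the label.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Synthetic stand-in for residue feature vectors: 4 features,
# only feature 0 carries the interface/non-interface signal.
X = rng.normal(size=(300, 4))
y = (X[:, 0] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = clf.feature_importances_
print(int(np.argmax(importances)))  # feature 0 dominates the ranking
```

On real residue networks the same `feature_importances_` vector is what lets one ask which of the 16 network parameters (centralities, degrees, etc.) or the B-factor carries the most predictive signal.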

  19. Surface elastic properties in silicon nanoparticles

    NASA Astrophysics Data System (ADS)

    Melis, Claudio; Giordano, Stefano; Colombo, Luciano

    2017-09-01

    The elastic behavior of the external surface of a solid body plays a key role in nanomechanical phenomena. While bulk elasticity enjoys the benefits of a robust theoretical understanding, many surface elasticity features remain unexplored: some of them are here addressed by blending together continuum elasticity and atomistic simulations. A suitable readdressing of the surface elasticity theory allows us to write the balance equations in arbitrary curvilinear coordinates and to investigate the dependence of the surface elastic parameters on the mean and Gaussian curvatures of the surface. In particular, we predict the radial strain induced by surface effects in spherical and cylindrical silicon nanoparticles and provide evidence that the surface parameters are nearly independent of curvatures and, therefore, of the surface conformation.

  20. Thermal inflation with a thermal waterfall scalar field coupled to a light spectator scalar field

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Konstantinos; Lyth, David H.; Rumsey, Arron

    2017-05-01

    A new model of thermal inflation is introduced, in which the mass of the thermal waterfall field is dependent on a light spectator scalar field. Using the δ N formalism, the "end of inflation" scenario is investigated in order to ascertain whether this model is able to produce the dominant contribution to the primordial curvature perturbation. A multitude of constraints are considered so as to explore the parameter space, with particular emphasis on key observational signatures. For natural values of the parameters, the model is found to yield a sharp prediction for the scalar spectral index and its running, well within the current observational bounds.

  1. Nonuniform gyrotropic oscillation of skyrmion in a nanodisk

    NASA Astrophysics Data System (ADS)

    Xuan, Shengjie; Liu, Yan

    2018-04-01

    It was predicted that magnetic skyrmions have potential application in the spin nano-oscillators. The oscillation frequency is a key parameter. In this paper, we study the skyrmion relaxation in a FeGe nanodisk and find that the oscillation frequency depends on the skyrmion position. The relaxation process is associated with the variation of skyrmion diameter. By analyzing the system energy, we believe that the nonuniform gyrotropic oscillation frequency is due to the change of the skyrmion diameter.

  2. The Software Management Environment (SME)

    NASA Technical Reports Server (NTRS)

    Valett, Jon D.; Decker, William; Buell, John

    1988-01-01

    The Software Management Environment (SME) is a research effort designed to utilize the past experiences and results of the Software Engineering Laboratory (SEL) and to incorporate this knowledge into a tool for managing projects. SME provides the software development manager with the ability to observe, compare, predict, analyze, and control key software development parameters such as effort, reliability, and resource utilization. The major components of the SME, the architecture of the system, and examples of the functionality of the tool are discussed.

  3. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  4. Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets

    NASA Technical Reports Server (NTRS)

    Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.

    1978-01-01

    A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.

  5. Towards soil property retrieval from space: Proof of concept using in situ observations

    NASA Astrophysics Data System (ADS)

    Bandara, Ranmalee; Walker, Jeffrey P.; Rüdiger, Christoph

    2014-05-01

    Soil moisture is a key variable that controls the exchange of water and energy fluxes between the land surface and the atmosphere. However, the temporal evolution of soil moisture is neither easy to measure nor monitor at large scales because of its high spatial variability. This is mainly a result of the local variation in soil properties and vegetation cover. Thus, land surface models are normally used to predict the evolution of soil moisture and yet, despite their importance, these models are based on low-resolution soil property information or typical values. Therefore, the availability of more accurate and detailed soil parameter data than are currently available is vital, if regional or global soil moisture predictions are to be made with the accuracy required for environmental applications. The proposed solution is to estimate the soil hydraulic properties via model calibration to remotely sensed soil moisture observation, with in situ observations used as a proxy in this proof of concept study. Consequently, the feasibility is assessed, and the level of accuracy that can be expected is determined, for soil hydraulic property estimation of duplex soil profiles in a semi-arid environment using near-surface soil moisture observations under naturally occurring conditions. The retrieved soil hydraulic parameters were then assessed by their reliability to predict the root zone soil moisture using the Joint UK Land Environment Simulator model. When using parameters that were retrieved using soil moisture observations, the root zone soil moisture was predicted to within an accuracy of 0.04 m3/m3, which is an improvement of ∼0.025 m3/m3 on predictions that used published values or pedo-transfer functions.

  6. Influence of pulmonary emphysema on COPD assessment test-oriented categorization in GOLD document.

    PubMed

    Suzuki, Toshio; Tada, Yuji; Kawata, Naoko; Ikari, Jun; Kasahara, Yasunori; Sakurai, Yoriko; Iesato, Ken; Nishimura, Rintaro; West, James; Tatsumi, Koichiro

    2015-01-01

    The COPD assessment test (CAT) score is a key component of the multifactorial assessment of COPD in the Global initiative for chronic Obstructive Lung Disease (GOLD) guidelines of 2014. Nevertheless, little is known regarding the differences among COPD categories in terms of clinical parameters such as pulmonary function or radiological findings. Thus, our aims in this study were to evaluate the associations between CAT scores and pulmonary clinical parameters, and to investigate factors that could discriminate between a "less symptomatic group" (categories A and C) and a "more symptomatic group" (categories B and D) among stable COPD patients. We enrolled 200 outpatients at Chiba University Hospital. Study subjects were assessed by CAT, pulmonary function testing, and multidetector computed tomography (MDCT). We assessed possible correlations between these indices. CAT scores were negatively correlated with percentage of the forced expiratory volume in 1 second predicted value (FEV1 %predicted) and percentage of the diffusing capacity for carbon monoxide per liter of lung volume predicted value (DLCO/VA [%predicted]) results and positively correlated with low attenuation volume percentage (LAV%) and residual volume to total lung capacity ratios (RV/TLC). In the "more symptomatic group" (category B or D), the mean DLCO/VA (%predicted) was significantly lower and the mean LAV% and RV/TLC were significantly higher than those in the "less symptomatic group" (category A or C). Interestingly, those in category B had higher mean LAV% compared to those in category C. CAT scores were significantly correlated with pulmonary function parameters and emphysematous changes on MDCT. The new GOLD classification system would be a step toward a phenotypic approach, especially taking into account the degree of emphysema and hyperinflation.

  7. Comparing modelling techniques when designing VPH gratings for BigBOSS

    NASA Astrophysics Data System (ADS)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques using spectroscopic data of 20 million ELG and LRG galaxies at 0.5<=z<=1.6 in addition to several hundred thousand QSOs at 0.5<=z<=3.5. When designing BigBOSS instrumentation, it is imperative to maximize throughput whilst maintaining a resolving power of between R=1500 and 4000 over a wavelength range of 360-980 nm. Volume Phase Holographic (VPH) gratings have been identified as a key technology that will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which each is capable of accurately predicting measured performance. Finally we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.

  8. Development of a General Form CO2 and Brine Flux Input Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansoor, K.; Sun, Y.; Carroll, S.

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.

  9. Adsorption of dextrin on hydrophobic minerals.

    PubMed

    Beaussart, Audrey; Mierczynska-Vasilev, Agnieszka; Beattie, David A

    2009-09-01

    The adsorption of dextrin on talc, molybdenite, and graphite (three naturally hydrophobic minerals) has been compared. Adsorption isotherms and in situ tapping mode atomic force microscope (TMAFM) imaging have enabled polymer adsorbed amount and morphology of the adsorbed layer (area coverage and polymer domain size) to be determined and also the amount of hydration water in the structure of the adsorbed layer. The effect of the polymer on the mineral contact angles, measured by the captive bubble method on cleaved mineral surfaces, indicates clear correlations between the hydrophobicity reduction of the minerals, the adsorbed amount, and the surface coverage of the adsorbed polymer. Predictions of the flotation recovery of the treated mineral phases have been confirmed by performing batch flotation experiments. The influence of the polymer surface coverage on flotation recovery has highlighted the importance of this key parameter in the predictions of depressant efficiency. The roles of the initial hydrophobicity and the surface structure of the mineral basal plane in determining adsorption parameters and flotation response of the polymer-treated minerals are also discussed.

  10. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key parameter (Mooney viscosity), which is used to evaluate the property of the product, can only be obtained offline with a 4-6 h delay. It would be quite helpful for industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing. However, they often do not function well owing to the multiphase and nonlinear properties of the process. The purpose of this paper is to develop an efficient soft-sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, the phase information is extracted in the local modeling. Using the Gaussian process local modeling method within a just-in-time (JIT) learning framework, the nonlinearity of the process is well handled. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.
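The just-in-time idea — build a fresh local model around each query sample instead of one global model — can be sketched in a few lines. In this illustration the paper's GMMD sample selection and Gaussian process local models are replaced by plain Euclidean nearest neighbours and local least squares; the data are synthetic, not mixing-process measurements.

```python
import numpy as np

def jit_predict(X_hist, y_hist, x_query, k=30):
    # Just-in-time learning: select the k historical samples most
    # similar to the query, fit a local model on them, predict, discard.
    d = np.linalg.norm(X_hist - x_query, axis=1)
    idx = np.argsort(d)[:k]
    A = np.c_[X_hist[idx], np.ones(k)]            # local linear model + bias
    coef, *_ = np.linalg.lstsq(A, y_hist[idx], rcond=None)
    return np.r_[x_query, 1.0] @ coef

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(1000, 2))            # two process variables
y = np.sin(X[:, 0]) + 0.3 * X[:, 1]               # nonlinear "viscosity"
x_new = np.array([0.5, 0.0])
y_hat = jit_predict(X, y, x_new, k=30)
print(round(float(y_hat), 3))
```

Because each local model sees only nearby samples, the global nonlinearity is handled piecewise; the paper's GMMD criterion additionally groups samples by mixing phase before the local fit.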

  11. Dynamics of Ranking Processes in Complex Systems

    NASA Astrophysics Data System (ADS)

    Blumm, Nicholas; Ghoshal, Gourab; Forró, Zalán; Schich, Maximilian; Bianconi, Ginestra; Bouchaud, Jean-Philippe; Barabási, Albert-László

    2012-09-01

    The world is addicted to ranking: everything, from the reputation of scientists, journals, and universities to purchasing decisions is driven by measured or perceived differences between them. Here, we analyze empirical data capturing real time ranking in a number of systems, helping to identify the universal characteristics of ranking dynamics. We develop a continuum theory that not only predicts the stability of the ranking process, but shows that a noise-induced phase transition is at the heart of the observed differences in ranking regimes. The key parameters of the continuum theory can be explicitly measured from data, allowing us to predict and experimentally document the existence of three phases that govern ranking stability.

  12. Satellite Remote Sensing is Key to Water Cycle Integrator

    NASA Astrophysics Data System (ADS)

    Koike, T.

    2016-12-01

    To promote effective multi-sectoral, interdisciplinary collaboration based on coordinated and integrated efforts, the Global Earth Observation System of Systems (GEOSS) is now developing a "GEOSS Water Cycle Integrator (WCI)", which integrates "Earth observations", "modeling", "data and information", "management systems" and "education systems". GEOSS/WCI sets up "work benches" by which partners can share data, information and applications in an interoperable way, exchange knowledge and experiences, deepen mutual understanding and work together effectively to ultimately respond to issues of both mitigation and adaptation. (A work bench is a virtual geographical or phenomenological space where experts and managers collaborate to use information to address a problem within that space). GEOSS/WCI enhances the coordination of efforts to strengthen individual, institutional and infrastructure capacities, especially for effective interdisciplinary coordination and integration. GEOSS/WCI archives various satellite data to provide various hydrological information such as cloud, rainfall, soil moisture, or land-surface snow. These satellite products were validated using land observation in-situ data. Water cycle models can be developed by coupling in-situ and satellite data. River flows and other hydrological parameters can be simulated and validated by in-situ data. Model outputs from weather-prediction, seasonal-prediction, and climate-prediction models are archived. Some of these model outputs are archived on an online basis, but other models, e.g., climate-prediction models are archived on an offline basis. After models are evaluated and biases corrected, the outputs can be used as inputs into the hydrological models for predicting the hydrological parameters. Additionally, we have already developed a data-assimilation system by combining satellite data and the models. This system can improve our capability to predict hydrological phenomena. 
The WCI can provide better predictions of the hydrological parameters for integrated water resources management (IWRM) and also assess the impact of climate change and calculate adaptation needs.

  13. Improved design method of a rotating spool compressor using a comprehensive model and comparison to experimental results

    NASA Astrophysics Data System (ADS)

    Bradshaw, Craig R.; Kemp, Greg; Orosz, Joe; Groll, Eckhard A.

    2017-08-01

    An improvement to the design process of the rotating spool compressor is presented. This improvement utilizes a comprehensive model to explore two working fluids (R410A and R134a) and various displaced volumes across a range of geometric parameters. The geometric parameters explored consist of the eccentricity ratio and the length-to-diameter ratio. The eccentricity ratio is varied between 0.81 and 0.92 and the length-to-diameter ratio is varied between 0.4 and 3. The key tradeoffs are evaluated and the results show that there is an optimum eccentricity and length-to-diameter ratio, which will maximize the model-predicted performance, that is unique to a particular fluid and displaced volume. For R410A, the modeling tool predicts that the overall isentropic efficiency will optimize at a length-to-diameter ratio that is lower than for R134a. Additionally, the tool predicts that as the displaced volume increases the overall isentropic efficiency will increase and the ideal length-to-diameter ratio will shift. The results from this study are utilized to develop a basic design for a 141 kW (40 tonsR) capacity prototype spool compressor for light-commercial air-conditioning applications. Results from a prototype compressor constructed based on these efforts are presented. The volumetric efficiency predictions are found to be very accurate, with the overall isentropic efficiency predictions shown to be slightly over-predicted.

  14. Comparison of three‐dimensional analysis and stereological techniques for quantifying lithium‐ion battery electrode microstructures

    PubMed Central

    TAIWO, OLUWADAMILOLA O.; FINEGAN, DONAL P.; EASTWOOD, DAVID S.; FIFE, JULIE L.; BROWN, LEON D.; DARR, JAWWAD A.; LEE, PETER D.; BRETT, DANIEL J.L.

    2016-01-01

    Lithium‐ion battery performance is intrinsically linked to electrode microstructure. Quantitative measurement of key structural parameters of lithium‐ion battery electrode microstructures will enable optimization as well as motivate systematic numerical studies for the improvement of battery performance. With the rapid development of 3‐D imaging techniques, quantitative assessment of 3‐D microstructures from 2‐D image sections by stereological methods appears outmoded; however, in spite of the proliferation of tomographic imaging techniques, it remains significantly easier to obtain two‐dimensional (2‐D) data sets. In this study, stereological prediction and three‐dimensional (3‐D) analysis techniques for quantitative assessment of key geometric parameters for characterizing battery electrode microstructures are examined and compared. Lithium‐ion battery electrodes were imaged using synchrotron‐based X‐ray tomographic microscopy. For each electrode sample investigated, stereological analysis was performed on reconstructed 2‐D image sections generated from tomographic imaging, whereas direct 3‐D analysis was performed on reconstructed image volumes. The analysis showed that geometric parameter estimation using 2‐D image sections is bound to be associated with ambiguity and that volume‐based 3‐D characterization of nonconvex, irregular and interconnected particles can be used to more accurately quantify spatially‐dependent parameters, such as tortuosity and pore‐phase connectivity. PMID:26999804

  15. Comparison of three-dimensional analysis and stereological techniques for quantifying lithium-ion battery electrode microstructures.

    PubMed

    Taiwo, Oluwadamilola O; Finegan, Donal P; Eastwood, David S; Fife, Julie L; Brown, Leon D; Darr, Jawwad A; Lee, Peter D; Brett, Daniel J L; Shearing, Paul R

    2016-09-01

    Lithium-ion battery performance is intrinsically linked to electrode microstructure. Quantitative measurement of key structural parameters of lithium-ion battery electrode microstructures will enable optimization as well as motivate systematic numerical studies for the improvement of battery performance. With the rapid development of 3-D imaging techniques, quantitative assessment of 3-D microstructures from 2-D image sections by stereological methods appears outmoded; however, in spite of the proliferation of tomographic imaging techniques, it remains significantly easier to obtain two-dimensional (2-D) data sets. In this study, stereological prediction and three-dimensional (3-D) analysis techniques for quantitative assessment of key geometric parameters for characterizing battery electrode microstructures are examined and compared. Lithium-ion battery electrodes were imaged using synchrotron-based X-ray tomographic microscopy. For each electrode sample investigated, stereological analysis was performed on reconstructed 2-D image sections generated from tomographic imaging, whereas direct 3-D analysis was performed on reconstructed image volumes. The analysis showed that geometric parameter estimation using 2-D image sections is bound to be associated with ambiguity and that volume-based 3-D characterization of nonconvex, irregular and interconnected particles can be used to more accurately quantify spatially-dependent parameters, such as tortuosity and pore-phase connectivity. © 2016 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
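    The comparison records 14-15 describe can be sketched on a synthetic particle volume (illustrative geometry only, not the authors' tomographic data): the volume fraction estimated from 2-D areal fractions (the Delesse principle of stereology) matches the direct 3-D measurement, whereas connectivity-dependent quantities such as tortuosity have no comparable 2-D estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "electrode" volume: random spherical particles on a 64^3 grid.
n = 64
zz, yy, xx = np.mgrid[0:n, 0:n, 0:n]
volume = np.zeros((n, n, n), dtype=bool)
for _ in range(30):
    cx, cy, cz = rng.integers(8, n - 8, size=3)
    r = rng.integers(4, 8)
    volume |= (xx - cx) ** 2 + (yy - cy) ** 2 + (zz - cz) ** 2 <= r ** 2

# Direct 3-D measurement: solid-phase volume fraction.
vv_3d = volume.mean()

# Stereological estimate (Delesse principle): the mean areal fraction
# of the solid phase over 2-D image sections estimates volume fraction.
aa_per_slice = volume.mean(axis=(1, 2))
vv_stereo = aa_per_slice.mean()

print(vv_3d, vv_stereo)
```

For volume fraction the two routes agree, which is exactly why stereology survives for simple metrics; the ambiguity the abstract describes arises for parameters that depend on 3-D connectivity.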

  16. Generalized Smooth Transition Map Between Tent and Logistic Maps

    NASA Astrophysics Data System (ADS)

    Sayed, Wafaa S.; Fahmy, Hossam A. H.; Rezk, Ahmed A.; Radwan, Ahmed G.

    There is a continuous demand for novel chaotic generators to be employed in various modeling and pseudo-random number generation applications. This paper proposes a new chaotic map which is a general form for one-dimensional discrete-time maps employing the power function, with the tent and logistic maps as special cases. The proposed map uses extra parameters to provide responses that fit multiple applications for which conventional maps were not sufficient. The proposed generalization also covers maps whose iterative relations are not based on polynomials, i.e. with fractional powers. We introduce a framework for analyzing the proposed map mathematically and predicting its behavior for various combinations of its parameters. In addition, we present and explain the transition map, which results in intermediate responses as the parameters vary from the values corresponding to the tent map to those corresponding to the logistic map. We study the properties of the proposed map including the graph of the map equation, the general bifurcation diagram and its key points, output sequences, and the maximum Lyapunov exponent. We present further explorations such as the effects of scaling, system response with respect to the new parameters, and operating ranges other than the transition region. Finally, a stream cipher system based on the generalized transition map validates its utility for image encryption applications. The system allows the construction of more efficient encryption keys, which enhances its sensitivity and other cryptographic properties.
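    The generalized map equation itself is not reproduced in this record, but the kind of analysis described (estimating the maximum Lyapunov exponent of an iterated 1-D map) can be sketched on the logistic special case, where the fully chaotic value is known to be ln 2 at r = 4:

```python
import math

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def max_lyapunov(r=4.0, x0=0.123, n_iter=20000, burn_in=1000):
    # Time average of log|f'(x_n)| along the orbit, with f'(x) = r(1 - 2x).
    x = x0
    for _ in range(burn_in):
        x = logistic(x, r)
    total = 0.0
    for _ in range(n_iter):
        # Guard against the measure-zero point x = 1/2 where f' = 0.
        total += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-12))
        x = logistic(x, r)
    return total / n_iter

lam = max_lyapunov()
print(lam)  # close to ln 2 for the fully chaotic case r = 4
```

A positive estimate indicates chaos; sweeping r and plotting this quantity reproduces the Lyapunov diagrams such papers report.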

  17. THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Habib, Salman; Biswas, Rahul

    2016-04-01

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.

  18. The mira-titan universe. Precision predictions for dark energy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Bingham, Derek; Lawrence, Earl

    2016-03-28

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
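    Emulation as described above can be illustrated with a toy example: a small design of 26 "simulation runs" (here an inexpensive stand-in function, not a cosmology code) trains a radial-basis-function interpolator that then predicts the observable elsewhere in parameter space. The RBF surrogate, the 2-D parameter space, and all numerical choices are illustrative assumptions, not the Mira-Titan emulator's actual Gaussian-process machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta):
    # Stand-in for an expensive run: parameters -> scalar observable.
    return np.sin(3.0 * theta[..., 0]) + theta[..., 1] ** 2

# Design: 26 "cosmological models" sampled in a 2-D parameter space.
X = rng.uniform(0.0, 1.0, size=(26, 2))
y = simulator(X)

def build_rbf_emulator(X, y, eps=3.0):
    # Gaussian radial-basis-function interpolation as a minimal emulator.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.exp(-((eps * d) ** 2)) + 1e-8 * np.eye(len(X))  # jitter
    w = np.linalg.solve(K, y)

    def predict(Xq):
        dq = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
        return np.exp(-((eps * dq) ** 2)) @ w

    return predict

emulate = build_rbf_emulator(X, y)
Xq = rng.uniform(0.1, 0.9, size=(200, 2))
max_err = np.max(np.abs(emulate(Xq) - simulator(Xq)))
print(max_err)
```

Adding new design points where the emulator error is largest is the "prescribed way" of systematically increasing accuracy that the abstract refers to.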

  19. Experimental evidence of a pathogen invasion threshold

    PubMed Central

    Krkošek, Martin

    2018-01-01

    Host density thresholds to pathogen invasion separate regions of parameter space corresponding to endemic and disease-free states. The host density threshold is a central concept in theoretical epidemiology and a common target of human and wildlife disease control programmes, but there is mixed evidence supporting the existence of thresholds, especially in wildlife populations or for pathogens with complex transmission modes (e.g. environmental transmission). Here, we demonstrate the existence of a host density threshold for an environmentally transmitted pathogen by combining an epidemiological model with a microcosm experiment. Experimental epidemics consisted of replicate populations of naive crustacean zooplankton (Daphnia dentifera) hosts across a range of host densities (20–640 hosts l−1) that were exposed to an environmentally transmitted fungal pathogen (Metschnikowia bicuspidata). Epidemiological model simulations, parametrized independently of the experiment, qualitatively predicted experimental pathogen invasion thresholds. Variability in parameter estimates did not strongly influence outcomes, though systematic changes to key parameters have the potential to shift pathogen invasion thresholds. In summary, we provide one of the first clear experimental demonstrations of pathogen invasion thresholds in a replicated experimental system, and provide evidence that such thresholds may be predictable using independently constructed epidemiological models. PMID:29410876
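    The host density threshold can be sketched with a minimal environmentally transmitted SI model (all rate constants hypothetical; the study's actual Daphnia-Metschnikowia parameterization differs): the basic reproduction number scales linearly with host density, so invasion requires density above a critical value.

```python
# Minimal environmentally transmitted SI model (hypothetical rates):
#   dS/dt = -b*S*Z          susceptible hosts contacting spores
#   dI/dt =  b*S*Z - m*I    infected hosts
#   dZ/dt =  s*I   - d*Z    free-living spore pool
# Basic reproduction number R0 = b*s*N/(m*d), so pathogen invasion
# requires host density N above the threshold N* = m*d/(b*s).

b, s, m, d = 1e-3, 10.0, 0.2, 1.0

def r0(n_hosts):
    return b * s * n_hosts / (m * d)

n_threshold = m * d / (b * s)

def epidemic_size(n_hosts, dt=0.01, steps=150000):
    # Forward-Euler integration from a single infected host.
    S, I, Z = n_hosts - 1.0, 1.0, 0.0
    for _ in range(steps):
        dS = -b * S * Z
        dI = b * S * Z - m * I
        dZ = s * I - d * Z
        S += dS * dt
        I += dI * dt
        Z += dZ * dt
    return 1.0 - S / n_hosts  # fraction of hosts ever infected

print(n_threshold, r0(40.0), r0(10.0))
print(epidemic_size(40.0), epidemic_size(10.0))
```

Simulating above and below n_threshold reproduces the qualitative endemic versus disease-free split that the microcosm experiment tested.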

  20. Scaling relationships among drivers of aquatic respiration from the smallest to the largest freshwater ecosystems

    USGS Publications Warehouse

    Hall, Ed K; Schoolmaster, Donald; Amado, A.M; Stets, Edward G.; Lennon, J.T.; Domaine, L.; Cotner, J.B.

    2016-01-01

    To address how various environmental parameters control or constrain planktonic respiration (PR), we used geometric scaling relationships and established biological scaling laws to derive quantitative predictions for the relationships among key drivers of PR. We then used empirical measurements of PR and environmental (soluble reactive phosphate [SRP], dissolved organic carbon [DOC], chlorophyll a [Chl-a], and temperature) and landscape parameters (lake area [LA] and watershed area [WA]) from a set of 44 lakes that varied in size and trophic status to test our hypotheses. We found that landscape-level processes affected PR through direct effects on DOC and temperature and indirectly via SRP. In accordance with predictions made from known relationships and scaling laws, scale coefficients (the parameter that describes the shape of a relationship between 2 variables) were found to be negative, with some having an absolute value >1 and others <1. We also found evidence of a significant relationship between temperature and SRP. Because our dataset included measurements of respiration from small pond catchments to the largest body of freshwater on the planet, Lake Superior, these findings should be applicable to controls of PR for the great majority of temperate aquatic ecosystems.

  1. Aerodynamic parameters from distributed heterogeneous CNT hair sensors with a feedforward neural network.

    PubMed

    Magar, Kaman Thapa; Reich, Gregory W; Kondash, Corey; Slinker, Keith; Pankonien, Alexander M; Baur, Jeffery W; Smyers, Brian

    2016-11-10

    Distributed arrays of artificial hair sensors have bio-like sensing capabilities to obtain spatial and temporal surface flow information, which is an important aspect of an effective fly-by-feel system. The spatiotemporal surface flow measurement enables further exploration of additional flow features such as flow stagnation, separation, and reattachment points. Due to their inherent robustness and fault-tolerant capability, distributed arrays of hair sensors are well equipped to assess the aerodynamic and flow states in adverse conditions. In this paper, local flow measurements from an array of artificial hair sensors in a wind tunnel experiment are used with a feedforward artificial neural network to predict aerodynamic parameters such as lift coefficient, moment coefficient, free-stream velocity, and angle of attack on an airfoil. We find the prediction error to be within 6% and 10% for the lift and moment coefficients, respectively. The errors for free-stream velocity and angle of attack were within 0.12 mph and 0.37 degrees, respectively. Knowledge of these parameters is key to finding the real-time forces and moments, which paves the way for effective control design to increase flight agility, stability, and maneuverability.
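    A minimal numpy sketch of the approach: a one-hidden-layer feedforward network trained by gradient descent to map sensor readings to aerodynamic parameters. The data here are synthetic stand-ins, not the paper's wind tunnel measurements, and the layer sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: 8 "hair sensor" readings -> 4 aerodynamic
# parameters (lift coefficient, moment coefficient, free-stream
# velocity, angle of attack). Real wind tunnel data would replace this.
X = rng.normal(size=(256, 8))
Y = np.tanh(X @ rng.normal(size=(8, 4)))

# One-hidden-layer feedforward network trained by batch gradient descent.
W1 = rng.normal(scale=0.3, size=(8, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.3, size=(16, 4))
b2 = np.zeros(4)
lr = 0.05

losses = []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    P = H @ W2 + b2                   # predicted parameters
    err = P - Y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error gradient.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1
    b1 -= lr * gb1
    W2 -= lr * gW2
    b2 -= lr * gb2

print(losses[0], losses[-1])
```

The training loss falls steadily; with real sensor arrays, the same structure yields the percentage-level prediction errors quoted in the abstract.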

  2. The logic of comparative life history studies for estimating key parameters, with a focus on natural mortality rate

    USGS Publications Warehouse

    Hoenig, John M; Then, Amy Y.-H.; Babcock, Elizabeth A.; Hall, Norman G.; Hewitt, David A.; Hesp, Sybrand A.

    2016-01-01

    There are a number of key parameters in population dynamics that are difficult to estimate, such as natural mortality rate, intrinsic rate of population growth, and stock-recruitment relationships. Often, these parameters of a stock are, or can be, estimated indirectly on the basis of comparative life history studies. That is, the relationship between a difficult to estimate parameter and life history correlates is examined over a wide variety of species in order to develop predictive equations. The form of these equations may be derived from life history theory or simply be suggested by exploratory data analysis. Similarly, population characteristics such as potential yield can be estimated by making use of a relationship between the population parameter and bio-chemico–physical characteristics of the ecosystem. Surprisingly, little work has been done to evaluate how well these indirect estimators work and, in fact, there is little guidance on how to conduct comparative life history studies and how to evaluate them. We consider five issues arising in such studies: (i) the parameters of interest may be ill-defined idealizations of the real world, (ii) true values of the parameters are not known for any species, (iii) selecting data based on the quality of the estimates can introduce a host of problems, (iv) the estimates that are available for comparison constitute a non-random sample of species from an ill-defined population of species of interest, and (v) the hierarchical nature of the data (e.g. stocks within species within genera within families, etc., with multiple observations at each level) warrants consideration. We discuss how these issues can be handled and how they shape the kinds of questions that can be asked of a database of life history studies.
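    One widely used product of exactly this kind of comparative study is Hoenig's longevity-based estimator of natural mortality, ln M = 1.44 - 0.982 ln t_max (coefficients as commonly reported); it is shown here only to illustrate the predictive-equation approach the abstract discusses, not as the authors' recommended estimator.

```python
import math

def natural_mortality_hoenig(t_max):
    """Predict natural mortality rate M (1/yr) from maximum observed
    age t_max (yr) via Hoenig's comparative regression,
    ln M = 1.44 - 0.982 ln t_max (coefficients as commonly reported)."""
    return math.exp(1.44 - 0.982 * math.log(t_max))

# A long-lived stock (t_max = 40 yr) vs a short-lived one (t_max = 5 yr):
print(natural_mortality_hoenig(40.0), natural_mortality_hoenig(5.0))
```

The issues the abstract raises (non-random species samples, hierarchical data, unknown true M) all concern how regressions like this one are fit and validated.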

  3. Review of Concrete Biodeterioration in Relation to Buried Nuclear Waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turick, C; Berry, C.

    Long-term storage of low level radioactive material in below ground concrete disposal units (DUs) (Saltstone Disposal Facility) is a means of depositing wastes generated from nuclear operations of the U.S. Department of Energy. Based on the currently modeled degradation mechanisms, possible microbial induced effects on the structural integrity of buried low level wastes must be addressed. Previous international efforts related to microbial impacts on concrete structures that house low level radioactive waste showed that microbial activity can play a significant role in the process of concrete degradation and ultimately structural deterioration. This literature review examines the recent research in this field and is focused on specific parameters that are applicable to modeling and prediction of the fate of concrete vaults housing stored wastes and the wastes themselves. Rates of concrete biodegradation vary with the environmental conditions, illustrating a need to understand the bioavailability of key compounds involved in microbial activity. Specific parameters require pH and osmotic pressure to be within a certain range to allow for microbial growth as well as the availability and abundance of energy sources like components involved in sulfur, iron and nitrogen oxidation. Carbon flow and availability are also factors to consider in predicting concrete biodegradation. The results of this review suggest that microbial activity in Saltstone (grouted low level radioactive waste) is unlikely due to very high pH and osmotic pressure. Biodegradation of the concrete vaults housing the radioactive waste, however, is a possibility. The rate and degree of concrete biodegradation is dependent on numerous physical, chemical and biological parameters. Results from this review point to parameters to focus on for modeling activities and also possible options for mitigation that would minimize concrete biodegradation.
In addition, key chemical components that drive microbial activity on concrete surfaces are discussed.

  4. Validation and sensitivity of the FINE Bayesian network for forecasting aquatic exposure to nano-silver.

    PubMed

    Money, Eric S; Barton, Lauren E; Dawson, Joseph; Reckhow, Kenneth H; Wiesner, Mark R

    2014-03-01

    The adaptive nature of the Forecasting the Impacts of Nanomaterials in the Environment (FINE) Bayesian network is explored. We create an updated FINE model (FINEAgNP-2) for predicting aquatic exposure concentrations of silver nanoparticles (AgNP) by combining the expert-based parameters from the baseline model established in previous work with literature data related to particle behavior, exposure, and nano-ecotoxicology via parameter learning. We validate the AgNP forecast from the updated model using mesocosm-scale field data and determine the sensitivity of several key variables to changes in environmental conditions, particle characteristics, and particle fate. Results show that the prediction accuracy of the FINEAgNP-2 model increased approximately 70% over the baseline model, with an error rate of only 20%, suggesting that FINE is a reliable tool to predict aquatic concentrations of nano-silver. Sensitivity analysis suggests that fractal dimension, particle diameter, conductivity, time, and particle fate have the most influence on aquatic exposure given the current knowledge; however, numerous knowledge gaps can be identified to suggest further research efforts that will reduce the uncertainty in subsequent exposure and risk forecasts. Copyright © 2013 Elsevier B.V. All rights reserved.
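    Parameter learning of the kind described, combining expert-based priors with literature data, can be sketched for a single binary network node with a conjugate beta-binomial update (all numbers hypothetical, not FINE's actual conditional probability tables):

```python
def update_beta(a, b, successes, trials):
    """Conjugate beta-binomial update: a Beta(a, b) prior plus
    k successes in n trials gives a Beta(a + k, b + n - k) posterior."""
    return a + successes, b + (trials - successes)

def beta_mean(a, b):
    return a / (a + b)

a0, b0 = 2.0, 8.0                      # expert-elicited prior, mean 0.2
a1, b1 = update_beta(a0, b0, 12, 20)   # hypothetical literature counts
print(beta_mean(a0, b0), beta_mean(a1, b1))
```

Each new data set shifts the node's probability toward the observed frequency, which is the "adaptive" behavior that let FINEAgNP-2 improve on the expert-only baseline.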

  5. [Prediction of 137Cs behaviour in the soil-plant system in the territory of Semipalatinsk test site].

    PubMed

    Spiridonov, S I; Mukusheva, M K; Gontarenko, I A; Fesenko, S V; Baranov, S A

    2005-01-01

    A mathematical model of 137Cs behaviour in the soil-plant system is presented. The model has been parameterized for the area adjacent to the testing area Ground Zero of the Semipalatinsk Test Site. The model describes the main processes responsible for the changes in 137Cs content in the soil solution and, thereby, the dynamics of the radionuclide uptake by vegetation. Results of predictive and retrospective calculations are presented that reflect the dynamics of 137Cs distribution among chemical species in soil after the nuclear explosions. The importance of factors governing 137Cs accumulation in plants within the STS area is assessed. The analysis of the sensitivity of the output model variable to changes in its parameters revealed the key soil properties that significantly influence the predicted 137Cs content in plants.
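    The compartment-model idea can be sketched minimally (rate constants hypothetical except the 137Cs half-life; the published model resolves more soil pools): plant uptake tracks an exchangeable soil-solution pool that declines through radioactive decay plus fixation.

```python
import math

# Two-compartment sketch: the exchangeable (plant-available) 137Cs pool
# declines through radioactive decay plus irreversible fixation on clay
# minerals; plant uptake is proportional to the available pool.
# k_fix and tf0 are hypothetical; only the half-life is physical.

HALF_LIFE_YR = 30.17
decay = math.log(2.0) / HALF_LIFE_YR   # radioactive decay constant, 1/yr
k_fix = 0.05                           # fixation rate, 1/yr (hypothetical)
tf0 = 0.1                              # transfer factor (hypothetical)

def plant_activity(t_years, soil0=1000.0):
    """Plant 137Cs content (arbitrary units) t years after deposition."""
    available = soil0 * math.exp(-(decay + k_fix) * t_years)
    return tf0 * available

print(plant_activity(0.0), plant_activity(10.0))
```

The sensitivity analysis in the abstract amounts to asking how strongly such predictions respond to perturbations of k_fix-like soil parameters.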

  6. Space charge induced surface stresses: implications in ceria and other ionic solids.

    PubMed

    Sheldon, Brian W; Shenoy, Vivek B

    2011-05-27

    Volume changes associated with point defects in space charge layers can produce strains that substantially alter thermodynamic equilibrium near surfaces in ionic solids. For example, near-surface compressive stresses exceeding -10 GPa are predicted for ceria. The magnitude of this effect is consistent with anomalous lattice parameter increases that occur in ceria nanoparticles. These stresses should significantly alter defect concentrations and key transport properties in a wide range of materials (e.g., ceria electrolytes in fuel cells). © 2011 American Physical Society

  7. Automatic Adviser on stationary devices status identification and anticipated change

    NASA Astrophysics Data System (ADS)

    Shabelnikov, A. N.; Liabakh, N. N.; Gibner, Ya M.; Pushkarev, E. A.

    2018-05-01

    A task is defined to synthesize an Automatic Adviser that identifies the status of stationary automation system devices using an autoregressive model of changes in their key parameters. The applied model type was rationalized and an algorithm for monitoring the research objects was developed. A complex for simulating the operational status of the monitored objects and analyzing the prediction results was proposed. The research results are illustrated using a specific example of a hump yard compressor station. The work was supported by the Russian Fundamental Research Fund, project No. 17-20-01040.

  8. Denitrification in Agricultural Soils: Integrated control and Modelling at various scales (DASIM)

    NASA Astrophysics Data System (ADS)

    Müller, Christoph; Well, Reinhard; Böttcher, Jürgen; Butterbach-Bahl, Klaus; Dannenmann, Michael; Deppe, Marianna; Dittert, Klaus; Dörsch, Peter; Horn, Marcus; Ippisch, Olaf; Mikutta, Robert; Senbayram, Mehmet; Vogel, Hans-Jörg; Wrage-Mönnig, Nicole; Müller, Carsten

    2016-04-01

    The new research unit DASIM brings together the expertise of 11 working groups to study the process of denitrification at unprecedented spatial and temporal resolution. Based on state-of-the-art analytical techniques, our aim is to develop improved denitrification models ranging from the microscale to the field/plot scale. Denitrification, the process of nitrate reduction allowing microbes to sustain respiration under anaerobic conditions, is the key process returning reactive nitrogen as N2 to the atmosphere. Actively denitrifying communities in soil show distinct regulatory phenotypes (DRP) with characteristic controls on the single reaction steps and end-products. It is unresolved whether DRPs are anchored in the taxonomic composition of denitrifier communities and how environmental conditions shape them. Despite being intensively studied for more than 100 years, denitrification rates and emissions of its gaseous products can still not be satisfactorily predicted. While the impact of single environmental parameters is well understood, the complexity of the process itself, with its intricate cellular regulation in response to highly variable factors in the soil matrix, prevents robust prediction of gaseous emissions. Key parameters in soil are pO2, organic matter content and quality, pH and the microbial community structure, which in turn are affected by the soil structure, chemistry and soil-plant interactions. In the DASIM research unit, we aim at the quantitative prediction of denitrification rates as a function of microscale soil structure, organic matter quality, DRPs and atmospheric boundary conditions via a combination of state-of-the-art experimental and analytical tools (X-ray μCT, 15N tracing, NanoSIMS, microsensors, advanced flux detection, NMR spectroscopy, and molecular methods including next-generation sequencing of functional gene transcripts). We actively seek collaboration with researchers working in the field of denitrification.

  9. A Novel Computational Model Predicts Key Regulators of Chemokine Gradient Formation in Lymph Nodes and Site-Specific Roles for CCL19 and ACKR4

    PubMed Central

    Brook, Bindi S.

    2017-01-01

    The chemokine receptor CCR7 drives leukocyte migration into and within lymph nodes (LNs). It is activated by chemokines CCL19 and CCL21, which are scavenged by the atypical chemokine receptor ACKR4. CCR7-dependent navigation is determined by the distribution of extracellular CCL19 and CCL21, which form concentration gradients at specific microanatomical locations. The mechanisms underpinning the establishment and regulation of these gradients are poorly understood. In this article, we have incorporated multiple biochemical processes describing the CCL19–CCL21–CCR7–ACKR4 network into our model of LN fluid flow to establish a computational model to investigate intranodal chemokine gradients. Importantly, the model recapitulates CCL21 gradients observed experimentally in B cell follicles and interfollicular regions, building confidence in its ability to accurately predict intranodal chemokine distribution. Parameter variation analysis indicates that the directionality of these gradients is robust, but their magnitude is sensitive to these key parameters: chemokine production, diffusivity, matrix binding site availability, and CCR7 abundance. The model indicates that lymph flow shapes intranodal CCL21 gradients, and that CCL19 is functionally important at the boundary between B cell follicles and the T cell area. It also predicts that ACKR4 in LNs prevents CCL19/CCL21 accumulation in efferent lymph, but does not control intranodal gradients. Instead, it attributes the disrupted interfollicular CCL21 gradients observed in Ackr4-deficient LNs to ACKR4 loss upstream. Our novel approach has therefore generated new testable hypotheses and alternative interpretations of experimental data. Moreover, it acts as a framework to investigate gradients at other locations, including those that cannot be visualized experimentally or involve other chemokines. PMID:28807994

  10. An Assessment of the Subseasonal Predictability of Severe Thunderstorm Environments and Activity using the Climate Forecast System Version 2

    NASA Astrophysics Data System (ADS)

    Stepanek, Adam J.

    The prospect for skillful long-term predictions of atmospheric conditions known to directly contribute to the onset and maintenance of severe convective storms remains unclear. A thorough assessment of the capability of a global climate model such as the Climate Forecast System Version 2 (CFSv2) to skillfully represent parameters related to severe weather has the potential to significantly improve medium- to long-range outlooks vital to risk managers. Environmental convective available potential energy (CAPE) and deep-layer vertical wind shear (DLS) can be used to distinguish an atmosphere conducive to severe storms from one supportive of primarily non-severe 'ordinary' convection. As such, this research concentrates on the predictability of CAPE, DLS, and a product of the two parameters (CAPEDLS) by the CFSv2, with a specific focus on the subseasonal timescale. Individual month-long verification periods from the Climate Forecast System reanalysis (CFSR) dataset are measured against a climatological standard using cumulative distribution function (CDF) and area-under-the-CDF (AUCDF) techniques designed to mitigate inherent model biases while concurrently assessing the entire distribution of a given parameter in lieu of a threshold-based approach. Similar methods imposed upon the CFS reforecast (CFSRef) and operational CFSv2 allow for comparisons elucidating both spatial and temporal trends in skill using correlation coefficients, proportion correct metrics, the Heidke skill score (HSS), and root-mean-square-error (RMSE) statistics. Key results show the CFSv2-based output often demonstrates skill beyond a climatologically-based threshold when the forecast is notably anomalous from the 29-year (1982-2010) mean CFSRef prediction (exceeding one standard deviation at the grid point level). CFSRef analysis indicates enhanced skill during the months of April and June (relative to May) and for predictions of DLS. Furthermore, years exhibiting skill in terms of RMSE are shown to possess certain correlations with El Niño-Southern Oscillation conditions from the preceding winter and concurrent Madden-Julian Oscillation activity. Applying results gleaned from the CFSRef analysis to the operational CFSv2 (2011-16) indicates predictive skill can be increased by isolating forecasts meeting multiple parameter-based relationships.
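    The Heidke skill score used in this verification framework has a standard closed form for a 2x2 forecast contingency table; a small sketch (the contingency counts are hypothetical):

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """Heidke skill score for a 2x2 forecast contingency table:
    fraction correct relative to random chance; 1 = perfect,
    0 = no skill beyond chance, negative = worse than chance."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    num = 2.0 * (a * d - b * c)
    den = (a + c) * (c + d) + (a + b) * (b + d)
    return num / den

# A perfect forecast set, then one with some misses and false alarms:
print(heidke_skill_score(20, 0, 0, 80))
print(heidke_skill_score(15, 10, 5, 70))
```

Because HSS normalizes against chance agreement, it complements the bias-mitigating CDF/AUCDF techniques the study applies.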

  11. Gene expression models for prediction of longitudinal dispersion coefficient in streams

    NASA Astrophysics Data System (ADS)

    Sattar, Ahmed M. A.; Gharabaghi, Bahram

    2015-05-01

    Longitudinal dispersion is the key hydrologic process that governs transport of pollutants in natural streams. It is critical for spill action centers to be able to predict the pollutant travel time and break-through curves accurately following accidental spills in urban streams. This study presents a novel gene expression model for longitudinal dispersion developed using 150 published data sets of geometric and hydraulic parameters in natural streams in the United States, Canada, Europe, and New Zealand. The training and testing of the model were accomplished using randomly-selected 67% (100 data sets) and 33% (50 data sets) of the data sets, respectively. Gene expression programming (GEP) is used to develop empirical relations between the longitudinal dispersion coefficient and various control variables, including the Froude number which reflects the effect of reach slope, aspect ratio, and the bed material roughness on the dispersion coefficient. Two GEP models have been developed, and the prediction uncertainties of the developed GEP models are quantified and compared with those of existing models, showing improved prediction accuracy in favor of GEP models. Finally, a parametric analysis is performed for further verification of the developed GEP models. The main reason for the higher accuracy of the GEP models compared to the existing regression models is that exponents of the key variables (aspect ratio and bed material roughness) are not constants but a function of the Froude number. The proposed relations are both simple and accurate and can be effectively used to predict the longitudinal dispersion coefficients in natural streams.
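    The existing regression models that the GEP relations are compared against take a similar power-law form in the aspect ratio and friction term; a sketch of one widely cited example, the Seo-Cheong relation (coefficients as commonly reported; shown for context, not as the paper's GEP model):

```python
def dispersion_seo_cheong(W, H, U, u_star):
    """Longitudinal dispersion coefficient K (m^2/s) from the widely
    cited Seo-Cheong regression (coefficients as commonly reported):
        K / (H * u*) = 5.915 * (W/H)^0.620 * (U/u*)^1.428
    W: channel width (m), H: flow depth (m), U: mean velocity (m/s),
    u_star: shear velocity (m/s)."""
    return 5.915 * (W / H) ** 0.620 * (U / u_star) ** 1.428 * H * u_star

# Hypothetical reach: 20 m wide, 1 m deep, U = 0.5 m/s, u* = 0.05 m/s.
print(dispersion_seo_cheong(20.0, 1.0, 0.5, 0.05))
```

In such fixed-exponent regressions the aspect-ratio and roughness exponents are constants; the abstract's point is that the GEP models instead let those exponents vary with the Froude number.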

  12. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

    That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
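
    The role of the update window in parameter estimation can be illustrated with a toy scalar model whose coefficient plays the part of the physical parameter: the estimate is formed from the transitions accumulated over a window chosen to cover the model's response time scale. The model, forcing, and noise levels below are arbitrary assumptions, not the coupled model of the paper.

```python
import random

random.seed(1)
a_true = 0.9                       # "parameter" to be estimated
x, xs = 1.0, [1.0]
for k in range(400):
    forcing = 0.1 * (1 if k % 20 < 10 else -1)   # known forcing keeps the state excited
    x = a_true * x + forcing + random.gauss(0.0, 0.01)
    xs.append(x)

def estimate(window):
    """Least-squares estimate of a from the last `window` transitions
    x[i+1] = a * x[i] + forcing(i) + noise."""
    pairs = [(xs[i], xs[i + 1] - 0.1 * (1 if i % 20 < 10 else -1))
             for i in range(len(xs) - 1 - window, len(xs) - 1)]
    num = sum(xp * xn for xp, xn in pairs)
    den = sum(xp * xp for xp, xn in pairs)
    return num / den

a_est = estimate(200)              # window covering many response time scales
```

    In the paper's setting, the same logic applies with the update interval of the filter taking the place of the regression window.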

  13. Differences between wafer and bake plate temperature uniformity in proximity bake: a theoretical and experimental study

    NASA Astrophysics Data System (ADS)

    Ramanan, Natarajan; Kozman, Austin; Sims, James B.

    2000-06-01

    As the lithography industry moves toward finer features, specifications on temperature uniformity of the bake plates are expected to become more stringent. Consequently, aggressive improvements are needed to conventional bake station designs to make them perform significantly better than current market requirements. To this end, we have conducted a rigorous study that combines state-of-the-art simulation tools and experimental methods to predict the impact of the parameters that influence the uniformity of the wafer in proximity bake. The key observation from this detailed study is that the temperature uniformity of the wafer in proximity mode depends on a number of parameters in addition to the uniformity of the bake plate itself. These parameters include the lid design, the air flow distribution around the bake chamber, bake plate design and flatness of the bake plate and wafer. By performing careful experimental studies that were guided by extensive numerical simulations, we were able to understand the relative importance of each of these parameters. In an orderly fashion, we made appropriate design changes to curtail or eliminate the nonuniformity caused by each of these parameters. After implementing all these changes, we have now been able to match or improve the temperature uniformity of the wafer in proximity with that of a contact measurement on the bake plate. The wafer temperature uniformity is also very close to the theoretically predicted uniformity of the wafer.

  14. Predicting protein thermal stability changes upon point mutations using statistical potentials: Introducing HoTMuSiC.

    PubMed

    Pucci, Fabrizio; Bourgeas, Raphaël; Rooman, Marianne

    2016-03-18

    The accurate prediction of the impact of an amino acid substitution on the thermal stability of a protein is a central issue in protein science, and is of key relevance for the rational optimization of various bioprocesses that use enzymes in unusual conditions. Here we present one of the first computational tools to predict the change in melting temperature ΔTm upon point mutations, given the protein structure and, when available, the melting temperature Tm of the wild-type protein. The key ingredients of our model structure are standard and temperature-dependent statistical potentials, which are combined with the help of an artificial neural network. The model structure was chosen on the basis of a detailed thermodynamic analysis of the system. The parameters of the model were identified on a set of more than 1,600 mutations with experimentally measured ΔTm. The performance of our method was tested using a strict 5-fold cross-validation procedure, and was found to be significantly superior to that of competing methods. We obtained a root mean square deviation between predicted and experimental ΔTm values of 4.2 °C that reduces to 2.9 °C when ten percent outliers are removed. A webserver-based tool is freely available for non-commercial use at soft.dezyme.com.
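
    The strict 5-fold cross-validation used to evaluate the predictor can be sketched generically: the data are shuffled once, split into 5 disjoint folds, and the model is refit on 4 folds and scored on the held-out fold. The helper below is a minimal stand-in, not the HoTMuSiC pipeline.

```python
import math, random

def kfold_rmse(xs, ys, fit, k=5, seed=0):
    """Strict k-fold cross-validated RMSE. `fit(xs, ys)` must return a
    predict function trained only on the fold's training portion."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sq_errs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        predict = fit([xs[i] for i in train], [ys[i] for i in train])
        sq_errs += [(predict(xs[i]) - ys[i]) ** 2 for i in fold]
    return math.sqrt(sum(sq_errs) / len(sq_errs))

# a trivial "model" for demonstration: always predict the training mean
fit_mean = lambda xs, ys: (lambda x, m=sum(ys) / len(ys): m)
```
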

  15. A statistical survey of heat input parameters into the cusp thermosphere

    NASA Astrophysics Data System (ADS)

    Moen, J. I.; Skjaeveland, A.; Carlson, H. C.

    2017-12-01

    Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction, in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order of magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi , electron density Ne, and electron and ion temperatures Te and Ti , with consecutive 2-3 minute windshield-wiper scans of 1000 × 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti , we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection footpoint. Use of our data-based parameter inputs can make order of magnitude corrections to input data driving thermosphere models, enabling removal of the previous twofold drag errors.

  16. The effect of aerosol optical depth on rainfall with reference to meteorology over metro cities in India.

    PubMed

    Gunaseelan, Indira; Bhaskar, B Vijay; Muthuchelian, K

    2014-01-01

    Rainfall is a key link in the global water cycle and a proxy for changing climate; therefore, proper assessment of the urban environment's impact on rainfall will be increasingly important in ongoing climate diagnostics and prediction. Aerosol optical depth (AOD) measurements during the monsoon seasons of the years 2008 to 2010 were made over four metro regional hotspots in India. The highest average AOD occurred in June and July for the four cities during the 3 years, and the lowest in September. Among the four regions, Kolkata had the highest aerosol contamination and Chennai the least. Pearson correlations were computed between AOD and climatic parameters. Some changes in the parameters were found during the drought year. Temperature, cloud parameters, and humidity play an important role in drought conditions. The roles of aerosols and meteorological parameters, and their impacts on precipitation during the monsoon, were studied.
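
    The Pearson correlation between AOD and a climatic parameter such as monthly rainfall is computed in the standard way; a minimal self-contained version (the sample data in the test are hypothetical):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```
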

  17. Low Velocity Earth-Penetration Test and Analysis

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Jones, Yvonne; Knight, Norman F., Jr.; Kellas, Sotiris

    2001-01-01

    Modeling and simulation of structural impacts into soil continue to challenge analysts to develop accurate material models and detailed analytical simulations to predict the soil penetration event. This paper discusses finite element modeling of a series of penetrometer drop tests into soft clay. Parametric studies are performed with penetrometers of varying diameters, masses, and impact speeds to a maximum of 45 m/s. Parameters influencing the simulation such as the contact penalty factor and the material model representing the soil are also studied. An empirical relationship between key parameters is developed and is shown to correlate experimental and analytical results quite well. The results provide preliminary design guidelines for Earth impact that may be useful for future space exploration sample return missions.

  18. Linear elastic properties derivation from microstructures representative of transport parameters.

    PubMed

    Hoang, Minh Tan; Bonnet, Guy; Tuan Luu, Hoang; Perrot, Camille

    2014-06-01

    It is shown that three-dimensional periodic unit cells (3D PUC) representative of transport parameters involved in the description of long wavelength acoustic wave propagation and dissipation through real foam samples may also be used as a starting point to estimate their macroscopic linear elastic properties. Application of the model yields quantitative agreement between numerical homogenization results, available literature data, and experiments. Key contributions of this work include recognizing the importance of membranes and properties of the base material for the physics of elasticity. The results of this paper demonstrate that a 3D PUC may be used to understand and predict not only the sound absorbing properties of porous materials but also their transmission loss, which is critical for sound insulation problems.

  19. The RAPIDD ebola forecasting challenge: Synthesis and lessons learnt.

    PubMed

    Viboud, Cécile; Sun, Kaiyuan; Gaffey, Robert; Ajelli, Marco; Fumanelli, Laura; Merler, Stefano; Zhang, Qian; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro

    2018-03-01

    Infectious disease forecasting is gaining traction in the public health community; however, limited systematic comparisons of model performance exist. Here we present the results of a synthetic forecasting challenge inspired by the West African Ebola crisis in 2014-2015 and involving 16 international academic teams and US government agencies, and compare the predictive performance of 8 independent modeling approaches. Challenge participants were invited to predict 140 epidemiological targets across 5 different time points of 4 synthetic Ebola outbreaks, each involving different levels of interventions and "fog of war" in outbreak data made available for predictions. Prediction targets included 1-4 week-ahead case incidences, outbreak size, peak timing, and several natural history parameters. With respect to weekly case incidence targets, ensemble predictions based on a Bayesian average of the 8 participating models outperformed any individual model and did substantially better than a null auto-regressive model. There was no relationship between model complexity and prediction accuracy; however, the top performing models for short-term weekly incidence were reactive models with few parameters, fitted to a short and recent part of the outbreak. Individual model outputs and ensemble predictions improved with data accuracy and availability; by the second time point, just before the peak of the epidemic, estimates of final size were within 20% of the target. The 4th challenge scenario - mirroring an uncontrolled Ebola outbreak with substantial data reporting noise - was poorly predicted by all modeling teams. Overall, this synthetic forecasting challenge provided a deep understanding of model performance under controlled data and epidemiological conditions. 
We recommend such "peace time" forecasting challenges as key elements to improve coordination and inspire collaboration between modeling groups ahead of the next pandemic threat, and to assess model forecasting accuracy for a variety of known and hypothetical pathogens. Published by Elsevier B.V.
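
    The ensemble that outperformed the individual models was a Bayesian average: each model is weighted by its likelihood on past observations and the weighted predictions are combined. The sketch below is a minimal stand-in for that idea, assuming a Gaussian error model with unit variance (an arbitrary assumption, not the challenge's exact weighting scheme).

```python
import math

def ensemble_forecast(model_preds, past_preds, past_obs):
    """Weight each model by its Gaussian likelihood on past observations,
    then return the weighted average of the current predictions.

    model_preds: current prediction of each model
    past_preds:  list of past prediction sequences, one per model
    past_obs:    the observed values those past predictions targeted
    """
    weights = []
    for preds in past_preds:
        log_lik = sum(-0.5 * (p - o) ** 2 for p, o in zip(preds, past_obs))
        weights.append(math.exp(log_lik))
    total = sum(weights)
    weights = [w / total for w in weights]
    return sum(w * p for w, p in zip(weights, model_preds))
```
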

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Hongyi; Li, Yang; Zeng, Danielle

    Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compound (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction, and structural performance simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.

  1. Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain

    PubMed Central

    Chis Ster, Irina; Ferguson, Neil M.

    2007-01-01

    Despite intensive ongoing research, key aspects of the spatial-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
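
    The MCMC machinery used to estimate transmission parameters can be illustrated with a heavily simplified Metropolis-Hastings sampler: a single transmission parameter beta, a Poisson likelihood on daily case counts given a constant infectious pressure, and a flat prior on beta > 0. The data and model below are synthetic toys, not the 2001 FMD model.

```python
import math, random

random.seed(0)
I_prev = 10.0                                       # assumed constant infectious pressure
cases = [13, 16, 15, 14, 17, 15, 16, 14, 15, 15]    # synthetic daily case counts

def log_lik(beta):
    """Poisson log-likelihood (up to an additive constant)."""
    lam = beta * I_prev
    return sum(c * math.log(lam) - lam for c in cases)

beta, chain = 1.0, []
ll = log_lik(beta)
for _ in range(5000):
    prop = beta + random.gauss(0.0, 0.2)            # random-walk proposal
    if prop > 0:                                    # flat prior on beta > 0
        ll_prop = log_lik(prop)
        delta = ll_prop - ll
        if delta >= 0 or random.random() < math.exp(delta):
            beta, ll = prop, ll_prop                # accept
    chain.append(beta)

posterior_mean = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
```

    The real analysis replaces the toy likelihood with a spatial-temporal transmission model over farms, but the accept/reject loop is the same.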

  2. Effects of correlated parameters and uncertainty in electronic-structure-based chemical kinetic modelling

    NASA Astrophysics Data System (ADS)

    Sutton, Jonathan E.; Guo, Wei; Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2016-04-01

    Kinetic models based on first principles are becoming commonplace in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.

  3. Investigation of the current yaw engineering models for simulation of wind turbines in BEM and comparison with CFD and experiment

    NASA Astrophysics Data System (ADS)

    Rahimi, H.; Hartvelt, M.; Peinke, J.; Schepers, J. G.

    2016-09-01

    The aim of this work is to investigate the capabilities of current engineering tools based on Blade Element Momentum (BEM) and free vortex wake codes for the prediction of key aerodynamic parameters of wind turbines in yawed flow. Axial induction factor and aerodynamic loads of three wind turbines (NREL VI, AVATAR and INNWIND.EU) were investigated using wind tunnel measurements and numerical simulations for 0 and 30 degrees of yaw. Results indicated that for axial conditions there is good agreement between all codes in terms of mean values of aerodynamic parameters; in yawed flow, however, significant deviations were observed. These were due to unsteady phenomena such as the advancing and retreating blade effect and the skewed-wake effect. The deviations were most visible when the aerodynamic parameters were plotted against rotor azimuth angle for the sections at the root and tip, where the skewed-wake effect plays a major role.

  4. Modeling and optimization of joint quality for laser transmission joint of thermoplastic using an artificial neural network and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Zhang, Cheng; Li, Pin; Wang, Kai; Hu, Yang; Zhang, Peng; Liu, Huixia

    2012-11-01

    A central composite rotatable experimental design (CCRD) is conducted to design experiments for laser transmission joining of a thermoplastic, polycarbonate (PC). An artificial neural network was used to establish the relationships between the laser transmission joining process parameters (laser power, velocity, clamp pressure, scanning number) and the joint strength and joint seam width. The developed mathematical models are tested by the analysis of variance (ANOVA) method to check their adequacy, and the effects of the process parameters on the responses and the interaction effects of key process parameters on joint quality are analyzed and discussed. Finally, the desirability function coupled with a genetic algorithm is used to carry out the optimization of the joint strength and joint width. The results show that the predicted results of the optimization are in good agreement with the experimental results, so this study provides an effective method to enhance joint quality.
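
    The desirability-function approach combines the responses into one objective that the genetic algorithm can maximize: each response is mapped to [0, 1] and the composite is their geometric mean. A minimal larger-is-better version with linear ramps (weights of 1, which is an assumption; the paper may use other weights):

```python
def desirability(y, low, target):
    """Derringer-type larger-is-better desirability with a linear ramp."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return (y - low) / (target - low)

def overall_desirability(responses, bounds):
    """Geometric mean of individual desirabilities, the usual composite D."""
    ds = [desirability(y, lo, t) for y, (lo, t) in zip(responses, bounds)]
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

    A GA then searches the process-parameter space (laser power, velocity, clamp pressure, scanning number) for the combination whose ANN-predicted responses maximize D.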

  5. Impact of the irregular microgeometry of polyurethane foam on the macroscopic acoustic behavior predicted by a unit-cell model.

    PubMed

    Doutres, O; Ouisse, M; Atalla, N; Ichchou, M

    2014-10-01

    This paper deals with the prediction of the macroscopic sound absorption behavior of highly porous polyurethane foams using two unit-cell microstructure-based models recently developed by Doutres, Atalla, and Dong [J. Appl. Phys. 110, 064901 (2011); J. Appl. Phys. 113, 054901 (2013)]. In these models, the porous material is idealized as a packing of tetrakaidecahedral unit cells representative of the disordered network that constitutes the porous frame. The non-acoustic parameters involved in the classical Johnson-Champoux-Allard model (i.e., porosity, airflow resistivity, tortuosity, etc.) are derived from characteristic properties of the unit cell and semi-empirical relationships. A global sensitivity analysis is performed on these two models in order to investigate how the variability associated with the measured unit-cell characteristics affects the models' outputs. This allows identification of the possible limitations of a unit-cell micro-macro approach due to microstructure irregularity. The sensitivity analysis mainly shows that for moderately and highly reticulated polyurethane foams, the strut length parameter is the key parameter since it greatly impacts three important non-acoustic parameters and causes large uncertainty on the sound absorption coefficient even if its measurement variability is moderate. For foams with a slight inhomogeneity and anisotropy, a micro-macro model associated with cell size measurements should be preferred.

  6. The contribution of NOAA/CMDL ground-based measurements to understanding long-term stratospheric changes

    NASA Astrophysics Data System (ADS)

    Montzka, S. A.; Butler, J. H.; Dutton, G.; Thompson, T. M.; Hall, B.; Mondeel, D. J.; Elkins, J. W.

    2005-05-01

    The El Niño-Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
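
    The standard trick behind EKF parameter estimation is state augmentation: the unknown parameter is appended to the state vector and updated through its cross-covariance with the observed state. The toy below applies this to a scalar bilinear model x[k+1] = mu*x[k] + u[k] with observations of x; all dynamics, noise levels, and the forcing are arbitrary assumptions, far simpler than the coupled ENSO model.

```python
import math, random

random.seed(2)
mu_true, R = 0.95, 1e-4          # true parameter; observation noise variance
x_true = 0.0
x, mu = 0.0, 0.5                 # initial guesses; mu deliberately wrong
P = [[0.1, 0.0], [0.0, 0.1]]     # covariance of the augmented state [x, mu]
Q = [1e-8, 1e-6]                 # process noise (state, parameter random walk)

for k in range(300):
    u = math.sin(k / 5.0)                    # known forcing keeps mu observable
    x_true = mu_true * x_true + u
    y = x_true + random.gauss(0.0, math.sqrt(R))

    # predict: f([x, mu]) = [mu*x + u, mu]; Jacobian F = [[mu, x], [0, 1]]
    xp = mu * x + u
    F00, F01 = mu, x
    P00 = F00 * F00 * P[0][0] + 2 * F00 * F01 * P[0][1] + F01 * F01 * P[1][1] + Q[0]
    P01 = F00 * P[0][1] + F01 * P[1][1]
    P11 = P[1][1] + Q[1]

    # update with the scalar observation y = x + v, H = [1, 0]
    S = P00 + R
    K0, K1 = P00 / S, P01 / S
    innov = y - xp
    x = xp + K0 * innov
    mu = mu + K1 * innov                     # parameter corrected via cross-covariance
    P = [[(1 - K0) * P00, (1 - K0) * P01],
         [(1 - K0) * P01, P11 - K1 * P01]]
```

    The parameter random-walk variance Q[1] plays the role the paper highlights: it lets mu drift on short time scales, so the filter can track regime switches.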

  7. Identifying data gaps and prioritizing restoration strategies for Fremont cottonwood using linked geomorphic and population models

    NASA Astrophysics Data System (ADS)

    Harper, E. B.; Stella, J. C.; Fremier, A. K.

    2009-12-01

    Fremont cottonwood (Populus fremontii) is an important component of semi-arid riparian ecosystems throughout western North America, but its populations are in decline due to flow regulation. Achieving a balance between human resource needs and riparian ecosystem function requires a mechanistic understanding of the multiple geomorphic and biological factors affecting tree recruitment and survival, including the timing and magnitude of river flows, and the concomitant influence on suitable habitat creation and mortality from scour and sedimentation burial. Despite a great deal of empirical research on some components of the system, such as factors affecting cottonwood recruitment, other key components are less studied. Yet understanding the relative influence of the full suite of physical and life-history drivers is critical to modeling whole-population dynamics under changing environmental conditions. We addressed these issues for the Fremont cottonwood population along the Sacramento River, CA using a sensitivity analysis approach to quantify the effect of parameter uncertainty on the outcomes of a patch-based, dynamic population model. Using a broad range of plausible values for 15 model parameters that represent key physical, biological and climatic components of the ecosystem, we ran 1,000 population simulations drawn from the 14.3 million possible combinations of parameter estimates to predict the frequency of patch colonization and the total forest habitat expected under current hydrologic conditions after 175 years. Results indicate that Fremont cottonwood populations are highly sensitive to the interactions among flow regime, sedimentation rate and the depth of the capillary fringe (Fig. 1). Estimates of long-term floodplain sedimentation rate would substantially improve model accuracy. 
    Spatial variation in sediment texture was also important to the extent that it determines the depth of the capillary fringe, which regulates the availability of water for germination and adult tree growth. Our sensitivity analyses suggest that models of future scenarios should incorporate regional climate change projections because changes in temperature and the timing and volume of precipitation affect sensitive aspects of the system, including the timing of seed release and spring snowmelt runoff. Figure 1. The relative effects on model predictions of uncertainty around each parameter included in the patch-based population model for Fremont cottonwood.
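
    The sampling-based sensitivity analysis can be sketched generically: draw many random combinations of the 15 parameters, run the model on each, and rank parameters by the strength of their association with the output. The "model" below is a hypothetical stand-in in which one parameter dominates, not the cottonwood population model.

```python
import random

random.seed(3)

def toy_habitat_model(params):
    """Hypothetical stand-in for the patch-based model: output depends
    strongly on params[0] (think sedimentation rate), weakly on the rest."""
    return (3.0 * params[0] + 0.2 * params[1]
            + 0.05 * sum(params[2:]) + random.gauss(0.0, 0.1))

n_params, n_runs = 15, 1000
samples = [[random.uniform(0, 1) for _ in range(n_params)] for _ in range(n_runs)]
outputs = [toy_habitat_model(p) for p in samples]

def corr(i):
    """Pearson correlation between parameter i and the model output."""
    xs = [s[i] for s in samples]
    mx, my = sum(xs) / n_runs, sum(outputs) / n_runs
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, outputs))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in outputs) ** 0.5
    return cov / (sx * sy)

ranking = sorted(range(n_params), key=lambda i: abs(corr(i)), reverse=True)
```
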

  8. Printability of calcium phosphate powders for three-dimensional printing of tissue engineering scaffolds.

    PubMed

    Butscher, Andre; Bohner, Marc; Roth, Christian; Ernstberger, Annika; Heuberger, Roman; Doebelin, Nicola; von Rohr, Philipp Rudolf; Müller, Ralph

    2012-01-01

    Three-dimensional printing (3DP) is a versatile method to produce scaffolds for tissue engineering. In 3DP the solid is created by the reaction of a liquid selectively sprayed onto a powder bed. Despite the importance of the powder properties, there has to date been a relatively poor understanding of the relation between the powder properties and the printing outcome. This article aims at improving this understanding by looking at the link between key powder parameters (particle size, flowability, roughness, wettability) and printing accuracy. These powder parameters are determined as key factors with a predictive value for the final 3DP outcome. Promising results can be expected for mean particle size in the range of 20-35 μm, compaction rate in the range of 1.3-1.4, flowability in the range of 5-7 and powder bed surface roughness of 10-25 μm. Finally, possible steps and strategies in pushing the physical limits concerning improved quality in 3DP are addressed and discussed. Copyright © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
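
    The promising parameter windows reported above can be wrapped into a simple screening helper. The numeric ranges come directly from the abstract; the function itself and its hard pass/fail logic are an illustrative simplification (the paper treats these as predictive factors, not strict cutoffs).

```python
def powder_printable(mean_particle_um, compaction_rate, flowability, roughness_um):
    """Screen a calcium phosphate powder against the study's promising
    ranges: particle size 20-35 um, compaction rate 1.3-1.4,
    flowability 5-7, powder bed surface roughness 10-25 um."""
    return (20 <= mean_particle_um <= 35
            and 1.3 <= compaction_rate <= 1.4
            and 5 <= flowability <= 7
            and 10 <= roughness_um <= 25)
```
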

  9. Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2018-02-01

    One of the key issues of recent research on the dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters that are related with the puff dispersion. RANS simulations with the ADREA-HF code were, therefore, performed, where a single puff was released in each case. The present method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in the code ADREA-HF is also able to predict the ensemble-average dosage, but the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was underestimated by slightly more than the acceptance criteria allow. 
    Ensemble-average peak concentration was systematically underpredicted by the model to a degree higher than allowed by the acceptance criteria in one of the two wind-tunnel experiments. The model performance depended on the positions of the examined sensors in relation to the emission source and the buildings configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".
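
    The three validation metrics named in the abstract have standard definitions in the dispersion-modeling literature, which can be written directly (commonly quoted acceptance limits, e.g. FAC2 ≥ 0.5, vary between studies and are not hard-coded here):

```python
def fractional_bias(obs, pred):
    """FB = 2 * (mean(obs) - mean(pred)) / (mean(obs) + mean(pred))."""
    mo, mp = sum(obs) / len(obs), sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def nmse(obs, pred):
    """Normalized mean square error: mean((obs - pred)^2) / (mean(obs) * mean(pred))."""
    mo, mp = sum(obs) / len(obs), sum(pred) / len(pred)
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return mse / (mo * mp)

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of the observations."""
    ok = sum(1 for o, p in zip(obs, pred) if 0.5 <= p / o <= 2.0)
    return ok / len(obs)
```
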

  10. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    NASA Astrophysics Data System (ADS)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in buildings is one of the key issues from an environmental point of view, as it is in the industrial, transportation and residential sectors. HVAC (Heating, Ventilating and Air Conditioning) systems account for about half of the total energy consumption in a building. In order to realize energy conservation in HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for load prediction model learning are scarce; (2) it has a self-checking function, which continuously supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure; and (3) it can adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement of load prediction performance is illustrated.
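
    The self-checking idea, comparing the data-driven prediction against the physics-based one and flagging inconsistency, can be sketched as a small decision rule. The blending rule, the fallback to the physical model, and the 15% tolerance are all assumptions for illustration, not the paper's actual scheme.

```python
def hybrid_load_prediction(physical_pred, jit_pred, tolerance=0.15):
    """Blend a physics-based and a data-driven (JIT) thermal load forecast.

    When the two predictions disagree by more than `tolerance` (relative),
    fall back to the physical model and flag the discrepancy; otherwise
    return their average. Returns (prediction, consistent_flag)."""
    rel_gap = abs(physical_pred - jit_pred) / max(abs(physical_pred), 1e-9)
    if rel_gap > tolerance:
        return physical_pred, False      # inconsistent: flag for review
    return 0.5 * (physical_pred + jit_pred), True
```
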

  11. Hidden markov model for the prediction of transmembrane proteins using MATLAB.

    PubMed

    Chaturvedi, Navaneet; Shanker, Sudhanshu; Singh, Vinay Kumar; Sinha, Dhiraj; Pandey, Paras Nath

    2011-01-01

    Since membrane proteins play a key role in drug targeting, transmembrane protein prediction is an active and challenging area of the biological sciences. Location-based prediction of transmembrane proteins is significant for the functional annotation of protein sequences. Hidden Markov model based methods have been widely applied for transmembrane topology prediction. Here we present a revised and more understandable model than an existing one for transmembrane protein prediction. MATLAB scripts were written and compiled for parameter estimation of the model, and the model was applied to amino acid sequences to identify transmembrane and adjacent locations. The estimated model of transmembrane topology was based on the TMHMM model architecture. Only 7 super-states are defined in the given dataset, which were converted to 96 states on the basis of their length in the sequence. The prediction accuracy of the model was observed to be about 74%, which is good in the area of transmembrane topology prediction. We therefore conclude that the hidden Markov model plays a crucial role in transmembrane helix prediction on the MATLAB platform and could also be useful for drug discovery strategies. The database is available for free at bioinfonavneet@gmail.com and vinaysingh@bhu.ac.in.
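
    Decoding the most likely sequence of hidden states (membrane vs. non-membrane) from an observed residue sequence is done with the standard Viterbi algorithm. The two-state toy HMM below, with hydrophobic ('h') and polar ('p') emission symbols and made-up probabilities, is far smaller than the TMHMM-style architecture in the paper but uses the identical recursion.

```python
def viterbi(seq, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path through an HMM (Viterbi decoding)."""
    V = [{s: start_p[s] * emit_p[s][seq[0]] for s in states}]
    back = []
    for obs in seq[1:]:
        col, ptr = {}, {}
        for s in states:
            prev, score = max(((p, V[-1][p] * trans_p[p][s]) for p in states),
                              key=lambda t: t[1])
            col[s] = score * emit_p[s][obs]
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# toy 2-state model: M = membrane helix, O = other (probabilities made up)
path = viterbi("hhpp", ("M", "O"),
               {"M": 0.5, "O": 0.5},
               {"M": {"M": 0.8, "O": 0.2}, "O": {"M": 0.2, "O": 0.8}},
               {"M": {"h": 0.9, "p": 0.1}, "O": {"h": 0.2, "p": 0.8}})
```

    For real sequences, the products are replaced by log-probability sums to avoid underflow.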

  12. Assessment of key transport parameters in a karst system under different dynamic conditions based on tracer experiments: the Jeita karst system, Lebanon

    NASA Astrophysics Data System (ADS)

    Doummar, Joanna; Margane, Armin; Geyer, Tobias; Sauter, Martin

    2018-03-01

    Artificial tracer experiments were conducted in the mature karst system of Jeita (Lebanon) under various flow conditions using surface and subsurface tracer injection points, to determine the variation of transport parameters (attenuation of peak concentration, velocity, transit times, dispersivity, and proportion of immobile and mobile regions) along fast and slow flow pathways. Tracer breakthrough curves (TBCs) observed at the karst spring were interpreted using a two-region nonequilibrium approach (2RNEM) to account for the skewness in the TBCs' long tailings. The conduit test results revealed a discharge threshold in the system dynamics, beyond which the transport parameters vary significantly. The polynomial relationship between transport velocity and discharge can be related to the variation of the conduit's cross-sectional area. Longitudinal dispersivity in the conduit system is not a constant value (α = 7-10 m) and decreases linearly with increasing flow rate because of dilution effects. Additionally, the proportion of immobile regions (arising from conduit irregularities) increases with decreasing water level in the conduit system. From tracer tests with injection at the surface, longitudinal dispersivity values are found to be large (8-27 m). The tailing observed in some TBCs is generated in the unsaturated zone before the tracer actually arrives at the major subsurface conduit draining the system. This work allows the estimation and prediction of the key transport parameters in karst aquifers. It shows that these parameters vary with time and flow dynamics, and they reflect the geometry of the flow pathway and the origin of infiltrating (potentially contaminated) recharge.
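A hedged sketch of how transport parameters can be estimated from a breakthrough curve: the code below generates a synthetic curve from the simple 1D advection-dispersion equation (not the two-region nonequilibrium model used in the study, and with illustrative parameter values rather than Jeita data) and recovers velocity and dispersivity by the method of temporal moments:

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Synthetic breakthrough curve for an instantaneous injection (illustrative values)
x = 1000.0        # injection-to-spring distance [m]
v_true = 0.05     # transport velocity [m/s]
D_true = 2.5      # dispersion coefficient [m^2/s]; dispersivity = D/v = 50 m

t = np.linspace(1.0, 60000.0, 6000)
C = np.exp(-(x - v_true * t) ** 2 / (4 * D_true * t)) / np.sqrt(4 * np.pi * D_true * t)

# Method-of-moments estimates from the curve
m0 = trapz(C, t)
t_mean = trapz(t * C, t) / m0                    # mean transit time
var_t = trapz((t - t_mean) ** 2 * C, t) / m0     # temporal variance
v_est = x / t_mean
D_est = var_t * v_est ** 3 / (2 * x)             # from var(t) ~ 2*D*x/v^3
alpha_est = D_est / v_est                        # longitudinal dispersivity [m]
```

The tailing that motivated the two-region model in the paper would show up here as a variance (and hence dispersivity) estimate inflated beyond what a single-region fit can reproduce.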

  13. A critical state model for mudrock behavior at high stress levels

    NASA Astrophysics Data System (ADS)

    Heidari, M.; Nikolinakou, M. A.; Flemings, P. B.

    2016-12-01

    Recent experimental work has documented that the compression behavior, friction angle, and lateral stress ratio (k0) of mudrocks vary over the stress range of 1 to 100 MPa. We integrate these observations into a critical state model. The internal friction angle and the slope of the compression curve are key parameters in a mudrock critical state model. Published models assume that these parameters do not depend on the stress level, and hence predict lateral stress and normalized strength ratios that do not change with the stress level. However, recent experimental data on resedimented mudrock samples from Eugene Island, Gulf of Mexico, demonstrate that all these parameters vary considerably with the stress level (Casey and Germaine, 2013; Casey et al., 2015). To represent these variations, we develop an enhanced critical state model that uses a stress-level-dependent friction angle and a curvilinear compression curve. We show that this enhanced model predicts the observed variations of the lateral stress and strength ratios. The successful performance of our model indicates that the critical state theory developed for soil can predict mudrock nonlinear behavior at high stress levels and thus can be used in modeling geologic systems. Casey, B., Germaine, J., 2013. Stress Dependence of Shear Strength in Fine-Grained Soils and Correlations with Liquid Limit. J. Geotech. Geoenviron. Eng. 139, 1709-1717. Casey, B., Germaine, J., Flemings, P.B., Fahy, B.P., 2015. Estimating horizontal stresses for mudrocks under one-dimensional compression. Mar. Pet. Geol. 65, 178-186.
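To illustrate how a stress-dependent friction angle propagates to the lateral stress ratio, the sketch below combines Jaky's classical approximation k0 ≈ 1 − sin(φ′) with an assumed logarithmic stress dependence of φ′; the functional form and constants are illustrative, not the paper's enhanced critical state model:

```python
import math

def friction_angle(sigma_v, phi0=25.0, dphi=5.0, sigma_ref=10.0):
    """Illustrative stress-dependent friction angle [deg]: decreases
    logarithmically with vertical effective stress (form assumed, not from the paper)."""
    return phi0 - dphi * math.log10(max(sigma_v, sigma_ref) / sigma_ref)

def k0(sigma_v):
    """Jaky's approximation for the lateral (at-rest) stress ratio."""
    return 1.0 - math.sin(math.radians(friction_angle(sigma_v)))

k0_low, k0_high = k0(10.0), k0(100.0)   # at 10 MPa vs 100 MPa
```

With any friction angle that decreases with stress level, k0 rises with stress, which is the qualitative behavior a constant-parameter model cannot capture.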

  14. Simulation based efficiency prediction of a Brushless DC drive applied in ventricular assist devices.

    PubMed

    Pohlmann, André; Hameyer, Kay

    2012-01-01

    Ventricular Assist Devices (VADs) are mechanical blood pumps that support the human heart in order to maintain a sufficient perfusion of the human body and its organs. During VAD operation blood damage caused by hemolysis, thrombogenecity and denaturation has to be avoided. One key parameter causing the blood's denaturation is its temperature which must not exceed 42 °C. As a temperature rise can be directly linked to the losses occuring in the drive system, this paper introduces an efficiency prediction chain for Brushless DC (BLDC) drives which are applied in various VAD systems. The presented chain is applied to various core materials and operation ranges, providing a general overview on the loss dependencies.

  15. Ganymede - A relationship between thermal history and crater statistics

    NASA Technical Reports Server (NTRS)

    Phillips, R. J.; Malin, M. C.

    1980-01-01

    An approach for factoring the effects of a planetary thermal history into a predicted set of crater statistics for an icy satellite is developed and forms the basis for subsequent data inversion studies. The key parameter is a thermal evolution-dependent critical time for which craters of a particular size forming earlier do not contribute to present-day statistics. An example is given for the satellite Ganymede and the effect of the thermal history is easily seen in the resulting predicted crater statistics. A preliminary comparison with the data, subject to the uncertainties in ice rheology and impact flux history, suggests a surface age of 3.8 × 10^9 years and a radionuclide abundance of 0.3 times the chondritic value.

  16. A stochastic whole-body physiologically based pharmacokinetic model to assess the impact of inter-individual variability on tissue dosimetry over the human lifespan.

    PubMed

    Beaudouin, Rémy; Micallef, Sandrine; Brochot, Céline

    2010-06-01

    Physiologically based pharmacokinetic (PBPK) models have proven to be successful in integrating and evaluating the influence of age- or gender-dependent changes with respect to the pharmacokinetics of xenobiotics throughout entire lifetimes. Nevertheless, for an effective application of toxicokinetic modelling to chemical risk assessment, a PBPK model has to be detailed enough to include all the multiple tissues that could be targeted by the various xenobiotics present in the environment. For this reason, we developed a PBPK model based on a detailed compartmentalization of the human body and parameterized with new relationships describing the time evolution of physiological and anatomical parameters. To take into account the impact of human variability on the predicted toxicokinetics, we defined probability distributions for key parameters related to the xenobiotics absorption, distribution, metabolism and excretion. The model predictability was evaluated by a direct comparison between computational predictions and experimental data for the internal concentrations of two chemicals (1,3-butadiene and 2,3,7,8-tetrachlorodibenzo-p-dioxin). A good agreement between predictions and observed data was achieved for different scenarios of exposure (e.g., acute or chronic exposure and different populations). Our results support that the general stochastic PBPK model can be a valuable computational support in the area of chemical risk analysis. (c)2010 Elsevier Inc. All rights reserved.
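The stochastic ingredient of this approach can be sketched with a deliberately minimal one-compartment model (the paper's model is a detailed whole-body PBPK model; the distributions and values below are illustrative): lognormal inter-individual variability in clearance and volume is propagated by Monte Carlo to percentile bounds on the predicted concentration.

```python
import numpy as np

def conc_time(dose, cl, v, t):
    """One-compartment IV bolus: C(t) = (dose/V) * exp(-(CL/V) * t)."""
    return (dose / v) * np.exp(-(cl / v) * t)

rng = np.random.default_rng(42)
n = 5000
# Lognormal inter-individual variability (geometric means and spreads are illustrative)
cl = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)   # clearance [L/h]
v = rng.lognormal(mean=np.log(50.0), sigma=0.2, size=n)    # volume of distribution [L]

t = 4.0                                   # hours after a 100 mg dose
c = conc_time(100.0, cl, v, t)            # one concentration per simulated individual
p05, p50, p95 = np.percentile(c, [5, 50, 95])
```

The percentile band (p05, p95) is the Monte Carlo analogue of the predictive uncertainty bounds the paper compares against observed kinetics.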

  17. Capillary Rise: Validity of the Dynamic Contact Angle Models.

    PubMed

    Wu, Pingkeng; Nikolov, Alex D; Wasan, Darsh T

    2017-08-15

    The classical Lucas-Washburn-Rideal (LWR) equation, using the equilibrium contact angle, predicts a faster capillary rise process than experiments in many cases. The major contributor to the faster prediction is believed to be the velocity dependent dynamic contact angle. In this work, we investigated the dynamic contact angle models for their ability to correct the dynamic contact angle effect in the capillary rise process. We conducted capillary rise experiments of various wetting liquids in borosilicate glass capillaries and compared the model predictions with our experimental data. The results show that the LWR equations modified by the molecular kinetic theory and hydrodynamic model provide good predictions on the capillary rise of all the testing liquids with fitting parameters, while the one modified by Joos' empirical equation works for specific liquids, such as silicone oils. The LWR equation modified by molecular self-layering model predicts well the capillary rise of carbon tetrachloride, octamethylcyclotetrasiloxane, and n-alkanes with the molecular diameter or measured solvation force data. The molecular self-layering model modified LWR equation also has good predictions on the capillary rise of silicone oils covering a wide range of bulk viscosities with the same key parameter W(0), which results from the molecular self-layering. The advantage of the molecular self-layering model over the other models reveals the importance of the layered molecularly thin wetting film ahead of the main meniscus in the energy dissipation associated with dynamic contact angle. The analysis of the capillary rise of silicone oils with a wide range of bulk viscosities provides new insights into the capillary dynamics of polymer melts.
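For reference, the classical Lucas-Washburn dynamics that the modified equations in the paper correct can be sketched as follows; inertia, gravity, and the dynamic contact-angle effects discussed in the paper are all neglected, and the fluid properties are illustrative:

```python
import math

def washburn_height(t, r, gamma, mu, theta_deg=0.0):
    """Classical Lucas-Washburn rise height: h(t) = sqrt(r*gamma*cos(theta)*t / (2*mu))."""
    return math.sqrt(r * gamma * math.cos(math.radians(theta_deg)) * t / (2.0 * mu))

# Illustrative values, roughly a light silicone oil in a glass capillary
r, gamma, mu = 0.2e-3, 0.020, 0.01   # radius [m], surface tension [N/m], viscosity [Pa*s]
h1 = washburn_height(1.0, r, gamma, mu)   # height after 1 s
h4 = washburn_height(4.0, r, gamma, mu)   # height after 4 s
```

Because the equilibrium contact angle is used, this predicts the sqrt(t) rise that experiments show to be too fast; the velocity-dependent dynamic contact angle models examined in the paper slow the early-time prediction.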

  18. Quantitative predictions of streamflow variability in the Susquehanna River Basin

    NASA Astrophysics Data System (ADS)

    Alexander, R.; Boyer, E. W.; Leonard, L. N.; Duffy, C.; Schwarz, G. E.; Smith, R. A.

    2012-12-01

    Hydrologic researchers and water managers have increasingly sought an improved understanding of the major processes that control fluxes of water and solutes across diverse environmental settings and large spatial scales. Regional analyses of observed streamflow data have led to advances in our knowledge of relations among land use, climate, and streamflow, with methodologies ranging from statistical assessments of multiple monitoring sites to the regionalization of the parameters of catchment-scale mechanistic simulation models. However, gaps remain in our understanding of the best ways to transfer the knowledge of hydrologic response and governing processes among locations, including methods for regionalizing streamflow measurements and model predictions. We developed an approach to predict variations in streamflow using the SPARROW (SPAtially Referenced Regression On Watershed attributes) modeling infrastructure, with mechanistic functions, mass conservation constraints, and statistical estimation of regional and sub-regional parameters. We used the model to predict discharge in the Susquehanna River Basin (SRB) under varying hydrological regimes that are representative of contemporary flow conditions. The resulting basin-scale water balance describes mean monthly flows in stream reaches throughout the entire SRB (represented at a 1:100,000 scale using the National Hydrologic Data network), with water supply and demand components that are inclusive of a range of hydrologic, climatic, and cultural properties (e.g., precipitation, evapotranspiration, soil and groundwater storage, runoff, baseflow, water use). We compare alternative models of varying complexity that reflect differences in the number and types of explanatory variables and functional expressions as well as spatial and temporal variability in the model parameters. 
Statistical estimation of the models reveals the levels of complexity that can be uniquely identified, subject to the information content and uncertainties of the hydrologic and climate measurements. Assessment of spatial variations in the model parameters and predictions provides an improved understanding of how much of the hydrologic response to land use, climate, and other properties is unique to specific locations versus more universally observed across catchments of the SRB. This approach advances understanding of water cycle variability at any location throughout the stream network, as a function of both landscape characteristics (e.g., soils, vegetation, land use) and external forcings (e.g., precipitation quantity and frequency). These improvements in predictions of streamflow dynamics will advance the ability to predict spatial and temporal variability in key solutes, such as nutrients, and their delivery to the Chesapeake Bay.

  19. A Solution to the Cosmic Conundrum including Cosmological Constant and Dark Energy Problems

    NASA Astrophysics Data System (ADS)

    Singh, A.

    2009-12-01

    A comprehensive solution to the cosmic conundrum is presented that also resolves key paradoxes of quantum mechanics and relativity. A simple mathematical model, the Gravity Nullification model (GNM), is proposed that integrates the missing physics of the spontaneous relativistic conversion of mass to energy into the existing physics theories, specifically a simplified general theory of relativity. Mechanistic mathematical expressions are derived for a relativistic universe expansion, which predict both the observed linear Hubble expansion in the nearby universe and the accelerating expansion exhibited by the supernova observations. The integrated model addresses the key questions haunting physics and Big Bang cosmology. It also provides a fresh perspective on the misconceived birth and evolution of the universe, especially the creation and dissolution of matter. The proposed model eliminates singularities from existing models and the need for the incredible and unverifiable assumptions including the superluminous inflation scenario, multiple universes, multiple dimensions, Anthropic principle, and quantum gravity. GNM predicts the observed features of the universe without any explicit consideration of time as a governing parameter.

  20. Two-Pendulum Model of Propellant Slosh in Europa Clipper PMD Tank

    NASA Technical Reports Server (NTRS)

    Ng, Wanyi; Benson, David

    2017-01-01

    The objective of this fluids analysis is to model propellant slosh for the Europa Clipper mission using a two-pendulum model, such that controls engineers can predict slosh behavior during the mission. Propellant slosh causes shifts in center of mass and exerts forces and torques on the spacecraft which, if not adequately controlled, can lead to mission failure. The two-pendulum model provides a computationally simple model that can be used to predict slosh for the Europa Clipper tank geometry. The Europa Clipper tank is cylindrical with a domed top and bottom and includes a propellant management device (PMD). Due to the lack of experimental data in low gravity environments, computational fluid dynamics (CFD) simulation results were used as 'real' slosh behavior for two propellants at three fill fractions. Key pendulum parameters were derived that allow the pendulum model's center of mass, forces, and moments to closely match the CFD data. The parameter trends were examined as a function of tank fill fraction and compared with solutions to analytic equations that describe the frequency of slosh in tanks with simple geometries. The trends were monotonic as expected, and parameters resembled analytical predictions; any differences could be explained by the specific differences in the geometry of the tank. This paper summarizes the new method developed at Goddard Space Flight Center (GSFC) for deriving pendulum parameters for two-pendulum equivalent sloshing models. It presents the results of this method and discusses the validity of the results. This analysis is at a completed stage and will be applied in the immediate future to the evolving tank geometry as Europa Clipper moves past its preliminary design review (PDR) phase.
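A hedged sketch of the equivalent-pendulum idea, using classical linear slosh theory for a plain upright flat-bottom cylinder without the domes or PMD of the Clipper tank (so these are textbook values, not the paper's CFD-derived parameters):

```python
import math

def slosh_pendulum_length(radius, fill_height):
    """Equivalent pendulum length for the first lateral slosh mode in an
    upright cylindrical tank (classical linear slosh theory)."""
    lam1 = 1.841  # first root of J1'(x), the derivative of the Bessel function J1
    return radius / (lam1 * math.tanh(lam1 * fill_height / radius))

def slosh_frequency(radius, fill_height, g=9.81):
    """Natural frequency [rad/s] of the equivalent pendulum: omega = sqrt(g/L)."""
    return math.sqrt(g / slosh_pendulum_length(radius, fill_height))

L_deep = slosh_pendulum_length(0.5, 2.0)     # nearly full tank
L_shallow = slosh_pendulum_length(0.5, 0.2)  # low fill fraction
```

The monotonic dependence of pendulum length (and hence frequency) on fill height is the analytic trend the paper compares its CFD-fitted parameters against.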

  1. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    PubMed Central

    Wang, Jie-Sheng; Han, Shuang

    2015-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has good optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity and position vectors of GSA are therefore adjusted by the PSO algorithm in order to improve convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034
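A minimal sketch of the hybridization idea on a simple test function: GSA-style gravitational accelerations are combined with a PSO-style pull toward the best solution found so far. All coefficients and the decay schedule are illustrative, not the paper's settings:

```python
import numpy as np

def sphere(x):
    """Toy objective to minimize."""
    return float(np.sum(x ** 2))

def pso_gsa(fn, dim=2, n=20, iters=100, seed=0):
    """Minimal PSO-GSA hybrid: GSA accelerations plus a PSO social term."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    gbest = x[np.argmin([fn(xi) for xi in x])].copy()
    for it in range(iters):
        f = np.array([fn(xi) for xi in x])
        m = (f - f.max()) / (f.min() - f.max() + 1e-12)   # GSA masses in [0, 1]
        M = m / (m.sum() + 1e-12)
        G = np.exp(-5.0 * it / iters)                     # decaying gravitational constant
        a = np.zeros_like(x)
        for i in range(n):
            for j in range(n):
                if i != j:
                    d = x[j] - x[i]
                    a[i] += rng.random() * G * M[j] * d / (np.linalg.norm(d) + 1e-12)
        # PSO-style update steering the GSA acceleration toward the global best
        v = 0.5 * v + rng.random((n, dim)) * a + 1.5 * rng.random((n, dim)) * (gbest - x)
        x = x + v
        i_best = np.argmin([fn(xi) for xi in x])
        if fn(x[i_best]) < fn(gbest):
            gbest = x[i_best].copy()
    return gbest, fn(gbest)

best_x, best_f = pso_gsa(sphere)
```

In the paper this hybrid optimizer tunes the FNN weights; here it is shown on a standalone benchmark only.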

  2. The genetic landscape of a physical interaction

    PubMed Central

    Diss, Guillaume

    2018-01-01

    A key question in human genetics and evolutionary biology is how mutations in different genes combine to alter phenotypes. Efforts to systematically map genetic interactions have mostly made use of gene deletions. However, most genetic variation consists of point mutations with diverse and difficult-to-predict effects. Here, by developing a new sequencing-based protein interaction assay, deepPCA, we quantified the effects of >120,000 pairs of point mutations on the formation of the AP-1 transcription factor complex between the products of the FOS and JUN proto-oncogenes. Genetic interactions are abundant both in cis (within one protein) and in trans (between the two molecules) and consist of two classes: interactions driven by thermodynamics, which can be predicted using a three-parameter global model, and structural interactions between proximally located residues. These results reveal how physical interactions generate quantitatively predictable genetic interactions. PMID:29638215

  3. Predicting long-term performance of engineered geologic carbon dioxide storage systems to inform decisions amidst uncertainty

    NASA Astrophysics Data System (ADS)

    Pawar, R.

    2016-12-01

    Risk assessment and risk management of engineered geologic CO2 storage systems is an area of active investigation. The potential geologic CO2 storage systems currently under consideration are inherently heterogeneous and have limited to no characterization data. Effective risk management decisions to ensure safe, long-term CO2 storage requires assessing and quantifying risks while taking into account the uncertainties in a storage site's characteristics. The key decisions are typically related to definition of area of review, effective monitoring strategy and monitoring duration, potential of leakage and associated impacts, etc. A quantitative methodology for predicting a sequestration site's long-term performance is critical for making key decisions necessary for successful deployment of commercial scale geologic storage projects where projects will require quantitative assessments of potential long-term liabilities. An integrated assessment modeling (IAM) paradigm which treats a geologic CO2 storage site as a system made up of various linked subsystems can be used to predict long-term performance. The subsystems include storage reservoir, seals, potential leakage pathways (such as wellbores, natural fractures/faults) and receptors (such as shallow groundwater aquifers). CO2 movement within each of the subsystems and resulting interactions are captured through reduced order models (ROMs). The ROMs capture the complex physical/chemical interactions resulting due to CO2 movement and interactions but are computationally extremely efficient. The computational efficiency allows for performing Monte Carlo simulations necessary for quantitative probabilistic risk assessment. We have used the IAM to predict long-term performance of geologic CO2 sequestration systems and to answer questions related to probability of leakage of CO2 through wellbores, impact of CO2/brine leakage into shallow aquifer, etc. 
Answers to such questions are critical in making key risk management decisions. A systematic uncertainty quantification approach can be used to understand how uncertain parameters associated with different subsystems (e.g., reservoir permeability, wellbore cement permeability, wellbore density, etc.) impact the overall site performance predictions.

  4. Current State and Future Perspectives in QSAR Models to Predict Blood-Brain Barrier Penetration in Central Nervous System Drug R&D.

    PubMed

    Morales, Juan F; Montoto, Sebastian Scioli; Fagiolino, Pietro; Ruiz, Maria E

    2017-01-01

    The Blood-Brain Barrier (BBB) is a physical and biochemical barrier that restricts the entry of certain drugs to the Central Nervous System (CNS), while allowing the passage of others. The ability to predict the permeability of a given molecule through the BBB is a key aspect in CNS drug discovery and development, since neurotherapeutic agents with molecular targets in the CNS should be able to cross the BBB, whereas peripherally acting agents should not, to minimize the risk of CNS adverse effects. In this review we examine and discuss QSAR approaches and current availability of experimental data for the construction of BBB permeability predictive models, focusing on the modeling of the biorelevant parameter unbound partitioning coefficient (Kp,uu). Emphasis is made on two possible strategies to overcome the current limitations of in silico models: considering the prediction of brain penetration as a multifactorial problem, and increasing experimental datasets through accurate and standardized experimental techniques.

  5. Comprehensive computational model for combining fluid hydrodynamics, light transport and biomass growth in a Taylor vortex algal photobioreactor: Lagrangian approach.

    PubMed

    Gao, Xi; Kong, Bo; Vigil, R Dennis

    2017-01-01

    A comprehensive quantitative model incorporating the effects of fluid flow patterns, light distribution, and algal growth kinetics on biomass growth rate is developed in order to predict the performance of a Taylor vortex algal photobioreactor for culturing Chlorella vulgaris. A commonly used Lagrangian strategy for coupling the various factors influencing algal growth was employed whereby results from computational fluid dynamics and radiation transport simulations were used to compute numerous microorganism light exposure histories, and this information in turn was used to estimate the global biomass specific growth rate. The simulations provide good quantitative agreement with experimental data and correctly predict the trend in reactor performance as a key reactor operating parameter is varied (inner cylinder rotation speed). However, biomass growth curves are consistently over-predicted and potential causes for these over-predictions and drawbacks of the Lagrangian approach are addressed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Model-Based Approach to Predict Adherence to Protocol During Antiobesity Trials.

    PubMed

    Sharma, Vishnu D; Combes, François P; Vakilynejad, Majid; Lahu, Gezim; Lesko, Lawrence J; Trame, Mirjam N

    2018-02-01

    Development of antiobesity drugs is continuously challenged by high dropout rates during clinical trials. The objective was to develop a population pharmacodynamic model that describes the temporal changes in body weight, considering disease progression, lifestyle intervention, and drug effects. Markov modeling (MM) was applied for quantification and characterization of responders and nonresponders as key drivers of dropout rates, to ultimately support the clinical trial simulations and the outcome in terms of trial adherence. Subjects (n = 4591) from 6 Contrave® trials were included in this analysis. An indirect-response model developed by van Wart et al. was used as a starting point. Inclusion of drug effect was dose driven using a population dose- and time-dependent pharmacodynamic (DTPD) model. Additionally, a population-pharmacokinetic parameter- and data (PPPD)-driven model was developed using the final DTPD model structure and final parameter estimates from a previously developed population pharmacokinetic model based on available Contrave® pharmacokinetic concentrations. Last, MM was developed to predict transition rate probabilities among responder, nonresponder, and dropout states driven by the pharmacodynamic effect resulting from the DTPD or PPPD model. Covariates included in the models were diabetes mellitus and race. The linked DTPD-MM and PPPD-MM were able to predict transition rates among responder, nonresponder, and dropout states well. The analysis concluded that body-weight change is an important factor influencing dropout rates, and the MM depicted that overall a DTPD model-driven approach provides a reasonable prediction of clinical trial outcome probabilities similar to a pharmacokinetic-driven approach. © 2017, The Authors. The Journal of Clinical Pharmacology published by Wiley Periodicals, Inc. on behalf of American College of Clinical Pharmacology.
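The MM idea can be sketched as a three-state Markov chain over responder, nonresponder, and dropout states; the transition probabilities below are placeholders for illustration, not estimates from the trials:

```python
import numpy as np

# States: 0 = responder, 1 = nonresponder, 2 = dropout (absorbing)
# Per-visit transition probabilities (rows sum to 1; values are placeholders)
P = np.array([
    [0.85, 0.10, 0.05],
    [0.15, 0.70, 0.15],
    [0.00, 0.00, 1.00],
])

def occupancy(p0, P, n_steps):
    """State-occupancy probabilities after n_steps transitions."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_steps):
        p = p @ P
    return p

p_end = occupancy([0.3, 0.7, 0.0], P, 12)  # e.g., after 12 scheduled visits
dropout_frac = p_end[2]
```

In the paper's linked models, the transition probabilities are not constants but are driven by the pharmacodynamic body-weight effect, so a stronger drug response shifts probability mass away from the dropout state.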

  7. Selection, calibration, and validation of models of tumor growth.

    PubMed

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOIs. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments.
Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.

  8. Bayesian calibration of mechanistic aquatic biogeochemical models and benefits for environmental management

    NASA Astrophysics Data System (ADS)

    Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu

    2008-09-01

    Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions, then water quality data are used to update the distributions, and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six state variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment; the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations; a convenient way to obtain representative samples of parameter values.
The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.
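The Bayesian updating machinery can be illustrated far more simply than with the six-state-variable plankton model: below, a random-walk Metropolis sampler calibrates a single decay-rate parameter of a toy model against noisy synthetic observations (all values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(k, t):
    """Toy 'ecosystem' model: exponential nutrient decay with rate k."""
    return np.exp(-k * t)

# Synthetic observations from a known true parameter plus measurement noise
t_obs = np.linspace(0, 5, 20)
k_true, sigma = 0.7, 0.05
y_obs = model(k_true, t_obs) + rng.normal(0, sigma, t_obs.size)

def log_post(k):
    """Log posterior: Gaussian likelihood with a flat prior on k > 0."""
    if k <= 0:
        return -np.inf
    resid = y_obs - model(k, t_obs)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis sampling of the posterior
samples, k = [], 1.0
lp = log_post(k)
for _ in range(20000):
    k_new = k + rng.normal(0, 0.05)
    lp_new = log_post(k_new)
    if np.log(rng.random()) < lp_new - lp:   # accept/reject step
        k, lp = k_new, lp_new
    samples.append(k)

post = np.array(samples[5000:])   # discard burn-in
k_hat = post.mean()
```

The retained samples characterize the posterior distribution of the parameter; in the full model the same scheme runs jointly over all parameters, which is what allows their dependence structure to be quantified.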

  9. Uptake and localization mechanisms of fluorescent and colored lipid probes. Part 2. QSAR models that predict localization of fluorescent probes used to identify ("specifically stain") various biomembranes and membranous organelles.

    PubMed

    Horobin, R W; Stockert, J C; Rashid-Doubell, F

    2015-05-01

    We discuss a variety of biological targets including generic biomembranes and the membranes of the endoplasmic reticulum, endosomes/lysosomes, Golgi body, mitochondria (outer and inner membranes) and the plasma membrane of usual fluidity. For each target, we discuss the access of probes to the target membrane, probe uptake into the membrane and the mechanism of selectivity of the probe uptake. A statement of the QSAR decision rule that describes the required physicochemical features of probes that enable selective staining also is provided, followed by comments on exceptions and limits. Examples of probes typically used to demonstrate each target structure are noted and decision rule tabulations are provided for probes that localize in particular targets; these tabulations show distribution of probes in the conceptual space defined by the relevant structure parameters ("parameter space"). Some general implications and limitations of the QSAR models for probe targeting are discussed including the roles of certain cell and protocol factors that play significant roles in lipid staining. A case example illustrates the predictive ability of QSAR models. Key limiting values of the head group hydrophilicity parameter associated with membrane-probe interactions are discussed in an appendix.

  10. Validation of systems biology derived molecular markers of renal donor organ status associated with long term allograft function.

    PubMed

    Perco, Paul; Heinzel, Andreas; Leierer, Johannes; Schneeberger, Stefan; Bösmüller, Claudia; Oberhuber, Rupert; Wagner, Silvia; Engler, Franziska; Mayer, Gert

    2018-05-03

    Donor organ quality affects long term outcome after renal transplantation. A variety of prognostic molecular markers is available, yet their validity often remains undetermined. A network-based molecular model reflecting donor kidney status based on transcriptomics data and molecular features reported in scientific literature to be associated with chronic allograft nephropathy was created. Significantly enriched biological processes were identified and representative markers were selected. An independent kidney pre-implantation transcriptomics dataset of 76 organs was used to predict estimated glomerular filtration rate (eGFR) values twelve months after transplantation using available clinical data and marker expression values. The best-performing regression model solely based on the clinical parameters donor age, donor gender, and recipient gender explained 17% of variance in post-transplant eGFR values. The five molecular markers EGF, CD2BP2, RALBP1, SF3B1, and DDX19B representing key molecular processes of the constructed renal donor organ status molecular model in addition to the clinical parameters significantly improved model performance (p-value = 0.0007) explaining around 33% of the variability of eGFR values twelve months after transplantation. Collectively, molecular markers reflecting donor organ status significantly add to prediction of post-transplant renal function when added to the clinical parameters donor age and gender.
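
The incremental-variance comparison at the heart of this design (clinical parameters alone versus clinical parameters plus markers) can be illustrated with ordinary least squares on synthetic data. The cohort size, marker effect, and noise level below are invented for illustration only:

```python
import random

random.seed(7)

def ols_r2(X, y):
    """Ordinary least squares via normal equations; returns R^2.
    X is a list of rows, each with the intercept column already included."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    c = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    b = [0.0] * p
    for r in range(p - 1, -1, -1):
        b[r] = (c[r] - sum(A[r][k] * b[k] for k in range(r + 1, p))) / A[r][r]
    yhat = [sum(X[i][j] * b[j] for j in range(p)) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic cohort: eGFR depends on donor age and one marker expression level.
n = 80
age = [random.uniform(20, 70) for _ in range(n)]
marker = [random.gauss(0, 1) for _ in range(n)]
egfr = [90 - 0.4 * a + 6.0 * m + random.gauss(0, 8) for a, m in zip(age, marker)]

r2_clinical = ols_r2([[1.0, a] for a in age], egfr)
r2_combined = ols_r2([[1.0, a, m] for a, m in zip(age, marker)], egfr)
```

As in the study, the combined model explains more variance than the clinical baseline; whether the gain is significant would be tested with an F-test or likelihood-ratio test on real data.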

  11. A simple physiologically based pharmacokinetic model evaluating the effect of anti-nicotine antibodies on nicotine disposition in the brains of rats and humans

    PubMed Central

    Saylor, Kyle; Zhang, Chenming

    2017-01-01

    Physiologically based pharmacokinetic (PBPK) modeling was applied to investigate the effects of anti-nicotine antibodies on nicotine disposition in the brains of rats and humans. Successful construction of both rat and human models was achieved by fitting model outputs to published nicotine concentration time course data in the blood and in the brain. Key parameters presumed to have the most effect on the ability of these antibodies to prevent nicotine from entering the brain were selected for investigation using the human model. These parameters, which included antibody affinity for nicotine, antibody cross-reactivity with cotinine, and antibody concentration, were broken down into different, clinically-derived in silico treatment levels and fed into the human PBPK model. Model predictions suggested that all three parameters, in addition to smoking status, have a sizable impact on anti-nicotine antibodies’ ability to prevent nicotine from entering the brain and that the antibodies elicited by current human vaccines do not have sufficient binding characteristics to reduce brain nicotine concentrations. If the antibody binding characteristics achieved in animal studies can similarly be achieved in human studies, however, nicotine vaccine efficacy in terms of brain nicotine concentration reduction is predicted to meet threshold values for alleviating nicotine dependence. PMID:27473014
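
A heavily simplified PBPK-style sketch (two compartments, instantaneous antibody binding, arbitrary rate constants; none of these values come from the paper) shows the qualitative effect of antibody binding capacity on peak brain exposure:

```python
# All rate constants below are illustrative, not fitted values from the study.
def simulate_brain(ab_capacity, dt=0.01, t_end=10.0):
    """Minimal two-compartment sketch: antibody binding in blood lowers the
    free nicotine available to cross into the brain."""
    blood, brain = 1.0, 0.0           # arbitrary dose units
    k_elim, k_in, k_out = 0.3, 0.8, 0.5
    peak = 0.0
    t = 0.0
    while t < t_end:
        free = blood / (1.0 + ab_capacity)   # instantaneous-binding approximation
        d_blood = -k_elim * free - k_in * free + k_out * brain
        d_brain = k_in * free - k_out * brain
        blood += d_blood * dt
        brain += d_brain * dt
        peak = max(peak, brain)
        t += dt
    return peak

peak_no_ab = simulate_brain(0.0)     # no vaccine-elicited antibody
peak_with_ab = simulate_brain(5.0)   # hypothetical high-capacity antibody
```

Raising the hypothetical binding capacity throttles brain influx while brain efflux is unchanged, so peak brain concentration drops, mirroring the direction of the model predictions described above.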

  12. Predicting the photoinduced electron transfer thermodynamics in polyfluorinated 1,3,5-triarylpyrazolines based on multiple linear free energy relationships


    PubMed Central

    Verma, Manjusha; Chaudhry, Aneese F.; Fahrni, Christoph J.

    2010-01-01

    The photophysical properties of 1,3,5-triarylpyrazolines are strongly influenced by the nature and position of substituents attached to the aryl-rings, rendering this fluorophore platform well suited for the design of fluorescent probes utilizing a photoinduced electron transfer (PET) switching mechanism. To explore the tunability of two key parameters that govern the PET thermodynamics, the excited state energy ΔE00 and acceptor potential E(A/A−), a library of polyfluoro-substituted 1,3-diaryl-5-phenyl-pyrazolines was synthesized and characterized. The observed trends for the PET parameters were effectively captured through multiple Hammett linear free energy relationships (LFER) using a set of independent substituent constants for each of the two aryl rings. Given the lack of experimental Hammett constants for polyfluoro substituted aromatics, theoretically derived constants based on the electrostatic potential at the nucleus (EPN) of carbon atoms were employed as quantum chemical descriptors. The performance of the LFER was evaluated with a set of compounds that were not included in the training set, yielding a mean unsigned error of 0.05 eV for the prediction of the combined PET parameters. The outlined LFER approach should be well suited to design and optimize the performance of cation-responsive 1,3,5-triarylpyrazolines. PMID:19343239

  13. Predicting species' range limits from functional traits for the tree flora of North America.

    PubMed

    Stahl, Ulrike; Reu, Björn; Wirth, Christian

    2014-09-23

    Using functional traits to explain species' range limits is a promising approach in functional biogeography. It replaces the idiosyncrasy of species-specific climate ranges with a generic trait-based predictive framework. In addition, it has the potential to shed light on specific filter mechanisms creating large-scale vegetation patterns. However, its application to a continental flora, spanning large climate gradients, has been hampered by a lack of trait data. Here, we explore whether five key plant functional traits (seed mass, wood density, specific leaf area (SLA), maximum height, and longevity of a tree)--indicative of life history, mechanical, and physiological adaptations--explain the climate ranges of 250 North American tree species distributed from the boreal to the subtropics. Although the relationship between traits and the median climate across a species range is weak, quantile regressions revealed strong effects on range limits. Wood density and seed mass were strongly related to the lower but not upper temperature range limits of species. Maximum height affects the species range limits in both dry and humid climates, whereas SLA and longevity do not show clear relationships. These results allow the definition and delineation of climatic "no-go areas" for North American tree species based on key traits. As some of these key traits serve as important parameters in recent vegetation models, the implementation of trait-based climatic constraints has the potential to predict both range shifts and ecosystem consequences on a more functional basis. Moreover, for future trait-based vegetation models our results provide a benchmark for model evaluation.

  14. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

    All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods have assumptions the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state of the art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
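
The forecasting step can be illustrated with plain linear regression over a parameter value's recent performance history, projecting one step ahead and selecting the value with the best projection. The operator names and success rates below are hypothetical:

```python
def forecast_next(series):
    """Least-squares line through (t, y) points; returns the projected next value."""
    n = len(series)
    ts = list(range(n))
    tbar = sum(ts) / n
    ybar = sum(series) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, series))
    den = sum((t - tbar) ** 2 for t in ts)
    slope = num / den
    intercept = ybar - slope * tbar
    return intercept + slope * n     # extrapolate to the next iteration

# Hypothetical success rates of two parameter settings over past iterations.
history = {
    "low_mutation":  [0.42, 0.44, 0.47, 0.49, 0.52],
    "high_mutation": [0.55, 0.52, 0.50, 0.47, 0.45],
}
predicted = {name: forecast_next(h) for name, h in history.items()}
best = max(predicted, key=predicted.get)
```

The linear-trend assumption is exactly the kind of conformance condition the abstract discusses: when the performance series is not roughly linear, this forecast degrades gracefully rather than catastrophically.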

  15. In silico predictions of gastrointestinal drug absorption in pharmaceutical product development: application of the mechanistic absorption model GI-Sim.

    PubMed

    Sjögren, Erik; Westergren, Jan; Grant, Iain; Hanisch, Gunilla; Lindfors, Lennart; Lennernäs, Hans; Abrahamsson, Bertil; Tannergren, Christer

    2013-07-16

    Oral drug delivery is the predominant administration route for a major part of the pharmaceutical products used worldwide. Further understanding and improvement of gastrointestinal drug absorption predictions is currently a highly prioritized area of research within the pharmaceutical industry. The fraction absorbed (fabs) of an oral dose after administration of a solid dosage form is a key parameter in the estimation of the in vivo performance of an orally administered drug formulation. This study presents an evaluation of the predictive performance of the mechanistic physiologically based absorption model GI-Sim. GI-Sim deploys a compartmental gastrointestinal absorption and transit model as well as algorithms describing permeability, dissolution rate, salt effects, partitioning into micelles, particle and micelle drifting in the aqueous boundary layer, particle growth and amorphous or crystalline precipitation. Twelve APIs with reported or expected absorption limitations in humans, due to permeability, dissolution and/or solubility, were investigated. Predictions of the intestinal absorption for different doses and formulations were performed based on physicochemical and biopharmaceutical properties, such as solubility in buffer and simulated intestinal fluid, molecular weight, pKa, diffusivity and molecule density, measured or estimated human effective permeability, and particle size distribution. The performance of GI-Sim was evaluated by comparing predicted plasma concentration-time profiles along with oral pharmacokinetic parameters originating from clinical studies in healthy individuals. The capability of GI-Sim to correctly predict the impact of dose and particle size, as well as the in vivo performance of nanoformulations, was also investigated.
The overall predictive performance of GI-Sim was good: >95% of the predicted pharmacokinetic parameters (Cmax and AUC) were within a 2-fold deviation from the clinical observations, and the predicted plasma AUC was within one standard deviation of the observed mean plasma AUC in 74% of the simulations. GI-Sim also correctly captured the trends in dose-dependent and particle-size-dependent absorption for the study drugs with solubility-limited and dissolution-limited absorption, respectively. In addition, GI-Sim was shown to predict the increase in absorption and plasma exposure achieved with nanoformulations. Based on these results, the performance of GI-Sim is suitable for early risk assessment as well as for guiding decision making in pharmaceutical formulation development.
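
The compartmental absorption and transit idea underlying models of this class can be sketched as follows. The seven-compartment layout and the rate constants are generic textbook-style assumptions, not GI-Sim's actual parameterization:

```python
# Illustrative compartmental absorption and transit (CAT-style) sketch.
def fraction_absorbed(ka, n_comp=7, k_transit=2.0, dt=0.001, t_end=24.0):
    """Drug in solution moves through n_comp intestinal compartments at rate
    k_transit (1/h) and is absorbed from each at rate ka (1/h); whatever
    leaves the last compartment unabsorbed is lost."""
    gut = [1.0] + [0.0] * (n_comp - 1)   # dose starts in the first compartment
    absorbed = 0.0
    t = 0.0
    while t < t_end:
        new = gut[:]
        for i in range(n_comp):
            out = k_transit * gut[i] + ka * gut[i]
            new[i] -= out * dt
            if i + 1 < n_comp:
                new[i + 1] += k_transit * gut[i] * dt
            absorbed += ka * gut[i] * dt
        gut = new
        t += dt
    return absorbed

f_high_perm = fraction_absorbed(ka=1.5)   # permeability not rate limiting
f_low_perm = fraction_absorbed(ka=0.1)    # permeability-limited absorption
```

For this linear case the simulation approaches the analytic result fabs = 1 - (kt/(kt+ka))^n, which makes the permeability-limited regime in the abstract easy to see: small ka leaves most of the dose to transit out unabsorbed.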

  16. Unitarity and predictiveness in new Higgs inflation

    NASA Astrophysics Data System (ADS)

    Fumagalli, Jacopo; Mooij, Sander; Postma, Marieke

    2018-03-01

    In new Higgs inflation the Higgs kinetic terms are non-minimally coupled to the Einstein tensor, allowing the Higgs field to play the role of the inflaton. The new interaction is non-renormalizable, and the model only describes physics below some cutoff scale. Even if the unknown UV physics does not affect the tree level inflaton potential significantly, it may still enter at loop level and modify the running of the Standard Model (SM) parameters. This is analogous to what happens in the original model for Higgs inflation. A key difference, though, is that in new Higgs inflation the inflationary predictions are sensitive to this running. Thus the boundary conditions at the EW scale as well as the unknown UV completion may leave a signature on the inflationary parameters. However, this dependence can be evaded if the kinetic terms of the SM fermions and gauge fields are non-minimally coupled to gravity as well. Our approach to determine the model's UV dependence and the connection between low and high scale physics can be used in any particle physics model of inflation.

  17. Remote monitoring of LED lighting system performance

    NASA Astrophysics Data System (ADS)

    Thotagamuwa, Dinusha R.; Perera, Indika U.; Narendran, Nadarajah

    2016-09-01

    The concept of connected lighting systems using LED lighting for the creation of intelligent buildings is becoming attractive to building owners and managers. In this application, the two most important parameters include power demand and the remaining useful life of the LED fixtures. The first enables energy-efficient buildings and the second helps building managers schedule maintenance services. The failure of an LED lighting system can be parametric (such as lumen depreciation) or catastrophic (such as complete cessation of light). Catastrophic failures in LED lighting systems can create serious consequences in safety critical and emergency applications. Therefore, both failure mechanisms must be considered and the shorter of the two must be used as the failure time. Furthermore, because of significant variation between the useful lives of similar products, it is difficult to accurately predict the life of LED systems. Real-time data gathering and analysis of key operating parameters of LED systems can enable the accurate estimation of the useful life of a lighting system. This paper demonstrates the use of a data-driven method (Euclidean distance) to monitor the performance of an LED lighting system and predict its time to failure.
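
The data-driven distance check described above can be sketched directly. The feature set, baseline, and failure threshold are illustrative assumptions, not values from the paper:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical normalized features: [luminous flux, forward voltage, temperature].
healthy_baseline = [1.00, 1.00, 1.00]
threshold = 0.25     # assumed failure threshold in normalized feature space

readings = [
    [0.99, 1.01, 1.02],   # early life
    [0.93, 1.03, 1.08],   # mid life
    [0.78, 1.09, 1.21],   # approaching parametric failure
]
distances = [euclidean(r, healthy_baseline) for r in readings]
flags = [d > threshold for d in distances]
```

A time-to-failure estimate would then extrapolate the distance trajectory to the threshold crossing, which is where the real-time data gathering emphasized in the abstract pays off.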

  18. Predicting coral bleaching in response to environmental stressors using 8 years of global-scale data.

    PubMed

    Yee, Susan Harrell; Barron, Mace G

    2010-02-01

    Coral reefs have experienced extensive mortality over the past few decades as a result of temperature-induced mass bleaching events. There is an increasing realization that other environmental factors, including water mixing, solar radiation, water depth, and water clarity, interact with temperature to either exacerbate bleaching or protect coral from mass bleaching. The relative contribution of these factors to variability in mass bleaching at a global scale has not been quantified, but can provide insights when making large-scale predictions of mass bleaching events. Using data from 708 bleaching surveys across the globe, a framework was developed to predict the probability of moderate or severe bleaching as a function of key environmental variables derived from global-scale remote-sensing data. The ability of models to explain spatial and temporal variability in mass bleaching events was quantified. Results indicated approximately 20% improved accuracy of predictions of bleaching when solar radiation and water mixing, in addition to elevated temperature, were incorporated into models, but predictive accuracy was variable among regions. Results provide insights into the effects of environmental parameters on bleaching at a global scale.
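
A probability model of the kind described can be sketched with a logistic function. The coefficients are hypothetical, chosen only to reflect the reported directions of effect (heat and solar radiation raise bleaching odds, wind-driven mixing lowers them):

```python
import math

# Illustrative logistic model for bleaching probability; coefficients are
# hypothetical placeholders, not the fitted values from the 708-survey dataset.
def bleaching_probability(degree_heating_weeks, solar_radiation, wind_speed):
    # Heat and light stress raise the odds; wind-driven mixing lowers them.
    z = -3.0 + 0.8 * degree_heating_weeks + 0.4 * solar_radiation - 0.5 * wind_speed
    return 1.0 / (1.0 + math.exp(-z))

p_calm_hot = bleaching_probability(6.0, 2.0, 0.5)    # thermal stress, little mixing
p_windy_hot = bleaching_probability(6.0, 2.0, 4.0)   # same heat, strong mixing
p_cool = bleaching_probability(0.5, 1.0, 2.0)        # low stress baseline
```

The mixing term is what captures the abstract's point that temperature alone under-predicts: the same heat load yields a noticeably lower probability when mixing is strong.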

  19. Development of polyparameter linear free energy relationship models for octanol-air partition coefficients of diverse chemicals.

    PubMed

    Jin, Xiaochen; Fu, Zhiqiang; Li, Xuehua; Chen, Jingwen

    2017-03-22

    The octanol-air partition coefficient (KOA) is a key parameter describing the partition behavior of organic chemicals between air and environmental organic phases. As the experimental determination of KOA is costly, time-consuming and sometimes limited by the availability of authentic chemical standards for the compounds to be determined, it becomes necessary to develop credible predictive models for KOA. In this study, a polyparameter linear free energy relationship (pp-LFER) model for predicting KOA at 298.15 K and a novel model incorporating pp-LFERs with temperature (pp-LFER-T model) were developed from 795 log KOA values for 367 chemicals at different temperatures (263.15-323.15 K), and were evaluated with the OECD guidelines on QSAR model validation and applicability domain description. Statistical results show that both models are well-fitted, robust and have good predictive capabilities. Particularly, the pp-LFER model shows a strong predictive ability for polyfluoroalkyl substances and organosilicon compounds, and the pp-LFER-T model maintains a high predictive accuracy within a wide temperature range (263.15-323.15 K).
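
A pp-LFER is evaluated as a linear combination of Abraham-type solute descriptors. The system coefficients and descriptor values below are placeholders of roughly realistic magnitude, not the fitted values from this study:

```python
# Abraham-type pp-LFER: log KOA = c + e*E + s*S + a*A + b*B + l*L.
# Coefficients and solute descriptors are illustrative placeholders only.
coeffs = {"c": -0.10, "e": -0.20, "s": 0.95, "a": 3.50, "b": 0.70, "l": 0.95}

def log_koa(descriptors):
    E, S, A, B, L = descriptors
    return (coeffs["c"] + coeffs["e"] * E + coeffs["s"] * S
            + coeffs["a"] * A + coeffs["b"] * B + coeffs["l"] * L)

# Hypothetical solutes described by (E, S, A, B, L).
toluene_like = (0.60, 0.52, 0.00, 0.14, 3.33)   # no H-bond donor acidity
phenol_like = (0.81, 0.89, 0.60, 0.30, 3.77)    # strong H-bond donor

lk_toluene = log_koa(toluene_like)
lk_phenol = log_koa(phenol_like)
```

The large `a` coefficient makes hydrogen-bond donor solutes partition far more strongly into octanol, which is the kind of interaction-level interpretability that distinguishes pp-LFERs from single-parameter correlations.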

  20. Development of a prediction model on the acceptance of electronic laboratory notebooks in academic environments.

    PubMed

    Kloeckner, Frederik; Farkas, Robert; Franken, Tobias; Schmitz-Rode, Thomas

    2014-04-01

    Documentation of research data plays a key role in biomedical engineering innovation processes. It makes an important contribution to the protection of intellectual property, the traceability of results and the fulfillment of regulatory requirements. Given the increasing digitalization of laboratories, an electronic alternative to the commonly used paper-bound notebook could contribute to more sophisticated documentation. However, in contrast to industrial environments, electronic laboratory notebooks are not widely used in academic laboratories. Little is known about the acceptance of electronic documentation systems and the underlying reasons. Thus, this paper aims to establish a prediction model of scientists' potential preference for, and acceptance of, either paper-based or electronic documentation. The underlying data for the analysis originate from an online survey of 101 scientists in industrial, academic and clinical environments. Various parameters were analyzed to identify crucial factors for system preference using binary logistic regression. The analysis showed a significant dependency between documentation system preference and the supposed workload associated with the documentation system (p<0.006; odds ratio=58.543), plus an additional personal component. Because system choice depends on specific parameters, the acceptance of an electronic laboratory notebook can be predicted before implementation.

  1. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs of various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such, they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation will fall within the predicted ranges, is bounded rigorously.
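
The "tightest prediction band containing all observations" idea can be sketched for the simplest case: a linear mean model with a constant-width band of k residual standard deviations. The data are synthetic and the one-dimensional setup is a deliberate simplification of the paper's polynomial formulations:

```python
import random, math

random.seed(3)

# Synthetic input-output data with input-independent noise.
xs = [i / 20.0 for i in range(41)]
ys = [1.0 + 2.0 * x + random.gauss(0.0, 0.1) for x in xs]

# Fit a linear mean model by least squares.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar
resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
sd = math.sqrt(sum(r * r for r in resid) / n)

# Tightest symmetric band: the smallest multiple k of the residual standard
# deviation that still encloses every observation.
k = max(abs(r) for r in resid) / sd
enclosed = all(abs(r) <= k * sd + 1e-12 for r in resid)
```

The paper's formulations replace this after-the-fact choice of k with an optimization that prescribes the parameter distribution directly and yields rigorous reliability bounds; the sketch only conveys the geometric intuition.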

  2. Formulation strategies for optimizing the morphology of polymeric bulk heterojunction organic solar cells: a brief review

    NASA Astrophysics Data System (ADS)

    Vongsaysy, Uyxing; Bassani, Dario M.; Servant, Laurent; Pavageau, Bertrand; Wantz, Guillaume; Aziz, Hany

    2014-01-01

    Polymeric bulk heterojunction (BHJ) organic solar cells represent one of the most promising technologies for renewable energy with a low fabrication cost. Control over BHJ morphology is one of the key factors in obtaining high-efficiency devices. This review focuses on formulation strategies for optimizing the BHJ morphology. We address how solvent choice and the introduction of processing additives affect the morphology. We also review a number of recent studies concerning prediction methods that utilize the Hansen solubility parameters to develop efficient solvent systems.

  3. Predictive value of seven preoperative prognostic scoring systems for spinal metastases.

    PubMed

    Leithner, Andreas; Radl, Roman; Gruber, Gerald; Hochegger, Markus; Leithner, Katharina; Welkerling, Heike; Rehak, Peter; Windhager, Reinhard

    2008-11-01

    Predicting prognosis is the key factor in selecting the proper treatment modality for patients with spinal metastases. Therefore, various assessment systems have been designed to provide a basis for deciding the course of treatment. Such systems have been proposed by Tokuhashi, Sioutos, Tomita, Van der Linden, and Bauer. The scores differ greatly in the kind of parameters assessed. The aim of this study was to evaluate the prognostic value of each score. Eight parameters were assessed for 69 patients (37 male, 32 female): location, general condition, number of extraspinal bone metastases, number of spinal metastases, visceral metastases, primary tumour, severity of spinal cord palsy, and pathological fracture. Scores according to Tokuhashi (original and revised), Sioutos, Tomita, Van der Linden, and Bauer were assessed, as was a modified Bauer score without scoring for pathologic fracture. Nineteen patients were still alive as of September 2006, with a minimum follow-up of 12 months. All other patients died a mean of 17 months after operation. The mean overall survival period was only 3 months for lung cancer, followed by prostate (7 months), kidney (23 months), breast (35 months), and multiple myeloma (51 months). In univariate survival analysis, primary tumour and visceral metastases were significant parameters, while the Karnofsky score was significant only in the group including myeloma patients. In multivariate analysis of all seven parameters assessed, primary tumour and visceral metastases were the only significant parameters. Of all seven scoring systems, the original Bauer score and a Bauer score without scoring for pathologic fracture had the best association with survival (P < 0.001). The data of the present study emphasize that the original Bauer score and a modified Bauer score without scoring for pathologic fracture seem to be practicable and highly predictive preoperative scoring systems for patients with spinal metastases.
However, the decision for or against surgery should never be based on a prognostic score alone but should also take symptoms such as pain and neurological compromise into account.
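
A count-based score of the Bauer type can be sketched as below. The four criteria are paraphrased for illustration only and should be checked against the original publication before any use:

```python
# Hedged sketch of a Bauer-style count score, modified as in the abstract:
# no point is awarded (or withheld) for pathologic fracture.
def modified_bauer_score(patient):
    points = 0
    if patient["no_visceral_metastases"]:
        points += 1
    if patient["no_lung_cancer"]:              # primary tumour is not lung
        points += 1
    if patient["favourable_primary"]:          # e.g. breast, kidney, myeloma
        points += 1
    if patient["solitary_skeletal_metastasis"]:
        points += 1
    return points                              # higher score, better prognosis

good_prognosis = {"no_visceral_metastases": True, "no_lung_cancer": True,
                  "favourable_primary": True, "solitary_skeletal_metastasis": True}
poor_prognosis = {"no_visceral_metastases": False, "no_lung_cancer": False,
                  "favourable_primary": False, "solitary_skeletal_metastasis": False}

s_good = modified_bauer_score(good_prognosis)
s_poor = modified_bauer_score(poor_prognosis)
```

The appeal of such scores, as the abstract notes, is that simple counts of a few strong predictors (primary tumour, visceral metastases) tracked survival better than more elaborate systems.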

  4. A Simplified Model of Choice Behavior under Uncertainty

    PubMed Central

    Lin, Ching-Hung; Lin, Yu-Kai; Song, Tzu-Jiun; Huang, Jong-Tsun; Chiu, Yao-Chu

    2016-01-01

    The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated that models with the prospect utility (PU) function are more effective than the EU models in the IGT (Ahn et al., 2008). Nevertheless, after some preliminary tests based on our behavioral dataset and modeling, it was determined that the Ahn et al. (2008) PU model is not optimal due to some incompatible results. This study aims to modify the Ahn et al. (2008) PU model to a simplified model and used the IGT performance of 145 subjects as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as the value of α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the influence of the parameters α, λ, and A has a hierarchical power structure in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay loss-shift rather than foreseeing the long-term outcome. However, there are other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral models may not have been found yet. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated. PMID:27582715
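
The hierarchical-power observation about α, λ, and A can be reproduced with the prospect utility function itself: as α approaches zero, outcome magnitudes are compressed toward 1, so λ (and a recency parameter such as A) loses leverage and only the sign of the outcome matters. The numerical values below are illustrative:

```python
# Prospect-type utility used in IGT modeling: u(x) = x**alpha for gains and
# -lam * abs(x)**alpha for losses. Parameter values here are illustrative.
def utility(x, alpha, lam):
    if x >= 0:
        return x ** alpha
    return -lam * abs(x) ** alpha

# With alpha near zero, a very large loss and a small loss get almost the
# same utility magnitude: the model reduces to reacting to the sign of the
# outcome, i.e. the gain-stay / loss-shift strategy described above.
big_loss, small_loss = -1150.0, -50.0
alpha_small = 0.01
ratio = utility(big_loss, alpha_small, 2.0) / utility(small_loss, alpha_small, 2.0)
```

A 23-fold difference in loss magnitude collapses to a utility ratio barely above 1, which is why the goodness-of-fit becomes insensitive to λ and A once α is near zero.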

  5. Sensitivity of predicted scaling and permeability in Enhanced Geothermal Systems to Thermodynamic Data and Activity Models

    NASA Astrophysics Data System (ADS)

    Hingerl, Ferdinand F.; Wagner, Thomas; Kulik, Dmitrii A.; Kosakowski, Georg; Driesner, Thomas; Thomsen, Kaj

    2010-05-01

    A consortium of research groups from ETH Zurich, EPF Lausanne, the Paul Scherrer Institut and the University of Bonn collaborates in a comprehensive program of basic research on key aspects of Enhanced Geothermal Systems (EGSs). As part of this GEOTHERM project (www.geotherm.ethz.ch), we concentrate on the fundamental investigation of thermodynamic models suitable for describing fluid-rock interactions at geothermal conditions. Predictions of fluid-rock interaction in EGS still face several major challenges. Slight variations in the input thermodynamic and kinetic parameters may result in significant differences in the predicted mineral solubilities and stable assemblages. Realistic modeling of mineral precipitation in turn has implications for our understanding of the permeability evolution of the geothermal reservoir, as well as for scaling in technical installations. To reasonably model an EGS, thermodynamic databases and activity models must be tailored to geothermal conditions. We therefore implemented in the GEMS code the Pitzer formalism, which is the standard model for computing thermodynamic excess properties of brines at elevated temperatures and pressures. This model, however, depends on a vast number of interaction parameters, which are to a substantial extent unknown. Furthermore, a high-order polynomial temperature interpolation makes extrapolation unreliable if not impossible. As an alternative, we additionally implemented the EUNIQUAC activity model. EUNIQUAC requires fewer empirical fit parameters (only binary interaction parameters are needed) and uses simpler and more stable temperature and pressure extrapolations. This results in an increase in computation speed, which is of crucial importance when performing coupled long-term simulations of geothermal reservoirs. To achieve better performance under geothermal conditions, we are currently partly reformulating EUNIQUAC and refitting the existing parameter set.
First results of the Pitzer-EUNIQUAC benchmark applied to relevant aqueous solutions at elevated temperature, pressure and ionic strength will be presented.

  6. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight into the data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of Ecosystem Demography model v2 (ED) outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis studied only one model, we were unable to comment on the contribution of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale.
Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.
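
The final combination step, under a first-order independence assumption, reduces to weighting squared sensitivities by parameter variances. All parameter names and numbers below are invented for illustration:

```python
# First-order sketch: a parameter's contribution to predictive variance is
# (output sensitivity)^2 * (parameter variance); fractions assume independent,
# approximately linear parameter effects.
sensitivities = {"Vcmax": 2.0, "SLA": 1.2, "leaf_lifespan": 0.3}  # d(output)/d(theta)
param_sd = {"Vcmax": 0.5, "SLA": 0.8, "leaf_lifespan": 0.4}       # posterior sd

contrib = {p: (sensitivities[p] * param_sd[p]) ** 2 for p in sensitivities}
total = sum(contrib.values())
fraction = {p: v / total for p, v in contrib.items()}
dominant = max(fraction, key=fraction.get)
```

The fractions are what turn a sensitivity analysis into a data-collection roadmap: the dominant parameter is the one where new trait measurements would shrink predictive uncertainty the most.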

  7. Methods for using groundwater model predictions to guide hydrogeologic data collection, with application to the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Tiedeman, C.R.; Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.

    2003-01-01

    Calibrated models of groundwater systems can provide substantial information for guiding data collection. This work considers using such models to guide hydrogeologic data collection for improving model predictions by identifying model parameters that are most important to the predictions. Identification of these important parameters can help guide collection of field data about parameter values and associated flow system features and can lead to improved predictions. Methods for identifying parameters important to predictions include prediction scaled sensitivities (PSS), which account for uncertainty on individual parameters as well as prediction sensitivity to parameters, and a new "value of improved information" (VOII) method presented here, which includes the effects of parameter correlation in addition to individual parameter uncertainty and prediction sensitivity. In this work, the PSS and VOII methods are demonstrated and evaluated using a model of the Death Valley regional groundwater flow system. The predictions of interest are advective transport paths originating at sites of past underground nuclear testing. Results show that for two paths evaluated the most important parameters include a subset of five or six of the 23 defined model parameters. Some of the parameters identified as most important are associated with flow system attributes that do not lie in the immediate vicinity of the paths. Results also indicate that the PSS and VOII methods can identify different important parameters. Because the methods emphasize somewhat different criteria for parameter importance, it is suggested that parameters identified by both methods be carefully considered in subsequent data collection efforts aimed at improving model predictions.

  8. Polymeric membrane materials: new aspects of empirical approaches to prediction of gas permeability parameters in relation to permanent gases, linear lower hydrocarbons and some toxic gases.

    PubMed

    Malykh, O V; Golub, A Yu; Teplyakov, V V

    2011-05-11

    Membrane gas separation technologies (air separation, hydrogen recovery from dehydrogenation processes, etc.) traditionally use glassy polymer membranes in which "small" gas molecules permeate fastest. For this purpose, membranes based on low-free-volume glassy polymers (e.g., polysulfone, tetrabromopolycarbonate and polyimides) are used. On the other hand, applying membrane methods to recover VOCs and some toxic gases from air, or to separate mixtures containing lower hydrocarbons (in petrochemistry and oil refining), requires membranes that preferentially pass the components with relatively larger molecular sizes. In general, this kind of permeability is characteristic of rubbers and of high-free-volume glassy polymers. The data files accumulated (more than 1500 polymeric materials) span the region of parameters between these "boundaries." Two main approaches to predicting the gas permeability of polymers are considered in this paper: (1) statistical treatment of published transport parameters of polymers, and (2) prediction using a "diffusion jump" model that accounts for the key properties of the diffusing molecule and the polymeric matrix. Within approach (1), the paper presents N-dimensional methods for estimating the gas permeability of polymers using "selectivity/permeability" correlations; the optimal prediction accuracy is found at n=4. Within the solution-diffusion mechanism (2), the key properties are the effective molecular cross-section of the penetrating species, which governs molecular transport in the polymeric matrix, and the well-known force constant (ε/k)(eff i) of the {6-12} potential for gas-gas interaction. 
    A set of corrected effective molecular cross-sections of penetrants, including noble gases (He, Ne, Ar, Kr, Xe), permanent gases (H(2), O(2), N(2), CO), ballast and toxic gases (CO(2), NO, NO(2), SO(2), H(2)S) and linear lower hydrocarbons (CH(4), C(2)H(6), C(3)H(8), C(4)H(10), C(2)H(4), C(3)H(6), C(4)H(8)-1, C(2)H(2), C(3)H(4)-m (methylacetylene) and C(3)H(4)-a (allene)), is determined using the two approaches above. This allows preliminary calculation of the permeability parameters of these gases for most known polymers from limited experimental data. The new correlations suggest that the available free volume of the polymeric matrix plays an important role in setting the rate and selectivity of gas diffusion in glassy polymers, whereas in rubbers the rate and selectivity of gas diffusion are affected mainly by cohesion energy density (CED); both polymer parameters are calculated by the traditional additive group-contribution technique. The results of the present study are demonstrated by calculating the expected permeability parameters for lower hydrocarbons and some toxic gases in polynorbornene-based polymers, PIM and PTMSP, outlining the potential of new membrane polymers for practical application.

  9. Minimum Winfree loop determines self-sustained oscillations in excitable Erdös-Rényi random networks.

    PubMed

    Qian, Yu; Cui, Xiaohua; Zheng, Zhigang

    2017-07-18

    The investigation of self-sustained oscillations in excitable complex networks is very important for understanding various activities in brain systems, among which the exploration of the key determinants of oscillations is a challenging task. In this paper, by investigating the influence of system parameters on self-sustained oscillations in excitable Erdös-Rényi random networks (EERRNs), the minimum Winfree loop (MWL) is revealed to be the key factor determining the emergence of collective oscillations. Specifically, a one-to-one correspondence between the optimal connection probability (OCP) and the MWL length is established. Moreover, many important quantities, such as the lower critical connection probability (LCCP), the OCP, and the upper critical connection probability (UCCP), are determined by the MWL. Most importantly, they can be approximately predicted by network structure analysis, which has been verified in numerical simulations. Our results will be of great importance in understanding the key factors that determine persistent activities in biological systems.

  10. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool, introduced with the H.264/AVC video coding standard, that compensates for temporal illumination change in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header, which costs extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead, so WP parameter prediction is crucial to research and applications related to WP. Prior work has further improved WP parameter prediction through implicit prediction from image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to the conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
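
    The WP form itself is simple: the predictor for a block becomes w * ref + o per reference frame. The sketch below estimates (w, o) by least squares between a current and a reference frame; this fitting procedure is an assumed, generic illustration, not the paper's algorithm or the HEVC reference encoder's.

    ```python
    import numpy as np

    def estimate_wp(cur, ref):
        # solve min ||w * ref + o - cur||^2 for scalars w, o
        A = np.stack([ref.ravel(), np.ones(ref.size)], axis=1)
        w, o = np.linalg.lstsq(A, cur.ravel(), rcond=None)[0]
        return w, o

    rng = np.random.default_rng(0)
    ref = rng.uniform(16, 235, size=(8, 8))    # reference frame samples
    cur = 0.9 * ref + 5.0                      # simulated illumination change
    w, o = estimate_wp(cur, ref)
    print(round(w, 3), round(o, 3))            # recovers ~0.9 and ~5.0
    ```

    Real encoders additionally quantize the weight and offset (e.g. with a log2 weight denominator) before signaling them in the slice header, which is part of why parameter prediction saves bits.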

  11. Chairside CAD/CAM materials. Part 3: Cyclic fatigue parameters and lifetime predictions.

    PubMed

    Wendler, Michael; Belli, Renan; Valladares, Diana; Petschelt, Anselm; Lohbauer, Ulrich

    2018-06-01

    Chemical and mechanical degradation play a key role in the lifetime of dental restorative materials. Prediction of their long-term performance in the oral environment should therefore be based on fatigue data rather than inert strength data, as is common in the dental materials field. The objective of the present study was to provide mechanistic fatigue parameters of current dental CAD/CAM materials under cyclic biaxial flexure and to assess their suitability for predicting clinical fracture behavior. Eight CAD/CAM materials, including polycrystalline zirconia (IPS e.max ZirCAD), reinforced glasses (Vitablocs Mark II, IPS Empress CAD), glass-ceramics (IPS e.max CAD, Suprinity PC, Celtra Duo), as well as hybrid materials (Enamic, Lava Ultimate), were evaluated. Rectangular plates (12 × 12 × 1.2 mm³) with highly polished surfaces were prepared and tested in biaxial cyclic fatigue in water until fracture using the Ball-on-Three-Balls (B3B) test. Cyclic fatigue parameters n and A* were obtained from the lifetime data for each material and used to build strength-probability-time (SPT) diagrams, which were then used to compare in-vitro with in-vivo fracture distributions for IPS e.max CAD and IPS Empress CAD. Susceptibility to subcritical crack growth (SCG) under cyclic loading was observed for all materials, being more severe (n ≤ 20) in lithium-based glass-ceramics and Vitablocs Mark II. Strength degradations of 40% up to 60% were predicted after only 1 year of service. Threshold stress intensity factors (K_th), representing the onset of SCG, were estimated to lie in the range of 0.37-0.44 of K_Ic for the lithium-based glass-ceramics and Vitablocs Mark II, and between 0.51-0.59 of K_Ic for the other materials. 
    Failure distributions associated with mechanistic estimations of in-vitro strength degradation proved useful for interpreting failure behavior in-vivo. The parameter K_th stood out as a better predictor of clinical performance than the SCG parameter n. Fatigue parameters obtained from cyclic loading experiments are more reliable predictors of the mechanical performance of contemporary dental CAD/CAM restoratives than quasi-static mechanical properties.
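
    The strength-degradation figures quoted above follow from power-law subcritical crack growth. A minimal sketch, assuming the common lifetime scaling t_f ∝ sigma^(-n), so that strengths at equal failure probability relate as sigma2 = sigma1 * (t1/t2)^(1/n); the strength and n values below are illustrative, not the paper's fitted parameters.

    ```python
    # Lifetime scaling behind an SPT diagram under power-law SCG (t_f ~ sigma^-n).
    def strength_after(sigma1, t1, t2, n):
        """Predicted strength sustaining lifetime t2, given strength sigma1 at t1."""
        return sigma1 * (t1 / t2) ** (1.0 / n)

    sigma_fast = 400.0                 # MPa, strength in a fast (~1 s) test
    one_year = 365 * 24 * 3600.0       # s
    for n in (15, 20, 30):             # lower n => stronger degradation
        s = strength_after(sigma_fast, 1.0, one_year, n)
        print(n, round(s, 1), f"{100 * (1 - s / sigma_fast):.0f}% loss")
    ```

    The loop makes the abstract's point visible: materials with low SCG exponents (n ≤ 20) lose a far larger fraction of their initial strength over a year of loading than materials with high n.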

  12. Concomitant semi-quantitative and visual analysis improves the predictive value on treatment outcome of interim 18F-fluorodeoxyglucose / Positron Emission Tomography in advanced Hodgkin lymphoma.

    PubMed

    Biggi, Alberto; Bergesio, Fabrizio; Chauvie, Stephane; Bianchi, Andrea; Menga, Massimo; Fallanca, Federico; Hutchings, Martin; Gregianin, Michele; Meignan, Michel; Gallamini, Andrea

    2017-07-27

    Qualitative assessment using the Deauville five-point scale (DS) is the gold standard for interim and end-of-treatment PET interpretation in lymphoma. In the present study we assessed the reliability and the prognostic value of different semi-quantitative (SQ) parameters in comparison with DS for interim PET (iPET) interpretation in Hodgkin lymphoma (HL). A cohort of 82 out of 260 patients with advanced stage HL enrolled in the International Validation Study (IVS), scored as 3 to 5 by the expert panel, was included in the present report. Two nuclear medicine physicians, blinded to patient history, clinical data and treatment outcome, independently reviewed the iPET using the following parameters: DS, SUVMax, SUVPeak of the most active lesion, QMax (ratio of lesion SUVMax to liver SUVMax) and QRes (ratio of lesion SUVPeak to liver SUVMean). The optimal sensitivity, specificity, positive and negative predictive value for predicting treatment outcome were calculated for all the above parameters with Receiver Operating Characteristic analysis. The prognostic value of all parameters was similar, the best cut-off values being 4 for DS (Area Under the Curve, AUC, 0.81 CI95%: 0.72-0.90), 3.81 for SUVMax (AUC 0.82 CI95%: 0.73-0.91), 3.20 for SUVPeak (AUC 0.86 CI95%: 0.77-0.94), 1.07 for QMax (AUC 0.84 CI95%: 0.75-0.93) and 1.38 for QRes (AUC 0.84 CI95%: 0.75-0.93). The reproducibility of the different parameters was also similar: inter-observer variability measured with Cohen's kappa was 0.93 (95% CI 0.84-1.01) for DS, 0.88 (0.77-0.98) for SUVMax, 0.82 (0.70-0.95) for SUVPeak, 0.85 (0.74-0.97) for QRes and 0.78 (0.65-0.92) for QMax. Owing to the high specificity of SUVPeak (0.87) and the good sensitivity of DS (0.86), using both parameters together increased the positive predictive value from 0.65 for DS alone to 0.79. 
    When both parameters were positive in iPET, 3-year Failure-Free Survival (FFS) was significantly lower than for patients whose iPET was interpreted with qualitative parameters only (DS 4 or 5): 21% vs 35%. On the other hand, the FFS of patients with negative results was not significantly different (88% vs 86%). In this study we demonstrated that, by adding the semi-quantitative parameter SUVPeak to a purely qualitative interpretation with DS, it is possible to increase the positive predictive value of iPET and to identify with higher precision the patient subset with a very dismal prognosis. However, these retrospective findings should be confirmed prospectively in a larger patient cohort.

  13. Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm

    NASA Technical Reports Server (NTRS)

    Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)

    2004-01-01

    In support of the NASA Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles for key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOLSS) Prediction Algorithm (APA) for benefits and trade analysis. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria, when safe and appropriate, to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.

  14. Prediction of body lipid change in pregnancy and lactation.

    PubMed

    Friggens, N C; Ingvartsen, K L; Emmans, G C

    2004-04-01

    A simple method to predict the genetically driven pattern of body lipid change through pregnancy and lactation in dairy cattle is proposed. The rationale and evidence for genetically driven body lipid change have their basis in evolutionary considerations and in the homeorhetic changes in lipid metabolism through the reproductive cycle. The inputs required to predict body lipid change are body lipid mass at calving (kg) and the date of conception (days in milk). Body lipid mass can be derived from body condition score and live weight. A key assumption is that there is a linear rate of change of the rate of body lipid change (dL/dt) between calving and a genetically determined time in lactation (T') at which a particular level of body lipid (L') is sought. A second assumption is that there is a linear rate of change of the rate of body lipid change (dL/dt) between T' and the next calving. The resulting model was evaluated using 2 sets of data. The first was from Holstein cows with 3 different levels of body fatness at calving. The second was from Jersey cows in first, second, and third parity. The model was found to reproduce the observed patterns of change in body lipid reserves through lactation in both data sets. The average error of prediction was low, less than the variation normally associated with the recording of condition score, and was similar for the 2 data sets. When the model was applied using the initially suggested parameter values derived from the literature the average error of prediction was 0.185 units of condition score (+/- 0.086 SD). After minor adjustments to the parameter values, the average error of prediction was 0.118 units of condition score (+/- 0.070 SD). The assumptions on which the model is based were sufficient to predict the changes in body lipid of both Holstein and Jersey cows under different nutritional conditions and parities. 
    Thus, the model presented here shows that it is possible to predict genetically driven curves of body lipid change through lactation in a simple way that requires few parameters and inputs that can be derived in practice. It is expected that prediction of the cow's energy requirements can be substantially improved, particularly in early lactation, by incorporating genetically driven body energy mobilization.
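
    The first assumption, a constant rate of change of dL/dt between calving and T', makes body lipid a quadratic function of time over that interval. A minimal sketch under the additional assumption that dL/dt reaches zero at T' (so lipid levels off at the target L'); all numbers are hypothetical, not the paper's calibrated values.

    ```python
    # Quadratic lipid trajectory from calving (t = 0) to T', with
    # dL/dt = r0 * (1 - t / T_prime) and r0 chosen so L(T_prime) = L_target.
    def lipid(t, L0, L_target, T_prime):
        r0 = 2.0 * (L_target - L0) / T_prime      # initial mobilization rate (kg/d)
        return L0 + r0 * (t - t ** 2 / (2.0 * T_prime))

    L0, L_target, T_prime = 60.0, 45.0, 100.0     # kg body lipid, days in milk
    for t in (0, 25, 50, 100):
        print(t, round(lipid(t, L0, L_target, T_prime), 1))
    ```

    With these numbers the cow mobilizes lipid fastest just after calving and the trajectory flattens toward the genetically determined level L' at T', matching the qualitative pattern the model is built to reproduce.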

  15. A Final Approach Trajectory Model for Current Operations

    NASA Technical Reports Server (NTRS)

    Gong, Chester; Sadovsky, Alexander

    2010-01-01

    Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
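
    The gain over dead reckoning can be seen in a toy sketch: dead reckoning extrapolates the last observed velocity, while a polynomial model fitted to the track history captures deceleration on final approach. The quadratic fit and the synthetic decelerating track are illustrative assumptions, not the paper's model or data.

    ```python
    import numpy as np

    t_hist = np.arange(0.0, 60.0, 5.0)            # 60 s of track history

    def truth(t):
        return 10.0 * t - 0.05 * t ** 2           # synthetic decelerating track

    pos = truth(t_hist)

    # dead reckoning: extrapolate the last observed velocity
    v_last = (pos[-1] - pos[-2]) / (t_hist[-1] - t_hist[-2])
    def dead_reckon(t):
        return pos[-1] + v_last * (t - t_hist[-1])

    # polynomial model: fit a degree-2 polynomial to the track history
    poly = np.polynomial.Polynomial.fit(t_hist, pos, deg=2)

    t_pred = 120.0                                    # 120 s look-ahead
    print(abs(dead_reckon(t_pred) - truth(t_pred)))   # large DR error
    print(abs(poly(t_pred) - truth(t_pred)))          # near-zero model error
    ```

    On a decelerating track the dead-reckoning error grows with look-ahead time, which is consistent with the paper's finding that the fitted models help most at longer look-ahead times.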

  16. Ambulance smartphone tool for field triage of ruptured aortic aneurysms (FILTR): study protocol for a prospective observational validation of diagnostic accuracy.

    PubMed

    Lewis, Thomas L; Fothergill, Rachael T; Karthikesalingam, Alan

    2016-10-24

    Rupture of an abdominal aortic aneurysm (rAAA) carries a considerable mortality rate and is often fatal. rAAA can be treated through open or endovascular surgical intervention, and more rapid access to definitive intervention may be key to improving mortality for rAAA. Diagnosis is not always straightforward, with up to 42% of rAAA initially misdiagnosed, introducing potentially harmful delay. There is a need for an effective clinical decision support tool for accurate prehospital diagnosis and triage to enable transfer to an appropriate centre. This is a prospective multicentre observational study assessing the diagnostic accuracy of a prehospital smartphone triage tool for detection of rAAA, conducted across London in conjunction with the London Ambulance Service (LAS). A logistic score predicting the risk of rAAA by assessing ten key parameters was developed and retrospectively validated through logistic regression analysis of ambulance records and Hospital Episode Statistics data for 2200 patients from 2005 to 2010. The triage tool is integrated into a secure mobile app for major smartphone platforms. Key parameters collected from the app will be retrospectively matched with the final hospital discharge diagnosis for each patient encounter. The primary outcome is the sensitivity, specificity and positive predictive value of the rAAA triage tool logistic score in prospective use as a mobile app by prehospital ambulance clinicians. Data collection started in November 2014 and the study will recruit a minimum of 1150 non-consecutive patients over a period of 2 years. Full ethical approval has been gained for this study. The results of this study will be disseminated in peer-reviewed publications and international/national presentations. CPMS 16459; pre-results.

  17. Predicting Bone Mechanical State During Recovery After Long-Duration Skeletal Unloading Using QCT and Finite Element Modeling

    NASA Technical Reports Server (NTRS)

    Chang, Katarina L.; Pennline, James A.

    2013-01-01

    During long-duration missions on the International Space Station, astronauts experience weightlessness leading to skeletal unloading. Unloading removes the mechanical stimulus that normally regulates bone cellular units, causing them to remove mass from the skeleton. A mathematical model of the cellular dynamics predicts theoretical changes in volume fractions and ash fraction in response to temporal variations in skeletal loading. No current model uses imaging technology to gather information about a skeletal site's initial properties, calculate bone remodeling changes, and then compare predicted bone strengths with the initial strength. The goal of this study is to use quantitative computed tomography (QCT) in conjunction with a computational model of the bone remodeling process to establish initial bone properties and predict changes in bone mechanics during bone loss and recovery with finite element (FE) modeling. Input parameters for the remodeling model include bone volume fraction and ash fraction, both computed from the QCT images. A non-destructive approach to measuring ash fraction is also derived. Voxel-based finite element models (FEM) created from QCTs provide an initial evaluation of bone strength. Bone volume fraction and ash fraction outputs from the computational model predict changes to the elastic modulus of bone via a two-parameter equation. The modulus captures the effect of bone remodeling and is the key to evaluating changes in strength. Applying this time-dependent modulus to FEMs and composite beam theory enables an assessment of bone mechanics during recovery. Prediction of bone strength is important not only for astronauts but also for millions of patients with osteoporosis and low bone density.
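
    The abstract does not give its two-parameter modulus equation; as a hedged stand-in with the same two inputs, the sketch below uses the widely cited power-law fit of Hernandez et al. (2001), E = 84.37 (BV/TV)^2.58 * alpha^2.74 GPa, where BV/TV is bone volume fraction and alpha is ash fraction. The site values are invented.

    ```python
    # Two-parameter power-law modulus from bone volume fraction and ash fraction
    # (coefficients from the Hernandez et al. 2001 fit, used here illustratively).
    def elastic_modulus(bvtv, ash):
        """Elastic modulus in GPa."""
        return 84.37 * bvtv ** 2.58 * ash ** 2.74

    E0 = elastic_modulus(0.25, 0.60)     # baseline trabecular site (hypothetical)
    E1 = elastic_modulus(0.22, 0.60)     # after simulated bone loss
    print(round(E0, 3), round(E1, 3), f"{100 * (1 - E1 / E0):.0f}% stiffness loss")
    ```

    Because the exponents exceed 2, a modest loss of bone volume fraction produces a disproportionately large stiffness loss, which is why tracking the remodeling outputs through a modulus law matters for the FE strength assessment.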

  18. Indigenous Waters: Applying the SWAT Hydrological Model to the Lumbee River Watershed

    NASA Astrophysics Data System (ADS)

    Painter, J.; Singh, N.; Martin, K. L.; Vose, J. M.; Wear, D. N.; Emanuel, R. E.

    2016-12-01

    Hydrological modeling can reveal insight about how rainfall becomes streamflow in a watershed comprising heterogeneous soils, terrain and land cover. Modeling can also help disentangle predicted impacts of climate and land use change on hydrological processes. We applied a hydrological model to the Lumbee River watershed, also known as the Lumber River Watershed, in the coastal plain of North Carolina (USA) to better understand how streamflow may be impacted by predicted climate and land use change in the mid-21st century. The Lumbee River flows through a predominantly Native American community, which may be affected by changing water resources during this period. The long-term goal of our project is to predict the effects of climate and land use change on the Lumbee River watershed and on the Native community that relies upon the river. We applied the Soil & Water Assessment Tool for ArcGIS (ArcSWAT), which was calibrated to historical climate and USGS streamflow data during the late 20th century, and we determined frequency distributions for key model parameters that best predicted streamflow during this time period. After calibrating and validating the model during the historical period, we identified land use and climate projections to represent a range of future conditions in the watershed. Specifically, we selected downscaled climate forcing data from four general circulation models running the RCP8.5 scenario. We also selected land use projections from a cornerstone scenario of the USDA Forest Service's Southern Forest Futures Project. This presentation reports on our methods for propagating parameter and climatic uncertainty through model predictions, and it reports on spatial patterns of land use change predicted by the cornerstone scenario.

  19. Automated Method for Estimating Nutation Time Constant Model Parameters for Spacecraft Spinning on Axis

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Calculating an accurate nutation time constant (NTC), or nutation rate of growth, for a spinning upper stage is important for ensuring mission success. Spacecraft nutation, or wobble, is caused by energy dissipation anywhere in the system. Propellant slosh in the spacecraft fuel tanks is the primary source for this dissipation and, if it is in a state of resonance, the NTC can become short enough to violate mission constraints. The Spinning Slosh Test Rig (SSTR) is a forced-motion spin table where fluid dynamic effects in full-scale fuel tanks can be tested in order to obtain key parameters used to calculate the NTC. We accomplish this by independently varying nutation frequency versus the spin rate and measuring force and torque responses on the tank. This method was used to predict parameters for the Genesis, Contour, and Stereo missions, whose tanks were mounted outboard from the spin axis. These parameters are incorporated into a mathematical model that uses mechanical analogs, such as pendulums and rotors, to simulate the force and torque resonances associated with fluid slosh.
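
    How the NTC is used downstream can be sketched simply: with energy dissipation, the nutation half-cone angle grows roughly as theta(t) = theta0 * exp(t / tau), where tau is the NTC, so tau sets the time available before a mission constraint is violated. All values below are hypothetical.

    ```python
    import math

    def time_to_limit(theta0_deg, theta_limit_deg, tau_s):
        """Seconds until the nutation angle grows from theta0 to theta_limit."""
        return tau_s * math.log(theta_limit_deg / theta0_deg)

    # e.g. growth from a 0.1 deg disturbance to a 10 deg mission constraint:
    for tau in (50.0, 200.0):      # s; resonant slosh can shorten tau sharply
        print(tau, round(time_to_limit(0.1, 10.0, tau), 1))
    ```

    This is why an accurate NTC from the SSTR-derived slosh parameters matters: a resonant tank can shorten tau enough that the constraint is reached before critical mission events complete.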

  20. Modelling and multi objective optimization of WEDM of commercially Monel super alloy using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu

    2016-09-01

    In this research work, a multi-response optimization technique has been developed using traditional desirability analysis and a non-traditional particle swarm optimization technique (for different customer priorities) in wire electrical discharge machining (WEDM). Monel 400 was selected as the work material for experimentation. The effects of key process parameters, such as pulse on time (TON), pulse off time (TOFF), peak current (IP) and wire feed (WF), on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the MRR and SR responses were modelled empirically through regression analysis; the developed models can be used by machinists to predict MRR and SR over a wide range of input parameters. The optimization of multiple responses was carried out to satisfy the priorities of multiple users by using the Taguchi-desirability function method and the particle swarm optimization technique. Analysis of variance (ANOVA) was also applied to investigate the effect of the influential parameters. Finally, confirmation experiments were conducted for the optimal set of machining parameters, and the improvement was verified.
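
    The desirability analysis can be sketched as follows: each response is mapped onto d in [0, 1] (larger-is-better for MRR, smaller-is-better for SR) and the maps are combined by a weighted geometric mean. The linear desirability shapes, response ranges and weights below are hypothetical, not the study's settings.

    ```python
    def d_larger(y, lo, hi):
        """Larger-is-better desirability, clipped to [0, 1]."""
        return min(max((y - lo) / (hi - lo), 0.0), 1.0)

    def d_smaller(y, lo, hi):
        """Smaller-is-better desirability, clipped to [0, 1]."""
        return min(max((hi - y) / (hi - lo), 0.0), 1.0)

    def composite(ds, weights):
        """Weighted geometric mean of individual desirabilities."""
        s = sum(weights)
        out = 1.0
        for d, w in zip(ds, weights):
            out *= d ** (w / s)
        return out

    d_mrr = d_larger(12.0, 5.0, 15.0)    # mm^3/min, hypothetical range
    d_sr = d_smaller(2.4, 1.5, 4.0)      # um Ra, hypothetical range
    print(round(composite([d_mrr, d_sr], [1.0, 1.0]), 3))
    ```

    Changing the weights encodes the "different customer priorities" the abstract mentions: a user who cares mostly about surface finish would upweight d_sr, shifting which parameter setting maximizes the composite score.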

  1. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. Impact analysis of the key factors on the sampling level shows that increasing the external coefficient or the internal coefficient has a negative influence on the sampling level, the changing rate of the potential market has no significant influence, and repeat purchase has a positive one. Using logistic analysis and regression analysis, a global sensitivity analysis examines the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameter estimates are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
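
    The external and internal coefficients are those of a Bass-type diffusion model; below is a minimal discrete-time sketch in which free samples seed the initial adopter pool. The seeding mechanism and all parameter values are assumptions for illustration, not the paper's model.

    ```python
    def bass_adopters(p, q, m, samples, steps=40):
        """Cumulative adopters under external (p) and internal (q) influence."""
        N = float(samples)                      # samples act as initial adopters
        path = [N]
        for _ in range(steps):
            N += (p + q * N / m) * (m - N)      # Bass hazard applied to remainder
            path.append(N)
        return path

    with_samples = bass_adopters(0.03, 0.38, 10000, samples=500)
    no_samples = bass_adopters(0.03, 0.38, 10000, samples=0)
    print(round(with_samples[10]), round(no_samples[10]))
    ```

    The comparison shows why sampling level interacts with the diffusion coefficients: when p or q is already large, diffusion takes off quickly on its own, which is consistent with the abstract's finding that larger coefficients reduce the optimal sampling level.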

  2. A Model to Assess the Risk of Ice Accretion Due to Ice Crystal Ingestion in a Turbofan Engine and its Effects on Performance

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Veres, Joseph P.; Wright, William B.; Struk, Peter M.

    2013-01-01

    The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that were attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was one or more of the following anomalies: degraded engine performance, engine roll back, compressor surge and stall, and flameout of the combustor. The main focus of this research is the development of a computational tool that can estimate whether there is a risk of ice accretion by tracking key parameters through the compression system blade rows at all engine operating points within the flight trajectory. The tool has an engine system thermodynamic cycle code, coupled with a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor blade rows. Assumptions are made to predict the complex physics involved in engine icing. Specifically, the code does not directly estimate ice accretion and does not have models for particle breakup or erosion. Two key parameters have been suggested as conditions that must be met at the same location for ice accretion to occur: the local wet-bulb temperature to be near freezing or below and the local melt ratio must be above 10%. These parameters were deduced from analyzing laboratory icing test data and are the criteria used to predict the possibility of ice accretion within an engine including the specific blade row where it could occur. Once the possibility of accretion is determined from these parameters, the degree of blockage due to ice accretion on the local stator vane can be estimated from an empirical model of ice growth rate and time spent at that operating point in the flight trajectory. 
    The computational tool can be used to assess specific turbine engines for their susceptibility to ice accretion in an ice crystal environment.
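
    The two screening criteria named above can be written directly as a check over blade-row conditions; the thresholds follow the text, while the blade-row names and values are invented for illustration.

    ```python
    def accretion_risk(rows, t_wb_max_c=0.0, melt_min=0.10):
        """Return names of blade rows where both accretion criteria are met:
        wet-bulb temperature near/below freezing AND melt ratio above 10%."""
        return [name for name, t_wb_c, melt in rows
                if t_wb_c <= t_wb_max_c and melt > melt_min]

    # (blade row, local wet-bulb temperature in deg C, local melt ratio)
    rows = [("IGV", -12.0, 0.02), ("R1", -4.0, 0.08),
            ("S2", -1.5, 0.15), ("R3", 6.0, 0.30)]
    print(accretion_risk(rows))    # -> ['S2']: cold enough and >10% melt
    ```

    Running such a check at every operating point of the flight trajectory identifies not just whether accretion is possible but the specific blade row where it could occur, which is how the tool is described as being used.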

  3. Microbially enhanced dissolution and reductive dechlorination of PCE by a mixed culture: Model validation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Abriola, Linda M.; Amos, Benjamin K.; Suchomel, Eric J.; Pennell, Kurt D.; Löffler, Frank E.; Christ, John A.

    2013-08-01

    Reductive dechlorination catalyzed by organohalide-respiring bacteria is often considered for remediation of non-aqueous phase liquid (NAPL) source zones due to cost savings, ease of implementation, regulatory acceptance, and sustainability. Despite knowledge of the key dechlorinators, an understanding of the processes and factors that control NAPL dissolution rates and detoxification (i.e., ethene formation) is lacking. A recent column study demonstrated a 5-fold cumulative enhancement in tetrachloroethene (PCE) dissolution and ethene formation (Amos et al., 2009). Spatial and temporal monitoring of key geochemical and microbial (i.e., Geobacter lovleyi and Dehalococcoides mccartyi strains) parameters in the column generated a data set used herein as the basis for refinement and testing of a multiphase, compositional transport model. The refined model is capable of simulating the reactive transport of multiple chemical constituents produced and consumed by organohalide-respiring bacteria and accounts for substrate limitations and competitive inhibition. Parameter estimation techniques were used to optimize the values of sensitive microbial kinetic parameters, including maximum utilization rates, biomass yield coefficients, and endogenous decay rates. Comparison and calibration of model simulations with the experimental data demonstrate that the model is able to accurately reproduce measured effluent concentrations, while delineating trends in dechlorinator growth and reductive dechlorination kinetics along the column. Sensitivity analyses performed on the optimized model parameters indicate that the rates of PCE and cis-1,2-dichloroethene (cis-DCE) transformation and Dehalococcoides growth govern bioenhanced dissolution, as long as electron donor (i.e., hydrogen flux) is not limiting. 
Dissolution enhancements were shown to be independent of cis-DCE accumulation; however, accumulation of cis-DCE, as well as column length and flow rate (i.e., column residence time), strongly influenced the extent of reductive dechlorination. When cis-DCE inhibition was neglected, the model over-predicted ethene production ten-fold, while reductions in residence time (i.e., a two-fold decrease in column length or two-fold increase in flow rate) resulted in a more than 70% decline in ethene production. These results suggest that spatial and temporal variations in microbial community composition and activity must be understood to model, predict, and manage bioenhanced NAPL dissolution.
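
    The competitive-inhibition effect of cis-DCE described above can be sketched with a generic Monod rate law; the functional form and parameter values are hedged assumptions for illustration, not the calibrated model.

    ```python
    # Monod utilization of PCE with competitive inhibition by cis-DCE:
    # rate = qmax * X * S / (Ks * (1 + I / Ki) + S), all units arbitrary.
    def pce_rate(S_pce, S_cdce, X, qmax=1.0, Ks=0.5, Ki=0.3):
        """PCE utilization rate with competitive inhibition by cis-DCE."""
        return qmax * X * S_pce / (Ks * (1.0 + S_cdce / Ki) + S_pce)

    X = 0.1                                   # biomass (arbitrary units)
    print(round(pce_rate(1.0, 0.0, X), 4))    # uninhibited rate
    print(round(pce_rate(1.0, 0.6, X), 4))    # slower once cis-DCE accumulates
    ```

    The inhibition term raises the effective half-saturation constant as cis-DCE accumulates, which is the mechanism behind the over-prediction of ethene production when inhibition is neglected.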

  4. Progress in Aluminum Electrolysis Control and Future Direction for Smart Aluminum Electrolysis Plant

    NASA Astrophysics Data System (ADS)

    Zhang, Hongliang; Li, Tianshuang; Li, Jie; Yang, Shuai; Zou, Zhong

    2017-02-01

    The industrial aluminum reduction cell is an electrochemistry reactor that operates under high temperatures and highly corrosive conditions. However, these conditions have restricted the measurement of key control parameters, making the control of aluminum reduction cells a difficult problem in the industry. Because aluminum electrolysis control systems have a significant economic influence, substantial research has been conducted on control algorithms, control systems and information systems for aluminum reduction cells. This article first summarizes the development of control systems and then focuses on the progress made since 2000, including alumina concentration control, temperature control and electrolyte molecular ratio control, fault diagnosis, cell condition prediction and control system expansion. Based on these studies, the concept of a smart aluminum electrolysis plant is proposed. The frame construction, key problems and current progress are introduced. Finally, several future directions are discussed.

  5. RAYLEIGH–TAYLOR UNSTABLE FLAMES—FAST OR FASTER?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicks, E. P., E-mail: eph2001@columbia.edu

    2015-04-20

Rayleigh–Taylor (RT) unstable flames play a key role in the explosions of supernovae Ia. However, the dynamics of these flames are still not well understood. RT unstable flames are affected by both the RT instability of the flame front and by RT-generated turbulence. The coexistence of these factors complicates the choice of flame speed subgrid models for full-star Type Ia simulations. Both processes can stretch and wrinkle the flame surface, increasing its area and, therefore, the burning rate. In past research, subgrid models have been based on either the RT instability or turbulence setting the flame speed. We evaluate both models, checking their assumptions and their ability to correctly predict the turbulent flame speed. Specifically, we analyze a large parameter study of 3D direct numerical simulations of RT unstable model flames. This study varies both the simulation domain width and the gravity in order to probe a wide range of flame behaviors. We show that RT unstable flames are different from traditional turbulent flames: they are thinner rather than thicker when turbulence is stronger. We also show that none of the several different types of turbulent flame speed models accurately predicts measured flame speeds. In addition, we find that the RT flame speed model only correctly predicts the measured flame speed in a certain parameter regime. Finally, we propose that the formation of cusps may be the factor causing the flame to propagate more quickly than predicted by the RT model.

  6. Rayleigh-Taylor Unstable Flames -- Fast or Faster?

    NASA Astrophysics Data System (ADS)

    Hicks, E. P.

    2015-04-01

    Rayleigh-Taylor (RT) unstable flames play a key role in the explosions of supernovae Ia. However, the dynamics of these flames are still not well understood. RT unstable flames are affected by both the RT instability of the flame front and by RT-generated turbulence. The coexistence of these factors complicates the choice of flame speed subgrid models for full-star Type Ia simulations. Both processes can stretch and wrinkle the flame surface, increasing its area and, therefore, the burning rate. In past research, subgrid models have been based on either the RT instability or turbulence setting the flame speed. We evaluate both models, checking their assumptions and their ability to correctly predict the turbulent flame speed. Specifically, we analyze a large parameter study of 3D direct numerical simulations of RT unstable model flames. This study varies both the simulation domain width and the gravity in order to probe a wide range of flame behaviors. We show that RT unstable flames are different from traditional turbulent flames: they are thinner rather than thicker when turbulence is stronger. We also show that none of the several different types of turbulent flame speed models accurately predicts measured flame speeds. In addition, we find that the RT flame speed model only correctly predicts the measured flame speed in a certain parameter regime. Finally, we propose that the formation of cusps may be the factor causing the flame to propagate more quickly than predicted by the RT model.
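RT-type subgrid prescriptions of the kind evaluated above typically set the flame speed from buoyant growth at the largest unstable scale, something like s_RT = C * sqrt(A g L). The coefficient and input values below are hypothetical, chosen only to show the scaling, not taken from the paper.

```python
import math

def rt_flame_speed(atwood, gravity, scale, C=0.5):
    """RT-type prescription: flame speed set by buoyancy at the largest
    unstable scale L, s = C * sqrt(A * g * L). C is an assumed O(1) constant."""
    return C * math.sqrt(atwood * gravity * scale)

# Hypothetical white-dwarf-like numbers (cgs): A = 0.1, g = 1e9, L = 1e5 cm.
s1 = rt_flame_speed(0.1, 1.0e9, 1.0e5)
s2 = rt_flame_speed(0.1, 1.0e9, 4.0e5)  # 4x larger domain width
```

The sqrt scaling means quadrupling the domain width doubles the prescribed speed, which is why the parameter study varies domain width to test the model.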

  7. Application of Unmanned Aircraft System Instrumentation to Study Coastal Geochemistry

    NASA Astrophysics Data System (ADS)

    Coffin, R. B.; Osburn, C. L.; Smith, J. P.

    2016-02-01

Evaluation of key coastal geochemical cycles requires spatially thorough data to address diverse topics. In many field studies, fixed-station data taken from ship operations do not provide a complete understanding of key research questions. In complicated systems, where physical, chemical and biological parameters must be integrated, data taken from research vessels need to be interpreted across large spatial areas. New Unmanned Aircraft System (UAS) instrumentation coupled with shipboard data can provide the spatial coverage needed for a thorough evaluation of coastal processes. This presentation will provide field data related to UAS application in two diverse environments. One study focuses on the flux of carbon dioxide and methane to the atmosphere from Alaskan Arctic tundra and the shallow Beaufort Sea coastal region; gas chemistry from samples is used to predict the relative fluxes to the atmosphere. A second study applies bio-optical analyses to differentiate between DOC and lignin in the Gulf of Mexico coastal water column. This wide range of parameters in diverse ecosystems was selected to show the current capability of UAS instrumentation and its potential for addressing large-scale questions about climate change and carbon cycling in coastal waters.

  8. Oxygen consumption by bovine granulosa cells with prediction of oxygen transport in preantral follicles.

    PubMed

    Li, Dongxing; Redding, Gabe P; Bronlund, John E

    2013-01-01

    The rate of oxygen consumption by granulosa cells is a key parameter in mathematical models that describe oxygen transport across ovarian follicles. This work measured the oxygen consumption rate of bovine granulosa cells in vitro to be in the range 2.1-3.3×10⁻¹⁶ mol cell⁻¹ s⁻¹ (0.16-0.25 mol m⁻³ s⁻¹). The implications of the rates for oxygen transport in large bovine preantral follicles were examined using a mathematical model. The results indicate that oocyte oxygenation becomes increasingly constrained as preantral follicles grow, reaching hypoxic levels near the point of antrum formation. Beyond a preantral follicle radius of 134 µm, oxygen cannot reach the oocyte surface at typical values of model parameters. Since reported sizes of large bovine preantral follicles range from 58 to 145 µm in radius, this suggests that oocyte oxygenation is possible in all but the largest preantral follicles, which are on the verge of antrum formation. In preantral bovine follicles, the oxygen consumption rate of granulosa cells and fluid voidage will be the key determinants of oxygen levels across the follicle.
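The follicle-size limit described above follows from steady-state diffusion with zero-order consumption in a sphere, c(r) = c_s - Q(R^2 - r^2)/(6D), so the centre just reaches anoxia at R = sqrt(6 D c_s / Q). In this sketch the consumption rate Q is taken from the reported range, while the diffusivity and surface concentration are assumed order-of-magnitude values, not figures from the paper.

```python
import math

D = 2.0e-9   # m^2/s, assumed effective diffusivity of O2 in follicle tissue
c_s = 0.2    # mol/m^3, assumed O2 concentration at the follicle surface
Q = 0.2      # mol m^-3 s^-1, granulosa consumption (within the reported range)

# Zero-order consumption in a sphere: c(r) = c_s - Q*(R^2 - r^2)/(6*D),
# so the centre concentration hits zero when R = sqrt(6*D*c_s/Q).
R_crit = math.sqrt(6.0 * D * c_s / Q)   # ~110 um with these assumed values
```

That this toy estimate lands near the 134 µm threshold reported in the abstract is encouraging but coincidental to the assumed D and c_s; the paper's model includes fluid voidage and other details omitted here.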

  9. Parameter Estimation with Almost No Public Communication for Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-06-01

    One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw keys can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.
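The parameter-estimation step can be illustrated with a purely classical toy model of a Gaussian-modulated channel: Bob's data y = sqrt(T) x + noise, and the transmissivity T is estimated from the covariance of the full raw data sets, with nothing sacrificed. This is only a statistics sketch of the idea, not the security analysis of the paper.

```python
import random

random.seed(42)

# Toy Gaussian-modulated channel: Bob receives y = sqrt(T)*x + noise.
T_true, noise_sd, n = 0.5, 0.1, 20000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [T_true ** 0.5 * x + random.gauss(0.0, noise_sd) for x in xs]

mx = sum(xs) / n
my = sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
var = sum((x - mx) ** 2 for x in xs) / n

T_hat = (cov / var) ** 2  # estimated transmissivity from ALL raw data
```

Using every raw sample for the estimate, as the abstract argues continuous-variable protocols permit, shrinks the estimator variance compared with sacrificing a disclosed subset.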

  10. A clinically parameterized mathematical model of Shigella immunity to inform vaccine design

    PubMed Central

    Wahid, Rezwanul; Toapanta, Franklin R.; Simon, Jakub K.; Sztein, Marcelo B.

    2018-01-01

We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacterium that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine. PMID:29304144

  11. A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.

    PubMed

    Davis, Courtney L; Wahid, Rezwanul; Toapanta, Franklin R; Simon, Jakub K; Sztein, Marcelo B

    2018-01-01

We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacterium that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.
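The Latin hypercube sampling used above for parameter estimation can be sketched in a few lines: each parameter range is divided into N equal strata, one draw is taken per stratum, and the strata are randomly paired across parameters. The two parameter ranges below are hypothetical, not the model's.

```python
import random

def latin_hypercube(n_samples, bounds, rng=random.Random(0)):
    """One stratified draw per interval per dimension, randomly paired
    across dimensions. bounds is a list of (lo, hi) ranges."""
    dims = []
    for lo, hi in bounds:
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing across dimensions
        width = (hi - lo) / n_samples
        dims.append([lo + (s + rng.random()) * width for s in strata])
    return list(zip(*dims))  # n_samples points

# e.g. two hypothetical rate parameters spanning [0.1, 1] and [1e-3, 1e-1]
points = latin_hypercube(10, [(0.1, 1.0), (1e-3, 1e-1)])
```

Compared with plain Monte Carlo, this guarantees every stratum of every parameter is visited, which matters when fitting a model whose behavior varies sharply across its parameter space.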

  12. First time-dependent study of H{sub 2} and H{sub 3}{sup +} ortho-para chemistry in the diffuse interstellar medium: Observations meet theoretical predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albertsson, T.; Semenov, D.; Henning, Th.

The chemistry in the diffuse interstellar medium (ISM) initiates the gradual increase of molecular complexity during the life cycle of matter. A key molecule that enables build-up of new molecular bonds and new molecules via proton donation is H{sub 3}{sup +}. Its evolution is tightly related to molecular hydrogen and thought to be well understood. However, recent observations of ortho and para lines of H{sub 2} and H{sub 3}{sup +} in the diffuse ISM showed a puzzling discrepancy in nuclear spin excitation temperatures and populations between these two key species. H{sub 3}{sup +}, unlike H{sub 2}, seems to be out of thermal equilibrium, contrary to the predictions of modern astrochemical models. We conduct the first time-dependent modeling of the para-fractions of H{sub 2} and H{sub 3}{sup +} in the diffuse ISM and compare our results to a set of line-of-sight observations, including new measurements presented in this study. We isolate a set of key reactions for H{sub 3}{sup +} and find that the destruction of the lowest rotational states of H{sub 3}{sup +} by dissociative recombination largely controls its ortho/para ratio. A plausible agreement with observations cannot be achieved unless a ratio larger than 1:5 for the destruction of (1, 1)- and (1, 0)-states of H{sub 3}{sup +} is assumed. Additionally, an increased cosmic-ray ionization rate to 10{sup –15} s{sup –1} further improves the fit whereas variations of other individual physical parameters, such as density and chemical age, have only a minor effect on the predicted ortho/para ratios. Thus, our study calls for new laboratory measurements of the dissociative recombination rate and branching ratio of the key ion H{sub 3}{sup +} under interstellar conditions.

  13. First Time-dependent Study of H2 and H_3^+ Ortho-Para Chemistry in the Diffuse Interstellar Medium: Observations Meet Theoretical Predictions

    NASA Astrophysics Data System (ADS)

    Albertsson, T.; Indriolo, N.; Kreckel, H.; Semenov, D.; Crabtree, K. N.; Henning, Th.

    2014-05-01

    The chemistry in the diffuse interstellar medium (ISM) initiates the gradual increase of molecular complexity during the life cycle of matter. A key molecule that enables build-up of new molecular bonds and new molecules via proton donation is H_3^+. Its evolution is tightly related to molecular hydrogen and thought to be well understood. However, recent observations of ortho and para lines of H2 and H_3^+ in the diffuse ISM showed a puzzling discrepancy in nuclear spin excitation temperatures and populations between these two key species. H_3^+, unlike H2, seems to be out of thermal equilibrium, contrary to the predictions of modern astrochemical models. We conduct the first time-dependent modeling of the para-fractions of H2 and H_3^+ in the diffuse ISM and compare our results to a set of line-of-sight observations, including new measurements presented in this study. We isolate a set of key reactions for H_3^+ and find that the destruction of the lowest rotational states of H_3^+ by dissociative recombination largely controls its ortho/para ratio. A plausible agreement with observations cannot be achieved unless a ratio larger than 1:5 for the destruction of (1, 1)- and (1, 0)-states of H_3^+ is assumed. Additionally, an increased cosmic-ray ionization rate to 10-15 s-1 further improves the fit whereas variations of other individual physical parameters, such as density and chemical age, have only a minor effect on the predicted ortho/para ratios. Thus, our study calls for new laboratory measurements of the dissociative recombination rate and branching ratio of the key ion H_{3}^{+} under interstellar conditions. Partly based on observations collected at the European Organization for Astronomical Research in the Southern Hemisphere, Chile, as part of program 088.C-0351.

  14. Effect of varying two key parameters in simulating evacuation for a dormitory in China

    NASA Astrophysics Data System (ADS)

    Lei, Wenjun; Li, Angui; Gao, Ran

    2013-01-01

Student dormitories are both living and resting areas for students in their spare time. Dormitories contain many small rooms, and the students in them are densely distributed; high occupant density is the main characteristic of student dormitories. In an accident such as a fire or earthquake, the losses can be severe. Computer evacuation models developed overseas are commonly applied in working out safety management schemes. The average minimum widths of the corridor and the exit are two key parameters affecting evacuation from a dormitory. The effect of varying these two parameters is studied in this paper, taking a dormitory in our university as an example. Evacuation performance is predicted with the software FDS + Evac; the default values in the software are used and adjusted through a field survey. The effect of varying either of the two parameters is discussed. The simulated results agree well with the experimental results. From our study it appears that the evacuation time is not proportional to the evacuation distance, a phenomenon we have named “the closer is not the faster”. For the building researched in this article, a corridor width of 3 m is the most appropriate, and a suitable exit width for evacuation is about 2.5 to 3 m. The number of people has great influence on walking speed. The purpose of this study is to optimize the building in favor of personnel evacuation, so that damage can be minimized.
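The sensitivity of evacuation time to exit width can be reasoned about with the simple hydraulic flow model often used alongside agent simulations: total time is roughly walking time plus queueing time N/(f·W) at the exit. The specific flow value and occupant numbers below are assumed textbook-style figures, not outputs of FDS + Evac.

```python
def evac_time(n_people, width_m, dist_m, speed=1.2, specific_flow=1.3):
    """Hydraulic evacuation model: walking time plus exit-queueing time.
    speed in m/s and specific_flow in persons/(m*s) are assumed values."""
    return dist_m / speed + n_people / (specific_flow * width_m)

t_narrow = evac_time(300, 1.5, 30.0)  # 1.5 m exit
t_wide = evac_time(300, 3.0, 30.0)    # 3.0 m exit
```

Doubling the exit width halves only the queueing term, not the walking term, which is one reason simulated evacuation time is not proportional to evacuation distance or width.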

  15. Hydrologic Modeling in the Kenai River Watershed using Event Based Calibration

    NASA Astrophysics Data System (ADS)

    Wells, B.; Toniolo, H. A.; Stuefer, S. L.

    2015-12-01

Understanding hydrologic changes is key to preparing for possible future scenarios. On the Kenai Peninsula in Alaska, the yearly salmon runs provide a valuable stimulus to the economy: they are the focus of a large commercial fishing fleet as well as a prime tourist attraction. Modeling of anadromous waters provides a tool that assists in the prediction of future salmon run size. Beaver Creek, in Kenai, Alaska, is a lowland stream that has been modeled using the Army Corps of Engineers event-based modeling package HEC-HMS. With the use of historic precipitation and discharge data, the model was calibrated to observed discharge values. The hydrologic parameters were measured in the field or calculated, while soil parameters were estimated and adjusted during the calibration. With the calibrated parameters for HEC-HMS, discharge estimates can be used by other researchers studying the area and can help guide communities and officials to make better-educated decisions regarding the changing hydrology in the area and the economic drivers tied to it.
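Event-based calibration of the kind described above is usually scored with a goodness-of-fit metric on observed versus simulated discharge; the Nash-Sutcliffe efficiency is the standard choice in hydrology. The discharge series below is made up for illustration, not Beaver Creek data.

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - SSE/SST. 1.0 is a perfect fit; 0.0 means the model is no
    better than predicting the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

# Hypothetical observed vs simulated event discharge (m^3/s)
obs = [1.0, 2.0, 4.0, 3.0, 2.0]
sim = [1.1, 1.8, 3.9, 3.2, 2.1]
score = nash_sutcliffe(obs, sim)
```

During calibration, the estimated soil parameters are adjusted to push this score toward 1.0 on the observed events.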

  16. Determination of key diffusion and partition parameters and their use in migration modelling of benzophenone from low-density polyethylene (LDPE) into different foodstuffs.

    PubMed

    Maia, Joaquim; Rodríguez-Bernaldo de Quirós, Ana; Sendón, Raquel; Cruz, José Manuel; Seiler, Annika; Franz, Roland; Simoneau, Catherine; Castle, Laurence; Driffield, Malcolm; Mercea, Peter; Oldring, Peter; Tosa, Valer; Paseiro, Perfecto

    2016-01-01

    The mass transport process (migration) of a model substance, benzophenone (BZP), from LDPE into selected foodstuffs at three temperatures was studied. A mathematical model based on Fick's Second Law of Diffusion was used to simulate the migration process and a good correlation between experimental and predicted values was found. The acquired results contribute to a better understanding of this phenomenon and the parameters so-derived were incorporated into the migration module of the recently launched FACET tool (Flavourings, Additives and Food Contact Materials Exposure Tool). The migration tests were carried out at different time-temperature conditions, and BZP was extracted from LDPE and analysed by HPLC-DAD. With all data, the parameters for migration modelling (diffusion and partition coefficients) were calculated. Results showed that the diffusion coefficients (within both the polymer and the foodstuff) are greatly affected by the temperature and food's physical state, whereas the partition coefficient was affected significantly only by food characteristics, particularly fat content.
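The Fickian migration kinetics behind such models can be sketched with Crank's series solution for a plane sheet releasing into a well-mixed, effectively infinite food phase. This is a simplified textbook case (no partition limit at the interface), not the FACET migration module itself, and the diffusion coefficient below is only an assumed order of magnitude for BZP in LDPE.

```python
import math

def migrated_fraction(D, t, l, terms=50):
    """Crank's plane-sheet solution: fraction of migrant released from a
    film of thickness l (both faces in contact) after time t.
    M_t/M_inf = 1 - sum_n 8/((2n+1)^2 pi^2) * exp(-(2n+1)^2 pi^2 D t / l^2)."""
    tau = D * t / (l * l)
    s = 0.0
    for n in range(terms):
        k = (2 * n + 1) ** 2 * math.pi ** 2
        s += 8.0 / k * math.exp(-k * tau)
    return 1.0 - s

# Assumed values: D = 1e-13 m^2/s, film thickness 100 um, and a contact
# time chosen so that the dimensionless time D*t/l^2 equals 0.05.
frac = migrated_fraction(1e-13, 0.05 * (1e-4) ** 2 / 1e-13, 1e-4)  # ~0.50
```

The strong temperature dependence reported in the abstract enters through D, which typically follows an Arrhenius-type law; the partition coefficient would additionally cap the equilibrium amount transferred.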

  17. Process Integration and Optimization of ICME Carbon Fiber Composites for Vehicle Lightweighting: A Preliminary Development

    DOE PAGES

    Xu, Hongyi; Li, Yang; Zeng, Danielle

    2017-01-02

Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compound (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction and structural performance simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.

  18. Robust parameter extraction for decision support using multimodal intensive care data

    PubMed Central

    Clifford, G.D.; Long, W.J.; Moody, G.B.; Szolovits, P.

    2008-01-01

    Digital information flow within the intensive care unit (ICU) continues to grow, with advances in technology and computational biology. Recent developments in the integration and archiving of these data have resulted in new opportunities for data analysis and clinical feedback. New problems associated with ICU databases have also arisen. ICU data are high-dimensional, often sparse, asynchronous and irregularly sampled, as well as being non-stationary, noisy and subject to frequent exogenous perturbations by clinical staff. Relationships between different physiological parameters are usually nonlinear (except within restricted ranges), and the equipment used to measure the observables is often inherently error-prone and biased. The prior probabilities associated with an individual's genetics, pre-existing conditions, lifestyle and ongoing medical treatment all affect prediction and classification accuracy. In this paper, we describe some of the key problems and associated methods that hold promise for robust parameter extraction and data fusion for use in clinical decision support in the ICU. PMID:18936019

  19. Biochemical methane potential (BMP) tests: Reducing test time by early parameter estimation.

    PubMed

    Da Silva, C; Astals, S; Peces, M; Campos, J L; Guerrero, L

    2018-01-01

The biochemical methane potential (BMP) test is a key analytical technique to assess the implementation and optimisation of anaerobic biotechnologies. However, this technique is characterised by long testing times (from 20 to >100 days), which is not suitable for waste utilities, consulting companies or plant operators whose decision-making processes cannot be held up for such a long time. This study develops a statistically robust mathematical strategy using sensitivity functions for early prediction of the BMP first-order model parameters, i.e. the methane yield (B₀) and the kinetic constant rate (k). The minimum testing time for early parameter estimation showed a potential correlation with the k value, where (i) slowly biodegradable substrates (k ≤ 0.1 d⁻¹) have minimum testing times of ≥15 days, (ii) moderately biodegradable substrates (0.1
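The first-order model underlying the early-estimation strategy is B(t) = B₀(1 - e^(-kt)). A minimal sketch of fitting it to a truncated series is below: k is scanned over a grid and, for each k, the best B₀ follows in closed form from linear least squares. The synthetic, noiseless data and parameter values are illustrative, not the paper's estimation procedure.

```python
import math

def fit_first_order(times, B, k_grid=None):
    """Fit B(t) = B0 * (1 - exp(-k*t)) by scanning k; for each candidate k
    the optimal B0 is the linear least-squares solution in closed form."""
    k_grid = k_grid or [i / 1000.0 for i in range(1, 1001)]  # 0.001..1 d^-1
    best = None
    for k in k_grid:
        f = [1.0 - math.exp(-k * t) for t in times]
        B0 = sum(b * fi for b, fi in zip(B, f)) / sum(fi * fi for fi in f)
        sse = sum((b - B0 * fi) ** 2 for b, fi in zip(B, f))
        if best is None or sse < best[0]:
            best = (sse, k, B0)
    return best[1], best[2]

# Synthetic noiseless curve (B0 = 350 mL CH4/g VS, k = 0.15 d^-1),
# truncated at day 15 to mimic early parameter estimation.
times = list(range(1, 16))
B = [350.0 * (1.0 - math.exp(-0.15 * t)) for t in times]
k_hat, B0_hat = fit_first_order(times, B)
```

On noiseless data the truncated fit recovers both parameters; with real measurement noise the abstract's point holds: the slower the kinetics (small k), the longer the test must run before k and B₀ become identifiable.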

  20. Formability Analysis of Bamboo Fabric Reinforced Poly (Lactic) Acid Composites

    PubMed Central

    M. R., Nurul Fazita; Jayaraman, Krishnan; Bhattacharyya, Debes

    2016-01-01

    Poly (lactic) acid (PLA) composites have made their way into various applications that may require thermoforming to produce 3D shapes. Wrinkles are common in many forming processes and identification of the forming parameters to prevent them in the useful part of the mechanical component is a key consideration. Better prediction of such defects helps to significantly reduce the time required for a tooling design process. The purpose of the experiment discussed here is to investigate the effects of different test parameters on the occurrence of deformations during sheet forming of double curvature shapes with bamboo fabric reinforced-PLA composites. The results demonstrated that the domes formed using hot tooling conditions were better in quality than those formed using cold tooling conditions. Wrinkles were more profound in the warp direction of the composite domes compared to the weft direction. Grid Strain Analysis (GSA) identifies the regions of severe deformation and provides useful information regarding the optimisation of processing parameters. PMID:28773662

  1. Integrating effective drought index (EDI) and remote sensing derived parameters for agricultural drought assessment and prediction in Bundelkhand region of India

    NASA Astrophysics Data System (ADS)

    Padhee, S. K.; Nikam, B. R.; Aggarwal, S. P.; Garg, V.

    2014-11-01

Drought is an extreme condition caused by moisture deficiency and has adverse effects on society. Agricultural drought occurs when limited soil moisture produces serious crop stress and affects crop productivity. The soil moisture regimes of rain-fed and irrigated agriculture behave differently on both temporal and spatial scales, which means the impact of meteorologically and/or hydrologically induced agricultural drought will differ between rain-fed and irrigated areas. However, there is a lack of agricultural drought assessment systems for Indian conditions that consider the irrigated and rain-fed agricultural spheres as separate entities. On the other hand, recent advancements in earth observation through satellite-based remote sensing provide researchers continuous monitoring of soil moisture, land surface temperature and vegetation indices at the global scale, which can aid in agricultural drought assessment and monitoring. Keeping this in mind, the present study was envisaged with the objective of developing an agricultural drought assessment and prediction technique by spatially and temporally assimilating the effective drought index (EDI) with remote sensing derived parameters. The proposed technique takes into account the difference in response of rain-fed and irrigated agricultural systems to agricultural drought in the Bundelkhand region (the study area). The key idea was to utilize integrated scenarios from meteorological observations and soil moisture distribution. EDI condition maps were prepared from daily precipitation data recorded by the Indian Meteorological Department (IMD) at stations distributed within the study area. With the aid of frequent MODIS products, viz.
vegetation indices (VIs) and land surface temperature (LST), the coarse-resolution soil moisture product from the European Space Agency (ESA) was downscaled to a finer resolution using a linking model based on the Triangle method. EDI and the spatially downscaled soil moisture products were then used, together with the MODIS 16-day NDVI product, as key elements to assess and predict agricultural drought in the irrigated and rain-fed agricultural systems of the Bundelkhand region of India. Meteorological drought, soil moisture deficiency and NDVI degradation were evaluated for every pixel of the image in a GIS environment, for agricultural impact assessment at a 16-day temporal scale for the Rabi seasons (October-April) between 2000 and 2009. Based on the statistical analysis, good correlations were found between EDI and soil moisture anomaly, and between NDVI anomaly and soil moisture anomaly lagged by 16 days, and these results were exploited to develop a linear prediction model. The predictive capability of the developed model was validated by comparing the spatial distribution of predicted NDVI with the MODIS NDVI product at the beginning of the following Rabi season (Oct-Dec 2010). The predictions, driven by meteorological data for 2010, were found to yield good results. The developed model has good predictive capability when future meteorological (rainfall) data are available, which enhances its utility in analyzing future agricultural conditions.
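The lagged linear prediction step can be sketched as a simple regression of NDVI anomaly on the soil-moisture anomaly one 16-day composite earlier. The anomaly series and the "true" coefficients below are synthetic and hypothetical, used only to show the lag-regression mechanics.

```python
# Synthetic anomaly series: NDVI responds to soil moisture one 16-day
# composite later, via a hypothetical linear relation y = 0.02 + 0.4*x.
sm = [0.5, -0.2, 0.1, 0.8, -0.5, 0.3, -0.1, 0.6]
ndvi = [None] + [0.02 + 0.4 * s for s in sm[:-1]]  # first value unobserved

x = sm[:-1]    # soil-moisture anomaly at composite t
y = ndvi[1:]   # NDVI anomaly at composite t+1
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

pred_next = a + b * sm[-1]  # forecast NDVI anomaly one composite ahead
```

In the study the regression is fit per pixel against EDI and downscaled soil moisture; here a single noiseless predictor keeps the sketch short.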

  2. A New Approach to Modeling Densities and Equilibria of Ice and Gas Hydrate Phases

    NASA Astrophysics Data System (ADS)

    Zyvoloski, G.; Lucia, A.; Lewis, K. C.

    2011-12-01

    The Gibbs-Helmholtz Constrained (GHC) equation is a new cubic equation of state that was recently derived by Lucia (2010) and Lucia et al. (2011) by constraining the energy parameter in the Soave form of the Redlich-Kwong equation to satisfy the Gibbs-Helmholtz equation. The key attributes of the GHC equation are: 1) It is a multi-scale equation because it uses the internal energy of departure, UD, as a natural bridge between the molecular and bulk phase length scales. 2) It does not require acentric factors, volume translation, regression of parameters to experimental data, binary (kij) interaction parameters, or other forms of empirical correlations. 3) It is a predictive equation of state because it uses a database of values of UD determined from NTP Monte Carlo simulations. 4) It can readily account for differences in molecular size and shape. 5) It has been successfully applied to non-electrolyte mixtures as well as weak and strong aqueous electrolyte mixtures over wide ranges of temperature, pressure and composition to predict liquid density and phase equilibrium with up to four phases. 6) It has been extensively validated with experimental data. 7) The AAD% error between predicted and experimental liquid density is 1% while the AAD% error in phase equilibrium predictions is 2.5%. 8) It has been used successfully within the subsurface flow simulation program FEHM. In this work we describe recent extensions of the multi-scale predictive GHC equation to modeling the phase densities and equilibrium behavior of hexagonal ice and gas hydrates. In particular, we show that radial distribution functions, which can be determined by NTP Monte Carlo simulations, can be used to establish correct standard state fugacities of 1h ice and gas hydrates. From this, it is straightforward to determine both the phase density of ice or gas hydrates as well as any equilibrium involving ice and/or hydrate phases. 
    A number of numerical results for mixtures of N2, O2, CH4, CO2, water, and NaCl in permafrost conditions are presented to illustrate the predictive capabilities of the multi-scale GHC equation. In particular, we show that the GHC equation correctly predicts 1) The density of ice Ih and methane hydrate to within 1%. 2) The melting curve for hexagonal ice. 3) The hydrate-gas phase co-existence curve. 4) Various phase equilibria involving ice and hydrate phases. We also show that the GHC equation approach can be readily incorporated into subsurface flow simulation programs like FEHM to predict the behavior of permafrost and other reservoirs where ice and/or hydrates are present. Many geometric illustrations are used to elucidate key concepts. References A. Lucia, A Multi-Scale Gibbs-Helmholtz Constrained Cubic Equation of State. J. Thermodynamics: Special Issue on Advances in Gas Hydrate Thermodynamics and Transport Properties. Available on-line [doi:10.1155/2010/238365]. A. Lucia, B.M. Bonk, A. Roy and R.R. Waterman, A Multi-Scale Framework for Multi-Phase Equilibrium Flash. Comput. Chem. Engng. In press.
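
    For readers unfamiliar with the baseline being modified, the Soave form of the Redlich-Kwong (SRK) equation can be sketched in a few lines. The sketch below solves the SRK cubic for the vapor compressibility factor of methane using textbook critical constants; note that it uses the standard acentric-factor alpha function, which is exactly the empirical element the GHC equation replaces with a Gibbs-Helmholtz-constrained energy parameter. All values are illustrative, not taken from the work above.

```python
import numpy as np

# Illustrative SRK compressibility calculation for methane; critical
# constants are textbook values, not GHC-derived parameters.
R = 8.314                                  # J/(mol K)
Tc, Pc, omega = 190.56, 45.99e5, 0.011     # methane

def srk_Z(T, P):
    a = 0.42748 * R**2 * Tc**2 / Pc
    b = 0.08664 * R * Tc / Pc
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1 + m * (1 - np.sqrt(T / Tc)))**2   # empirical a(T) term the GHC equation re-derives
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    # SRK in compressibility form: Z^3 - Z^2 + (A - B - B^2) Z - A*B = 0
    roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
    real = roots[np.isreal(roots)].real
    return real.max()                        # largest real root = vapor phase

Z = srk_Z(300.0, 1.0e5)                      # near-ambient: Z just below 1
```

    At these near-ideal conditions the vapor root sits close to unity; the GHC modification changes how a(T) is obtained, not the cubic form itself.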

  3. Machine learning with neural networks - a case study of estimating thermal conductivity with ancient well-log data

    NASA Astrophysics Data System (ADS)

    Harrison, Benjamin; Sandiford, Mike; McLaren, Sandra

    2016-04-01

    Supervised machine learning algorithms attempt to build a predictive model from empirical data. Their aim is to take a known set of input data along with known responses, and adaptively train a model to generate predictions for new inputs. A key attraction of these methods is their ability to act as function approximators where defining an explicit relationship between variables is infeasible. We present a novel means of estimating thermal conductivity using a supervised self-organising map (SOM) algorithm, trained on about 150 thermal conductivity measurements and a suite of five electric logs common to 14 boreholes. A key motivation of the study was to supplement the small number of direct measurements of thermal conductivity with the decades of borehole data acquired in the Gippsland Basin, to produce more confident calculations of surface heat flow. A previous attempt to generate estimates from well-log data in the Gippsland Basin using classic petrophysical log interpretation methods produced reasonable synthetic thermal conductivity logs for only four boreholes. The current study has extended this to a further ten boreholes. Interesting outcomes from the study are: the method appears stable at very low sample sizes (< ~100); the SOM permits quantitative analysis of essentially qualitative, uncalibrated well-log data; and the method achieves moderate success at prediction with minimal effort spent tuning the algorithm's parameters.
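
    The supervised self-organising map idea can be sketched by appending the target to each training vector and masking it at prediction time. Below is a minimal numpy sketch on fabricated data (five synthetic log curves and a synthetic conductivity; the real study used ~150 laboratory measurements), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for five electric-log curves and the measured
# thermal conductivity; all data here are fabricated for illustration.
X = rng.random((150, 5))
y = X @ np.array([1.0, 0.5, -0.3, 0.8, 0.2])
y = (y - y.min()) / (y.max() - y.min())          # scale target to [0, 1]

# Supervised SOM: append the target as a sixth component during training.
data = np.hstack([X, y[:, None]])
grid = rng.random((10, 10, 6))                   # 10x10 map of 6-vectors
ii, jj = np.meshgrid(np.arange(10), np.arange(10), indexing="ij")

n_iter = 3000
for t in range(n_iter):
    v = data[rng.integers(len(data))]
    d = np.linalg.norm(grid - v, axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
    frac = 1 - t / n_iter
    lr, sigma = 0.5 * frac, 3.0 * frac + 0.5         # decaying rate and radius
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    grid += lr * h[:, :, None] * (v - grid)

def predict(x):
    # Match on the five log components only, then read off the target.
    d = np.linalg.norm(grid[:, :, :5] - x, axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)
    return grid[bi, bj, 5]

pred = np.array([predict(x) for x in X])
```

    The masked-component trick is what makes a SOM usable as a regressor on uncalibrated inputs: the map is organised jointly, but queried on logs alone.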

  4. Event-based total suspended sediment particle size distribution model

    NASA Astrophysics Data System (ADS)

    Thompson, Jennifer; Sattar, Ahmed M. A.; Gharabaghi, Bahram; Warner, Richard C.

    2016-05-01

    One of the most challenging modelling tasks in hydrology is prediction of the total suspended sediment particle size distribution (TSS-PSD) in stormwater runoff generated from exposed soil surfaces at active construction sites and surface mining operations. The main objective of this study is to employ gene expression programming (GEP) and artificial neural networks (ANN) to develop a new model with the ability to more accurately predict the TSS-PSD by taking advantage of both event-specific and site-specific factors in the model. To compile the data for this study, laboratory scale experiments using rainfall simulators were conducted on fourteen different soils to obtain TSS-PSD. This data is supplemented with field data from three construction sites in Ontario over a period of two years to capture the effect of transport and deposition within the site. The combined data sets provide a wide range of key overlooked site-specific and storm event-specific factors. Both parent soil and TSS-PSD in runoff are quantified by fitting each to a lognormal distribution. Compared to existing regression models, the developed model more accurately predicted the TSS-PSD using a more comprehensive list of key model input parameters. Employment of the new model will increase the efficiency of deployment of required best management practices, designed based on TSS-PSD, to minimize potential adverse effects of construction site runoff on aquatic life in the receiving watercourses.
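
    The lognormal quantification of a PSD mentioned above reduces to two parameters, the geometric mean diameter and the geometric standard deviation, which follow directly from the log-transformed sample. A minimal sketch on synthetic diameters (the true values are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic suspended-sediment diameters (um), drawn from a lognormal
true_gmd, true_gsd = 20.0, 2.5        # geometric mean and geometric std dev
d = rng.lognormal(np.log(true_gmd), np.log(true_gsd), 5000)

# Lognormal fit: log-diameters are normal, so the geometric mean/std are
# the exponentials of the arithmetic mean/std of the log-diameters.
log_d = np.log(d)
gmd = np.exp(log_d.mean())
gsd = np.exp(log_d.std(ddof=1))
```

    With the PSD summarized this way, comparing parent soil to runoff sediment becomes a comparison of two (gmd, gsd) pairs rather than full curves.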

  5. An accurate empirical method to predict the adsorption strength for π-orbital contained molecules on two dimensional materials.

    PubMed

    Li, Hongping; Wang, Changwei; Xun, Suhang; He, Jing; Jiang, Wei; Zhang, Ming; Zhu, Wenshuai; Li, Huaming

    2018-06-01

    Obtaining the adsorption strength is the key point for materials design and parameter optimization in chemical engineering. Here we report a simple but accurate method to estimate adsorptive energies by counting the number of π-orbital-involved atoms, based on theoretical computations for hexagonal boron nitride (h-BN) and graphene. Computational results from density functional theory (DFT) as well as spin-component scaled second-order Møller-Plesset perturbation theory (SCS-MP2) both confirm that the adsorptive energies correlate well with the number of π-orbital-involved atoms for π-orbital-containing molecules. The selected molecules (adsorbates), which contain C, N, S, and O atoms, are commonly used in the chemical industry. The predictions of the proposed formulas agree well with the current and previous DFT-calculated values on both h-BN and graphene surfaces. Further, the method can also be used to predict the adsorptive energies of small π-orbital-containing molecules on BN and carbon nanotubes. The interaction type for these adsorptions is typical π-π interaction. Further investigation shows that these interactions originate from polar interactions between the adsorbents and adsorbates. Hence, for separation or removal of aromatic molecules, modifying the aromaticity and polarity of both adsorbents and adsorbates will be the key point for experiments. Copyright © 2018 Elsevier Inc. All rights reserved.
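
    The reported correlation, adsorption energy scaling with the number of π-orbital-involved atoms, amounts to fitting a line E ≈ k·N + c. A sketch with hypothetical (N, E) pairs; the numbers below are invented for illustration and are not the study's DFT/SCS-MP2 values:

```python
import numpy as np

# Hypothetical (N, E_ads) pairs: N = number of pi-orbital-involved atoms,
# E_ads in kcal/mol. Made-up values, chosen to be roughly linear.
N = np.array([4, 5, 6, 8, 10, 12])
E = np.array([-5.1, -6.2, -7.4, -9.9, -12.3, -14.8])

k, c = np.polyfit(N, E, 1)       # least-squares line E ~ k*N + c

def predict_E(n_pi):
    # Predicted adsorption energy for a molecule with n_pi pi-involved atoms
    return k * n_pi + c
```

    Once k and c are fixed for a given surface, the method needs only an atom count per adsorbate, which is what makes it attractive as a screening tool.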

  6. Hydrograph Predictions of Glacial Lake Outburst Floods From an Ice-Dammed Lake

    NASA Astrophysics Data System (ADS)

    McCoy, S. W.; Jacquet, J.; McGrath, D.; Koschitzki, R.; Okuinghttons, J.

    2017-12-01

    Understanding the time evolution of glacial lake outburst floods (GLOFs), and ultimately predicting peak discharge, is crucial to mitigating the impacts of GLOFs on downstream communities and understanding concomitant surface change. The dearth of in situ measurements taken during GLOFs has left many GLOF models currently in use untested. Here we present a dataset of 13 GLOFs from Lago Cachet Dos, Aysen Region, Chile, in which we detail measurements of key environmental variables (total volume drained, lake temperature, and lake inflow rate) and high-temporal-resolution discharge measurements at the source lake, in addition to well-constrained ice thickness and bedrock topography. Using this dataset we test two common empirical equations as well as the physically based model of Spring-Hutter-Clarke. We find that the commonly used empirical relationships, based solely on lake volume drained, fail to predict the large variability in observed peak discharges from Lago Cachet Dos. This disagreement is likely because these equations do not consider additional environmental variables that we show also control peak discharge, primarily lake water temperature and the rate of meltwater inflow to the source lake. We find that the Spring-Hutter-Clarke model can accurately simulate the exponentially rising hydrographs that are characteristic of ice-dammed GLOFs, as well as the order-of-magnitude variation in peak discharge between events, if the hydraulic roughness is allowed to be a free fitting parameter. However, the Spring-Hutter-Clarke model overpredicts peak discharge in all cases by 10 to 35%. The systematic overprediction of peak discharge by the model is related to its abrupt flood termination, which misses the observed steep falling limb of the flood hydrograph.
Although satisfactory model fits are produced, the range in hydraulic roughness required to obtain these fits across all events was large, which suggests that current models do not completely capture the physics of these systems, thus limiting their ability to truly predict peak discharges using only independently constrained parameters. We suggest what some of these missing physics might be.

  7. Evaluation of charge mobility in organic materials: from localized to delocalized descriptions at a first-principles level.

    PubMed

    Shuai, Zhigang; Wang, Linjun; Li, Qikai

    2011-03-04

    The carrier mobility for carbon electronic materials is an important parameter for optoelectronics. We report here some recently developed theoretical tools to predict the mobility without any free parameters. Carrier scatterings with phonons and traps are the key factors in evaluating the mobility. We consider three major scattering regimes: i) where the molecular internal vibration severely induces charge self-trapping and, thus, the hopping mechanism dominates; ii) where both intermolecular and intramolecular scatterings come to play roles, so the Holstein-Peierls polaron model is applied; and, iii) where charge is well delocalized with coherence length comparable with acoustic phonon wavelength, so that a deformation potential approach is more appropriate. We develop computational methods at the first-principles level for the three different cases that have extensive potential application in rationalizing material design.

  8. Superconductivity. Observation of broken time-reversal symmetry in the heavy-fermion superconductor UPt₃.

    PubMed

    Schemm, E R; Gannon, W J; Wishne, C M; Halperin, W P; Kapitulnik, A

    2014-07-11

    Models of superconductivity in unconventional materials can be experimentally differentiated by the predictions they make for the symmetries of the superconducting order parameter. In the case of the heavy-fermion superconductor UPt3, a key question is whether its multiple superconducting phases preserve or break time-reversal symmetry (TRS). We tested for asymmetry in the phase shift between left and right circularly polarized light reflected from a single crystal of UPt3 at normal incidence and found that this so-called polar Kerr effect appears only below the lower of the two zero-field superconducting transition temperatures. Our results provide evidence for broken TRS in the low-temperature superconducting phase of UPt3, implying a complex two-component order parameter for superconductivity in this system. Copyright © 2014, American Association for the Advancement of Science.

  9. An approximate model for cancellous bone screw fixation.

    PubMed

    Brown, C J; Sinclair, R A; Day, A; Hess, B; Procter, P

    2013-04-01

    This paper presents a finite element (FE) model to identify parameters that affect the performance of an improved cancellous bone screw fixation technique, and hence potentially improve fracture treatment. In cancellous bone of low apparent density, it can be difficult to achieve adequate screw fixation and hence provide stable fracture fixation that enables bone healing. Data from predictive FE models indicate that cements can have a significant potential to improve screw holding power in cancellous bone. These FE models are used to demonstrate the key parameters that determine pull-out strength in a variety of screw, bone and cement set-ups, and to compare the effectiveness of different configurations. The paper concludes that significant advantages, up to an order of magnitude, in screw pull-out strength in cancellous bone might be gained by the appropriate use of a currently approved calcium phosphate cement.

  10. Application of tire dynamics to aircraft landing gear design analysis

    NASA Technical Reports Server (NTRS)

    Black, R. J.

    1983-01-01

    The tire plays a key part in many analyses used for design of aircraft landing gear. Examples include structural design of wheels, landing gear shimmy, brake whirl, chatter and squeal, complex combination of chatter and shimmy on main landing gear (MLG) systems, anti-skid performance, gear walk, and rough terrain loads and performance. Tire parameters needed in the various analyses are discussed. Two tire models are discussed for shimmy analysis, the modified Moreland approach and the von Schlippe-Dietrich approach. It is shown that the Moreland model can be derived from the von Schlippe-Dietrich model by certain approximations. The remaining analysis areas are discussed in general terms and the tire parameters needed for each are identified. Accurate tire data allows more accurate design analysis and the correct prediction of dynamic performance of aircraft landing gear.

  11. Discrete Event Simulation Modeling and Analysis of Key Leader Engagements

    DTIC Science & Technology

    2012-06-01

    to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK , which give probabilities for being corrupt, having key leader...HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component...whether the local Green player has resource critical knowledge by using the parameter pRK . It schedules an EndResourceKnowledgeRequest event, passing

  12. Dynamic imaging of adaptive stress response pathway activation for prediction of drug induced liver injury.

    PubMed

    Wink, Steven; Hiemstra, Steven W; Huppelschoten, Suzanne; Klip, Janna E; van de Water, Bob

    2018-05-01

    Drug-induced liver injury (DILI) remains a concern during drug treatment and development. There is an urgent need for improved mechanistic understanding and prediction of DILI liabilities using in vitro approaches. We have established and characterized a panel of liver cell models containing mechanism-based fluorescent protein toxicity pathway reporters to quantitatively assess the dynamics of cellular stress response pathway activation at the single-cell level using automated live cell imaging. We have systematically evaluated the application of four key adaptive stress pathway reporters for the prediction of DILI liability: SRXN1-GFP (oxidative stress), CHOP-GFP (ER stress/UPR response), p21 (p53-mediated DNA damage-related response) and ICAM1 (NF-κB-mediated inflammatory signaling). 118 FDA-labeled drugs at five human-exposure-relevant concentrations were evaluated for reporter activation using live cell confocal imaging. Quantitative data analysis revealed activation of single or multiple reporters by most drugs in a concentration- and time-dependent manner. Hierarchical clustering of time course dynamics and refined single-cell analysis allowed key events in DILI liability to be identified. Concentration response modeling was performed to calculate benchmark concentrations (BMCs). Extracted temporal dynamic parameters and BMCs were used to assess the predictive power of sub-lethal adaptive stress pathway activation. Although cellular adaptive responses were activated by non-DILI and severe-DILI compounds alike, the dynamic behavior and lower BMCs of pathway activation were sufficiently distinct between these compound classes. The high-level detailed temporal- and concentration-dependent evaluation of the dynamics of adaptive stress pathway activation adds to the overall understanding and prediction of drug-induced liver liabilities.

  13. CSI feedback-based CS for underwater acoustic adaptive modulation OFDM system with channel prediction

    NASA Astrophysics Data System (ADS)

    Kuai, Xiao-yan; Sun, Hai-xin; Qi, Jie; Cheng, En; Xu, Xiao-ka; Guo, Yu-hui; Chen, You-gan

    2014-06-01

    In this paper, we investigate the performance of adaptive modulation (AM) orthogonal frequency division multiplexing (OFDM) in underwater acoustic (UWA) communications. The aim is to solve the problem of large feedback overhead for channel state information (CSI) on every subcarrier. A novel CSI feedback scheme is proposed based on the theory of compressed sensing (CS): the receiver feeds back only the sparse channel parameters. Additionally, the channel state is predicted every few symbols to make AM realizable in practice. We describe a linear channel prediction algorithm which is used in adaptive transmission. The system has been tested over a real underwater acoustic channel. The linear channel prediction makes AM transmission techniques more feasible for acoustic channel communications. Simulations and experiments show that significant improvements can be obtained in both bit error rate (BER) and throughput with the AM scheme compared with a fixed Quadrature Phase Shift Keying (QPSK) modulation scheme. Moreover, the performance with standard CS outperforms the Discrete Cosine Transform (DCT) method.
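
    The linear channel prediction step can be sketched as an autoregressive predictor whose coefficients are fitted to past channel gains by least squares. The sketch below uses a synthetic, slowly fading gain, not measured UWA data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Slowly fading channel gain (hypothetical AR(1)-like process)
n = 500
h = np.zeros(n)
h[0] = 1.0
for k in range(1, n):
    h[k] = 0.98 * h[k - 1] + 0.02 * rng.standard_normal()

p = 4  # predictor order
# Least-squares system: h[k] ~ a1*h[k-1] + ... + ap*h[k-p]
A = np.column_stack([h[p - i - 1 : n - i - 1] for i in range(p)])
b = h[p:]
a, *_ = np.linalg.lstsq(A, b, rcond=None)

pred = A @ a                                   # one-step-ahead predictions
rmse = np.sqrt(np.mean((pred - b) ** 2))
```

    The predicted gain, rather than stale fed-back CSI, is what the transmitter would use to pick the modulation order for the next few symbols.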

  14. Prediction of S-wave velocity using complete ensemble empirical mode decomposition and neural networks

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Hachay, Olga; Zaourar, Naima

    2017-04-01

    One of the key elements in hydrocarbon reservoir characterization is the S-wave velocity (Vs). Since traditional estimating methods often fail to accurately predict this physical parameter, a new approach that takes into account its non-stationary and non-linear properties is needed. To this end, a prediction model based on complete ensemble empirical mode decomposition (CEEMD) and a multilayer perceptron artificial neural network (MLP ANN) is suggested to compute Vs from P-wave velocity (Vp). Using a fine-to-coarse reconstruction algorithm based on CEEMD, the Vp log data is decomposed into a high-frequency (HF) component, a low-frequency (LF) component and a trend component. Then, different combinations of these components are used as inputs to the MLP ANN algorithm for estimating the Vs log. Applications to well logs from different geological settings illustrate that the Vs values predicted using the MLP ANN with the HF, LF and trend combination as inputs are more accurate than those obtained with traditional estimating methods. Keywords: S-wave velocity, CEEMD, multilayer perceptron neural networks.
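
    CEEMD itself is involved, but the fine-to-coarse HF/LF/trend split and the subsequent combination can be illustrated with a crude stand-in: moving-average smoothing for the decomposition and least squares in place of the MLP ANN. All data below are synthetic:

```python
import numpy as np

def smooth(x, w):
    # Moving average with edge padding -- a crude stand-in for the
    # fine-to-coarse CEEMD reconstruction described above.
    k = np.ones(w) / w
    padded = np.pad(x, w // 2, mode="edge")
    return np.convolve(padded, k, mode="same")[w // 2 : w // 2 + len(x)]

rng = np.random.default_rng(3)
n = 400
depth = np.arange(n)
# Synthetic Vp log: linear trend + cyclic layering + noise (m/s)
vp = 3000 + 2.0 * depth + 150 * np.sin(depth / 15) + 40 * rng.standard_normal(n)
vs = vp / 1.9 + 30 * rng.standard_normal(n)   # hypothetical Vp/Vs link

trend = smooth(vp, 101)          # coarsest scale
lf = smooth(vp - trend, 21)      # intermediate scale
hf = vp - trend - lf             # fine-scale residual

# Least-squares combination of the components, standing in for the MLP
A = np.column_stack([hf, lf, trend, np.ones(n)])
c, *_ = np.linalg.lstsq(A, vs, rcond=None)
vs_hat = A @ c
```

    Feeding the scales separately, rather than raw Vp, is the point of the decomposition: each component can carry its own weight in the regression.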

  15. Soil moisture dynamics modeling considering multi-layer root zone.

    PubMed

    Kumar, R; Shankar, V; Jat, M K

    2013-01-01

    Moisture uptake by plants from soil is a key process for plant growth and movement of water in the soil-plant system. A non-linear root water uptake (RWU) model was developed for a multi-layer crop root zone. The model comprised two parts: (1) model formulation and (2) moisture flow prediction. The developed model was tested for its efficiency in predicting moisture depletion in a non-uniform root zone. A field experiment on wheat (Triticum aestivum) was conducted in the sub-temperate, sub-humid agro-climate of Solan, Himachal Pradesh, India. Model-predicted soil moisture parameters, i.e., moisture status at various depths, moisture depletion and the soil moisture profile in the root zone, are in good agreement with experimental results. The results of the simulation emphasize the utility of the RWU model across different agro-climatic regions. The model can be used for sound irrigation management, especially in water-scarce humid, temperate, arid and semi-arid regions, and can also be integrated with a water transport equation to predict solute uptake by plant biomass.

  16. Novel Estimation of Pilot Performance Characteristics

    NASA Technical Reports Server (NTRS)

    Bachelder, Edward N.; Aponso, Bimal

    2017-01-01

    Two mechanisms internal to the pilot affect performance during a tracking task: 1) pilot equalization (i.e., lead/lag); and 2) pilot gain (i.e., sensitivity to the error signal). For some applications McRuer's Crossover Model can be used to anticipate what equalization will be employed to control a vehicle's dynamics. McRuer also established approximate time delays associated with different types of equalization: the more cognitive processing required due to equalization difficulty, the larger the time delay. However, the Crossover Model does not predict what the pilot gain will be. A nonlinear pilot control technique, observed and coined by the authors as 'amplitude clipping', is shown to improve stability and performance and reduce workload when employed with vehicle dynamics that require high lead compensation by the pilot. Combining linear and nonlinear methods, a novel approach is used to measure the pilot control parameters when amplitude clipping is present, allowing precise real-time measurement of key pilot control parameters. Based on the results of an experiment designed to probe the primary drivers of workload, a method is developed that estimates pilot spare capacity from readily observable measures and is tested for generality using multi-axis flight data. This paper documents the initial steps toward developing a novel, simple objective metric for assessing pilot workload and its variation over time across a wide variety of tasks. Additionally, it offers a tangible, easily implementable methodology for anticipating a pilot's operating parameters and workload, and an effective design tool. The model shows promise in being able to precisely predict the actual pilot settings and workload, and the observed tolerance of pilot parameter variation over the course of operation. Finally, an approach is proposed for generating Cooper-Harper ratings based on the workload and parameter estimation methodology.

  17. Recurrent noise-induced phase singularities in drifting patterns.

    PubMed

    Clerc, M G; Coulibaly, S; del Campo, F; Garcia-Nustes, M A; Louvergneaux, E; Wilson, M

    2015-11-01

    We show that the key ingredients for creating recurrent traveling spatial phase defects in drifting patterns are a noise-sustained structure regime together with the vicinity of a phase transition, that is, a spatial region where the control parameter lies close to the threshold for pattern formation. They both generate specific favorable initial conditions for local spatial gradients, phase, and/or amplitude. Predictions from the stochastic convective Ginzburg-Landau equation with real coefficients agree quite well with experiments carried out on a Kerr medium submitted to shifted optical feedback that evidence noise-induced traveling phase slips and vortex phase-singularities.

  18. On the Superficial Gas Velocity in Deep Gas-Solid Fluidized Beds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tingwen; Grace, John; Shadle, Lawrence

    2011-11-15

    The superficial gas velocity is one of the key parameters used to determine the flow hydrodynamics in gas–solids fluidized beds. However, the superficial velocity varies with height in practice, and there is no consistent basis for its specification. Different approaches to determine the superficial gas velocity in a deep gas–solids system are shown to cause difficulties in developing models and in comparing predictions with experimental results. In addition, the reference conditions for superficial gas velocity are important in modeling of deep gas–solids systems where there is a considerable pressure drop.
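
    The reference-condition issue can be made concrete: for an isothermal ideal gas, the volumetric flow, and hence the superficial velocity, scales inversely with local pressure, so a deep bed with a substantial pressure drop has a measurably lower superficial velocity at the bottom than at the top. A sketch with hypothetical numbers:

```python
# Isothermal ideal-gas rescaling of superficial velocity.
# All numbers below are hypothetical, chosen only for illustration.
Q_std = 0.05          # m^3/s volumetric flow at standard pressure
A_bed = 0.01          # m^2 bed cross-section
P_std = 101.325e3     # Pa, reference (standard) pressure
P_top = 101.325e3     # Pa at the bed surface
P_bottom = 121.325e3  # Pa at the bottom (20 kPa bed pressure drop)

def superficial_velocity(P):
    # Volumetric flow scales inversely with local pressure for an ideal gas
    return (Q_std * P_std / P) / A_bed

U_top = superficial_velocity(P_top)       # 5.0 m/s
U_bottom = superficial_velocity(P_bottom) # ~17% lower
```

    Whether U is quoted at the top, the bottom, or standard conditions changes the number by the full pressure ratio, which is exactly the inconsistency the abstract warns about when comparing models with experiments.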

  19. Development of a miniature fan motor

    NASA Astrophysics Data System (ADS)

    Wang, Chien-Chang; Yao, Yeong-Der; Liang, Kun-Yi; Huang, Chung-Chun; Chang, Yu-Choung

    2012-04-01

    A novel compact axial flux fan motor was developed. Such a micromotor is a potential candidate for use as the cooling solution for next-generation mobile devices, for example smart phones and pico-projectors. The key parameters of the motor, such as back electromotive force, cogging torque, and axial preload, are predicted using the finite element method. In addition, new approaches are proposed to measure these items, and the corresponding experimental results are in good agreement with the simulated ones. Moreover, the undesired vibration harmonic is successfully suppressed, and the fan motor delivers a high static pressure and air flow rate.

  20. Heritability in the genomics era--concepts and misconceptions.

    PubMed

    Visscher, Peter M; Hill, William G; Wray, Naomi R

    2008-04-01

    Heritability allows a comparison of the relative importance of genes and environment to the variation of traits within and across populations. The concept of heritability and its definition as an estimable, dimensionless population parameter was introduced by Sewall Wright and Ronald Fisher nearly a century ago. Despite continuous misunderstandings and controversies over its use and application, heritability remains key to the response to selection in evolutionary biology and agriculture, and to the prediction of disease risk in medicine. Recent reports of substantial heritability for gene expression and new estimation methods using marker data highlight the relevance of heritability in the genomics era.
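
    A classical route to estimating narrow-sense heritability is the midparent-offspring regression, whose slope equals h² under a purely additive model. A simulation sketch (synthetic pedigree, additive model only; no shared environment):

```python
import numpy as np

rng = np.random.default_rng(7)
n, h2 = 20000, 0.6                       # pairs and true heritability

# Parent breeding values (variance h2) and phenotypes (total variance 1)
Af = rng.normal(0, np.sqrt(h2), n)
Am = rng.normal(0, np.sqrt(h2), n)
Pf = Af + rng.normal(0, np.sqrt(1 - h2), n)
Pm = Am + rng.normal(0, np.sqrt(1 - h2), n)

# Offspring: mean parental breeding value plus Mendelian segregation noise
Ao = (Af + Am) / 2 + rng.normal(0, np.sqrt(h2 / 2), n)
Po = Ao + rng.normal(0, np.sqrt(1 - h2), n)

# Regression of offspring phenotype on midparent phenotype: slope = h^2
mid = (Pf + Pm) / 2
slope = np.cov(Po, mid)[0, 1] / np.var(mid)
```

    The slope recovers h² because cov(offspring, midparent) = V_A/2 while var(midparent) = V_P/2, so their ratio is V_A/V_P; marker-based methods mentioned in the abstract generalize this idea to unrelated individuals.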

  1. Assessment of Stage 35 With APNASA

    NASA Technical Reports Server (NTRS)

    Celestina, Mark L.; Mulac, Richard

    2009-01-01

    An assessment of APNASA was conducted at NASA Glenn Research Center under the Fundamental Aeronautics Program to determine its predictive capabilities. The geometry selected for this study was Stage 35, a single-stage transonic compressor. A speedline at 100% speed was generated and compared to experimental data at 100% speed for two turbulence models. Performance of the stage at 100% speed and profiles of several key aerodynamic parameters are compared to the survey data downstream of the stator in this report. In addition, hub leakage was modeled and compared to solutions without leakage and the available experimental data.

  2. First trimester prediction of maternal glycemic status.

    PubMed

    Gabbay-Benziv, Rinat; Doyle, Lauren E; Blitzer, Miriam; Baschat, Ahmet A

    2015-05-01

    To predict gestational diabetes mellitus (GDM) or normoglycemic status using first trimester maternal characteristics. We used data from a prospective cohort study. First trimester maternal characteristics were compared between women with and without GDM. The association of these variables with glucose values at the glucose challenge test (GCT) and subsequent GDM was tested to identify key parameters. A predictive algorithm for GDM was developed and receiver operating characteristic (ROC) statistics were used to derive the optimal risk score. We defined the normoglycemic state as a normal GCT together with normal values, whenever obtained, for all four oral glucose tolerance test measurements. Using the same statistical approach, we developed an algorithm to predict the normoglycemic state. Maternal age, race, prior GDM, first trimester BMI, and systolic blood pressure (SBP) were all significantly associated with GDM. Age, BMI, and SBP were also associated with GCT values. The equation constructed by logistic regression analysis and the calculated risk score yielded sensitivity, specificity, positive predictive value, and negative predictive value of 85%, 62%, 13.8%, and 98.3%, respectively, for a cut-off value of 0.042 (ROC-AUC, area under the curve, 0.819; CI, confidence interval, 0.769-0.868). The model constructed for normoglycemia prediction demonstrated lower performance (ROC-AUC 0.707, CI 0.668-0.746). GDM prediction can be achieved during the first trimester encounter by integrating maternal characteristics and basic measurements, while normoglycemic status prediction is less effective.
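
    The risk-score construction described, logistic regression on maternal characteristics followed by ROC analysis, can be sketched on synthetic data; the predictors, coefficients, and sample below are invented for illustration and are not the cohort data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
# Hypothetical standardized predictors (e.g. age, BMI, systolic BP)
X = rng.standard_normal((n, 3))
logit = -2.0 + 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic regression fitted by plain full-batch gradient descent
Xb = np.column_stack([np.ones(n), X])
w = np.zeros(4)
for _ in range(3000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / n

score = Xb @ w                  # the linear risk score

def roc_auc(y, s):
    # Rank-based (Mann-Whitney) area under the ROC curve
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

auc = roc_auc(y, score)
```

    Sweeping a cut-off over the score trades sensitivity against specificity exactly as the abstract's 0.042 threshold does; the AUC summarizes the whole sweep.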

  3. A Mechanism-Based Model for the Prediction of the Metabolic Sites of Steroids Mediated by Cytochrome P450 3A4.

    PubMed

    Dai, Zi-Ru; Ai, Chun-Zhi; Ge, Guang-Bo; He, Yu-Qi; Wu, Jing-Jing; Wang, Jia-Yue; Man, Hui-Zi; Jia, Yan; Yang, Ling

    2015-06-30

    Early prediction of xenobiotic metabolism is essential for drug discovery and development. As the most important human drug-metabolizing enzyme, cytochrome P450 3A4 has a large active cavity and metabolizes a broad spectrum of substrates. The poor substrate specificity of CYP3A4 makes it a huge challenge to predict the metabolic site(s) on its substrates. This study aimed to develop a mechanism-based prediction model built on two key parameters, the binding conformation and the reaction activity of ligands, which could reveal the process of the real metabolic reaction(s) and the site(s) of modification. The newly established model was applied to predict the metabolic site(s) of steroids, a class of CYP3A4-preferred substrates. A set of 38 steroids and 12 non-steroids was randomly divided into training and test sets. Two major metabolic reactions, aliphatic hydroxylation and N-dealkylation, were involved in this study. At least one of the top three predicted metabolic sites was validated by the experimental data. The overall accuracies for the training and test sets were 82.14% and 86.36%, respectively. In summary, a mechanism-based prediction model was established for the first time, which can be used to predict the metabolic site(s) of steroids mediated by CYP3A4 with high accuracy.

  4. Model Adaptation for Prognostics in a Particle Filtering Framework

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar; Goebel, Kai Frank

    2011-01-01

    One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
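
    Augmenting the state vector with a model parameter can be sketched with a small bootstrap particle filter that learns a capacity-fade rate while tracking capacity. The fade model and all numbers below are hypothetical, not the authors' battery model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical capacity-fade truth: capacity shrinks by a constant,
# unknown fraction lam each cycle; we observe it with Gaussian noise.
lam_true, sigma_obs, T = 0.02, 0.01, 60
C = (1 - lam_true) ** np.arange(T)
obs = C + sigma_obs * rng.standard_normal(T)

# Bootstrap particle filter with the model parameter lam in the state.
N = 2000
cap = np.full(N, 1.0)                    # capacity particles
lam = rng.uniform(0.0, 0.1, N)           # parameter particles, to be learned
for z in obs[1:]:
    cap *= 1 - lam                                    # propagate capacity
    lam = np.clip(lam + 0.001 * rng.standard_normal(N), 0.0, 0.2)  # roughening
    w = np.exp(-0.5 * ((z - cap) / sigma_obs) ** 2)   # likelihood weights
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                       # resample
    cap, lam = cap[idx], lam[idx]

lam_hat = lam.mean()                     # should settle near lam_true
```

    Once lam has converged, propagating the particles forward without further updates gives the "tuned model" long-term prediction the abstract refers to; the parameter roughening step is one of the design choices that determines whether the filter stays "well-designed" as the state grows.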

  5. Small scale green infrastructure design to meet different urban hydrological criteria.

    PubMed

    Jia, Z; Tang, S; Luo, W; Li, S; Zhou, M

    2016-04-15

    As small-scale green infrastructures, rain gardens have been widely advocated for urban stormwater management in the contemporary low impact development (LID) era. This paper presents a simple method consisting of hydrological models and matching nomograph plots to provide an informative and practical tool for rain garden sizing and hydrological evaluation. The proposed method considers design storms, infiltration rates and the runoff contribution area ratio of the rain garden, allowing users to size a rain garden for a specific site with hydrological reference and predict overflow of the rain garden under different storms. The nomographs provide a visual presentation of the sensitivity of the different design parameters. Subsequent application of the proposed method to a case study conducted in a sub-humid region in China showed that the method accurately predicted the design storms for the existing rain garden, and the predicted overflows under large storm events were within 13-50% of the measured volumes. The results suggest that the nomograph approach is a practical tool for quick selection or assessment of design options that incorporate key hydrological parameters of rain gardens or other infiltration-type green infrastructure. The graphic approach, as displayed by the nomographs, allows urban planners to demonstrate the hydrological effect of small-scale green infrastructure and gain more support for promoting low impact development. Copyright © 2016 Elsevier Ltd. All rights reserved.
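
    The water-balance reasoning behind such sizing tools can be sketched in a few lines: the design runoff volume must fit within the garden's surface storage plus what infiltrates during the storm. All parameter values below are hypothetical, not the case-study site's:

```python
# Hypothetical water-balance sizing sketch for a rain garden.
catchment_area = 200.0   # m^2 of impervious contributing area
design_storm = 0.030     # m of rainfall (30 mm design event)
runoff_coeff = 0.9       # fraction of rainfall that becomes runoff
infil_rate = 0.02        # m/h infiltration into the native soil
storm_hours = 2.0        # assumed storm duration
ponding_depth = 0.15     # m of allowable surface storage

runoff_volume = runoff_coeff * design_storm * catchment_area   # m^3

# Per m^2 of garden: surface storage plus infiltration during the storm
capacity_per_m2 = ponding_depth + infil_rate * storm_hours
garden_area = runoff_volume / capacity_per_m2

# Runoff contribution area ratio, the key design parameter noted above
ratio = garden_area / catchment_area
```

    A nomograph is essentially this calculation pre-plotted over ranges of storm depth, infiltration rate, and area ratio, so a designer can read the answer graphically instead of recomputing it.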

  6. Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties

    DOE PAGES

    Ilas, Germina; Liljenfeldt, Henrik

    2017-05-19

    Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.
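
A Monte Carlo view of how input uncertainties propagate to calculated decay heat can be sketched with a two-nuclide toy inventory. All inventories, decay energies and uncertainty levels below are invented, and this is not SCALE's uncertainty quantification machinery.

```python
import math
import random

random.seed(7)

# (nuclide, inventory in Bq, energy per decay in J, relative 1-sigma uncertainty)
# The values are placeholders chosen only to make the arithmetic visible.
NUCLIDES = [
    ("Cs-137", 5e15, 1.0e-13, 0.02),
    ("Sr-90",  4e15, 1.8e-13, 0.05),
]

def sample_decay_heat():
    # Perturb each inventory by its relative uncertainty and sum the heat (W)
    return sum(n * random.gauss(1.0, rel) * e for _, n, e, rel in NUCLIDES)

samples = [sample_decay_heat() for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
rel_uncertainty = math.sqrt(var) / mean  # fractional uncertainty on decay heat
```

The individual 2% and 5% input uncertainties combine into a roughly 3% output uncertainty here because the two heat contributions are of similar size; the same bookkeeping, at vastly larger scale, underlies code-level uncertainty evaluation.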

  7. Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Germina; Liljenfeldt, Henrik

    Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.

  8. Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones

    NASA Astrophysics Data System (ADS)

    Mao, X.; Gerhard, J. I.; Barry, D. A.

    2005-12-01

    The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense, non-aqueous phase liquids (DNAPL). This four-year, $5.7 million research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. In addition, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values. This includes determining the statistical correlation between parameters typically employed in standard Michaelis-Menten type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds.
Overall, this work provides a database of the numerical modelling parameters typically employed for simulating TCE dechlorination relevant for a range of system conditions (e.g., bioaugmented, high TCE concentrations, etc.). The significance of the obtained variability of parameters is illustrated with one-dimensional simulations of enhanced anaerobic bioremediation of residual TCE DNAPL.
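
The Michaelis-Menten rate form mentioned above, extended with competitive inhibition of the kind the analysis probes (TCE and cis-DCE suppressing dechlorination of less-chlorinated species), can be written down directly. The functional form is the standard textbook one; the parameter values in the usage example are illustrative assumptions.

```python
def dechlorination_rate(conc, vmax, km, inhibitors=()):
    """Michaelis-Menten dechlorination rate with competitive inhibition.

    conc: aqueous concentration of the compound being dechlorinated.
    vmax: maximum dechlorination rate; km: half-saturation constant.
    inhibitors: iterable of (inhibitor_conc, inhibition_constant) pairs.
    Competitive inhibitors raise the apparent half-saturation constant.
    """
    km_apparent = km * (1 + sum(ci / ki for ci, ki in inhibitors))
    return vmax * conc / (km_apparent + conc)
```

With vmax = 10 and km = 2, the uninhibited rate at conc = 2 is half of vmax; adding an inhibitor at the same concentration as its inhibition constant doubles the apparent km and lowers the rate accordingly.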

  9. Pilot Mental Workload with Predictive System Status Information

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    1998-01-01

    Research has shown a strong pilot preference for predictive information of aircraft system status in the flight deck. However, the mental workload associated with using this predictive information has not been ascertained. The study described here attempted to measure this mental workload. In this simulator experiment, three types of predictive information (none, whether a parameter was changing abnormally, and the time for a parameter to reach an alert range) and four initial times to a parameter alert range (1 minute, 5 minutes, 15 minutes, and ETA+45 minutes) were tested to determine their effects on subjects' mental workload. Subjective workload ratings increased with increasing predictive information (whether a parameter was changing abnormally or the time for a parameter to reach an alert range). Subjective situation awareness decreased with more predictive information but increased with longer initial times to a parameter alert range. Also, subjective focus changed depending on the type of predictive information. Lastly, skin temperature fluctuated less as the initial time to a parameter alert range increased.

  10. Scalar utility theory and proportional processing: what does it actually imply?

    PubMed Central

    Rosenström, Tom; Wiesner, Karoline; Houston, Alasdair I

    2017-01-01

    Scalar Utility Theory (SUT) is a model used to predict animal and human choice behaviour in the context of reward amount, delay to reward, and variability in these quantities (risk preferences). This article reviews and extends SUT, deriving novel predictions. We show that, contrary to what has been implied in the literature, (1) SUT can predict both risk averse and risk prone behaviour for both reward amounts and delays to reward depending on experimental parameters, (2) SUT implies violations of several concepts of rational behaviour (e.g. it violates strong stochastic transitivity and its equivalents, and leads to probability matching) and (3) SUT can predict, but does not always predict, a linear relationship between risk sensitivity in choices and coefficient of variation in the decision-making experiment. SUT derives from Scalar Expectancy Theory which models uncertainty in behavioural timing using a normal distribution. We show that the above conclusions also hold for other distributions, such as the inverse Gaussian distribution derived from drift-diffusion models. A straightforward way to test the key assumptions of SUT is suggested and possible extensions, future prospects and mechanistic underpinnings are discussed. PMID:27288541
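
The scalar property at the heart of SUT (perceptual noise proportional to magnitude) can be made concrete in a few lines. The Gaussian form follows Scalar Expectancy Theory as described above, while the coefficient of variation gamma and the independence of the two value representations are illustrative assumptions.

```python
import math

def p_choose_a(mu_a, mu_b, gamma=0.2):
    """Probability that option A's perceived value exceeds B's when each
    V_i ~ Normal(mu_i, gamma * mu_i) independently (scalar noise)."""
    mean_diff = mu_a - mu_b
    # SD of the difference of two independent normals with scalar noise
    sd_diff = gamma * math.hypot(mu_a, mu_b)
    return 0.5 * (1.0 + math.erf(mean_diff / (sd_diff * math.sqrt(2.0))))
```

Because both the mean difference and its standard deviation scale linearly with the rewards, doubling both amounts leaves the choice probability unchanged: only relative magnitudes matter, which is the scale invariance SUT inherits from Weber-like noise.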

  11. Prediction of surface distress using neural networks

    NASA Astrophysics Data System (ADS)

    Hamdi, Hadiwardoyo, Sigit P.; Correia, A. Gomes; Pereira, Paulo; Cortez, Paulo

    2017-06-01

    Road infrastructures contribute to a healthy economy through the sustainable distribution of goods and services. A road network requires appropriately programmed maintenance treatments in order to keep road assets in good condition, providing maximum safety for road users under a cost-effective approach. Surface distress is the key element for identifying road condition and may be generated by many different factors. In this paper, a new approach is proposed to predict Surface Distress Index (SDI) values following a data-driven approach, applied to data obtained from the Integrated Road Management System (IRMS) database. Artificial Neural Networks (ANNs) are used to predict the SDI index from input variables related to surface distress, i.e., crack area and width, pothole, rutting, patching and depression. The achieved results show that the ANN is able to predict SDI with a high correlation factor (R2 = 0.996). Moreover, a sensitivity analysis was applied to the ANN model, revealing the influence of the most relevant input parameters for SDI prediction, namely rutting (59.8%), crack width (29.9%), crack area (5.0%), patching (3.0%), pothole (1.7%) and depression (0.3%).
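
A minimal data-driven sketch of the idea, with a single linear neuron standing in for the paper's full ANN and an invented linear SDI generator in place of the IRMS data, shows how fitted weights expose the relative influence of the distress inputs.

```python
import random

random.seed(1)

# Invented stand-in for road records: (rutting, crack_width, crack_area) -> SDI.
# The generator weights below are hypothetical, not the paper's sensitivities.
def true_sdi(rutting, crack_width, crack_area):
    return 0.6 * rutting + 0.3 * crack_width + 0.1 * crack_area

data = [(random.random(), random.random(), random.random()) for _ in range(200)]
targets = [true_sdi(*x) for x in data]

# A single linear neuron trained by batch gradient descent (a minimal "ANN")
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(2000):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, t in zip(data, targets):
        y = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = y - t
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(data)
    b -= lr * gb / len(data)

# The learned weights should recover the generator's relative influence ordering
print([round(wi, 2) for wi in w])
```

Ranking inputs by fitted weight magnitude is the linear analogue of the sensitivity analysis described above; a real ANN needs a dedicated sensitivity method because its weights are not directly interpretable.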

  12. Modeling enzyme production with Aspergillus oryzae in pilot scale vessels with different agitation, aeration, and agitator types.

    PubMed

    Albaek, Mads O; Gernaey, Krist V; Hansen, Morten S; Stocks, Stuart M

    2011-08-01

    The purpose of this article is to demonstrate how a model can be constructed such that the progress of a submerged fed-batch fermentation of a filamentous fungus can be predicted with acceptable accuracy. The studied process was enzyme production with Aspergillus oryzae in 550 L pilot plant stirred tank reactors. Different conditions of agitation and aeration were employed, as well as two different impeller geometries. The limiting factor for productivity was oxygen supply to the fermentation broth, and the carbon substrate feed flow rate was controlled by the dissolved oxygen tension. In order to predict the available oxygen transfer in the system, the stoichiometry of the reaction equation, including maintenance substrate consumption, was first determined. A viscosity prediction model was then constructed, based mainly on the biomass concentration, because the rising viscosity of the fermentation broth due to hyphal growth of the fungus leads to significantly lower mass transfer towards the end of the fermentation process. Each compartment of the model was shown to predict the experimental results well. The overall model can be used to predict key process parameters at varying fermentation conditions. Copyright © 2011 Wiley Periodicals, Inc.
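
The mass-transfer bottleneck described above is commonly captured with a power-law kLa correlation. The form is standard in bioreactor scale-up, but every coefficient and exponent below is an illustrative placeholder, not a value fitted in this study.

```python
def kla(power_per_volume_w_m3, superficial_gas_velocity_m_s, viscosity_pa_s,
        a=0.02, b=0.6, c=0.5, d=-0.4):
    """Oxygen transfer coefficient correlation of the common power-law form
    kLa = a * (P/V)^b * vg^c * mu^d  (units depend on the fitted constants).
    a, b, c, d here are assumed values for illustration only."""
    return (a * power_per_volume_w_m3 ** b
            * superficial_gas_velocity_m_s ** c
            * viscosity_pa_s ** d)
```

The negative viscosity exponent reproduces the qualitative behaviour in the abstract: as hyphal growth thickens the broth, kLa, and with it the oxygen-limited feed rate, drops, while more agitation power or gas flow raises it.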

  13. Scalar utility theory and proportional processing: What does it actually imply?

    PubMed

    Rosenström, Tom; Wiesner, Karoline; Houston, Alasdair I

    2016-09-07

    Scalar Utility Theory (SUT) is a model used to predict animal and human choice behaviour in the context of reward amount, delay to reward, and variability in these quantities (risk preferences). This article reviews and extends SUT, deriving novel predictions. We show that, contrary to what has been implied in the literature, (1) SUT can predict both risk averse and risk prone behaviour for both reward amounts and delays to reward depending on experimental parameters, (2) SUT implies violations of several concepts of rational behaviour (e.g. it violates strong stochastic transitivity and its equivalents, and leads to probability matching) and (3) SUT can predict, but does not always predict, a linear relationship between risk sensitivity in choices and coefficient of variation in the decision-making experiment. SUT derives from Scalar Expectancy Theory which models uncertainty in behavioural timing using a normal distribution. We show that the above conclusions also hold for other distributions, such as the inverse Gaussian distribution derived from drift-diffusion models. A straightforward way to test the key assumptions of SUT is suggested and possible extensions, future prospects and mechanistic underpinnings are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Physically-based slope stability modelling and parameter sensitivity: a case study in the Quitite and Papagaio catchments, Rio de Janeiro, Brazil

    NASA Astrophysics Data System (ADS)

    de Lima Neves Seefelder, Carolina; Mergili, Martin

    2016-04-01

    We use the software tools r.slope.stability and TRIGRS to produce factor of safety and slope failure susceptibility maps for the Quitite and Papagaio catchments, Rio de Janeiro, Brazil. The key objective of the work consists in exploring the sensitivity of the model outcomes to the geotechnical (r.slope.stability) and geohydraulic (TRIGRS) parameterization, in order to define suitable parameterization strategies for future slope stability modelling. The two landslide-prone catchments Quitite and Papagaio together cover an area of 4.4 km², extending between 12 and 995 m a.s.l. The study area is dominated by granitic bedrock and soil depths of 1-3 m. Ranges of geotechnical and geohydraulic parameters are derived from literature values. A landslide inventory related to a rainfall event in 1996 (250 mm in 48 hours) is used for model evaluation. We attempt to identify those combinations of effective cohesion and effective internal friction angle yielding the best correspondence with the observed landslide release areas, in terms of the area under the ROC curve (AUCROC) and in terms of the fraction of the area affected by the release of landslides. Thereby we test multiple parameter combinations within defined ranges to derive the slope failure susceptibility (fraction of tested parameter combinations yielding a factor of safety smaller than 1). We use the tool r.slope.stability (comparing the infinite slope stability model and an ellipsoid-based sliding surface model) to test and to optimize the geotechnical parameters, and TRIGRS (a coupled hydraulic-infinite slope stability model) to explore the sensitivity of the model results to the geohydraulic parameters. The model performance in terms of AUCROC is insensitive to the variation of the geotechnical parameterization within much of the tested ranges.
Assuming fully saturated soils, r.slope.stability produces rather conservative predictions, whereby the results yielded with the sliding surface model are more conservative than those yielded with the infinite slope stability model. The sensitivity of AUCROC to variations in the geohydraulic parameters remains small as long as the calculated degree of saturation of the soils is sufficient to result in the prediction of a significant amount of landslide release pixels. Due to the poor sensitivity of AUCROC to variations of the geotechnical and geohydraulic parameters, it is hard to optimize the parameters by means of statistics. Instead, the results produced with many different combinations of parameters correspond reasonably well with the distribution of the observed landslide release areas, even though they vary considerably in terms of their conservativeness. Considering the uncertainty inherent in all geotechnical and geohydraulic data, and the impossibility of capturing the spatial distribution of the parameters by means of laboratory tests in sufficient detail, we conclude that landslide susceptibility maps yielded by catchment-scale physically-based models should not be interpreted in absolute terms. Building on the assumption that our findings are generally valid, we suggest that efforts to develop better strategies for dealing with the uncertainties in the spatial variation of the key parameters should be given priority in future slope stability modelling efforts.
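
The infinite-slope end of the modelling chain is compact enough to sketch. The unit weights, parameter ranges and slope geometry below are illustrative stand-ins for the literature-derived ranges used in the study; only the fully saturated, slope-parallel-seepage form of the factor of safety is standard.

```python
import math
from itertools import product

GAMMA_SOIL, GAMMA_W = 18.0, 9.81  # unit weights, kN/m^3 (assumed values)

def fos_infinite_slope(c_kpa, phi_deg, slope_deg, depth_m, saturated=True):
    """Infinite-slope factor of safety, fully saturated case with
    slope-parallel seepage: cohesion term + buoyancy-reduced friction term."""
    b, phi = math.radians(slope_deg), math.radians(phi_deg)
    cohesion_term = c_kpa / (GAMMA_SOIL * depth_m * math.sin(b) * math.cos(b))
    friction_term = ((GAMMA_SOIL - (GAMMA_W if saturated else 0.0)) / GAMMA_SOIL
                     * math.tan(phi) / math.tan(b))
    return cohesion_term + friction_term

# Slope failure susceptibility as defined above: the fraction of tested
# parameter combinations that yield a factor of safety below 1.
cohesions = [2, 5, 10, 15]   # effective cohesion, kPa (illustrative range)
frictions = [25, 30, 35]     # effective friction angle, degrees
combos = list(product(cohesions, frictions))
unstable = sum(1 for c, p in combos if fos_infinite_slope(c, p, 40, 2.0) < 1)
susceptibility = unstable / len(combos)
```

For this invented 40-degree, 2 m deep slope, only the highest-cohesion combinations stay stable, so the susceptibility value summarizes in one number how much of the tested parameter space predicts failure at that cell.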

  15. Utilization of Integrated Assessment Modeling for determining geologic CO2 storage security

    NASA Astrophysics Data System (ADS)

    Pawar, R.

    2017-12-01

    Geologic storage of carbon dioxide (CO2) has been extensively studied as a potential technology to mitigate the atmospheric concentration of CO2. Multiple international research and development efforts, large-scale demonstrations and commercial projects are helping advance the technology. One of the critical areas of active investigation is prediction of long-term CO2 storage security and risks. A quantitative methodology for predicting a storage site's long-term performance is critical for making key decisions necessary for successful deployment of commercial scale projects, where projects will require quantitative assessments of potential long-term liabilities. These predictions are challenging given that they require simulating CO2 and in-situ fluid movements as well as interactions through the primary storage reservoir, potential leakage pathways (such as wellbores, faults, etc.) and shallow resources such as groundwater aquifers. They need to take into account the inherent variability and uncertainties at geologic sites. This talk will provide an overview of an approach based on integrated assessment modeling (IAM) to predict long-term performance of a geologic storage site including the storage reservoir, potential leakage pathways and shallow groundwater aquifers. The approach utilizes reduced order models (ROMs) that capture the complex physical and chemical interactions resulting from CO2 movement while remaining computationally extremely efficient. Applicability of the approach will be demonstrated through examples that are focused on key storage security questions, such as: What is the probability of leakage of CO2 from a storage reservoir? How does storage security vary for different geologic environments and operational conditions? How do site parameter variability and uncertainties affect storage security?
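
The role a ROM plays inside an IAM can be sketched with a Monte Carlo loop over site uncertainty. The algebraic "ROM", the parameter distributions and the leakage threshold below are all invented for illustration; a real ROM is trained against full physics simulations.

```python
import random

random.seed(42)

# Hypothetical reduced-order model: leaked mass fraction as a cheap algebraic
# function of wellbore permeability and reservoir overpressure (both uncertain).
def leak_fraction_rom(log10_perm, overpressure_mpa):
    return max(0.0, 0.02 * (log10_perm + 15.0)) * overpressure_mpa / 10.0

# Monte Carlo over site uncertainty: P(leaked fraction exceeds 0.01)
trials, exceed = 10000, 0
for _ in range(trials):
    perm = random.gauss(-14.0, 1.0)   # log10 permeability of a leaky well, m^2
    dp = random.uniform(2.0, 8.0)     # reservoir overpressure, MPa
    if leak_fraction_rom(perm, dp) > 0.01:
        exceed += 1

p_exceed = exceed / trials  # estimated exceedance probability
```

Because each ROM evaluation is microseconds rather than hours, tens of thousands of samples are affordable, which is what makes probabilistic answers to "what is the probability of leakage?" tractable.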

  16. Modelling rapid flow response of a tile drained hillslope with explicit representation of preferential flow paths and consideration of equifinal model structures

    NASA Astrophysics Data System (ADS)

    Klaus, Julian; Zehe, Erwin

    2010-05-01

    Rapid water flow along spatially connected, often biologically mediated, flow paths of minimum flow resistance is widely acknowledged to play a key role in runoff generation at the hillslope and small catchment scales, and also in the transport of solutes such as agrochemicals and nutrients in cohesive soils. Especially at tile-drained field sites, connected vertical flow structures such as worm burrows, roots or shrinkage cracks act as shortcuts allowing water to bypass the soil matrix. In the present study we propose a spatially explicit approach that represents worm burrows as connected structures of high conductivity and low retention capacity in a 2D physically based model. With this approach, tile drain discharge and preferential flow patterns in soil observed during the irrigation of a tile-drained hillslope in the Weiherbach catchment were modelled. The model parameters were derived from measurements and are considered uncertain. Given this uncertainty in key factors that organise flow and transport at tile-drained sites, the main objectives of the present study are to shed light on the following three questions: 1. Does a simplified approach that explicitly represents worm burrows as continuous flow paths of small flow resistance and low retention properties in a 2D physically based model allow successful reproduction of event flow response at a tile-drained field site in the Weiherbach catchment? 2. Does the above-described uncertainty in key factors cause equifinality, i.e. are there several model structural setups that reproduce event flow response in an acceptable manner without compromising our physical understanding of the system? 3. If so, what are the key factors that have to be known at high accuracy to reduce the equifinality of model structures?
The issue of equifinality is usually discussed in catchment modelling to indicate that often a large set of conceptual model parameter sets allows acceptable reproduction of the behaviour of the system of interest, in many cases catchment stream flow response. Beven and Binley (1992) suggest that these model structures should be considered equally likely to account for predictive uncertainty. In this study we show that the approach outlined above allows successful prediction of the tile drain discharge and preferential flow patterns in soil observed during the irrigation of a tile-drained hillslope in the Weiherbach catchment. Strikingly, we found considerable equifinality in the model structural setup when key parameters such as the areal density of worm burrows, their hydraulic conductivity and the conductivity of the tile drains were varied within the ranges of either our measurements or measurements reported in the literature. Thirteen different model setups yielded a normalised time-shifted Nash-Sutcliffe efficiency of more than 0.9, which means that more than 90% of the flow variability is explained by the model. Flow volumes were also in good accordance, and timing errors were less than or equal to 20 min (which corresponds to two simulation output time steps). This uncertainty/equifinality could be reduced if more precise data on initial states of the subsurface and on the drainage area of a single drainage tube could be made available. However, such data are currently most difficult to obtain even at a very well investigated site such as the one dealt with here. We thus suggest that non-uniqueness of process-based model structures is an important factor causing predictive uncertainty at many sites where preferential flow dominates system response. References Beven, K.J. and Binley, A.M., 1992. The future of distributed models: model calibration and uncertainty prediction, Hydrological Processes, 6, p.279-298.
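
The goodness-of-fit measure quoted above is straightforward to compute. This is the standard Nash-Sutcliffe efficiency; the time-shifting and normalisation used in the study are omitted here.

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; 0.0 means the model is no better than the
    observed mean; negative values mean it is worse."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - sse / ss_tot
```

A value above 0.9, as reported for the thirteen equifinal setups, means the residual error is less than a tenth of the natural variability of the observed discharge.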

  17. Personalized Nutrition-Genes, Diet, and Related Interactive Parameters as Predictors of Cancer in Multiethnic Colorectal Cancer Families.

    PubMed

    Shiao, S Pamela K; Grayson, James; Lie, Amanda; Yu, Chong Ho

    2018-06-20

    To personalize nutrition, the purpose of this study was to examine five key genes in the folate metabolism pathway, together with dietary parameters and related interactive parameters, as predictors of colorectal cancer (CRC) by measuring the healthy eating index (HEI) in multiethnic families. The five genes were methylenetetrahydrofolate reductase (MTHFR) 677 and 1298, methionine synthase (MTR) 2756, methionine synthase reductase (MTRR) 66, and dihydrofolate reductase (DHFR) 19bp, and they were used to compute a total gene mutation score. We included 53 families: 53 CRC patients and 53 paired family friend members of diverse population groups in Southern California. We measured multidimensional data using the ensemble bootstrap forest method to identify variables of importance within the genetic, demographic, and dietary domains to achieve dimension reduction. We then constructed predictive generalized regression (GR) models with a supervised machine learning validation procedure, with the target variable (cancer status) specified, to validate the results and allow enhanced prediction and reproducibility. The results showed that the CRC group had increased total gene mutation scores compared to the family members (p < 0.05). Using the Akaike information criterion and leave-one-out cross-validation GR methods, the HEI was interactive with thiamine (vitamin B1), which is a new finding in the literature. The natural food sources of thiamine include whole grains, legumes, and some meats and fish, which HEI scoring includes as part of healthy portions (versus limiting portions of salt, saturated fat and empty calories). Additional predictors included age and gender, as well as the interaction of MTHFR 677 with overweight status (measured by body mass index) in predicting CRC, with the cancer group having more men and overweight cases.
The HEI score was significant when split at the median score of 77 into greater or lesser scores, confirmed through the machine learning recursive tree method and predictive modeling, although an HEI score greater than 80 is the US national standard for a good diet. The HEI and healthy eating are modifiable factors for healthy living in relation to dietary parameters and cancer prevention, and they can be used for personalized nutrition in the precision-based healthcare era.
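
A total gene mutation score of the kind used above can be sketched as an unweighted variant-allele count across the five polymorphisms. The 0/1/2 genotype coding is a common convention and an assumption here, not necessarily the paper's exact scoring.

```python
# The five folate-pathway polymorphisms examined in the study
GENES = ("MTHFR677", "MTHFR1298", "MTR2756", "MTRR66", "DHFR19bp")

def total_mutation_score(genotypes):
    """Sum of variant allele counts across the five polymorphisms.

    genotypes: dict mapping gene name to variant allele count, coded
    0 (wild type), 1 (heterozygous) or 2 (homozygous variant);
    missing genes count as 0."""
    return sum(genotypes.get(g, 0) for g in GENES)
```

Under this coding a subject homozygous for MTHFR 677 and heterozygous for MTRR 66 scores 3; the study's group-level comparison is then a test of whether such scores differ between CRC patients and paired family members.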

  18. Mapping of Biophysical Parameters of Rice Agriculture System from Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Moharana, Shreedevi; Duta, Subashisa

    2017-04-01

    Chlorophyll, nitrogen and leaf water content are the most essential parameters for paddy crop growth. Ground hyperspectral observations were collected at canopy level during the critical growth period of rice using a hand-held spectroradiometer. Chemical analysis was carried out to quantify the total chlorophyll, nitrogen and leaf water content. By exploiting the in-situ hyperspectral measurements, regression models were established between each of the crop growth parameters and the spectral indices specifically designed for chlorophyll, nitrogen and water stress. Narrow-band vegetation index models were developed for mapping these parameters from Hyperion imagery in an agriculture system. It was inferred that the modified simple ratio (SR) and leaf nitrogen concentration (LNC) predictive index models, which followed a linear and a nonlinear relationship respectively, produced satisfactory results in predicting rice nitrogen content from hyperspectral imagery. The presently developed model was compared with models proposed by other researchers. It was ascertained that nitrogen content varied widely, from 1-4%, using the present modified index models, but only from 2-3% using the well-known model of Tian et al. (2011). The present modified LNC index model thus performed better than the established Tian et al. (2011) model as far as the nitrogen content estimated from Hyperion imagery was concerned. Moreover, within the chlorophyll range observed for the rice genotypes cultivated in the studied rice agriculture system, the index models (LNC, OASVI, Gitelson, mSR and MTCI) accomplished satisfactory results for the spatial distribution of rice chlorophyll content from Hyperion imagery. The spatial distribution of total chlorophyll content varied widely, from 1.77-5.81 mg/g (LNC), 3.0-13 mg/g (OASVI) and 2.90-5.40 mg/g (MTCI).
Following a similar guideline, it was found that the normalized difference water index (NDWI) and normalized difference infrared index (NDII) predictive models demonstrated the spatial variability of leaf water content from 40% to 90% in the same rice agriculture system, in good agreement with the observed in-situ leaf water measurements. The spatial information of these parameters will be useful for crop nutrient management and yield forecasting, and will serve as input to various crop-forecasting models for developing a precision rice agriculture system. Key words: Rice agriculture system, nitrogen, chlorophyll, leaf water content, vegetation index
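
The water-sensitive indices named above share one algebraic template. The band centres noted in the comments are the commonly used ones, and the linear calibration from index to leaf water content is a hypothetical placeholder, not the regression fitted in the study.

```python
def normalized_difference(band_1, band_2):
    """Generic normalized difference of two reflectances; result in [-1, 1]."""
    return (band_1 - band_2) / (band_1 + band_2)

def ndwi(nir, swir):
    """NDWI (Gao formulation): NIR (~860 nm) vs. SWIR (~1240 nm)."""
    return normalized_difference(nir, swir)

def ndii(nir, swir):
    """NDII: NIR (~850 nm) vs. SWIR (~1650 nm)."""
    return normalized_difference(nir, swir)

def leaf_water_percent(index, slope=120.0, intercept=30.0):
    """Hypothetical linear calibration from index to leaf water content (%).
    The slope and intercept are illustrative, not fitted values."""
    return slope * index + intercept
```

In a mapping workflow, the index is computed per Hyperion pixel and the calibrated regression converts it to a leaf water map, which is how the 40% to 90% spatial variability above is produced.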

  19. Measured Copper Toxicity to Cnesterodon decemmaculatus (Pisces: Poeciliidae) and Predicted by Biotic Ligand Model in Pilcomayo River Water: A Step for a Cross-Fish-Species Extrapolation

    PubMed Central

    Casares, María Victoria; de Cabo, Laura I.; Seoane, Rafael S.; Natale, Oscar E.; Castro Ríos, Milagros; Weigandt, Cristian; de Iorio, Alicia F.

    2012-01-01

    In order to determine copper toxicity (LC50) to a local species (Cnesterodon decemmaculatus) in South American Pilcomayo River water and to evaluate a cross-fish-species extrapolation of the Biotic Ligand Model, a 96 h acute copper toxicity test was performed. The dissolved copper concentrations tested were 0.05, 0.19, 0.39, 0.61, 0.73, 1.01, and 1.42 mg Cu L−1. The calculated 96 h Cu LC50 was 0.655 mg L−1 (0.488-0.823). The 96 h Cu LC50 predicted by the BLM for Pimephales promelas was 0.722 mg L−1. Analysis of the inter-seasonal variation of the main water quality parameters indicates that a higher protective effect of calcium, magnesium, sodium, sulphate, and chloride is expected during the dry season. The very high load of total suspended solids in this river might be a key factor in determining copper distribution between the solid and solution phases. A cross-fish-species extrapolation of the copper BLM is valid within the water quality parameters and experimental conditions of this toxicity test. PMID:22523491
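
An LC50 of the kind reported above can be estimated from dose-mortality data by interpolation on a log-concentration scale. Only the tested concentrations below come from the record; the mortality fractions are invented for illustration, and real analyses would use probit regression or the trimmed Spearman-Karber method rather than this simple interpolation.

```python
import math

def lc50_from_doses(concs, mortality):
    """Estimate LC50 by linear interpolation of mortality fraction against
    log10 concentration, between the two doses that bracket 50% mortality.

    concs: tested concentrations in ascending order.
    mortality: observed mortality fractions (0..1) at those concentrations.
    """
    pairs = list(zip(concs, mortality))
    for (c1, m1), (c2, m2) in zip(pairs, pairs[1:]):
        if m1 <= 0.5 <= m2:
            x1, x2 = math.log10(c1), math.log10(c2)
            x = x1 + (0.5 - m1) / (m2 - m1) * (x2 - x1)
            return 10 ** x
    raise ValueError("50% mortality not bracketed by the tested concentrations")
```

With hypothetical mortalities rising through the tested copper concentrations, the estimate falls between the two bracketing doses, mirroring how the reported 0.655 mg L−1 sits between 0.61 and 0.73.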

  20. Transition Experiments on Large Bluntness Cones with Distributed Roughness in Hypersonic Flight

    NASA Technical Reports Server (NTRS)

    Reda, Daniel C.; Wilder, Michael C.; Prabhu, Dinesh K.

    2012-01-01

    Large bluntness cones with smooth nosetips and roughened frusta were flown in the NASA Ames hypersonic ballistic range at a Mach number of 10 through quiescent air environments. Global surface intensity (temperature) distributions were optically measured and analyzed to determine transition onset and progression over the roughened surface. Real-gas Navier-Stokes calculations of model flowfields, including laminar boundary layer development in these flowfields, were conducted to predict values of key dimensionless parameters used to correlate transition on such configurations in hypersonic flow. For these large bluntness cases, predicted axial distributions of the roughness Reynolds number showed (for each specified freestream pressure) that this parameter was a maximum at the physical beginning of the roughened zone and decreased with increasing run length along the roughened surface. Roughness-induced transition occurred downstream of this maximum roughness Reynolds number location, and progressed upstream towards the beginning of the roughened zone as freestream pressure was systematically increased. Roughness elements encountered at the upstream edge of the roughened frusta thus acted like a finite-extent trip array, consistent with published results concerning the tripping effectiveness of roughness bands placed on otherwise smooth surfaces.

  1. Frequency of Tropical Ocean Deep Convection and Global Warming

    NASA Astrophysics Data System (ADS)

    Aumann, H. H.; Behrangi, A.; Ruzmaikin, A.

    2017-12-01

    The average of 36 CMIP5 models predicts about 3 K of warming and a 4.7% increase in precipitation for the tropical oceans with a doubling of CO2 by the end of this century. For this scenario we evaluate the increase in the frequency of Deep Convective Clouds (DCC) in the tropical oceans. We select only DCC which reach or penetrate the tropopause in the 15 km AIRS footprint. The evaluation is based on Probability Distribution Functions (PDFs) of the current temperatures of the tropical oceans, those predicted by the mean of the CMIP5 models, and the PDF of the DCC process. The PDF of the DCC process is derived from the Atmospheric Infrared Sounder (AIRS) between the years 2003 and 2016. During this time, variability due to ENSO provided a 1 K peak-to-peak change in the mean tropical SST. The key parameter is the SST associated with the onset of the DCC process. This parameter shifts only 0.5 K for each 1 K of warming of the oceans. As a result, the frequency of DCC is expected to increase by about 50% above the current frequency by the end of this century.
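
The PDF-shift argument can be reproduced in a few lines. The Gaussian SST distribution and every number below are illustrative assumptions, so the resulting ratio is not the study's 50% figure; the point is only that when the onset threshold shifts half as fast as the SST PDF, the exceedance fraction, and hence the DCC frequency, grows.

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of Normal(mu, sigma)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# SST ~ Normal(mu, sigma); DCC occurs above an onset SST. With warming dT the
# SST PDF shifts by dT while the onset threshold shifts by only 0.5 * dT.
mu, sigma, onset, dT = 300.0, 1.5, 302.0, 3.0  # K, assumed values

f_now = 1.0 - norm_cdf(onset, mu, sigma)
f_future = 1.0 - norm_cdf(onset + 0.5 * dT, mu + dT, sigma)
ratio = f_future / f_now  # relative change in DCC frequency
```

Because the effective threshold moves into the warm tail only half as fast as the distribution itself, the exceedance probability rises, which is the mechanism behind the predicted frequency increase.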

  2. Surface Roughness Optimization of Polyamide-6/Nanoclay Nanocomposites Using Artificial Neural Network: Genetic Algorithm Approach

    PubMed Central

    Moghri, Mehdi; Omidi, Mostafa; Farahnakian, Masoud

    2014-01-01

During the past decade, polymer nanocomposites have attracted considerable investment in research and development worldwide. One of the key factors that affects the quality of polymer nanocomposite products in machining is surface roughness. To obtain high-quality products and reduce machining costs, it is very important to determine the optimal machining conditions so as to achieve enhanced machining performance. The objective of this paper is to develop a predictive model, using a combined design-of-experiments and artificial-intelligence approach, for optimization of surface roughness in milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of the milling parameters (spindle speed and feed rate) and nanoclay (NC) content using an artificial neural network (ANN). As the present study deals with a relatively small number of data points obtained from a full factorial design, application of a genetic algorithm (GA) for ANN training is thought to be an appropriate approach for developing an accurate and robust ANN model. In the optimization phase, a GA is used in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters that minimize surface roughness for each PA-6 nanocomposite. PMID:24578636
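The optimization phase described above, a GA searching a fitted surface-roughness function over milling parameters, can be sketched as follows. The `roughness` surrogate here is a hypothetical smooth function standing in for the trained ANN; its functional form and the parameter bounds are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

def roughness(speed, feed):
    """Hypothetical surrogate standing in for the trained ANN:
    roughness (um) as a function of spindle speed (rpm) and feed rate (mm/min)."""
    return 1.0 + 0.5 * ((speed - 3000.0) / 1000.0) ** 2 + 2.0 * ((feed - 50.0) / 100.0) ** 2

BOUNDS = [(1000.0, 5000.0), (10.0, 200.0)]  # assumed speed and feed ranges

def evolve(pop_size=40, generations=60):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: roughness(*ind))   # rank by fitness
        elite = pop[: pop_size // 2]                # keep best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # arithmetic crossover
            j = random.randrange(len(child))                # mutate one gene
            lo, hi = BOUNDS[j]
            child[j] = min(hi, max(lo, child[j] + random.gauss(0.0, 0.05 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: roughness(*ind))

best_speed, best_feed = evolve()
```

With the surrogate's minimum at (3000 rpm, 50 mm/min), the GA converges close to that point; in the paper the same search would run against the GA-trained ANN instead.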

  3. Predictive analysis of photodynamic therapy applied to esophagus cancer

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, F.; del Campo-Gutiérrez, M.; Ortega-Quijano, N.; Arce-Diego, J. L.

    2008-04-01

The use of optical techniques in medicine has in many cases revolutionized medical practice, providing practitioners with new tools, or improving existing ones, in the fight against disease. Applications of this technology fall mainly into two branches: characterization and treatment of biological tissues. Photodynamic Therapy (PDT) provides a means of malignant tissue destruction through the inoculation of a photosensitizer followed by irradiation with an optical source. The key factor in the procedure is localization of the damage to avoid collateral harmful effects. The volume of tissue destroyed depends on the type of photosensitizer inoculated, both on its reactive characteristics and on its distribution inside the tissue, and also on the specific properties of the optical source, that is, the optical power, wavelength, and exposure time. In this work, a model for PDT is proposed based on the one-dimensional diffusion equation (extensible to 3D) to estimate the optical distribution in tissue, and on photosensitizer parameters that take the photobleaching effect into account. Application to esophageal cancer allows selection of the right optical source parameters, such as irradiance, wavelength, or exposure time, in order to predict the area of tissue destruction.
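In the one-dimensional diffusion approximation, fluence decays roughly exponentially with depth at the effective attenuation rate. A minimal sketch of that relation follows; the optical coefficients and the damage threshold are assumed, generic soft-tissue values, not the ones used for the esophagus model in the paper.

```python
import math

# Illustrative optical properties for soft tissue near 630 nm (assumed values)
mu_a = 0.3    # absorption coefficient, 1/cm
mu_s = 10.0   # reduced scattering coefficient, 1/cm
mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s))  # effective attenuation, 1/cm

def fluence(z, phi0=100.0):
    """Fluence rate (mW/cm^2) at depth z (cm), 1-D diffusion approximation."""
    return phi0 * math.exp(-mu_eff * z)

def necrosis_depth(threshold, phi0=100.0):
    """Depth at which fluence falls to a (hypothetical) damage threshold."""
    return math.log(phi0 / threshold) / mu_eff

depth = necrosis_depth(threshold=1.0)  # cm
```

In a full PDT model the threshold would also depend on photosensitizer concentration and photobleaching, which this sketch omits.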

  4. Predicting pedestrian flow: a methodology and a proof of concept based on real-life data.

    PubMed

    Davidich, Maria; Köster, Gerta

    2013-01-01

Building a reliable predictive model of pedestrian motion is very challenging: ideally, such models should be based on observations made both in controlled experiments and in real-world environments. De facto, models are rarely based on real-world observations due to the lack of available data; instead, they are largely based on intuition and, at best, literature values and laboratory experiments. Such an approach is insufficient for reliable simulations of complex real-life scenarios: for instance, our analysis of pedestrian motion under natural conditions at a major German railway station reveals that the values for free-flow velocities and the flow-density relationship differ significantly from widely used literature values. It is thus necessary to calibrate and validate the model against relevant real-life data to make it capable of reproducing and predicting real-life scenarios. In this work we aim at constructing such a realistic pedestrian stream simulation. Based on the analysis of real-life data, we present a methodology that identifies the key parameters and interdependencies that enable us to properly calibrate the model. The success of the approach is demonstrated for a benchmark model, a cellular automaton. We show that the proposed approach significantly improves the reliability of the simulation and hence the potential prediction accuracy. The simulation is validated by comparing the local density evolution of the measured data to that of the simulated data. We find that for our model the most sensitive parameters are the source-target distribution of the pedestrian trajectories, the schedule of pedestrian appearances in the scenario, and the mean free-flow velocity. Our results emphasize the need for real-life data extraction and analysis to enable predictive simulations.
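The benchmark model class, a cellular automaton, can be illustrated with a deliberately tiny toy: pedestrians on a grid greedily step one cell toward a target, with at most one pedestrian per cell. This is not the calibrated model of the paper; the sensitive parameters it identifies (source-target distribution, appearance schedule, free-flow velocity) would enter through the spawn schedule and step length, which are fixed here.

```python
# Toy cellular automaton: one greedy step per pedestrian per time step,
# exclusion principle enforced via an occupancy set.
def step(pedestrians, target, occupied):
    moved = []
    for (x, y) in pedestrians:
        # Candidate cells ranked by Manhattan distance to the target
        candidates = sorted(
            [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1), (0, 0))],
            key=lambda c: abs(c[0] - target[0]) + abs(c[1] - target[1]),
        )
        for c in candidates:
            if c == (x, y) or c not in occupied:   # stay put or take a free cell
                occupied.discard((x, y))
                occupied.add(c)
                moved.append(c)
                break
    return moved

peds = [(0, 0), (0, 1)]
occ = set(peds)
for _ in range(10):
    peds = step(peds, target=(5, 1), occupied=occ)
```

After ten steps both pedestrians have reached the target cell or queue adjacent to it, never overlapping, which is the exclusion behavior a real calibration would then tune against measured densities.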

  5. Local Burn-Up Effects in the NBSR Fuel Element

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown N. R.; Hanson A.; Diamond, D.

    2013-01-31

This study addresses the over-prediction of local power when the burn-up distribution in each half-element of the NBSR is assumed to be uniform. A single-element model was utilized to quantify the impact of axial and plate-wise burn-up on the power distribution within the NBSR fuel elements for both high-enriched uranium (HEU) and low-enriched uranium (LEU) fuel. To validate this approach, key parameters in the single-element model were compared to parameters from an equilibrium core model, including the neutron energy spectrum, power distribution, and integral U-235 vector. The power distribution changes significantly when local burn-up effects are incorporated and has lower power peaking relative to the uniform burn-up case. In the uniform burn-up case, the axial relative power peaking is over-predicted by as much as 59% in the HEU single-element and 46% in the LEU single-element, and the plate-wise power peaking is over-predicted by as much as 23% in the HEU single-element and 18% in the LEU single-element. The degree of over-prediction increases as a function of burn-up cycle, with the greatest over-prediction at the end of Cycle 8. The thermal flux peak is always in the mid-plane gap; this causes the local cumulative burn-up near the mid-plane gap to be significantly higher than the fuel element average. A uniform burn-up distribution throughout a half-element also causes a bias in fuel element reactivity worth, due primarily to the neutronic importance of the fissile inventory in the mid-plane gap region.

  6. Analysis of mesenchymal stem cell differentiation in vitro using classification association rule mining.

    PubMed

    Wang, Weiqi; Wang, Yanbo Justin; Bañares-Alcántara, René; Coenen, Frans; Cui, Zhanfeng

    2009-12-01

In this paper, data mining is used to analyze data on the differentiation of mammalian Mesenchymal Stem Cells (MSCs), aiming at discovering both known and hidden rules governing MSC differentiation. To this effect, a web-based public interactive database comprising the key parameters that influence the fate and destiny of mammalian MSCs has been constructed and analyzed using Classification Association Rule Mining (CARM) as a data-mining technique. The results show that the proposed approach is technically feasible and performs well with respect to the accuracy of (classification) prediction. Key rules mined from the constructed MSC database are consistent with experimental observations, indicating the validity of the method developed and representing a first step in the application of data mining to the study of MSCs.
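The support/confidence bookkeeping underlying classification association rule mining can be sketched on a handful of hypothetical culture records; the attribute names, values, and outcomes below are invented for illustration and are not from the actual MSC database.

```python
# Hypothetical MSC culture records: condition attributes plus a class label.
records = [
    {"growth_factor": "TGF-b", "substrate": "stiff", "outcome": "osteogenic"},
    {"growth_factor": "TGF-b", "substrate": "soft",  "outcome": "chondrogenic"},
    {"growth_factor": "none",  "substrate": "stiff", "outcome": "osteogenic"},
    {"growth_factor": "none",  "substrate": "soft",  "outcome": "adipogenic"},
]

def rule_stats(antecedent, consequent, data):
    """Support and confidence of the rule: antecedent (attribute dict) -> outcome."""
    matches = [r for r in data if all(r[k] == v for k, v in antecedent.items())]
    hits = [r for r in matches if r["outcome"] == consequent]
    support = len(hits) / len(data)
    confidence = len(hits) / len(matches) if matches else 0.0
    return support, confidence

s, c = rule_stats({"substrate": "stiff"}, "osteogenic", records)
```

A CARM miner enumerates candidate antecedents, keeps rules above support/confidence thresholds, and uses the surviving rules as the classifier.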

  7. Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.

    PubMed

    Glöckner, Andreas; Pachur, Thorsten

    2012-04-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice. Copyright © 2011 Elsevier B.V. All rights reserved.
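The CPT machinery being fitted above can be sketched for a simple gain-domain gamble using the Tversky-Kahneman (1992) functional forms; the parameter values below are their published median estimates, not the per-individual values fitted in this study.

```python
def value(x, alpha=0.88):
    """Power value function for gains (Tversky & Kahneman, 1992)."""
    return x ** alpha

def weight(p, gamma=0.61):
    """Inverse-S probability weighting function for gains."""
    return p ** gamma / ((p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma))

def cpt_value(x, p):
    """CPT valuation of the two-outcome gamble (x with prob. p, else 0)."""
    return weight(p) * value(x)

# With these parameters, a sure 30 is valued above a 45-at-0.8 gamble,
# even though the gamble has the higher expected value (36 > 30).
v_risky = cpt_value(45.0, 0.8)
v_sure = value(30.0)
```

Fitting CPT to an individual amounts to adjusting alpha and gamma (and, with losses, a loss-aversion parameter) so that such valuations reproduce that person's observed choices.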

  8. Harnessing Orbital Debris to Sense the Space Environment

    NASA Astrophysics Data System (ADS)

    Mutschler, S.; Axelrad, P.; Matsuo, T.

A key requirement for accurate space situational awareness (SSA) is knowledge of the non-conservative forces that act on space objects. These effects vary temporally and spatially, driven by the dynamical behavior of space weather. Existing SSA algorithms adjust space weather models based on observations of calibration satellites. However, lack of sufficient data and mismodeling of non-conservative forces cause inaccuracies in space object motion prediction. The uncontrolled nature of debris makes it particularly sensitive to variations in space weather. Our research takes advantage of this behavior by inverting observations of debris objects to infer the space environment parameters causing their motion. In addition, this research will produce more accurate predictions of the motion of debris objects. The hypothesis of this research is that it is possible to utilize a "cluster" of debris objects, i.e., objects within relatively close proximity of each other, to sense their local environment. We focus on deriving parameters of an atmospheric density model to more precisely predict the drag force on LEO objects. An Ensemble Kalman Filter (EnKF) is used for assimilation; the prior ensemble is transformed into the posterior ensemble during the measurement update in a manner that does not require inversion of large matrices. A prior ensemble is utilized to empirically determine the nonlinear relationship between measurements and density parameters. The filter estimates an extended state that includes the position and velocity of the debris object and the atmospheric density parameters. The density is parameterized as a grid of values, distributed by latitude and local sidereal time over a spherical shell encompassing Earth. This research focuses on LEO object motion, but it can also be extended to additional orbital regimes for observation and refinement of magnetic field and solar radiation models. An observability analysis of the proposed approach is presented in terms of the measurement cadence necessary to estimate the local space environment.
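The ensemble Kalman measurement update can be illustrated in a one-dimensional toy, where the gain comes entirely from ensemble statistics. This is a stochastic (perturbed-observation) EnKF on a single density-like parameter with invented numbers; the paper's transform formulation achieves the same update for large states without large matrix inversions, which in this scalar sketch reduces to ordinary arithmetic.

```python
import random

random.seed(1)

ensemble = [random.gauss(1.0, 0.2) for _ in range(200)]   # prior density scale
obs, obs_std = 1.3, 0.05                                  # synthetic observation

def h(x):
    return x  # identity observation operator for the sketch

n = len(ensemble)
mean = sum(ensemble) / n
hx = [h(x) for x in ensemble]
hmean = sum(hx) / n
# Sample covariances drive the Kalman gain -- no model Jacobian needed.
cov_xh = sum((x - mean) * (y - hmean) for x, y in zip(ensemble, hx)) / (n - 1)
var_h = sum((y - hmean) ** 2 for y in hx) / (n - 1)
gain = cov_xh / (var_h + obs_std ** 2)

# Perturbed-observation update of every member
posterior = [x + gain * (obs + random.gauss(0.0, obs_std) - h(x)) for x in ensemble]
post_mean = sum(posterior) / n
```

The posterior mean moves from the prior toward the observation in proportion to the gain, exactly the behavior the filter exploits when debris observations pull the density grid toward reality.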

  9. Regression-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert

Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate compared to running a high-fidelity numerical simulation: a reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir, while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well skin factor, bottom-hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are obtained by minimizing a nonlinear least-squares misfit function, based on the difference between the numerical simulation data and the reduced-order model, using the Levenberg-Marquardt algorithm. ROM-1 is constructed from polynomials up to fourth order.
ROM-1 is able to accurately reproduce the power output of the numerical simulations for low values of permeability and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and accurately describes the field data. At higher permeabilities, ROM-2 reproduces the numerical results better than ROM-1; however, there is a considerable deviation from the numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3 provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found that ROM-3 is a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. In terms of computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. This makes the proposed regression-based ROMs attractive for real-time EGS applications, because they are fast and provide reasonably good predictions of thermal power output.
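A ROM-1-style fit, a low-order polynomial in time fitted to power-output samples, can be sketched as follows. The drawdown data here are synthetic, and since a pure polynomial is linear in its coefficients, ordinary least squares via the normal equations suffices for the sketch (the paper uses Levenberg-Marquardt, which is needed for its nonlinear ROMs).

```python
import math

times = [i / 10.0 for i in range(21)]                 # 0..2, arbitrary time units
power = [10.0 * math.exp(-0.5 * t) for t in times]    # synthetic drawdown curve

ORDER = 4  # ROM-1 uses polynomials up to fourth order

def design_row(t):
    return [t ** k for k in range(ORDER + 1)]

# Normal equations A^T A c = A^T y, solved by Gaussian elimination with pivoting
n = ORDER + 1
ata = [[sum(design_row(t)[i] * design_row(t)[j] for t in times) for j in range(n)]
       for i in range(n)]
aty = [sum(design_row(t)[i] * y for t, y in zip(times, power)) for i in range(n)]

for col in range(n):                                   # forward elimination
    piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
    ata[col], ata[piv] = ata[piv], ata[col]
    aty[col], aty[piv] = aty[piv], aty[col]
    for r in range(col + 1, n):
        f = ata[r][col] / ata[col][col]
        ata[r] = [a - f * b for a, b in zip(ata[r], ata[col])]
        aty[r] -= f * aty[col]
coeffs = [0.0] * n
for r in range(n - 1, -1, -1):                         # back substitution
    coeffs[r] = (aty[r] - sum(ata[r][k] * coeffs[k] for k in range(r + 1, n))) / ata[r][r]

def rom(t):
    return sum(c * t ** k for k, c in enumerate(coeffs))

rmse = math.sqrt(sum((rom(t) - y) ** 2 for t, y in zip(times, power)) / len(times))
```

The quartic tracks a smooth exponential drawdown very closely over a short window, which mirrors why ROM-1 works at low permeabilities but needs the exponential and Heaviside terms of ROM-2/ROM-3 to capture sharper field-scale features.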

  10. Regression-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    DOE PAGES

    Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert; ...

    2017-07-10

Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate compared to running a high-fidelity numerical simulation: a reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir, while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well skin factor, bottom-hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are obtained by minimizing a nonlinear least-squares misfit function, based on the difference between the numerical simulation data and the reduced-order model, using the Levenberg-Marquardt algorithm. ROM-1 is constructed from polynomials up to fourth order.
ROM-1 is able to accurately reproduce the power output of the numerical simulations for low values of permeability and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and accurately describes the field data. At higher permeabilities, ROM-2 reproduces the numerical results better than ROM-1; however, there is a considerable deviation from the numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3 provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found that ROM-3 is a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. In terms of computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. This makes the proposed regression-based ROMs attractive for real-time EGS applications, because they are fast and provide reasonably good predictions of thermal power output.

  11. Modeling the evolution of infrared galaxies: a parametric backward evolution model

    NASA Astrophysics Data System (ADS)

    Béthermin, M.; Dole, H.; Lagache, G.; Le Borgne, D.; Penin, A.

    2011-05-01

Aims: We attempt to model infrared galaxy evolution in as simple a way as possible and to reproduce statistical properties such as the number counts between 15 μm and 1.1 mm, the luminosity functions, and the redshift distributions. We then use the fitted model to interpret observations from Spitzer, AKARI, BLAST, LABOCA, AzTEC, SPT, and Herschel, and make predictions for Planck and future experiments such as CCAT or SPICA. Methods: This model uses an evolution in density and luminosity of the luminosity function, parametrized by broken power laws with two breaks at redshifts ~0.9 and ~2, and contains the two populations of the Lagache model: normal and starburst galaxies. We also take into account the effect of the strong lensing of high-redshift sub-millimeter galaxies; this effect is significant in the sub-mm and mm range near 50 mJy. The model has 13 free parameters and eight additional calibration parameters. We fit the parameters to the IRAS, Spitzer, Herschel, and AzTEC measurements with a Monte Carlo Markov chain. Results: The model, adjusted to deep counts at key wavelengths, reproduces the counts from mid-infrared to millimeter wavelengths, as well as the mid-infrared luminosity functions. We discuss the contributions of the different populations to both the cosmic infrared background (CIB) and the infrared luminosity density. We also estimate the effect of lensing on the number counts, and discuss the discovery by the South Pole Telescope (SPT) of a very bright population lying at high redshift. We predict the contribution of the lensed sources to the Planck number counts, the confusion level for future missions using a P(D) formalism, and the opacity of the Universe to TeV photons caused by the CIB. Material related to the model (software, tables, and predictions) is available online.
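The density-evolution ingredient, a normalisation evolving as broken power laws in (1+z) with breaks near z = 0.9 and z = 2, can be sketched as follows. The slopes here are illustrative placeholders, not the fitted values of the paper; the prefactors are chosen to keep the function continuous across the breaks.

```python
Z_BREAK1, Z_BREAK2 = 0.9, 2.0
SLOPES = (3.0, 1.0, -1.0)   # hypothetical exponents below/between/above the breaks

def density_evolution(z, phi0=1.0):
    """Broken power-law evolution of the LF normalisation, continuous at the breaks."""
    a = (1.0 + Z_BREAK1) ** (SLOPES[0] - SLOPES[1])          # matches at z = 0.9
    b = a * (1.0 + Z_BREAK2) ** (SLOPES[1] - SLOPES[2])      # matches at z = 2
    if z < Z_BREAK1:
        return phi0 * (1.0 + z) ** SLOPES[0]
    if z < Z_BREAK2:
        return phi0 * a * (1.0 + z) ** SLOPES[1]
    return phi0 * b * (1.0 + z) ** SLOPES[2]
```

In the full model an analogous broken power law drives the luminosity evolution, and the MCMC fit constrains the slopes and break redshifts jointly against the observed counts.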

  12. Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model

    NASA Astrophysics Data System (ADS)

    Prakash, Shashi; Kumar, Nitish; Kumar, Subrata

    2016-09-01

CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (polymethyl methacrylate). PMMA vaporizes directly when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and an acceptable surface finish on the microchannel walls. Overall, the CO2 laser microchanneling process is cost-effective and easy to implement. When fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. A few analytical models are available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. A number of variants of transparent PMMA are available on the market with different values of thermophysical properties; therefore, to apply such analytical models, the values of these properties must be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the thermophysical properties of PMMA. The unavailability of exact values of these properties restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty for different powers and scanning speeds has been predicted, and the relative impact of each thermophysical property has been determined using sensitivity analysis.
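The Monte Carlo propagation step can be sketched on a deliberately simplified energy-balance depth model, d = P / (ρ(cΔT + Lv)vw). Both this formula and all distribution parameters below are assumptions for illustration; they stand in for the analytical models and PMMA property scatter discussed in the paper.

```python
import math
import random

random.seed(42)

P, v, w = 30.0, 0.2, 2.0e-4   # laser power (W), scan speed (m/s), channel width (m)

def sample_depth():
    """One Monte Carlo draw: PMMA properties sampled to reflect grade-to-grade scatter."""
    rho = random.gauss(1180.0, 30.0)    # density, kg/m^3 (assumed spread)
    c = random.gauss(1470.0, 80.0)      # specific heat, J/(kg K)
    Lv = random.gauss(1.0e6, 1.0e5)     # heat of vaporisation, J/kg (assumed)
    dT = random.gauss(340.0, 20.0)      # heating interval to vaporisation, K
    return P / (rho * (c * dT + Lv) * v * w)

depths = [sample_depth() for _ in range(20000)]
mean_d = sum(depths) / len(depths)
std_d = math.sqrt(sum((d - mean_d) ** 2 for d in depths) / (len(depths) - 1))
rel_uncertainty = std_d / mean_d   # fractional uncertainty in predicted depth
```

Repeating the same sampling with one property frozen at its mean while the rest vary gives the sensitivity ranking reported in the paper.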

  13. A ‘frozen volume’ transition model and working mechanism for the shape memory effect in amorphous polymers

    NASA Astrophysics Data System (ADS)

    Lu, Haibao; Wang, Xiaodong; Yao, Yongtao; Qing Fu, Yong

    2018-06-01

Phenomenological models based on frozen volume parameters can predict the shape recovery behavior of shape memory polymers (SMPs) well, but the physical meaning of using frozen volume parameters to describe thermomechanical properties has not been well established. In this study, the fundamental working mechanisms of the shape memory effect (SME) in amorphous SMPs, whose temperature-dependent viscoelastic behavior follows the Eyring equation, have been established by considering both the internal stress and the frozen volume that results from it. The stress-strain constitutive relation was initially modeled to quantitatively describe the effects of internal stresses at the macromolecular scale based on the transient network theory. A phenomenological ‘frozen volume’ model was then established to characterize the macromolecular structure and SME of amorphous SMPs based on a two-site stress-relaxation model. The effects of internal stress, frozen volume, and strain rate on the shape memory behavior and thermomechanical properties of the SMP were investigated. Finally, the simulation results were compared with experimental results reported in the literature, and good agreement between theory and experiment was achieved. The novelty and key differences of the newly proposed model with respect to previous reports are that (1) the ‘frozen volume’ in this study is caused by the internal stress and governed by the two-site model theory, and thus has a clear physical meaning; and (2) the model can be applied to characterize and predict both the thermal and thermomechanical behaviors of SMPs based on the constitutive relationship with internal stress parameters. It is expected to provide a powerful tool for investigating the thermomechanical behavior of SMPs, for which both the macromolecular structure characteristics and the SME can be predicted using this ‘frozen volume’ model.

  14. Predicting key malaria transmission factors, biting and entomological inoculation rates, using modelled soil moisture in Kenya.

    PubMed

    Patz, J A; Strzepek, K; Lele, S; Hedden, M; Greene, S; Noden, B; Hay, S I; Kalkstein, L; Beier, J C

    1998-10-01

While malaria transmission varies seasonally, large inter-annual heterogeneity of malaria incidence occurs. Variability in entomological parameters, biting rates, and entomological inoculation rates (EIR) has been strongly associated with attack rates in children. The goal of this study was to assess the impact of weather on weekly biting rates and EIR in the endemic area of Kisian, Kenya. Entomological data collected by the U.S. Army from March 1986 through June 1988 at Kisian, Kenya were analysed with concurrent weather data from the nearby Kisumu airport. A soil moisture model of surface-water availability was used to combine multiple weather parameters with landcover and soil features to improve disease prediction. Modelling soil moisture substantially improved prediction of biting rates compared to rainfall alone; soil moisture lagged two weeks explained up to 45% of An. gambiae biting variability, compared to 8% for raw precipitation. For An. funestus, soil moisture explained 32% of the variability, peaking after a 4-week lag. The interspecies difference in response to soil moisture was significant (P < 0.00001). A satellite normalized difference vegetation index (NDVI) of the study site yielded a similar correlation (r = 0.42 for An. gambiae). Modelled soil moisture accounted for up to 56% of the variability of An. gambiae EIR, peaking at a lag of six weeks. The relationship between temperature and An. gambiae biting rates was less robust: maximum temperature r2 = -0.20, and minimum temperature r2 = 0.12 after lagging one week. The benefits of hydrological modelling are compared to those of raw weather parameters and satellite NDVI. These findings can improve both current malaria risk assessments and those based on El Niño forecasts or global climate change model projections.
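The lagged-correlation analysis behind results like "soil moisture lagged two weeks explained up to 45% of biting variability" can be sketched on synthetic weekly series; the data below are invented, with the biting series built to track soil moisture at a two-week delay.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lagged_r(moisture, biting, lag):
    """Correlate biting in week t with soil moisture in week t - lag."""
    return pearson(moisture[: len(moisture) - lag], biting[lag:])

# Synthetic weekly data: biting follows moisture with a 2-week delay plus noise
moisture = [math.sin(t / 3.0) for t in range(60)]
biting = [moisture[max(t - 2, 0)] + 0.1 * math.cos(3.1 * t) for t in range(60)]

best_lag = max(range(6), key=lambda lag: lagged_r(moisture, biting, lag))
```

Squaring the best-lag correlation gives the "variance explained" figure of the kind reported for An. gambiae and An. funestus.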

  15. The water retention curve and relative permeability for gas production from hydrate-bearing sediments: pore-network model simulation

    NASA Astrophysics Data System (ADS)

    Mahabadi, Nariman; Dai, Sheng; Seol, Yongkoo; Sup Yun, Tae; Jang, Jaewon

    2016-08-01

The water retention curve and relative permeability are critical to predicting gas and water production from hydrate-bearing sediments. However, values for the key parameters that characterize gas and water flows during hydrate dissociation have not been identified, due to experimental challenges. This study utilizes the combined techniques of micro-focus X-ray computed tomography (CT) and pore-network model simulation to identify proper values for those key parameters, such as gas entry pressure, residual water saturation, and curve-fitting values. Hydrates with various saturations and morphologies are realized in a pore network extracted from micron-resolution CT images of sediments recovered from the hydrate deposit at the Mallik site, and the processes of gas invasion, hydrate dissociation, gas expansion, and gas and water permeability are then simulated. Results show that greater hydrate saturation in sediments leads to higher gas entry pressure, higher residual water saturation, and a steeper water retention curve. An increase in hydrate saturation decreases gas permeability but has marginal effects on water permeability in sediments with uniformly distributed hydrate. Hydrate morphology has a more significant impact than hydrate saturation on relative permeability. Sediments with heterogeneously distributed hydrate tend to result in lower residual water saturation and higher gas and water permeability. In this sense, the Brooks-Corey model, which uses two fitting parameters individually for gas and water permeability, properly captures the effect of hydrate saturation and morphology on gas and water flows in hydrate-bearing sediments.
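A two-parameter Brooks-Corey-type relative permeability model of the kind fitted here can be sketched as simple power laws in effective saturation; the residual saturation and exponents below are illustrative assumptions, not the fitted Mallik values.

```python
def effective_saturation(sw, swr):
    """Effective water saturation, clipped to [0, 1]."""
    return max(0.0, min(1.0, (sw - swr) / (1.0 - swr)))

def krw(sw, swr=0.25, nw=4.0):
    """Water relative permeability: power law in effective saturation."""
    return effective_saturation(sw, swr) ** nw

def krg(sw, swr=0.25, ng=2.0):
    """Gas relative permeability: power law in gas (1 - effective) saturation."""
    return (1.0 - effective_saturation(sw, swr)) ** ng

kw, kg = krw(0.6), krg(0.6)
```

Fitting nw and ng separately for the two phases, as the abstract describes, is what lets one set of curves absorb the differing effects of hydrate saturation and morphology on gas versus water flow.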

  16. Using a Functional Simulation of Crisis Management to Test the C2 Agility Model Parameters on Key Performance Variables

    DTIC Science & Technology

    2013-06-01

18th ICCRTS. This paper uses a functional simulation of crisis management to test the effects of C2 Agility Model parameters on key performance variables in command and control for crisis management. C2 Agility Model: agility can be conceptualized at a number of different levels, for instance at the team level.

  17. The significance of serum urea and renal function in patients with heart failure.

    PubMed

    Gotsman, Israel; Zwas, Donna; Planer, David; Admon, Dan; Lotan, Chaim; Keren, Andre

    2010-07-01

    Renal function and urea are frequently abnormal in patients with heart failure (HF) and are predictive of increased mortality. The relative importance of each parameter is less clear. We prospectively compared the predictive value of renal function and serum urea on clinical outcome in patients with HF. Patients hospitalized with definite clinical diagnosis of HF (n = 355) were followed for short-term (1 yr) and long-term (mean, 6.5 yr) survival and HF rehospitalization. Increasing tertiles of discharge estimated glomerular filtration rate (eGFR) were an independent predictor of increased long-term survival (hazard ratio [HR], 0.65; 95% confidence interval [CI], 0.47-0.91; p = 0.01) but not short-term survival. Admission and discharge serum urea and blood urea nitrogen (BUN)/creatinine ratio were predictors of reduced short- and long-term survival on multivariate Cox regression analysis. Increasing tertiles of discharge urea were a predictor of reduced 1-year survival (HR, 2.13; 95% CI, 1.21-3.73; p = 0.009) and long-term survival (HR, 1.93; 95% CI, 1.37-2.71; p < 0.0001). Multivariate analysis including discharge eGFR and serum urea demonstrated that only serum urea remained a significant predictor of long-term survival; however, eGFR and BUN/creatinine ratio were both independently predictive of survival. Urea was more discriminative than eGFR in predicting long-term survival by area under the receiver operating characteristic curve (0.803 vs. 0.787; p = 0.01). Increasing tertiles of discharge serum urea and BUN/creatinine were independent predictors of HF rehospitalization and combined death and HF rehospitalization. This study suggests that serum urea is a more powerful predictor of survival than eGFR in patients with HF. This may be due to urea's relation to key biological parameters including renal, hemodynamic, and neurohormonal parameters pertaining to the overall clinical status of the patient with chronic HF.

  18. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent two-part manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure, and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of a tuning exercise with a top-end global NWP model are presented.
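    The sample-weight-refit loop of steps (i) and (ii) can be sketched as a minimal importance-weighting scheme: draw an ensemble of parameter values from a Gaussian proposal, weight each member by a likelihood reflecting its forecast verification, and refit the proposal. This is a one-parameter toy, not the actual EPPES algorithm; the "verification score" and all numbers are invented.

```python
import math
import random

random.seed(0)

TRUE_VALUE = 2.5  # the (normally unknown) optimal closure parameter

def neg_log_likelihood(theta):
    # Toy verification score: forecast error grows as theta moves away
    # from the optimum. A real system would verify each member's
    # forecast against observations here.
    return (theta - TRUE_VALUE) ** 2

mu, sigma = 0.0, 2.0   # Gaussian proposal distribution for the parameter
for _ in range(30):    # one ensemble cycle per iteration
    ensemble = [random.gauss(mu, sigma) for _ in range(50)]
    weights = [math.exp(-neg_log_likelihood(t)) for t in ensemble]
    total = sum(weights)
    # Refit the proposal to the likelihood-weighted ensemble.
    mu = sum(w * t for w, t in zip(weights, ensemble)) / total
    var = sum(w * (t - mu) ** 2 for w, t in zip(weights, ensemble)) / total
    sigma = max(math.sqrt(var), 0.2)  # keep the proposal from collapsing
# mu should now sit near TRUE_VALUE
```

    The appeal noted in the abstract is visible in the structure: the ensemble members are the forecasts that would be run anyway, so the parameter update costs essentially nothing extra.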

  19. From local uncertainty to global predictions: Making predictions on fractal basins

    PubMed Central

    2018-01-01

    In nonlinear systems, long-term dynamics are governed by the attractors present in phase space. The presence of a chaotic saddle gives rise to basins of attraction with fractal boundaries, and sometimes even to Wada boundaries. These two phenomena make it extremely difficult to predict the future state of the system. However, we show here that it is possible to make statistical predictions even if we do not have any previous knowledge of the initial conditions or of the time series of the system before it reaches its final state. In this work, we develop a general method to make statistical predictions in systems with fractal basins. In particular, we have applied this new method to the Duffing oscillator for a choice of parameters where the system possesses the Wada property. We have computed the statistical properties of the Duffing oscillator for different phase space resolutions to obtain information about the global dynamics of the system. The key idea is that the fraction of initial conditions that evolve towards each attractor is scale free, which we illustrate numerically. We have also shown numerically that having partial information about the initial conditions of the system does not, in general, improve the predictions in the Wada regions. PMID:29668687
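    The bookkeeping behind "fraction of initial conditions that evolve towards each attractor" can be shown on a deliberately simple system. The overdamped double-well flow below has a smooth (non-fractal) basin boundary, unlike the Duffing/Wada case in the paper, so this sketch illustrates only how the basin fraction is computed and compared across grid resolutions.

```python
def attractor(x0, steps=60):
    # Euler integration of the overdamped double-well flow
    # dx/dt = x - x**3: every orbit settles on x = -1 or x = +1.
    x = x0
    for _ in range(steps):
        x += 0.1 * (x - x ** 3)
    return 1 if x > 0 else -1

def basin_fraction(n):
    # Fraction of a uniform grid of n initial conditions in [-2, 2]
    # that ends on the x = +1 attractor.
    hits = sum(attractor(-2 + 4 * i / (n - 1)) == 1 for i in range(n))
    return hits / n
```

    Evaluating `basin_fraction` at several resolutions (e.g. 101 vs. 1001 grid points) and seeing the fraction stabilize is the scale-free property the paper exploits; for a Wada system the same computation works even though no individual initial condition is predictable.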

  20. Prediction of frozen food properties during freezing using product composition.

    PubMed

    Boonsupthip, W; Heldman, D R

    2007-06-01

    Frozen water fraction (FWF), as a function of temperature, is an important parameter for use in the design of food freezing processes. An FWF-prediction model, based on concentrations and molecular weights of specific product components, has been developed. Published food composition data were used to determine the identity and composition of key components. The model proposed in this investigation was verified using published experimental FWF data and initial freezing temperature data, and by comparison to outputs from previously published models. It was found that the food components with significant influence on the freezing temperature depression of food products were low-molecular-weight water-soluble compounds with molality of 50 micromol per 100 g food or higher. Based on an analysis of 200 high-moisture food products, nearly 45% of the experimental initial freezing temperature data were within an absolute difference (AD) of +/- 0.15 degrees C and standard error (SE) of +/- 0.65 degrees C of the values predicted by the proposed model. The predicted relationship between temperature and FWF for all analyzed food products was in close agreement with experimental data (+/- 0.06 SE). The proposed model provided similar prediction capability for high- and intermediate-moisture food products. In addition, the proposed model provided statistically better prediction of initial freezing temperature and FWF than previously published models.
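    The link from composition to initial freezing temperature can be sketched with the ideal dilute-solution freezing-point depression law, dT = Kf * m, where m is the total molality of low-molecular-weight solutes per kilogram of water. This is a textbook approximation, not the paper's model: the composition is invented, and dissociation of salts is ignored.

```python
KF_WATER = 1.86  # K*kg/mol, cryoscopic constant of water (dilute limit)

def initial_freezing_temp(solutes_g_per_100g, water_g_per_100g, molar_masses):
    # solutes_g_per_100g: grams of each solute per 100 g of food.
    # Total moles of dissolved low-molecular-weight species:
    moles = sum(g / molar_masses[name] for name, g in solutes_g_per_100g.items())
    molality = moles / (water_g_per_100g / 1000.0)  # mol per kg water
    return -KF_WATER * molality                     # degrees C

# Hypothetical high-moisture product: 90 g water, 5 g glucose, 0.5 g NaCl
# per 100 g food (NaCl treated as a single undissociated species here).
t = initial_freezing_temp({"glucose": 5.0, "NaCl": 0.5},
                          water_g_per_100g=90.0,
                          molar_masses={"glucose": 180.16, "NaCl": 58.44})
```

    The paper's model refines this idea by identifying which components actually contribute (those above roughly 50 micromol per 100 g food) and extending the calculation to the frozen water fraction below the initial freezing point.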

  1. A statistical analysis of RNA folding algorithms through thermodynamic parameter perturbation.

    PubMed

    Layton, D M; Bundschuh, R

    2005-01-01

    Computational RNA secondary structure prediction is rather well established. However, such prediction algorithms always depend on a large number of experimentally measured parameters. Here, we study how sensitive structure prediction algorithms are to changes in these parameters. We find that, for changes corresponding to the actual experimental error with which these parameters have been determined, 30% of the structure is falsely predicted, whereas the ground-state structure is preserved under parameter perturbation in only 5% of all cases. We establish that base-pairing probabilities calculated in a thermal ensemble are a viable, although not perfect, measure of the reliability of the prediction of individual structure elements. Finally, a new measure of stability using parameter perturbation is proposed, and its limitations are discussed.

  2. Predicting coastal cliff erosion using a Bayesian probabilistic model

    USGS Publications Warehouse

    Hapke, Cheryl J.; Plant, Nathaniel G.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
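    At its core, the Bayesian approach estimates conditional event probabilities from co-occurrence counts in existing observations. The study's network conditions on cliff height, slope, lithologic strength, prior erosion rate, and wave impact hours; the two-variable toy below, with invented records, shows only that counting step.

```python
from collections import Counter

# Hypothetical training records: (wave_impact, lithologic_strength, eroded)
records = [
    ("high", "weak", 1), ("high", "weak", 1), ("high", "weak", 0),
    ("high", "strong", 0), ("high", "strong", 1),
    ("low", "weak", 0), ("low", "weak", 1),
    ("low", "strong", 0), ("low", "strong", 0), ("low", "strong", 0),
]

counts = Counter((w, s) for w, s, _ in records)          # all cases
eroded = Counter((w, s) for w, s, e in records if e == 1)  # eroded cases

def p_erosion(wave, strength):
    # Conditional probability P(erosion | wave, strength) estimated from
    # counts; a full Bayesian network factorizes this over more variables.
    return eroded[(wave, strength)] / counts[(wave, strength)]
```

    Forecasting then amounts to evaluating `p_erosion` for each transect's predicted forcing and initial conditions, which is how the 70-90% hindcast skill in the abstract would be scored.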

  3. Model‐Based Approach to Predict Adherence to Protocol During Antiobesity Trials

    PubMed Central

    Sharma, Vishnu D.; Combes, François P.; Vakilynejad, Majid; Lahu, Gezim; Lesko, Lawrence J.

    2017-01-01

    Development of antiobesity drugs is continuously challenged by high dropout rates during clinical trials. The objective was to develop a population pharmacodynamic model that describes the temporal changes in body weight, considering disease progression, lifestyle intervention, and drug effects. Markov modeling (MM) was applied to quantify and characterize responder and nonresponder states as key drivers of dropout rates, to ultimately support clinical trial simulations and the prediction of trial adherence. Subjects (n = 4591) from 6 Contrave® trials were included in this analysis. An indirect-response model developed by van Wart et al was used as a starting point. Drug effect was included as dose-driven, using a population dose- and time-dependent pharmacodynamic (DTPD) model. Additionally, a population-pharmacokinetic parameter- and data (PPPD)-driven model was developed using the final DTPD model structure and final parameter estimates from a previously developed population pharmacokinetic model based on available Contrave® pharmacokinetic concentrations. Last, MM was developed to predict transition probabilities among responder, nonresponder, and dropout states, driven by the pharmacodynamic effect resulting from the DTPD or PPPD model. Covariates included in the models were diabetes mellitus and race. The linked DTPD-MM and PPPD-MM predicted transition rates among responder, nonresponder, and dropout states well. The analysis concluded that body-weight change is an important factor influencing dropout rates, and the MM showed that overall a DTPD model-driven approach provides a prediction of clinical trial outcome probabilities similar to a pharmacokinetic-driven approach. PMID:28858397
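    The Markov part of the approach propagates a distribution over responder, nonresponder, and dropout states through per-visit transition probabilities. The matrix below is invented for illustration (in the paper the transition probabilities are driven by the modeled pharmacodynamic effect, not fixed constants), with dropout as an absorbing state.

```python
# States: responder, nonresponder, dropout (absorbing). Rows sum to 1.
# Hypothetical per-visit transition probabilities.
P = {
    "responder":    {"responder": 0.80, "nonresponder": 0.15, "dropout": 0.05},
    "nonresponder": {"responder": 0.10, "nonresponder": 0.70, "dropout": 0.20},
    "dropout":      {"responder": 0.00, "nonresponder": 0.00, "dropout": 1.00},
}

def propagate(dist, steps):
    # Push the state distribution forward one visit at a time.
    for _ in range(steps):
        nxt = {s: 0.0 for s in P}
        for s, p in dist.items():
            for t, q in P[s].items():
                nxt[t] += p * q
        dist = nxt
    return dist

# Distribution after 12 visits, starting half responder / half nonresponder.
final = propagate({"responder": 0.5, "nonresponder": 0.5, "dropout": 0.0}, 12)
```

    Simulating a trial then reduces to reading off `final["dropout"]` as the expected attrition, which is the adherence outcome the abstract is predicting.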

  4. Modeling bronchial circulation with application to soluble gas exchange: description and sensitivity analysis.

    PubMed

    Bui, T D; Dabdub, D; George, S C

    1998-06-01

    The steady-state exchange of inert gases across an in situ canine trachea has recently been shown to be limited equally by diffusion and perfusion over a wide range (0.01-350) of blood solubilities (betablood; ml . ml-1 . atm-1). Hence, we hypothesize that the exchange of ethanol (betablood = 1,756 at 37 degrees C) in the airways depends on the blood flow rate from the bronchial circulation. To test this hypothesis, the dynamics of the bronchial circulation were incorporated into an existing model that describes the simultaneous exchange of heat, water, and a soluble gas in the airways. A detailed sensitivity analysis of key model parameters was performed by using the method of Latin hypercube sampling. The model accurately predicted a previously reported experimental exhalation profile of ethanol (R2 = 0.991) as well as the end-exhalation airstream temperature (34.6 degrees C). The model predicts that 27, 29, and 44% of exhaled ethanol in a single exhalation are derived from the tissues of the mucosa and submucosa, the bronchial circulation, and the tissue exterior to the submucosa (which would include the pulmonary circulation), respectively. Although the concentration of ethanol in the bronchial capillary decreased during inspiration, the three key model outputs (end-exhaled ethanol concentration, the slope of phase III, and end-exhaled temperature) were all statistically insensitive (P > 0.05) to the parameters describing the bronchial circulation. In contrast, the model outputs were all sensitive (P < 0.05) to the thickness of tissue separating the core body conditions from the bronchial smooth muscle. We conclude that both the bronchial circulation and the pulmonary circulation impact soluble gas exchange when the entire conducting airway tree is considered.
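    Latin hypercube sampling, the method named for the sensitivity analysis, stratifies each parameter's range into equal-probability intervals and draws exactly one sample per interval, shuffling the strata independently across parameters. The sketch below is the standard construction with invented bounds, not the study's specific design.

```python
import random

def latin_hypercube(n_samples, bounds, rng=random.Random(42)):
    # One stratified draw per equal-probability interval, per parameter,
    # with the strata independently shuffled across parameters.
    cols = []
    for lo, hi in bounds:
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        cols.append([lo + u * (hi - lo) for u in strata])
    return list(zip(*cols))  # list of parameter tuples

# Hypothetical 2-parameter design: a thickness in (0, 1) mm and a blood
# flow rate in (10, 50) mL/min, sampled at 10 design points.
samples = latin_hypercube(10, [(0.0, 1.0), (10.0, 50.0)])
```

    Running the model at each tuple in `samples` and regressing outputs on inputs is how such an analysis flags which parameters (here, tissue thickness rather than bronchial blood flow) the outputs are sensitive to.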

  5. Risk-assessment and risk-taking behavior predict potassium- and amphetamine-induced dopamine response in the dorsal striatum of rats

    PubMed Central

    Palm, Sara; Momeni, Shima; Lundberg, Stina; Nylander, Ingrid; Roman, Erika

    2014-01-01

    Certain personality types and behavioral traits display high correlations to drug use and an increased level of dopamine in the reward system is a common denominator of all drugs of abuse. Dopamine response to drugs has been suggested to correlate with some of these personality types and to be a key factor influencing the predisposition to addiction. This study investigated if behavioral traits can be related to potassium- and amphetamine-induced dopamine response in the dorsal striatum, an area hypothesized to be involved in the shift from drug use to addiction. The open field and multivariate concentric square field™ tests were used to assess individual behavior in male Wistar rats. Chronoamperometric recordings were then made to study the potassium- and amphetamine-induced dopamine response in vivo. A classification based on risk-taking behavior in the open field was used for further comparisons. Risk-taking behavior was correlated between the behavioral tests and high risk takers displayed a more pronounced response to the dopamine uptake blocking effects of amphetamine. Behavioral parameters from both tests could also predict potassium- and amphetamine-induced dopamine responses showing a correlation between neurochemistry and behavior in risk-assessment and risk-taking parameters. In conclusion, the high risk-taking rats showed a more pronounced reduction of dopamine uptake in the dorsal striatum after amphetamine indicating that this area may contribute to the sensitivity of these animals to psychostimulants and proneness to addiction. Further, inherent dopamine activity was related to risk-assessment behavior, which may be of importance for decision-making and inhibitory control, key components in addiction. PMID:25076877

  6. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.

  7. Predictive value of seven preoperative prognostic scoring systems for spinal metastases

    PubMed Central

    Leithner, Andreas; Radl, Roman; Gruber, Gerald; Hochegger, Markus; Leithner, Katharina; Welkerling, Heike; Rehak, Peter

    2008-01-01

    Predicting prognosis is the key factor in selecting the proper treatment modality for patients with spinal metastases. Therefore, various assessment systems have been designed in order to provide a basis for deciding the course of treatment. Such systems have been proposed by Tokuhashi, Sioutos, Tomita, Van der Linden, and Bauer. The scores differ greatly in the kind of parameters assessed. The aim of this study was to evaluate the prognostic value of each score. Eight parameters were assessed for 69 patients (37 male, 32 female): location, general condition, number of extraspinal bone metastases, number of spinal metastases, visceral metastases, primary tumour, severity of spinal cord palsy, and pathological fracture. Scores according to Tokuhashi (original and revised), Sioutos, Tomita, Van der Linden, and Bauer were assessed as well as a modified Bauer score without scoring for pathologic fracture. Nineteen patients were still alive as of September 2006 with a minimum follow-up of 12 months. All other patients died after a mean period of 17 months after operation. The mean overall survival period was only 3 months for lung cancer, followed by prostate (7 months), kidney (23 months), breast (35 months), and multiple myeloma (51 months). At univariate survival analysis, primary tumour and visceral metastases were significant parameters, while Karnofsky score was only significant in the group including myeloma patients. In multivariate analysis of all seven parameters assessed, primary tumour and visceral metastases were the only significant parameters. Of all seven scoring systems, the original Bauer score and a Bauer score without scoring for pathologic fracture had the best association with survival (P < 0.001). The data of the present study emphasize that the original Bauer score and a modified Bauer score without scoring for pathologic fracture seem to be practicable and highly predictive preoperative scoring systems for patients with spinal metastases. 
However, the decision for or against surgery should never be based on a prognostic score alone but should also take symptoms such as pain and neurological compromise into account. PMID:18787846

  8. Performance of STICS model to predict rainfed corn evapotranspiration and biomass evaluated for 6 years between 1995 and 2006 using daily aggregated eddy covariance fluxes and ancillary measurements.

    NASA Astrophysics Data System (ADS)

    Pattey, Elizabeth; Jégo, Guillaume; Bourgeois, Gaétan

    2010-05-01

    Verifying the performance of process-based crop growth models to predict evapotranspiration and crop biomass is a key component of the adaptation of agricultural crop production to climate variations. STICS, developed by INRA, was among the models selected by Agriculture and Agri-Food Canada to be implemented for environmental assessment studies on climate variations, because of its built-in ability to assimilate biophysical descriptors such as LAI derived from satellite imagery and its open architecture. The model prediction of shoot biomass was calibrated using destructive biomass measurements over one season, by adjusting six cultivar parameters and three generic plant parameters to define two grain corn cultivars adapted to the 1000-km long Mixedwood Plains ecozone. Its performance was then evaluated using a database of 40 site-years of corn destructive biomass and yield. In this study we evaluate the temporal response of STICS evapotranspiration and biomass accumulation predictions against estimates based on daily aggregated eddy covariance fluxes. The flux tower was located in an experimental farm south of Ottawa, and measurements were carried out over corn fields in 1995, 1996, 1998, 2000, 2002 and 2006. Daytime and nighttime fluxes were subjected to QC/QA and gap-filled separately. Soil respiration was partitioned out to calculate the corn net daily CO2 uptake, which was converted into dry biomass. Of the six growing seasons, three (1995, 1998, 2002) had water stress periods during corn grain filling. Year 2000 was cool and wet, while 1996 had heat and rainfall distributed evenly over the season and 2006 had a wet spring. STICS can predict evapotranspiration using either crop coefficients, when wind speed and air moisture are not available, or a resistance approach. The first approach yielded higher evapotranspiration predictions for all years than both the resistance approach and the flux measurements.
The temporal dynamics of the STICS evapotranspiration predictions were very good for the growing seasons without water stress, but evapotranspiration was overestimated by 12-34% when a rainfall deficit occurred. A preliminary comparison with intra-seasonal biomass accumulation showed that the total corn biomass derived from eddy fluxes was closer to the shoot biomass predicted by STICS than to the total biomass. The root-to-shoot ratio predicted by STICS was higher (30-40%) than the ratio reported in the literature (~20%), so some of the parameters controlling root growth may need better calibration. The assembled database will help us identify the areas of greater uncertainty requiring improvement.

  9. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  10. Quantifying model-structure- and parameter-driven uncertainties in spring wheat phenology prediction with Bayesian analysis

    DOE PAGES

    Alderman, Phillip D.; Stanfill, Bryan

    2016-10-06

    Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
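    The random walk Metropolis algorithm named here proposes a symmetric perturbation of the current parameter value and accepts it with probability min(1, posterior ratio). The sketch below samples a toy standard-normal posterior; a crop phenology application would replace `log_post` with the model's data log-likelihood plus prior.

```python
import math
import random

random.seed(1)

def log_post(theta):
    # Toy log-posterior (standard normal, up to a constant). A phenology
    # model would evaluate its likelihood against trial data here.
    return -0.5 * theta ** 2

theta = 3.0            # deliberately poor starting value
chain = []
for _ in range(20000):
    prop = theta + random.gauss(0.0, 1.0)  # symmetric random-walk proposal
    # Accept with probability min(1, exp(log-posterior difference)).
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(theta))):
        theta = prop
    chain.append(theta)  # on rejection, the current value is repeated

burned = chain[2000:]    # discard burn-in
mean = sum(burned) / len(burned)
var = sum((x - mean) ** 2 for x in burned) / len(burned)
```

    Keeping the whole `burned` chain, rather than just its mean, is exactly the "full posterior distribution" the abstract argues deserves more attention: prediction uncertainty is obtained by running the model over draws from the chain.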

  11. Consumption, Retirement and Social Security: Evaluating the Efficiency of Reform that Encourages Longer Careers*

    PubMed Central

    Laitner, John; Silverman, Dan

    2012-01-01

    This paper proposes and analyzes a Social Security reform in which individuals no longer face the OASI payroll tax after, say, age 54 or a career of 34 years, and their subsequent earnings have no bearing on their benefits. We first estimate parameters of a life-cycle model. Our specification includes non-separable preferences and possible disability. It predicts a consumption-expenditure change at retirement. We use the magnitude of the expenditure change, together with households' retirement-age decisions, to identify key structural parameters. The estimated magnitude of the change in consumption-expenditure depends importantly on the treatment of consumption by adult children of the household. Simulations indicate that the reform could increase retirement ages one year or more, equivalent variations could average more than $4,000 per household, and income tax revenues per household could increase by more than $14,000. PMID:23729902

  12. Energy transfer and motion synchronization between mechanical oscillators through microhydrodynamic coupling

    NASA Astrophysics Data System (ADS)

    Wan, Yu; Jin, Kai; Ahmad, Talha J.; Black, Michael J.; Xu, Zhiping

    2017-03-01

    A fluidic environment is encountered by mechanical components in many circumstances; the fluid not only damps their oscillation but also modulates their dynamical behaviors through hydrodynamic interactions. In this study, we examine energy transfer and motion synchronization between two mechanical micro-oscillators by performing thermal lattice-Boltzmann simulations. The coefficient of inter-oscillator energy transfer is measured to quantify the strength of microhydrodynamic coupling, which depends on their distance and fluid properties such as density and viscosity. Synchronized motion of the oscillators is observed in the simulations for typical parameter sets in relevant applications, with the formation and loss of stable anti-phase synchronization controlled by the oscillating frequency, amplitude, and hydrodynamic coupling strength. The critical ranges of key parameters that assure efficient energy transfer or highly synchronized motion are predicted. These findings could be used to inform the mechanical design of passive and active devices that operate in fluids.

  13. The frequency response of dynamic friction: Enhanced rate-and-state models

    NASA Astrophysics Data System (ADS)

    Cabboi, A.; Putelat, T.; Woodhouse, J.

    2016-07-01

    The prediction and control of friction-induced vibration requires a sufficiently accurate constitutive law for dynamic friction at the sliding interface: for linearised stability analysis, this requirement takes the form of a frictional frequency response function. Systematic measurements of this frictional frequency response function are presented for small samples of nylon and polycarbonate sliding against a glass disc. Previous efforts to explain such measurements from a theoretical model have failed, but an enhanced rate-and-state model is presented which is shown to match the measurements remarkably well. The tested parameter space covers a range of normal forces (10-50 N), of sliding speeds (1-10 mm/s) and frequencies (100-2000 Hz). The key new ingredient in the model is the inclusion of contact stiffness to take into account elastic deformations near the interface. A systematic methodology is presented to discriminate among possible variants of the model, and then to identify the model parameter values.

  14. Uncertainty Quantification in Aeroelasticity

    NASA Astrophysics Data System (ADS)

    Beran, Philip; Stanford, Bret; Schrock, Christopher

    2017-01-01

    Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.

  15. Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.

    PubMed

    Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis

    2015-01-01

    Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. Developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovarian (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.
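    Multivariate calibration maps a measured spectrum to an analyte concentration by fitting coefficients over multiple wavelengths. Industrial Raman models are typically PLS-based over full preprocessed spectra; the sketch below is a two-wavelength ordinary-least-squares toy with invented absorbances and concentrations, showing only the least-squares core.

```python
# Toy two-wavelength linear calibration: conc ~ b1*a1 + b2*a2.
# Invented absorbances at two Raman shifts vs. known concentrations,
# generated here as exactly 10*a1 + 2*a2.
spectra = [(0.10, 0.50), (0.20, 0.30), (0.30, 0.60), (0.40, 0.20)]
conc = [2.0, 2.6, 4.2, 4.4]

# Normal equations (A^T A) b = (A^T c), solved in closed form for 2 terms.
s11 = sum(a1 * a1 for a1, _ in spectra)
s12 = sum(a1 * a2 for a1, a2 in spectra)
s22 = sum(a2 * a2 for _, a2 in spectra)
t1 = sum(a1 * c for (a1, _), c in zip(spectra, conc))
t2 = sum(a2 * c for (_, a2), c in zip(spectra, conc))
det = s11 * s22 - s12 * s12
b1 = (t1 * s22 - t2 * s12) / det
b2 = (s11 * t2 - s12 * t1) / det

def predict(a1, a2):
    # Real-time prediction: apply the calibrated coefficients to a
    # newly measured spectrum.
    return b1 * a1 + b2 * a2
```

    The "generic" property claimed in the abstract corresponds to fitting such coefficients on batches spanning multiple cell lines and media, so that `predict` stays accurate on a batch it was not trained on.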

  16. A simple analytical method to estimate all exit parameters of a cross-flow air dehumidifier using liquid desiccant.

    PubMed

    Bassuoni, M M

    2014-03-01

    The dehumidifier is a key component in liquid desiccant air-conditioning systems. Analytical solutions have advantages over numerical solutions in studying dehumidifier performance parameters. This paper presents the exit-parameter results of an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution was developed using the Engineering Equation Solver software. Good agreement was found between the analytical solution and reliable experimental results, with a maximum deviation of +6.63% and -5.65% in the moisture removal rate. The method developed here can be used for quick prediction of dehumidifier performance. The exit parameters of the dehumidifier are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air-to-desiccant flow rates. The results show that hot humid air and desiccant concentration have the greatest impact on the performance of the dehumidifier. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio.

  17. Displacement back analysis for a high slope of the Dagangshan Hydroelectric Power Station based on BP neural network and particle swarm optimization.

    PubMed

    Liang, Zhengzhao; Gong, Bin; Tang, Chunan; Zhang, Yongbin; Ma, Tianhui

    2014-01-01

    The right bank high slope of the Dagangshan Hydroelectric Power Station is located in complicated geological conditions with deep fractures and unloading cracks. How to obtain the mechanical parameters and then evaluate the safety of the slope are the key problems. This paper presents a displacement back analysis for the slope using an artificial neural network (ANN) and particle swarm optimization (PSO). A numerical model was established to simulate displacement increments, providing training data for the ANN. The backpropagation (BP) ANN was used to establish a mapping function between the mechanical parameters and the monitored displacements, and the PSO model was applied to initialize the weights and thresholds of the BP network and to determine suitable values of the mechanical parameters. The elastic moduli of the rock masses were then obtained from the monitored displacement data at different excavation stages, and the BP neural network model was validated by comparing the measured displacements, the displacements predicted by the BP model, and the numerical simulation using the back-analyzed parameters. The proposed model is useful for determining rock mechanical parameters and investigating the instability of rock slopes.
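    The core of displacement back analysis is searching parameter space for values whose predicted displacements match the monitored ones. The toy sketch below uses PSO to recover an elastic modulus from one "monitored" displacement; the displacement model (u proportional to 1/E), load factor, and target value are stand-ins, not the paper's FEM surrogate or site data.

```python
import random

# Hedged sketch of back analysis: PSO minimizing the misfit between a
# monitored displacement and a toy displacement model u = K_LOAD / E.
# All numbers are hypothetical.

random.seed(0)

K_LOAD = 120.0       # hypothetical load factor
U_MONITORED = 2.4    # hypothetical monitored displacement (mm)

def misfit(E):
    return (K_LOAD / E - U_MONITORED) ** 2

def pso(n_particles=20, iters=100, lo=10.0, hi=100.0):
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_f = [misfit(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            f = misfit(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i], f
    return gbest

E_best = pso()
print(round(E_best, 1))  # should approach K_LOAD / U_MONITORED = 50
```

    In the paper the misfit is evaluated through a BP network trained on numerical simulations rather than a closed-form model, but the PSO search loop plays the same role.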

  18. Displacement Back Analysis for a High Slope of the Dagangshan Hydroelectric Power Station Based on BP Neural Network and Particle Swarm Optimization

    PubMed Central

    Liang, Zhengzhao; Gong, Bin; Tang, Chunan; Zhang, Yongbin; Ma, Tianhui

    2014-01-01

    The right bank high slope of the Dagangshan Hydroelectric Power Station is located in complicated geological conditions with deep fractures and unloading cracks. How to obtain the mechanical parameters and then evaluate the safety of the slope are the key problems. This paper presents a displacement back analysis for the slope using an artificial neural network (ANN) and particle swarm optimization (PSO). A numerical model was established to simulate displacement increments, providing training data for the ANN. The backpropagation (BP) ANN was used to establish a mapping function between the mechanical parameters and the monitored displacements, and the PSO model was applied to initialize the weights and thresholds of the BP network and to determine suitable values of the mechanical parameters. The elastic moduli of the rock masses were then obtained from the monitored displacement data at different excavation stages, and the BP neural network model was validated by comparing the measured displacements, the displacements predicted by the BP model, and the numerical simulation using the back-analyzed parameters. The proposed model is useful for determining rock mechanical parameters and investigating the instability of rock slopes. PMID:25140345

  19. Gradient Theory simulations of pure fluid interfaces using a generalized expression for influence parameters and a Helmholtz energy equation of state for fundamentally consistent two-phase calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahms, Rainer N.

    2014-12-31

    The fidelity of Gradient Theory simulations depends on the accuracy of saturation properties and influence parameters, and requires equations of state (EoS) that exhibit fundamentally consistent behavior in the two-phase regime. Widely applied multi-parameter EoS, however, are generally invalid inside this region. Hence, they may not be fully suitable for use with Gradient Theory despite their ability to accurately predict saturation properties. The commonly assumed temperature dependence of pure-component influence parameters usually restricts their validity to subcritical temperature regimes. This may distort predictions for general multi-component interfaces, where temperatures often exceed the critical temperature of vapor-phase components; the calculation of influence parameters is then not well defined. In this paper, one of the first studies is presented in which Gradient Theory is combined with a next-generation Helmholtz energy EoS that facilitates fundamentally consistent calculations over the entire two-phase regime. Using pentafluoroethane as an example, reference simulations with this method are performed. They demonstrate the significance of such high-accuracy, fundamentally consistent calculations for the computation of interfacial properties. These reference simulations are compared to corresponding results from the cubic PR EoS, widely applied in combination with Gradient Theory, and the mBWR EoS. The analysis reveals that neither of these two methods succeeds in consistently capturing the qualitative distribution of key thermodynamic properties obtained in Gradient Theory. Furthermore, a generalized expression for the pure-component influence parameter is presented. This development is informed by its fundamental definition based on the direct correlation function of the homogeneous fluid and by high-fidelity simulations of interfacial density profiles. As a result, the new model preserves the accuracy of previous temperature-dependent expressions, remains well defined at supercritical temperatures, and is fully suitable for calculations of general multi-component two-phase interfaces.
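    For a pure-fluid planar interface, the Gradient Theory working equation reduces to an integral over density: sigma = integral of sqrt(2 * kappa * d_omega(rho)) d rho between the bulk densities. The sketch below evaluates it for a toy double-well excess grand potential (not the Helmholtz EoS of the paper); kappa and the well parameters are illustrative numbers chosen so the integral has a closed-form check.

```python
import math

# Hedged sketch: Gradient Theory surface tension for a toy excess grand
# potential d_omega(rho) = A*(rho-rho_v)^2*(rho_l-rho)^2, which is zero
# at both bulk densities. All parameter values are illustrative.

KAPPA = 1.0e-3
RHO_V, RHO_L, A = 1.0, 10.0, 2.0e-2

def d_omega(rho):
    return A * (rho - RHO_V) ** 2 * (RHO_L - rho) ** 2

def surface_tension(n=2000):
    """Trapezoidal evaluation of sigma = int sqrt(2*kappa*d_omega) drho."""
    h = (RHO_L - RHO_V) / n
    s = 0.0
    for i in range(n + 1):
        rho = RHO_V + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.sqrt(2.0 * KAPPA * d_omega(rho))
    return s * h

sigma = surface_tension()
# closed form for this toy well: sqrt(2*KAPPA*A) * (RHO_L - RHO_V)**3 / 6
sigma_exact = math.sqrt(2.0 * KAPPA * A) * (RHO_L - RHO_V) ** 3 / 6.0
print(round(sigma, 4), round(sigma_exact, 4))
```

    With a real EoS, d_omega comes from the Helmholtz energy evaluated inside the two-phase region, which is exactly where the paper argues fundamental consistency matters.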

  20. Predicting Key Events in the Popularity Evolution of Online Information.

    PubMed

    Hu, Ying; Hu, Changjun; Fu, Shushen; Fang, Mingzhe; Xu, Wenwen

    2017-01-01

    The popularity of online information generally experiences a rising and falling evolution. This paper considers the "burst", "peak", and "fade" key events together as a representative summary of popularity evolution. We propose a novel prediction task: predicting when popularity undergoes these key events. It is of great importance to know when these three key events occur, because doing so helps recommendation systems, online marketing, and containment of rumors. However, it is very challenging to solve this new prediction task due to two issues. First, popularity evolution has high variation and can follow various patterns, so how can we identify "burst", "peak", and "fade" in different patterns of popularity evolution? Second, these events usually occur in a very short time, so how can we accurately yet promptly predict them? In this paper we address these two issues. To handle the first one, we use a simple moving average to smooth variation, and then a universal method is presented for different patterns to identify the key events in popularity evolution. To deal with the second one, we extract different types of features that may have an impact on the key events, and then a correlation analysis is conducted in the feature selection step to remove irrelevant and redundant features. The remaining features are used to train a machine learning model. The feature selection step improves prediction accuracy, and in order to emphasize prediction promptness, we design a new evaluation metric which considers both accuracy and promptness to evaluate our prediction task. Experimental and comparative results show the superiority of our prediction solution.
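    The smoothing-then-labeling step can be sketched in a few lines: smooth the popularity series with a simple moving average, then flag burst (first sharp rise), peak (smoothed maximum), and fade (first drop below half the peak). The thresholds and the toy series are assumptions for illustration, not the paper's settings.

```python
# Illustrative sketch of identifying "burst", "peak", and "fade" on a
# moving-average-smoothed popularity series. Thresholds are assumed.

def moving_average(series, window=3):
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def key_events(series, burst_ratio=2.0):
    s = moving_average(series)
    peak = max(range(len(s)), key=lambda i: s[i])
    burst = next(i for i in range(1, len(s))
                 if s[i] >= burst_ratio * s[i - 1])
    fade = next(i for i in range(peak, len(s)) if s[i] <= 0.5 * s[peak])
    return burst, peak, fade

pop = [1, 1, 2, 8, 20, 30, 28, 15, 6, 3, 2, 1]
print(key_events(pop))  # (3, 6, 9) for this toy series
```

    The paper's actual contribution is predicting these events ahead of time from features; the rules above only label them in hindsight, which is the step that produces training targets.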

  1. Predicting Key Events in the Popularity Evolution of Online Information

    PubMed Central

    Fu, Shushen; Fang, Mingzhe; Xu, Wenwen

    2017-01-01

    The popularity of online information generally experiences a rising and falling evolution. This paper considers the “burst”, “peak”, and “fade” key events together as a representative summary of popularity evolution. We propose a novel prediction task—predicting when popularity undergoes these key events. It is of great importance to know when these three key events occur, because doing so helps recommendation systems, online marketing, and containment of rumors. However, it is very challenging to solve this new prediction task due to two issues. First, popularity evolution has high variation and can follow various patterns, so how can we identify “burst”, “peak”, and “fade” in different patterns of popularity evolution? Second, these events usually occur in a very short time, so how can we accurately yet promptly predict them? In this paper we address these two issues. To handle the first one, we use a simple moving average to smooth variation, and then a universal method is presented for different patterns to identify the key events in popularity evolution. To deal with the second one, we extract different types of features that may have an impact on the key events, and then a correlation analysis is conducted in the feature selection step to remove irrelevant and redundant features. The remaining features are used to train a machine learning model. The feature selection step improves prediction accuracy, and in order to emphasize prediction promptness, we design a new evaluation metric which considers both accuracy and promptness to evaluate our prediction task. Experimental and comparative results show the superiority of our prediction solution. PMID:28046121

  2. Representing winter wheat in the Community Land Model (version 4.5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.

    Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land–atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles for winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index, and reduced latent heat flux and net ecosystem exchange root mean square error (RMSE) by 41 and 35 % during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield at sites and in regions (northwestern and southeastern US) with historically greater yields by 35 %.

  3. Representing winter wheat in the Community Land Model (version 4.5)

    NASA Astrophysics Data System (ADS)

    Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.; Torn, Margaret S.; Kueppers, Lara M.

    2017-05-01

    Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles for winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index, and reduced latent heat flux and net ecosystem exchange root mean square error (RMSE) by 41 and 35 % during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield at sites and in regions (northwestern and southeastern US) with historically greater yields by 35 %.
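    Vernalization schemes of the kind mentioned above typically accumulate "effective vernalization days" while temperatures sit in a cool window, and scale development by the fraction of the requirement met. The sketch below shows this common crop-model form; the thresholds, taper, and requirement are illustrative assumptions, not CLM4.5's exact formulation.

```python
# Hedged sketch of a generic vernalization factor (illustrative numbers,
# not CLM's scheme): one effective day when the daily mean is in a cool
# range, a linear taper above it, and a capped fraction of requirement.

VD_REQUIRED = 50.0  # vernalization days needed (assumed)

def daily_vernalization(t_mean):
    """Effective vernalization contribution of one day (0..1)."""
    if -1.0 <= t_mean <= 8.0:
        return 1.0
    if 8.0 < t_mean <= 15.0:
        return (15.0 - t_mean) / 7.0  # linear taper
    return 0.0

def vernalization_factor(daily_temps):
    vd = sum(daily_vernalization(t) for t in daily_temps)
    return min(1.0, vd / VD_REQUIRED)

winter = [4.0] * 40 + [10.0] * 14  # hypothetical daily means (deg C)
print(round(vernalization_factor(winter), 2))
```

    A factor below 1 delays simulated flowering, which is how missing winter chill propagates into leaf area and yield predictions.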

  4. Representing winter wheat in the Community Land Model (version 4.5)

    DOE PAGES

    Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.; ...

    2017-05-05

    Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land–atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles for winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index, and reduced latent heat flux and net ecosystem exchange root mean square error (RMSE) by 41 and 35 % during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield at sites and in regions (northwestern and southeastern US) with historically greater yields by 35 %.

  5. Deconstructing three-dimensional (3D) structure of absorptive glass mat (AGM) separator to tailor pore dimensions and amplify electrolyte uptake

    NASA Astrophysics Data System (ADS)

    Rawal, Amit; Rao, P. V. Kameswara; Kumar, Vijay

    2018-04-01

    The absorptive glass mat (AGM) separator is a vital technical component in valve-regulated lead acid (VRLA) batteries that can be tailored for a desired application. To selectively design and tailor the AGM separator, its intricate three-dimensional (3D) structure needs to be unraveled. Herein, a toolkit of 3D analytical models of pore size distribution and electrolyte uptake, expressed via the wicking characteristics of AGM separators under unconfined and confined states, is presented. 3D fiber orientation distributions obtained previously through X-ray micro-computed tomography (microCT) analysis are used as a key set of input parameters. The predictive ability of the pore size distribution model is assessed against a commonly used experimental set-up that applies high levels of compressive stress. Further, the existing analytical model of the wicking characteristics of AGM separators has been extended to account for 3D characteristics and subsequently compared with experimental results. The good agreement between theory and experiment paves the way for simulating the realistic charge-discharge modes of the battery by applying cyclic loading conditions. A threshold criterion describing the invariant behavior of pore size and wicking characteristics, in terms of the maximum permissible limits of key structural parameters during the charge-discharge mode of the battery, is also proposed.
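    Wicking in a porous mat is often first estimated with the classical Lucas-Washburn relation, h(t) = sqrt(r * gamma * cos(theta) * t / (2 * mu)), before structural corrections like those in the paper's 3D model are applied. The sketch below uses that relation with water-like fluid properties and an assumed effective pore radius and contact angle; none of the values come from the paper.

```python
import math

# Illustrative Lucas-Washburn wicking-height estimate (assumed values,
# not the paper's 3D model or measured AGM properties).

GAMMA = 0.0728        # surface tension, N/m (water-like, assumed)
MU = 1.0e-3           # viscosity, Pa*s (water-like, assumed)
R_EFF = 5.0e-6        # effective capillary radius, m (assumed)
THETA = math.radians(30.0)  # contact angle (assumed)

def wicking_height(t):
    """h(t) = sqrt(r * gamma * cos(theta) * t / (2 * mu)), t in seconds."""
    return math.sqrt(R_EFF * GAMMA * math.cos(THETA) * t / (2.0 * MU))

print(round(wicking_height(60.0), 3))  # metres risen after 60 s
```

    The square-root-of-time signature is the baseline against which confined versus unconfined AGM wicking behavior is usually compared.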

  6. Quantifying the past and future impact of climate on outbreak patterns of bank voles (Myodes glareolus).

    PubMed

    Imholt, Christian; Reil, Daniela; Eccard, Jana A; Jacob, Daniela; Hempelmann, Nils; Jacob, Jens

    2015-02-01

    Central European outbreak populations of the bank vole (Myodes glareolus Schreber) are known to cause damage in forestry and to transmit the most common type of Hantavirus (Puumala virus, PUUV) to humans. A sound estimation of the potential effects of future climate scenarios on population dynamics is a prerequisite for long-term management strategies. Historic abundance time series were used to identify the key weather conditions associated with bank vole abundance, and were extrapolated to future climate scenarios to derive potential long-term changes in bank vole abundance dynamics. Classification and regression tree analysis revealed the weather parameters most strongly associated with high and low bank vole abundances. Summer temperatures 2 years prior to trapping had the highest impact on abundance fluctuation. Extrapolation of the identified parameters to future climate conditions revealed an increase in years with high vole abundance. Key weather patterns associated with vole abundance reflect the importance of superabundant food supply through masting to the occurrence of bank vole outbreaks. Owing to changing climate, these outbreaks are predicted to potentially increase in frequency 3- to 4-fold by the end of this century. This may negatively affect damage patterns in forestry and the risk of human PUUV infection in the long term. © 2014 Society of Chemical Industry.
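    The core step of a classification and regression tree is choosing the split threshold that best separates the response. The sketch below implements that single step (a regression stump minimizing the sum of squared errors) on hypothetical data echoing the abstract's key predictor, summer temperature two years before trapping; the numbers are invented for illustration.

```python
# Minimal sketch of the core CART split (not the study's full analysis):
# pick the weather-variable threshold minimizing within-group SSE.
# Data below are hypothetical.

def best_split(x, y):
    """Return (threshold, sse) of the best binary split of y by x."""
    best = (None, float("inf"))
    for t in sorted(set(x))[1:]:
        left = [yi for xi, yi in zip(x, y) if xi < t]
        right = [yi for xi, yi in zip(x, y) if xi >= t]
        sse = 0.0
        for grp in (left, right):
            m = sum(grp) / len(grp)
            sse += sum((v - m) ** 2 for v in grp)
        if sse < best[1]:
            best = (t, sse)
    return best

# hypothetical: summer temperature two years before trapping vs vole index
temp2 = [14.1, 14.8, 15.2, 16.9, 17.3, 17.8, 18.0, 18.4]
voles = [5, 8, 6, 40, 55, 48, 60, 52]
threshold, sse = best_split(temp2, voles)
print(threshold)
```

    A full tree repeats this split recursively on each side until a stopping rule is met; the first split's variable is what the study reports as the dominant weather parameter.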

  7. In vivo quantitative evaluation of vascular parameters for angiogenesis based on sparse principal component analysis and aggregated boosted trees

    NASA Astrophysics Data System (ADS)

    Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie

    2014-12-01

    To solve the multicollinearity issue and the unequal contribution of vascular parameters to the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemia model mice. Taking vascular volume as the ground truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junction, connectivity density, segment number, and vascular length, which indicated that these were the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both the ABTs directly using the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to the ABTs directly using the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters were assembled into the sparse PCs with the highest relative importance.
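    The Pearson-correlation baseline the authors compare against is easy to sketch: rank each vascular parameter by the absolute value of its correlation with vascular volume. The sparse-PCA plus aggregated-boosted-trees pipeline itself needs more machinery; the data below are hypothetical.

```python
import math

# Simple sketch of the Pearson-correlation baseline: rank hypothetical
# vascular parameters by |r| with vascular volume.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

volume = [1.0, 1.4, 2.1, 2.9, 3.5]
params = {
    "vascular_area": [0.8, 1.2, 1.9, 2.6, 3.3],
    "segment_number": [10, 13, 19, 27, 30],
    "tortuosity": [1.2, 1.1, 1.3, 1.2, 1.1],
}
ranked = sorted(params, key=lambda k: -abs(pearson(params[k], volume)))
print(ranked[0])
```

    Correlation ranking treats parameters one at a time; the paper's point is that grouping collinear parameters into sparse PCs lets all of the mutually correlated key parameters be selected together.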

  8. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for the neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large-magnitude earthquakes, which can be defined from seismic history and seismotectonics, and incorporates a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by Earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product of the seismic source tensor and the medium Green function, and which are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map in which the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not treated statistically as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, uncertainties are treated by sensitivity analyses for key modelling parameters. Constraining the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones, and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error is the factor of 2 intrinsic in the MCS scale. We tested this hypothesis by analyzing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.
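    The factor-of-2 estimate can be connected to magnitude error with a back-of-envelope scaling: if log10 of a ground motion parameter grows roughly linearly with magnitude, a catalogue magnitude error dM maps to a multiplicative factor 10**(b*dM). The slope b = 1 used below is an illustrative assumption, not a value from the paper.

```python
# Back-of-envelope sketch (assumed slope, not the paper's modelling):
# a magnitude error dM translates into a multiplicative ground-motion
# uncertainty of 10**(b*dM).

def gm_factor(d_mag, slope=1.0):
    """Multiplicative ground-motion uncertainty from magnitude error."""
    return 10.0 ** (slope * d_mag)

print(round(gm_factor(0.3), 2))  # ~2, consistent with the factor-of-2 estimate
```

    Under this scaling, a magnitude uncertainty of about 0.3 units corresponds to the factor of 2 that the abstract attributes to intensity-based catalogues.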

  9. A discrete element modelling approach for block impacts on trees

    NASA Astrophysics Data System (ADS)

    Toe, David; Bourrier, Franck; Olmedo, Ignatio; Berger, Frederic

    2015-04-01

    In the past few years, rockfall models explicitly accounting for block shape, especially those using the Discrete Element Method (DEM), have shown a good ability to predict rockfall trajectories. Integrating forest effects into those models still remains challenging. This study aims at using a DEM approach to model impacts of blocks on trees and to identify the key parameters controlling the block kinematics after the impact on a tree. A DEM impact model of a block on a tree was developed and validated using laboratory experiments; key parameters were then assessed using a global sensitivity analysis. Modelling the impact of a block on a tree using DEM allows taking into account large displacements, material non-linearities, and contact between the block and the tree. Tree stems are represented by flexible cylinders, modelled as plastic beams sustaining normal, shearing, bending, and twisting loads. Root-soil interactions are modelled using a rotational stiffness acting on the bending moment at the bottom of the tree and a limit bending moment to account for tree overturning. The crown is taken into account using an additional mass distributed uniformly over the upper part of the tree. The block is represented by a sphere, and the contact model between the block and the stem is an elastic frictional model. The DEM model was validated using laboratory impact tests carried out on 41 fresh beech (Fagus sylvatica) stems. Each stem was 1.3 m long with a diameter between 3 and 7 cm. Stems were clamped on a rigid structure and impacted by a 149 kg Charpy pendulum. Finally, an intensive simulation campaign of blocks impacting trees was conducted to identify the input parameters controlling the block kinematics after the impact on a tree: 20 input parameters were considered in the DEM simulation model, 12 related to the tree and 8 to the block. The results highlight that the impact velocity, the stem diameter, and the block volume are the three input parameters that control the block kinematics after impact.
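    A crude energy balance conveys why those three parameters dominate: the block keeps whatever kinetic energy the stem does not absorb, and the absorbable energy grows with stem diameter. This toy sketch is not the DEM model; the absorbed-energy value is a hypothetical input.

```python
import math

# Toy energy-balance sketch of a block-tree impact (not the DEM model):
# residual block speed from incoming kinetic energy minus an assumed
# energy absorbed by the stem.

def exit_velocity(mass, v_in, e_absorbed):
    """Block speed (m/s) after impact, from a kinetic-energy balance."""
    e_in = 0.5 * mass * v_in ** 2
    e_out = max(0.0, e_in - e_absorbed)
    return math.sqrt(2.0 * e_out / mass)

# hypothetical: 149 kg block at 10 m/s, stem absorbing 3 kJ
print(round(exit_velocity(149.0, 10.0, 3000.0), 2))
```

    Because incoming energy scales with the square of impact velocity while stem capacity scales with diameter, velocity, stem diameter, and block size (mass/volume) naturally emerge as the controlling inputs.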

  10. Q-marker based strategy for CMC research of Chinese medicine: A case study of Panax Notoginseng saponins.

    PubMed

    Zhong, Yi; Zhu, Jieqiang; Yang, Zhenzhong; Shao, Qing; Fan, Xiaohui; Cheng, Yiyu

    2018-01-31

    To ensure pharmaceutical quality, chemistry, manufacturing and control (CMC) research is essential. However, due to the inherent complexity of Chinese medicine (CM), CMC study of CM remains a great challenge for academia, industry, and regulatory agencies. Recently, the quality-marker (Q-marker) concept was proposed for establishing quality standards and quality analysis approaches for Chinese medicine, which sheds light on the CMC study of Chinese medicine. Here the manufacturing process of Panax notoginseng saponins (PNS) is taken as a case study, and the present work establishes a Q-marker based research strategy for CMC of Chinese medicine. The Q-markers of PNS are selected by integrating the chemical profile with pharmacological activities. Then, the key processes of PNS manufacturing are identified by material flow analysis. Furthermore, modeling algorithms are employed to explore the relationship between the Q-markers and the critical process parameters (CPPs) of the key processes. Finally, the CPPs of the key processes are optimized to improve process efficiency. Among the 97 identified compounds, notoginsenoside R1 and ginsenosides Rg1, Re, Rb1, and Rd are selected as the Q-markers of PNS. Our analysis of PNS manufacturing shows that the extraction process and the column chromatography process are the key processes. With the CPPs of each process as inputs and the Q-marker contents as outputs, two process prediction models are built separately for the extraction and column chromatography processes of Panax notoginseng, both of which possess good prediction ability. Based on the efficiency models of the extraction and column chromatography processes we constructed, the optimal CPPs of both processes are calculated. Our results show that the Q-marker based CMC research strategy can be applied to analyze the manufacturing processes of Chinese medicine to assure product quality and improve the efficiency of key processes simultaneously. Copyright © 2018 Elsevier GmbH. All rights reserved.
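    The final optimization step can be sketched generically: given a fitted model mapping a critical process parameter to total Q-marker content, search the feasible CPP range for the optimum. The quadratic response surface and the CPP (an ethanol fraction is assumed for illustration) are hypothetical, not the paper's models or values.

```python
# Hedged sketch of CPP optimization: grid-search a toy fitted response
# surface mapping an assumed CPP (e.g. ethanol fraction) to Q-marker
# content. The surface and its optimum are illustrative.

def qmarker_content(cpp):
    """Toy response surface with a maximum at cpp = 0.7."""
    return 95.0 - 120.0 * (cpp - 0.7) ** 2

def optimize(lo=0.4, hi=0.9, steps=500):
    best_cpp, best_y = lo, qmarker_content(lo)
    for i in range(1, steps + 1):
        c = lo + (hi - lo) * i / steps
        y = qmarker_content(c)
        if y > best_y:
            best_cpp, best_y = c, y
    return best_cpp

print(round(optimize(), 3))
```

    In practice the response model would be regressed from batch data per process (extraction, chromatography), and the search would cover several CPPs jointly.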

  11. Analysis of Hydrogen Generation through Thermochemical Gasification of Coconut Shell Using Thermodynamic Equilibrium Model Considering Char and Tar

    PubMed Central

    Rupesh, Shanmughom; Muraleedharan, Chandrasekharan; Arun, Palatel

    2014-01-01

    This work investigates the potential of coconut shell for air-steam gasification using a thermodynamic equilibrium model. A thermodynamic equilibrium model considering tar and realistic char conversion was developed using MATLAB software to predict the product gas composition. After comparison with experimental results, the prediction capability of the model was enhanced by multiplying the equilibrium constants by suitable coefficients. The modified model is used to study the effect of key process parameters, such as temperature, steam-to-biomass ratio, and equivalence ratio, on product gas yield, composition, and heating value of the syngas, along with gasification efficiency. For a steam-to-biomass ratio of unity, the maximum mole fraction of hydrogen in the product gas is found to be 36.14%, with a lower heating value of 7.49 MJ/Nm3, at a gasification temperature of 1500 K and an equivalence ratio of 0.15. PMID:27433487
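    One equilibrium relation that appears in such gasification models is the water-gas shift, CO + H2O ⇌ CO2 + H2. The sketch below solves it for the reaction extent by bisection; the equilibrium constant K and feed are assumed values. The paper's modification of multiplying equilibrium constants by fitted coefficients would enter here simply as K_adjusted = c * K.

```python
# Minimal sketch of one equilibrium relation used in gasification models:
# water-gas shift extent x with (n_co2+x)(n_h2+x) = K (n_co-x)(n_h2o-x),
# solved by bisection. K and the feed are assumed values.

def wgs_extent(n_co, n_h2o, n_co2, n_h2, K):
    def f(x):
        return (n_co2 + x) * (n_h2 + x) - K * (n_co - x) * (n_h2o - x)
    lo, hi = -min(n_co2, n_h2), min(n_co, n_h2o)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x = wgs_extent(n_co=1.0, n_h2o=1.0, n_co2=0.0, n_h2=0.0, K=1.0)
print(round(x, 3))  # symmetric case with K = 1: x = 0.5
```

    A full equilibrium model couples several such relations with elemental balances and, as in the paper, char and tar sub-models; this shows only the shape of one constraint.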

  12. Analysis of Hydrogen Generation through Thermochemical Gasification of Coconut Shell Using Thermodynamic Equilibrium Model Considering Char and Tar.

    PubMed

    Rupesh, Shanmughom; Muraleedharan, Chandrasekharan; Arun, Palatel

    2014-01-01

    This work investigates the potential of coconut shell for air-steam gasification using a thermodynamic equilibrium model. A thermodynamic equilibrium model considering tar and realistic char conversion was developed using MATLAB software to predict the product gas composition. After comparison with experimental results, the prediction capability of the model was enhanced by multiplying the equilibrium constants by suitable coefficients. The modified model is used to study the effect of key process parameters, such as temperature, steam-to-biomass ratio, and equivalence ratio, on product gas yield, composition, and heating value of the syngas, along with gasification efficiency. For a steam-to-biomass ratio of unity, the maximum mole fraction of hydrogen in the product gas is found to be 36.14%, with a lower heating value of 7.49 MJ/Nm(3), at a gasification temperature of 1500 K and an equivalence ratio of 0.15.

  13. Channel Temperature Model for Microwave AlGaN/GaN HEMTs on SiC and Sapphire MMICs in High Power, High Efficiency SSPAs

    NASA Technical Reports Server (NTRS)

    Freeman, Jon C.

    2004-01-01

    A key parameter in the design trade-offs made during AlGaN/GaN HEMT development for microwave power amplifiers is the channel temperature. An accurate determination can, in general, only be obtained with detailed software; however, a quick estimate is always helpful, as it speeds up the design cycle. This paper gives a simple technique to estimate the channel temperature of a generic microwave AlGaN/GaN HEMT on SiC or sapphire, while incorporating the temperature dependence of the thermal conductivity. The procedure is validated by comparing its predictions with the experimentally measured temperatures in microwave devices reported in three recently published articles. The model predicts the temperature to within 5 to 10 percent of the true average channel temperature. The calculation strategy is extended to determine device temperature in power-combining MMICs for solid-state power amplifiers (SSPAs).
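    A quick estimate in the same spirit can be sketched with a fixed-point iteration: thermal conductivity falls as temperature rises, so the effective thermal resistance grows with the very temperature rise it produces. The scaling exponent, dissipated power, and baseline thermal resistance below are illustrative assumptions, not the paper's technique or values.

```python
# Hedged sketch (illustrative values, not the paper's method): iterate
# dT = P * R_th, where R_th scales with mean temperature because
# k(T) ~ (T_ref / T)**n for SiC-like substrates.

def channel_temp(t_base, p_diss, r_th0, n=1.3, t_ref=300.0, iters=50):
    """Fixed-point channel-temperature estimate (all temperatures in K)."""
    t_ch = t_base
    for _ in range(iters):
        t_mean = 0.5 * (t_base + t_ch)
        t_ch = t_base + p_diss * r_th0 * (t_mean / t_ref) ** n
    return t_ch

# hypothetical device: 300 K base, 5 W dissipated, 20 K/W at 300 K
print(round(channel_temp(300.0, 5.0, 20.0), 1))
```

    Ignoring the conductivity roll-off (n = 0) would give a 100 K rise; the iteration converges to a noticeably larger rise, which is the effect the paper's method is built to capture.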

  14. Micro-Ramp Flow Control for Oblique Shock Interactions: Comparisons of Computational and Experimental Data

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie M.; Reich, David B.; O'Connor, Michael B.

    2010-01-01

    Computational fluid dynamics was used to study the effectiveness of micro-ramp vortex generators to control oblique shock boundary layer interactions. Simulations were based on experiments previously conducted in the 15 x 15 cm supersonic wind tunnel at NASA Glenn Research Center. Four micro-ramp geometries were tested at Mach 2.0 varying the height, chord length, and spanwise spacing between micro-ramps. The overall flow field was examined. Additionally, key parameters such as boundary-layer displacement thickness, momentum thickness and incompressible shape factor were also examined. The computational results predicted the effects of the micro-ramps well, including the trends for the impact that the devices had on the shock boundary layer interaction. However, computing the shock boundary layer interaction itself proved to be problematic since the calculations predicted more pronounced adverse effects on the boundary layer due to the shock than were seen in the experiment.
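    The boundary-layer integral parameters named above have standard definitions: displacement thickness from the deficit of (1 - u/Ue), momentum thickness from u/Ue * (1 - u/Ue), and the incompressible shape factor as their ratio. The sketch below evaluates them for an assumed 1/7th-power-law velocity profile, not the tunnel or CFD data, and checks against the analytic values.

```python
# Illustrative computation of displacement thickness, momentum thickness,
# and incompressible shape factor for an assumed 1/7th-power-law profile
# u/Ue = (y/delta)**(1/7).

def integral_params(n_exp=7.0, n=20000):
    h = 1.0 / n
    disp = mom = 0.0
    for i in range(n + 1):
        eta = i * h
        u = eta ** (1.0 / n_exp)        # u / U_e
        w = 0.5 if i in (0, n) else 1.0
        disp += w * (1.0 - u)
        mom += w * u * (1.0 - u)
    disp *= h
    mom *= h
    return disp, mom, disp / mom        # delta*/delta, theta/delta, H

d_star, theta, H = integral_params()
print(round(H, 3))  # analytic value for a 1/7 profile: H = 9/7
```

    A shock-thickened boundary layer shifts H upward (toward separation-like profiles), which is why these parameters are used to judge how well micro-ramps mitigate the interaction.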

  15. Micro-Ramp Flow Control for Oblique Shock Interactions: Comparisons of Computational and Experimental Data

    NASA Technical Reports Server (NTRS)

    Hirt, Stephanie M.; Reich, David B.; O'Connor, Michael B.

    2012-01-01

    Computational fluid dynamics was used to study the effectiveness of micro-ramp vortex generators to control oblique shock boundary layer interactions. Simulations were based on experiments previously conducted in the 15- by 15-cm supersonic wind tunnel at the NASA Glenn Research Center. Four micro-ramp geometries were tested at Mach 2.0 varying the height, chord length, and spanwise spacing between micro-ramps. The overall flow field was examined. Additionally, key parameters such as boundary-layer displacement thickness, momentum thickness and incompressible shape factor were also examined. The computational results predicted the effects of the micro-ramps well, including the trends for the impact that the devices had on the shock boundary layer interaction. However, computing the shock boundary layer interaction itself proved to be problematic since the calculations predicted more pronounced adverse effects on the boundary layer due to the shock than were seen in the experiment.

  16. Sediment sorting along tidal sand waves: A comparison between field observations and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Van Oyen, Tomas; Blondeaux, Paolo; Van den Eynde, Dries

    2013-07-01

    A site-by-site comparison between field observations and theoretical predictions of sediment sorting patterns along tidal sand waves is performed for ten locations in the North Sea. At each site, the observed grain size distribution along the bottom topography and the geometry of the bed forms are described in detail, and the procedure used to obtain the model parameters is summarized. The model accurately describes the wavelength of the observed sand waves for the majority of the locations, while still providing reasonable estimates for the remaining sites. In addition, for seven of the ten locations, the qualitative sorting pattern predicted by the model agrees with the observed grain size distribution. A discussion of the site-by-site comparison, taking into account uncertainties in the field data, indicates that the model captures most of the key processes controlling the phenomenon.

  17. A kinetic model of municipal sludge degradation during non-catalytic wet oxidation.

    PubMed

    Prince-Pike, Arrian; Wilson, David I; Baroutian, Saeid; Andrews, John; Gapes, Daniel J

    2015-12-15

    Wet oxidation is a successful process for the treatment of municipal sludge. In addition, the resulting effluent from wet oxidation is a useful carbon source for subsequent biological nutrient removal processes in wastewater treatment. Owing to limitations with current kinetic models, this study produced a kinetic model which predicts the concentrations of key intermediate components during wet oxidation. The model was regressed from lab-scale experiments and then subsequently validated using data from a wet oxidation pilot plant. The model was shown to be accurate in predicting the concentrations of each component, and produced good results when applied to a plant 500 times larger in size. A statistical study was undertaken to investigate the validity of the regressed model parameters. Finally the usefulness of the model was demonstrated by suggesting optimum operating conditions such that volatile fatty acids were maximised. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Heave-pitch-roll analysis and testing of air cushion landing systems

    NASA Technical Reports Server (NTRS)

    Boghani, A. B.; Captain, K. M.; Wormley, D. N.

    1978-01-01

    The analytical tools (analysis and computer simulation) needed to explain and predict the dynamic operation of air cushion landing systems (ACLS) are described. The following tasks were performed: development of improved analytical models for the fan and the trunk; formulation of a heave-pitch-roll analysis for the complete ACLS; development of a general-purpose computer simulation to evaluate the landing and taxi performance of an ACLS-equipped aircraft; and verification and refinement of the analysis by comparison with test data obtained through laboratory testing of a prototype cushion. Simulation capabilities are demonstrated through typical landing and taxi simulations of an ACLS aircraft. Initial results show that fan dynamics have a major effect on system performance. Comparison with laboratory test data (zero forward speed) indicates that the analysis can predict most of the key static and dynamic parameters (pressure, deflection, acceleration, etc.) within a margin of 10 to 25 percent.

  19. Improving hot region prediction by parameter optimization of density clustering in PPI.

    PubMed

    Hu, Jing; Zhang, Xiaolong

    2016-11-01

    This paper proposes an optimized algorithm that combines density clustering with parameter selection and feature-based classification for hot region prediction. First, all residues are classified by SVM to remove non-hot-spot residues; then density clustering with parameter selection is used to find hot regions. Density-based incremental clustering has two input parameters, radius and density, and this paper studies how to select them. We first fix the density and enumerate the radius to find the pair of parameters that yields the maximum number of clusters, and then fix the radius and enumerate the density to find a second such pair. Experimental results show that the proposed method using both pairs of parameters provides better prediction performance than the alternative; comparing the two, fixing the radius and enumerating the density gives slightly higher prediction accuracy than fixing the density and enumerating the radius. Copyright © 2016. Published by Elsevier Inc.
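The fix-one-parameter, enumerate-the-other selection scheme described above can be sketched in plain Python. This is a hypothetical toy (a simplified density clustering on invented 2-D points), not the authors' implementation:

```python
import math

def cluster(points, radius, density):
    """Toy density clustering: a point is a core point if at least
    `density` points (including itself) lie within `radius` of it;
    clusters grow outward from core points. Returns the cluster count."""
    n = len(points)
    def neighbors(i):
        return [j for j in range(n)
                if math.dist(points[i], points[j]) <= radius]
    labels = [None] * n
    cid = 0
    for i in range(n):
        if labels[i] is not None or len(neighbors(i)) < density:
            continue
        cid += 1
        stack = [i]
        while stack:
            p = stack.pop()
            if labels[p] is not None:
                continue
            labels[p] = cid
            nb = neighbors(p)
            if len(nb) >= density:       # only core points expand the cluster
                stack.extend(j for j in nb if labels[j] is None)
    return cid

def best_radius(points, density, radii):
    """Fix the density, enumerate the radius, and keep the radius value
    that maximizes the number of clusters found."""
    return max(radii, key=lambda r: cluster(points, r, density))
```

The same helper could be mirrored to fix the radius and enumerate the density, which is the variant the paper reports as slightly more accurate.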

  20. Prediction of stream volatilization coefficients

    USGS Publications Warehouse

    Rathbun, Ronald E.

    1990-01-01

    Equations are developed for predicting the liquid-film and gas-film reference-substance parameters for quantifying volatilization of organic solutes from streams. Molecular weight and molecular-diffusion coefficients of the solute are used as correlating parameters. Equations for predicting molecular-diffusion coefficients of organic solutes in water and air are developed, with molecular weight and molal volume as parameters. Mean absolute errors of prediction for diffusion coefficients in water are 9.97% for the molecular-weight equation, 6.45% for the molal-volume equation. The mean absolute error for the diffusion coefficient in air is 5.79% for the molal-volume equation. Molecular weight is not a satisfactory correlating parameter for diffusion in air because two equations are necessary to describe the values in the data set. The best predictive equation for the liquid-film reference-substance parameter has a mean absolute error of 5.74%, with molal volume as the correlating parameter. The best equation for the gas-film parameter has a mean absolute error of 7.80%, with molecular weight as the correlating parameter.

  1. DIANA-microT web server: elucidating microRNA functions through target prediction.

    PubMed

    Maragkakis, M; Reczko, M; Simossis, V A; Alexiou, P; Papadopoulos, G L; Dalamagas, T; Giannopoulos, G; Goumas, G; Koukis, E; Kourtis, K; Vergoulis, T; Koziris, N; Sellis, T; Tsanakas, P; Hatzigeorgiou, A G

    2009-07-01

    Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions through a user-friendly interface with extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches, and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence, and protein databases, and users can search for targeted genes using different nomenclatures or functional features, such as a gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that help in evaluating the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed. DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets, at 66%. The DIANA-microT web server is freely available at www.microrna.gr/microT.

  2. A cosmic-ray-mediated shock in the solar system

    NASA Technical Reports Server (NTRS)

    Eichler, D.

    1981-01-01

    It is pointed out that the flare-induced blast wave of Aug. 4, 1972, the most violent disturbance in the solar wind on record, produced cosmic rays with an efficiency of about 50%. Such a high efficiency is predicted by the self-regulating production model of cosmic-ray origin in shocks. Most interplanetary shocks, according to simple theoretical analysis, are not strong enough to produce cosmic rays efficiently. However, if shock strength is the key parameter governing efficiency, as present interplanetary data suggest, then shocks from supernova blasts, quasar outbursts, and other violent astrophysical phenomena should be extremely efficient sources of cosmic rays.

  3. Analytical approach to the multi-state lasing phenomenon in quantum dot lasers

    NASA Astrophysics Data System (ADS)

    Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.

    2013-03-01

    We introduce an analytical approach to describe the multi-state lasing phenomenon in quantum dot lasers. We show that the key parameter is the hole-to-electron capture rate ratio. If it is lower than a certain critical value, the complete quenching of ground-state lasing takes place at high injection levels. At higher values of the ratio, the model predicts saturation of the ground-state power. This explains the diversity of experimental results and their contradiction to the conventional rate equation model. Recently found enhancement of ground-state lasing in p-doped samples and temperature dependence of the ground-state power are also discussed.

  4. A musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.

  5. Bioadsorber efficiency, design, and performance forecasting for alachlor removal.

    PubMed

    Badriyha, Badri N; Ravindran, Varadarajan; Den, Walter; Pirbazari, Massoud

    2003-10-01

    This study discusses a mathematical modeling and design protocol for bioactive granular activated carbon (GAC) adsorbers employed for purification of drinking water contaminated by chlorinated pesticides, exemplified by alachlor. A thin biofilm model is discussed that incorporates the following phenomenological aspects: film transfer from the bulk fluid to the adsorbent particles, diffusion through the biofilm immobilized on adsorbent surface, adsorption of the contaminant into the adsorbent particle. The modeling approach involved independent laboratory-scale experiments to determine the model input parameters. These experiments included adsorption isotherm studies, adsorption rate studies, and biokinetic studies. Bioactive expanded-bed adsorber experiments were conducted to obtain realistic experimental data for determining the ability of the model for predicting adsorber dynamics under different operating conditions. The model equations were solved using a computationally efficient hybrid numerical technique combining orthogonal collocation and finite difference methods. The model provided accurate predictions of adsorber dynamics for bioactive and non-bioactive scenarios. Sensitivity analyses demonstrated the significance of various model parameters, and focussed on enhancement in certain key parameters to improve the overall process efficiency. Scale-up simulation studies for bioactive and non-bioactive adsorbers provided comparisons between their performances, and illustrated the advantages of bioregeneration for enhancing their effective service life spans. Isolation of microbial species revealed that fungal strains were more efficient than bacterial strains in metabolizing alachlor. Microbial degradation pathways for alachlor were proposed and confirmed by the detection of biotransformation metabolites and byproducts using gas chromatography/mass spectrometry.

  6. Predicting dredging-associated effects to coral reefs in Apra Harbor, Guam - Part 1: Sediment exposure modeling.

    PubMed

    Gailani, Joseph Z; Lackey, Tahirih C; King, David B; Bryant, Duncan; Kim, Sung-Chan; Shafer, Deborah J

    2016-03-01

    Model studies were conducted to investigate the potential coral reef sediment exposure from dredging associated with proposed development of a deepwater wharf in Apra Harbor, Guam. The Particle Tracking Model (PTM) was applied to quantify the exposure of coral reefs to material suspended by the dredging operations at two alternative sites. Key PTM features include the flexible capability of continuous multiple releases of sediment parcels, control of parcel/substrate interaction, and the ability to efficiently track vast numbers of parcels. This flexibility has facilitated simulating the combined effects of sediment released from clamshell dredging and chiseling within Apra Harbor. Because the rate of material released into the water column by some of the processes is not well understood or known a priori, the modeling approach was to bracket parameters within reasonable ranges to produce a suite of potential results from multiple model runs. Sensitivity analysis to model parameters is used to select the appropriate parameter values for bracketing. Data analysis results include mapping the time series and the maximum values of sedimentation, suspended sediment concentration, and deposition rate. Data were used to quantify various exposure processes that affect coral species in Apra Harbor. The goal of this research is to develop a robust methodology for quantifying and bracketing exposure mechanisms to coral (or other receptors) from dredging operations. These exposure values were utilized in an ecological assessment to predict effects (coral reef impacts) from various dredging scenarios. Copyright © 2015. Published by Elsevier Ltd.
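The bracketing approach, running the transport simulation across a plausible range of a poorly known release or settling parameter to obtain a suite of outcomes, can be illustrated with a toy one-dimensional Lagrangian parcel model. Every name and value below is invented for illustration; the actual PTM is far more elaborate:

```python
import random

def track_parcels(n_parcels, steps, u, diff, settle_prob, seed=0):
    """Advect parcels downstream at mean speed u, add random-walk
    diffusion with standard deviation `diff`, and settle each parcel
    with a fixed per-step probability. Returns the final along-stream
    position of every parcel (settled or still suspended at the end)."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n_parcels):
        x = 0.0
        for _ in range(steps):
            x += u + rng.gauss(0.0, diff)
            if rng.random() < settle_prob:
                break                    # parcel settles onto the bed
        positions.append(x)
    return positions

def bracket_settling(settle_probs):
    """Run the model once per bracketed settling value and report the
    mean deposition distance for each, giving a suite of outcomes."""
    return {p: sum(track_parcels(200, 50, 1.0, 0.5, p)) / 200
            for p in settle_probs}
```

A sensitivity scan like `bracket_settling([0.01, 0.05, 0.25])` mimics the paper's strategy of bounding exposure estimates between reasonable extremes rather than committing to a single uncertain parameter value.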

  7. Balancing the Interactions of Ions, Water, and DNA in the Drude Polarizable Force Field

    PubMed Central

    2015-01-01

    Recently we presented a first-generation all-atom Drude polarizable force field for DNA based on the classical Drude oscillator model, focusing on optimization of key dihedral angles followed by extensive validation of the force field parameters. Presently, we describe the procedure for balancing the electrostatic interactions between ions, water, and DNA as required for development of the Drude force field for DNA. The proper balance of these interactions is shown to impact DNA stability and subtler conformational properties, including the conformational equilibrium between the BI and BII states, and the A and B forms of DNA. The parametrization efforts were simultaneously guided by gas-phase quantum mechanics (QM) data on small model compounds and condensed-phase experimental data on the hydration and osmotic properties of biologically relevant ions and their solutions, as well as theoretical predictions for ionic distribution around DNA oligomer. In addition, fine-tuning of the internal base parameters was performed to obtain the final DNA model. Notably, the Drude model is shown to more accurately reproduce counterion condensation theory predictions of DNA charge neutralization by the condensed ions as compared to the CHARMM36 additive DNA force field, indicating an improved physical description of the forces dictating the ionic solvation of DNA due to the explicit treatment of electronic polarizability. In combination with the polarizable DNA force field, the availability of Drude polarizable parameters for proteins, lipids, and carbohydrates will allow for simulation studies of heterogeneous biological systems. PMID:24874104

  8. Rotor Wake Vortex Definition: Initial Evaluation of 3-C PIV Results of the Hart-II Study

    NASA Technical Reports Server (NTRS)

    Burley, Casey L.; Brooks, Thomas F.; van der Wall, Berend; Richard, Hughes; Raffel, Markus; Beaumier, Philippe; Delrieux, Yves; Lim, Joon W.; Yu, Yung H.; Tung, Chee

    2002-01-01

    An initial evaluation is made of extensive three-component (3C) particle image velocimetry (PIV) measurements within the wake across a rotor disk plane. The model is a 40 percent scale BO-105 helicopter main rotor in forward flight simulation. This study is part of the HART II test program conducted in the German-Dutch Wind Tunnel (DNW). Included are wake vortex field measurements over the advancing and retreating sides of the rotor operating at a typical descent landing condition important for impulsive blade-vortex interaction (BVI) noise. Also included are advancing side results for rotor angle variations from climb to steep descent. Using detailed PIV vector maps of the vortex fields, methods of extracting key vortex parameters are examined and a new method was developed and evaluated. An objective processing method, involving a center-of-vorticity criterion and a vorticity 'disk' integration, was used to determine vortex core size, strength, core velocity distribution characteristics, and unsteadiness. These parameters are mapped over the rotor disk and offer unique physical insight for these parameters of importance for rotor noise and vibration prediction.
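On a uniform measurement grid, the center-of-vorticity criterion and the vorticity 'disk' integration reduce to a vorticity-weighted centroid and an area-weighted sum. A minimal sketch under that assumption (not the HART II processing code):

```python
def vortex_parameters(omega, xs, ys, cell_area):
    """Estimate the vortex center (vorticity-weighted centroid) and the
    circulation (disk integral of vorticity) from a gridded field,
    where omega[i][j] is the vorticity sampled at (xs[i], ys[j])."""
    total = 0.0
    cx = cy = 0.0
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            w = omega[i][j]
            total += w              # accumulate vorticity for the integral
            cx += w * x             # vorticity-weighted coordinates
            cy += w * y
    circulation = total * cell_area
    if total == 0.0:
        return (None, None), 0.0    # no vorticity in the window
    return (cx / total, cy / total), circulation
```

The returned circulation approximates the disk integral of vorticity (a measure of vortex strength), and the centroid locates the core center from which core size and velocity-distribution characteristics can then be extracted.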

  9. Recent advances in non-LTE stellar atmosphere models

    NASA Astrophysics Data System (ADS)

    Sander, Andreas A. C.

    2017-11-01

    In the last decades, stellar atmosphere models have become a key tool in understanding massive stars. Applied for spectroscopic analysis, these models provide quantitative information on stellar wind properties as well as fundamental stellar parameters. The intricate non-LTE conditions in stellar winds dictate the development of adequately sophisticated model atmosphere codes. The increase in both computational power and our understanding of physical processes in stellar atmospheres has led to increasingly complex models. As a result, codes emerged that can tackle a wide range of stellar and wind parameters. After a brief overview of the fundamentals of stellar atmosphere modeling, the current stage of clumped and line-blanketed model atmospheres will be discussed. Finally, the path for the next generation of stellar atmosphere models will be outlined. Apart from discussing multi-dimensional approaches, I will emphasize the coupling of hydrodynamics with a sophisticated treatment of the radiative transfer. This next generation of models will be able to predict wind parameters from first principles, which could open new doors for our understanding of the various facets of massive star physics, evolution, and death.

  10. Impact Assessment of Effective Parameters on Drivers' Attention Level to Urban Traffic Signs

    NASA Astrophysics Data System (ADS)

    Kazemi, Mojtaba; Rahimi, Amir Masoud; Roshankhah, Sheida

    2016-03-01

    Traffic signs are among the oldest safety and traffic control devices. Drivers' reaction to installed signs is an important issue that can be studied using statistical models developed for target groups. A total of 527 questionnaires were completed over 45 days, some by drivers passing through two northern cities of Iran and some by e-mail, so the minimum sample size of 384 was met. In addition, a Cronbach's alpha of more than 90 % supports the questionnaire's reliability. Ordinal logistic regression is used for the 5-level answer variables; this relatively novel method predicts the probability of each response level while accounting for the other independent variables. Eighteen parameters covering human, vehicle, and environmental factors were assessed, and five were found to be the most important: number of accidents in the last 5 years, occupation, driving time, number of accidents per day, and driving speed. Age and gender, which are considered key factors in other safety and accident studies, were not found to be significant in this paper. The results could be useful for safety planning programs.
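In a proportional-odds (ordinal logistic) model, the cumulative probability of each ordered answer level is P(Y ≤ j) = logistic(c_j − x·β). The sketch below shows how fitted cutpoints and a linear predictor turn into level probabilities; the numeric values are made up, not estimates from the study:

```python
import math

def ordinal_probs(x_beta, cutpoints):
    """Proportional-odds model: P(Y <= j) = logistic(c_j - x.beta) for
    ascending cutpoints c_j. Returns the probability of each of the
    len(cutpoints) + 1 ordered response levels."""
    def logistic(z):
        return 1.0 / (1.0 + math.exp(-z))
    # Cumulative probabilities, with P(Y <= top level) = 1 appended.
    cum = [logistic(c - x_beta) for c in cutpoints] + [1.0]
    # Per-level probabilities are successive differences of the cumulatives.
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
```

Raising the linear predictor `x_beta` (e.g. a driver profile associated with lower attention) shifts probability mass toward the higher response levels, which is exactly the effect the regression coefficients quantify.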

  11. A modeling framework for integrated harvest and habitat management of North American waterfowl: Case-study of northern pintail metapopulation dynamics

    USGS Publications Warehouse

    Mattsson, Brady J.; Runge, M.C.; Devries, J.H.; Boomer, G.S.; Eadie, J.M.; Haukos, D.A.; Fleskes, J.P.; Koons, D.N.; Thogmartin, W.E.; Clark, R.G.

    2012-01-01

    We developed and evaluated the performance of a metapopulation model enabling managers to examine, for the first time, the consequences of alternative management strategies involving habitat conditions and hunting on both harvest opportunity and carrying capacity (i.e., equilibrium population size in the absence of harvest) for migratory waterfowl at a continental scale. Our focus is on the northern pintail (Anas acuta; hereafter, pintail), which serves as a useful model species to examine the potential for integrating waterfowl harvest and habitat management in North America. We developed submodel structure capturing important processes for pintail populations during breeding, fall migration, winter, and spring migration while encompassing spatial structure representing three core breeding areas and two core nonbreeding areas. A number of continental-scale predictions from our baseline parameterization (e.g., carrying capacity of 5.5 million, equilibrium population size of 2.9 million and harvest rate of 12% at maximum sustained yield [MSY]) were within 10% of those from the pintail harvest strategy under current use by the U.S. Fish and Wildlife Service. To begin investigating the interaction of harvest and habitat management, we examined equilibrium population conditions for pintail at the continental scale across a range of harvest rates while perturbing model parameters to represent: (1) a 10% increase in breeding habitat quality in the Prairie Pothole population (PR); and (2) a 10% increase in nonbreeding habitat quantity along the Gulf Coast (GC). Based on our model and analysis, a greater increase in carrying capacity and sustainable harvest was seen when increasing a proxy for habitat quality in the Prairie Pothole population. This finding and underlying assumptions must be critically evaluated, however, before specific management recommendations can be made.
To make such recommendations, we require (1) extended, refined submodels with additional parameters linking influences of habitat management and environmental conditions to key life-history parameters; (2) a formal sensitivity analysis of the revised model; (3) an integrated population model that incorporates empirical data for estimating key vital rates; and (4) cost estimates for changing these additional parameters through habitat management efforts. We foresee great utility in using an integrated modeling approach to predict habitat and harvest management influences on continental-scale population responses while explicitly considering putative effects of climate change. Such a model could be readily adapted for management of many habitat-limited species.

  12. Surveillance and Control of Malaria Transmission in Thailand using Remotely Sensed Meteorological and Environmental Parameters

    NASA Technical Reports Server (NTRS)

    Kiang, Richard K.; Adimi, Farida; Soika, Valerii; Nigro, Joseph

    2007-01-01

    These slides address the use of remote sensing in a public health application. Specifically, this discussion focuses on the use of remote sensing to detect larval habitats, predict current and future endemicity, and identify key factors that sustain or promote transmission of malaria in a targeted geographic area (Thailand). In the Malaria Modeling and Surveillance Project, which is part of the NASA Applied Sciences Public Health Applications Program, we have been developing techniques to enhance public health's decision capability for malaria risk assessments and controls. The main objectives are: 1) identification of the potential breeding sites for major vector species; 2) implementation of a risk algorithm to predict the occurrence of malaria and its transmission intensity; 3) implementation of a dynamic transmission model to identify the key factors that sustain or intensify malaria transmission. The potential benefits are: 1) increased warning time for public health organizations to respond to malaria outbreaks; 2) optimized utilization of pesticide and chemoprophylaxis; 3) reduced likelihood of pesticide and drug resistance; and 4) reduced damage to the environment. Environmental parameters important to malaria transmission include temperature, relative humidity, precipitation, and vegetation conditions. The NASA Earth science data sets that have been used for malaria surveillance and risk assessment include AVHRR Pathfinder, TRMM, MODIS, NSIPP, and SIESIP. Textural-contextual classifications are used to identify small larval habitats. Neural network methods are used to model malaria cases as a function of the remotely sensed parameters. Hindcastings based on these environmental parameters have shown good agreement with epidemiological records. 
Discrete event simulations are used for modeling the detailed interactions among the vector life cycle, sporogonic cycle and human infection cycle, under the explicit influences of selected extrinsic and intrinsic factors. The output of the model includes the individual infection status and the quantities normally observed in field studies, such as mosquito biting rates, sporozoite infection rates, gametocyte prevalence and incidence. Results are in good agreement with mosquito vector and human malaria data acquired by Coleman et al. over 4.5 years in Kong Mong Tha, a remote village in western Thailand. Application of our models is not restricted to the Greater Mekong Subregion. Our models have been applied to malaria in Indonesia, Korea, and other regions in the world with similar success.

  13. A cooperative strategy for parameter estimation in large scale systems biology models.

    PubMed

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. 
The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
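The cooperation mechanism, parallel searches that periodically share their best solution through a common memory, can be sketched with plain threads and a toy stochastic hill climb. This illustrates only the information-sharing pattern, not the eSS metaheuristic itself:

```python
import random
import threading

def cooperative_search(f, dim, n_threads=4, iters=2000, seed=1):
    """Each thread runs an independent stochastic hill climb on f; a
    shared, lock-protected record of the best solution lets threads
    cooperate by occasionally restarting from the global best."""
    best = {"x": None, "fx": float("inf")}
    lock = threading.Lock()

    def worker(tid):
        rng = random.Random(seed + tid)         # per-thread random stream
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        for k in range(iters):
            cand = [xi + rng.gauss(0.0, 0.3) for xi in x]
            if f(cand) < f(x):                  # accept improving moves
                x = cand
            with lock:
                if f(x) < best["fx"]:           # publish a new global best
                    best["x"], best["fx"] = list(x), f(x)
                elif best["x"] is not None and k % 100 == 0:
                    x = list(best["x"])         # cooperation: adopt shared best

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return best["x"], best["fx"]
```

In CeSS each "thread" is a full eSS instance and the shared information includes reference sets rather than a single point, but the systemic effect is the same: sharing modifies each searcher's trajectory and accelerates convergence.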

  14. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background: Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results: A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel in different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions: The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. 
The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112

  15. The role of grazer predation strategies in the dynamics of consumer-resource based ecological models

    NASA Astrophysics Data System (ADS)

    Cropp, Roger; Moroz, Irene; Norbury, John

    2017-07-01

    We analyse a simple plankton system to provide a heuristic for more complex models such as Dynamic Green Ocean Models (DGOMs). Zooplankton foraging is either by generalist grazers that consume whatever they bump into or specialist grazers that actively seek particular prey. The zooplankton may further be classified as either facultative grazers that can survive on any of their prey or obligate grazers that depend on the presence of specific prey. A key result is that different prey dependencies can result in dramatically different impacts of grazing strategies on system outcomes. The grazing strategy can determine whether a system with obligate grazers will be stable, have regular, predictable cycles or be chaotic. Conversely, whether facultative zooplankton functioned as specialist or generalist grazers makes no qualitative difference to the dynamics of the system. These results demonstrate that the effect of different grazing strategies can be critically dependent on the grazer's dependency on specific prey. Great care must be taken when choosing functional forms for population interactions in DGOMs, particularly in scenarios such as climate change where parameters such as mortality and growth coefficients may change. A robust theoretical framework supporting model development and analysis is key to understanding how such choices can affect model properties and hence predictions.
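The sensitivity to the grazing formulation can be seen even in a minimal two-variable consumer-resource model: swapping the functional response changes the state the system settles toward. All parameter values below are invented for illustration and are far simpler than a DGOM:

```python
def simulate(grazing, steps=5000, dt=0.01, p0=0.5, z0=0.1):
    """Forward-Euler integration of a minimal phytoplankton (p) /
    zooplankton (z) model:
        dp/dt = p*(1 - p) - grazing(p)*z
        dz/dt = 0.3*grazing(p)*z - 0.1*z
    The `grazing` function encodes the predation strategy."""
    p, z = p0, z0
    for _ in range(steps):
        g = grazing(p)
        p, z = (p + dt * (p * (1.0 - p) - g * z),
                z + dt * (0.3 * g * z - 0.1 * z))
        p, z = max(p, 0.0), max(z, 0.0)   # populations cannot go negative
    return p, z

linear_grazing = lambda p: 0.8 * p                   # non-saturating response
saturating_grazing = lambda p: 0.8 * p / (0.5 + p)   # Holling type II response
```

Running `simulate` with each response and comparing the final states is the kind of experiment the analysis above formalizes: the choice of functional form, not just the parameter values, can determine the qualitative behavior.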

  16. Norms and values in sociohydrological models

    NASA Astrophysics Data System (ADS)

    Roobavannan, Mahendran; van Emmerik, Tim H. M.; Elshafei, Yasmina; Kandasamy, Jaya; Sanderson, Matthew R.; Vigneswaran, Saravanamuthu; Pande, Saket; Sivapalan, Murugesu

    2018-02-01

    Sustainable water resources management relies on understanding how societies and water systems coevolve. Many place-based sociohydrology (SH) modeling studies use proxies, such as environmental degradation, to capture key elements of the social component of system dynamics. Parameters of assumed relationships between environmental degradation and the human response to it are usually obtained through calibration. Since these relationships are not yet underpinned by social-science theories, confidence in the predictive power of such place-based sociohydrologic models remains low. The generalizability of SH models therefore requires major advances in incorporating more realistic relationships, underpinned by appropriate hydrological and social-science data and theories. The latter is a critical input, since human culture - especially values and norms arising from it - influences behavior and the consequences of behaviors. This paper reviews a key social-science theory that links cultural factors to environmental decision-making, assesses how to better incorporate social-science insights to enhance SH models, and raises important questions to be addressed in moving forward. This is done in the context of recent progress in sociohydrological studies and the gaps that remain to be filled. The paper concludes with a discussion of challenges and opportunities in terms of generalization of SH models and the use of available data to allow future prediction and model transfer to ungauged basins.

  17. Improving predictions of tropical forest response to climate change through integration of field studies and ecosystem modeling.

    PubMed

    Feng, Xiaohui; Uriarte, María; González, Grizelle; Reed, Sasha; Thompson, Jill; Zimmerman, Jess K; Murphy, Lora

    2018-01-01

    Tropical forests play a critical role in carbon and water cycles at a global scale. Rapid climate change is anticipated in tropical regions over the coming decades and, under a warmer and drier climate, tropical forests are likely to be net sources of carbon rather than sinks. However, our understanding of tropical forest response and feedback to climate change is very limited. Efforts to model climate change impacts on carbon fluxes in tropical forests have not reached a consensus. Here, we use the Ecosystem Demography model (ED2) to predict carbon fluxes of a Puerto Rican tropical forest under realistic climate change scenarios. We parameterized ED2 with species-specific tree physiological data using the Predictive Ecosystem Analyzer workflow and projected the fate of this ecosystem under five future climate scenarios. The model successfully captured interannual variability in the dynamics of this tropical forest. Model predictions closely followed observed values across a wide range of metrics including aboveground biomass, tree diameter growth, tree size class distributions, and leaf area index. Under a future warming and drying climate scenario, the model predicted reductions in carbon storage and tree growth, together with large shifts in forest community composition and structure. Such rapid changes in climate led the forest to transition from a sink to a source of carbon. Growth respiration and root allocation parameters were responsible for the highest fraction of predictive uncertainty in modeled biomass, highlighting the need to target these processes in future data collection. Our study is the first effort to rely on Bayesian model calibration and synthesis to elucidate the key physiological parameters that drive uncertainty in tropical forest responses to climatic change. We propose a new path forward for model-data synthesis that can substantially reduce uncertainty in our ability to model tropical forest responses to future climate.
© 2017 John Wiley & Sons Ltd.

  18. Neural network prediction of carbonate lithofacies from well logs, Big Bow and Sand Arroyo Creek fields, Southwest Kansas

    USGS Publications Warehouse

    Qi, L.; Carr, T.R.

    2006-01-01

    In the Hugoton Embayment of southwestern Kansas, St. Louis Limestone reservoirs have relatively low recovery efficiencies, attributed to the heterogeneous nature of the oolitic deposits. This study establishes quantitative relationships between digital well logs and core description data, and applies these relationships in a probabilistic sense to predict lithofacies in 90 uncored wells across the Big Bow and Sand Arroyo Creek fields. In 10 wells, a single hidden-layer neural network based on digital well logs and core-described lithofacies of the limestone depositional texture was used to train and establish a non-linear relationship between lithofacies assignments from detailed core descriptions and selected log curves. Neural network models were optimized by selecting six predictor variables and automated cross-validation of neural network parameters, and were then used to predict lithofacies on the whole data set of 2023 half-foot intervals from the 10 cored wells with the selected network size of 35 and a damping parameter of 0.01. Comparison of predicted with actual lithofacies yields absolute accuracies of 70.37-90.82%. Allowing predictions to fall within one lithofacies of the actual (incorporating adjoining lithofacies) improves accuracy slightly (93.72%). Digital logs from uncored wells were batch processed to predict lithofacies and the probabilities related to each lithofacies at half-foot resolution corresponding to log units. The results were used to construct interpolated cross-sections, and useful depositional patterns of St. Louis lithofacies were illustrated, e.g., the concentration of oolitic deposits (including lithofacies 5 and 6) along local highs and the relative dominance of quartz-rich carbonate grainstone (lithofacies 1) in zones A and B of the St. Louis Limestone. Neural network techniques are applicable to other complex reservoirs, in which facies geometry and distribution are the key factors controlling heterogeneity and the distribution of rock properties.
Future work involves extension of the neural network to predict reservoir properties, and construction of three-dimensional geo-models. © 2005 Elsevier Ltd. All rights reserved.
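
    The classification step (a single hidden-layer network mapping six log-derived predictors to lithofacies classes) can be sketched from scratch with NumPy. The data below are synthetic stand-ins for log curves and core-described facies, and the small ridge penalty only loosely plays the role of the damping parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-ins: 6 "log curves" and 3 lithofacies classes
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)

def train(X, y, hidden=35, lr=0.5, epochs=500, l2=0.001):
    """One hidden layer, softmax output; `l2` is a small ridge penalty."""
    k = int(y.max()) + 1
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, k)); b2 = np.zeros(k)
    Y = np.zeros((len(y), k)); Y[np.arange(len(y)), y] = 1.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        logits = H @ W2 + b2
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        G = (P - Y) / len(X)                 # softmax cross-entropy gradient
        GH = (G @ W2.T) * (1.0 - H ** 2)     # backprop through tanh
        W2 -= lr * (H.T @ G + l2 * W2); b2 -= lr * G.sum(axis=0)
        W1 -= lr * (X.T @ GH + l2 * W1); b1 -= lr * GH.sum(axis=0)
    return lambda A: np.argmax(np.tanh(A @ W1 + b1) @ W2 + b2, axis=1)

predict = train(X[:200], y[:200])                  # "cored wells"
acc = float((predict(X[200:]) == y[200:]).mean())  # hold-out accuracy
print(acc)
```

    The softmax output gives class probabilities per interval, which is the quantity the batch-processing step above attaches to each half-foot log unit.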

  19. Improving predictions of tropical forest response to climate change through integration of field studies and ecosystem modeling

    USGS Publications Warehouse

    Feng, Xiaohui; Uriarte, María; González, Grizelle; Reed, Sasha C.; Thompson, Jill; Zimmerman, Jess K.; Murphy, Lora

    2018-01-01

    Tropical forests play a critical role in carbon and water cycles at a global scale. Rapid climate change is anticipated in tropical regions over the coming decades and, under a warmer and drier climate, tropical forests are likely to be net sources of carbon rather than sinks. However, our understanding of tropical forest response and feedback to climate change is very limited. Efforts to model climate change impacts on carbon fluxes in tropical forests have not reached a consensus. Here we use the Ecosystem Demography model (ED2) to predict carbon fluxes of a Puerto Rican tropical forest under realistic climate change scenarios. We parameterized ED2 with species-specific tree physiological data using the Predictive Ecosystem Analyzer workflow and projected the fate of this ecosystem under five future climate scenarios. The model successfully captured inter-annual variability in the dynamics of this tropical forest. Model predictions closely followed observed values across a wide range of metrics including above-ground biomass, tree diameter growth, tree size class distributions, and leaf area index. Under a future warming and drying climate scenario, the model predicted reductions in carbon storage and tree growth, together with large shifts in forest community composition and structure. Such rapid changes in climate led the forest to transition from a sink to a source of carbon. Growth respiration and root allocation parameters were responsible for the highest fraction of predictive uncertainty in modeled biomass, highlighting the need to target these processes in future data collection. Our study is the first effort to rely on Bayesian model calibration and synthesis to elucidate the key physiological parameters that drive uncertainty in tropical forest responses to climatic change. We propose a new path forward for model-data synthesis that can substantially reduce uncertainty in our ability to model tropical forest responses to future climate.

  20. Data-Conditioned Distributions of Groundwater Recharge Under Climate Change Scenarios

    NASA Astrophysics Data System (ADS)

    McLaughlin, D.; Ng, G. C.; Entekhabi, D.; Scanlon, B.

    2008-12-01

    Groundwater recharge is likely to be impacted by climate change, with changes in precipitation amounts altering moisture availability and changes in temperature affecting evaporative demand. This could have major implications for sustainable aquifer pumping rates and contaminant transport into groundwater reservoirs in the future, thus making predictions of recharge under climate change very important. Unfortunately, in dry environments where groundwater resources are often most critical, low recharge rates are difficult to resolve due to high sensitivity to modeling and input errors. Some recent studies on climate change and groundwater have considered recharge using a suite of general circulation model (GCM) weather predictions, an obvious and key source of uncertainty. This work extends beyond those efforts by also accounting for uncertainty in other land-surface model inputs in a probabilistic manner. Recharge predictions are made using a range of GCM projections for a rain-fed cotton site in the semi-arid Southern High Plains region of Texas. Results showed that model simulations using a range of unconstrained literature-based parameter values produce highly uncertain and often misleading recharge rates. Thus, distributional recharge predictions are found using soil and vegetation parameters conditioned on current unsaturated zone soil moisture and chloride concentration observations; assimilation of observations is carried out with an ensemble importance sampling method. Our findings show that the predicted distribution shapes can differ for the various GCM conditions considered, underscoring the importance of probabilistic analysis over deterministic simulations. The recharge predictions indicate that the temporal distribution (over seasons and rain events) of climate change will be particularly critical for groundwater impacts. 
Overall, changes in recharge amounts and intensity were often more pronounced than changes in annual precipitation and temperature, thus suggesting high susceptibility of groundwater systems to future climate change. Our approach provides a probabilistic sensitivity analysis of recharge under potential climate changes, which will be critical for future management of water resources.
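
    The conditioning idea (an ensemble importance sampling that weights prior parameter draws by their fit to unsaturated-zone observations) can be sketched with a toy one-parameter example; the forward models below are invented for illustration and do not correspond to the study's actual land-surface model:

```python
import numpy as np

rng = np.random.default_rng(1)

# prior ensemble of an uncertain soil parameter (purely illustrative)
theta = rng.normal(0.0, 1.0, 5000)

# toy forward models: one maps the parameter to the observed quantity
# (standing in for soil moisture / chloride), one to the predicted recharge
simulate_obs = lambda t: 2.0 * t
simulate_recharge = lambda t: 10.0 + 3.0 * t

obs, obs_err = 1.0, 0.5
# importance weights proportional to the Gaussian likelihood of the observation
w = np.exp(-0.5 * ((simulate_obs(theta) - obs) / obs_err) ** 2)
w /= w.sum()

# resample the ensemble by weight: a data-conditioned recharge distribution
idx = rng.choice(len(theta), size=len(theta), p=w)
recharge_post = simulate_recharge(theta[idx])
print(recharge_post.mean(), recharge_post.std())
```

    The resampled ensemble is narrower than the prior, which is exactly the gain the study reports from conditioning on soil moisture and chloride data.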

  1. Emission line models for the lowest mass core-collapse supernovae - I. Case study of a 9 M⊙ one-dimensional neutrino-driven explosion

    NASA Astrophysics Data System (ADS)

    Jerkstrand, A.; Ertl, T.; Janka, H.-T.; Müller, E.; Sukhbold, T.; Woosley, S. E.

    2018-03-01

    A large fraction of core-collapse supernovae (CCSNe), 30-50 per cent, are expected to originate from the low-mass end of progenitors with MZAMS = 8-12 M⊙. However, degeneracy effects make stellar evolution modelling of such stars challenging, and few predictions for their supernova light curves and spectra have been presented. Here, we calculate synthetic nebular spectra of a 9 M⊙ Fe CCSN model exploded with the neutrino mechanism. The model predicts emission lines with FWHM ˜ 1000 km s-1, including signatures from each deep layer in the metal core. We compare this model to the observations of the three subluminous IIP SNe with published nebular spectra: SN 1997D, SN 2005cs and SN 2008bk. The predictions of both line profiles and luminosities are in good agreement with SN 1997D and SN 2008bk. The close fit of a model with no tuning parameters provides strong evidence for an association of these objects with low-mass Fe CCSNe. For SN 2005cs, the interpretation is less clear, as the observational coverage ended before key diagnostic lines from the core had emerged. We perform a parametrized study of the amount of explosively made stable nickel, and find that none of these three SNe show the high 58Ni/56Ni ratio predicted by current models of electron capture SNe (ECSNe) and ECSN-like explosions. Combined with clear detection of lines from O and He shell material, these SNe rather originate from Fe core progenitors. We argue that the outcome of self-consistent explosion simulations of low-mass stars, which gives fits to many key observables, strongly suggests that the class of subluminous Type IIP SNe is the observational counterpart of the lowest mass CCSNe.

  2. Realizing the measurement-device-independent quantum-key-distribution with passive heralded single-photon sources

    PubMed Central

    Wang, Qin; Zhou, Xing-Yu; Guo, Guang-Can

    2016-01-01

    In this paper, we put forward a new approach towards realizing measurement-device-independent quantum key distribution with passive heralded single-photon sources. In this approach, both Alice and Bob prepare the parametric down-conversion source, where the heralding photons are labeled according to different types of clicks from the local detectors, and the heralded ones can correspondingly be marked with different tags at the receiver’s side. One can then obtain four sets of data using only one intensity of pump light by observing the different kinds of clicks of the local detectors. By employing the latest formulae for parameter estimation, we can achieve a very precise prediction of the two-single-photon pulse contribution. Furthermore, by carrying out corresponding numerical simulations, we compare the new method with other practical schemes of measurement-device-independent quantum key distribution. We demonstrate that our newly proposed passive scheme can exhibit remarkable improvement over the conventional three-intensity decoy-state measurement-device-independent quantum key distribution with either heralded single-photon sources or weak coherent sources. Besides, it does not need intensity modulation and can thus diminish the source-error defects existing in several other active decoy-state methods. Therefore, if intensity modulating errors are taken into account, our new method will show an even greater advantage. PMID:27759085

  3. Fluorometric In Situ Monitoring of an Escherichia coli Cell Factory with Cytosolic Expression of Human Glycosyltransferase GalNAcT2: Prospects and Limitations

    PubMed Central

    Schwab, Karen; Lauber, Jennifer; Hesse, Friedemann

    2016-01-01

    The glycosyltransferase HisDapGalNAcT2 is the key protein of the Escherichia coli (E. coli) SHuffle® T7 cell factory which was genetically engineered to allow glycosylation of a protein substrate in vivo. The specific activity of the glycosyltransferase requires time-intensive analytics, but is a critical process parameter. Therefore, it has to be monitored closely. This study evaluates fluorometric in situ monitoring as an option to access this critical process parameter during complex E. coli fermentations. Partial least squares regression (PLS) models were built based on the fluorometric data recorded during the EnPresso® B fermentations. Capable models for the prediction of glucose and acetate concentrations were built for these fermentations with root mean squared errors of prediction (RMSEP) of 0.19 g·L−1 and 0.08 g·L−1, as well as for the prediction of the optical density (RMSEP 0.24). In situ monitoring of soluble enzyme to cell dry weight ratios (RMSEP 5.5 × 10−4 µg w/w) and specific activity of the glycosyltransferase (RMSEP 33.5 pmol·min−1·µg−1) proved to be challenging, since HisDapGalNAcT2 had to be extracted from the cells and purified. However, fluorescence spectroscopy, in combination with PLS modeling, proved to be feasible for in situ monitoring of complex expression systems. PMID:28952595
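
    A minimal PLS1 (NIPALS) sketch of the regression machinery, run on synthetic data in place of the fluorescence spectra:

```python
import numpy as np

def pls1(X, y, ncomp=5):
    """Minimal PLS1 (NIPALS): latent components that maximize covariance
    between X scores and the response y."""
    Xc = X - X.mean(axis=0)
    ym = y.mean(); yc = y - ym
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = Xc.T @ yc; w /= np.linalg.norm(w)      # weight vector
        t = Xc @ w; tt = t @ t                     # scores
        p = Xc.T @ t / tt; q = (yc @ t) / tt       # loadings
        Xc = Xc - np.outer(t, p); yc = yc - q * t  # deflation
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)            # regression coefficients
    return B, ym

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 20))                      # synthetic "spectra"
y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=60)

B, ym = pls1(X, y)
pred = (X - X.mean(axis=0)) @ B + ym
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(rmse)
```

    In practice the RMSEP figures quoted above would be computed on held-out samples; the fit here is in-sample only for brevity.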

  4. Validation of a metabolic network for Saccharomyces cerevisiae using mixed substrate studies.

    PubMed

    Vanrolleghem, P A; de Jong-Gubbels, P; van Gulik, W M; Pronk, J T; van Dijken, J P; Heijnen, S

    1996-01-01

    Setting up a metabolic network model for respiratory growth of Saccharomyces cerevisiae requires the estimation of only two (energetic) stoichiometric parameters: (1) the operational PO ratio and (2) a growth-related maintenance factor k. It is shown, both theoretically and practically, how chemostat cultivations with different mixtures of two substrates allow unique values to be given to these unknowns of the proposed metabolic model. For the yeast and model considered, an effective PO ratio of 1.09 mol of ATP/mol of O (95% confidence interval 1.07-1.11) and a k factor of 0.415 mol of ATP/C-mol of biomass (0.385-0.445) were obtained from biomass substrate yield data on glucose/ethanol mixtures. Symbolic manipulation software proved very valuable in this study as it supported the proof of theoretical identifiability and significantly reduced the necessary computations for parameter estimation. In the transition from 100% glucose to 100% ethanol in the feed, four metabolic regimes occur. Switching between these regimes is determined by cessation of an irreversible reaction and initiation of an alternative reaction. Metabolic network predictions of these metabolic switches compared well with activity measurements of key enzymes. As a second validation of the network, the biomass yield of S. cerevisiae on acetate was also compared to the network prediction. An excellent agreement was found for a network in which acetate transport was modeled with a proton symport, while passive diffusion of acetate gave significantly higher yield predictions.
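
    The identifiability argument, two energetic unknowns pinned down by yields measured across substrate mixtures, can be sketched with a toy model that is linear in (1/PO, k); the coefficients below are invented for illustration and are not the actual network stoichiometry:

```python
import numpy as np

# Toy illustration (not the actual S. cerevisiae network): assume the inverse
# biomass yield is linear in the two unknowns 1/PO and k, with a coefficient
# that depends on the ethanol fraction of the feed.
def inv_yield(frac_ethanol, po, k):
    a = 1.5 + 1.0 * frac_ethanol      # hypothetical O-demand coefficient
    return a / po + 2.0 * k           # hypothetical maintenance contribution

po_true, k_true = 1.09, 0.415         # values reported in the abstract
fracs = np.linspace(0.0, 1.0, 8)      # feed from 100% glucose to 100% ethanol
rng = np.random.default_rng(3)
data = inv_yield(fracs, po_true, k_true) * (1 + 0.01 * rng.normal(size=8))

# the model is linear in (1/PO, k): build the design matrix, solve least squares
A = np.column_stack([1.5 + fracs, 2.0 * np.ones_like(fracs)])
x, *_ = np.linalg.lstsq(A, data, rcond=None)
po_est, k_est = 1.0 / x[0], x[1]
print(po_est, k_est)
```

    Varying the mixture makes the two columns of the design matrix independent, which is what renders both parameters uniquely identifiable from yield data alone.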

  5. Tuning Glass Transition in Polymer Nanocomposites with Functionalized Cellulose Nanocrystals through Nanoconfinement.

    PubMed

    Qin, Xin; Xia, Wenjie; Sinko, Robert; Keten, Sinan

    2015-10-14

    Cellulose nanocrystals (CNCs) exhibit impressive interfacial and mechanical properties that make them promising candidates to be used as fillers within nanocomposites. While glass-transition temperature (Tg) is a common metric for describing thermomechanical properties, its prediction is extremely difficult as it depends on filler surface chemistry, volume fraction, and size. Here, taking CNC-reinforced poly(methyl-methacrylate) (PMMA) nanocomposites as a relevant model system, we present a multiscale analysis that combines atomistic molecular dynamics (MD) surface energy calculations with coarse-grained (CG) simulations of relaxation dynamics near filler-polymer interfaces to predict composite properties. We discover that increasing the volume fraction of CNCs results in nanoconfinement effects that lead to an appreciation of the composite Tg provided that strong interfacial interactions are achieved, as in the case of TEMPO-mediated surface modifications that promote hydrogen bonding. The upper and lower bounds of shifts in Tg are predicted by fully accounting for nanoconfinement and interfacial properties, providing new insight into tuning these aspects in nanocomposite design. Our multiscale, materials-by-design framework is validated by recent experiments and breaks new ground in predicting, without any empirical parameters, key structure-property relationships for nanocomposites.

  6. A low-density cDNA microarray with a unique reference RNA: pattern recognition analysis for IFN efficacy prediction to HCV as a model.

    PubMed

    Daiba, Akito; Inaba, Niro; Ando, Satoshi; Kajiyama, Naoki; Yatsuhashi, Hiroshi; Terasaki, Hiroshi; Ito, Atsushi; Ogasawara, Masanori; Abe, Aki; Yoshioka, Junichi; Hayashida, Kazuhiro; Kaneko, Shuichi; Kohara, Michinori; Ito, Satoru

    2004-03-19

    We have designed and established a low-density (295 genes) cDNA microarray for the prediction of IFN efficacy in hepatitis C patients. To obtain precise and consistent microarray data, we collected a data set from three spots for each gene (mRNA) under three different scanning conditions. We also established an artificial reference RNA representing pseudo-inflammatory conditions from established hepatocyte cell lines supplemented with synthetic RNAs for 48 inflammatory genes. We also developed a novel algorithm that replaces the standard hierarchical-clustering method and allows handling of the large data set with ease. This algorithm utilizes a standard space database (SSDB) as a key scale to calculate the Mahalanobis distance (MD) from the center of gravity in the SSDB. We further utilized a scaled value, sMD (MD divided by a parameter k), as a predictive value. The efficacy prediction of conventional IFN mono-therapy was 100% for non-responder (NR) vs. transient responder (TR)/sustained responder (SR) (P < 0.0005). Finally, we show that this method is acceptable for clinical application.
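
    The distance calculation described (Mahalanobis distance from the center of gravity of the SSDB, scaled by a parameter k to give sMD) can be sketched as follows, with a random matrix standing in for the SSDB:

```python
import numpy as np

rng = np.random.default_rng(4)

# random stand-in for the standard space database (100 samples x 5 genes)
ssdb = rng.normal(size=(100, 5))

mu = ssdb.mean(axis=0)                          # center of gravity of the SSDB
cov_inv = np.linalg.inv(np.cov(ssdb, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def smd(x, k=10.0):
    """Scaled distance MD/k used as a single predictive value; k is a tuning constant."""
    return mahalanobis(x) / k

near, far = mu + 0.1, mu + 5.0   # profiles close to / far from the center
print(smd(near), smd(far))
```

    Unlike Euclidean distance, the Mahalanobis form accounts for the covariance between genes, so correlated expression changes are not double-counted.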

  7. An updated view of global water cycling

    NASA Astrophysics Data System (ADS)

    Houser, P. R.; Schlosser, A.; Lehr, J.

    2009-04-01

    With unprecedented new observation capacities combined with revolutions in modeling, we are poised to make huge advances in water cycle assessment, understanding, and prediction. To realize this goal, we must develop a discipline of prediction and verification through the integration of water and energy cycle observations and models, and verify model predictions against observed phenomena to ensure that research delivers reliable improvements in prediction skill. Accomplishing these goals will require, in part, an accurate accounting of the key reservoirs and fluxes associated with the global water and energy cycle, including their spatial and temporal variability, through integration of all necessary observations and research tools. A brief history of the lineage of the conventional water balance and a summary accounting of all major parameters of the water balance using highly respected secondary sources will be presented. Principally, recently published peer-reviewed papers reporting results of original work involving direct measurements and new data generated by high-tech devices (e.g. satellite / airborne instruments, supercomputers, geophysical tools) will be employed. This work lends credence to the conventional water balance ideas, but also reveals anachronistic scientific concepts/models, questionable underlying data, longstanding oversights and outright errors in the water balance.

  8. Hemodynamics-Driven Deposition of Intraluminal Thrombus in Abdominal Aortic Aneurysms

    PubMed Central

    Di Achille, P.; Tellides, G.; Humphrey, J.D.

    2016-01-01

    Accumulating evidence suggests that intraluminal thrombus plays many roles in the natural history of abdominal aortic aneurysms. There is, therefore, a pressing need for computational models that can describe and predict the initiation and progression of thrombus in aneurysms. In this paper, we introduce a phenomenological metric for thrombus deposition potential and use hemodynamic simulations based on medical images from six patients to identify best-fit values of the two key model parameters. We then introduce a shape optimization method to predict the associated radial growth of the thrombus into the lumen based on the expectation that thrombus initiation will create a thrombogenic surface, which in turn will promote growth until increasing hemodynamically induced frictional forces prevent any further cell or protein deposition. Comparisons between predicted and actual intraluminal thrombus in the six patient-specific aneurysms suggest that this phenomenological description provides a good first estimate of thrombus deposition. We submit further that, because the biologically active region of the thrombus appears to be confined to a thin luminal layer, predictions of morphology alone may be sufficient to inform fluid-solid-growth models of aneurysmal growth and remodeling. PMID:27569676

  9. Uncertainty Quantification of Medium-Term Heat Storage From Short-Term Geophysical Experiments Using Bayesian Evidential Learning

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Klepikova, Maria; Dassargues, Alain; Caers, Jef

    2018-04-01

    In theory, aquifer thermal energy storage (ATES) systems can recover in winter the heat stored in the aquifer during summer to increase the energy efficiency of the system. In practice, the energy efficiency is often lower than expected from simulations due to spatial heterogeneity of hydraulic properties or non-favorable hydrogeological conditions. A proper design of ATES systems should therefore consider the uncertainty of the prediction related to those parameters. We use a novel framework called Bayesian Evidential Learning (BEL) to estimate the heat storage capacity of an alluvial aquifer using a heat tracing experiment. BEL is based on two main stages: pre- and post-field data acquisition. Before data acquisition, Monte Carlo simulations and global sensitivity analysis are used to assess the information content of the data to reduce the uncertainty of the prediction. After data acquisition, prior falsification and machine learning based on the same Monte Carlo simulations are used to directly assess uncertainty on key prediction variables from observations. The result is a full quantification of the posterior distribution of the prediction conditioned to observed data, without any explicit full model inversion. We demonstrate the methodology in field conditions and validate the framework using independent measurements.
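
    The central BEL step, learning a direct data-to-prediction mapping from the Monte Carlo ensemble instead of inverting the model, can be sketched with toy forward models; a polynomial fit stands in for the machine-learning stage, and all quantities are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo stage: sample an uncertain subsurface parameter and run toy
# forward models for the data variable and for the prediction variable
theta = rng.normal(size=2000)
data_sim = np.tanh(theta) + 0.05 * rng.normal(size=2000)  # "tracer" data
pred_sim = 2.0 * theta + 1.0                              # "storage capacity"

# BEL-style step: regress the prediction directly on the data variable
coef = np.polyfit(data_sim, pred_sim, 3)

d_obs = 0.4                                    # hypothetical field observation
pred_mean = float(np.polyval(coef, d_obs))     # conditioned point estimate
resid = pred_sim - np.polyval(coef, data_sim)  # spread not explained by data
print(pred_mean, resid.std())
```

    The residual spread around the learned mapping plays the role of the posterior uncertainty: no explicit inversion of the forward model is ever performed.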

  10. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    NASA Astrophysics Data System (ADS)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    Earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology includes interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon the past earthquakes. Predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using selected seismic parameters. These models include feed-forward neural network, recurrent neural network, random forest, multi layer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated and McNemar's statistical test is applied to observe the statistical significance of computational methodologies. Feed-forward neural network shows statistically significant predictions along with accuracy of 75% and positive predictive value of 78% in context of northern Pakistan.
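
    The McNemar comparison of two prediction models can be sketched as follows (the disagreement counts are hypothetical):

```python
def mcnemar(b, c):
    """Chi-squared statistic with continuity correction for paired predictors;
    b = cases only model A got right, c = cases only model B got right.
    Values above 3.84 indicate p < 0.05 (1 degree of freedom)."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

stat = mcnemar(b=25, c=8)   # hypothetical disagreement counts
print(stat, "significant" if stat > 3.84 else "not significant")
```

    Because both models are evaluated on the same sequence of events, only the discordant pairs carry information, which is why the test ignores cases where the models agree.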

  11. Project 5322 Mid-Term Report: Key Eco-Hydrological Parameters Retrieval And Land Data Assimilation System Development In A Typical Inland River Basin Of China's Arid Region

    NASA Astrophysics Data System (ADS)

    Faivre, R.; Colin, J.; Menenti, M.; Lindenbergh, R.; Van Den Bergh, L.; Yu, H.; Jia, L.; Xin, L.

    2010-10-01

    Improving the understanding and the monitoring of the hydrology of high-elevation regions is of major relevance from both societal and environmental points of view for many Asian countries, in particular in terms of flood and drought, but also in terms of food security in a changing environment. Satellite and airborne remote sensing technologies are of utmost importance for such a challenge. Existing imaging spectro-radiometers, radars, microwave radiometers and backscatter LIDAR provide a very comprehensive suite of measurements over a wide range of wavelengths, time frequencies and spatial resolutions. However, new algorithms need to be devised to convert these radiometric measurements into useful eco-hydrological quantitative parameters for hydrological modeling and water management. The DRAGON II project entitled Key Eco-Hydrological Parameters Retrieval and Land Data Assimilation System Development in a Typical Inland River Basin of China's Arid Region (ID 5322) aims at improving the monitoring, understanding, and predictability of hydrological and ecological processes at catchment scale, and at promoting the applicability of quantitative remote sensing in watershed science. Existing Earth Observation platforms provided by the European Space Agency as well as prototype airborne systems developed in China - ENVISAT/AATSR, ALOS/PRISM and PALSAR, Airborne LIDAR - are used and combined to retrieve advanced land surface physical properties over high elevation arid regions of China. The existing synergies between this project, the CEOP-AEGIS project (FP7) and the WATER project (CAS) provide incentives for innovative studies. The investigations presented in the following report focus on the development of advanced and innovative methodologies and algorithms to monitor both the state and the trend of key eco-hydrological variables: 3D vegetation properties, land surface evaporation, glacier mass balance and drought indicators.

  12. The longevity of lava dome eruptions

    NASA Astrophysics Data System (ADS)

    Wolpert, Robert L.; Ogburn, Sarah E.; Calder, Eliza S.

    2016-02-01

    Understanding the duration of past, ongoing, and future volcanic eruptions is an important scientific goal and a key societal need. We present a new methodology for forecasting the duration of ongoing and future lava dome eruptions based on a database (DomeHaz) recently compiled by the authors. The database includes duration and composition for 177 such eruptions, with "eruption" defined as the period encompassing individual episodes of dome growth along with associated quiescent periods during which extrusion pauses but unrest continues. In a key finding, we show that probability distributions for dome eruption durations are both heavy tailed and composition dependent. We construct objective Bayesian statistical models featuring heavy-tailed Generalized Pareto distributions with composition-specific parameters to make forecasts about the durations of new and ongoing eruptions that depend on both eruption duration to date and composition. Our Bayesian predictive distributions reflect both uncertainty about model parameter values (epistemic uncertainty) and the natural variability of the geologic processes (aleatoric uncertainty). The results are illustrated by presenting likely trajectories for 14 dome-building eruptions ongoing in 2015. Full representation of the uncertainty is presented for two key eruptions, Soufrière Hills Volcano in Montserrat (10-139 years, median 35 years) and Sinabung, Indonesia (1-17 years, median 4 years). Uncertainties are high but, importantly, quantifiable. This work provides for the first time a quantitative and transferable method and rationale on which to base long-term planning decisions for lava dome-forming volcanoes, with wide potential use and transferability to forecasts of other types of eruptions and other adverse events across the geohazard spectrum.
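
    The practical consequence of a heavy-tailed duration model can be illustrated with the Generalized Pareto survival function: conditioning on the duration observed so far lengthens the forecast whenever the shape parameter is positive. The parameter values below are illustrative, not the paper's fitted composition-specific values:

```python
def gpd_survival(t, shape, scale):
    """P(duration > t) under a Generalized Pareto model (shape > 0: heavy tail)."""
    return (1.0 + shape * t / scale) ** (-1.0 / shape)

def remaining_survival(extra, elapsed, shape, scale):
    """P(duration > elapsed + extra | duration > elapsed)."""
    return gpd_survival(elapsed + extra, shape, scale) / gpd_survival(elapsed, shape, scale)

shape, scale = 0.8, 2.0                               # illustrative heavy tail
r_new = remaining_survival(5.0, 0.0, shape, scale)    # eruption just started
r_old = remaining_survival(5.0, 10.0, shape, scale)   # already 10 years in
print(r_new, r_old)
```

    The longer an eruption has already lasted, the more probable a long continuation, which is why the forecasts above depend on eruption duration to date as well as composition.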

  13. OCO-2 Column Carbon Dioxide and Biometric Data Jointly Constrain Parameterization and Projection of a Global Land Model

    NASA Astrophysics Data System (ADS)

    Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III

    2015-12-01

    Uncertainty in predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of estimating model parameters is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of dry air mole fraction XCO2 and solar induced fluorescence (SIF) to independently constrain estimation of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the community land model version 4.5 (CLM4.5). Specifically, we will first develop a high fidelity emulator of CLM4.5 according to the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be effectively used for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters (e.g., maximum carboxylation rate, turnover time and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration) for the carbon cycle. The Bayesian Markov chain Monte Carlo method (MCMC) will be used to assimilate the global databases into the high fidelity emulator to constrain the model parameters, which will be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how sampling frequency and record length could affect model calibration and prediction.
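
    The MCMC calibration step can be sketched with a toy one-pool carbon model standing in for the CLM4.5 emulator (a bare Metropolis sampler; all values are illustrative):

```python
import math, random

random.seed(6)

# toy one-pool carbon model: dC/dt = u - C/tau, steady state C = u * tau
u, c_obs, sigma = 5.0, 60.0, 5.0   # input flux, observed stock, obs. noise

def log_post(tau):
    """Gaussian likelihood of the observed stock plus a weak 1/tau prior."""
    if tau <= 0:
        return -math.inf
    return -0.5 * ((u * tau - c_obs) / sigma) ** 2 - math.log(tau)

tau, samples = 10.0, []
lp = log_post(tau)
for _ in range(20000):
    cand = tau + random.gauss(0.0, 1.0)           # random-walk proposal
    lp_cand = log_post(cand)
    if math.log(random.random()) < lp_cand - lp:  # Metropolis accept/reject
        tau, lp = cand, lp_cand
    samples.append(tau)

post = samples[5000:]                             # discard burn-in
post_mean = sum(post) / len(post)
print(post_mean)
```

    The posterior draws of the turnover-time-like parameter would then be fed back into the full model, which is the role the emulator plays for CLM4.5 in the project described above.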

  14. An easy-to-use tool for the evaluation of leachate production at landfill sites.

    PubMed

    Grugnaletti, Matteo; Pantini, Sara; Verginelli, Iason; Lombardi, Francesco

    2016-09-01

A simulation program for the evaluation of leachate generation at landfill sites is herein presented. The developed tool is based on a water balance model that accounts for all the key processes influencing leachate generation through analytical and empirical equations. After a short description of the tool, different simulations on four Italian landfill sites are shown. The obtained results revealed that when literature values were assumed for the unknown input parameters, the model provided only a rough estimate of the leachate production measured in the field; in some cases, indeed, the deviations between observed and predicted data were significant. Conversely, when a preliminary calibration was performed for some of the unknown input parameters (e.g. initial moisture content of wastes, compression index), the model performance improved significantly in nearly all cases. These results, while showing the potential of a water balance model to estimate leachate production at landfill sites, also highlighted the intrinsic limitation of a deterministic approach in accurately forecasting leachate production over time. Indeed, parameters such as the initial water content of incoming waste and the compression index, which have a great influence on leachate production, may exhibit temporal variation due to seasonal changes in weather conditions (e.g. rainfall, air humidity) as well as seasonal variability in the amount and type of specific waste fractions produced (e.g. yard waste, food, plastics), which makes their prediction quite complicated. In this sense, we believe that a tool such as the one proposed in this work, requiring a limited number of unknown parameters, can be more easily used to quantify the uncertainties. Copyright © 2016 Elsevier Ltd. All rights reserved.
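The water-balance idea behind such a tool can be illustrated with a minimal monthly bookkeeping model: leachate is the water exceeding the waste's field capacity. The field capacity, cell area, and climate series below are invented for illustration and are not the paper's calibrated equations.

```python
# Minimal monthly water balance for one landfill cell (illustrative values only).
def leachate_series(rain_mm, et_mm, area_m2=10000.0,
                    storage_mm=0.0, field_capacity_mm=150.0):
    """Return monthly leachate volumes (m3) and the final stored water (mm)."""
    out = []
    for p, et in zip(rain_mm, et_mm):
        storage_mm += max(p - et, 0.0)           # net infiltration adds to storage
        leach_mm = max(storage_mm - field_capacity_mm, 0.0)
        storage_mm -= leach_mm                   # excess over field capacity drains
        out.append(leach_mm / 1000.0 * area_m2)  # mm depth -> m -> m3 over the cell
    return out, storage_mm

rain = [90, 80, 70, 40, 20, 10, 5, 10, 50, 90, 110, 100]  # mm/month
et   = [20, 25, 40, 60, 80, 90, 95, 85, 55, 30, 20, 15]   # evapotranspiration
vols, final_storage = leachate_series(rain, et)
print(round(sum(vols), 1), round(final_storage, 1))
```

Calibration then amounts to adjusting `field_capacity_mm` (and, in a fuller model, the initial moisture and compression index) until the simulated volumes match the collected leachate.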

  15. Sun-to-Earth simulations of geo-effective Coronal Mass Ejections with EUHFORIA: a heliospheric-magnetospheric model chain approach

    NASA Astrophysics Data System (ADS)

    Scolini, C.; Verbeke, C.; Gopalswamy, N.; Wijsen, N.; Poedts, S.; Mierla, M.; Rodriguez, L.; Pomoell, J.; Cramer, W. D.; Raeder, J.

    2017-12-01

Coronal Mass Ejections (CMEs) and their interplanetary counterparts are considered to be the major space weather drivers. Accurate modelling of their onset and propagation up to 1 AU represents a key issue for more reliable space weather forecasts, and predictions of their actual geo-effectiveness can only be made by coupling global heliospheric models to 3D models describing the terrestrial environment, e.g. magnetospheric and ionospheric codes in the first place. In this work we perform a comprehensive Sun-to-Earth analysis of the July 12, 2012 CME with the aim of testing the space weather predictive capabilities of the newly developed EUHFORIA heliospheric model integrated with the Gibson-Low (GL) flux rope model. In order to achieve this goal, we adopt a model chain approach, using EUHFORIA outputs at Earth as input parameters for the OpenGGCM magnetospheric model. We first reconstruct the CME kinematic parameters by means of single- and multi-spacecraft reconstruction methods based on coronagraphic and heliospheric CME observations. The magnetic field-related parameters of the flux rope are estimated based on imaging observations of the photospheric and low coronal source regions of the eruption. We then simulate the event with EUHFORIA, testing the effect of the different CME kinematic input parameters on simulation results at L1. We compare simulation outputs with in-situ measurements of the interplanetary CME and use them as input for the OpenGGCM model, so as to investigate the magnetospheric response to solar perturbations. From the simulation outputs we extract global geomagnetic activity indices and compare them with actual data records and with results obtained from empirical relations. Finally, we discuss the forecasting capabilities of this kind of approach and its future improvements.

  16. Construction of drug-polymer thermodynamic phase diagrams using Flory-Huggins interaction theory: identifying the relevance of temperature and drug weight fraction to phase separation within solid dispersions.

    PubMed

    Tian, Yiwei; Booth, Jonathan; Meehan, Elizabeth; Jones, David S; Li, Shu; Andrews, Gavin P

    2013-01-07

Amorphous drug-polymer solid dispersions have the potential to enhance the dissolution performance and thus bioavailability of BCS class II drug compounds. The principal drawback of this approach is the limited physical stability of the amorphous drug within the dispersion. Accurate determination of the solubility and miscibility of drug in the polymer matrix is the key to the successful design and development of such systems. In this paper, we propose a novel method, based on Flory-Huggins theory, to predict and compare the solubility and miscibility of drug in polymeric systems. The systems chosen for this study are (1) hydroxypropyl methylcellulose acetate succinate HF grade (HPMCAS-HF)-felodipine (FD) and (2) Soluplus (a graft copolymer of polyvinyl caprolactam-polyvinyl acetate-polyethylene glycol)-FD. Samples containing different drug compositions were mixed, ball milled, and then analyzed by differential scanning calorimetry (DSC). The value of the drug-polymer interaction parameter χ was calculated from the crystalline drug melting depression data and extrapolated to lower temperatures. The interaction parameter χ was also calculated at 25 °C for both systems using the van Krevelen solubility parameter method. The rank order of the interaction parameters of the two systems obtained at this temperature was comparable. Diagrams of drug-polymer temperature-composition and free energy of mixing (ΔG(mix)) were constructed for both systems. The maximum crystalline drug solubility and amorphous drug miscibility may be predicted based on the phase diagrams. Hyper-DSC was used to assess the validity of the constructed phase diagrams by annealing solid dispersions at specific drug loadings. Three different samples for each polymer were selected to represent different regions within the phase diagram.
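The free-energy-of-mixing diagrams mentioned above follow from the standard Flory-Huggins expression, ΔG_mix/RT = φ_d ln φ_d + (φ_p/m) ln φ_p + χ φ_d φ_p. A minimal sketch (the lattice-size ratio `m` and the two χ values are illustrative, not the paper's fitted values):

```python
import numpy as np

# Flory-Huggins free energy of mixing per mole of lattice sites, in units of RT.
# m: ratio of polymer to drug molar volume; chi: drug-polymer interaction parameter.
def dG_mix_RT(phi_drug, chi, m=100.0):
    phi_poly = 1.0 - phi_drug
    return (phi_drug * np.log(phi_drug)              # drug entropy term
            + (phi_poly / m) * np.log(phi_poly)      # polymer entropy term (scaled by m)
            + chi * phi_drug * phi_poly)             # enthalpic interaction term

phi = np.linspace(0.01, 0.99, 99)                    # drug weight/volume fraction grid
miscible = dG_mix_RT(phi, chi=-0.5)   # negative chi: mixing favorable everywhere
demixing = dG_mix_RT(phi, chi=2.0)    # large positive chi: miscibility gap appears

print(miscible.max() < 0, demixing.max() > 0)
```

Evaluating `dG_mix_RT` on a temperature-dependent χ(T) at each composition is what generates the temperature-composition phase diagram described in the abstract.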

  17. VP-Nets: Efficient automatic localization of key brain structures in 3D fetal neurosonography.

    PubMed

    Huang, Ruobing; Xie, Weidi; Alison Noble, J

    2018-04-23

    Three-dimensional (3D) fetal neurosonography is used clinically to detect cerebral abnormalities and to assess growth in the developing brain. However, manual identification of key brain structures in 3D ultrasound images requires expertise and even then is tedious. Inspired by how sonographers view and interact with volumes during real-time clinical scanning, we propose an efficient automatic method to simultaneously localize multiple brain structures in 3D fetal neurosonography. The proposed View-based Projection Networks (VP-Nets) use three view-based Convolutional Neural Networks (CNNs) to simplify 3D localization by directly predicting 2D projections of the key structures onto three anatomical views. While designed for efficient use of data and GPU memory, the proposed VP-Nets allow for full-resolution 3D prediction. We investigated parameters that influence the performance of VP-Nets, e.g. depth and number of feature channels. Moreover, we demonstrate that the model can pinpoint structures in 3D space by visualizing the trained VP-Nets, despite only 2D supervision being provided for each stream during training. For comparison, we implemented two other baseline solutions based on Random Forests and 3D U-Nets. In the reported experiments, VP-Nets consistently outperformed the other methods on localization. To test the importance of the loss function, two identical models were trained with binary cross-entropy and Dice coefficient loss, respectively. Our best VP-Net model achieved a prediction center deviation of 1.8 ± 1.4 mm, size difference of 1.9 ± 1.5 mm, and 3D Intersection Over Union (IOU) of 63.2 ± 14.7% when compared to the ground truth. To make the whole pipeline intervention-free, we also implemented a skull-stripping tool using a 3D CNN, which achieves high segmentation accuracy. As a result, the proposed processing pipeline takes a raw ultrasound brain image as input and outputs a skull-stripped image with five detected key brain structures.
Copyright © 2018 Elsevier B.V. All rights reserved.
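The core trick of reducing 3D localization to three 2D problems can be sketched with plain array projections; the volume, structure position, and sizes below are synthetic stand-ins, not the networks themselves.

```python
import numpy as np

# A synthetic "structure" inside a 3D volume, and its 2D projections onto
# three orthogonal views (the quantity VP-Nets are trained to predict).
vol = np.zeros((32, 32, 32))
vol[10:14, 16:22, 5:9] = 1.0          # illustrative structure mask

axial    = vol.max(axis=0)            # project along the first axis
coronal  = vol.max(axis=1)            # project along the second axis
sagittal = vol.max(axis=2)            # project along the third axis

# Intersecting the 2D projections recovers the 3D bounding box,
# which is how 2D-supervised predictions yield a 3D localization.
ys, zs = np.nonzero(axial)
xs, _  = np.nonzero(coronal)
print(xs.min(), xs.max(), ys.min(), ys.max(), zs.min(), zs.max())
```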

  18. Hyper-resolution hydrological modeling: Completeness of Formulation, Appropriateness of Discretization, and Physical Limits of Predictability

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.

    2017-12-01

    High performance computing and the widespread availability of geospatial physiographic and forcing datasets have enabled consideration of flood impact predictions with longer lead times and more detailed spatial descriptions. We are now considering multi-hour flash flood forecast lead times at the subdivision level in so-called hydroblind regions away from the National Hydrography network. However, the computational demands of such models are high, necessitating a nested simulation approach. Research on hyper-resolution hydrologic modeling over the past three decades has illustrated some fundamental limits on predictability that are simultaneously related to runoff generation mechanism(s), antecedent conditions, rates and total amounts of precipitation, discretization of the model domain, and complexity or completeness of the model formulation. This latter point is an acknowledgement that hydrologic understanding in key areas related to land use, land cover, tillage practices, seasonality, and biological effects has some glaring deficiencies. This presentation reviews what is known about the interacting effects of precipitation amount, model spatial discretization, antecedent conditions, physiographic characteristics, and model formulation completeness for runoff predictions. These interactions define a region in multidimensional forcing, parameter and process space where there are in some cases clear limits on predictability, and in other cases diminished uncertainty.

  19. Prediction of compressibility parameters of the soils using artificial neural network.

    PubMed

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

    The compression index and recompression index are among the important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. As a result of the study, the proposed ANN model is successful in predicting the compression index; however, the predicted recompression index values are not satisfactory compared to the compression index.

  20. Protein pathway activation associated with sustained virologic response in patients with chronic hepatitis C treated with pegylated interferon (PEG-IFN) and ribavirin (RBV).

    PubMed

    Younossi, Zobair M; Limongi, Dolores; Stepanova, Maria; Pierobon, Mariaelena; Afendy, Arian; Mehta, Rohini; Baranova, Ancha; Liotta, Lance; Petricoin, Emanuel

    2011-02-04

    Only half of chronic hepatitis C (CH-C) patients treated with pegylated interferon and ribavirin (PEG-IFN+RBV) achieve sustained virologic response (SVR). In addition to known factors, we postulated that activation of key protein signaling networks in the peripheral blood mononuclear cells (PBMCs) may contribute to SVR due to inherent patient-specific basal immune cell signaling architecture. In this study, we included 92 patients with CH-C. PBMCs were collected while patients were not receiving treatment and used for phosphoprotein-based network profiling. Patients received a full course of PEG-IFN+RBV with an overall SVR of 55%. From the PBMCs, protein lysates were extracted and then used for Reverse Phase Protein Microarray (RPMA) analysis, which quantitatively measured the levels of cytokines and the activation levels of 25 key protein signaling molecules involved in immune cell regulation and interferon alpha signaling. Regression models for predicting SVR were generated by stepwise bidirectional selection. Both clinical-laboratory and RPMA parameters were used as predictor variables. Model accuracies were estimated using 10-fold cross-validation. Our results show that, comparing patients who achieved SVR to those who did not, the phosphorylation levels of 6 proteins [AKT(T308), JAK1(Y1022/1023), p70 S6 Kinase (S371), PKC zeta/lambda(T410/403), TYK2(Y1054/1055), ZAP-70(Y319)/Syk(Y352)] and the overall levels of 6 unmodified proteins [IL2, IL10, IL4, IL5, TNF-alpha, CD5L] were significantly different (P < 0.05). For SVR, a model based on a combination of clinical and proteome parameters was developed, with an AUC = 0.914, sensitivity of 92.16%, and specificity of 85.0%. This model included the following parameters: viral genotype, previous treatment status, BMI, phosphorylated states of STAT2, AKT, LCK, and TYK2 kinases as well as steady-state levels of IL4, IL5, and TNF-alpha.
In conclusion, SVR could be predicted by a combination of clinical, cytokine, and protein signaling activation profiles. Signaling events elucidated in the study may shed some light into molecular mechanisms of response to anti-HCV treatment.
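The modeling recipe above (binary SVR outcome, mixed clinical and proteomic predictors, 10-fold cross-validation) can be sketched as follows. The features, effect sizes, and data are invented stand-ins, not the study's cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 200
# Hypothetical standardized predictors standing in for clinical and
# phosphoprotein features (e.g. genotype, BMI, pSTAT2, pAKT levels).
X = rng.normal(size=(n, 6))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.5 * X[:, 5]   # invented effects
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # SVR yes/no

model = LogisticRegression(max_iter=1000)
acc = cross_val_score(model, X, y, cv=10).mean()   # 10-fold cross-validated accuracy
print(round(acc, 2))
```

In the paper itself, the predictor set was chosen by stepwise bidirectional selection before cross-validation; here the selection step is omitted for brevity.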

  1. Development and deployment of a water-crop-nutrient simulation model embedded in a web application

    NASA Astrophysics Data System (ADS)

    Langella, Giuliano; Basile, Angelo; Coppola, Antonio; Manna, Piero; Orefice, Nadia; Terribile, Fabio

    2016-04-01

    Scientific research on environmental and agricultural issues has long invested substantial effort in the development and application of models for prediction and simulation in spatial and temporal domains. This is done by studying and observing natural processes (e.g. rainfall, water and chemical transport in soils, crop growth) whose spatiotemporal behavior can be reproduced, for instance to predict irrigation and fertilizer requirements and yield quantity and quality. In this work a mechanistic model to simulate water flow and solute transport in the soil-plant-atmosphere continuum is presented. This desktop computer program was written according to the specific requirements of developing web applications. The model addresses the following issues together: (a) water balance; (b) solute transport; (c) crop modelling; (d) GIS interoperability; (e) embeddability in web-based geospatial Decision Support Systems (DSS); (f) adaptability to different scales of application; and (g) ease of code modification. We maintained the desktop character of the program in order to further develop (e.g. integrate novel features) and run the key program modules for testing and validation purposes, but we also developed a middleware component that allows the model to run simulations directly over the web, without software to be installed. The GIS capabilities allow the web application to run simulations in a user-defined region of interest (delimited on a geographical map) without the need to specify the proper combination of model parameters. This is possible because the geospatial database collects information on pedology, climate, crop parameters and soil hydraulic characteristics. Pedological attributes include the spatial distribution of key soil data such as soil profile horizons and texture. Further, hydrological parameters are selected according to knowledge of the spatial distribution of soils. 
The availability and definition of these attributes in the geospatial domain allow simulation outputs at different spatial scales. Two different applications were implemented using the same framework but with different configurations of the software components making up the physically based modelling chain: an irrigation tool simulating water requirements and their timing, and a fertilization tool for optimizing, in particular, mineral nitrogen additions.

  2. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models for physical phenomena that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
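Replacing fixed parameters with probability distributions can be illustrated with simple Monte Carlo propagation. The model function and both distributions below are hypothetical stand-ins, not the shock-associated-noise model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative stand-in model: a dB-like output depending on two uncertain parameters.
def model(a, b):
    return 10.0 * np.log10(a) + b

N = 50000
# Fixed parameter values replaced by probability distributions:
a = rng.lognormal(mean=0.0, sigma=0.2, size=N)   # multiplicative-type uncertainty
b = rng.normal(loc=0.0, scale=1.0, size=N)       # additive-type uncertainty
out = model(a, b)

print(round(out.mean(), 2), round(out.std(), 2))  # output uncertainty estimate
# Crude sensitivity screen: correlation of each input's contribution with the output
print(round(np.corrcoef(10 * np.log10(a), out)[0, 1], 2),
      round(np.corrcoef(b, out)[0, 1], 2))
```

A global sensitivity analysis of the kind mentioned in the abstract (e.g. variance-based Sobol indices) refines this correlation screen, but the sampling machinery is the same.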

  3. Integrating sequence stratigraphy and rock-physics to interpret seismic amplitudes and predict reservoir quality

    NASA Astrophysics Data System (ADS)

    Dutta, Tanima

    This dissertation focuses on the link between seismic amplitudes and reservoir properties. Prediction of reservoir properties, such as sorting, sand/shale ratio, and cement-volume from seismic amplitudes improves by integrating knowledge from multiple disciplines. The key contribution of this dissertation is to improve the prediction of reservoir properties by integrating sequence stratigraphy and rock physics. Sequence stratigraphy has been successfully used for qualitative interpretation of seismic amplitudes to predict reservoir properties. Rock physics modeling allows quantitative interpretation of seismic amplitudes. However, often there is uncertainty about selecting geologically appropriate rock physics model and its input parameters, away from the wells. In the present dissertation, we exploit the predictive power of sequence stratigraphy to extract the spatial trends of sedimentological parameters that control seismic amplitudes. These spatial trends of sedimentological parameters can serve as valuable constraints in rock physics modeling, especially away from the wells. Consequently, rock physics modeling, integrated with the trends from sequence stratigraphy, become useful for interpreting observed seismic amplitudes away from the wells in terms of underlying sedimentological parameters. We illustrate this methodology using a comprehensive dataset from channelized turbidite systems, deposited in minibasin settings in the offshore Equatorial Guinea, West Africa. First, we present a practical recipe for using closed-form expressions of effective medium models to predict seismic velocities in unconsolidated sandstones. We use an effective medium model that combines perfectly rough and smooth grains (the extended Walton model), and use that model to derive coordination number, porosity, and pressure relations for P and S wave velocities from experimental data. 
Our recipe provides reasonable fits to other experimental and borehole data, and specifically improves the predictions of shear wave velocities. In addition, we provide empirical relations on normal compaction depth trends of porosity, velocities, and VP/VS ratio for shale and clean sands in shallow, supra-salt sediments in the Gulf of Mexico. Next, we identify probable spatial trends of sand/shale ratio and sorting as predicted by the conventional sequence stratigraphic model in minibasin settings (spill-and-fill model). These spatial trends are evaluated using well data from offshore West Africa, and the same well data are used to calibrate rock physics models (modified soft-sand model) that provide links between P-impedance and quartz/clay ratio, and sorting. The spatial increase in sand/shale ratio and sorting corresponds to an overall increase in P-impedance, and AVO intercept and gradient. The results are used as a guide to interpret sedimentological parameters from seismic attributes, away from the well locations. We present a quantitative link between carbonate cement and seismic attributes by combining stratigraphic cycles and the rock physics model (modified differential effective medium model). The variation in carbonate cement volume in West Africa can be linked with two distinct stratigraphic cycles: the coarsening-upward cycles and the fining-upward cycles. Cemented sandstones associated with these cycles exhibit distinct signatures on P-impedance vs. porosity and AVO intercept vs. gradient crossplots. These observations are important for assessing reservoir properties in West Africa as well as in other analogous depositional environments. Finally, we investigate the relationship between seismic velocities and time temperature index (TTI) using basin and petroleum system modeling at Rio Muni basin, West Africa. We find that both VP and VS increase exponentially with TTI. 
The results can be applied to predict TTI, and thereby thermal maturity, from observed velocities.

  4. Uncertainty quantification and propagation in dynamic models using ambient vibration measurements, application to a 10-story building

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas

    2018-07-01

    This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference/calibration state can be used to improve the model prediction accuracy at a different structural state, e.g., the damaged structure. Also, the effects of prediction error bias on the uncertainty of the predicted values are studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. 
Moreover, it is shown that including prediction error bias in the updating process instead of commonly-used zero-mean error function can significantly reduce the prediction uncertainties.

  5. Key Questions in Building Defect Prediction Models in Practice

    NASA Astrophysics Data System (ADS)

    Ramler, Rudolf; Wolfmaier, Klaus; Stauder, Erwin; Kossak, Felix; Natschläger, Thomas

    The information about which modules of a future version of a software system are defect-prone is a valuable planning aid for quality managers and testers. Defect prediction promises to indicate these defect-prone modules. However, constructing effective defect prediction models in an industrial setting involves a number of key questions. In this paper we discuss ten key questions identified in the context of establishing defect prediction in a large software development project. Seven consecutive versions of the software system have been used to construct and validate defect prediction models for system test planning. Furthermore, the paper presents initial empirical results from the studied project and, by this means, contributes answers to the identified questions.

  6. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
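The simple Monte Carlo step described above (quantiles of a model output under parameter uncertainty, with and without random error in the dependent variable) can be sketched as follows; the model function, parameter range, and error variance are hypothetical stand-ins for a calibrated ground-water flow model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative nonlinear model output (e.g. a predicted head) vs. one parameter.
def g(K):                      # K: hydraulic-conductivity-like parameter
    return 100.0 / K

N = 100000
K = rng.uniform(1.0, 5.0, N)             # parameter drawn over its extreme range
eps = rng.normal(0.0, 2.0, N)            # random error in the dependent variable

# Confidence interval: uncertainty in parameters only.
conf = np.percentile(g(K), [2.5, 97.5])
# Prediction interval: parameter uncertainty plus random error.
pred = np.percentile(g(K) + eps, [2.5, 97.5])

print(conf.round(1), pred.round(1))
```

As the abstract notes for its hypothetical example, including the random error widens the interval: the prediction bounds land outside the confidence bounds.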

  7. Reasoning and Knowledge Acquisition Framework for 5G Network Analytics

    PubMed Central

    2017-01-01

    Autonomic self-management is a key challenge for next-generation networks. This paper proposes an automated analysis framework to infer knowledge in 5G networks with the aim of understanding the network status and predicting potential situations that might disrupt network operability. The framework is based on the Endsley situational awareness model, and integrates automated capabilities for metrics discovery, pattern recognition, prediction techniques and rule-based reasoning to infer anomalous situations in the current operational context. Those situations should then be mitigated, either proactively or reactively, by a more complex decision-making process. The framework is driven by a use case methodology, where the network administrator is able to customize the knowledge inference rules and operational parameters. The proposal has also been instantiated to prove its adaptability to a real use case. To this end, a reference network traffic dataset was used to identify suspicious patterns and to predict the behavior of the monitored data volume. The preliminary results suggest a good level of accuracy in the inference of anomalous traffic volumes based on a simple configuration. PMID:29065473

  8. [Bioinformatics analysis of mosquito densovirus nonstructural protein NS1].

    PubMed

    Dong, Yun-qiao; Ma, Wen-li; Gu, Jin-bao; Zheng, Wen-ling

    2009-12-01

    To analyze and predict the structure and function of the mosquito densovirus (MDV) nonstructural protein 1 (NS1), different bioinformatics tools (the ExPASy ProtParam tool, ClustalX 1.83, BioEdit, MEGA 3.1, ScanProsite, and MotifScan) were used to comparatively analyze and predict its physicochemical parameters, homology, evolutionary relationships, secondary structure and main functional motifs. MDV NS1 is an unstable hydrophilic protein whose amino acid sequence is highly conserved and which has a relatively close evolutionary distance to infectious hypodermal and hematopoietic necrosis virus (IHHNV). MDV NS1 has a specific domain of the superfamily 3 helicases of small DNA viruses. This domain contains the NTP-binding region with a metal ion-dependent ATPase activity. A rolling-circle replication (RCR) initiation domain was found near the N terminus of the protein, and the protein has the biological function of a single-strand nicking enzyme. The bioinformatics predictions suggest that the MDV NS1 protein plays a key role in viral replication, packaging, and other stages of the viral life cycle.

  9. High Precision Prediction of Functional Sites in Protein Structures

    PubMed Central

    Buturovic, Ljubomir; Wong, Mike; Tang, Grace W.; Altman, Russ B.; Petkovic, Dragutin

    2014-01-01

    We address the problem of assigning biological function to solved protein structures. Computational tools play a critical role in identifying potential active sites and informing screening decisions for further lab analysis. A critical parameter in the practical application of computational methods is the precision, or positive predictive value. Precision measures the level of confidence the user should have in a particular computed functional assignment. Low precision annotations lead to futile laboratory investigations and waste scarce research resources. In this paper we describe an advanced version of the protein function annotation system FEATURE, which achieved 99% precision and average recall of 95% across 20 representative functional sites. The system uses a Support Vector Machine classifier operating on the microenvironment of physicochemical features around an amino acid. We also compared performance of our method with state-of-the-art sequence-level annotator Pfam in terms of precision, recall and localization. To our knowledge, no other functional site annotator has been rigorously evaluated against these key criteria. The software and predictive models are incorporated into the WebFEATURE service at http://feature.stanford.edu/wf4.0-beta. PMID:24632601
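The classifier-plus-precision evaluation described above can be sketched with a small SVM example. The "microenvironment" feature vectors and labels below are synthetic stand-ins for FEATURE's physicochemical descriptors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(5)
n = 600
# Hypothetical feature vectors describing the microenvironment around residues.
X = rng.normal(size=(n, 10))
# Synthetic "functional site" label driven by two of the features.
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

clf = SVC(kernel="rbf").fit(X[:400], y[:400])       # train on the first 400 examples
pred = clf.predict(X[400:])                          # evaluate on held-out examples
prec = precision_score(y[400:], pred)                # positive predictive value
rec = recall_score(y[400:], pred)
print(round(prec, 2), round(rec, 2))
```

Precision is the metric the abstract emphasizes: it measures how often a predicted site is real, which determines whether a wet-lab follow-up is worthwhile.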

  10. Spin-lattice relaxation and the calculation of gain, pump power, and noise temperature in ruby

    NASA Technical Reports Server (NTRS)

    Lyons, J. R.

    1989-01-01

    The use of a quantitative analysis of the dominant source of relaxation in ruby spin systems to make predictions of key maser amplifier parameters is described. The spin-lattice Hamiltonian which describes the interaction of the electron spins with the thermal vibrations of the surrounding lattice is obtained from the literature. Taking into account the vibrational anisotropy of ruby, Fermi's rule is used to calculate the spin transition rates between the maser energy levels. The spin population rate equations are solved for the spin transition relaxation times, and a comparison with previous calculations is made. Predictions of ruby gain, inversion ratio, and noise temperature as a function of physical temperature are made for 8.4-GHz and 32-GHz maser pumping schemes. The theory predicts that ruby oriented at 90 deg will have approximately 50 percent higher gain in dB and slightly lower noise temperature than a 54.7-deg ruby at 32 GHz (assuming pump saturation). A specific calculation relating pump power to inversion ratio is given for a single channel of the 32-GHz reflected wave maser.

  11. On the precision of aero-thermal simulations for TMT

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Thompson, Hugh

    2016-08-01

    Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. These simulations utilize Computational Fluid Dynamics (CFD) to estimate, among other effects, thermal (dome and mirror) seeing as well as wind jitter and blur. As the design matures, guidance obtained from these numerical experiments can influence significant cost-performance trade-offs and even component survivability. The stochastic nature of environmental conditions results in the generation of a large computational solution matrix in order to statistically predict Observatory Performance. Moreover, the relative contribution of selected key subcomponents to IQ increases the parameter space and thus the computational cost, while demanding tighter prediction error bars. The current study presents the strategy followed to minimize prediction time and computational resources, the resulting physical and numerical limitations, and finally the approach taken to mitigate the issues encountered. In particular, the paper describes a mesh-independence study, the effect of interpolation of CFD results on the TMT IQ metric, and an analysis of the sensitivity of IQ to certain important heat sources and geometric features.

  12. Reasoning and Knowledge Acquisition Framework for 5G Network Analytics.

    PubMed

    Sotelo Monge, Marco Antonio; Maestre Vidal, Jorge; García Villalba, Luis Javier

    2017-10-21

    Autonomic self-management is a key challenge for next-generation networks. This paper proposes an automated analysis framework to infer knowledge in 5G networks, with the aim of understanding the network status and predicting potential situations that might disrupt network operability. The framework is based on the Endsley situational awareness model, and integrates automated capabilities for metrics discovery, pattern recognition, prediction techniques and rule-based reasoning to infer anomalous situations in the current operational context. Those situations should then be mitigated, either proactively or reactively, by a more complex decision-making process. The framework is driven by a use case methodology, where the network administrator is able to customize the knowledge inference rules and operational parameters. The proposal has also been instantiated to prove its adaptability to a real use case. To this end, a reference network traffic dataset was used to identify suspicious patterns and to predict the behavior of the monitored data volume. The preliminary results suggest a good level of accuracy in the inference of anomalous traffic volumes based on a simple configuration.

  13. Development and application of a soil organic matter-based soil quality index in mineralized terrane of the Western US

    USGS Publications Warehouse

    Blecker, S.W.; Stillings, Lisa L.; Amacher, M.C.; Ippolito, J.A.; DeCrappeo, N.M.

    2013-01-01

    Soil quality indices provide a means of distilling large amounts of data into a single metric that evaluates the soil’s ability to carry out key ecosystem functions. Such indices were developed primarily in agroecosystems and later in forested ecosystems; here, an index using the relation between soil organic matter and other key soil properties was developed for the more semi-arid systems of the Western US impacted by different geologic mineralization. Three different sites in two different mineralization types, acid sulfate and Cu/Mo porphyry, in California and Nevada were studied. Soil samples were collected from undisturbed soils in both mineralized and nearby unmineralized terrane, as well as from waste rock and tailings. Eight different microbial parameters (carbon substrate utilization, microbial biomass-C, mineralized-C, mineralized-N and enzyme activities of acid phosphatase, alkaline phosphatase, arylsulfatase, and fluorescein diacetate) along with a number of physicochemical parameters were measured. Multiple linear regression models between these parameters and both total organic carbon and total nitrogen were developed, using the ratio of predicted to measured values as the soil quality index. In most instances, pooling unmineralized and mineralized soil data within a given study site resulted in lower model correlations. Enzyme activity was a consistent explanatory variable in the models across the study sites. Though similar indicators were significant in models across different mineralization types, pooling data across sites inhibited model differentiation of undisturbed and disturbed sites. This procedure could be used to monitor recovery of disturbed systems in mineralized terrane and help link scientific and management disciplines.
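    The index described in this record is the ratio of regression-predicted to measured organic carbon. A minimal single-predictor sketch (the paper uses multiple linear regression; the data below are hypothetical):

```python
# Single-predictor ordinary least squares; the paper fits multiple
# linear regression, but the index logic is the same.
def fit_simple_ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical data: an enzyme-activity indicator vs. total organic carbon.
enzyme = [1.0, 2.0, 3.0, 4.0]
toc_measured = [10.0, 14.0, 19.0, 23.0]
a, b = fit_simple_ols(enzyme, toc_measured)

# Soil quality index = predicted / measured; values near 1 indicate the
# soil behaves as "expected" for its organic-matter status.
sqi = [(a + b * x) / y for x, y in zip(enzyme, toc_measured)]
```

    Deviations of the index from 1 then flag soils whose measured organic matter departs from what the microbial and physicochemical indicators predict.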

  14. Mass balance evaluation of polybrominated diphenyl ethers in landfill leachate and potential for transfer from e-waste.

    PubMed

    Danon-Schaffer, Monica N; Mahecha-Botero, Andrés; Grace, John R; Ikonomou, Michael

    2013-09-01

    Previous research on brominated flame retardants (BFRs), including polybrominated diphenyl ethers (PBDEs) has largely focussed on their concentrations in the environment and their adverse effects on human health. This paper explores their transfer from waste streams to water and soil. A comprehensive mass balance model is developed to track polybrominated diphenyl ethers (PBDEs), originating from e-waste and non-e-waste solids leaching from a landfill. Stepwise debromination is assumed to occur in three sub-systems (e-waste, aqueous leachate phase, and non-e-waste solids). Analysis of landfill samples and laboratory results from a solid-liquid contacting chamber are used to estimate model parameters to simulate an urban landfill system, for past and future scenarios. Sensitivity tests on key model parameters were conducted. Lower-brominated BDEs require more time to disappear than high-molecular-weight PBDEs, since debromination takes place in a stepwise manner, according to the simplified reaction scheme. Interphase mass transfer causes the decay pattern to be similar in all three sub-systems. The aqueous phase is predicted to be the first sub-system to eliminate PBDEs if their input to the landfill were to be stopped. The non-e-waste solids would be next, followed by the e-waste sub-system. The model shows that mass transfer is not rate-limiting, but the evolution over time depends on the kinetic degradation parameters. Experimental scatter makes model testing difficult. Nevertheless, the model provides qualitative understanding of the influence of key variables. Copyright © 2013 Elsevier B.V. All rights reserved.
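    Stepwise debromination can be sketched as a first-order reaction chain in which each congener feeds the next-lower one, which is why lower-brominated congeners persist longest. A forward-Euler sketch with hypothetical rate constants (not the paper's calibrated model):

```python
# First-order stepwise debromination chain: congener i loses one bromine
# and becomes congener i+1 (the next-lower one). Rates are hypothetical.
def simulate_chain(c0, rates, dt, steps):
    c = list(c0)
    for _ in range(steps):
        flux = [rates[i] * c[i] for i in range(len(rates))]
        nxt = c[:]
        for i, f in enumerate(flux):
            nxt[i] -= f * dt       # mass leaving congener i
            nxt[i + 1] += f * dt   # mass arriving at the next-lower congener
        c = nxt
    return c

# Three-congener chain, e.g. deca -> nona -> octa (illustrative only);
# the last congener has no outflow, so mass accumulates there.
final = simulate_chain([1.0, 0.0, 0.0], rates=[0.1, 0.05], dt=0.1, steps=1000)
```

    Because each lower congener is replenished by the one above it, its apparent disappearance time exceeds its own kinetic lifetime, matching the qualitative behavior described in the abstract.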

  15. Distribution of trace metals in the vicinity of a wastewater treatment plant on the Potomac River, Washington, DC, USA

    NASA Astrophysics Data System (ADS)

    Smith, J. P.; Muller, A. C.

    2013-05-01

    Predicting the fate and distribution of anthropogenic-sourced trace metals in riverine and estuarine systems is challenging due to multiple and varying source functions and dynamic physicochemical conditions. Between July 2011 and November 2012, sediment and water column samples were collected from over 20 sites in the tidal-fresh Potomac River estuary, Washington, DC near the outfall of the Blue Plains Advanced Wastewater Treatment Plant (BPWTP) for measurement of select trace metals. Field observations of water column parameters (conductivity, temperature, pH, turbidity) were also made at each sampling site. Trace metal concentrations were normalized to the "background" composition of the river determined from control sites in order to investigate the distribution of BPWTP-sourced metals in local Potomac River receiving waters. Temporal differences in the observed distribution of trace metals were attributed to changes in the relative contribution of metals from different sources (wastewater, riverine, other) coupled with differences in the physicochemical conditions of the water column. Results show that normalizing near-source concentrations to the background composition of the water body and also to key environmental parameters can aid in predicting the fate and distribution of anthropogenic-sourced trace metals in dynamic riverine and estuarine systems like the tidal-fresh Potomac River.
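    Normalizing near-source concentrations to a background composition, as described in this record, amounts to computing enrichment ratios. A sketch with hypothetical concentrations (not the study's data):

```python
# Enrichment relative to "background" control-site composition.
# All concentrations below are hypothetical (mg/kg).
background = {"Cu": 20.0, "Zn": 80.0, "Pb": 15.0}
near_outfall = {"Cu": 55.0, "Zn": 120.0, "Pb": 14.0}

# Ratios > 1 suggest a local (e.g., wastewater) source contribution;
# ratios near or below 1 are consistent with riverine background.
enrichment = {m: near_outfall[m] / background[m] for m in background}
```

    In practice a conservative reference element is often used to correct for grain-size and dilution effects before computing such ratios.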

  16. Shape Transformation of the Nuclear Envelope during Closed Mitosis.

    PubMed

    Zhu, Qian; Zheng, Fan; Liu, Allen P; Qian, Jin; Fu, Chuanhai; Lin, Yuan

    2016-11-15

    The nuclear envelope (NE) in lower eukaryotes such as Schizosaccharomyces pombe undergoes large morphology changes during closed mitosis. However, which physical parameters are important in governing the shape evolution of the NE, and how defects in the dividing chromosomes/microtubules are reflected in those parameters, are fundamental questions that remain unresolved. In this study, we show that improper separation of chromosomes in genetically deficient cells leads to membrane tethering or asymmetric division in contrast to the formation of two equal-sized daughter nuclei in wild-type cells. We hypothesize that the poleward force is transmitted to the nuclear membrane through its physical contact with the separated sister chromatids at the two spindle poles. A theoretical model is developed to predict the morphology evolution of the NE where key factors such as the work done by the poleward force and bending and surface energies stored in the membrane have been taken into account. Interestingly, the predicted phase diagram, summarizing the dependence of nuclear shape on the size of the load transmission regions, and the pole-to-pole distance versus surface area relationship all quantitatively agree well with our experimental observations, suggesting that this model captures the essential physics involved in closed mitosis. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  17. A Numerical Multiscale Framework for Modeling Patient-Specific Coronary Artery Bypass Surgeries

    NASA Astrophysics Data System (ADS)

    Ramachandra, Abhay B.; Kahn, Andrew; Marsden, Alison

    2014-11-01

    Coronary artery bypass graft (CABG) surgery is performed to revascularize diseased coronary arteries, using arterial, venous or synthetic grafts. Vein grafts, used in more than 70% of procedures, have failure rates as high as 50% in less than 10 years. Hemodynamics is known to play a key role in the mechano-biological response of vein grafts, but current non-invasive imaging techniques cannot fully characterize the hemodynamic and biomechanical environment. We numerically compute hemodynamics and wall mechanics in patient-specific 3D CABG geometries using stabilized finite element methods. The 3D patient-specific domain is coupled to a 0D lumped parameter circulatory model and parameters are tuned to match patient-specific blood pressures, stroke volumes, heart rates and heuristic flow-split values. We quantify differences in hemodynamics between arterial and venous grafts and discuss possible correlations to graft failure. Extension to a deformable wall approximation will also be discussed. The quantification of wall mechanics and hemodynamics is a necessary step towards coupling continuum models in solid and fluid mechanics with the cellular and sub-cellular responses of grafts, which in turn, should lead to a more accurate prediction of the long term outcome of CABG surgeries, including predictions of growth and remodeling.
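    The 0D lumped-parameter circulatory models coupled to the 3D domain are commonly built from Windkessel-type elements, though this record does not specify the exact circuit. A sketch of the two-element Windkessel (C dP/dt = Q(t) − P/R) with hypothetical, non-physiological parameter values:

```python
# Two-element Windkessel: C * dP/dt = Q(t) - P/R, integrated with
# forward Euler. R, C, and the inflow waveform are hypothetical.
def windkessel(R, C, inflow, p0, dt, t_end):
    p, t, history = p0, 0.0, []
    while t < t_end:
        dpdt = (inflow(t) - p / R) / C
        p += dpdt * dt
        t += dt
        history.append(p)
    return history

# Constant inflow: pressure relaxes toward the steady state P = Q * R.
pressures = windkessel(R=1.0, C=1.0, inflow=lambda t: 5.0, p0=0.0,
                       dt=0.01, t_end=20.0)
```

    In a patient-specific study the parameters of each such element are tuned until the coupled model reproduces the measured pressures, stroke volume, and heart rate, as the abstract describes.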

  18. Cross-validation pitfalls when selecting and assessing regression and classification models.

    PubMed

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
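    The repeated V-fold cross-validation idea can be sketched in a few lines; the "model" here is a trivial mean-only predictor standing in for the tuned classifiers and regressors the paper discusses:

```python
import random

def v_fold_indices(n, v, rng):
    """Shuffle indices 0..n-1 and deal them into v disjoint folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::v] for i in range(v)]

def repeated_cv_mse(y, v=5, repeats=10, seed=0):
    """Repeated V-fold CV error for a mean-only predictor (a stand-in
    for a real model fit on the training folds)."""
    rng = random.Random(seed)
    n, errors = len(y), []
    for _ in range(repeats):                       # new random split each repeat
        for test_idx in v_fold_indices(n, v, rng):
            test = set(test_idx)
            train = [y[i] for i in range(n) if i not in test]
            pred = sum(train) / len(train)         # "fit" on the training folds
            errors.extend((y[i] - pred) ** 2 for i in test_idx)
    return sum(errors) / len(errors)
```

    Repeating with different shuffles (the outer loop) is precisely what exposes the split-to-split variance the authors warn about; nesting a second CV loop inside the training folds for parameter tuning gives the repeated nested procedure they propose for model assessment.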

  19. Aerodynamic simulation strategies assessment for a fenestron in hover flight

    NASA Astrophysics Data System (ADS)

    Marino, M.; Gourdain, N.; Legras, G.; Alfano, D.

    2017-06-01

    The Fenestron® has a crucial antitorque function and its sizing is a key point of helicopter design, especially regarding thrust and power predictions. This paper reports the investigations done on a full-scale Dauphin Fenestron®. The objectives are, first, to evaluate the influence of some numerical parameters on the performance of the Fenestron®, and second, to analyze the flow at a high incidence pitch, for which the rotor blade can experience massive boundary layer separations. Simulations are carried out on a single blade passage model. Several parameters are benchmarked, such as grid quality, numerical schemes, and turbulence modeling. A comparison with test bench measurements is carried out to evaluate the capability of the numerical simulations to predict both global performance (thrust and power) and local flows (static pressure at the shroud and radial profiles inside the vein). The analysis demonstrates the capability of numerical simulations to accurately estimate the global performance of the Fenestron®, including at high pitch angles. However, some discrepancies remain on the local flow, especially in the vicinity of the rotor shroud. A more detailed analysis of the local flow is performed at a blade pitch angle of 35°, with particular interest in the blade tip region.

  20. SIPSim: A Modeling Toolkit to Predict Accuracy and Aid Design of DNA-SIP Experiments.

    PubMed

    Youngblut, Nicholas D; Barnett, Samuel E; Buckley, Daniel H

    2018-01-01

    DNA Stable isotope probing (DNA-SIP) is a powerful method that links identity to function within microbial communities. The combination of DNA-SIP with multiplexed high throughput DNA sequencing enables simultaneous mapping of in situ assimilation dynamics for thousands of microbial taxonomic units. Hence, high throughput sequencing enabled SIP has enormous potential to reveal patterns of carbon and nitrogen exchange within microbial food webs. There are several different methods for analyzing DNA-SIP data and despite the power of SIP experiments, it remains difficult to comprehensively evaluate method accuracy across a wide range of experimental parameters. We have developed a toolset (SIPSim) that simulates DNA-SIP data, and we use this toolset to systematically evaluate different methods for analyzing DNA-SIP data. Specifically, we employ SIPSim to evaluate the effects that key experimental parameters (e.g., level of isotopic enrichment, number of labeled taxa, relative abundance of labeled taxa, community richness, community evenness, and beta-diversity) have on the specificity, sensitivity, and balanced accuracy (defined as the product of specificity and sensitivity) of DNA-SIP analyses. Furthermore, SIPSim can predict analytical accuracy and power as a function of experimental design and community characteristics, and thus should be of great use in the design and interpretation of DNA-SIP experiments.
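    This record defines balanced accuracy as the product (not the more common mean) of specificity and sensitivity. A sketch, with illustrative counts:

```python
# Balanced accuracy as defined in the SIPSim abstract: the *product*
# of sensitivity and specificity.
def balanced_accuracy(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # fraction of truly labeled taxa detected
    specificity = tn / (tn + fp)   # fraction of unlabeled taxa not flagged
    return sensitivity * specificity
```

    The product form punishes an analysis that trades one error type for the other: a method that flags everything (perfect sensitivity, zero specificity) scores 0.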

  1. A Bayesian Hierarchical Model for Glacial Dynamics Based on the Shallow Ice Approximation and its Evaluation Using Analytical Solutions

    NASA Astrophysics Data System (ADS)

    Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur

    2018-03-01

    Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach will allow glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it will also allow for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model constructed using exact analytical solutions of the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful physical parameter posterior distributions and predictions. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction of numerical errors induced through a numerical solution using a statistical model. This error correcting process models numerical errors that accumulate forward in time and spatial variation of numerical errors between the dome, interior, and margin of a glacier.

  2. SIPSim: A Modeling Toolkit to Predict Accuracy and Aid Design of DNA-SIP Experiments

    PubMed Central

    Youngblut, Nicholas D.; Barnett, Samuel E.; Buckley, Daniel H.

    2018-01-01

    DNA Stable isotope probing (DNA-SIP) is a powerful method that links identity to function within microbial communities. The combination of DNA-SIP with multiplexed high throughput DNA sequencing enables simultaneous mapping of in situ assimilation dynamics for thousands of microbial taxonomic units. Hence, high throughput sequencing enabled SIP has enormous potential to reveal patterns of carbon and nitrogen exchange within microbial food webs. There are several different methods for analyzing DNA-SIP data and despite the power of SIP experiments, it remains difficult to comprehensively evaluate method accuracy across a wide range of experimental parameters. We have developed a toolset (SIPSim) that simulates DNA-SIP data, and we use this toolset to systematically evaluate different methods for analyzing DNA-SIP data. Specifically, we employ SIPSim to evaluate the effects that key experimental parameters (e.g., level of isotopic enrichment, number of labeled taxa, relative abundance of labeled taxa, community richness, community evenness, and beta-diversity) have on the specificity, sensitivity, and balanced accuracy (defined as the product of specificity and sensitivity) of DNA-SIP analyses. Furthermore, SIPSim can predict analytical accuracy and power as a function of experimental design and community characteristics, and thus should be of great use in the design and interpretation of DNA-SIP experiments. PMID:29643843

  3. Hepatic transporter drug-drug interactions: an evaluation of approaches and methodologies.

    PubMed

    Williamson, Beth; Riley, Robert J

    2017-12-01

    Drug-drug interactions (DDIs) continue to account for 5% of hospital admissions and therefore remain a major regulatory concern. Effective, quantitative prediction of DDIs will reduce unexpected clinical findings and encourage projects to frontload DDI investigations rather than concentrating on risk management ('manage the baggage') later in drug development. A key challenge in DDI prediction is the discrepancies between reported models. Areas covered: The current synopsis focuses on four recent influential publications on hepatic drug transporter DDIs using static models that tackle interactions with individual transporters and in combination with other drug transporters and metabolising enzymes. These models vary in their assumptions (including input parameters), transparency, reproducibility and complexity. In this review, these facets are compared and contrasted with recommendations made as to their application. Expert opinion: Over the past decade, static models have evolved from simple [I]/k i models to incorporate victim and perpetrator disposition mechanisms including the absorption rate constant, the fraction of the drug metabolised/eliminated and/or clearance concepts. Nonetheless, models that comprise additional parameters and complexity do not necessarily out-perform simpler models with fewer inputs. Further, consideration of the property space to exploit some drug target classes has also highlighted the fine balance required between frontloading and back-loading studies to design out or 'manage the baggage'.
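    The simple [I]/k i static models this record describes predict the fold-change in victim AUC; once the fraction of clearance through the inhibited pathway (fm) is added, the basic mechanistic form is AUCR = 1 / (fm/(1 + [I]/Ki) + (1 − fm)). A sketch with hypothetical inputs:

```python
# Basic static DDI model. With fm = 1 this reduces to the classic
# AUC ratio = 1 + [I]/Ki. All parameter values below are hypothetical.
def auc_ratio(i_u, ki, fm=1.0):
    """i_u: unbound inhibitor concentration; ki: inhibition constant;
    fm: fraction of victim clearance through the inhibited pathway."""
    inhibited_fraction = 1.0 / (1.0 + i_u / ki)
    return 1.0 / (fm * inhibited_fraction + (1.0 - fm))
```

    The fm term illustrates the review's point about model complexity: even complete inhibition of a pathway carrying half the clearance (fm = 0.5) caps the predicted AUC increase at 2-fold, so adding parameters changes predictions in structured, not always large, ways.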

  4. Predicting bi-decadal organic carbon mineralization in northwestern European soils with Rock-Eval pyrolysis

    NASA Astrophysics Data System (ADS)

    Soucemarianadin, Laure; Barré, Pierre; Baudin, François; Chenu, Claire; Houot, Sabine; Kätterer, Thomas; Macdonald, Andy; van Oort, Folkert; Plante, Alain F.; Cécillon, Lauric

    2017-04-01

    The organic carbon reservoir of soils is a key component of climate change, calling for an accurate knowledge of the residence time of soil organic carbon (SOC). Existing proxies of the size of the labile SOC pool such as SOC fractionation or respiration tests are time consuming and unable to consistently predict SOC mineralization over years to decades. Similarly, models of SOC dynamics often yield unrealistic values of the size of SOC kinetic pools. Thermal analysis of bulk soil samples has recently been shown to provide useful and cost-effective information regarding the long-term in-situ decomposition of SOC. Barré et al. (2016) analyzed soil samples from long-term bare fallow sites in northwestern Europe using Rock-Eval 6 pyrolysis (RE6), and demonstrated that persistent SOC is thermally more stable and has less hydrogen-rich compounds (low RE6 HI parameter) than labile SOC. The objective of this study was to predict SOC loss over a 20-year period (i.e. the size of the SOC pool with a residence time lower than 20 years) using RE6 indicators. Thirty-six archive soil samples coming from 4 long-term bare fallow chronosequences (Grignon, France; Rothamsted, Great Britain; Ultuna, Sweden; Versailles, France) were used in this study. For each sample, the value of bi-decadal SOC mineralization was obtained from the observed SOC dynamics of its long-term bare fallow plot (approximated by a spline function). Those values ranged from 0.8 to 14.3 gC·kg-1 (concentration data), representing 8.6 to 50.6% of total SOC (proportion data). All samples were analyzed using RE6 and simple linear regression models were used to predict bi-decadal SOC loss (concentration and proportion data) from 4 RE6 parameters: HI, OI, PC/SOC and T50 CO2 oxidation. HI (the amount of hydrogen-rich effluents formed during the pyrolysis phase of RE6; mgHC.g-1SOC) and OI (the CO2 yield during the pyrolysis phase of RE6; mgCO2.g-1SOC) parameters describe SOC bulk chemistry. 
PC/SOC (the amount of organic C evolved during the pyrolysis phase of RE6; % of total SOC) and T50 CO2 oxidation (the temperature at which 50% of the residual organic C was oxidized to CO2 during the RE6 oxidation phase; °C) parameters represent SOC thermal stability. The RE6 HI parameter yielded the best predictions of bi-decadal SOC mineralization, for both concentration (R2 = 0.75) and proportion (R2 = 0.66) data. PC/SOC and T50 CO2 oxidation parameters also yielded significant regression models with R2 = 0.68 and 0.42 for concentration data and R2 = 0.59 and 0.26 for proportion data, respectively. The OI parameter was not a good predictor of bi-decadal SOC loss, with non-significant regression models. The RE6 thermal analysis method can predict in-situ SOC biogeochemical stability. SOC chemical composition, and to a lesser extent SOC thermal stability, are related to its bi-decadal dynamics. RE6 appears to be a more accurate and convenient proxy of the size of the bi-decadal labile SOC pool than other existing methodologies. Future developments include the validation of these RE6 models of bi-decadal SOC loss on soils from contrasting pedoclimatic conditions. Reference: Barré et al., 2016. Biogeochemistry 130, 1-12

  5. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period, using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one or two weeks out-of-sample predictions for most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. 
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
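    The mean absolute percentage error used above to assess short-term predictions is straightforward to compute. A sketch with hypothetical weekly counts (not the study's data):

```python
# Mean absolute percentage error of out-of-sample predictions.
def mape(actual, predicted):
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical weekly dengue case counts vs. one-week-ahead forecasts.
observed = [100.0, 200.0, 150.0]
forecast = [110.0, 180.0, 150.0]
error_pct = mape(observed, forecast)
```

    Note that MAPE is undefined when an observed count is zero and inflates when counts are small, which is consistent with the authors' observation that errors were lowest in low-volatility periods.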

  6. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. 
Also, the performance of SVR depends on both kernel function type and loss functions used.
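    Vapnik's ε-insensitive loss, whose impact on generalization the study investigates, simply ignores residuals smaller than ε. A sketch contrasting it with the squared loss that underlies ERM-style regression:

```python
# Vapnik's epsilon-insensitive loss vs. squared loss for one residual.
def eps_insensitive(residual, eps=0.1):
    # Errors inside the eps-tube cost nothing; outside, cost grows linearly.
    return max(0.0, abs(residual) - eps)

def squared(residual):
    return residual ** 2

# With few core samples, not penalizing small residuals keeps the model
# from chasing noise, one intuition behind SVR's small-sample advantage.
```

    Setting ε = 0 recovers the least-modulus (absolute-error) loss also examined in the paper.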

  7. Evolution of Antibody-Drug Conjugate Tumor Disposition Model to Predict Preclinical Tumor Pharmacokinetics of Trastuzumab-Emtansine (T-DM1).

    PubMed

    Singh, Aman P; Maass, Katie F; Betts, Alison M; Wittrup, K Dane; Kulkarni, Chethana; King, Lindsay E; Khot, Antari; Shah, Dhaval K

    2016-07-01

    A mathematical model capable of accurately characterizing intracellular disposition of ADCs is essential for a priori predicting unconjugated drug concentrations inside the tumor. Towards this goal, the objectives of this manuscript were to: (1) evolve previously published cellular disposition model of ADC with more intracellular details to characterize the disposition of T-DM1 in different HER2 expressing cell lines, (2) integrate the improved cellular model with the ADC tumor disposition model to a priori predict DM1 concentrations in a preclinical tumor model, and (3) identify prominent pathways and sensitive parameters associated with intracellular activation of ADCs. The cellular disposition model was augmented by incorporating intracellular ADC degradation and passive diffusion of unconjugated drug across tumor cells. Different biomeasures and chemomeasures for T-DM1, quantified in the companion manuscript, were incorporated into the modified model of ADC to characterize in vitro pharmacokinetics of T-DM1 in three HER2+ cell lines. When the cellular model was integrated with the tumor disposition model, the model was able to a priori predict tumor DM1 concentrations in xenograft mice. Pathway analysis suggested different contribution of antigen-mediated and passive diffusion pathways for intracellular unconjugated drug exposure between in vitro and in vivo systems. Global and local sensitivity analyses revealed that non-specific deconjugation and passive diffusion of the drug across tumor cell membrane are key parameters for drug exposure inside a cell. Finally, a systems pharmacokinetic model for intracellular processing of ADCs has been proposed to highlight our current understanding about the determinants of ADC activation inside a cell.

  8. Multi-Dimensional Shallow Landslide Stability Analysis Suitable for Application at the Watershed Scale

    NASA Astrophysics Data System (ADS)

    Milledge, David; Bellugi, Dino; McKean, Jim; Dietrich, William E.

    2013-04-01

    Current practice in regional-scale shallow landslide hazard assessment is to adopt a one-dimensional slope stability representation. Such a representation cannot produce discrete landslides and thus cannot make predictions on landslide size. Furthermore, one-dimensional approaches cannot include lateral effects, which are known to be important in defining instability. Here we derive an alternative model that accounts for lateral resistance by representing the forces acting on each margin of an unstable block of soil. We model boundary frictional resistances using 'at rest' earth pressure on the lateral sides, and 'active' and 'passive' pressure, using the log-spiral method, on the upslope and downslope margins. We represent root reinforcement on each margin assuming that root cohesion declines exponentially with soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are relatively well constrained and find that our model predicts failure at the observed location and predicts that larger or smaller failures conformal to the observed shape are indeed more stable. We use a sensitivity analysis of the model to show that lateral reinforcement sets a minimum landslide size, and that the additional strength at the downslope boundary results in optimal shapes that are longer in the downslope direction. However, reinforcement effects alone cannot fully explain the size or shape distributions of observed landslides, highlighting the importance of the spatial pattern of key parameters (e.g. pore water pressure and soil depth) at the watershed scale. The application of the model at this scale requires an efficient method to find unstable shapes among an exponential number of candidates. In this context, the model allows a more extensive examination of the controls on landslide size, shape and location.

  9. Validation of a Polyimide Foam Model for Use in Transmission Loss Applications

    NASA Technical Reports Server (NTRS)

    Hong, Kwanwoo; Bolton, J. Stuart; Cano, Roberto J.; Weiser, Erik S.; Jensen, Brian J.; Silcox, Rich; Howerton, Brian M.; Maxon, John; Wang, Tongan; Lorenzi, Tyler

    2010-01-01

The work described in this paper was focused on the use of a new polyimide foam in a double wall sound transmission loss application. Recall that polyimide foams are functionally attractive, compared to polyurethane foams, for example, owing to their fire resistance. The foam considered here was found to have a flow resistivity that was too high for conventional acoustical applications, and as a result, it was processed by partial crushing to lower the flow resistivity into an acceptable range. Procedures for measuring the flow resistivity and Young's modulus of the material have been described, as was an inverse characterization procedure for estimating the remaining Biot parameters based on standing wave tube measurements of transmission loss and absorption coefficient. The inverse characterization was performed using a finite element model implementation of the Biot poro-elastic material theory. Those parameters were then used to predict the sound transmission loss of a double panel system lined with polyimide foam, and the predictions were compared with full-scale transmission loss measurements. The agreement between the two was reasonable, especially in the high and low frequency limits; however, it was found that the SEA model resulted in an under-prediction of the transmission loss in the mid-frequency range. Nonetheless, it was concluded that the performance of polyimide foam could be predicted using conventional poro-elastic material models and that polyimide foam may offer an attractive alternative to other double wall linings in certain situations: e.g., when fire resistance is a key issue. Future work will concentrate on reducing the density of the foam to values similar to those used in current aircraft sidewall treatments, and developing procedures to improve the performance of the foam in transmission loss applications.

  10. Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants

    PubMed Central

    Yu, Zheng-Yong; Liu, Qiang; Liu, Yunhan

    2017-01-01

Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi–Socie model is introduced for comparison; its additional material parameter k is found not to be constant, and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings. PMID:28792487
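For context, the Fatemi-Socie parameter discussed above combines the maximum shear strain amplitude on the critical plane with the maximum normal stress on that plane through the additional material constant k. A one-line sketch with illustrative inputs (not TC4/GH4169 data):

```python
def fatemi_socie(delta_gamma_max, sigma_n_max, sigma_yield, k):
    """Fatemi-Socie critical-plane damage parameter:
    FS = (delta_gamma_max / 2) * (1 + k * sigma_n_max / sigma_yield).
    k is exactly the additional material constant that the proposed
    damage parameter dispenses with."""
    return 0.5 * delta_gamma_max * (1.0 + k * sigma_n_max / sigma_yield)
```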

  11. Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants.

    PubMed

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-08-09

Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi-Socie model is introduced for comparison; its additional material parameter k is found not to be constant, and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings.

  12. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
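The idea of a parametric likelihood approximation placed inside a conventional MCMC can be sketched on a toy stochastic simulator standing in for the forest model; all names, settings, and the Gaussian summary-statistic assumption below are illustrative, not the paper's configuration.

```python
import math
import random

random.seed(1)

def simulator(theta, n=50):
    """Toy stochastic simulator: returns one summary statistic
    (the mean of n noisy outputs) for parameter theta."""
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

def log_synthetic_likelihood(theta, observed, reps=30):
    """Parametric (Gaussian) likelihood approximation built from
    repeated stochastic simulations at theta."""
    sims = [simulator(theta) for _ in range(reps)]
    mu = sum(sims) / reps
    var = sum((s - mu) ** 2 for s in sims) / (reps - 1) + 1e-9
    return -0.5 * math.log(2 * math.pi * var) - (observed - mu) ** 2 / (2 * var)

def metropolis(observed, steps=400, step_size=0.3):
    """Conventional Metropolis MCMC with a flat prior, driven by the
    simulation-based likelihood approximation."""
    theta = 0.0
    ll = log_synthetic_likelihood(theta, observed)
    chain = []
    for _ in range(steps):
        proposal = theta + random.gauss(0.0, step_size)
        ll_prop = log_synthetic_likelihood(proposal, observed)
        if math.log(random.random()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        chain.append(theta)
    return chain

# 'virtual field data' generated with a known parameter value, as in the paper
obs = simulator(2.0)
chain = metropolis(obs)
posterior_mean = sum(chain[100:]) / len(chain[100:])
```

The likelihood is re-estimated from fresh simulations at every proposal, so the chain targets an approximation of the posterior; recovering the known generating value from virtual data is the same sanity check the authors describe.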

  13. Prediction of microstructure, residual stress, and deformation in laser powder bed fusion process

    NASA Astrophysics Data System (ADS)

    Yang, Y. P.; Jamshidinia, M.; Boulware, P.; Kelly, S. M.

    2018-05-01

The laser powder bed fusion (L-PBF) process has been investigated extensively as a means of building production parts with complex shapes. Modeling tools that can be used at the part level are essential to allow engineers to fine-tune the shape design and process parameters for additive manufacturing. This study focuses on developing modeling methods to predict microstructure, hardness, residual stress, and deformation in large L-PBF built parts. A transient, sequentially coupled thermal and metallurgical analysis method was developed to predict microstructure and hardness in L-PBF built high-strength, low-alloy steel parts. A moving heat-source model was used in this analysis to accurately predict the temperature history. A kinetics-based model, originally developed to predict microstructure in the heat-affected zone of a welded joint, was extended to predict the microstructure and hardness in an L-PBF build by inputting the predicted temperature history. The tempering effect of subsequently built layers on the current layer's microstructural phases was modeled, which is key to predicting the final hardness correctly. It was also found that the top layers of a built part have higher hardness because they lack this tempering effect. A sequentially coupled thermal and mechanical analysis method was developed to predict residual stress and deformation in an L-PBF built part. It was found that a line-heating model is not suitable for analyzing a large L-PBF built part, whereas a layer-heating method is a viable approach. Experiments were conducted to validate the model predictions.
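A common analytical stand-in for such a moving heat-source model is the quasi-steady Rosenthal point-source solution; the sketch below uses illustrative material values, not the study's calibrated ones, and omits the powder-bed specifics.

```python
import math

def rosenthal_temperature(x, y, z, t, power=200.0, speed=0.8,
                          k=30.0, alpha=8e-6, t0=293.0):
    """Quasi-steady Rosenthal solution for a point heat source moving in +x
    on a semi-infinite solid:
    T = T0 + Q / (2*pi*k*R) * exp(-v * (R + xi) / (2*alpha)),
    with xi the x-distance ahead of the source. power [W], speed [m/s],
    conductivity k [W/m/K], diffusivity alpha [m^2/s]; values illustrative."""
    xi = x - speed * t                        # coordinate relative to source
    r = math.sqrt(xi**2 + y**2 + z**2)
    return t0 + power / (2.0 * math.pi * k * r) * math.exp(
        -speed * (r + xi) / (2.0 * alpha))
```

The solution reproduces the strongly asymmetric temperature field (hot trailing tail, cold region ahead of the source) that makes an accurate temperature history the starting point for the microstructure prediction.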

  14. Prediction of microstructure, residual stress, and deformation in laser powder bed fusion process

    NASA Astrophysics Data System (ADS)

    Yang, Y. P.; Jamshidinia, M.; Boulware, P.; Kelly, S. M.

    2017-12-01

The laser powder bed fusion (L-PBF) process has been investigated extensively as a means of building production parts with complex shapes. Modeling tools that can be used at the part level are essential to allow engineers to fine-tune the shape design and process parameters for additive manufacturing. This study focuses on developing modeling methods to predict microstructure, hardness, residual stress, and deformation in large L-PBF built parts. A transient, sequentially coupled thermal and metallurgical analysis method was developed to predict microstructure and hardness in L-PBF built high-strength, low-alloy steel parts. A moving heat-source model was used in this analysis to accurately predict the temperature history. A kinetics-based model, originally developed to predict microstructure in the heat-affected zone of a welded joint, was extended to predict the microstructure and hardness in an L-PBF build by inputting the predicted temperature history. The tempering effect of subsequently built layers on the current layer's microstructural phases was modeled, which is key to predicting the final hardness correctly. It was also found that the top layers of a built part have higher hardness because they lack this tempering effect. A sequentially coupled thermal and mechanical analysis method was developed to predict residual stress and deformation in an L-PBF built part. It was found that a line-heating model is not suitable for analyzing a large L-PBF built part, whereas a layer-heating method is a viable approach. Experiments were conducted to validate the model predictions.

  15. Middle and long-term prediction of UT1-UTC based on combination of Gray Model and Autoregressive Integrated Moving Average

    NASA Astrophysics Data System (ADS)

    Jia, Song; Xu, Tian-he; Sun, Zhang-zhen; Li, Jia-jing

    2017-02-01

UT1-UTC is an important part of the Earth Orientation Parameters (EOP). High-precision predictions of UT1-UTC play a key role in practical applications such as deep space exploration, spacecraft tracking, and satellite navigation and positioning. In this paper, a new prediction method combining the Gray Model (GM(1, 1)) and the Autoregressive Integrated Moving Average (ARIMA) is developed. The main idea is as follows. Firstly, the UT1-UTC data are preprocessed by removing the leap seconds and the Earth's zonal harmonic tides to obtain UT1R-TAI data. Periodic terms are estimated and removed by least squares to obtain UT2R-TAI. Then the linear terms of the UT2R-TAI data are modeled by GM(1, 1), and the residual terms are modeled by ARIMA. Finally, the UT2R-TAI prediction is performed with the combined GM(1, 1) and ARIMA model, and the UT1-UTC predictions are obtained by adding back the corresponding periodic terms, the leap second correction, and the Earth's zonal harmonic tidal correction. The results show that the proposed model predicts UT1-UTC effectively, with higher middle- and long-term (32 to 360 days) accuracy than LS + AR, LS + MAR, and WLS + MAR.
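The GM(1, 1) step in the pipeline has a compact closed form; below is a minimal sketch on a toy series (the actual pipeline first removes tides, leap seconds and periodic terms, and models only the residuals with ARIMA).

```python
import math

def gm11_forecast(series, horizon):
    """GM(1,1) grey model sketch: accumulate the series (1-AGO), fit the
    whitening equation dx1/dt + a*x1 = b by least squares against the
    mean 'background values' z, then forecast and difference back."""
    n = len(series)
    x1 = [sum(series[:i + 1]) for i in range(n)]           # 1-AGO
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]   # background values
    y = series[1:]
    # least squares for [a, b] in  y[k] = -a * z[k] + b
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy, szy = sum(y), sum(v * w for v, w in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # closed-form response of the whitening equation
        return (series[0] - b / a) * math.exp(-a * k) + b / a

    fitted = [x1_hat(k) for k in range(n + horizon)]
    return [fitted[k] - fitted[k - 1] for k in range(n, n + horizon)]
```

On a near-exponential trend (which is what the linear UT2R-TAI terms resemble over the fit window), the model extrapolates the growth rate from just a handful of points.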

  16. Comparison of two metrological approaches for the prediction of human haptic perception

    NASA Astrophysics Data System (ADS)

    Neumann, Annika; Frank, Daniel; Vondenhoff, Thomas; Schmitt, Robert

    2016-06-01

Haptic perception is regarded as a key component of customer appreciation and acceptance for various products. The prediction of customers' haptic perception is of interest during both the product development and production phases. This paper presents the results of a multivariate analysis between perceived roughness and texture-related surface measurements, to examine whether perceived roughness can be accurately predicted from technical measurements. Studies have shown that standardized measurement parameters, such as the roughness coefficients (e.g. Rz or Ra), do not show a one-dimensional linear correlation with the human perception of roughness. Thus, an alternative measurement method was compared to standard roughness measurements with regard to its capability to predict perceived roughness from technical measurements. To estimate perceived roughness, an experimental study was conducted in which 102 subjects evaluated four sets of 12 different geometrical surface structures with respect to their relative perceived roughness. The two metrological procedures were examined in relation to their capability to predict the perceived roughness reported by the subjects in the study. The standardized measurements of surface roughness were made using a structured-light 3D scanner. As an alternative method, surface-induced vibrations were measured by a finger-like sensor during a robot-controlled traverse over the surface. The presented findings provide a better understanding of the predictability of human haptic perception from technical measurements.
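The kind of check involved, asking how well a standardized parameter tracks subjective ratings, can be sketched with a simplified Ra computation and a Pearson correlation; the profile and rating data here are hypothetical, not the study's measurements.

```python
def ra(profile):
    """Arithmetic mean roughness Ra: mean absolute deviation of a height
    profile from its mean line (simplified; no filtering or cut-off)."""
    m = sum(profile) / len(profile)
    return sum(abs(p - m) for p in profile) / len(profile)

def pearson(x, y):
    """Pearson correlation between technical measurements and ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A low correlation between Ra and the mean subjective ratings is exactly the symptom that motivates the alternative vibration-based measurement.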

  17. Airline Transport Pilot Preferences for Predictive Information

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    1996-01-01

    This experiment assessed certain issues about the usefulness of predictive information: (1) the relative time criticality of failures, (2) the subjective utility of predictive information for different parameters or sensors, and (3) the preferred form and prediction time for displaying predictive information. To address these issues, three separate tasks were administered to 22 airline pilots. As shown by the data, these pilots preferred predictive information on parameters they considered vital to the safety of the flight. These parameters were related to the checklists performed first for alert messages. These pilots also preferred to know whether a parameter was changing abnormally and the time to a certain value being reached. Furthermore, they considered this information most useful during the cruise, the climb, and the descent phases of flight. Lastly, these pilots preferred the information to predict as far ahead as possible.

  18. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decision-making. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. 
As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters (and the predictions that depend on them) arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.
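The distinction between calibration-constrained and unconstrained parameter combinations can be sketched with a first-order (Bayes-linear) update for a single observation. This is a toy analogue of linear error-variance analysis, not PEST's implementation; all sensitivities and variances below are illustrative.

```python
def posterior_prediction_variance(j_obs, j_pred, prior_var, noise_var):
    """First-order uncertainty sketch for one observation and independent
    prior parameter variances: update the parameter covariance with the
    observation, then propagate it onto a prediction through the
    prediction's sensitivities (j_pred)."""
    n = len(j_obs)
    # innovation variance: J C J^T + measurement noise (a scalar here)
    s = sum(j_obs[i] ** 2 * prior_var[i] for i in range(n)) + noise_var
    # posterior covariance: C - C J^T s^-1 J C
    c_post = [[(prior_var[i] if i == k else 0.0)
               - prior_var[i] * j_obs[i] * j_obs[k] * prior_var[k] / s
               for k in range(n)] for i in range(n)]
    # predictive variance: j_pred C_post j_pred^T
    return sum(j_pred[i] * c_post[i][k] * j_pred[k]
               for i in range(n) for k in range(n))
```

With j_obs=[1, 0] and j_pred=[0, 1], the prediction depends only on a parameter the observation cannot see, so calibration leaves its variance at the prior value; with j_pred=[1, 0] the variance drops to prior*noise/(prior+noise).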

  19. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

    The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
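A minimal illustration of parameter non-uniqueness (a toy model, not one from the presentation): when two parameters enter the model only as a product, calibration constrains the product but not the individual values, so different "fits" return wildly different parameter pairs.

```python
def sse(a, b, xs, ys):
    """Sum of squared errors for the toy model y = a * b * x."""
    return sum((a * b * x - y) ** 2 for x, y in zip(xs, ys))

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]                       # generated with a * b = 2
# brute-force 'calibration': every (a, b) on a grid that fits the data
grid = [i * 0.05 for i in range(1, 101)]   # 0.05 .. 5.00
fits = [(a, b) for a in grid for b in grid if sse(a, b, xs, ys) < 1e-9]
# every fit shares the product a * b = 2, but a itself ranges widely
```

Any pragmatic response, fixing one parameter, reparameterizing to the product, or adding data that separates a from b, silently selects one point on this ridge, which is why the choice should be made consciously rather than left to an optimizer's initialization.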

  20. Group contribution methodology based on the statistical associating fluid theory for heteronuclear molecules formed from Mie segments.

    PubMed

    Papaioannou, Vasileios; Lafitte, Thomas; Avendaño, Carlos; Adjiman, Claire S; Jackson, George; Müller, Erich A; Galindo, Amparo

    2014-02-07

    A generalization of the recent version of the statistical associating fluid theory for variable range Mie potentials [Lafitte et al., J. Chem. Phys. 139, 154504 (2013)] is formulated within the framework of a group contribution approach (SAFT-γ Mie). Molecules are represented as comprising distinct functional (chemical) groups based on a fused heteronuclear molecular model, where the interactions between segments are described with the Mie (generalized Lennard-Jonesium) potential of variable attractive and repulsive range. A key feature of the new theory is the accurate description of the monomeric group-group interactions by application of a high-temperature perturbation expansion up to third order. The capabilities of the SAFT-γ Mie approach are exemplified by studying the thermodynamic properties of two chemical families, the n-alkanes and the n-alkyl esters, by developing parameters for the methyl, methylene, and carboxylate functional groups (CH3, CH2, and COO). The approach is shown to describe accurately the fluid-phase behavior of the compounds considered with absolute average deviations of 1.20% and 0.42% for the vapor pressure and saturated liquid density, respectively, which represents a clear improvement over other existing SAFT-based group contribution approaches. The use of Mie potentials to describe the group-group interaction is shown to allow accurate simultaneous descriptions of the fluid-phase behavior and second-order thermodynamic derivative properties of the pure fluids based on a single set of group parameters. Furthermore, the application of the perturbation expansion to third order for the description of the reference monomeric fluid improves the predictions of the theory for the fluid-phase behavior of pure components in the near-critical region. 
The predictive capabilities of the approach stem from its formulation within a group-contribution formalism: predictions of the fluid-phase behavior and thermodynamic derivative properties of compounds not included in the development of group parameters are demonstrated. The performance of the theory is also critically assessed with predictions of the fluid-phase behavior (vapor-liquid and liquid-liquid equilibria) and excess thermodynamic properties of a variety of binary mixtures, including polymer solutions, where very good agreement with the experimental data is seen, without the need for adjustable mixture parameters.
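The Mie pair potential underlying the segment-segment interactions has a simple closed form; a sketch follows (the fitted group parameters themselves, e.g. for CH3, CH2 or COO, are tabulated in the paper and not reproduced here).

```python
def mie_potential(r, sigma, epsilon, lam_r=12.0, lam_a=6.0):
    """Mie (generalized Lennard-Jonesium) pair potential with variable
    repulsive (lam_r) and attractive (lam_a) exponents. The prefactor C
    makes the well depth exactly -epsilon."""
    c = (lam_r / (lam_r - lam_a)) * (lam_r / lam_a) ** (lam_a / (lam_r - lam_a))
    return c * epsilon * ((sigma / r) ** lam_r - (sigma / r) ** lam_a)
```

For lam_r = 12 and lam_a = 6 the prefactor C equals 4 and the familiar Lennard-Jones form is recovered, with its minimum of -epsilon at r = 2**(1/6) * sigma; the freedom to vary both exponents is what lets the theory capture phase behavior and derivative properties simultaneously.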

  1. Mathematical Modeling of Intestinal Iron Absorption Using Genetic Programming

    PubMed Central

    Colins, Andrea; Gerdtzen, Ziomara P.; Nuñez, Marco T.; Salgado, J. Cristian

    2017-01-01

    Iron is a trace metal, key for the development of living organisms. Its absorption process is complex and highly regulated at the transcriptional, translational and systemic levels. Recently, the internalization of the DMT1 transporter has been proposed as an additional regulatory mechanism at the intestinal level, associated to the mucosal block phenomenon. The short-term effect of iron exposure in apical uptake and initial absorption rates was studied in Caco-2 cells at different apical iron concentrations, using both an experimental approach and a mathematical modeling framework. This is the first report of short-term studies for this system. A non-linear behavior in the apical uptake dynamics was observed, which does not follow the classic saturation dynamics of traditional biochemical models. We propose a method for developing mathematical models for complex systems, based on a genetic programming algorithm. The algorithm is aimed at obtaining models with a high predictive capacity, and considers an additional parameter fitting stage and an additional Jackknife stage for estimating the generalization error. We developed a model for the iron uptake system with a higher predictive capacity than classic biochemical models. This was observed both with the apical uptake dataset used for generating the model and with an independent initial rates dataset used to test the predictive capacity of the model. The model obtained is a function of time and the initial apical iron concentration, with a linear component that captures the global tendency of the system, and a non-linear component that can be associated to the movement of DMT1 transporters. The model presented in this paper allows the detailed analysis, interpretation of experimental data, and identification of key relevant components for this complex biological process. 
This general method holds great potential for application to the elucidation of biological mechanisms and their key components in other complex systems. PMID:28072870
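The jackknife stage mentioned above can be sketched generically: refit on all-but-one point and score the held-out point. The `fit_slope`/`predict_slope` callables below are a hypothetical one-parameter stand-in, not the genetic-programming pipeline itself.

```python
import math

def jackknife_rmse(xs, ys, fit, predict):
    """Leave-one-out (jackknife) estimate of generalization error:
    refit on all-but-one point, score the held-out point, repeat."""
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        params = fit(tx, ty)
        errs.append((predict(params, xs[i]) - ys[i]) ** 2)
    return math.sqrt(sum(errs) / len(errs))

def fit_slope(xs, ys):
    """Toy candidate model y = m * x, fitted by least squares."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict_slope(m, x):
    return m * x
```

Scoring candidate models on held-out points rather than on the training fit is what lets the algorithm prefer models with predictive capacity over models that merely interpolate.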

  2. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loève transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
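The compensation mechanism can be reproduced in a two-parameter linear toy model (not the paper's synthetic groundwater model): fixing one parameter at the wrong value forces the estimated one to absorb the misfit, corrupting any prediction that depends on the two parameters individually rather than on their calibrated combination.

```python
def calibrate_with_fixed(xs, ys, fixed_b):
    """Least-squares estimate of a in the model y = (a + b) * x when b is
    fixed (not estimated): only the sum a + b is informed by the data, so
    a compensates for whatever value b is fixed at."""
    num = sum(x * (y - fixed_b * x) for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

xs = [1.0, 2.0, 3.0]
ys = [2.0 * x for x in xs]      # truth: a = 1, b = 1
a_correct = calibrate_with_fixed(xs, ys, fixed_b=1.0)  # -> 1.0
a_biased = calibrate_with_fixed(xs, ys, fixed_b=0.0)   # -> 2.0
# a prediction depending on a - b is unbiased only in the first case:
p_correct = a_correct - 1.0     # 0.0, matches the truth
p_biased = a_biased - 0.0       # 2.0, pure compensation error
```

Both calibrations reproduce the observations perfectly, which is exactly why the error only surfaces in predictions.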

  3. Predicting CYP2C19 Catalytic Parameters for Enantioselective Oxidations Using Artificial Neural Networks and a Chirality Code

    PubMed Central

    Hartman, Jessica H.; Cothren, Steven D.; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A.; Miller, Grover P.

    2013-01-01

Cytochromes P450 (CYP for isoforms) play a central role in biological processes especially metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing distribution of the corresponding catalytic parameters (kcat, Km, and kcat/Km), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (kcat and Km) were more consistent with experimental values than those for catalytic efficiency (kcat/Km). Lastly, for the previously uncharacterized CYP2C19 substrates R- and S-propranolol, the neural networks correctly predicted the enantioselectivity and gave catalytic parameters comparable to those measured in this study. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds. PMID:23673224
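The Box-Cox step used to normalize the skewed catalytic parameters has a simple form; a sketch with a fixed lambda is shown below (in practice lambda is chosen to maximize the normality of the transformed distribution).

```python
import math

def box_cox(values, lam):
    """Box-Cox power transform for strictly positive data (here standing
    in for kcat or Km values before network training):
    (x**lam - 1) / lam, tending to log(x) as lam -> 0."""
    if lam == 0:
        return [math.log(v) for v in values]
    return [(v ** lam - 1.0) / lam for v in values]
```

The log limit is the continuous extension at lam = 0, so a small nonzero lambda gives nearly identical results to the logarithm.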

  4. MINErosion 3: Using measurements on a tilting flume-rainfall simulator facility to predict erosion rates from post-mining landscapes in Central Queensland, Australia

    PubMed Central

    Khalifa, Ashraf M.; Yu, Bofu; Caroll, Chris; Burger, Peter; Mulligan, David

    2018-01-01

Open-cut coal mining in Queensland results in the formation of extensive saline overburden spoil-piles with steep slopes at the angle of repose (approximately 75% or 37°). These spoil-piles are generally found in multiple rows, several kilometers in length and with heights of up to 50 or 60 m above the original landscape. They are highly dispersive and erodible. Legislation demands that these spoil piles be rehabilitated to minimize on-site and off-site discharges of sediment and salt into the surrounding environment. To achieve this, the steep slopes must be reduced, stabilized against erosion, covered with topsoil and re-vegetated. Key design criteria (slope gradient, slope length and vegetation cover) are required for the construction of post-mining landscapes that will result in acceptable erosion rates. A novel, user-friendly hillslope computer model, MINErosion 3.4, was developed that can accurately predict potential erosion rates for field-scale hillslopes using parameters measured with a 3 m laboratory tilting flume-rainfall simulator, or using routinely measured soil physical and chemical properties. This model links MINErosion 2, together with novel consolidation and above-ground vegetation cover factors, to the RUSLE and MUSLE equations to predict mean annual and storm-event erosion rates. The RUSLE-based prediction of mean annual erosion rates allows mine sites to derive the key design criteria of slope length, slope gradient and vegetation cover that lead to acceptable erosion rates. The MUSLE-based prediction of storm-event erosion rates is useful as input into risk analyses of potential erosion damage. MINErosion 3.4 was validated against erosion measured on 20 m field erosion plots established on post-mining landscapes at the Oakey Creek and Curragh coal mines, as well as on 120 and 70 m erosion plots on post-mining landscapes at Kidston Gold Mine. PMID:29590190
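The RUSLE/MUSLE core that MINErosion builds on can be sketched directly; the factor values used below are placeholders, not calibrated Queensland spoil-pile values.

```python
def rusle_annual_soil_loss(r, k, ls, c, p):
    """RUSLE mean annual soil loss A = R * K * LS * C * P (units depend on
    the factor set used; all inputs here are purely illustrative)."""
    return r * k * ls * c * p

def musle_event_soil_loss(runoff_m3, peak_flow_m3s, k, ls, c, p):
    """MUSLE storm-event soil loss (Williams form), replacing the rainfall
    factor R with a runoff energy term:
    Y = 11.8 * (Q * q_p)**0.56 * K * LS * C * P."""
    return 11.8 * (runoff_m3 * peak_flow_m3s) ** 0.56 * k * ls * c * p
```

A design-criteria search then amounts to varying LS (via slope gradient and length) and C (via vegetation cover) until the predicted annual loss A falls below an acceptable target.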

  5. MINErosion 3: Using measurements on a tilting flume-rainfall simulator facility to predict erosion rates from post-mining landscapes in Central Queensland, Australia.

    PubMed

    So, Hwat Bing; Khalifa, Ashraf M; Yu, Bofu; Caroll, Chris; Burger, Peter; Mulligan, David

    2018-01-01

Open-cut coal mining in Queensland results in the formation of extensive saline overburden spoil-piles with steep slopes at the angle of repose (approximately 75% or 37°). These spoil-piles are generally found in multiple rows, several kilometers in length and up to 50 or 60 m above the original landscape. They are highly dispersive and erodible. Legislation demands that these spoil piles be rehabilitated to minimize on-site and off-site discharges of sediment and salt into the surrounding environment. To achieve this, the steep slopes must be reduced, stabilized against erosion, covered with topsoil and re-vegetated. Key design criteria (slope gradient, slope length and vegetation cover) are required for the construction of post-mining landscapes that will result in acceptable erosion rates. A novel user-friendly hillslope computer model, MINErosion 3.4, was developed that can accurately predict potential erosion rates from field-scale hillslopes using parameters measured with a 3 m laboratory tilting flume-rainfall simulator or using routinely measured soil physical and chemical properties. This model links MINErosion 2, together with novel consolidation and above-ground vegetation cover factors, to the RUSLE and MUSLE equations to predict mean annual and storm-event erosion rates. The RUSLE-based prediction of mean annual erosion rates allows minesites to derive the key design criteria of slope length, slope gradient and vegetation cover that would lead to acceptable erosion rates. The MUSLE-based prediction of storm-event erosion rates will be useful as input into risk analysis of potential damage from erosion. MINErosion 3.4 was validated against erosion measured on 20 m field erosion plots established on post-mining landscapes at the Oakey Creek and Curragh coalmines, as well as on 120 and 70 m erosion plots on post-mining landscapes at Kidston Gold Mine.
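At its core, the RUSLE step referenced above is a product of six factors, A = R·K·L·S·C·P. A minimal Python sketch with purely illustrative factor values (none taken from MINErosion or the cited minesites):

```python
def rusle_soil_loss(R, K, L, S, C, P):
    """Mean annual soil loss A (t/ha/yr) as the product of the six RUSLE
    factors: rainfall erosivity R, soil erodibility K, slope length L,
    slope steepness S, cover-management C and support practice P."""
    return R * K * L * S * C * P

# Illustrative (hypothetical) values for a rehabilitated spoil slope:
A = rusle_soil_loss(R=3000.0, K=0.035, L=1.8, S=2.5, C=0.15, P=1.0)
```

MUSLE replaces the rainfall-erosivity term with a storm-event runoff term, but the factor-product structure is the same, which is why a single parameter set can drive both the mean-annual and the storm-event predictions.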

  6. [Value of a sepsis single-disease management system in predicting mortality in patients with sepsis].

    PubMed

    Chen, J; Wang, L H; Ouyang, B; Chen, M Y; Wu, J F; Liu, Y J; Liu, Z M; Guan, X D

    2018-04-03

Objective: To observe the effect of a sepsis single-disease management system on the improvement of sepsis treatment, and its value in predicting mortality in patients with sepsis. Methods: A retrospective study was conducted. Patients with sepsis admitted to the Department of Surgical Intensive Care Unit of Sun Yat-Sen University First Affiliated Hospital from September 22, 2013 to May 5, 2015 were enrolled in this study. The sepsis single-disease management system (Rui Xin clinical data manage system, China data, China) was used to monitor 25 clinical quality parameters, consisting of timeliness, normalization and outcome parameters. Based on whether these quality parameters were completed or not, the clinical practice was evaluated by the system. An unachieved quality parameter was defined as a suspicious parameter, and these suspicious parameters were used to predict mortality with receiver operating characteristic (ROC) curves. Results: A total of 1 220 patients with sepsis were enrolled, including 805 males and 415 females. The mean age was (59±17) years, and the mean Acute Physiology and Chronic Health Evaluation (APACHE II) score was 19±8. The area under the ROC curve of the total number of suspicious parameters for predicting 28-day mortality was 0.70; when the number of suspicious parameters was more than 6, the sensitivity was 68.0% and the specificity was 61.0% for predicting 28-day mortality. In addition, the area under the ROC curve of the number of suspicious outcome parameters for predicting 28-day mortality was 0.89; when the number of suspicious outcome parameters was more than 1, the sensitivity was 88.0% and the specificity was 78.0%. Moreover, the area under the ROC curve of the total number of suspicious parameters for predicting 90-day mortality was 0.73; when the total number of suspicious parameters was more than 7, the sensitivity was 60.0% and the specificity was 74.0%.
Finally, the area under the ROC curve of the number of suspicious outcome parameters for predicting 90-day mortality was 0.92; when the number of suspicious outcome parameters was more than 1, the sensitivity was 88.0% and the specificity was 81.0%. Conclusion: This single-center study suggests that the sepsis single-disease management system can be used to monitor the completion of clinical practice by intensivists in managing sepsis, and that the number of uncompleted quality parameters can be used to predict patient mortality.
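The ROC analysis described above reduces to ranking patients by their suspicious-parameter count. A small sketch of the empirical AUC via the Mann-Whitney pairwise formulation, on invented counts (not the study's data):

```python
def auc_from_counts(scores_pos, scores_neg):
    """Empirical AUC: the probability that a randomly chosen positive case
    (non-survivor) has a higher suspicious-parameter count than a randomly
    chosen negative case (survivor), counting ties as 0.5."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical suspicious-parameter counts for deceased vs surviving patients:
died = [8, 7, 9, 6, 10]
survived = [3, 5, 2, 6, 4]
auc = auc_from_counts(died, survived)
```

With these invented counts the AUC comes out at 0.98; the study's reported AUCs (0.70 to 0.92) arise from the same kind of ranking over its 1 220 patients.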

  7. Temperature-dependent thermal properties of ex vivo liver undergoing thermal ablation.

    PubMed

    Guntur, Sitaramanjaneya Reddy; Lee, Kang Il; Paeng, Dong-Guk; Coleman, Andrew John; Choi, Min Joo

    2013-10-01

Thermotherapy uses a heat source that raises temperatures in the target tissue, and the temperature rise depends on the thermal properties of the tissue. Little is known about the temperature dependence of these properties, which prevents us from accurately predicting the temperature distribution in target tissue undergoing thermotherapy. The present study reports the key thermal parameters (specific heat capacity, thermal conductivity and thermal diffusivity) measured in ex vivo porcine liver while being heated from 20°C to 90°C and then naturally cooled down to 20°C. The study indicates that as the tissue was heated, all the thermal parameters followed asymmetric quasi-parabolic curves with temperature, convex downward with their minima at a turning temperature of 35-40°C. The largest change was observed for thermal conductivity, which decreased by 9.6% from its initial value (at 20°C) to the turning temperature (35°C) and rose by 45% from its minimum (at 35°C) to 90°C. The minima were 3.567 MJ/(m³·K) for specific heat capacity, 0.520 W/(m·K) for thermal conductivity and 0.141 mm²/s for thermal diffusivity. The minimum at the turning temperature was unique, and it is suggested that it be taken as a characteristic value of the thermal parameter of the tissue. On the other hand, the thermal parameters were insensitive to temperature and remained almost unchanged as the tissue cooled down, indicating that their variations with temperature were irreversible. The rate of the irreversible rise at 35°C was 18% in specific heat capacity, 40% in thermal conductivity and 38.3% in thermal diffusivity.
The study indicates that the key thermal parameters of ex vivo porcine liver vary strongly with temperature when heated, following asymmetric quasi-parabolic curves, and therefore a substantial influence on the temperature distribution of tissue undergoing thermotherapy is expected.
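A quadratic fit is one simple way to locate the turning temperature and minimum of such quasi-parabolic curves. A numpy sketch on synthetic conductivity values shaped to mimic the reported curve (minimum 0.520 W/(m·K) near 35°C; the sample points themselves are invented):

```python
import numpy as np

# Hypothetical heating-phase conductivity samples with a minimum at 35 deg C:
T = np.array([20, 25, 30, 35, 40, 50, 60, 70, 80, 90], dtype=float)
k = 0.520 + 2.0e-5 * (T - 35.0) ** 2    # W/(m*K), illustrative values

a, b, c = np.polyfit(T, k, 2)           # k(T) ~= a*T^2 + b*T + c
T_turn = -b / (2.0 * a)                 # vertex = turning temperature
k_min = np.polyval([a, b, c], T_turn)   # conductivity minimum
```

In practice one would fit the heating and cooling branches separately, since the measured variation was irreversible.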

  8. Parameter prediction based on Improved Process neural network and ARMA error compensation in Evaporation Process

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoshan

    2018-01-01

Traditional models of evaporation process parameters suffer from large prediction errors because the errors are continuous and cumulative. To address this, a process neural network forecasting model trained by an adaptive particle swarm algorithm is proposed, with an autoregressive moving average (ARMA) error-correction procedure applied to compensate the network's prediction residuals and improve prediction accuracy. The method was validated on production data from an alumina plant's evaporation process; compared with the traditional model, the new model's prediction accuracy is greatly improved, and it can be used to predict the dynamic composition of sodium aluminate solution during evaporation.
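The compensation scheme, forecast with a base model and then model the residual errors as an ARMA process whose predicted residual is added back, can be sketched in its simplest form, an AR(1) correction. Both the series and the stand-in "network" forecast below are synthetic:

```python
import numpy as np

def ar1_coefficient(resid):
    """Least-squares AR(1) coefficient phi for residuals e_t = phi * e_{t-1}."""
    prev, curr = resid[:-1], resid[1:]
    return float(np.dot(prev, curr) / np.dot(prev, prev))

# Synthetic example: true value = base-model forecast + AR(1)-structured error.
base_forecast = np.linspace(10.0, 12.0, 21)   # stand-in for the NN output
errors = 0.5 * (0.8 ** np.arange(21))         # exactly AR(1), phi = 0.8
observed = base_forecast + errors

resid = observed[:-1] - base_forecast[:-1]    # residuals seen so far
phi = ar1_coefficient(resid)                  # recovers 0.8 here
corrected_next = base_forecast[-1] + phi * resid[-1]

raw_error = abs(observed[-1] - base_forecast[-1])
corrected_error = abs(observed[-1] - corrected_next)
```

Because the residuals here are exactly AR(1), the correction removes the error entirely; with real, noisy residuals it removes only the predictable (correlated) part, which is the point of the ARMA compensation step.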

  9. Rational selection of experimental readout and intervention sites for reducing uncertainties in computational model predictions.

    PubMed

    Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2015-01-16

Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus of model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model of chlorophyll fluorescence induction in the photosynthetic organism D. salina can be rendered identifiable by additional experiments with new readouts.
With data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way to determine the individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions must be considered with care. Such uncertain regions can be used in a rational experimental design to turn initially highly uncertain model predictions into certain ones. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
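As a loose illustration of the idea (the definitions below are simplified stand-ins, not the paper's actual PLS index and PLS entropy), one can score each parameter by the spread of model predictions along its likelihood profile and summarize the distribution of contributions with a Shannon entropy. All sample values are invented:

```python
import numpy as np

def spread_indices(pred_samples):
    """Per-parameter index: for predictions evaluated along each parameter's
    profile-likelihood samples, take the range (max - min) as a crude
    measure of that parameter's contribution to prediction uncertainty."""
    return {p: float(np.max(v) - np.min(v)) for p, v in pred_samples.items()}

def contribution_entropy(indices):
    """Shannon entropy of the normalized contributions: high entropy means
    uncertainty is spread over many parameters, low entropy means one
    parameter dominates."""
    vals = np.array(list(indices.values()), dtype=float)
    p = vals / vals.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Invented prediction samples along three parameters' profiles:
samples = {"k1": np.array([1.00, 1.10, 1.40]),   # wide spread -> dominant
           "k2": np.array([1.20, 1.21, 1.22]),
           "k3": np.array([1.20, 1.25, 1.30])}
idx = spread_indices(samples)
H = contribution_entropy(idx)
```

A dominant parameter (here "k1") is the natural target for a new readout or intervention site, which is exactly how the paper uses its criteria for experimental design.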

  10. Prediction of the explosion effect of aluminized explosives

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Xiang, Cong; Liang, HuiMin

    2013-05-01

We present an approach to predict the explosion load of aluminized explosives by numerical calculation. A code was developed that calculates the species of the detonation products of the high-energy ingredients and of the secondary reaction between aluminum and those products, as well as the detonation velocity, pressure, temperature and JWL parameters of aluminized explosives. Through numerical calculations carried out with this code, the predicted JWL parameters for aluminized explosives were compared with those measured by the cylinder test, and the two agree. Furthermore, the explosion load for the aluminized explosive was calculated by numerical simulation using the JWL equation of state. The explosion loads obtained using the predicted JWL parameters were compared with those using the measured JWL parameters, and the two are almost identical. The numerical results using the predicted JWL parameters show that the explosion air shock wave is strongest when the mass fraction of aluminum powder in the explosive mixture is 30%. This result agrees with the empirical data.
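The JWL equation of state used for the load calculation has the standard form p(V) = A(1 − ω/(R1·V))·e^(−R1·V) + B(1 − ω/(R2·V))·e^(−R2·V) + ω·E0/V, with V the relative volume of the detonation products. A sketch using commonly quoted TNT-like constants for illustration (the paper's fitted parameters for aluminized explosives are not reproduced here):

```python
import math

def jwl_pressure(V, A, B, R1, R2, omega, E0):
    """JWL equation of state: pressure (GPa) of detonation products at
    relative volume V = v/v0, with energy E0 per unit initial volume."""
    return (A * (1.0 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E0 / V)

# Commonly quoted TNT-like JWL constants, pressures in GPa (illustrative):
tnt = dict(A=371.2, B=3.231, R1=4.15, R2=0.95, omega=0.30, E0=7.0)
p_cj_region = jwl_pressure(1.0, **tnt)   # near the initial volume
p_expanded = jwl_pressure(4.0, **tnt)    # products expanded fourfold
```

The two exponential terms dominate at small V and the ω·E0/V term at large V, which is why fitted A, B, R1, R2, ω determine how the blast load decays as the products expand.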

  11. Winter wheat yield estimation of remote sensing research based on WOFOST crop model and leaf area index assimilation

    NASA Astrophysics Data System (ADS)

    Chen, Yanling; Gong, Adu; Li, Jing; Wang, Jingmei

    2017-04-01

Accurate crop growth monitoring and yield prediction are important for improving the sustainable development of agriculture and ensuring national food security. Remote sensing observation and crop growth simulation models are two technologies with great application potential in crop growth monitoring and yield forecasting, but each has limitations, in mechanism or in regional application. Remote sensing information cannot reveal crop growth and development, the inner mechanism of yield formation, or the effect of environmental meteorological conditions, while crop growth simulation models face difficulties in data acquisition and parameterization when moving from single-point to regional application. To exploit the advantages of both technologies, the coupling of remote sensing information with crop growth simulation models has been studied; filtering and optimizing model parameters are key to regional yield estimation by assimilating remote sensing data into a crop model. Winter wheat in GaoCheng was selected as the experimental object. Essential data were collected, including biochemical data, farmland environmental data and meteorological data for several critical growth periods, together with imagery from the environmental mitigation small satellite HJ-CCD. The research work and major conclusions are as follows. (1) Seven vegetation indices were selected to retrieve LAI, and a linear regression model was built between each index and the measured LAI. The EVI model achieved the highest accuracy (R²=0.964 at the anthesis stage and R²=0.920 at the filling stage), so EVI was chosen as the optimal vegetation index for predicting LAI in this work.
(2) The EFAST method was used to conduct a sensitivity analysis of the 26 initial parameters of the WOFOST model, and a sensitivity index was constructed to evaluate the influence of each parameter on winter wheat yield formation. Six parameters with a sensitivity index greater than 0.1 were chosen as sensitive factors: TSUM1, SLATB1, SLATB2, SPAN, EFFTB3 and TMPF4. The remaining parameters were determined from practical measurement and calculation, the available literature, or WOFOST defaults, completing the calibration of the WOFOST parameters. (3) A look-up-table algorithm was used to perform single-point yield estimation through assimilation of the retrieved LAI into the WOFOST model. The simulation achieved high accuracy, meeting the purpose of the assimilation (R²=0.941 and RMSE=194.58 kg/hm²). In this work, the optimal values of the sensitive parameters were determined and single-point yield estimation was completed. Key words: yield estimation of winter wheat, LAI, WOFOST crop growth model, assimilation
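The vegetation-index step in (1) is an ordinary linear regression of field-measured LAI on an index such as EVI. A numpy sketch on invented EVI/LAI pairs (the R² values quoted in the record come from the study's data, not from this example):

```python
import numpy as np

# Invented paired observations: EVI from satellite pixels vs field LAI.
evi = np.array([0.32, 0.41, 0.47, 0.55, 0.60, 0.68])
lai = np.array([1.90, 2.60, 3.10, 3.70, 4.10, 4.75])

slope, intercept = np.polyfit(evi, lai, 1)   # LAI ~= slope*EVI + intercept
lai_hat = slope * evi + intercept

# Coefficient of determination R^2 for the fit:
ss_res = np.sum((lai - lai_hat) ** 2)
ss_tot = np.sum((lai - np.mean(lai)) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

The fitted line can then be applied pixel-by-pixel to an EVI map to produce the LAI field that drives the assimilation step in (3).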

  12. Use of Multiple Linear Regression Models for Setting Water Quality Criteria for Copper: A Complementary Approach to the Biotic Ligand Model.

    PubMed

    Brix, Kevin V; DeForest, David K; Tear, Lucinda; Grosell, Martin; Adams, William J

    2017-05-02

Biotic Ligand Models (BLMs) for metals are widely applied in ecological risk assessments and in the development of regulatory water quality guidelines in Europe, and in 2007 the United States Environmental Protection Agency (USEPA) recommended BLM-based water quality criteria (WQC) for Cu in freshwater. However, to date, few states have adopted BLM-based Cu criteria into their water quality standards on a state-wide basis, which appears to be due to the perception that the BLM is too complicated or requires too many input variables. Using the mechanistic BLM framework to first identify key water chemistry parameters that influence Cu bioavailability, namely dissolved organic carbon (DOC), pH, and hardness, we developed Cu criteria using the same basic methodology used by the USEPA to derive hardness-based criteria but with the addition of DOC and pH. As an initial proof of concept, we developed stepwise multiple linear regression (MLR) models for species that have been tested over wide ranges of DOC, pH, and hardness conditions. These models predicted acute Cu toxicity values that were within a factor of ±2 in 77% to 97% of tests (5 species had adequate data) and chronic Cu toxicity values that were within a factor of ±2 in 92% of tests (1 species had adequate data). This level of accuracy is comparable to the BLM. Following USEPA guidelines for WQC development, the species data were then combined to develop a linear model with pooled slopes for each independent parameter (i.e., DOC, pH, and hardness) and species-specific intercepts using Analysis of Covariance. The pooled MLR and BLM models predicted species-specific toxicity with similar precision; adjusted R² and R² values ranged from 0.56 to 0.86 and from 0.66 to 0.85, respectively.
Graphical exploration of relationships between predicted and observed toxicity, residuals and observed toxicity, and residuals and concentrations of key input parameters revealed many similarities and a few key distinctions between the performances of the two models. The pooled MLR model was then applied to the species sensitivity distribution to derive acute and chronic criteria equations similar in form to the USEPA's current hardness-based criteria equations but with DOC, pH, and hardness as the independent variables. Overall, the MLR is less responsive to DOC than the BLM across a range of hardness and pH conditions but more responsive to hardness than the BLM. Additionally, at low and intermediate hardness, the MLR model is less responsive than the BLM to pH, but the two models respond comparably at high hardness. The net effect of these different response profiles is that under many typical water quality conditions, MLR- and BLM-based criteria are quite comparable. Indeed, conditions where the two models differ most (high pH/low hardness and low pH/high hardness) are relatively rare in natural aquatic systems. We suggest that this MLR-based approach, which includes the mechanistic foundation of the BLM but is also consistent with widely accepted hardness-dependent WQC in terms of development and form, may facilitate adoption of updated state-wide Cu criteria that more accurately account for the parameters influencing Cu bioavailability than current hardness-based criteria.
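The pooled-MLR idea is an ordinary least-squares fit of log-transformed toxicity on ln(DOC), pH, and ln(hardness). A sketch on synthetic data with assumed coefficients (the coefficient values are hypothetical, not the published model):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 40
ln_doc = rng.uniform(np.log(0.5), np.log(20.0), n)     # DOC, mg/L (log scale)
ph = rng.uniform(6.0, 8.5, n)
ln_hard = rng.uniform(np.log(10.0), np.log(400.0), n)  # hardness, mg/L CaCO3

# Hypothetical "true" MLR: ln EC50 = 0.9*lnDOC + 0.5*pH + 0.3*lnHard - 2.0
ln_ec50 = 0.9 * ln_doc + 0.5 * ph + 0.3 * ln_hard - 2.0

# Ordinary least squares with an intercept column:
X = np.column_stack([ln_doc, ph, ln_hard, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, ln_ec50, rcond=None)
```

A criterion equation of this form can be evaluated from three routine measurements, which is the practical advantage the paper argues for over the full BLM input set.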

  13. Predictors of round window accessibility for adult cochlear implantation based on pre-operative CT scan: a prospective observational study.

    PubMed

    Park, Edward; Amoodi, Hosam; Kuthubutheen, Jafri; Chen, Joseph M; Nedzelski, Julian M; Lin, Vincent Y W

    2015-05-28

Cochlear implantation has become a mainstream treatment option for patients with severe to profound sensorineural hearing loss. During cochlear implantation, there are key surgical steps that are influenced by anatomical variation between patients. The aim of this study is to determine whether there are potential predictors of difficulties that may be encountered during the cortical mastoidectomy, facial recess approach and round window access in cochlear implant surgery, based on the pre-operative temporal bone CT scan. Fifty-seven patients undergoing unilateral cochlear implantation were analyzed. Difficulty with 1) cortical mastoidectomy, 2) facial recess approach, and 3) round window access was scored intra-operatively by the surgeon in a blinded fashion (1 = "easy", 2 = "moderate", 3 = "difficult"). Pre-operative temporal bone CT scans were analyzed for 1) degree of mastoid aeration; 2) location of the sigmoid sinus; 3) height of the tegmen; 4) the presence of air cells in the facial recess, and 5) degree of round window bony overhang. Poor mastoid aeration and a lower tegmen position, but not the location of the sigmoid sinus, were associated with greater difficulty with the cortical mastoidectomy. The presence of an air cell around the facial nerve was predictive of easier facial recess access. However, the degree of round window bony overhang was not predictive of difficulty with round window access. Certain parameters on the pre-operative temporal bone CT scan may be useful in predicting potential difficulties encountered during the key steps of cochlear implant surgery.

  14. Mathematical 3D modelling and sensitivity analysis of multipolar radiofrequency ablation in the spine.

    PubMed

    Matschek, Janine; Bullinger, Eric; von Haeseler, Friedrich; Skalej, Martin; Findeisen, Rolf

    2017-02-01

Radiofrequency ablation is a valuable tool in the treatment of many diseases, especially cancer. However, controlled heating up to apoptosis of the desired target tissue in complex situations, e.g. in the spine, is challenging and requires experienced interventionalists. For such challenging situations a mathematical model of radiofrequency ablation allows one to understand, improve and optimise the outcome of the medical therapy. The main contribution of this work is the derivation of a tailored, yet expandable, mathematical model for the simulation, analysis, planning and control of radiofrequency ablation in complex situations. The dynamic model consists of partial differential equations that describe the potential and temperature distributions during the intervention. To account for multipolar operation, time-dependent boundary conditions are introduced. Spatially distributed parameters, like tissue conductivity and blood perfusion, allow the description of the complex 3D environment representing the diverse tissue types involved in the spine. To identify the key parameters affecting the prediction quality of the model, the influence of the parameters on the temperature distribution is investigated via a sensitivity analysis. Simulations underpin the quality of the derived model and the analysis approach. The proposed modelling and analysis schemes set the basis for intervention planning, state and parameter estimation, and control.
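The sensitivity-analysis step can be illustrated on a toy version of the problem: an explicit 1D heat-conduction solver with a volumetric source, with the sensitivity of the peak temperature to thermal conductivity estimated by central finite differences. The geometry, source strength and tissue constants below are illustrative and far simpler than the paper's 3D multipolar model:

```python
import numpy as np

def simulate_temperature(k, steps=720, n=51, dx=1e-3, dt=1.0,
                         rho_c=3.6e6, q=5e4):
    """Explicit 1D heat conduction with a volumetric source in the middle
    third of the domain (a crude stand-in for the RF-heated zone).
    k: thermal conductivity W/(m*K); rho_c: volumetric heat capacity
    J/(m^3*K); q: source strength W/m^3. Returns the temperature field."""
    T = np.full(n, 37.0)                 # start at body temperature, deg C
    src = np.zeros(n)
    src[n // 3: 2 * n // 3] = q          # heated zone
    alpha = k / rho_c                    # thermal diffusivity, m^2/s
    for _ in range(steps):               # stable: alpha*dt/dx^2 << 0.5 here
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2
        T[1:-1] += dt * (alpha * lap + src[1:-1] / rho_c)
        T[0] = T[-1] = 37.0              # fixed-temperature boundary
    return T

# Sensitivity of the peak temperature to conductivity k, by central
# finite differences -- the core of a one-parameter sensitivity analysis:
k0, dk = 0.5, 0.01
dT_dk = (simulate_temperature(k0 + dk).max()
         - simulate_temperature(k0 - dk).max()) / (2.0 * dk)
```

The negative sensitivity (higher conductivity carries heat away from the peak) is the kind of qualitative ranking such an analysis delivers; the paper performs it for spatially distributed conductivity and perfusion fields in 3D.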

  15. Hybrid Modeling of Cell Signaling and Transcriptional Reprogramming and Its Application in C. elegans Development.

    PubMed

    Fertig, Elana J; Danilova, Ludmila V; Favorov, Alexander V; Ochs, Michael F

    2011-01-01

    Modeling of signal driven transcriptional reprogramming is critical for understanding of organism development, human disease, and cell biology. Many current modeling techniques discount key features of the biological sub-systems when modeling multiscale, organism-level processes. We present a mechanistic hybrid model, GESSA, which integrates a novel pooled probabilistic Boolean network model of cell signaling and a stochastic simulation of transcription and translation responding to a diffusion model of extracellular signals. We apply the model to simulate the well studied cell fate decision process of the vulval precursor cells (VPCs) in C. elegans, using experimentally derived rate constants wherever possible and shared parameters to avoid overfitting. We demonstrate that GESSA recovers (1) the effects of varying scaffold protein concentration on signal strength, (2) amplification of signals in expression, (3) the relative external ligand concentration in a known geometry, and (4) feedback in biochemical networks. We demonstrate that setting model parameters based on wild-type and LIN-12 loss-of-function mutants in C. elegans leads to correct prediction of a wide variety of mutants including partial penetrance of phenotypes. Moreover, the model is relatively insensitive to parameters, retaining the wild-type phenotype for a wide range of cell signaling rate parameters.

  16. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use its parameters to gain insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks.
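The two manipulated parameters map naturally onto a basic random-walk accumulator: the Criterion as the starting point of evidence accumulation, the Threshold as the absorbing boundary. A generic sketch (not the authors' fitted model):

```python
import numpy as np

def simulate_trials(drift, criterion, threshold, n_trials, rng):
    """Random-walk accumulator: evidence starts at `criterion` (response
    bias) and accumulates noisy samples until it reaches +threshold
    ("conflict") or -threshold ("no conflict")."""
    choices = np.empty(n_trials, dtype=int)
    times = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        x, t = criterion, 0
        while abs(x) < threshold:
            x += drift + rng.normal()
            t += 1
        choices[i] = 1 if x >= threshold else 0
        times[i] = t
    return choices, times

rng = np.random.default_rng(0)
# Shifting the starting point (Criterion) biases response proportions:
unbiased, _ = simulate_trials(0.0, 0.0, 10.0, 1000, rng)
biased, _ = simulate_trials(0.0, 3.0, 10.0, 1000, rng)
# Raising the Threshold trades speed for accuracy (longer decision times):
_, rt_low = simulate_trials(0.3, 0.0, 5.0, 1000, rng)
_, rt_high = simulate_trials(0.3, 0.0, 10.0, 1000, rng)
```

The separation of effects, start point shifting choice proportions while the boundary shifts decision time, mirrors the dissociation the study reports between its Criterion and Threshold parameters.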

  17. Compressed Sensing for Metrics Development

    NASA Astrophysics Data System (ADS)

    McGraw, R. L.; Giangrande, S. E.; Liu, Y.

    2012-12-01

Models by their very nature tend to be sparse, in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing very rapid development (see for example Candes et al., 2006), to FASTER needs for new approaches to model evaluation and metrics development. The CS approach will be illustrated for a time series generated using a few-parameter (i.e. sparse) model. A seemingly incomplete set of measurements, taken at just a few random sampling times, is then used to recover the hidden model parameters. Remarkably, there is a sharp transition in the number of required measurements, beyond which both the model parameters and the time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and the development of metrics for model evaluation by comparison with observation (e.g. evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in the context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
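The sparse-recovery behaviour described, exact recovery once the number of random measurements is sufficient, can be sketched with a minimal Orthogonal Matching Pursuit. The setup below is generic compressed sensing, not the FASTER application itself:

```python
import numpy as np

def omp(Phi, y, n_iter, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily pick the column of Phi most
    correlated with the residual, refit by least squares, and repeat."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_iter):
        if np.linalg.norm(residual) < tol:
            break
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m = 128, 48                        # 128 unknowns, only 48 measurements
x = np.zeros(n)
x[[5, 60, 100]] = [1.5, -2.0, 1.0]    # a 3-sparse "model"
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x                           # incomplete random sampling
x_hat = omp(Phi, y, n_iter=6)
```

With far fewer measurements the recovery fails, and with slightly more it becomes exact, which is the sharp transition the abstract refers to.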

  18. A simple analytical method to estimate all exit parameters of a cross-flow air dehumidifier using liquid desiccant

    PubMed Central

    Bassuoni, M.M.

    2013-01-01

The dehumidifier is a key component in liquid desiccant air-conditioning systems. Analytical solutions have advantages over numerical solutions in studying dehumidifier performance parameters. This paper presents exit-parameter results from an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution was developed using the Engineering Equation Solver software. Good agreement has been found between the analytical solution and reliable experimental results, with maximum deviations of +6.63% and −5.65% in the moisture removal rate. The method developed here can be used for quick prediction of dehumidifier performance. The exit parameters of the dehumidifier are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air-to-desiccant flow rate. The results show that hot humid air and desiccant concentration have the greatest impact on the performance of the dehumidifier. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio. PMID:25685485
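For a quick feel of the exit-parameter arithmetic, here is an effectiveness-style sketch of the moisture removal rate. This simplified model and all operating values are assumptions for illustration, not the paper's analytical solution:

```python
def exit_humidity(w_in, w_equilibrium, effectiveness):
    """Exit humidity ratio (kg water / kg dry air) from a simple
    effectiveness model: the air approaches the equilibrium humidity
    of the desiccant surface."""
    return w_in - effectiveness * (w_in - w_equilibrium)

def moisture_removal_rate(m_air, w_in, w_out):
    """Moisture removal rate (kg/s) = dry-air mass flow (kg/s) times the
    drop in humidity ratio across the dehumidifier."""
    return m_air * (w_in - w_out)

# Hypothetical operating point: humid inlet air over a CaCl2 solution.
w_out = exit_humidity(w_in=0.020, w_equilibrium=0.008, effectiveness=0.6)
mrr = moisture_removal_rate(m_air=0.5, w_in=0.020, w_out=w_out)
```

In this toy form the trends in the abstract are visible directly: a higher inlet humidity ratio or a lower equilibrium humidity (stronger, cooler desiccant) increases the removal rate.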

  19. Investigation of Homogeneous Relaxation Model Parameters and their Implications for Gasoline Injectors

    DOE PAGES

    Saha, Kaushik; Som, Sibendu; Battistoni, Michele

    2017-01-01

Flash boiling is known to be a common phenomenon for gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model has been adopted in many recent numerical studies for predicting cavitation and flash boiling, and it is assessed in this study. Sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters have been varied over a range and the differences in predictions of the extent of flashing have been studied. Apart from flashing in the near-nozzle regions, mild cavitation is also predicted inside the gasoline injectors. The variation in the predicted time scales through the model parameters for predicting these two different thermodynamic phenomena (cavitation, flash) has been elaborated in this study. Turbulence model effects have also been investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.

  20. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variety of experimental data sets, such as UH-60A data, DNW test data and HART II test data.

  1. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.

    PubMed

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.
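    The probability-weighted average and the stability measure described above can be sketched as follows; the contact-resistance readings and their weights are invented placeholders, not data from the relay tests.

```python
import numpy as np

# Hypothetical initial contact-resistance readings (milliohm) for one relay.
readings = np.array([50.2, 50.5, 49.8, 50.1, 55.0])

# Weight each reading by an assumed probability (e.g. from a fitted density);
# this toy distribution down-weights the outlying last reading.
weights = np.array([0.24, 0.24, 0.24, 0.24, 0.04])

# Probability-weighted average of the initial parameter value.
c0 = np.sum(weights * readings) / np.sum(weights)

# Crude stand-in for the stability measure: spread between the extrema.
stability = readings.max() - readings.min()

print(round(c0, 2), round(stability, 2))   # → 50.34 5.2
```

    In the study, both quantities feed a small-sample lifetime model; here they simply summarize the initial parameter information.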

  3. A novel bridge scour monitoring and prediction system

    NASA Astrophysics Data System (ADS)

    Valyrakis, Manousos; Michalis, Panagiotis; Zhang, Hanqing

    2015-04-01

    Earth's surface is continuously shaped by the action of geophysical flows. Erosion due to the flow of water in river systems has been identified as a key problem for preserving ecological health, but it is also a threat to our built environment and critical infrastructure worldwide. For example, scour is estimated to be a major cause of bridge failure. Even though the flow past bridge piers has been investigated both experimentally and numerically, and the mechanisms of scouring are relatively well understood, there is still no tool that offers fast and reliable predictions. Most existing formulas for the prediction of bridge pier scour depth are empirical in nature, based on a limited range of data or on piers of a specific shape. In this work, a novel methodology is proposed for the prediction of bridge scour. Specifically, an Adaptive Neuro-Fuzzy Inference System (ANFIS) is used to estimate the scour depth around bridge piers. In particular, architectures of varying complexity are built sequentially in order to identify the optimal one for scour depth prediction, using appropriate training and validation subsets obtained from the USGS database (pre-processed to remove incomplete records). The model has five input variables: the effective pier width (b), the approach velocity (v), the approach depth (y), the mean grain diameter (D50), and the skew to flow. Simulations are conducted with different data groups (bed material type, pier type and shape) and different numbers of input variables to produce reduced-complexity, easily interpretable models. Analysis and comparison of the results indicate that the developed ANFIS model has high accuracy and outstanding generalization ability for the prediction of scour parameters. The effective pier width (as opposed to skew to flow) is among the most relevant input parameters for the estimation.
    Training of the system on new bridge geometries and flow conditions can be achieved with real-time data obtained via novel electromagnetic sensors monitoring scour depth. Once the model is trained with data representative of the new system, bridge scour prediction can be performed for high/design flows or floods.
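    The inference scheme an ANFIS trains is a first-order Takagi-Sugeno fuzzy system; its forward pass can be sketched as below. The rule centres, widths, and linear consequents are invented for illustration, not fitted to the USGS data.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership degree of input vector x for fuzzy sets centred at c."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def ts_predict(x, rules):
    """First-order Takagi-Sugeno inference: normalised, rule-weighted linear outputs."""
    num, den = 0.0, 0.0
    for c, s, w, b0 in rules:
        strength = np.prod(gauss(x, c, s))     # firing strength of the rule
        num += strength * (np.dot(w, x) + b0)  # first-order (linear) consequent
        den += strength
    return num / den                           # weighted-average defuzzification

# Two toy rules over the five inputs [b, v, y, D50, skew] from the abstract.
rules = [
    (np.array([1.0, 0.5, 2.0, 0.5, 0.0]), np.array([1, 1, 1, 1, 10.0]),
     np.array([0.8, 1.2, 0.1, -0.5, 0.0]), 0.2),
    (np.array([3.0, 1.5, 4.0, 2.0, 0.0]), np.array([1, 1, 1, 1, 10.0]),
     np.array([1.1, 0.9, 0.2, -0.3, 0.0]), 0.1),
]
depth = ts_predict(np.array([1.2, 0.6, 2.1, 0.6, 5.0]), rules)
print(round(depth, 3))
```

    ANFIS training adjusts the membership and consequent parameters against data; only the fixed forward evaluation is shown here.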

  4. [Modelling the impact of vaccination on the epidemiology of varicella zoster virus].

    PubMed

    Bonmarin, I; Santa-Olalla, P; Lévy-Bruhl, D

    2008-10-01

    The forthcoming availability of a combined MMR-varicella vaccine has re-stimulated the debate around universal infant vaccination against varicella. In France, the incidence of varicella is estimated at about 700,000 cases per year, with approximately 3500 hospitalisations and 15-25 deaths, the latter mainly occurring in people over 15 years of age. Vaccination would certainly decrease the overall incidence of the disease, but concerns remain that vaccination could shift the average age at infection upward and thereby increase the incidence of severe cases and congenital varicella. In order to support decision-making, a dynamic mathematical model of varicella virus transmission was used to predict the effect of different vaccination strategies and coverage levels on the epidemiology of varicella and zoster. A deterministic realistic age-structured model was adapted to the French situation. Epidemiological parameters were estimated from the literature or from surveillance data. Various vaccine coverage levels and vaccination strategies were investigated. A sensitivity analysis of the varicella incidence predictions was performed to test the impact of changes in the vaccine parameters and age-specific mixing patterns. The model confirms that the overall incidence and morbidity of varicella would likely be reduced by mass vaccination of 12-month-old children. Whatever the coverage and the vaccination strategy, vaccination will cause a shift in age distribution with, for vaccination coverage of up to at least 80% in the base-case analysis, an increased morbidity among adults and pregnant women. However, the total number of deaths and hospitalisations from varicella is predicted to remain below that expected without vaccination. The model is very sensitive to the contact matrix used and to the parameters describing vaccine effectiveness. Zoster incidence will increase over a number of decades, followed by a decline to below pre-vaccination levels.
    In summary, mass varicella vaccination in France would result in an overall reduction of varicella incidence but would cause a shift in age distribution with an increase in adult cases. Due to uncertainties in key parameter values, the exact magnitude of this shift is difficult to assess.
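    The effect of infant vaccination coverage on incidence can be illustrated with a toy single-age-class transmission model (a drastic simplification of the age-structured model above); all rates are illustrative, not the calibrated French values.

```python
# Toy deterministic SIR-type model with a fraction p of births vaccinated.
# beta: transmission rate/day, gamma: recovery rate/day, mu: birth = death rate/day.
def run(p, years=50, beta=0.6, gamma=1 / 7, mu=1 / (75 * 365)):
    s, i, r = 0.1, 0.001, 0.899          # initial population fractions
    cases = 0.0
    for _ in range(int(years * 365)):    # daily Euler steps
        new_inf = beta * s * i
        births = mu                      # replacement births keep s+i+r ~ 1
        s += births * (1 - p) - new_inf - mu * s
        i += new_inf - gamma * i - mu * i
        r += births * p + gamma * i - mu * r
        cases += new_inf                 # cumulative incidence
    return cases

no_vacc = run(p=0.0)
high_cov = run(p=0.9)
print(high_cov < no_vacc)   # high coverage lowers cumulative incidence
```

    The age shift discussed in the abstract only appears once age structure and mixing matrices are added, which is precisely why the full model is sensitive to the contact matrix.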

  5. Parameter transferability within homogeneous regions and comparisons with predictions from a priori parameters in the eastern United States

    NASA Astrophysics Data System (ADS)

    Chouaib, Wafa; Alila, Younes; Caldwell, Peter V.

    2018-05-01

    The need for predictions of flow time series persists at ungauged catchments, motivating the research goals of our study. Using the Sacramento model, this paper explores parameter transfer within homogeneous regions of similar climate and flow characteristics and compares it with predictions from a priori parameters. We assessed performance using the Nash-Sutcliffe efficiency (NS), bias, the mean monthly hydrograph, and the flow duration curve (FDC). The study was conducted on a large dataset of 73 catchments within the eastern US. Two approaches to parameter transferability were developed and evaluated: (i) parameter transfer within homogeneous regions, using one donor catchment specific to each region, and (ii) parameter transfer disregarding the geographical limits of homogeneous regions, with one donor catchment common to all regions. Comparing the two transfers enabled us to assess the gain in performance from parameter regionalization and its respective constraints and limitations. Parameter transfer within homogeneous regions outperformed the a priori parameters, decreasing bias and increasing efficiency to reach a median NS of 0.77 and an NS of 0.85 at individual catchments. The FDC revealed the effect of bias on the inaccuracy of predictions from parameter transfer. In one specific region of mountainous, forested catchments, the prediction accuracy of the parameter transfer was less satisfactory and only equivalent to the a priori parameters. In this region, parameter transfer from the outsider catchment provided the best performance: less biased, with smaller uncertainty in the medium flow percentiles (40%-60%). The large disparity of energy conditions explained the lack of performance of parameter transfer in this region.
    In addition, subsurface stormflow is predominant in this region and lateral preferential flow is likely, whose specific properties further explain the reduced efficiency. Testing parameter transferability at ungauged catchments using criteria of similar climate and flow characteristics, together with comparisons against predictions from a priori parameters, is a novel contribution. The limitations of both approaches are recognized and recommendations are made for future research.
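    The Nash-Sutcliffe efficiency used as the headline skill score above can be computed as follows; the daily-flow series is a toy example, not data from the 73 catchments.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    NS = 1 is a perfect fit; NS <= 0 means the model is no better than
    simply predicting the observed mean flow.
    """
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative observed and simulated daily flows (m3/s).
obs = [5.0, 7.0, 12.0, 9.0, 6.0]
sim = [5.5, 6.5, 11.0, 9.5, 6.5]
print(round(nash_sutcliffe(obs, sim), 3))   # → 0.935
```

    Because the denominator is the variance of the observations, NS rewards capturing flow variability, which is why it is paired with bias and the FDC in the study.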

  6. Calculations of key magnetospheric parameters using the isotropic and anisotropic SPSU global MHD code

    NASA Astrophysics Data System (ADS)

    Samsonov, Andrey; Gordeev, Evgeny; Sergeev, Victor

    2017-04-01

    As recently suggested (e.g., Gordeev et al., 2015), the global magnetospheric configuration can be characterized by a set of key parameters, such as the magnetopause distance at the subsolar point and at the terminator plane, the magnetic field in the magnetotail lobe, the plasma sheet thermal pressure, the cross-polar-cap electric potential drop, and the total field-aligned current. For given solar wind conditions, the values of these parameters can be obtained both from empirical models and from global MHD simulations. We validate the recently developed global MHD code SPSU-16 using the key magnetospheric parameters mentioned above. The code can solve both the isotropic and the anisotropic MHD equations. In the anisotropic version, we use modified double-adiabatic equations in which the ratio of perpendicular to parallel thermal pressure, T⊥/T∥, is bounded from above by the mirror and ion-cyclotron thresholds and from below by the firehose threshold. The validation results for SPSU-16 agree well with previously published results from other global codes. Some key parameters coincide in the isotropic and anisotropic MHD simulations, while others differ.
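    The anisotropy bounding described above amounts to clipping T⊥/T∥ between instability thresholds that depend on the parallel plasma beta. The threshold shapes below (a firehose floor and a mirror-like ceiling) are generic textbook forms chosen for illustration, not the exact expressions used in SPSU-16.

```python
def bound_anisotropy(a, beta_par):
    """Clip the anisotropy ratio a = T_perp / T_par to marginal stability.

    Illustrative thresholds: firehose floor 1 - 2/beta_par,
    mirror/ion-cyclotron-like ceiling 1 + 1/beta_par.
    """
    floor = 1.0 - 2.0 / beta_par
    ceiling = 1.0 + 1.0 / beta_par
    return min(max(a, floor), ceiling)

print(bound_anisotropy(5.0, beta_par=2.0))   # clipped down to the ceiling 1.5
print(bound_anisotropy(-1.0, beta_par=2.0))  # raised up to the floor 0.0
```

    In a real double-adiabatic solver this clipping is applied cell by cell each time step, so the plasma never strays far beyond the kinetic instability boundaries.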

  7. Thermal-mechanical behavior of high precision composite mirrors

    NASA Technical Reports Server (NTRS)

    Kuo, C. P.; Lou, M. C.; Rapp, D.

    1993-01-01

    Composite mirror panels were designed, constructed, analyzed, and tested in the framework of a NASA precision segmented reflector task. The deformations of the reflector surface during exposure to space environments were predicted using a finite element model. The composite mirror panels have graphite-epoxy or graphite-cyanate facesheets separated by an aluminum or composite honeycomb core. It is pointed out that detailed, high-accuracy modeling of composite mirrors requires temperature-dependent properties of the materials involved as well as the type and magnitude of manufacturing errors and material nonuniformities. The structural modeling and analysis efforts addressed the impact of key design and material parameters on the performance of the mirrors.

  8. Numerical Modeling of Ophthalmic Response to Space

    NASA Technical Reports Server (NTRS)

    Nelson, E. S.; Myers, J. G.; Mulugeta, L.; Vera, J.; Raykin, J.; Feola, A.; Gleason, R.; Samuels, B.; Ethier, C. R.

    2015-01-01

    To investigate ophthalmic changes in spaceflight, we would like to predict the impact of blood dysregulation and elevated intracranial pressure (ICP) on Intraocular Pressure (IOP). Unlike other physiological systems, there are very few lumped parameter models of the eye. The eye model described here is novel in its inclusion of the human choroid and retrobulbar subarachnoid space (rSAS), which are key elements in investigating the impact of increased ICP and ocular blood volume. Some ingenuity was required in modeling the blood and rSAS compartments due to the lack of quantitative data on essential hydrodynamic quantities, such as net choroidal volume and blood flowrate, inlet and exit pressures, and material properties, such as compliances between compartments.

  9. Probability of growth of small damage sites on the exit surface of fused silica optics.

    PubMed

    Negres, Raluca A; Abdulla, Ghaleb M; Cross, David A; Liao, Zhi M; Carr, Christopher W

    2012-06-04

    Growth of laser damage on fused silica optical components depends on several key parameters, including laser fluence, wavelength, pulse duration, and site size. Here we investigate the growth behavior of small damage sites on the exit surface of SiO₂ optics under exposure to tightly controlled laser pulses. Results demonstrate that the onset of damage growth is not governed by a threshold, but is probabilistic in nature and depends both on the current size of a damage site and on the laser fluence to which it is exposed. We also develop models for use in growth prediction. In addition, we show that laser exposure history influences the behavior of individual sites.
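    A probabilistic growth rule of the kind the abstract describes can be sketched with a logistic dependence on fluence and site size; the functional form and coefficients are illustrative assumptions, not the authors' fitted model.

```python
import math

def growth_probability(fluence_jcm2, size_um, a=-8.0, b=0.8, c=0.02):
    """Chance that a damage site grows on the next pulse.

    Logistic in a linear combination of laser fluence (J/cm^2) and current
    site size (um); a, b, c are toy coefficients.
    """
    z = a + b * fluence_jcm2 + c * size_um
    return 1.0 / (1.0 + math.exp(-z))

# Per the abstract, larger sites at the same fluence are more likely to grow.
p_small = growth_probability(fluence_jcm2=6.0, size_um=20.0)
p_large = growth_probability(fluence_jcm2=6.0, size_um=150.0)
print(p_small < p_large)
```

    A model like this replaces a hard damage-growth threshold with a smooth probability, which is the qualitative finding reported above.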

  10. A new helium gas bearing turboexpander

    NASA Astrophysics Data System (ADS)

    Xiong, L. Y.; Chen, C. Z.; Liu, L. Q.; Hou, Y.; Wang, J.; Lin, M. F.

    2002-05-01

    A new helium gas-bearing turboexpander for a helium refrigeration system used in space environment simulation experiments is described in this paper. The main design parameters and the construction of some key parts are presented. An improved calculation of the thermodynamic efficiency and instability speed of this turboexpander was obtained with a multi-objective optimization program. Experiments examining mechanical and thermodynamic performance were conducted repeatedly in the laboratory, using air at ambient temperature and at liquid nitrogen temperature, respectively. To predict the helium turboexpander's performance, a study based on similarity principles was carried out. According to the laboratory and field experiments, the mechanical and thermodynamic performance of this helium turboexpander is excellent.

  11. What is strange about high-temperature superconductivity in cuprates?

    NASA Astrophysics Data System (ADS)

    Božović, I.; He, X.; Wu, J.; Bollinger, A. T.

    2017-10-01

    Cuprate superconductors exhibit many unusual features, but the ultimate question is why the critical temperature (Tc) is so high. The fundamental dichotomy is between the weak-pairing Bardeen-Cooper-Schrieffer (BCS) scenario and Bose-Einstein condensation (BEC) of strongly bound pairs. While it is hotly debated which of these pictures is appropriate for underdoped cuprates, it is commonly believed that on the overdoped side strongly correlated fermion physics evolves smoothly into conventional BCS behavior. Here, we test this dogma by studying the dependence of key superconducting parameters on doping, temperature, and external fields in thousands of cuprate samples. The findings do not conform to BCS predictions anywhere in the phase diagram.

  12. Stochastic analysis of multiphase flow in porous media: II. Numerical simulations

    NASA Astrophysics Data System (ADS)

    Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.

    1996-08-01

    The first paper of this two-part series (Chang et al., 1995b) described a stochastic analysis using a spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the perturbation analysis and the numerical simulations showed good agreement between the two methods over a wide range of log k variability, for three different combinations of the input stochastic processes log k and α. The results clearly demonstrate the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrate the applicability of perturbation theory in predicting system variability and defining effective fluid properties through the ergodic assumption.
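    A spatially correlated log k input process of the kind used above can be generated as a Gaussian random field; the exponential covariance, grid, and parameter values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def correlated_logk(n, dx, sigma2, lam, seed=0):
    """1-D Gaussian field with exponential covariance via Cholesky factorization.

    sigma2: variance of log k; lam: correlation length (same units as dx).
    """
    x = np.arange(n) * dx
    cov = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lam)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for numerical stability
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)

logk = correlated_logk(n=200, dx=0.5, sigma2=1.0, lam=5.0)

# With lam >> dx, neighbouring values should be strongly positively correlated.
r = np.corrcoef(logk[:-1], logk[1:])[0, 1]
print(r > 0.5)
```

    Feeding many such realizations through the flow equations and averaging the outputs is the Monte Carlo counterpart of the closed-form perturbation results.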

  13. Supermassive black holes with higher Eddington ratios preferentially form in gas-rich galaxies

    NASA Astrophysics Data System (ADS)

    Izumi, Takuma

    2018-06-01

    The Eddington ratio (λEdd) of supermassive black holes (SMBHs) is a fundamental parameter that governs their cosmic growth. Although gas mass accretion onto SMBHs is sustained when they are surrounded by large amounts of gas, little is known about the molecular content of galaxies, particularly those hosting super-Eddington SMBHs (λEdd > 1: the key phase of SMBH growth). Here, we have compiled reported optical and 12CO(1-0) data of local quasars to characterize their hosts. We found that higher-λEdd SMBHs tend to reside in gas-rich (i.e., high gas mass to stellar mass fraction = fgas) galaxies. We used two methods to make this conclusion: one uses black hole mass as a surrogate for stellar mass by assuming a local co-evolutionary relationship, and the other directly uses stellar masses estimated from near-infrared observations. The fgas-λEdd correlation we found concurs with the cosmic decreasing trend in λEdd, as cold molecular gas is primarily consumed by star formation. This correlation qualitatively matches predictions of recent semi-analytic models of the cosmic downsizing of SMBHs as well. As the gas mass surface density would eventually be a key parameter controlling mass accretion, we need high-resolution observations to identify further differences in the molecular properties around super-Eddington and sub-Eddington SMBHs.
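    The reported fgas-λEdd trend is a monotonic correlation across the compiled sample; a rank (Spearman) correlation check of such a trend can be sketched as follows, with invented placeholder pairs rather than the actual quasar data.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical (Eddington ratio, gas fraction) pairs mimicking a rising trend.
lam_edd = np.array([0.05, 0.1, 0.3, 0.8, 1.5, 2.0])
f_gas   = np.array([0.02, 0.05, 0.04, 0.12, 0.20, 0.35])

print(round(spearman(lam_edd, f_gas), 3))   # → 0.943
```

    Rank correlation is a natural choice here because both quantities span orders of magnitude and the claimed relation is a tendency, not a linear law.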

  15. Influence of humidity on the initial emittable concentration of formaldehyde and hexaldehyde in building materials: experimental observation and correlation.

    PubMed

    Huang, Shaodan; Xiong, Jianyin; Cai, Chaorui; Xu, Wei; Zhang, Yinping

    2016-03-30

    Humidity is one of the main environmental factors affecting the emission rate and the key emission parameters of formaldehyde and volatile organic compounds (VOCs) from building materials. Meanwhile, the initial emittable concentration (Cm,0) has proved to be the key parameter to which emission behaviour is most sensitive. However, there is no report on the relationship between humidity and Cm,0. In this paper, Cm,0 of formaldehyde and hexaldehyde from a type of medium-density fiberboard was measured over an absolute humidity (AH) range of 4.6-19.6 g/m³ at 25 °C using the C-history method. The experimental results indicate that Cm,0 depends strongly on AH, increasing by factors of 10 and 2 for formaldehyde and hexaldehyde, respectively, as AH rises from 4.6 g/m³ to 19.6 g/m³. A linear relationship between the logarithm of Cm,0 and AH is obtained from the measured results. In addition, a correlation characterizing the dependence of the emission rate on AH is derived. The effectiveness of the correlation is verified against our experimental results as well as data from the literature. With these correlations, Cm,0 or the emission rate at AH conditions other than those tested can be conveniently obtained. This study should be useful for predicting emission characteristics under changing humidity and for source control.
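    The reported linear relation between ln(Cm,0) and AH can be fitted and used for interpolation as sketched below; the (AH, Cm,0) pairs are invented and only mimic the roughly tenfold rise over the tested humidity range.

```python
import numpy as np

# Hypothetical measurements: absolute humidity (g/m3) vs initial emittable
# concentration (ug/m3). The trend, not the numbers, follows the abstract.
ah = np.array([4.6, 8.0, 12.0, 16.0, 19.6])
cm0 = np.array([1.0e6, 1.8e6, 3.2e6, 5.8e6, 1.0e7])

# Fit ln(Cm0) = a + b * AH by least squares.
b, a = np.polyfit(ah, np.log(cm0), 1)

# The correlation then gives Cm0 at an untested humidity.
cm0_at_10 = np.exp(a + b * 10.0)
print(cm0_at_10 > cm0[0])   # higher AH than the lowest test point => higher Cm0
```

    This is exactly the kind of use the abstract envisions: predicting Cm,0 (and hence the emission rate) at humidity conditions other than those tested.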

  16. Prevention through Design Adoption Readiness Model (PtD ARM): An integrated conceptual model.

    PubMed

    Weidman, Justin; Dickerson, Deborah E; Koebel, Charles T

    2015-01-01

    Prevention through Design (PtD), eliminating hazards at the design stage of tools and systems, is the optimal method of mitigating occupational health and safety risks. A recent National Institute for Occupational Safety and Health initiative has established a goal of increasing the adoption of PtD innovation in industry. The construction industry has traditionally lagged behind other sectors in the adoption of innovation in general, and of safety and health prevention innovation in particular. Therefore, as a first step toward improving adoption trends in this sector, a conceptual model was developed to describe the parameters and causal relationships that influence and predict construction stakeholder "adoption readiness" for PtD technology innovation. This model was built upon three well-established theoretical frameworks: the Health Belief Model, the Diffusion of Innovation Model, and the Technology Acceptance Model. Earp and Ennett's model development methodology was employed to depict the key constructs and the directionality and magnitude of the relationships among them. Key constructs were identified from the literature associated with the three theoretical frameworks, with special emphasis given to studies related to construction or OHS technology adoption. A conceptual model is presented. Recommendations for future research include confirmatory structural equation modeling of model parameters and relationships, additional descriptive investigation of barriers to adoption in some trade sectors, and the design and evaluation of an intervention strategy.

  17. Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.

    2014-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it estimates the spread, or uncertainty, in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in the CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits; against the actual arrival times, an average absolute error of 10.0 hours (RMSE = 11.4 hours) was found across all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly its mean and spread. When an observed arrival is not within the predicted range, the result still allows prediction errors caused by the tested CME input parameters to be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and from other limitations.
    Additionally, the ensemble modeling system was used to complete a parametric case study of the sensitivity of the CME arrival time prediction to the free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
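    The ensemble verification metrics quoted above (hit-range coverage, average absolute error, RMSE) can be computed as sketched below; the arrival times are invented, not the 2013-2014 event data.

```python
import numpy as np

# Toy ensemble of predicted CME arrival times (hours after launch) for one
# event, plus the single observed arrival time.
predicted = np.array([70.0, 74.0, 78.0, 81.0, 85.0])
observed = 80.0

# Was the observation inside the ensemble's predicted range?
in_range = predicted.min() <= observed <= predicted.max()

# Error of the ensemble-average prediction, and RMSE over members.
abs_error = abs(predicted.mean() - observed)
rmse = np.sqrt(np.mean((predicted - observed) ** 2))

print(in_range, round(abs_error, 1), round(rmse, 1))   # → True 2.4 5.8
```

    Averaging `abs_error` over all hit events gives the study's headline 10.0-hour mean absolute error figure.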

  18. Fundamental mass transfer modeling of emission of volatile organic compounds from building materials

    NASA Astrophysics Data System (ADS)

    Bodalal, Awad Saad

    In this study, a mass transfer theory based model is presented for characterizing VOC emissions from building materials. A 3-D diffusion model is developed to describe the emission of volatile organic compounds (VOCs) from individual sources, and the formulation is then extended to include emissions from composite sources (systems comprising an assemblage of individual sources). The key model parameters (the diffusion coefficient of the VOC in the source material, D, and the equilibrium partition coefficient, ke) were determined independently, without the use of chamber emission data. This procedure largely eliminated the need for emission testing in environmental chambers, which is costly, time consuming, and may be subject to confounding sink effects. An experimental method was developed and implemented to measure the internal diffusion coefficient (D) and partition coefficient (ke) directly. The use of the method is illustrated for three types of VOCs, (i) aliphatic hydrocarbons, (ii) aromatic hydrocarbons, and (iii) aldehydes, through typical dry building materials (carpet, plywood, particleboard, vinyl floor tile, gypsum board, sub-floor tile, and OSB). Correlations for predicting D and ke based solely on commonly available properties, such as molecular weight and vapour pressure, were then proposed for each product and type of VOC. These correlations can be used to estimate D and ke when direct measurement data are not available, and thus facilitate the prediction of VOC emissions from building materials using mass transfer theory. The VOC emissions from a sub-floor material (made of recycled automobile tires) and a particleboard were measured and predicted. Finally, a mathematical model was developed to predict the diffusion coefficient through complex sources (floor adhesive) as a function of time.
    This model for the diffusion coefficient in complex sources was then used to predict the emission rate from a material system (namely, substrate//glue//vinyl tile).
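    The underlying mass-transfer picture can be sketched numerically with a 1-D explicit finite-difference slab (a simplification of the study's 3-D model); D, the slab thickness, C0, and the perfect-sink surface condition standing in for fast partitioning into well-mixed air are all illustrative assumptions.

```python
import numpy as np

def emission(D=1e-11, L=0.01, C0=1.0e7, hours=24, nx=50):
    """Fickian diffusion of a VOC through a dry slab of thickness L (m).

    D: diffusion coefficient (m2/s); C0: uniform initial concentration.
    Back face is sealed (no flux); exposed face is held at zero (sink).
    """
    dx = L / nx
    dt = 0.2 * dx * dx / D               # r = D*dt/dx^2 = 0.2 < 0.5: stable
    c = np.full(nx, C0)
    steps = int(hours * 3600 / dt)
    for _ in range(steps):
        lap = np.zeros(nx)
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
        lap[0] = c[1] - c[0]             # reflective (sealed) back face
        c = c + D * dt / dx**2 * lap
        c[-1] = 0.0                      # sink at the exposed surface
    return c

c = emission()
# After a day, the material near the exposed face is depleted while the
# back of the slab is still essentially at C0.
print(c[-2] < c[0])
```

    The surface flux -D*dc/dx at the exposed face is the emission rate the full model predicts; here only the concentration profile is shown.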

  19. Calibration of the 2D Hydrodynamic Model Floodos and Implications of Distributed Friction on Sediment Transport Capacity

    NASA Astrophysics Data System (ADS)

    Croissant, T.; Lague, D.; Davy, P.

    2014-12-01

    Numerical models of floodplain dynamics often use a simplified 1D description of flow hydraulics and sediment transport that cannot fully account for the friction contrast between vegetated banks and the low-friction main channel. Key parameters of such models are the friction coefficient and the description of the channel bathymetry, which strongly influence predicted water depth and velocity, and therefore sediment transport capacity. In this study, we use a newly developed 2D hydrodynamic model, Floodos, whose efficiency is a major advantage for exploring channel morphodynamics from a single flood event to millennial time scales. We evaluate the quality of Floodos predictions for the Whataroa River, New Zealand, and assess the effect of a spatially distributed friction coefficient (SDFC) on long-term sediment transport. Predictions from the model are compared to water depth data from a gauging station on the Whataroa River in the Southern Alps. The Digital Elevation Model (DEM) of the 2.5 km long study reach is derived from a 2010 LiDAR acquisition with 2 m resolution and an interpolated bathymetry. The several large floods experienced by this river during 2010 provide water depths for a wide range of river discharges and allow us to retrieve the scaling between these two parameters. A non-negligible part of the bathymetry in the high-resolution DEM is submerged and could not be captured by airborne LiDAR; it is reconstructed by interpolation methods that introduce several uncertainties into the water depth predictions. We address these uncertainties using a simplified channel with a geometry (slope and width) similar to the Whataroa River. We then explore the effect of an SDFC on the velocity pattern, water depth, and sediment transport capacity, and discuss its relevance to long-term predictions of sediment transport and channel morphodynamics.
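    The depth-discharge scaling retrieved from the gauging record is typically a power-law rating curve, h = a * Q**b; a fit of that form can be sketched as below with synthetic (Q, h) pairs standing in for the Whataroa data.

```python
import numpy as np

# Synthetic stage-discharge pairs: discharge Q (m3/s) and water depth h (m).
Q = np.array([20.0, 50.0, 120.0, 300.0, 700.0])
h = np.array([0.6, 0.9, 1.4, 2.1, 3.2])

# Fitting ln(h) = ln(a) + b * ln(Q) linearizes the power law h = a * Q**b.
b, log_a = np.polyfit(np.log(Q), np.log(h), 1)
a = np.exp(log_a)

print(round(b, 2))   # → 0.47 (exponent of the fitted rating curve)
```

    Comparing such a fitted curve against the 2D model's predicted depths at the gauge is one direct way to calibrate the friction coefficient.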

  20. Prediction of Scour Depth around Bridge Piers using Adaptive Neuro-Fuzzy Inference Systems (ANFIS)

    NASA Astrophysics Data System (ADS)

    Valyrakis, Manousos; Zhang, Hanqing

    2014-05-01

    Earth's surface is continuously shaped by the action of geophysical flows. Erosion due to flowing water in river systems is a key problem for preserving the ecological health of rivers, but also a threat to the built environment and critical infrastructure worldwide. Scour, for example, is estimated to be a major cause of bridge failure. Even though the flow past bridge piers has been investigated both experimentally and numerically, and the mechanisms of scouring are relatively well understood, a tool that offers fast and reliable predictions is still lacking. Most existing formulas for the prediction of bridge pier scour depth are empirical in nature, based on a limited range of data or on piers of a specific shape. In this work, the application of a machine learning model that has been successfully employed in water engineering, namely an Adaptive Neuro-Fuzzy Inference System (ANFIS), is proposed to estimate the scour depth around bridge piers. In particular, architectures of varying complexity are built sequentially to identify the optimal one for scour depth prediction, using appropriate training and validation subsets obtained from the USGS database (pre-processed to remove incomplete records). The model has five input variables: the effective pier width (b), the approach velocity (v), the approach depth (y), the mean grain diameter (D50), and the skew to flow. Simulations are conducted with different data groups (bed material type, pier type and shape) and different numbers of input variables, to produce reduced-complexity, easily interpretable models. Analysis and comparison of the results indicate that the developed ANFIS model has high accuracy and outstanding generalization ability for the prediction of scour parameters. The effective pier width (as opposed to skew to flow) is amongst the most relevant input parameters for the estimation.
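The forward pass of an ANFIS of the kind described above can be sketched compactly: inputs are fuzzified by membership functions, rule firing strengths are formed by a product t-norm, normalized, and combined into a weighted sum of rule consequents. The sketch below uses a zero-order Sugeno system with two inputs (pier width and approach velocity) rather than all five variables of the paper's model; all membership parameters and consequent scour depths are hypothetical, and in a trained ANFIS they would be learned from the USGS data.

```python
import math

def gauss(x, centre, spread):
    """Gaussian membership function (layer 1: fuzzification)."""
    return math.exp(-((x - centre) ** 2) / (2 * spread ** 2))

def anfis_forward(x1, x2, mfs1, mfs2, consequents):
    """Zero-order Sugeno ANFIS forward pass for two inputs.

    Layer 2 forms rule firing strengths with a product t-norm,
    layer 3 normalizes them, and layers 4-5 produce the weighted
    sum of constant rule consequents.
    """
    weights, outputs = [], []
    for i, (c1, s1) in enumerate(mfs1):
        for j, (c2, s2) in enumerate(mfs2):
            weights.append(gauss(x1, c1, s1) * gauss(x2, c2, s2))
            outputs.append(consequents[i][j])
    total = sum(weights)
    return sum(w / total * y for w, y in zip(weights, outputs))

# Hypothetical (centre, spread) pairs for pier width b (m) and
# approach velocity v (m/s), and illustrative scour depths (m).
mfs_width = [(0.5, 0.3), (2.0, 0.8)]   # "narrow", "wide"
mfs_vel   = [(0.5, 0.3), (2.0, 0.8)]   # "slow", "fast"
scour     = [[0.2, 0.8], [0.6, 1.8]]   # one consequent per rule

print(round(anfis_forward(1.0, 1.5, mfs_width, mfs_vel, scour), 3))
```

Because the output is a convex combination of the rule consequents, the prediction is always bounded by the smallest and largest consequent, which makes the model's behavior easy to interpret, one reason ANFIS is attractive over a black-box network.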
