Sample records for rate-based models

  1. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.
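
    The core idea here, propagating state derivatives rather than states across the prediction horizon, can be sketched compactly. The snippet below is a minimal illustration under stated assumptions, not the paper's controller: it freezes the linear parameter-varying matrices at one operating point (A, B), uses forward-Euler integration, and all numbers are invented.

    ```python
    import numpy as np

    def rate_based_predict(A, B, x0, xdot0, du_seq, dt):
        """Propagate state *derivatives* across the horizon (rate-based form
        x_ddot = A x_dot + B u_dot), then integrate them to recover states.
        Forward-Euler for brevity; A, B frozen instead of parameter-varying."""
        x, xdot = x0.copy(), xdot0.copy()
        traj = [x.copy()]
        for du in du_seq:                  # du = control *rate* at each step
            xddot = A @ xdot + B @ du      # rate dynamics
            xdot = xdot + dt * xddot       # propagate the derivative
            x = x + dt * xdot              # integrate to recover the state
            traj.append(x.copy())
        return np.array(traj)

    # Toy 2-state example (illustrative numbers only)
    A = np.array([[-1.0, 0.5], [0.0, -2.0]])
    B = np.array([[0.0], [1.0]])
    x0 = np.array([0.1, 0.0])
    xdot0 = A @ x0                         # consistent initial derivative
    du_seq = [np.array([0.05])] * 20       # constant control rate
    print(rate_based_predict(A, B, x0, xdot0, du_seq, dt=0.05)[-1])
    ```

    An MPC layer would optimize du_seq against clearance constraints at each step; only the prediction model is sketched here.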

  2. Modeling of Diffusion Based Correlations Between Heart Rate Modulations and Respiration Pattern

    DTIC Science & Technology

    2001-10-25

    MODELING OF DIFFUSION BASED CORRELATIONS BETWEEN HEART RATE MODULATIONS AND RESPIRATION PATTERN. R. Langer, Y. Smorzik, S. Akselrod. ... generations of the bronchial tree. The second stage describes the oxygen diffusion process from the pulmonary gas in the alveoli into the pulmonary ... patterns (FRC, TV, rate). Keywords: Modeling, Diffusion, Heart Rate fluctuations. I. INTRODUCTION: Under a whole-body management perception, the ...

  3. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

    A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. This EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.

  4. Acid-base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer.

    PubMed

    Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L; Eisele, Fred L; Siepmann, J Ilja; Hanson, David R; Zhao, Jun; McMurry, Peter H

    2012-11-13

    Climate models show that particles formed by nucleation can affect cloud cover and, therefore, the earth's radiation budget. Measurements worldwide show that nucleation rates in the atmospheric boundary layer are positively correlated with concentrations of sulfuric acid vapor. However, current nucleation theories do not correctly predict either the observed nucleation rates or their functional dependence on sulfuric acid concentrations. This paper develops an alternative approach for modeling nucleation rates, based on a sequence of acid-base reactions. The model uses empirical estimates of sulfuric acid evaporation rates obtained from new measurements of neutral molecular clusters. The model predicts that nucleation rates equal the sulfuric acid vapor collision rate times a prefactor that is less than unity and that depends on the concentrations of basic gaseous compounds and preexisting particles. Predicted nucleation rates and their dependence on sulfuric acid vapor concentrations are in reasonable agreement with measurements from Mexico City and Atlanta.

  5. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    PubMed

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

    To explore the effect of the autoregressive integrated moving average-nonlinear autoregressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates in a population, the ARIMA model, the NARNN model and the ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA and NARNN models, the ARIMA-NARNN model had the lowest mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE), with values of 0.0111, 0.0900 and 0.2824, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates, which might have great application value for the prevention and control of schistosomiasis.
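
    One common way to realize such a hybrid (sketched below on synthetic data; the paper's code and the Jiangsu series are not reproduced) is to fit an ARIMA model for the linear structure and train a neural network on lagged ARIMA residuals, with sklearn's MLPRegressor standing in for the NARNN. The hybrid forecast is the sum of the two parts.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Synthetic monthly infection-rate series standing in for the real data
    y = 0.5 + 0.1 * np.sin(np.arange(122) / 6) + rng.normal(0, 0.02, 122)

    # Stage 1: linear ARIMA fit
    arima = ARIMA(y, order=(1, 1, 1)).fit()
    resid = arima.resid

    # Stage 2: NARNN stand-in -- a small MLP on lagged residuals
    L = 3
    X = np.column_stack([resid[i:len(resid) - L + i] for i in range(L)])
    nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                      random_state=0).fit(X, resid[L:])

    # Hybrid one-step forecast = ARIMA forecast + predicted residual
    resid_hat = nn.predict(resid[-L:].reshape(1, -1))[0]
    print(arima.forecast(1)[0] + resid_hat)
    ```

    Model comparison would then proceed via MSE, MAE and MAPE on held-out months, as in the study.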

  6. ADHD bifactor model based on parent and teacher ratings of Malaysian children.

    PubMed

    Gomez, Rapson

    2014-04-01

    The study used confirmatory factor analysis to ascertain support for the bifactor model of Attention Deficit/Hyperactivity Disorder (ADHD) symptoms, based on parent and teacher ratings for a group of Malaysian children. Malaysian parents and teachers completed ratings of ADHD and Oppositional Defiant Disorder (ODD) symptoms for 934 children. For both sets of ratings, the findings indicated a good fit for the bifactor model, and the factors in this model showed differential associations with ODD, thereby supporting the internal and external validity of the model. The theoretical and clinical implications of the findings are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Acid–base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer

    PubMed Central

    Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L.; Eisele, Fred L.; Siepmann, J. Ilja; Hanson, David R.; Zhao, Jun; McMurry, Peter H.

    2012-01-01

    Climate models show that particles formed by nucleation can affect cloud cover and, therefore, the earth's radiation budget. Measurements worldwide show that nucleation rates in the atmospheric boundary layer are positively correlated with concentrations of sulfuric acid vapor. However, current nucleation theories do not correctly predict either the observed nucleation rates or their functional dependence on sulfuric acid concentrations. This paper develops an alternative approach for modeling nucleation rates, based on a sequence of acid–base reactions. The model uses empirical estimates of sulfuric acid evaporation rates obtained from new measurements of neutral molecular clusters. The model predicts that nucleation rates equal the sulfuric acid vapor collision rate times a prefactor that is less than unity and that depends on the concentrations of basic gaseous compounds and preexisting particles. Predicted nucleation rates and their dependence on sulfuric acid vapor concentrations are in reasonable agreement with measurements from Mexico City and Atlanta. PMID:23091030

  8. Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

    Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S
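
    The three hybridization rules are simple enough to state directly in code. The sketch below assumes S and T are strictly positive rate grids expressed per cell, and renormalizes every hybrid to the same global rate, as the abstract specifies; w is the fractional weight on S.

    ```python
    import numpy as np

    def hybrid_forecast(S, T, w=0.6, method="log"):
        """Combine smoothed-seismicity (S) and tectonic (T) rate grids:
        (a) cellwise max, (b) linear blend, (c) log-linear blend."""
        if method == "max":
            H = np.maximum(S, T)
        elif method == "linear":
            H = w * S + (1 - w) * T
        elif method == "log":                 # requires S, T > 0 everywhere
            H = np.exp(w * np.log(S) + (1 - w) * np.log(T))
        else:
            raise ValueError(method)
        return H * (S.sum() / H.sum())        # normalize to the same global rate
    ```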

  9. Computer modeling and design analysis of a bit rate discrimination circuit based dual-rate burst mode receiver

    NASA Astrophysics Data System (ADS)

    Kota, Sriharsha; Patel, Jigesh; Ghillino, Enrico; Richards, Dwight

    2011-01-01

    In this paper, we demonstrate a computer model for simulating a dual-rate burst-mode receiver that can readily distinguish bit rates of 1.25 Gbit/s and 10.3 Gbit/s and demodulate data bursts with large power variations of above 5 dB. To our knowledge, this is the first such model to demodulate data bursts of different bit rates without using any external control signal such as a reset signal or a bit-rate select signal. The model is based on a burst-mode bit rate discrimination circuit (B-BDC) and makes use of a unique preamble sequence attached to each burst to separate out the data bursts with different bit rates. Here, the model is implemented using a combination of the optical system simulation suite OptSim™ and the electrical simulation engine SPICE. The reaction time of the burst-mode receiver model is about 7 ns, which corresponds to less than 8 preamble bits at the bit rate of 1.25 Gbit/s. We believe that an accurate and robust simulation model for high-speed burst-mode transmission in GE-PON systems is indispensable and tremendously speeds up ongoing research in the area, saving much of the time and effort involved in carrying out laboratory experiments, while providing flexibility in the optimization of various system parameters for better performance of the receiver as a whole. Furthermore, we also study the effects of burst specifications, such as the length of the preamble sequence, and of other receiver design parameters on the reaction time of the receiver.

  10. Rain-rate data base development and rain-rate climate analysis

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.

    1993-01-01

    The single-year rain-rate distribution data available within the archives of the Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island was processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. These were also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distributions (edfs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set the parameters. The two-component model provided the best fit to the Wallops Island data, but the Moupfuma model provided the best fit to the Darwin data.
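
    As an illustration of the distribution-fitting step, the snippet below fits a lognormal, one of the three candidate models named above, to synthetic rain-rate samples with SciPy and evaluates an exceedance probability of the kind used in attenuation prediction; all numbers are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    rain = rng.lognormal(mean=1.0, sigma=0.9, size=2000)   # synthetic rates (mm/h)

    # Fit a lognormal with location pinned at zero (rain rates are positive)
    shape, loc, scale = stats.lognorm.fit(rain, floc=0)
    print("sigma =", shape, "median =", scale)

    # Exceedance probability P(R > r), the quantity mapped in such studies
    r = 25.0
    print("P(R > 25 mm/h) =", stats.lognorm.sf(r, shape, loc, scale))
    ```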

  11. Dynamics of a network-based SIS epidemic model with nonmonotone incidence rate

    NASA Astrophysics Data System (ADS)

    Li, Chun-Hsien

    2015-06-01

    This paper studies the dynamics of a network-based SIS epidemic model with a nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological effect of certain diseases spreading in a contact network at high infective levels. We first find a threshold value for the transmission rate. This value completely determines the dynamics of the model and, interestingly, the threshold does not depend on the functional form of the nonlinear incidence rate. Furthermore, if the transmission rate is less than or equal to the threshold value, the disease will die out. Otherwise, it will be permanent. Numerical experiments are given to illustrate the theoretical results. We also consider the effect of the nonlinear incidence on the epidemic dynamics.
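
    A minimal simulation sketch of such a model follows. The abstract does not give its incidence function; the snippet assumes a standard nonmonotone choice, an infection force suppressed by 1/(1 + alpha*rho^2) at global prevalence rho (the "psychological effect"), on an Erdős-Rényi contact network, with all parameter values invented.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(2)
    N = 500
    G = nx.erdos_renyi_graph(N, 0.02, seed=2)
    beta, alpha, gamma = 0.08, 20.0, 0.1
    infected = rng.random(N) < 0.05               # 5% initial prevalence

    for _ in range(300):
        rho = infected.mean()
        beta_eff = beta / (1.0 + alpha * rho**2)  # nonmonotone incidence factor
        nxt = infected.copy()
        for i in G.nodes:
            if infected[i]:
                if rng.random() < gamma:          # recovery: back to susceptible
                    nxt[i] = False
            else:
                k = sum(infected[j] for j in G.neighbors(i))
                if k and rng.random() < 1 - (1 - beta_eff) ** k:
                    nxt[i] = True
        infected = nxt

    print("steady-state prevalence ~", infected.mean())
    ```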

  12. Estimation of inlet flow rates for image-based aneurysm CFD models: where and how to begin?

    PubMed

    Valen-Sendstad, Kristian; Piccinelli, Marina; KrishnankuttyRema, Resmi; Steinman, David A

    2015-06-01

    Patient-specific flow rates are rarely available for image-based computational fluid dynamics models. Instead, flow rates are often assumed to scale according to the diameters of the arteries of interest. Our goal was to determine how choice of inlet location and scaling law affect such model-based estimation of inflow rates. We focused on 37 internal carotid artery (ICA) aneurysm cases from the Aneurisk cohort. An average ICA flow rate of 245 mL/min was assumed from the literature, and then rescaled for each case according to its inlet diameter squared (assuming a fixed velocity) or cubed (assuming a fixed wall shear stress). Scaling was based on diameters measured at various consistent anatomical locations along the models. Choice of location introduced a modest 17% average uncertainty in model-based flow rate, but within individual cases estimated flow rates could vary by >100 mL/min. A square law was found to be more consistent with physiological flow rates than a cube law. Although the impact of parent artery truncation on downstream flow patterns is well studied, our study highlights a more insidious and potentially equal impact of truncation site and scaling law on the uncertainty of assumed inlet flow rates and thus, potentially, downstream flow patterns.
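
    The two candidate scaling laws are one line each. In the sketch below, q_ref = 245 mL/min is the literature average cited in the abstract, while the reference diameter d_ref_mm is a hypothetical value introduced only to make the example concrete.

    ```python
    def inlet_flow_estimate(d_inlet_mm, d_ref_mm=4.0, q_ref=245.0, law="square"):
        """Rescale an average ICA flow rate (mL/min) by inlet diameter:
        square law (fixed velocity, Q ~ D^2) or cube law (fixed WSS, Q ~ D^3)."""
        n = 2 if law == "square" else 3
        return q_ref * (d_inlet_mm / d_ref_mm) ** n

    print(inlet_flow_estimate(4.8, law="square"))   # ~353 mL/min
    print(inlet_flow_estimate(4.8, law="cube"))     # ~423 mL/min
    ```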

  13. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate greater uncertainty in the fault slip-rate parameter that controls earthquake-activity rates than was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. The models considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, to the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  14. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

    We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, which is an ~1 mm/yr increase over the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10^19 N·m/yr, with fast straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10^19 N·m/yr.

  15. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    PubMed

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right-censored, and the expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right-censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence. © The Author(s) 2013.
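
    For a flavor of the modeling (without reproducing the paper's EM steps or its Conway-Maxwell-Poisson machinery), the sketch below fits the standard mixture cure model, one of the special cases the abstract alludes to, to synthetic right-censored Weibull data by direct likelihood maximization rather than EM.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def negloglik(params, t, event):
        """Mixture cure model: S_pop(t) = pi + (1 - pi) * S_weibull(t).
        params = (logit cure fraction, log shape, log scale)."""
        pi = 1 / (1 + np.exp(-params[0]))
        k, lam = np.exp(params[1]), np.exp(params[2])
        Sw = np.exp(-(t / lam) ** k)
        fw = (k / lam) * (t / lam) ** (k - 1) * Sw
        ll = np.where(event == 1,
                      np.log((1 - pi) * fw),        # observed recurrences
                      np.log(pi + (1 - pi) * Sw))   # right-censored
        return -ll.sum()

    # Synthetic data: 30% cured, Weibull(k=1.5, scale=2) lifetimes, censoring at 5
    rng = np.random.default_rng(3)
    n = 1000
    cured = rng.random(n) < 0.3
    t_true = np.where(cured, np.inf, 2.0 * rng.weibull(1.5, n))
    t, event = np.minimum(t_true, 5.0), (t_true <= 5.0).astype(int)

    res = minimize(negloglik, x0=[0.0, 0.0, 0.0], args=(t, event),
                   method="Nelder-Mead")
    print("estimated cure fraction:", 1 / (1 + np.exp(-res.x[0])))
    ```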

  16. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    DOE PAGES

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; ...

    2016-06-14

    In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects, while the pressure-dependent yield is obtained through the pressure-dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature- and strain rate-dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure-dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.

  17. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    EPA Science Inventory

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  18. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from the laboratory to the field. The additivity model assumes that reaction properties in a sediment, including surface area, reactive site concentration, reaction rate, and extent, can be predicted from the field-scale grain size distribution by linearly adding the reaction properties of individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants, using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The results indicated that the additivity model provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
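
    The additivity assumption itself is just a weighted sum over grain-size fractions, as sketched below with invented numbers. As the study found, rate constants are not directly scalable this way, so the snippet illustrates the linear-additivity idea for bulk properties only.

    ```python
    import numpy as np

    def additive_property(mass_fractions, props):
        """Grain-size additivity: composite property = sum_i f_i * p_i."""
        f = np.asarray(mass_fractions, float)
        return float(np.dot(f / f.sum(), props))

    # Hypothetical sediment with three size classes, incl. 2-8 mm gravel
    mass_frac = [0.55, 0.30, 0.15]
    site_conc = [3.2e-6, 1.1e-6, 0.4e-6]   # reactive site conc. (mol/g), invented
    print(additive_property(mass_frac, site_conc))
    ```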

  19. The contagious nature of imprisonment: an agent-based model to explain racial disparities in incarceration rates.

    PubMed

    Lum, Kristian; Swarup, Samarth; Eubank, Stephen; Hawdon, James

    2014-09-06

    We build an agent-based model of incarceration based on the susceptible-infected-susceptible (SIS) model of infectious disease propagation. Our central hypothesis is that the observed racial disparities in incarceration rates between Black and White Americans can be explained as the result of differential sentencing between the two demographic groups. We demonstrate that if incarceration can be spread through a social influence network, then even relatively small differences in sentencing can result in large disparities in incarceration rates. Controlling for effects of transmissibility, susceptibility and influence network structure, our model reproduces the observed large disparities in incarceration rates given the differences in sentence lengths for White and Black drug offenders in the USA without extensive parameter tuning. We further establish the suitability of the SIS model as applied to incarceration by demonstrating that the observed structural patterns of recidivism are an emergent property of the model. In fact, our model shows a remarkably close correspondence with California incarceration data. This work advances efforts to combine the theories and methods of epidemiology and criminology.
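
    A stripped-down version of such a model is sketched below: incarceration spreads along a social network like an SIS infection, the two groups differ only in sentence length, and the prevalence gap is read off at the end. The network type, rates and sentence lengths are invented for illustration, not the paper's calibrated values.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(4)
    N = 1000
    G = nx.watts_strogatz_graph(N, 6, 0.1, seed=4)
    group = rng.random(N) < 0.5                 # two demographic groups
    sentence = np.where(group, 36, 24)          # differential sentences (months)

    incarcerated = np.zeros(N, bool)
    release = np.zeros(N, int)
    seed_nodes = rng.choice(N, 20, replace=False)
    incarcerated[seed_nodes] = True
    release[seed_nodes] = sentence[seed_nodes]
    p_transmit = 0.01                           # per-neighbor monthly risk

    for month in range(1, 361):
        newly = []
        for i in G.nodes:
            if incarcerated[i]:
                if month >= release[i]:
                    incarcerated[i] = False     # release: back to susceptible
            else:
                k = sum(incarcerated[j] for j in G.neighbors(i))
                if k and rng.random() < 1 - (1 - p_transmit) ** k:
                    newly.append(i)
        for i in newly:
            incarcerated[i] = True
            release[i] = month + sentence[i]

    for flag, name in [(True, "36-month group"), (False, "24-month group")]:
        print(name, incarcerated[group == flag].mean())
    ```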

  20. Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model

    NASA Astrophysics Data System (ADS)

    Anderson, K. R.

    2016-12-01

    Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based

  1. Littoral transport rates in the Santa Barbara Littoral Cell: a process-based model analysis

    USGS Publications Warehouse

    Elias, E. P. L.; Barnard, Patrick L.; Brocatus, John

    2009-01-01

    Identification of sediment transport patterns and pathways is essential for sustainable coastal zone management of the heavily modified coastline of Santa Barbara and Ventura County (California, USA). A process-based model application, based on Delft3D Online Morphology, is used to investigate the littoral transport potential along the Santa Barbara Littoral Cell (between Point Conception and Mugu Canyon). An advanced optimization procedure is applied to enable annual sediment transport computations by reducing the ocean wave climate to 10 wave height-direction classes. Modeled littoral transport rates compare well with observed dredging volumes, and erosion or sedimentation hotspots coincide with the modeled divergence and convergence of the transport gradients. Sediment transport rates are strongly dependent on the alongshore variation in wave height due to wave sheltering, diffraction and focusing by the Northern Channel Islands, and on the local orientation of the geologically controlled coastline. Local transport gradients exceed the net eastward littoral transport, and are considered a primary driver for hot-spot erosion.

  2. Interaction of rate- and size-effect using a dislocation density based strain gradient viscoplasticity model

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung N.; Siegmund, Thomas; Tomar, Vikas; Kruzic, Jamie J.

    2017-12-01

    Size effects occur in non-uniformly plastically deformed metals confined to volumes on the micrometer or sub-micrometer scale. Such problems have been well studied using strain gradient rate-independent plasticity theories. Yet, plasticity theories describing the time-dependent behavior of metals in the presence of size effects are presently limited, and there is no consensus about how size effects vary with strain rate or whether there is an interaction between them. This paper introduces a constitutive model which enables the analysis of complex load scenarios, including loading rate sensitivity, creep, relaxation and interactions thereof, under consideration of plastic strain gradient effects. A strain gradient viscoplasticity constitutive model based on the Kocks-Mecking theory of dislocation evolution, namely the strain gradient Kocks-Mecking (SG-KM) model, is established and allows one to capture both rate and size effects, and their interaction. A formulation of the model in the finite element analysis framework is derived. Numerical examples are presented. In a special virtual creep test in the presence of plastic strain gradients, creep rates are found to diminish with specimen size, and are also found to depend on the loading rate in an initial ramp loading step. Stress relaxation in a solid medium containing cylindrical microvoids is predicted to increase with decreasing void radius and with the strain rate in a prior ramp loading step.

  3. Applying constraints on model-based methods: Estimation of rate constants in a second order consecutive reaction

    NASA Astrophysics Data System (ADS)

    Kompany-Zareh, Mohsen; Khoshkam, Maryam

    2013-02-01

    This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA does not absorb in the visible region of interest, and thus a closure rank-deficiency problem does not exist. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were then applied to estimate the rate constants of the kinetic system, according to the Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when appropriate constraints and adjustable initial reagent concentrations were applied.
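
    The concentration-based variant of such a procedure translates naturally into code: integrate the kinetic ODEs for trial rate constants and minimize residuals against measured profiles with a Levenberg-Marquardt least-squares solver. The sketch below uses SciPy and synthetic data; the rate constants, concentrations and noise level are invented.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def rhs(t, c, k1, k2):
        A, B, I, P = c                  # A + B -> I (2nd order), I -> P
        r1 = k1 * A * B
        return [-r1, -r1, r1 - k2 * I, k2 * I]

    def simulate(k, c0, times):
        sol = solve_ivp(rhs, (0, times[-1]), c0, args=tuple(k),
                        t_eval=times, rtol=1e-8)
        return sol.y

    # Synthetic "measured" intermediate and product profiles with noise
    k_true, c0 = [0.8, 0.25], [1.0, 1.2, 0.0, 0.0]
    times = np.linspace(0, 20, 60)
    rng = np.random.default_rng(5)
    data = simulate(k_true, c0, times)[2:] + rng.normal(0, 0.005, (2, 60))

    # Concentration-based objective, minimized Levenberg-Marquardt style
    res = least_squares(lambda k: (simulate(k, c0, times)[2:] - data).ravel(),
                        x0=[0.5, 0.5], method="lm")
    print("estimated rate constants:", res.x)
    ```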

  4. Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.

    PubMed

    Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth

    2015-01-01

    This paper investigated the volatility and conditional relationships among inflation rates, exchange rates and interest rates, and constructed models using multivariate GARCH DCC and BEKK specifications with data from Ghana covering January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi against the US dollar from 1990 to 2013 was 7,010.2%, and the yearly weighted depreciation of the cedi against the US dollar for the period was 20.4%. There was evidence that a stable inflation rate does not mean that exchange rates and interest rates are expected to be stable. Rather, when the cedi performs well on the forex market, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates. The DCC model is robust for modelling the conditional and unconditional correlations among inflation rates, exchange rates and interest rates. The BEKK model, which forecasted high exchange rate volatility for the year 2014, is very robust for modelling exchange rates in Ghana. The mean equation of the DCC model is also robust for forecasting inflation rates in Ghana.
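
    The first stage of a DCC fit is a univariate GARCH(1,1) model per series; the standardized residuals then drive the correlation dynamics, which are omitted here. A minimal, self-contained sketch of that building block on simulated data (not the Ghanaian series) follows.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def garch11_negloglik(params, r):
        """Gaussian GARCH(1,1): s2_t = w + a*r_{t-1}^2 + b*s2_{t-1}."""
        w, a, b = params
        if w <= 0 or a < 0 or b < 0 or a + b >= 1:
            return np.inf                      # enforce stationarity
        s2 = np.empty_like(r)
        s2[0] = r.var()
        for t in range(1, len(r)):
            s2[t] = w + a * r[t - 1] ** 2 + b * s2[t - 1]
        return 0.5 * np.sum(np.log(2 * np.pi) + np.log(s2) + r ** 2 / s2)

    # Simulate a GARCH(1,1) series of monthly shocks
    rng = np.random.default_rng(6)
    w, a, b, n = 0.05, 0.10, 0.85, 3000
    r, s2 = np.empty(n), w / (1 - a - b)
    for t in range(n):
        r[t] = rng.normal(0, np.sqrt(s2))
        s2 = w + a * r[t] ** 2 + b * s2

    res = minimize(garch11_negloglik, x0=[0.1, 0.1, 0.8], args=(r,),
                   method="Nelder-Mead")
    print("w, a, b =", res.x)
    ```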

  5. Benzene patterns in different urban environments and a prediction model for benzene rates based on NOx values

    NASA Astrophysics Data System (ADS)

    Paz, Shlomit; Goldstein, Pavel; Kordova-Biezuner, Levana; Adler, Lea

    2017-04-01

    Exposure to benzene has been associated with multiple severe impacts on health. Nevertheless, at most monitoring stations, benzene is not monitored on a regular basis. The aims of the study were to compare benzene rates in different urban environments (a region with heavy traffic and an industrial region), to analyse the relationship between benzene and meteorological parameters in a Mediterranean climate, to estimate the linkages between benzene and NOx, and to suggest a prediction model for benzene rates based on NOx levels in order to contribute to a better estimation of benzene. Data were used from two different monitoring stations located on the eastern Mediterranean coast: 1) a traffic monitoring station in Tel Aviv, Israel (TLV), located in an urban region with heavy traffic; 2) a general air quality monitoring station in Haifa Bay (HIB), located in Israel's main industrial region. At each station, hourly, daily, monthly, seasonal, and annual data on benzene, NOx, mean temperature, relative humidity, inversion level, and temperature gradient were analysed over three years: 2008, 2009, and 2010. A prediction model for benzene rates based on NOx levels (which are monitored regularly) was developed to contribute to a better estimation of benzene. The severity of benzene pollution was found to be considerably higher at the traffic monitoring station (TLV) than at the general air quality station (HIB), despite the location of the latter in an industrial area. Hourly, daily, monthly, seasonal, and annual patterns were shown to coincide with anthropogenic activities (traffic), the day of the week, and atmospheric conditions. A strong correlation between NOx and benzene allowed the development of a prediction model for benzene rates based on NOx, the day of the week, and the month. The model succeeded in predicting the benzene values throughout the year (except for September).
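
    A regression of this shape, benzene on NOx plus day-of-week and month terms, can be set up in a few lines. The sketch below uses statsmodels on synthetic hourly data; variable names, coefficients and the data-generating process are placeholders, not the paper's fitted model.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    idx = pd.date_range("2008-01-01", periods=5000, freq="h")
    nox = rng.gamma(4, 10, len(idx))                       # synthetic NOx (ppb)
    df = pd.DataFrame({
        "nox": nox,
        "benzene": 0.02 * nox + 0.3 * np.sin(2 * np.pi * idx.month / 12)
                   + rng.normal(0, 0.1, len(idx)),
        "dow": idx.dayofweek,
        "month": idx.month,
    })

    # Benzene predicted from NOx plus categorical day-of-week and month terms
    model = smf.ols("benzene ~ nox + C(dow) + C(month)", data=df).fit()
    print(model.params["nox"], model.rsquared)
    ```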

  6. 3D modeling and characterization of a calorimetric flow rate sensor for sweat rate sensing applications

    NASA Astrophysics Data System (ADS)

    Iftekhar, Ahmed Tashfin; Ho, Jenny Che-Ting; Mellinger, Axel; Kaya, Tolga

    2017-03-01

    Sweat-based physiological monitoring has been intensively explored in the last decade with the hope of developing real-time hydration-monitoring devices. Although the content of sweat (electrolytes, lactate, urea, etc.) provides significant information about physiology, it is also very important to know the sweat rate at the time of sweat content measurements, because the sweat rate is known to alter the concentrations of sweat compounds. We developed a calorimetric flow rate sensor using polydimethylsiloxane that is suitable for sweat rate applications. Our simple approach of using temperature-based flow rate detection can easily be adapted to multiple sweat collection and analysis devices. Moreover, we have developed a 3D finite element analysis model of the device using COMSOL Multiphysics™ and verified the flow rate measurements. The experiment investigated flow rate values from 0.3 μl/min up to 2.1 ml/min, which covers the human sweat rate range (0.5-10 μl/min). The 3D model simulations and analytical model calculations covered an even wider range in order to understand the main physical mechanisms of the device. With a verified 3D model, different environmental heat conditions could be further studied to shed light on the physiology of the sweat rate.

  7. Recovery after treatment and sensitivity to base rate.

    PubMed

    Doctor, J N

    1999-04-01

    Accurate classification of patients as having recovered after psychotherapy depends largely on the base rate of such recovery. This article presents methods for classifying participants as recovered after therapy; the approach described here incorporates the base rate into the statistical model. These methods can be applied to psychotherapy outcome data for two purposes: (a) to determine the robustness of a data set to differing base-rate assumptions and (b) to formulate an appropriate cutoff that is beyond the range of cases that are not robust to plausible base-rate assumptions. The discussion addresses a fundamental premise underlying the study of recovery after psychotherapy.
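
    The dependence on base rate follows directly from Bayes' rule, as the snippet below shows: the same instrument (the sensitivity and specificity of 0.85 are invented values) supports very different conclusions about an individual classified as recovered, depending on the assumed base rate.

    ```python
    def posterior_recovered(base_rate, sensitivity, specificity):
        """P(truly recovered | classified as recovered), via Bayes' rule."""
        p_true = sensitivity * base_rate
        p_false = (1 - specificity) * (1 - base_rate)
        return p_true / (p_true + p_false)

    for br in (0.2, 0.5, 0.8):
        print(br, round(posterior_recovered(br, 0.85, 0.85), 3))
    # -> 0.586, 0.850, 0.958: the cutoff's meaning shifts with the base rate
    ```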

  8. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field-programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (error/particle/cm^2), while the MTTF is approximately 110.7 h.
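
    The reported figures support a quick sanity check, sketched below. The particle flux is an assumed value chosen so the arithmetic lands near the reported MTTF; it is not taken from the paper.

    ```python
    # Back-of-envelope check of the reported SFER/MTTF figures
    sfer = 1e-3        # functional errors per unit fluence (particle/cm^2)
    flux = 2.5e-3      # ASSUMED accelerated-test flux, particles/cm^2/s
    error_rate = sfer * flux                # functional errors per second
    mttf_hours = 1 / error_rate / 3600
    print(f"MTTF ~ {mttf_hours:.1f} h")     # ~111 h with these assumed numbers
    ```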

  9. [Shock shape representation of sinus heart rate based on cloud model].

    PubMed

    Yin, Wenfeng; Zhao, Jie; Chen, Tiantian; Zhang, Junjian; Zhang, Chunyou; Li, Dapeng; An, Baijing

    2014-04-01

    The present paper analyzes the trend of the sinus RR-interval sequence after a single ventricular premature beat and compares it with the two conventional parameters, turbulence onset (TO) and turbulence slope (TS). Based on acquired samples of sinus heart rate shocks, we use a piecewise linearization method to extract their linear characteristics, and then describe the shock form in natural language through a cloud model. In the acquisition process, we use the exponential smoothing method to forecast the position where the QRS wave may appear, to assist QRS wave detection, and use a template to judge whether the current beat is sinus rhythm. We chose signals from the MIT-BIH Arrhythmia Database to test the effectiveness of the algorithm in Matlab. The results show that our method can correctly detect the changing trend of sinus heart rate. The proposed method achieves real-time detection of sinus rhythm shocks, is simple and easily implemented, and is therefore effective as a supplementary method.
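
    For reference, the two baseline parameters the paper compares against can be computed as below; the formulas are the standard heart rate turbulence definitions, and the RR-interval values are invented.

    ```python
    import numpy as np

    def turbulence_onset(rr_m2, rr_m1, rr_p1, rr_p2):
        """TO (%): rr_m2, rr_m1 are the two sinus RR intervals before the
        premature beat; rr_p1, rr_p2 follow the compensatory pause."""
        return 100 * ((rr_p1 + rr_p2) - (rr_m2 + rr_m1)) / (rr_m2 + rr_m1)

    def turbulence_slope(rr_after):
        """TS: steepest regression slope over any 5 consecutive sinus RR
        intervals among the first ~15 after the premature beat (ms/beat)."""
        return max(np.polyfit(range(5), rr_after[i:i + 5], 1)[0]
                   for i in range(len(rr_after) - 4))

    print(turbulence_onset(800, 805, 780, 790))   # negative TO: early speed-up
    print(turbulence_slope([780, 790, 800, 812, 818, 825, 830, 828,
                            826, 824, 822, 820, 818, 816, 815]))
    ```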

  10. A novel epidemic spreading model with decreasing infection rate based on infection times

    NASA Astrophysics Data System (ADS)

    Huang, Yunhan; Ding, Li; Feng, Yun

    2016-02-01

    A new epidemic spreading model in which individuals can be infected repeatedly is proposed in this paper. The infection rate decreases according to the number of times an individual has been infected before. This phenomenon may be caused by immunity or by the heightened alertness of individuals. We introduce a new parameter, called the decay factor, to evaluate the decrease of the infection rate. Our model bridges the Susceptible-Infected-Susceptible (SIS) model and the Susceptible-Infected-Recovered (SIR) model through this parameter. The proposed model has been studied by Monte Carlo numerical simulation. It is found that the initial infection rate has a greater impact on the peak value than the decay factor. The effect of the decay factor on the final density and the outbreak threshold is dominant but weakens significantly when birth and death rates are considered. Besides, simulation results show that the influence of birth and death rates on the final density is non-monotonic in some circumstances.
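
    The decay-factor mechanism reduces to one line, sketched below: the infection rate after k prior infections is beta_k = beta0 * delta^k, so delta = 1 recovers SIS behavior (no acquired protection) and delta = 0 recovers SIR behavior (full protection after the first infection).

    ```python
    def infection_rate(beta0, delta, k):
        """Infection rate after k previous infections."""
        return beta0 * delta ** k

    for delta in (1.0, 0.5, 0.0):   # SIS limit, intermediate, SIR limit
        print(delta, [round(infection_rate(0.3, delta, k), 4) for k in range(4)])
    ```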

  11. A New Global Geodetic Strain Rate Model

    NASA Astrophysics Data System (ADS)

    Kreemer, C. W.; Klein, E. C.; Blewitt, G.; Shen, Z.; Wang, M.; Chamot-Rooke, N. R.; Rabaute, A.

    2012-12-01

    As part of the Global Earthquake Model (GEM) effort to improve global seismic hazard models, we present a new global geodetic strain rate model. This model (GSRM v. 2) is a vast improvement over the previous model from 2004 (v. 1.2). The model is still based on a finite-element type approach and has deforming cells in between the assumed rigid plates. While v. 1.2 contained ~25,000 deforming cells of 0.6° by 0.5° dimension, the new model contains >136,000 cells of 0.25° by 0.2° dimension. We redefined the geometries of the deforming zones based on the definitions of Bird (2003) and Chamot-Rooke and Rabaute (2006). We made some adjustments to the grid geometry at places where seismicity and/or GPS velocities suggested the presence of deforming areas where those previous studies did not. As a result, some plates/blocks identified by Bird (2003) were assumed to deform, and the total number of plates and blocks in GSRM v.2 is 38 (including the Bering block, which Bird (2003) did not consider). GSRM v.1.2 was based on ~5,200 GPS velocities, taken from 86 studies. The new model is based on ~17,000 GPS velocities, taken from 170 studies. The GPS velocity field consists of 1) ~4,900 velocities derived by us for GPS stations with publicly available RINEX data and >3.5 years of data, 2) ~1,200 velocities for China from a new analysis of all CMONOC data, and 3) velocities published in the literature or made otherwise available to us. All studies were combined into the same reference frame by a 6-parameter transformation using velocities at collocated stations. Because the goal of the project is to model the interseismic strain rate field, we model co-seismic jumps while estimating velocities, ignore periods of post-seismic deformation, and exclude time-series that reflect magmatic and anthropogenic activity. GPS velocities were used to estimate angular velocities for most of the 38 rigid plates and blocks (the rest being taken from the literature), and these were used as boundary

  12. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field-programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (error/particle/cm^2), while the MTTF is approximately 110.7 h. PMID:23091030

  13. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    PubMed

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion, i.e., variability of the rate parameter such that the variance exceeds the mean. Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
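
    The test-and-correct workflow maps onto standard GLM tooling, as sketched below on synthetic person-years data: a Poisson rate model, a quasi-Poisson correction (Pearson-based scale) and a negative binomial fit, with the dispersion statistic printed. This is a schematic of the compared approaches, not the authors' code.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n = 2000
    x = rng.normal(size=n)
    py = rng.uniform(0.5, 5.0, n)                    # person-years at risk
    mu = py * np.exp(-3 + 0.4 * x)
    y = rng.negative_binomial(2, 2 / (2 + mu))       # overdispersed counts

    X = sm.add_constant(x)
    pois = sm.GLM(y, X, family=sm.families.Poisson(), exposure=py).fit()
    quasi = sm.GLM(y, X, family=sm.families.Poisson(), exposure=py).fit(scale="X2")
    nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(), exposure=py).fit()

    # Pearson dispersion: ~1 if equidispersed, >1 signals overdispersion
    print("dispersion:", pois.pearson_chi2 / pois.df_resid)
    print("slope SEs (Poisson, quasi, NB):", pois.bse[1], quasi.bse[1], nb.bse[1])
    ```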

  14. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency-decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention-allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  15. Exploring the relationships among performance-based functional ability, self-rated disability, perceived instrumental support, and depression: a structural equation model analysis.

    PubMed

    Weil, Joyce; Hutchinson, Susan R; Traxler, Karen

    2014-11-01

    Data from the Women's Health and Aging Study were used to test a model of factors explaining depressive symptomology. The primary purpose of the study was to explore the association between performance-based measures of functional ability and depression and to examine the role of self-rated physical difficulties and perceived instrumental support in mediating the relationship between performance-based functioning and depression. The inclusion of performance-based measures allows for the testing of functional ability as a clinical precursor to disability and depression: a critical, but rarely examined, association in the disablement process. Structural equation modeling supported the overall fit of the model and found an indirect relationship between performance-based functioning and depression, with perceived physical difficulties serving as a significant mediator. Our results highlight the complementary nature of performance-based and self-rated measures and the importance of including perception of self-rated physical difficulties when examining depression in older persons. © The Author(s) 2014.

  16. The relationship-based care model: evaluation of the impact on patient satisfaction, length of stay, and readmission rates.

    PubMed

    Cropley, Stacey

    2012-06-01

    The objective of this study was to assess the impact of the implementation of the relationship-based care (RBC) model on patient satisfaction, length of stay, and readmission rates in hospitalized patients. The RBC model promotes organizational viability in critical areas that measure success, including clinical quality, patient satisfaction, and robust financial standing. A retrospective secondary analysis of aggregate patient satisfaction data, length of stay, and readmission rates at a rural Texas hospital was reviewed for the years 2009 and 2010, comparing preimplementation data for 2009 with postimplementation data for 2010. The data support the positive impact of the RBC model. A negative correlation was noted with readmission rates, and a concomitant positive correlation with length of stay. Overall satisfaction with nursing did not reveal a significant correlation with the new care model. The RBC model supports a patient-centered, collaborative care environment, maximizing potential reimbursement.

  17. Dislocation mechanism based model for stage II fatigue crack propagation rate

    NASA Technical Reports Server (NTRS)

    Mazumdar, P. K.

    1986-01-01

    Repeated plastic deformation, which of course depends on dislocation mechanisms, at or near the crack tip leads to fatigue crack propagation. By invoking the theory of thermally activated flow and the cumulative plastic strain criterion, an effort is made here to model the stage II fatigue crack propagation rate in terms of the dislocation mechanism. The model therefore provides the capability to ascertain: (1) the dislocation mechanism (and hence the near-crack-tip microstructures) assisting the crack growth, (2) the relative resistance of dislocation mechanisms to crack growth, and (3) the fracture surface characteristics and their interpretation in terms of the dislocation mechanism. The local microstructure predicted for room-temperature crack growth in copper by this model is in good agreement with experimental results taken from the literature. With regard to the relative stability of dislocation mechanisms such as cross-slip and dislocation intersection, the model suggests an enhancement of the crack growth rate with increasing ease of cross-slip, which in general promotes dislocation cell formation and is common in materials with high stacking fault energy (which produce wavy slip). Cross-slip apparently enhances the crack growth rate by promoting slip irreversibility and fracture surface brittleness to a greater degree.

  18. Constitutive law for seismicity rate based on rate and state friction: Dieterich 1994 revisited.

    NASA Astrophysics Data System (ADS)

    Heimisson, E. R.; Segall, P.

    2017-12-01

    Dieterich [1994] derived a constitutive law for seismicity rate based on rate and state friction, which has been applied widely to aftershocks, earthquake triggering, and induced seismicity in various geological settings. Here, this influential work is revisited and re-derived in a more straightforward manner. By virtue of the new derivation, the model is generalized to include changes in effective normal stress associated with background seismicity. Furthermore, the general case in which the seismicity rate is not constant under a constant stressing rate is formulated. The new derivation directly provides practical integral expressions for the cumulative number of events and the rate of seismicity for an arbitrary stressing history. Arguably, the most prominent limitation of Dieterich's 1994 theory is the assumption that seismic sources do not interact. Here we derive a constitutive relationship that considers source interactions between sub-volumes of the crust, where the stress in each sub-volume is assumed constant. Interactions are considered both under constant stressing rate conditions and for arbitrary stressing histories. This theory can be used to model seismicity rate due to stress changes, or to estimate stress changes using observed seismicity from triggered earthquake swarms, where earthquake interactions and magnitudes are taken into account. We identify special conditions under which the influence of interactions cancels and the predictions reduce to those of Dieterich [1994]. This remarkable result may explain the apparent success of the model when applied to observations of triggered seismicity. This approach has application to understanding and modeling induced and triggered seismicity, and to the quantitative interpretation of geodetic and seismic data. It enables simultaneous modeling of geodetic and seismic data in a self-consistent framework. To date, physics-based modeling of seismicity with or without geodetic data has been found to give insight into various processes
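
    The commonly quoted closed form of Dieterich's [1994] law, for the seismicity rate following a stress step Δτ under an otherwise constant stressing rate, is sketched below. Symbols follow the usual notation (Aσ, background rate r, aftershock decay time ta = Aσ / stressing rate); all numbers are illustrative.

    ```python
    import numpy as np

    def seismicity_rate(t, dtau, A_sigma, r, stressing_rate):
        """Dieterich (1994) rate after a stress step dtau at t = 0."""
        ta = A_sigma / stressing_rate              # aftershock decay time
        gamma = (np.exp(-dtau / A_sigma) - 1.0) * np.exp(-t / ta) + 1.0
        return r / gamma

    t = np.logspace(-3, 2, 6)                      # years after the step
    print(seismicity_rate(t, dtau=0.5, A_sigma=0.1, r=1.0, stressing_rate=0.01))
    # Rate jumps by ~exp(dtau / A_sigma) at t = 0, then relaxes back to r over ta
    ```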

  19. A comparison between rate-and-state friction and microphysical models, based on numerical simulations of fault slip

    NASA Astrophysics Data System (ADS)

    van den Ende, M. P. A.; Chen, J.; Ampuero, J.-P.; Niemeijer, A. R.

    2018-05-01

    Rate-and-state friction (RSF) is commonly used for the characterisation of laboratory friction experiments, such as velocity-step tests. However, the RSF framework provides little physical basis for the extrapolation of these results to the scales and conditions of natural fault systems, and so open questions remain regarding the applicability of the experimentally obtained RSF parameters for predicting seismic cycle transients. As an alternative to classical RSF, microphysics-based models offer means for interpreting laboratory and field observations, but are generally over-simplified with respect to heterogeneous natural systems. In order to bridge the temporal and spatial gap between the laboratory and nature, we have implemented existing microphysical model formulations into an earthquake cycle simulator. Through this numerical framework, we make a direct comparison between simulations exhibiting RSF-controlled fault rheology, and simulations in which the fault rheology is dictated by the microphysical model. Even though the input parameters for the RSF simulation are directly derived from the microphysical model, the microphysics-based simulations produce significantly smaller seismic event sizes than the RSF-based simulation, and suggest a more stable fault slip behaviour. Our results reveal fundamental limitations in using classical rate-and-state friction for the extrapolation of laboratory results. The microphysics-based approach offers a more complete framework in this respect, and may be used for a more detailed study of the seismic cycle in relation to material properties and fault zone pressure-temperature conditions.

  20. Agent-based mathematical modeling as a tool for estimating Trypanosoma cruzi vector-host contact rates.

    PubMed

    Yong, Kamuela E; Mubayi, Anuj; Kribs, Christopher M

    2015-11-01

    The parasite Trypanosoma cruzi, spread by triatomine vectors, affects over 100 mammalian species throughout the Americas, including humans, in whom it causes Chagas' disease. In the U.S., only a few autochthonous cases have been documented in humans, but prevalence is high in sylvatic hosts (primarily raccoons in the southeast and woodrats in Texas). Sylvatic transmission of T. cruzi occurs when the vector species Triatoma sanguisuga and Triatoma gerstaeckeri bite their preferred hosts, creating multiple interacting vector-host cycles. The goal of this study is to quantify the rate of contacts between different host and vector species native to Texas using an agent-based model framework. The contact rates, which represent bites, are required to estimate transmission coefficients, which can be applied to models of infection dynamics. In addition to quantitative estimates, results confirm host irritability (in conjunction with host density) and vector starvation thresholds and dispersal as determining factors for vector density as well as host-vector contact rates. Copyright © 2015 Elsevier B.V. All rights reserved.
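
    A toy illustration of the agent-based bite-counting idea is sketched below; the starvation threshold and the bite-interruption probability stand in for the host irritability and vector starvation effects named in the abstract, and every parameter value is a made-up placeholder rather than a value from the study.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_contacts(n_vectors=200, n_hosts=20, days=30,
                              starvation_threshold=5, bite_prob=0.3):
            """Toy agent-based bite counter: a hungry vector attempts one feed per
            day; host irritability can defeat the bite. Returns bites/host/day."""
            hunger = np.zeros(n_vectors, dtype=int)
            contacts = np.zeros(n_hosts, dtype=int)
            for _ in range(days):
                hunger += 1
                seeking = hunger >= starvation_threshold
                hosts = rng.integers(0, n_hosts, size=seeking.sum())
                fed = rng.random(seeking.sum()) < bite_prob   # irritability may interrupt
                np.add.at(contacts, hosts[fed], 1)
                hunger[np.flatnonzero(seeking)[fed]] = 0      # fed vectors reset hunger
            return contacts / days

        print(simulate_contacts())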

  1. Micromechanical modeling of rate-dependent behavior of Connective tissues.

    PubMed

    Fallah, A; Ahmadian, M T; Firozbakhsh, K; Aghdam, M M

    2017-03-07

    In this paper, a constitutive and micromechanical model for prediction of the rate-dependent behavior of connective tissues (CTs) is presented. Connective tissues are treated as nonlinear viscoelastic materials. The rate-dependent behavior of CTs is incorporated into the model using the well-known quasi-linear viscoelasticity (QLV) theory. A planar wavy representative volume element (RVE) is considered, based on histological evidence of the tissue microstructure. The model parameters are identified from experiments available in the literature. The constitutive model was implemented in ABAQUS by means of a UMAT subroutine. Results show that monotonic uniaxial test predictions of the presented model at different strain rates for rat tail tendon (RTT) and human patellar tendon (HPT) are in good agreement with experimental data. Results of incremental stress-relaxation tests are also presented to investigate both the instantaneous and the viscoelastic behavior of connective tissues. Copyright © 2017 Elsevier Ltd. All rights reserved.
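
    The QLV heredity integral that carries the rate dependence, sigma(t) = integral of G(t-s) * (d sigma_e/d eps) * deps/ds over [0, t], can be evaluated numerically in a few lines. The sketch below assumes an exponential elastic response sigma_e = A*(exp(B*eps) - 1) and a single-branch reduced relaxation function; both are common simplifications, not the paper's calibrated forms.

        import numpy as np

        def qlv_stress(times, strain, A=1.0, B=20.0, c=0.05, tau=10.0):
            """Quasi-linear viscoelastic stress via the heredity integral.

            Elastic response: sigma_e = A*(exp(B*eps) - 1).
            Reduced relaxation G(t): single-branch Prony-like form, a common
            simplification of Fung's continuous relaxation spectrum.
            """
            G = lambda t: (1.0 + c * np.exp(-t / tau)) / (1.0 + c)
            dsig_e = A * B * np.exp(B * strain)          # d(sigma_e)/d(eps)
            deps = np.gradient(strain, times)
            sigma = np.zeros_like(times)
            for i, t in enumerate(times):                # discrete convolution
                s = times[: i + 1]
                sigma[i] = np.trapz(G(t - s) * dsig_e[: i + 1] * deps[: i + 1], s)
            return sigma

        times = np.linspace(0, 1, 500)
        sigma = qlv_stress(times, strain=0.05 * times)   # constant strain rate 0.05/s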

  2. Modelling rating curves using remotely sensed LiDAR data

    USGS Publications Warehouse

    Nathanson, Marcus; Kean, Jason W.; Grabs, Thomas J.; Seibert, Jan; Laudon, Hjalmar; Lyon, Steve W.

    2012-01-01

    Accurate stream discharge measurements are important for many hydrological studies. In remote locations, however, it is often difficult to obtain stream flow information because of the difficulty in making the discharge measurements necessary to define stage-discharge relationships (rating curves). This study investigates the feasibility of defining rating curves by using a fluid mechanics-based model constrained with topographic data from airborne LiDAR scanning. The study was carried out for an 8-m-wide channel in the boreal landscape of northern Sweden. LiDAR data were used to define channel geometry above a low-flow water surface along the 90-m surveyed reach. The channel topography below the water surface was estimated using the simple assumption of a flat streambed. The roughness for the modelled reach was back-calculated from a single measurement of discharge. The topographic and roughness information was then used to model a rating curve. To isolate the potential influence of the flat-bed assumption, a 'hybrid model' rating curve was developed on the basis of data combined from the LiDAR scan and a detailed ground survey. Whereas this hybrid model rating curve was in agreement with the direct measurements of discharge, the LiDAR model rating curve was equally in agreement with the medium and high flow measurements, based on confidence intervals calculated from the direct measurements. The discrepancy between the LiDAR model rating curve and the low flow measurements was likely due to reduced roughness associated with unresolved submerged bed topography. Scanning during periods of low flow can help minimize this deficiency. These results suggest that combined ground surveys and LiDAR scans, or multifrequency LiDAR scans that see 'below' the water surface (bathymetric LiDAR), could be useful in generating the data needed to run such a fluid mechanics-based model. This opens a realm of possibility to remotely sense and monitor stream flows in channels in remote locations.
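
    A much simpler stand-in for the fluid mechanics-based model, Manning's equation, already illustrates the workflow of back-calculating roughness from one gauging and then extrapolating a rating curve. A rectangular section is assumed and all numbers below are illustrative placeholders.

        import numpy as np

        def manning_discharge(stage, width, slope, n, bed=0.0):
            """Manning's equation, rectangular section: Q = (1/n) A R^(2/3) S^(1/2)."""
            depth = np.maximum(stage - bed, 0.0)
            area = width * depth
            radius = area / (width + 2.0 * depth)        # hydraulic radius
            return (1.0 / n) * area * radius ** (2.0 / 3.0) * np.sqrt(slope)

        # Back-calculate roughness from a single gauging (Q scales as 1/n) ...
        width, slope = 8.0, 0.005                        # geometry from a survey (illustrative)
        h_obs, q_obs = 0.4, 0.9                          # one low-flow gauging [m, m3/s]
        n = manning_discharge(h_obs, width, slope, n=1.0) / q_obs

        # ... then extrapolate the rating curve over the stage range of interest
        stages = np.linspace(0.1, 2.0, 50)
        rating = manning_discharge(stages, width, slope, n)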

  3. Generalization of exponential based hyperelastic to hyper-viscoelastic model for investigation of mechanical behavior of rate dependent materials.

    PubMed

    Narooei, K; Arman, M

    2018-03-01

    In this research, the exponential stretched based hyperelastic strain energy was generalized to the hyper-viscoelastic model using the heredity integral of deformation history to take into account the strain rate effects on the mechanical behavior of materials. The heredity integral was approximated by the approach of Goh et al. to determine the model parameters and the same estimation was used for constitutive modeling. To present the ability of the proposed hyper-viscoelastic model, the stress-strain response of the thermoplastic elastomer gel tissue at different strain rates from 0.001 to 100/s was studied. In addition to better agreement between the current model and experimental data in comparison to the extended Mooney-Rivlin hyper-viscoelastic model, a stable material behavior was predicted for pure shear and balance biaxial deformation modes. To present the engineering application of current model, the Kolsky bars impact test of gel tissue was simulated and the effects of specimen size and inertia on the uniform deformation were investigated. As the mechanical response of polyurea was provided over wide strain rates of 0.0016-6500/s, the current model was applied to fit the experimental data. The results were shown more accuracy could be expected from the current research than the extended Ogden hyper-viscoelastic model. In the final verification example, the pig skin experimental data was used to determine parameters of the hyper-viscoelastic model. Subsequently, a specimen of pig skin at different strain rates was loaded to a fixed strain and the change of stress with time (stress relaxation) was obtained. The stress relaxation results were revealed the peak stress increases by applied strain rate until the saturated loading rate and the equilibrium stress with magnitude of 0.281MPa could be reached. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A mathematical framework for yield (vs. rate) optimization in constraint-based modeling and applications in metabolic engineering.

    PubMed

    Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen

    2018-05-01

    The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces; in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small
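
    A minimal sketch of the fractional-to-linear transformation the authors describe (a Charnes-Cooper-style substitution w = t*v with the denominator normalized to one) might look as follows; max_yield is a hypothetical helper written for illustration, and the toy network at the end is invented, not from the paper.

        import numpy as np
        from scipy.optimize import linprog

        def max_yield(S, c, d, A_ub, b_ub):
            """Maximize yield c@v / d@v subject to S@v = 0 and A_ub@v <= b_ub
            via the substitution w = t*v, t > 0, with d@w = 1."""
            n = S.shape[1]
            # Variables: [w (n entries), t]; maximize c@w  ->  minimize -c@w
            obj = np.concatenate([-np.asarray(c, dtype=float), [0.0]])
            A_eq = np.block([[S, np.zeros((S.shape[0], 1))],         # S@w = 0
                             [np.atleast_2d(d), np.zeros((1, 1))]])  # d@w = 1
            b_eq = np.concatenate([np.zeros(S.shape[0]), [1.0]])
            A_ub_t = np.hstack([A_ub, -np.atleast_2d(b_ub).T])       # A@w - b*t <= 0
            b_ub_t = np.zeros(A_ub.shape[0])
            bounds = [(None, None)] * n + [(1e-9, None)]             # t > 0
            res = linprog(obj, A_ub=A_ub_t, b_ub=b_ub_t, A_eq=A_eq, b_eq=b_eq,
                          bounds=bounds, method="highs")
            w, t = res.x[:n], res.x[n]
            return w / t, -res.fun                                   # optimal v and yield

        # Toy example: substrate uptake v1 converted to product v2, 0 <= v <= 10
        S = np.array([[1.0, -1.0]])            # one internal metabolite
        c = np.array([0.0, 1.0])               # numerator: product synthesis rate
        d = np.array([1.0, 0.0])               # denominator: substrate uptake rate
        A_ub = np.vstack([np.eye(2), -np.eye(2)])
        b_ub = np.array([10.0, 10.0, 0.0, 0.0])
        v_opt, y_opt = max_yield(S, c, d, A_ub, b_ub)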

  5. Relaxed Poisson cure rate models.

    PubMed

    Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N

    2016-03-01

    The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al.). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. A New Global Geodetic Strain Rate Model

    NASA Astrophysics Data System (ADS)

    Kreemer, C.; Blewitt, G.; Klein, E. C.; Shen, Z.; Wang, M.; Estey, L.; Wier, S.

    2013-12-01

    As part of the Global Earthquake Model (GEM) effort to improve global seismic hazard models, we present a new global geodetic strain rate model. This model (GSRM v. 2) is a vast improvement on the previous model from 2004 (v. 1.2). The model is still based on a finite-element type approach and has deforming cells in between the assumed rigid plates. The new model contains ~144,700 cells of 0.25° by 0.2° dimension. We redefined the geometries of the deforming zones based on the definitions of Bird (2003) and Chamot-Rooke and Rabaute (2006). We made some adjustments to the grid geometry at places where seismicity and/or GPS velocities suggested either the presence of deforming areas or a rigid block where those previous studies did not. GSRM v.2 includes 50 plates and blocks, including many not considered by Bird (2003). The new GSRM model is based on over 20,700 horizontal geodetic velocities at over 17,000 unique locations. The GPS velocity field consists of: 1) over 6,500 velocities derived by the University of Nevada, Reno, for CGPS stations for which >2.5 years of RINEX data were available until April 2013; 2) ~1,200 velocities for China from a new analysis of all data from the Crustal Movement Network of China (CMONOC); and 3) about 13,000 velocities from 212 studies published in the literature or otherwise made available to us. Velocities from all studies were combined into the same reference frame by a 6-parameter transformation using velocities at collocated stations. We model co-seismic jumps while estimating velocities, ignore periods of post-seismic deformation, and exclude time-series that reflect magmatic and anthropogenic activity. GPS velocities were used to estimate angular velocities for 36 of the 50 rigid plates and blocks (the rest being taken from the literature), and these were used as boundary conditions in the strain rate calculations. For the strain rate calculations we used the method of Haines and Holt. In order to fit the data equally well

  7. A Symmetric Time-Varying Cluster Rate of Descent Model

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2015-01-01

    A model of the time-varying rate of descent of the Orion vehicle was developed based on the observed correlation between canopy projected area and drag coefficient. This initial version of the model assumes cluster symmetry and only varies the vertical component of velocity. The cluster fly-out angle is modeled as a series of sine waves based on flight test data. The projected area of each canopy is synchronized with the primary fly-out angle mode. The sudden loss of projected area during canopy collisions is modeled at minimum fly-out angles, leading to brief increases in rate of descent. The cluster geometry is converted to drag coefficient using empirically derived constants. A more complete model is under development, which computes the aerodynamic response of each canopy to its local incidence angle.

  8. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    The additivity model assumes that field-scale reaction properties in a sediment, including surface area, reactive site concentration, and reaction rate, can be predicted from the field-scale grain-size distribution by linearly adding reaction properties estimated in the laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.

  9. Modeling the dissipation rate in rotating turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Raj, Rishi; Gatski, Thomas B.

    1990-01-01

    A variety of modifications to the modeled dissipation rate transport equation that have been proposed during the past two decades to account for rotational strains are examined. The models are subjected to two crucial test cases: the decay of isotropic turbulence in a rotating frame and homogeneous shear flow in a rotating frame. It is demonstrated that these modifications do not yield substantially improved predictions for these two test cases and in many instances give rise to unphysical behavior. An alternative proposal, based on the use of the tensor dissipation rate, is made for the development of improved models.

  10. Covariates of the Rating Process in Hierarchical Models for Multiple Ratings of Test Items

    ERIC Educational Resources Information Center

    Mariano, Louis T.; Junker, Brian W.

    2007-01-01

    When constructed response test items are scored by more than one rater, the repeated ratings allow for the consideration of individual rater bias and variability in estimating student proficiency. Several hierarchical models based on item response theory have been introduced to model such effects. In this article, the authors demonstrate how these…

  11. Modeling Tetragonal Lysozyme Crystal Growth Rates

    NASA Technical Reports Server (NTRS)

    Gorti, Sridhar; Forsythe, Elizabeth L.; Pusey, Marc L.

    2003-01-01

    Tetragonal lysozyme (110) face crystal growth rates, measured over 5 orders of magnitude in range, can be described using a model where growth occurs by 2D nucleation on the crystal surface for solution supersaturations of c/c_eq ≤ 7 ± 2. Based upon the model, the step energy per unit length, β, was estimated to be approximately 5.3 ± 0.4 × 10⁻⁷ erg/mol-cm, which for a step height of 56 Å corresponds to a barrier of approximately 7 ± 1 k_BT at 300 K. For supersaturations of c/c_eq > 8, the model emphasizing crystal growth by 2D nucleation not only could not predict, but also consistently overestimated, the highest observable crystal growth rates. Kinetic roughening is hypothesized to occur at a cross-over supersaturation of c/c_eq > 8, where crystal growth is postulated to occur by a different process such as adsorption. Under this assumption, all growth rate data indicated a kinetic roughening transition and subsequent crystal growth by adsorption, for all solution conditions varying in buffer pH, temperature and precipitant concentration, for c/c_eq(T, pH, NaCl) in the range between 5 and 10, with an energy barrier for adsorption estimated to be approximately 20 k_BT at 300 K. Based upon these and other estimates, we determined the size of the critical surface nucleate, at the crossover supersaturation and higher concentrations, to range from 4 to 10 molecules.

  12. Comparison of field theory models of interest rates with market data

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Srikant, Marakani

    2004-03-01

    We calibrate and test various variants of field theory models of the interest rate with data from Eurodollar futures. Models based on psychological factors are seen to provide the best fit to the market. We make a model independent determination of the volatility function of the forward rates from market data.

  13. A model of clearance rate regulation in mussels

    NASA Astrophysics Data System (ADS)

    Fréchette, Marcel

    2012-10-01

    Clearance rate regulation has been modelled as an instantaneous response to food availability, independent of the internal state of the animals. This view is incompatible with latent effects during ontogeny and with phenotypic flexibility in clearance rate; internal-state regulation of clearance rate is required to account for these patterns. Here I develop a model of internal-state-based regulation of clearance rate. External factors such as suspended sediments are included in the model. To assess the relative merits of instantaneous regulation and internal-state regulation, I modelled blue mussel clearance rate and growth using a DEB model. In the standard feeding module, feeding is governed by a Holling Type II response to food concentration. In the internal-state feeding module, gill ciliary activity, and thus clearance rate, are driven by the internal reserve level. Factors such as suspended sediments were not included in the simulations. The two feeding modules were compared on the basis of their ability to capture the impact of latent effects, of environmental heterogeneity in food abundance, and of physiological flexibility on clearance rate and individual growth. The Holling feeding module was unable to capture the effect of any of these sources of variability. In contrast, the internal-state feeding module did so without any modification or ad hoc calibration. Latent effects, however, appeared transient. With simple annual variability in temperature and food concentration, the relationship between clearance rate and food availability predicted by the internal-state feeding module was quite similar to that observed in Norwegian fjords. I conclude that, in contrast with the usual Holling feeding module, internal-state regulation of clearance rate is consistent with well-documented growth and clearance rate patterns.

  14. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems differs from that of low data rate systems. Three simulations were built during the development phase of Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling. The first was a SIMSCRIPT model based upon the determination and processing of each event at each node. The second simulation was developed in C, based upon isolating the distinct objects that can be identified as the ring, the message, the node, and the set of critical events. The third model further distilled the basic network functionality by creating a single object, the node, which includes the set of critical events that occur at the node; the ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. The language used was mainly selected by the model developer because of past familiarity. Further, the models were not built with the intent to compare structure or language; rather, because the problem was complex and initial results contained obvious errors, alternative models were built to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand the modeling complexities. Each model is described along with its features and problems. The models are compared, and concluding observations and remarks are presented.

  15. A fuzzy-logic-based model to predict biogas and methane production rates in a pilot-scale mesophilic UASB reactor treating molasses wastewater.

    PubMed

    Turkdogan-Aydinol, F Ilter; Yetilmezsoy, Kaan

    2010-10-15

    A MIMO (multiple inputs and multiple outputs) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables were fuzzified using an artificial-intelligence-based approach: volumetric organic loading rate (OLR), volumetric total chemical oxygen demand (TCOD) removal rate (R(V)), influent alkalinity, influent pH and effluent pH. Trapezoidal membership functions with eight levels were constructed for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in the IF-THEN format. The product (prod) and the centre of gravity (COG, centroid) methods were employed as the inference operator and defuzzification method, respectively. Fuzzy-logic predictions were compared with the outputs of two exponential non-linear regression models derived in this study. The UASB reactor showed remarkable performance in treating molasses wastewater, with an average TCOD removal efficiency of 93 (+/-3)% and an average volumetric TCOD removal rate of 6.87 (+/-3.93) kg TCOD(removed)/m(3)-day. Findings of this study clearly indicated that, compared to the non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited a superior predictive performance in forecasting both biogas and methane production rates, with satisfactory determination coefficients over 0.98. Copyright © 2010 Elsevier B.V. All rights reserved.
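
    Two of the building blocks named in the abstract, trapezoidal membership functions and centre-of-gravity defuzzification, are easy to sketch. The clipped rule consequents and all numbers below are invented placeholders, not the 134 rules of the actual model.

        import numpy as np

        def trapmf(x, a, b, c, d):
            """Trapezoidal membership: rises on [a,b], flat on [b,c], falls on [c,d]."""
            return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

        def centroid(x, mu):
            """Centre-of-gravity (COG) defuzzification of an output fuzzy set."""
            return np.trapz(mu * x, x) / np.trapz(mu, x)

        x = np.linspace(0, 10, 1000)                       # e.g., biogas rate [m3/day]
        mu_out = np.maximum(0.4 * trapmf(x, 1, 2, 3, 4),   # two clipped rule consequents
                            0.8 * trapmf(x, 3, 5, 6, 8))
        print(centroid(x, mu_out))                         # crisp predicted rate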

  16. Prediction of mortality rates using a model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

    Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution has been applied to mortality data from the United States for the years 1933 till 2000 to forecast mortality rates for the years 2001 till 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed, in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.

  17. The Topp-Leone generalized Rayleigh cure rate model and its application

    NASA Astrophysics Data System (ADS)

    Nanthaprut, Pimwarat; Bodhisuwan, Winai; Patummasut, Mena

    2017-11-01

    A cure rate model is a survival analysis model that accounts for the proportion of censored subjects who will never experience the event of interest. In clinical trials, data representing the time to recurrence of an event or to death of patients are used to improve the efficiency of treatments. Each dataset can be separated into two groups: censored and uncensored data. In this work, a new mixture cure rate model based on the Topp-Leone generalized Rayleigh distribution is introduced. A Bayesian approach is employed to estimate its parameters. In addition, a breast cancer dataset is analyzed for model illustration purposes. According to the deviance information criterion, the Topp-Leone generalized Rayleigh cure rate model shows better results than the Weibull and exponential cure rate models.

  18. A multi-species reactive transport model to estimate biogeochemical rates based on single-well push-pull test data

    NASA Astrophysics Data System (ADS)

    Phanikumar, Mantha S.; McGuire, Jennifer T.

    2010-08-01

    Push-pull tests are a popular technique to investigate various aquifer properties and microbial reaction kinetics in situ. Most previous studies have interpreted push-pull test data using approximate analytical solutions to estimate (generally first-order) reaction rate coefficients. Though useful, these analytical solutions may not be able to describe important complexities in rate data. This paper reports the development of a multi-species, radial coordinate numerical model (PPTEST) that includes the effects of sorption, reaction lag time and arbitrary reaction order kinetics to estimate rates in the presence of mixing interfaces such as those created between injected "push" water and native aquifer water. The model has the ability to describe an arbitrary number of species and user-defined reaction rate expressions including Monod/Michaelis-Menten kinetics. The FORTRAN code uses a finite-difference numerical model based on the advection-dispersion-reaction equation and was developed to describe the radial flow and transport during a push-pull test. The accuracy of the numerical solutions was assessed by comparing numerical results with analytical solutions and field data available in the literature. The model described the observed breakthrough data for tracers (chloride and iodide-131) and reactive components (sulfate and strontium-85) well and was found to be useful for testing hypotheses related to the complex set of processes operating near mixing interfaces.

  19. Base-Rate Neglect as a Function of Base Rates in Probabilistic Contingency Learning

    ERIC Educational Resources Information Center

    Kutzner, Florian; Freytag, Peter; Vogel, Tobias; Fiedler, Klaus

    2008-01-01

    When humans predict criterion events based on probabilistic predictors, they often lend excessive weight to the predictor and insufficient weight to the base rate of the criterion event. In an operant analysis, using a matching-to-sample paradigm, Goodie and Fantino (1996) showed that humans exhibit base-rate neglect when predictors are associated…

  20. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base shall...

  1. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base shall...

  2. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base shall...

  3. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base shall...

  4. Ground-Based Remote Retrievals of Cumulus Entrainment Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, Timothy J.; Turner, David D.; Berg, Larry K.

    2013-07-26

    While fractional entrainment rates for cumulus clouds have typically been derived from airborne observations, this limits the size and scope of available data sets. To increase the number of continental cumulus entrainment rate observations available for study, an algorithm for retrieving them from ground-based remote sensing observations has been developed. This algorithm, called the Entrainment Rate In Cumulus Algorithm (ERICA), uses the suite of instruments at the Southern Great Plains (SGP) site of the United States Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility as inputs into a Gauss-Newton optimal estimation scheme, in which an assumed guess of the entrainment rate is iteratively adjusted through intercomparison of modeled liquid water path and cloud droplet effective radius to their observed counterparts. The forward model in this algorithm is the Explicit Mixing Parcel Model (EMPM), a cloud parcel model that treats entrainment as a series of discrete entrainment events. A quantified value for measurement uncertainty is also returned as part of the retrieval. Sensitivity testing and information content analysis demonstrate the robust nature of this method for retrieving accurate observations of the entrainment rate without the drawbacks of airborne sampling. Results from a test of ERICA on three months of shallow cumulus cloud events show significant variability of the entrainment rate of clouds in a single day and from one day to the next. The mean value of 1.06 km⁻¹ for the entrainment rate in this dataset corresponds well with prior observations and simulations of the entrainment rate in cumulus clouds.
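
    The Gauss-Newton optimal-estimation update at the core of such retrievals (a Rodgers-style, prior-regularized iteration) can be sketched generically; the linear toy forward model at the end is only a sanity check, and none of this reproduces ERICA or EMPM.

        import numpy as np

        def gauss_newton_retrieval(forward, x0, y_obs, Se, Sa, xa, max_iter=20, tol=1e-6):
            """Gauss-Newton optimal-estimation retrieval.

            forward : callable returning (y, K) with Jacobian K = dy/dx
            Se, Sa  : measurement and prior covariance matrices
            xa      : prior state; x0 : first guess
            """
            x = np.atleast_1d(np.asarray(x0, dtype=float))
            Se_inv, Sa_inv = np.linalg.inv(Se), np.linalg.inv(Sa)
            for _ in range(max_iter):
                y, K = forward(x)
                lhs = K.T @ Se_inv @ K + Sa_inv
                rhs = K.T @ Se_inv @ (y_obs - y) + Sa_inv @ (xa - x)
                dx = np.linalg.solve(lhs, rhs)
                x = x + dx
                if np.linalg.norm(dx) < tol:
                    break
            return x, np.linalg.inv(lhs)     # retrieved state and posterior covariance

        # Tiny check with a linear forward model y = K x (constant Jacobian)
        K = np.array([[1.0], [2.0]])
        forward = lambda x: (K @ x, K)
        x_hat, S_hat = gauss_newton_retrieval(forward, [0.0], np.array([1.0, 2.1]),
                                              Se=0.01 * np.eye(2), Sa=np.eye(1),
                                              xa=np.array([0.0]))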

  5. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  6. Reacting Chemistry Based Burn Model for Explosive Hydrocodes

    NASA Astrophysics Data System (ADS)

    Schwaab, Matthew; Greendyke, Robert; Steward, Bryan

    2017-06-01

    Currently, in hydrocodes designed to simulate explosive material undergoing shock-induced ignition, the state of the art is to use one of numerous reaction burn rate models. These burn models are designed to estimate the bulk chemical reaction rate. Unfortunately, these models are largely based on empirical data and must be recalibrated for every new material being simulated. We propose that the use of an equilibrium Arrhenius-rate reacting chemistry model in place of these empirically derived burn models will improve the accuracy of these computational codes. Such models have been successfully used in codes simulating the flow physics around hypersonic vehicles. A reacting chemistry model of this form was developed for the cyclic nitramine RDX by the Naval Research Laboratory (NRL). Initial implementation of this chemistry-based burn model has been conducted on the Air Force Research Laboratory's MPEXS multi-phase continuum hydrocode. In its present form, the burn rate is based on the destruction rate of RDX from NRL's chemistry model. Early results using the chemistry-based burn model show promise in capturing deflagration-to-detonation features more accurately in continuum hydrocodes than previously achieved using empirically derived burn models.
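
    The Arrhenius form underlying such a chemistry-based burn rate is straightforward to sketch. The pre-exponential factor and activation energy below are placeholders, not values from the NRL RDX mechanism, and the real mechanism involves multiple reaction steps rather than the single first-order step shown.

        import numpy as np

        R_UNIV = 8.314  # universal gas constant [J/(mol*K)]

        def arrhenius_rate(T, A, Ea):
            """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
            return A * np.exp(-Ea / (R_UNIV * T))

        def bulk_burn_rate(rho_unreacted, T, A=1e16, Ea=2.0e5):
            """Single-step destruction rate of unreacted explosive,
            d(rho)/dt = -k(T) * rho. A and Ea are illustrative placeholders."""
            return -arrhenius_rate(T, A, Ea) * rho_unreacted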

  7. Functional response models to estimate feeding rates of wading birds

    USGS Publications Warehouse

    Collazo, J.A.; Gilliam, J.F.; Miranda-Castro, L.

    2010-01-01

    Forager (predator) abundance may mediate feeding rates in wading birds. Yet, when modeled, feeding rates are typically derived from the purely prey-dependent Holling Type II (HoII) functional response model. Estimates of feeding rates are necessary to evaluate wading bird foraging strategies and their role in food webs; thus, models that incorporate predator dependence warrant consideration. Here, data collected in a mangrove swamp in Puerto Rico in 1994 were reanalyzed, reporting feeding rates for mixed-species flocks after comparing fits of the HoII model, as used in the original work, to the Beddington-DeAngelis (BD) and Crowley-Martin (CM) predator-dependent models. Model CM received the most support (AICc wi = 0.44), but models BD and HoII were plausible alternatives (ΔAICc ≤ 2). Results suggested that feeding rates were constrained by predator abundance. Reductions in rates were attributed to interference, which was consistent with the independently observed increase in aggression as flock size increased (P < 0.05). Substantial discrepancies between the CM and HoII models were possible depending on the flock sizes used to model feeding rates. However, inferences derived from the HoII model, as used in the original work, were sound. While Holling's Type II and other purely prey-dependent models have fostered advances in wading bird foraging ecology, evaluating models that incorporate predator dependence could lead to a more adequate description of data and processes of interest. The mechanistic bases used to derive the models used here lead to biologically interpretable results and advance understanding of wading bird foraging ecology.
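
    The three functional-response forms compared in the study are compact enough to state directly. The interference term is written here with (P - 1) so that a lone forager experiences no interference, one common convention; the parameter values are illustrative, not the fitted estimates.

        import numpy as np

        def holling2(N, a, h):
            """Holling Type II: purely prey-dependent feeding rate."""
            return a * N / (1.0 + a * h * N)

        def beddington_deangelis(N, P, a, h, c):
            """Beddington-DeAngelis: interference enters the denominator additively."""
            return a * N / (1.0 + a * h * N + c * (P - 1.0))

        def crowley_martin(N, P, a, h, c):
            """Crowley-Martin: interference also acts while handling prey."""
            return a * N / ((1.0 + a * h * N) * (1.0 + c * (P - 1.0)))

        # Feeding rate vs. flock size at fixed prey density (illustrative parameters)
        N, flock = 50.0, np.arange(1, 31)
        rates = crowley_martin(N, flock, a=0.1, h=2.0, c=0.05)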

  8. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    PubMed

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
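
    The equal-probability discrete gamma referred to above can be constructed with a few lines of SciPy, using the standard identity (Yang, 1994) that category means of a mean-one gamma follow from the CDF of a gamma with shape alpha + 1.

        import numpy as np
        from scipy.stats import gamma

        def discrete_gamma_rates(alpha, k):
            """Equal-probability discrete gamma rate categories (mean-one gamma).

            Each of the k categories has probability 1/k; the category rate is
            the conditional mean of Gamma(alpha, scale=1/alpha) between
            consecutive quantiles, obtained from the Gamma(alpha + 1) CDF.
            """
            edges = gamma.ppf(np.linspace(0, 1, k + 1), a=alpha, scale=1.0 / alpha)
            upper = gamma.cdf(edges, a=alpha + 1.0, scale=1.0 / alpha)
            return k * np.diff(upper)

        rates = discrete_gamma_rates(alpha=0.5, k=8)
        print(rates, rates.mean())    # category means average to ~1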

  9. Modeling study of seated reach envelopes based on spherical harmonics with consideration of the difficulty ratings.

    PubMed

    Yu, Xiaozhi; Ren, Jindong; Zhang, Qian; Liu, Qun; Liu, Honghao

    2017-04-01

    Reach envelopes are very useful for the design and layout of controls. In building reach envelopes, one of the key problems is to represent the reach limits accurately and conveniently. Spherical harmonics have proved to be an accurate and convenient basis for fitting reach capability envelopes. However, further study is required on which components of the spherical harmonics are needed in fitting the envelope surfaces. For applications in the vehicle industry, an inevitable issue is to construct reach limit surfaces with consideration of the seating positions of the drivers, and it is desirable to use population envelopes rather than individual envelopes. However, it is relatively inconvenient to acquire reach envelopes via a test that accounts for the seating positions of the drivers. In addition, the acquired envelopes are usually unsuitable for use with other vehicle models because they depend on the current cab packaging parameters. It is therefore of great significance to construct reach envelopes for real vehicle conditions based on individual capability data that account for seating positions. Moreover, traditional reach envelopes provide little information regarding the assessment of reach difficulty; providing difficulty-rating information about reach operations will improve design quality. In this paper, using laboratory data on seated reach with consideration of subjective difficulty ratings, a method of modeling reach envelopes based on spherical harmonics is studied. The surface fitting using spherical harmonics is conducted for circumstances both with and without seat adjustments. For use with an adjustable seat, a seating position model is introduced to re-locate the test data. The surface fitting is conducted for both population and individual reach envelopes, as well as for boundary envelopes. Comparison of the envelopes of the adjustable seat and the SAE J287 control reach envelope shows that the latter

  10. Base Rates: Both Neglected and Intuitive

    ERIC Educational Resources Information Center

    Pennycook, Gordon; Trippas, Dries; Handley, Simon J.; Thompson, Valerie A.

    2014-01-01

    Base-rate neglect refers to the tendency for people to underweight base-rate probabilities in favor of diagnostic information. It is commonly held that base-rate neglect occurs because effortful (Type 2) reasoning is required to process base-rate information, whereas diagnostic information is accessible to fast, intuitive (Type 1) processing…

  11. Influence of government controls over the currency exchange rate in the evolution of Hurst's exponent: An autonomous agent-based model

    NASA Astrophysics Data System (ADS)

    Chávez Muñoz, Pablo; Fernandes da Silva, Marcus; Vivas Miranda, José; Claro, Francisco; Gomez Diniz, Raimundo

    2007-12-01

    We have studied the behavior of the Hurst index associated with the currency exchange rate in Brazil and Chile. It is shown that this index maps the degree of government control over the exchange rate. A supply-and-demand model based on autonomous agents is proposed that simulates a virtual market of sales and purchases, where buyers and sellers are forced to negotiate through an intermediary. According to this model, the average price of daily transactions corresponds to the theoretical balance predicted by the law of supply and demand. The influence of an added tendency factor is also analyzed.
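
    A rescaled-range (R/S) estimate of the Hurst exponent, the quantity tracked in the study, can be sketched as follows. The window sizes and the synthetic test series are arbitrary choices; an uncorrelated series should yield H near 0.5.

        import numpy as np

        def hurst_rs(series, min_chunk=8):
            """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
            series = np.asarray(series, dtype=float)
            n = len(series)
            sizes = np.unique(np.logspace(np.log10(min_chunk),
                                          np.log10(n // 2), 12).astype(int))
            rs_means = []
            for size in sizes:
                rs = []
                for start in range(0, n - size + 1, size):
                    chunk = series[start:start + size]
                    dev = np.cumsum(chunk - chunk.mean())    # cumulative deviation
                    r, s = dev.max() - dev.min(), chunk.std()
                    if s > 0:
                        rs.append(r / s)
                rs_means.append(np.mean(rs))
            slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
            return slope                                     # slope ~ Hurst exponent

        returns = np.random.normal(size=5000)
        print(hurst_rs(returns))                             # ~0.5 for white noise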

  12. Validation of the generalized model of two-phase thermosyphon loop based on experimental measurements of volumetric flow rate

    NASA Astrophysics Data System (ADS)

    Bieliński, Henryk

    2016-09-01

    The current paper presents the experimental validation of the generalized model of the two-phase thermosyphon loop. The generalized model is based on mass, momentum, and energy balances in the evaporators, rising tube, condensers and falling tube. The theoretical analysis and the experimental data have been obtained for a newly designed variant. The variant refers to a thermosyphon loop with both minichannels and conventional tubes. The thermosyphon loop consists of an evaporator on the lower vertical section and a condenser on the upper vertical section. One-dimensional homogeneous and separated two-phase flow models were used in the calculations. The latest minichannel heat transfer correlations available in the literature were applied. A numerical analysis of the volumetric flow rate in the steady-state has been carried out. The experiment was conducted on a specially designed test apparatus. Ultrapure water was used as the working fluid. The results show that the theoretical predictions are in good agreement with the measured volumetric flow rate at steady-state.

  13. Patient perceptions of a pharmacy star rating model.

    PubMed

    Warholak, Terri L; Patel, Mira; Rosenthal, Meagen; West-Strum, Donna; Ettienne, Earl B; Nunlee-Bland, Gail; Nau, David; Hincapie, Ana L

    To identify patients' understanding of what constitutes a "quality pharmacy" and to obtain their feedback regarding the development and use of the pharmacy star rating model, a pharmacy-specific aggregate performance score based on the Centers for Medicare and Medicaid Services' Medicare Star Rating. This was a prospective cross-sectional study. Focus groups were conducted in Arizona, California, Mississippi, Maryland, and the District of Columbia, and one-on-one interviews were conducted in Indiana. Eligible patients were required to routinely use a community pharmacy. Consumer insights on their experiences with their pharmacies and their input on the pharmacy star rating model were obtained. Key themes from the focus groups and interviews were identified through qualitative data analysis. Forty-nine subjects from 5 states and DC participated in 6 focus groups and 4 one-on-one interviews. Eighty-eight percent of participants reported currently taking at least 1 medication, and 87% reported having at least 1 health condition. The 7 themes identified during qualitative analysis included patient care, relational factors for choosing a pharmacy, physical factors for choosing a pharmacy, factors related to use of the pharmacy star rating model, reliability of the pharmacy star rating model, trust in pharmacists, and measures of pharmacy quality. Most participants agreed that the ratings would be useful and could aid in selecting a pharmacy, especially if they were moving to a new place or were dissatisfied with their current pharmacy. Pharmacy quality measures are new to patients; therefore, training and education will need to be provided to patients as pharmacies begin to offer additional clinical services, such as medication therapy management and diabetes education. The use of the pharmacy star rating model was dependent on the participants' situation when choosing a pharmacy. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc.

  14. Modeling the Impact of Smoking Cessation Treatment Policies on Quit Rates

    PubMed Central

    Levy, David T.; Graham, Amanda L.; Mabry, Patricia L.; Abrams, David B.; Orleans, C. Tracy

    2010-01-01

    Background: Smoking cessation treatment policies could yield substantial increases in adult quit rates in the U.S. Purpose: The goals of this paper are to model the effects of individual cessation treatment policies on population quit rates, and to illustrate the potential benefits of combining policies to leverage their synergistic effects. Methods: A mathematical model is updated to examine the impact of five cessation treatment policies on quit attempts, treatment use, and treatment effectiveness. Policies include: (1) Expand cessation treatment coverage and provider reimbursement; (2) Mandate adequate funding for the use and promotion of evidence-based state-sponsored telephone quitlines; (3) Support healthcare systems changes to prompt, guide, and incentivize tobacco treatment; (4) Support and promote evidence-based treatment via the Internet; and (5) Improve individually tailored, stepped care approaches and the long-term effectiveness of evidence-based treatments. Results: The annual baseline population quit rate is 4.3% of all current smokers. Implementing any policy in isolation is projected to make the quit rate increase to between 4.5% and 6%. By implementing all five policies in combination, the quit rate is projected to increase to 10.9%, or 2.5 times the baseline rate. Conclusions: If fully implemented in a coordinated fashion, cessation treatment policies could reduce smoking prevalence from its current rate of 20.5% to 17.2% within 1 year. By modeling the policy impacts on the components of the population quit rate (quit attempts, treatment use, treatment effectiveness), key indicators are identified to analyze in improving the effect of cessation treatment policies. PMID:20176309

  15. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
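
    The analytical core of such an approach, the closed-form Gaussian posterior for a linear slip inversion at a fixed fault geometry, is a few lines of linear algebra. This sketch omits the evidence-based geometry search and everything else that distinguishes the actual algorithm; all symbols are generic.

        import numpy as np

        def gaussian_slip_posterior(G, d, Cd, Cm, m0):
            """Analytical Gaussian posterior for a linear forward model d = G m.

            G  : Green's function matrix mapping slip m to surface offsets d
            Cd : data covariance; Cm, m0 : Gaussian prior covariance and mean
            Returns the posterior mean and covariance of the slip model.
            """
            Cd_inv = np.linalg.inv(Cd)
            Cm_inv = np.linalg.inv(Cm)
            post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
            post_mean = post_cov @ (G.T @ Cd_inv @ d + Cm_inv @ m0)
            return post_mean, post_cov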

  16. Gaussian Mixture Model of Heart Rate Variability

    PubMed Central

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
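
    Fitting a three-component Gaussian mixture to an RR-interval series is nearly a one-liner with scikit-learn; the synthetic data below are purely illustrative, and real inputs would come from an ECG-derived tachogram.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Illustrative RR-interval series [s] drawn from three regimes
        rr = np.concatenate([np.random.normal(0.85, 0.04, 2000),
                             np.random.normal(0.95, 0.06, 1500),
                             np.random.normal(1.05, 0.03, 500)])

        gmm = GaussianMixture(n_components=3, random_state=0).fit(rr.reshape(-1, 1))
        print(gmm.means_.ravel(), gmm.weights_)   # component means and mixing weights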

  17. Decoding nitric oxide release rates of amine-based diazeniumdiolates.

    PubMed

    Wang, Yan-Ni; Collins, Jack; Holland, Ryan J; Keefer, Larry K; Ivanic, Joseph

    2013-08-01

    Amine-based diazeniumdiolates (NONOates) have garnered widespread use as nitric oxide (NO) donors, and their potential for nitroxyl (HNO) release has more recently been realized. While NO release rates can vary significantly with the type of amine, with half-lives ranging from seconds to days under physiological conditions, there is as yet no way to determine a priori the NO or HNO production rates of a given species, and no discernible trends have emerged other than that secondary amines produce only NO (i.e., no HNO). As a step toward understanding these complex systems, here we describe a procedure for modeling amine-based NONOates in water solvent that provides an excellent correlation (R² = 0.94) between experimentally measured dissociation rates of seven secondary amine species and their computed NO release activation energies. The significant difference in behavior of NONOates in the gas and solvent phases is also rigorously demonstrated via explicit additions of quantum mechanical water molecules. The presented results suggest that the as-yet unsynthesized simplest amine-based NONOate, the diazeniumdiolated ammonia anion [H2N-N(O)=NO(-)], could serve as an unperturbed HNO donor. These results provide a step forward toward the accurate modeling of general NO and/or HNO donors as well as for the identification of tailored prodrug candidates.

  18. Can hydraulic-modelled rating curves reduce uncertainty in high flow data?

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida; Lam, Norris; Lyon, Steve W.

    2017-04-01

    Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult, as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model, and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than of water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two-times median flow), whereas uncertainties were higher outside of this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely substantially reduce those uncertainties. These first results show

  19. DEPENDENCE OF X-RAY BURST MODELS ON NUCLEAR REACTION RATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cyburt, R. H.; Keek, L.; Schatz, H.

    2016-10-20

    X-ray bursts are thermonuclear flashes on the surface of accreting neutron stars, and reliable burst models are needed to interpret observations in terms of properties of the neutron star and the binary system. We investigate the dependence of X-ray burst models on uncertainties in (p, γ), (α, γ), and (α, p) nuclear reaction rates using fully self-consistent burst models that account for the feedbacks between changes in nuclear energy generation and changes in astrophysical conditions. A two-step approach first identified sensitive nuclear reaction rates in a single-zone model with ignition conditions chosen to match calculations with a state-of-the-art 1D multi-zone model based on the Kepler stellar evolution code. All relevant reaction rates on neutron-deficient isotopes up to mass 106 were individually varied by a factor of 100 up and down. Calculations of the 84 changes in reaction rate with the highest impact were then repeated in the 1D multi-zone model. We find a number of uncertain reaction rates that affect predictions of light curves and burst ashes significantly. The results provide insights into the nuclear processes that shape observables from X-ray bursts, and guidance for future nuclear physics work to reduce nuclear uncertainties in X-ray burst models.

  20. Using Dynamic Transmission Modeling to Determine Vaccination Coverage Rate Based on 5-Year Economic Burden of Infectious Disease: An Example of Pneumococcal Vaccine.

    PubMed

    Wen, Yu-Wen; Wu, Hsin; Chang, Chee-Jen

    2015-05-01

    Vaccination can reduce the incidence and mortality of an infectious disease and thus increase the years of life and productivity for the entire society. But when determining the vaccination coverage rate, its economic burden is usually not taken into account. This article aimed to use a dynamic transmission model (DTM), which is based on a susceptible-infectious-recovered model and is a system of differential equations, to find the optimal vaccination coverage rate based on the economic burden of an infectious disease. Vaccination for pneumococcal diseases was used as an example to demonstrate the main purpose. 23-valent pneumococcal polysaccharide vaccines (PPV23) and 13-valent pneumococcal conjugate vaccines (PCV13) have shown their cost-effectiveness in the elderly and in children, respectively. Scenario analyses of PPV23 for the elderly aged 65+ years and of PCV13 for children aged 0 to 4 years were applied to assess the optimal vaccination coverage rate based on the 5-year economic burden. Model parameters were derived from Taiwan's National Health Insurance Research Database, government data, and published literature. Various vaccination coverage rates, the vaccine efficacy, and all epidemiologic parameters were substituted into the DTM, and all differential equations were solved in R Statistical Software. If the coverage rates of PPV23 for the elderly and of PCV13 for children both reach 50%, the economic burden due to pneumococcal disease will be acceptable. This article provided an alternative perspective, from the economic burden of diseases, on obtaining a vaccination coverage rate using the DTM. This will provide valuable information for vaccination policy decision makers. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
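
    A generic susceptible-infectious-recovered skeleton with a vaccination term shows how a coverage rate feeds into a multi-year burden comparison. It is far simpler than the pneumococcal-specific DTM of the article, and every rate constant below is a placeholder.

        import numpy as np
        from scipy.integrate import solve_ivp

        def sir_vacc(t, y, beta, gamma, nu):
            """SIR dynamics with vaccination moving susceptibles directly to R."""
            S, I, R = y
            dS = -beta * S * I - nu * S
            dI = beta * S * I - gamma * I
            dR = gamma * I + nu * S
            return [dS, dI, dR]

        # Compare 5-year infection burden across annual vaccination coverage rates
        for coverage in (0.0, 0.25, 0.5):
            nu = -np.log(1.0 - coverage) if coverage > 0 else 0.0  # coverage -> rate/yr
            sol = solve_ivp(sir_vacc, (0, 5), [0.99, 0.01, 0.0],
                            args=(0.4, 0.2, nu), dense_output=True)
            t = np.linspace(0, 5, 100)
            burden = np.trapz(sol.sol(t)[1], t)                    # integrated prevalence
            print(coverage, burden)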

  1. Space-Time Earthquake Rate Models for One-Year Hazard Forecasts in Oklahoma

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Michael, A. J.

    2017-12-01

    The recent one-year seismic hazard assessments for natural and induced seismicity in the central and eastern US (CEUS) (Petersen et al., 2016, 2017) rely on earthquake rate models based on declustered catalogs (i.e., catalogs with foreshocks and aftershocks removed), as is common practice in probabilistic seismic hazard analysis. However, standard declustering can remove over 90% of some induced sequences in the CEUS. Some of these earthquakes may still be capable of causing damage or concern (Petersen et al., 2015, 2016). The choices of whether and how to decluster can lead to seismicity rate estimates that vary by up to factors of 10-20 (Llenos and Michael, AGU, 2016). Therefore, in order to improve the accuracy of hazard assessments, we are exploring ways to make forecasts based on full, rather than declustered, catalogs. We focus on Oklahoma, where earthquake rates began increasing in late 2009 mainly in central Oklahoma and ramped up substantially in 2013 with the expansion of seismicity into northern Oklahoma and southern Kansas. We develop earthquake rate models using the space-time Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988; Ogata, AISM, 1998; Zhuang et al., JASA, 2002), which characterizes both the background seismicity rate as well as aftershock triggering. We examine changes in the model parameters over time, focusing particularly on background rate, which reflects earthquakes that are triggered by external driving forces such as fluid injection rather than other earthquakes. After the model parameters are fit to the seismicity data from a given year, forecasts of the full catalog for the following year can then be made using a suite of 100,000 ETAS model simulations based on those parameters. To evaluate this approach, we develop pseudo-prospective yearly forecasts for Oklahoma from 2013-2016 and compare them with the observations using standard Collaboratory for the Study of Earthquake Predictability tests for consistency.
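
    The ETAS conditional intensity that separates background forcing from earthquake-to-earthquake triggering can be written directly; the parameter values below are illustrative, not the Oklahoma fits.

        import numpy as np

        def etas_intensity(t, history, mu, K, c, p, alpha, m0):
            """ETAS conditional intensity at time t given past events.

            history : array of (t_i, m_i) event times and magnitudes
            mu      : background rate (external driving, e.g., fluid injection)
            K, c, p : Omori-law productivity and decay parameters
            alpha   : magnitude scaling; m0 : reference (cutoff) magnitude
            """
            times, mags = history[:, 0], history[:, 1]
            past = times < t
            trig = K * np.exp(alpha * (mags[past] - m0)) / (t - times[past] + c) ** p
            return mu + trig.sum()

        # Two prior events; evaluate the rate one day after the second
        events = np.array([[0.0, 3.0], [1.0, 3.2]])
        print(etas_intensity(2.0, events, mu=0.1, K=0.02, c=0.01, p=1.1,
                             alpha=1.0, m0=2.5))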

  2. Improving Rice Modeling Success Rate with Ternary Non-structural Fertilizer Response Model.

    PubMed

    Li, Juan; Zhang, Mingqing; Chen, Fang; Yao, Baoquan

    2018-06-13

    Fertilizer response modelling is an important technical approach to realize metrological fertilization on rice. With the goal of solving the problems of a low success rate of a ternary quadratic polynomial model (TPFM) and to expand the model's applicability, this paper established a ternary non-structural fertilizer response model (TNFM) based on the experimental results from N, P and K fertilized rice fields. Our research results showed that the TNFM significantly improved the modelling success rate by addressing problems arising from setting the bias and multicollinearity in a TPFM. The results from 88 rice field trials in China indicated that the proportion of typical TNFMs that satisfy the general fertilizer response law of plant nutrition was 40.9%, while the analogous proportion of TPFMs was only 26.1%. The recommended fertilization showed a significant positive linear correlation between the two models, and the parameters N 0 , P 0 and K 0 that estimated the value of soil supplying nutrient equivalents can be used as better indicators of yield potential in plots where no N or P or K fertilizer was applied. The theoretical analysis showed that the new model has a higher fitting accuracy and a wider application range.

  3. An equivalent dissipation rate model for capturing history effects in non-premixed flames

    DOE PAGES

    Kundu, Prithwish; Echekki, Tarek; Pei, Yuanjiang; ...

    2016-11-11

    The effects of strain rate history on turbulent flames have been studied in the past decades with 1D counter flow diffusion flame (CFDF) configurations subjected to oscillating strain rates. In this work, these unsteady effects are studied for complex hydrocarbon fuel surrogates at engine relevant conditions with unsteady strain rates experienced by flamelets in a typical spray flame. Tabulated combustion models are based on a steady scalar dissipation rate (SDR) assumption and hence cannot capture these unsteady strain effects, even though they can capture the unsteady chemistry. In this work, 1D CFDF with varying strain rates are simulated using two different modeling approaches: a steady SDR assumption and an unsteady flamelet model. Comparative studies show that the history effects due to unsteady SDR are directly proportional to the temporal gradient of the SDR. A new equivalent SDR model based on the history of a flamelet is proposed. An averaging procedure is constructed such that the most recent histories are given higher weights. This equivalent SDR is then used with the steady SDR assumption in 1D flamelets. Results show a good agreement between the tabulated flamelet solution and the unsteady flamelet results. This equivalent SDR concept is further implemented and compared against 3D spray flames (Engine Combustion Network Spray A). Tabulated models based on the steady SDR assumption under-predict autoignition and flame lift-off when compared with an unsteady Representative Interactive Flamelet (RIF) model. However, the equivalent SDR model coupled with the tabulated model predicted autoignition and flame lift-off very close to those reported by the RIF model. The model is further validated for a range of injection pressures for Spray A flames. As a result, the new modeling framework now enables tabulated models with significantly lower computational cost to account for unsteady history effects.
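
    The equivalent-SDR idea, an average over the flamelet's dissipation-rate history that favors recent values, can be sketched as below. The exponential weighting and its time scale are assumptions for illustration; the paper's exact weighting function is not reproduced here.

      import numpy as np

      def equivalent_sdr(times, sdr_history, tau=1.0e-4):
          # Recency-weighted average of a flamelet's scalar dissipation
          # rate history; weights decay exponentially with age so the
          # most recent values dominate. The exponential form and the
          # time scale tau (s) are illustrative assumptions.
          age = times[-1] - np.asarray(times)
          w = np.exp(-age / tau)
          return np.sum(w * np.asarray(sdr_history)) / np.sum(w)

      # Example: SDR rising linearly over the last 1 ms; the weighted
      # average sits close to the most recent values.
      t = np.linspace(0.0, 1.0e-3, 101)
      print(equivalent_sdr(t, 100.0 * t / t[-1]))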

  4. Is population structure sufficient to generate area-level inequalities in influenza rates? An examination using agent-based models.

    PubMed

    Kumar, Supriya; Piper, Kaitlin; Galloway, David D; Hadler, James L; Grefenstette, John J

    2015-09-23

    In New Haven County, CT (NHC), influenza hospitalization rates have been shown to increase with census tract poverty in multiple influenza seasons. Though multiple factors have been hypothesized to cause these inequalities, including population structure, differential vaccine uptake, and differential access to healthcare, the impact of each in generating the observed inequalities remains unknown. We can design interventions targeting the factors with the greatest explanatory power if we quantify the proportion of the observed inequalities that hypothesized factors are able to generate. Here, we ask if population structure is sufficient to generate the observed area-level inequalities in NHC. To our knowledge, this is the first use of simulation models to examine the causes of differential poverty-related influenza rates. Using agent-based models with a census-informed, realistic representation of household size, age structure, population density in NHC census tracts, and contact rates in workplaces, schools, households, and neighborhoods, we measured poverty-related differential influenza attack rates over the course of an epidemic with a 23% overall clinical attack rate. We examined the role of asthma prevalence rates as well as individual contact rates and infection susceptibility in generating the observed area-level influenza inequalities. Simulated attack rates (AR) among adults increased with census tract poverty level (F = 30.5; P < 0.001) in an epidemic caused by a virus similar to A(H1N1)pdm09. We detected a steeper, earlier influenza rate increase in high-poverty census tracts, a finding that we corroborate with a temporal analysis of NHC surveillance data during the 2009 H1N1 pandemic. The ratio of the simulated adult AR in the highest- to lowest-poverty tracts was 33% of the ratio observed in surveillance data. Increasing individual contact rates in the neighborhood did not increase simulated area-level inequalities. When we modified individual

  5. Autoregressive model-based particle filtering algorithms for extraction of respiratory rates as high as 90 breaths per minute from pulse oximeter.

    PubMed

    Lee, Jinseok; Chon, Ki H

    2010-09-01

    We present particle filtering (PF) algorithms for an accurate respiratory rate extraction from pulse oximeter recordings over a broad range: 12-90 breaths/min. These methods are based on an autoregressive (AR) model, where the aim is to find the angle of the pole with the highest magnitude, as it corresponds to the respiratory rate. However, when the SNR is low, the pole with the highest magnitude may not always lead to accurate estimation of the respiratory rate. To circumvent this limitation, we propose a probabilistic approach, using a sequential Monte Carlo method, named PF, which is combined with the optimal parameter search (OPS) criterion for an accurate AR model-based respiratory rate extraction. The PF technique has been widely adopted in many tracking applications, especially for nonlinear and/or non-Gaussian problems. We examine the performances of five different likelihood functions of the PF algorithm: the strongest neighbor, nearest neighbor (NN), weighted nearest neighbor (WNN), probability data association (PDA), and weighted probability data association (WPDA). The performance of these five combined OPS-PF algorithms was measured against a solely OPS-based AR algorithm for respiratory rate extraction from pulse oximeter recordings. The pulse oximeter data were collected from 33 healthy subjects with breathing rates ranging from 12 to 90 breaths/min. It was found that significant improvement in accuracy can be achieved by employing particle filters, and that the combined OPS-PF employing either the NN or WNN likelihood function achieved the best results for all respiratory rates considered in this paper. The main advantage of the combined OPS-PF with either the NN or WNN likelihood function is that for the first time, respiratory rates as high as 90 breaths/min can be accurately extracted from pulse oximeter recordings.
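
    The core AR step, locating the pole of largest magnitude and converting its angle to a frequency, can be sketched as follows. This is a plain least-squares AR fit without the paper's optimal-parameter-search criterion or particle filtering, and the model order is an arbitrary choice.

      import numpy as np

      def ar_respiratory_rate(x, fs, order=8):
          # Fit an AR(order) model by least squares, then report the
          # frequency of the pole with the largest magnitude as the
          # respiratory rate in breaths/min. No OPS or PF stage here;
          # `order` is an arbitrary illustrative choice.
          x = np.asarray(x, dtype=float)
          x = x - x.mean()
          X = np.column_stack([x[order - k - 1:-k - 1] for k in range(order)])
          a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
          poles = np.roots(np.concatenate(([1.0], -a)))
          pole = poles[np.argmax(np.abs(poles))]
          return 60.0 * abs(np.angle(pole)) * fs / (2.0 * np.pi)

      # 0.4 Hz (24 breaths/min) modulation sampled at 4 Hz:
      fs = 4.0
      t = np.arange(0.0, 60.0, 1.0 / fs)
      sig = np.sin(2 * np.pi * 0.4 * t) + 0.1 * np.random.randn(t.size)
      print(ar_respiratory_rate(sig, fs))   # close to 24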

  6. Evaluating crown fire rate of spread predictions from physics-based models

    Treesearch

    C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont

    2015-01-01

    Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...

  7. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    PubMed

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
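
    Structurally, a Hammerstein-Wiener model is a static input nonlinearity, a linear dynamic block, and a static output nonlinearity in cascade. The sketch below shows that wiring with arbitrary stand-in components; the nonlinearities and filter actually fitted in the paper are not reproduced here.

      import numpy as np
      from scipy.signal import lfilter

      def hammerstein_wiener(x, f, b, a, g):
          # Cascade: static nonlinearity f -> IIR filter (b, a), which
          # supplies the memory/hysteresis -> static nonlinearity g.
          # All three components are caller-supplied.
          return g(lfilter(b, a, f(np.asarray(x, dtype=float))))

      # Stand-in example: smooth per-frame quality scores through a
      # first-order lag with saturating input/output maps.
      quality = np.random.uniform(2.0, 5.0, 200)
      tvsq = hammerstein_wiener(quality, f=np.tanh,
                                b=[0.1], a=[1.0, -0.9],
                                g=lambda v: 5.0 * np.tanh(v))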

  8. VR closure rates for two vocational models.

    PubMed

    Fraser, Virginia V; Jones, Amanda M; Frounfelker, Rochelle; Harding, Brian; Hardin, Teresa; Bond, Gary R

    2008-01-01

    The Individual Placement and Support (IPS) model of supported employment is an evidence-based practice for individuals with psychiatric disabilities. To be financially viable, IPS programs require funding from the state-federal vocational rehabilitation (VR) system. However, some observers have questioned the compatibility of IPS and the VR system. Using a randomized controlled trial comparing IPS to a well-established vocational program called the Diversified Placement Approach (DPA), we examined rates of VR sponsorship and successful VR closures. We also describe the establishment of an active collaboration between a psychiatric rehabilitation agency and the state VR system to facilitate rapid VR sponsorship for IPS clients. Both IPS and DPA achieved a 44% rate of VR Status 26 closure when considering all clients entering the study. IPS and DPA averaged a similar amount of time to achieve VR sponsorship. Time from vocational program entry to Status 26 was 51 days longer on average for IPS. Even though several IPS principles seem to run counter to VR practices, such as zero exclusion and rapid job search, we found IPS closure rates comparable to those for DPA, a vocational model that screens for readiness, provides prevocational preparation, and extensively uses agency-run businesses.

  9. Evaluation of Finite-Rate Gas/Surface Interaction Models for a Carbon Based Ablator

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Goekcen, Tahir

    2015-01-01

    Two sets of finite-rate gas-surface interaction models between air and the carbon surface are studied. The first set is an engineering model with one-way chemical reactions, and the second set is a more detailed model with two-way chemical reactions. These two proposed models are intended to cover the carbon surface ablation conditions including the low temperature rate-controlled oxidation, the mid-temperature diffusion-controlled oxidation, and the high temperature sublimation. The prediction of carbon surface recession is achieved by coupling a material thermal response code and a Navier-Stokes flow code. The material thermal response code used in this study is the Two-dimensional Implicit Thermal-response and Ablation Program, which predicts charring material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting full Navier-Stokes equations using the Data Parallel Line Relaxation method. Recession analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities with heat fluxes ranging from 45 to 1100 W/cm2 are performed and compared with data for model validation. The ablating material used in these arc-jet tests is Phenolic Impregnated Carbon Ablator. Additionally, computational predictions of surface recession and shape change are in good agreement with measurements for arc-jet conditions of the Small Probe Reentry Investigation for Thermal Protection System Engineering.

  10. Cosmogenic nuclide production rates as a function of latitude and altitude calculated via a physics based model and excitation functions

    NASA Astrophysics Data System (ADS)

    Argento, D.; Reedy, R. C.; Stone, J. O.

    2012-12-01

    Cosmogenic nuclides have been used to develop a set of tools critical to the quantification of a wide range of geomorphic and climatic processes and events (Dunai 2010). Having reliable absolute measurement methods has had great impact on research constraining ice age extents as well as providing important climatic data via well constrained erosion rates, etc. Continuing to improve CN methods is critical for these sciences. While significant progress has been made in the last two decades to reduce uncertainties (Dunai 2010; Gosse & Phillips 2001), numerous aspects still need to be refined in order to achieve the analytic resolution desired by glaciologists and geomorphologists. In order to investigate the finer details of the radiation responsible for cosmogenic nuclide production, we have developed a physics-based model that simulates the radiation cascade of primary and secondary cosmic-rays through the atmosphere. In this study, a Monte Carlo method radiation transport code, MCNPX, is used to model the galactic cosmic-ray (GCR) radiation impinging on the upper atmosphere. Beginning with a spectrum of high energy protons and alpha particles at the top of the atmosphere, the code tracks the primary and resulting secondary particles through a model of the Earth's atmosphere and into the lithosphere. Folding the neutron and proton flux results with energy dependent cross sections for nuclide production provides production rates for key cosmogenic nuclides (Argento et al. 2012, in press; Reedy 2012, in press). Our initial study for high latitude shows that production rates scale differently for each nuclide (Argento 2012, in press). Furthermore, the attenuation length for each of these nuclide production rates increases with altitude, and again, they increase at different rates. This has the consequence of changing the production rate ratio as a function of altitude. The earth's geomagnetic field differentially filters low energy cosmic-rays by deflecting them away

  11. A watershed scale spatially-distributed model for streambank erosion rate driven by channel curvature

    NASA Astrophysics Data System (ADS)

    McMillan, Mitchell; Hu, Zhiyong

    2017-10-01

    Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year time period, throughout a large region. The model is based on the widely-used excess-velocity equation and comprises three components: a physics-based hydrodynamic model, a large-scale 1-dimensional model of average monthly discharge, and an empirical bank erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.

  12. Risky forward interest rates and swaptions: Quantum finance model and empirical results

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal Ehsan; Yu, Miao; Bhanap, Jitendra

    2018-02-01

    Risk free forward interest rates (Diebold and Li, 2006 [1]; Jamshidian, 1991 [2]) - and their realization by US Treasury bonds as the leading exemplar - have been studied extensively. In Baaquie (2010), models of risk free bonds and their forward interest rates based on the quantum field theoretic formulation of the risk free forward interest rates have been discussed, including the empirical evidence supporting these models. The quantum finance formulation of risk free forward interest rates is extended to the case of risky forward interest rates. The examples of the Singapore and Malaysian forward interest rates are used as specific cases. The main feature of the quantum finance model is that the risky forward interest rates are modeled both (a) as a stand-alone case and (b) as being driven by the US forward interest rates plus a spread - having its own term structure - above the US forward interest rates. Both the US forward interest rates and the term structure for the spread are modeled by a two-dimensional Euclidean quantum field. As a precursor to the evaluation of a put option on the Singapore coupon bond, the quantum finance model for swaptions is tested using an empirical study of swaptions for the US Dollar, showing that the model is quite accurate. A prediction for the market price of the put option for the Singapore coupon bonds is obtained. The quantum finance model is generalized to study the Malaysian case, and the Malaysian forward interest rates are shown to have anomalies absent in the US and Singapore cases. The model's prediction for a Malaysian interest rate swap is obtained.

  13. Simple Model for Detonation Energy and Rate

    NASA Astrophysics Data System (ADS)

    Lauderbach, Lisa M.; Souers, P. Clark

    2017-06-01

    A simple model is used to derive the Eyring equation for the size effect and detonation rate, which depends on a constant energy density. The rate derived from detonation velocities is then converted into a rate constant to be used in a reactive flow model. The rate might be constant if the size effect curve is straight, but the rate constant will change with the radius of the sample and cannot be a constant. This is based on many careful cylinder tests that have been run recently on LX-17 with inner copper diameters ranging from 12.7 to 101.6 mm. Copper wall velocities at scaled displacements of 6, 12.5 and 19 mm equate to values at relative volumes of 2.4, 4.4 and 7.0. At each point, the velocities from 25.4 to 101.6 mm are constant within error whereas the 12.7 mm velocities are lower. Using the updated Gurney model, the energy densities at the three larger sizes are also constant. Similar behavior has been seen in LX-14, LX-04, and an 83% RDX mix. A rough saturation has also been seen in old ANFO data for diameters of 101.6 mm and larger. Although the energy densities saturate, the detonation velocities continue to increase with size. These observations suggest that maximum energy density is a constant for a given explosive of a given density. The correlation of energy density with detonation velocity is not good because the latter depends on the total energy of the sample. This work performed under the auspices of the U. S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  14. Room model based Monte Carlo simulation study of the relationship between the airborne dose rate and the surface-deposited radon progeny.

    PubMed

    Sun, Kainan; Field, R William; Steck, Daniel J

    2010-01-01

    The quantitative relationships between radon gas concentration, the surface-deposited activities of various radon progeny, the airborne radon progeny dose rate, and various residential environmental factors were investigated through a Monte Carlo simulation study based on the extended Jacobi room model. Airborne dose rates were calculated from the unattached and attached potential alpha-energy concentrations (PAECs) using two dosimetric models. Surface-deposited (218)Po and (214)Po were significantly correlated with radon concentration, PAECs, and airborne dose rate (p-values <0.0001) in both non-smoking and smoking environments. However, in non-smoking environments, the deposited radon progeny were not highly correlated to the attached PAEC. In multiple linear regression analysis, natural logarithm transformation was performed for airborne dose rate as a dependent variable, as well as for radon and deposited (218)Po and (214)Po as predictors. In non-smoking environments, after adjusting for the effect of radon, deposited (214)Po was a significant positive predictor for one dose model (RR 1.46, 95% CI 1.27-1.67), while deposited (218)Po was a negative predictor for the other dose model (RR 0.90, 95% CI 0.83-0.98). In smoking environments, after adjusting for radon and room size, deposited (218)Po was a significant positive predictor for one dose model (RR 1.10, 95% CI 1.02-1.19), while a significant negative predictor for the other model (RR 0.90, 95% CI 0.85-0.95). After adjusting for radon and deposited (218)Po, significant increases of 1.14 (95% CI 1.03-1.27) and 1.13 (95% CI 1.05-1.22) in the mean dose rates were found for large room sizes relative to small room sizes in the different dose models.

  15. Development of Final A-Fault Rupture Models for WGCEP/NSHMP Earthquake Rate Model 2

    USGS Publications Warehouse

    Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.

    2008-01-01

    This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an unsegmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.

  16. SeqRate: sequence-based protein folding type classification and rates prediction

    PubMed Central

    2010-01-01

    Background Protein folding rate is an important property of a protein. Predicting protein folding rate is useful for understanding the protein folding process and guiding protein design. Most previous methods of predicting protein folding rate require the tertiary structure of a protein as an input, and most methods do not distinguish the different kinetic natures (two-state folding or multi-state folding) of the proteins. Here we developed a method, SeqRate, to predict both protein folding kinetic type (two-state versus multi-state) and real-value folding rate using sequence length, amino acid composition, contact order, contact number, and secondary structure information predicted from only the protein sequence with support vector machines. Results We systematically studied the contributions of individual features to folding rate prediction. On a standard benchmark dataset, the accuracy of folding kinetic type classification is 80%. The Pearson correlation coefficient and the mean absolute difference between predicted and experimental folding rates (sec-1) in the base-10 logarithmic scale are 0.81 and 0.79 for two-state protein folders, and 0.80 and 0.68 for three-state protein folders. SeqRate is the first sequence-based method for protein folding type classification, and its accuracy of folding rate prediction is improved over previous sequence-based methods. Its performance can be further enhanced with additional information, such as structure-based geometric contacts, as inputs. Conclusions Both the web server and software of predicting folding rate are publicly available at http://casp.rnet.missouri.edu/fold_rate/index.html. PMID:20438647
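
    The two SeqRate tasks map naturally onto a support-vector classifier (kinetic type) and a support-vector regressor (log folding rate). The sketch below wires up that pair on synthetic stand-in features, since the real feature set (amino-acid composition, contact order, predicted secondary structure) is not reproduced here.

      import numpy as np
      from sklearn.svm import SVC, SVR

      rng = np.random.default_rng(0)
      X = rng.random((200, 3))                     # synthetic stand-in features
      kinetic_type = (X[:, 0] > 0.5).astype(int)   # 0: two-state, 1: multi-state
      log_rate = 5.0 * X[:, 1] - 2.0 + rng.normal(0.0, 0.3, 200)

      clf = SVC(kernel="rbf").fit(X, kinetic_type)  # folding-type classifier
      reg = SVR(kernel="rbf").fit(X, log_rate)      # real-value rate predictor
      print(clf.predict(X[:3]), reg.predict(X[:3]))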

  17. A transfer-rate epidemiological model

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Hu, Hailong; Li, Yantao; Qu, Zehui

    2018-05-01

    Everywhere in the world, thousands of lives are taken by infectious viruses every year. Studying models of disease transmission is valuable for helping medical workers formulate timely and effective interventions. In this paper, we propose a model that accounts for the different influences on a virus's spread arising from the different kinds of connections between people. Moreover, the infection and cure rates in our model are more in line with real life. We simulate the real spread of influenza B/Yamagata from 2014 to 2017 and identify the trends of the infection rate over the course of one year.

  18. Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlen, Oda, E-mail: oda.dahlen@ntnu.no; Erp, Titus S. van, E-mail: titus.van.erp@ntnu.no

    Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences compared to previous studies that only consider DNA homopolymers and DNA sequences containing an equal amount of weak AT- and strong GC-base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC-base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that present parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next generation models that have higher predictive power than present ones.

  19. A High Performance Impedance-based Platform for Evaporation Rate Detection.

    PubMed

    Chou, Wei-Lung; Lee, Pee-Yew; Chen, Cheng-You; Lin, Yu-Hsin; Lin, Yung-Sheng

    2016-10-17

    This paper describes a novel impedance-based platform for the detection of evaporation rate. The model compound hyaluronic acid was employed here for demonstration purposes. Multiple evaporation tests on the model compound as a humectant with various concentrations in solution were conducted for comparison purposes. A conventional weight loss approach is known as the most straightforward, but time-consuming, measurement technique for evaporation rate detection. Yet, a clear disadvantage is that a large volume of sample is required and multiple sample tests cannot be conducted at the same time. For the first time in the literature, an electrical impedance sensing chip is successfully applied to a real-time evaporation investigation in a time-sharing, continuous and automatic manner. Moreover, as little as 0.5 ml of test sample is required in this impedance-based apparatus, and a large impedance variation is demonstrated among various dilute solutions. The proposed high-sensitivity and fast-response impedance sensing system is found to outperform a conventional weight loss approach in terms of evaporation rate detection.

  1. Modeling emission rates and exposures from outdoor cooking

    NASA Astrophysics Data System (ADS)

    Edwards, Rufus; Princevac, Marko; Weltman, Robert; Ghasemian, Masoud; Arora, Narendra K.; Bond, Tami

    2017-09-01

    Approximately 3 billion individuals rely on solid fuels for cooking globally. For a large portion of these - an estimated 533 million - cooking is outdoors, where emissions from cookstoves pose a health risk to both cooks and other household and village members. Models that estimate the stove emission rates that would meet WHO air quality guidelines (AQG) in indoor environments explicitly do not account for outdoor cooking. The objectives of this paper are to link health-based exposure guidelines with emissions from outdoor cookstoves, using a Monte Carlo simulation of cooking times from Haryana, India, coupled with inverse Gaussian dispersion models. Mean emission rates for outdoor cooking that would result in incremental increases in personal exposure equivalent to the WHO AQG during a 24-h period were 126 ± 13 mg/min for cooking while squatting and 99 ± 10 mg/min while standing. Emission rates modeled for outdoor cooking are substantially higher than emission rates for indoor cooking to meet the AQG, because the models estimate the impact of emissions on personal exposure concentrations rather than microenvironment concentrations, and because the smoke disperses more readily outdoors than in indoor environments. As a result, many more stoves, including the best performing solid-fuel biomass stoves, would meet the AQG when cooking outdoors, but may also result in substantial localized neighborhood pollution depending on housing density. The neighborhood impact of pollution should be addressed more formally both in guidelines on stove emission rates that would be protective of health, and also in wider health impact evaluation efforts and burden of disease estimates. Emissions guidelines should better represent the different contexts in which stoves are used, especially because in these contexts the best performing solid fuel stoves have the potential to provide significant benefits.
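
    The inverse step, solving a ground-level, centerline Gaussian plume for the source strength that just reaches a target exposure concentration, can be sketched as follows. The dispersion-coefficient fits below are generic open-terrain forms, not the coefficients used in the paper, and the 24-h exposure averaging of the study is omitted.

      import numpy as np

      def max_emission_rate(c_target, wind_speed, distance):
          # Emission rate (mg/s) whose ground-level centerline
          # concentration (mg/m3) at `distance` (m) equals c_target,
          # inverted from C = Q / (pi * u * sigma_y * sigma_z).
          # The sigma fits are generic open-terrain placeholders.
          sigma_y = 0.08 * distance / np.sqrt(1.0 + 1.0e-4 * distance)
          sigma_z = 0.06 * distance / np.sqrt(1.0 + 1.5e-3 * distance)
          return c_target * np.pi * wind_speed * sigma_y * sigma_z

      # Example: 10 mg/m3 target at 10 m downwind in a 1 m/s wind.
      print(max_emission_rate(10.0, 1.0, 10.0) * 60.0, "mg/min")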

  2. ESTIMATION OF THE RATE OF VOC EMISSIONS FROM SOLVENT-BASED INDOOR COATING MATERIALS BASED ON PRODUCT FORMULATION

    EPA Science Inventory

    Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...

  3. Web-Based Physician Ratings for California Physicians on Probation

    PubMed Central

    2017-01-01

    Background  Web-based physician ratings systems are a popular tool to help patients evaluate physicians. Websites help patients find information regarding physician licensure, office hours, and disciplinary records along with ratings and reviews. Whether higher patient ratings are associated with higher quality of care is unclear. Objective  The aim of this study was to characterize the impact of physician probation on consumer ratings by comparing website ratings between doctors on probation against matched controls. Methods  A retrospective review of data from the Medical Board of California for physicians placed on probation from December 1989 to September 2015 was performed. Violations were categorized into nine types. Nonprobation controls were matched by zip code and specialty with probation cases in a 2:1 ratio using the California Department of Consumer Affairs website. Web-based reviews were recorded from vitals.com, healthgrades.com, and ratemds.com (ratings range from 1-5). Results  A total of 410 physicians were placed on probation for 866 violations. The mean (standard deviation [SD]) number of ratings per doctor was 5.2 (7.8) for cases and 4 (6.3) for controls (P=.003). The mean rating for physicians on probation was 3.7 (1.6) compared with 4.0 (1.0) for controls when all three rating websites were pooled (P<.001). Violations for medical documentation, incompetence, prescription negligence, and fraud were found to have statistically significant lower rating scores. Conversely, scores for professionalism, drugs or alcohol, crime, sexual misconduct, and personal illness were similar between cases and controls. In a univariate analysis, probation was found to be associated with lower rating, odds ratio=1.5 (95% CI 1.0-2.2). This association was not significant in a multivariate model when we included age and gender. Conclusions  Web-based physician ratings were lower for doctors on probation indicating that patients may perceive a difference

  4. Simulation of heart rate variability model in a network

    NASA Astrophysics Data System (ADS)

    Cascaval, Radu C.; D'Apice, Ciro; D'Arienzo, Maria Pia

    2017-07-01

    We consider a 1-D model for the simulation of the blood flow in the cardiovascular system. As the inflow condition we consider a model for the aortic valve. The opening and closing of the valve is dynamically determined by the pressure difference between the left ventricular and aortic pressures. At the outflow we impose a peripheral resistance model. To approximate the solution we use a numerical scheme based on the discontinuous Galerkin method. We also consider variations in heart rate and in the terminal reflection coefficient while monitoring the pressure in the network.

  5. A lattice hydrodynamic model based on delayed feedback control considering the effect of flow rate difference

    NASA Astrophysics Data System (ADS)

    Wang, Yunong; Cheng, Rongjun; Ge, Hongxia

    2017-08-01

    In this paper, a lattice hydrodynamic model is derived considering not only the effect of the flow rate difference but also a delayed feedback control signal that includes more comprehensive information. The control method is used to analyze the stability of the model. Furthermore, the critical condition for linearly stable traffic flow is deduced, and numerical simulation is carried out to investigate the advantage of the proposed model with and without the effect of the flow rate difference and the control signal. The results are consistent with the theoretical analysis.

  6. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for
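
    The weighted combination of the two kernel densities can be sketched as below; the data, the 0.4 fault weight, and the default bandwidths are illustrative placeholders, whereas the paper optimizes the weighting and bandwidths per magnitude range via retrospective likelihood tests.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)
      quake_xy = rng.normal(0.0, 1.0, size=(2, 300))    # synthetic epicenters
      fault_xy = np.vstack([np.linspace(-2, 2, 100),    # points sampled along a
                            np.linspace(-1, 3, 100)])   # mapped fault trace,
      fault_xy += rng.normal(0.0, 0.05, (2, 100))       # jittered off the line

      def spatial_density(xy, weight_fault=0.4):
          # Weighted sum of two kernel density estimates: one over past
          # epicenters, one over fault points (moment-rate weighted in
          # the actual model). The 0.4 weight is a placeholder.
          return ((1.0 - weight_fault) * gaussian_kde(quake_xy)(xy)
                  + weight_fault * gaussian_kde(fault_xy)(xy))

      print(spatial_density(np.array([[0.0], [0.0]])))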

  7. On rate-state and Coulomb failure models

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified
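
    The Coulomb clock advance described above is simply the stress step divided by the background stressing rate, independent of when the step occurs; a one-line sketch:

      def coulomb_clock_advance(delta_cfs_mpa, stressing_rate_mpa_per_yr):
          # Failure time moves earlier by the stress step divided by the
          # tectonic stressing rate, independent of t0.
          return delta_cfs_mpa / stressing_rate_mpa_per_yr

      # A 0.1 MPa coseismic step over 0.01 MPa/yr loading advances
      # failure by 10 years.
      print(coulomb_clock_advance(0.1, 0.01))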

  8. Relationship between soil erodibility and modeled infiltration rate in different soils

    NASA Astrophysics Data System (ADS)

    Wang, Guoqiang; Fang, Qingqing; Wu, Binbin; Yang, Huicai; Xu, Zongxue

    2015-09-01

    The relationship between soil erodibility, which is hard to measure, and modeled infiltration rate has rarely been researched. Here, the soil erodibility factors (K and Ke in the USLE, Ki and K1 in the WEPP) were calculated and the infiltration rates were modeled based on designed laboratory simulation experiments and a proposed infiltration model, in order to establish their relationship. The impacts of compost amendment on the soil erosion characteristics and on this relationship were also studied. Two contrasting agricultural soils (bare and cultivated fluvo-aquic soils) were used, and different poultry compost contents (control, low and high) were applied to both soils. The results indicated that the runoff rate, sediment yield rate and soil erodibility of the bare soil treatments were generally higher than those of the corresponding cultivated soil treatments. The application of composts generally decreased sediment yield and soil erodibility but did not always decrease runoff. The comparison of measured and modeled infiltration rates indicated that the model represented the infiltration processes well with an N-S coefficient of 0.84 for overall treatments. Significant negative logarithmic correlations have been found between final infiltration rate (FIR) and the four soil erodibility factors, and the relationship between USLE-K and FIR demonstrated the best correlation. The application of poultry composts would not influence the logarithmic relationship between FIR and soil erodibility. Our study provided a useful tool to estimate soil erodibility.

  9. LS-DYNA Implementation of Polymer Matrix Composite Model Under High Strain Rate Impact

    NASA Technical Reports Server (NTRS)

    Zheng, Xia-Hua; Goldberg, Robert K.; Binienda, Wieslaw K.; Roberts, Gary D.

    2003-01-01

    A recently developed constitutive model is implemented into LS-DYNA as a user defined material model (UMAT) to characterize the nonlinear strain rate dependent behavior of polymers. By utilizing this model within a micromechanics technique based on a laminate analogy, an algorithm to analyze the strain rate dependent, nonlinear deformation of a fiber reinforced polymer matrix composite is then developed as a UMAT to simulate the response of these composites under high strain rate impact. The models are designed for shell elements in order to ensure computational efficiency. Experimental and numerical stress-strain curves are compared for two representative polymers and a representative polymer matrix composite, with the analytical model predicting the experimental response reasonably well.

  10. Innovative model-based flow rate optimization for vanadium redox flow batteries

    NASA Astrophysics Data System (ADS)

    König, S.; Suriyah, M. R.; Leibfried, T.

    2016-11-01

    In this paper, an innovative approach is presented to optimize the flow rate of a 6-kW vanadium redox flow battery with realistic stack dimensions. Efficiency is derived using a multi-physics battery model and a newly proposed instantaneous efficiency determination technique. An optimization algorithm is applied to identify optimal flow rates for operation points defined by state-of-charge (SoC) and current. The proposed method is evaluated against the conventional approach of applying Faraday's first law of electrolysis, scaled by the so-called flow factor. To make a fair comparison, the flow factor is also optimized by simulating cycles with different charging/discharging currents. The results show that the efficiency is increased by up to 1.2 percentage points; in addition, the discharge capacity is also increased by up to 1.0 kWh, or 5.4%. Detailed loss analysis is carried out for the cycles with maximum and minimum charging/discharging currents. It is shown that the proposed method minimizes the sum of losses caused by concentration over-potential, pumping and diffusion. Furthermore, for the deployed Nafion 115 membrane, it is observed that diffusion losses increase with stack SoC. Therefore, to decrease stack SoC and lower diffusion losses, a higher flow rate during charging than during discharging is reasonable.
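
    The conventional baseline the authors compare against, Faraday's first law scaled by a flow factor, can be sketched as below; the concentration and flow-factor values are illustrative, and any SoC-dependent correction is omitted.

      FARADAY = 96485.0  # C/mol

      def pump_flow_rate(current_a, n_cells, c_vanadium=1600.0,
                         flow_factor=8.0):
          # Conventional VRFB flow-rate rule: the stoichiometric rate
          # from Faraday's first law (one electron per vanadium ion),
          # scaled by an empirical flow factor. Returns m^3/s with
          # c_vanadium in mol/m^3; all values are illustrative.
          return flow_factor * n_cells * abs(current_a) / (FARADAY * c_vanadium)

      # Example: 20-cell stack at 60 A, converted to L/min.
      print(pump_flow_rate(60.0, 20) * 6.0e4, "L/min")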

  11. Stage-discharge rating curves based on satellite altimetry and modeled discharge in the Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; Dias de Paiva, Rodrigo; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stephane; Garambois, Pierre-André; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frederique

    2016-05-01

    In this study, rating curves (RCs) were determined by applying satellite altimetry to a poorly gauged basin. This study demonstrates the synergistic application of remote sensing and watershed modeling to capture the dynamics and quantity of flow in the Amazon River Basin. Three major advancements for estimating basin-scale patterns in river discharge are described. The first advancement is the preservation of the hydrological meanings of the parameters expressed by Manning's equation to obtain a data set containing the elevations of the river beds throughout the basin. The second advancement is the provision of parameter uncertainties and, therefore, the uncertainties in the rated discharge. The third advancement concerns estimating the discharge while considering backwater effects. We analyzed the Amazon Basin using nearly one thousand series that were obtained from ENVISAT and Jason-2 altimetry for more than 100 tributaries. Discharge values and related uncertainties were obtained from the rainfall-discharge MGB-IPH model. We used a global optimization algorithm based on Markov chain Monte Carlo in a Bayesian framework to determine the rating curves. The data were randomly allocated into 80% calibration and 20% validation subsets. A comparison with the validation samples produced a Nash-Sutcliffe efficiency (Ens) of 0.68. When the MGB discharge uncertainties were less than 5%, the Ens value increased to 0.81 (mean). A comparison with the in situ discharge resulted in an Ens value of 0.71 for the validation samples (and 0.77 for calibration). The Ens values at the mouths of the rivers that experienced backwater effects significantly improved when the mean monthly slope was included in the RC. Our RCs were not mission-dependent, and the Ens value was preserved when applying ENVISAT rating curves to Jason-2 altimetry at crossovers. The cease-to-flow parameter of our RCs provided a good proxy for determining river bed elevation. This proxy was validated
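
    A rating curve of the kind described, a Manning-like power law with a cease-to-flow elevation Z0 acting as the river-bed proxy, can be fitted as below. The stage-discharge pairs are synthetic placeholders, and the paper's Bayesian MCMC treatment of parameter uncertainty is replaced here by a plain least-squares fit.

      import numpy as np
      from scipy.optimize import curve_fit

      def rating_curve(stage, a, z0, b):
          # Q = a * (Z - Z0)**b; b is near 5/3 for a wide rectangular
          # channel under Manning's equation, and Z0 is the
          # cease-to-flow (river-bed) elevation.
          return a * np.clip(stage - z0, 1e-6, None) ** b

      stage = np.array([10.2, 11.0, 11.8, 12.5, 13.1, 13.9])       # altimetric (m)
      q_mgb = np.array([900., 1600., 2600., 3800., 5100., 6900.])  # modeled (m3/s)
      (a, z0, b), _ = curve_fit(rating_curve, stage, q_mgb,
                                p0=[500.0, 9.0, 1.67], maxfev=10000)
      print("fitted bed-elevation proxy z0 =", round(z0, 2), "m")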

  12. Predictive Finite Rate Model for Oxygen-Carbon Interactions at High Temperature

    NASA Astrophysics Data System (ADS)

    Poovathingal, Savio

    An oxidation model for carbon surfaces is developed to predict ablation rates for carbon heat shields used in hypersonic vehicles. Unlike existing empirical models, the approach used here was to probe gas-surface interactions individually and then based on an understanding of the relevant fundamental processes, build a predictive model that would be accurate over a wide range of pressures and temperatures, and even microstructures. Initially, molecular dynamics was used to understand the oxidation processes on the surface. The molecular dynamics simulations were compared to molecular beam experiments and good qualitative agreement was observed. The simulations reproduced cylindrical pitting observed in the experiments where oxidation was rapid and primarily occurred around a defect. However, the studies were limited to small systems at low temperatures and could simulate time scales only of the order of nanoseconds. Molecular beam experiments at high surface temperature indicated that a majority of surface reaction products were produced through thermal mechanisms. Since the reactions were thermal, they occurred over long time scales which were computationally prohibitive for molecular dynamics to simulate. The experiments provided detailed dynamical data on the scattering of O, O2, CO, and CO2 and it was found that the data from molecular beam experiments could be used directly to build a model. The data was initially used to deduce surface reaction probabilities at 800 K. The reaction probabilities were then incorporated into the direct simulation Monte Carlo (DSMC) method. Simulations were performed where the microstructure was resolved and dissociated oxygen convected and diffused towards it. For a gas-surface temperature of 800 K, it was found that despite CO being the dominant surface reaction product, a gas-phase reaction forms significant CO2 within the microstructure region. It was also found that surface area did not play any role in concentration of

  13. The Relationship Between Hospital Value-Based Purchasing Program Scores and Hospital Bond Ratings.

    PubMed

    Rangnekar, Anooja; Johnson, Tricia; Garman, Andrew; O'Neil, Patricia

    2015-01-01

    Tax-exempt hospitals and health systems often borrow long-term debt to fund capital investments. Lenders use bond ratings as a standard metric to assess whether to lend funds to a hospital. Credit rating agencies have historically relied on financial performance measures and a hospital's ability to service debt obligations to determine bond ratings. With the growth in pay-for-performance-based reimbursement models, rating agencies are expanding their hospital bond rating criteria to include hospital utilization and value-based purchasing (VBP) measures. In this study, we evaluated the relationship between the Hospital VBP domains--Clinical Process of Care, Patient Experience of Care, Outcome, and Medicare Spending per Beneficiary (MSPB)--and hospital bond ratings. Given the historical focus on financial performance, we hypothesized that hospital bond ratings are not associated with any of the Hospital VBP domains. This was a retrospective, cross-sectional study of all hospitals that were rated by Moody's for fiscal year 2012 and participated in the Centers for Medicare & Medicaid Services' VBP program as of January 2014 (N = 285). Of the 285 hospitals in the study, 15% had been assigned a bond rating of Aa, and 46% had been assigned an A rating. Using a binary logistic regression model, we found an association between MSPB only and bond ratings, after controlling for other VBP and financial performance scores; however, MSPB did not improve the overall predictive accuracy of the model. Inclusion of VBP scores in the methodology used to determine hospital bond ratings is likely to affect hospital bond ratings in the near term.

  14. Expectation Maximization Algorithm for Box-Cox Transformation Cure Rate Model and Assessment of Model Misspecification Under Weibull Lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, Narayanaswamy

    2018-05-01

    In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.

  15. A study of the thermoregulatory characteristics of a liquid-cooled garment with automatic temperature control based on sweat rate: Experimental investigation and biothermal man-model development

    NASA Technical Reports Server (NTRS)

    Chambers, A. B.; Blackaby, J. R.; Miles, J. B.

    1973-01-01

    Experimental results for three subjects walking on a treadmill at exercise rates of up to 590 watts showed that thermal comfort could be maintained in a liquid cooled garment by using an automatic temperature controller based on sweat rate. The addition of head- and neck-cooling to an Apollo type liquid cooled garment increased its effectiveness and resulted in greater subjective comfort. The biothermal model of man developed in the second portion of the study utilized heat rates and exchange coefficients based on the experimental data, and included the cooling provisions of a liquid-cooled garment with automatic temperature control based on sweat rate. Simulation results were good approximations of the experimental results.

  16. Real-time motion-based H.263+ frame rate control

    NASA Astrophysics Data System (ADS)

    Song, Hwangjun; Kim, JongWon; Kuo, C.-C. Jay

    1998-12-01

    Most existing H.263+ rate control algorithms, e.g., the one adopted in the near-term test model (TMN8), focus on macroblock-layer rate control and low latency under the assumptions of a constant frame rate and a constant bit rate (CBR) channel. These algorithms do not accommodate transmission bandwidth fluctuation efficiently, and the resulting video quality can be degraded. In this work, we propose a new H.263+ rate control scheme which supports the variable bit rate (VBR) channel through the adjustment of the encoding frame rate and quantization parameter. A fast algorithm for encoding frame rate control, based on the inherent motion information within a sliding window of the underlying video, is developed to efficiently pursue a good tradeoff between spatial and temporal quality. The proposed rate control algorithm also takes the time-varying bandwidth characteristic of the Internet into account and is able to accommodate the change accordingly. Experimental results are provided to demonstrate the superior performance of the proposed scheme.
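
    The frame-rate decision can be sketched as a simple rule over a sliding window of motion activity: high motion favors temporal resolution, low motion lets bits go to spatial quality. The activity measure, thresholds, and candidate rates below are illustrative assumptions, not the paper's algorithm.

      def choose_frame_rate(motion_window, bandwidth_kbps,
                            rates=(7.5, 10.0, 15.0, 30.0)):
          # motion_window: recent per-frame motion activity values
          # (e.g., mean macroblock motion-vector magnitudes).
          # Thresholds and candidate rates are illustrative only.
          activity = sum(motion_window) / len(motion_window)
          if activity > 8.0 and bandwidth_kbps > 128.0:
              return rates[3]   # fast motion, ample bandwidth
          if activity > 4.0:
              return rates[2]
          if activity > 2.0:
              return rates[1]
          return rates[0]       # near-static scene

      print(choose_frame_rate([9.1, 8.4, 10.2], bandwidth_kbps=256.0))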

  17. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
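
    The key adjustment is in the variance: with multiple-fatality incidents, the variance of the total count under a compound Poisson process is estimated by the sum of squared incident sizes rather than by the count itself, which reduces to the plain Poisson variance when every incident has one death. A normal-approximation sketch follows; the paper's closed-form estimators differ in detail.

      import numpy as np

      def rate_ci(deaths_per_incident, person_years, z=1.96):
          # Point estimate and normal-approximation CI for a mortality
          # rate when incidents can involve multiple deaths.
          n = np.asarray(deaths_per_incident)
          rate = n.sum() / person_years
          se = np.sqrt(np.sum(n ** 2)) / person_years   # compound-Poisson variance
          return rate, (rate - z * se, rate + z * se)

      # Ten single-death incidents plus one five-death incident:
      print(rate_ci([1] * 10 + [5], person_years=2.5e5))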

  18. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    PubMed

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of hot-spot ignition, low-pressure slow burning and high-pressure fast reaction terms for shock initiation of multi-component Plastic Bonded Explosives (PBX). Specifically, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term and its reaction rate are obtained through a "mixing rule" over the explosive components; new expressions for both the low-pressure slow burning term and the high-pressure fast reaction term are also obtained by establishing relationships between the reaction rate of the multi-component PBX explosive and those of its explosive components, based on the corresponding terms of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to numerically simulate the shock initiation process of the PBXC03 and PBXC10 multi-component PBX explosives, and the numerical results for the pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data.

  19. Strain Rate Dependent Material Model for Orthotropic Metals

    NASA Astrophysics Data System (ADS)

    Vignjevic, Rade

    2016-08-01

    In manufacturing processes anisotropic metals are often exposed to loading at high strain rates in the range from 10^2 s^-1 to 10^6 s^-1 (e.g. stamping, cold spraying and explosive forming). These types of loading often involve generation and propagation of shock waves within the material. The material behaviour under such complex loading needs to be accurately modelled in order to optimise the manufacturing process and achieve appropriate properties of the manufactured component. The presented research relates to the development and validation of a thermodynamically consistent, physically based constitutive model for metals under high rate loading. The model is capable of modelling damage, failure, and formation and propagation of shock waves in anisotropic metals. The model has two main parts: the strength part, which defines the material response to shear deformation, and an equation of state (EOS), which defines the material response to isotropic volumetric deformation [1]. The constitutive model was implemented into the transient nonlinear finite element code DYNA3D [2] and our in-house SPH code. Limited model validation was performed by simulating a number of high velocity material characterisation and validation impact tests. The new damage model was developed in the framework of configurational continuum mechanics and irreversible thermodynamics with internal state variables. The use of the multiplicative decomposition of the deformation gradient makes the model applicable to arbitrary plastic and damage deformations. To account for the physical mechanisms of failure, the concept of thermally activated damage initially proposed by Tuller and Bucher [3] and Klepaczko [4] was adopted as the basis for the new damage evolution model. This makes the proposed damage/failure model compatible with the Mechanical Threshold Strength (MTS) model (Follansbee and Kocks [5]; Chen and Gray [6]), which was used to control the evolution of flow stress during plastic deformation.

  20. Developing models for the prediction of hospital healthcare waste generation rate.

    PubMed

    Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe

    2016-01-01

    An increase in the number of health institutions, along with frequent use of disposable medical products, has contributed to the increase of the healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generation beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, there is no mathematical model developed for Ethiopian hospitals to predict healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition and generation rate and to develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R² = 0.965), and a weak one with the number of outpatients (R² = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching and private). In these models, the numbers of inpatients and outpatients were revealed to be significant factors in the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies at different hospitals. Therefore, different models were developed based on the types of hospitals.

  1. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
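
    A rough sketch of the local fitting idea follows, with assumed window length, decay constant, and frequency grid (none of these values come from the paper): over each window, exponentially decaying cosines at candidate frequencies in the physiological range are fitted by least squares, and the best-fitting frequency with its goodness of fit scores the presence of a beat.

```python
import numpy as np

def beat_score(window, fs, freqs=np.arange(0.5, 3.0, 0.05), decay=2.0):
    """Least-squares fit of exponentially decaying cosines over a grid of
    physiological frequencies; returns (goodness of fit, best frequency).
    The decay constant (1/s) and frequency grid are assumed values."""
    t = np.arange(len(window)) / fs
    w = window - window.mean()
    best = (0.0, None)
    for f in freqs:
        # Decaying cosine and sine columns absorb the unknown phase
        env = np.exp(-decay * t)
        X = np.column_stack([env * np.cos(2 * np.pi * f * t),
                             env * np.sin(2 * np.pi * f * t)])
        coef = np.linalg.lstsq(X, w, rcond=None)[0]
        r2 = 1 - np.sum((w - X @ coef) ** 2) / np.sum(w ** 2)
        if r2 > best[0]:
            best = (r2, f)
    return best

# One second of synthetic PPG at 100 Hz with a 1.2 Hz (72 BPM) component
fs = 100
t = np.arange(fs) / fs
r2, f = beat_score(np.exp(-2 * t) * np.cos(2 * np.pi * 1.2 * t), fs)
print(f"fit R^2 = {r2:.2f} at {f:.2f} Hz ({60 * f:.0f} BPM)")
```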

  2. Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Chang, Yuwen

    2016-12-01

    Considering a stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion, where the underlying price processes follow jump-diffusion processes (Wiener and Poisson processes). Then the corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.

  3. 9 CFR 592.510 - Base time rate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    9 CFR, Animals and Animal Products; Egg Products Inspection; Voluntary Inspection of Egg Products; Fees and Charges; § 592.510 Base time rate. The base time rate for voluntary inspection services for egg products is $47.79 per hour per program...

  4. 9 CFR 391.2 - Base time rate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    9 CFR, Animals and Animal Products; Food Safety and Inspection Service, Department of Agriculture; Accreditation; § 391.2 Base time rate. The base time rate for inspection services provided pursuant to §§ 350.7...

  5. 9 CFR 391.2 - Base time rate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    9 CFR, Animals and Animal Products; Food Safety and Inspection Service, Department of Agriculture; Accreditation; § 391.2 Base time rate. The base time rate for inspection services provided pursuant to §§ 350.7...

  6. Analysis of mean seismic ground motion and its uncertainty based on the UCERF3 geologic slip rate model with uncertainty for California

    USGS Publications Warehouse

    Zeng, Yuehua

    2018-01-01

    The Uniform California Earthquake Rupture Forecast v.3 (UCERF3) model (Field et al., 2014) considers epistemic uncertainty in fault‐slip rate via the inclusion of multiple rate models based on geologic and/or geodetic data. However, these slip rates are commonly clustered about their mean value and do not reflect the broader distribution of possible rates and associated probabilities. Here, we consider both a double‐truncated 2σ Gaussian and a boxcar distribution of slip rates and use a Monte Carlo simulation to sample the entire range of the distribution for California fault‐slip rates. We compute the seismic hazard following the methodology and logic‐tree branch weights applied to the 2014 national seismic hazard model (NSHM) for the western U.S. region (Petersen et al., 2014, 2015). By applying a new approach developed in this study to the probabilistic seismic hazard analysis (PSHA) using precomputed rates of exceedance from each fault as a Green’s function, we reduce the computer time by about 10^5‐fold and apply it to the mean PSHA estimates with 1000 Monte Carlo samples of fault‐slip rates to compare with results calculated using only the mean or preferred slip rates. The difference in the mean probabilistic peak ground motion corresponding to a 2% in 50‐yr probability of exceedance is less than 1% on average over all of California for both the Gaussian and boxcar probability distributions for slip‐rate uncertainty but reaches about 18% in areas near faults compared with that calculated using the mean or preferred slip rates. The average uncertainties in 1σ peak ground‐motion level are 5.5% and 7.3% of the mean with the relative maximum uncertainties of 53% and 63% for the Gaussian and boxcar probability density function (PDF), respectively.
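
    The scaling step at the core of this approach can be illustrated with a small sketch. The code below is a hypothetical toy version (invented fault curves and slip rates, not UCERF3 data): per-fault exceedance-rate curves are precomputed once, then each Monte Carlo draw of slip rates only rescales and sums them before inverting for the 2%-in-50-yr ground motion.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical precomputed per-fault exceedance-rate curves ("Green's
# functions" in the paper's sense), evaluated at each fault's mean slip rate.
pga = np.linspace(0.05, 2.0, 40)                 # ground-motion levels (g)
mean_slip = np.array([5.0, 12.0, 2.0])           # mm/yr (invented)
sigma = 0.3 * mean_slip                          # invented 1-sigma
base = np.array([(k + 1) * 1e-3 * np.exp(-3 * pga) for k in range(3)])

target = -np.log(1 - 0.02) / 50                  # 2%-in-50-yr annual rate

samples = []
for _ in range(1000):
    # Double-truncated (+/-2 sigma) Gaussian draw (clipped for simplicity)
    s = np.clip(rng.normal(mean_slip, sigma),
                mean_slip - 2 * sigma, mean_slip + 2 * sigma)
    # Earthquake rates, hence exceedance rates, scale linearly with slip
    # rate, so each draw only rescales the precomputed curves and sums.
    total = ((s / mean_slip)[:, None] * base).sum(axis=0)
    samples.append(np.interp(target, total[::-1], pga[::-1]))

print(f"mean PGA: {np.mean(samples):.3f} g, 1-sigma: {np.std(samples):.3f} g")
```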

  7. Web-Based Physician Ratings for California Physicians on Probation.

    PubMed

    Murphy, Gregory P; Awad, Mohannad A; Osterberg, E Charles; Gaither, Thomas W; Chumnarnsongkhroh, Thanabhudee; Washington, Samuel L; Breyer, Benjamin N

    2017-08-22

    Web-based physician rating systems are a popular tool to help patients evaluate physicians. Websites help patients find information regarding physician licensure, office hours, and disciplinary records along with ratings and reviews. Whether higher patient ratings are associated with higher quality of care is unclear. The aim of this study was to characterize the impact of physician probation on consumer ratings by comparing website ratings between doctors on probation and matched controls. A retrospective review of data from the Medical Board of California for physicians placed on probation from December 1989 to September 2015 was performed. Violations were categorized into nine types. Nonprobation controls were matched by zip code and specialty with probation cases in a 2:1 ratio using the California Department of Consumer Affairs website. Web-based reviews were recorded from vitals.com, healthgrades.com, and ratemds.com (ratings range from 1-5). A total of 410 physicians were placed on probation for 866 violations. The mean (standard deviation [SD]) number of ratings per doctor was 5.2 (7.8) for cases and 4 (6.3) for controls (P=.003). The mean rating for physicians on probation was 3.7 (1.6) compared with 4.0 (1.0) for controls when all three rating websites were pooled (P<.001). Violations for medical documentation, incompetence, prescription negligence, and fraud were found to have statistically significantly lower rating scores. Conversely, scores for professionalism, drugs or alcohol, crime, sexual misconduct, and personal illness were similar between cases and controls. In a univariate analysis, probation was found to be associated with a lower rating, odds ratio=1.5 (95% CI 1.0-2.2). This association was not significant in a multivariate model when we included age and gender. Web-based physician ratings were lower for doctors on probation, indicating that patients may perceive a difference. Despite these statistical findings, the absolute

  8. Modeled Estimates of Soil and Dust Ingestion Rates for Children

    EPA Science Inventory

    Daily soil/dust ingestion rates typically used in exposure and risk assessments are based on tracer element studies, which have a number of limitations and do not separate contributions from soil and dust. This article presents an alternate approach of modeling soil and dust inge...

  9. Improvement of specific growth rate of Pichia pastoris for effective porcine interferon-α production with an on-line model-based glycerol feeding strategy.

    PubMed

    Gao, Min-Jie; Zheng, Zhi-Yong; Wu, Jian-Rong; Dong, Shi-Juan; Li, Zhen; Jin, Hu; Zhan, Xiao-Bei; Lin, Chi-Chung

    2012-02-01

    Effective expression of porcine interferon-α (pIFN-α) with recombinant Pichia pastoris was conducted in a bench-scale fermentor. The influence of the glycerol feeding strategy on the specific growth rate and protein production was investigated. The traditional DO-stat feeding strategy led to a very low cell growth rate, resulting in a dry cell weight (DCW) of only about 90 g/L during the subsequent induction phase. The previously reported Artificial Neural Network Pattern Recognition (ANNPR) model-based glycerol feeding strategy improved the cell density to 120 g DCW/L, while the specific growth rate decreased from 0.15-0.18 h^-1 to 0.03-0.08 h^-1 during the last 10 h of the glycerol feeding stage, leading to variation in porcine interferon-α production, as the glycerol feeding scheme had a significant effect on the induction phase. This problem was resolved by an improved ANNPR model-based feeding strategy that maintained the specific growth rate above 0.11 h^-1. With this feeding strategy, the pIFN-α concentration reached a level of 1.43 g/L, more than 1.5-fold higher than that obtained with the previously adopted feeding strategy. Our results showed that increasing the specific growth rate favored target protein production and that the glycerol feeding method directly influenced the induction stage. Consequently, higher cell density and specific growth rate as well as effective porcine interferon-α production have been achieved by our novel glycerol feeding strategy.
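
    As an illustration of the mass balance that underlies such feeding strategies, the sketch below computes an open-loop exponential glycerol feed profile for a target specific growth rate. It is a generic stand-in, not the paper's ANNPR controller; the yield coefficient, feed concentration, and starting conditions are all assumed values.

```python
import numpy as np

def glycerol_feed_rate(mu_set, x_gdw_l, v_l, y_xs=0.5, s_feed=500.0):
    """Open-loop feed rate (L/h) from the fed-batch mass balance
    F = mu_set * X * V / (Y_xs * S_feed); a generic stand-in for the
    paper's ANNPR model-based controller. Yield coefficient (g DCW per
    g glycerol) and feed concentration (g/L) are assumed values."""
    return mu_set * x_gdw_l * v_l / (y_xs * s_feed)

# Feed profile over the glycerol phase, assuming growth tracks the setpoint
mu_set, x0, v = 0.12, 30.0, 2.0     # 1/h, g DCW/L, L (assumed)
for t in range(0, 13, 3):
    x = x0 * np.exp(mu_set * t)     # biomass if mu_set is maintained
    f = glycerol_feed_rate(mu_set, x, v)
    print(f"t = {t:2d} h: X = {x:5.1f} g DCW/L, F = {1000 * f:6.1f} mL/h")
```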

  10. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  11. Modelling of Dynamic Rock Fracture Process with a Rate-Dependent Combined Continuum Damage-Embedded Discontinuity Model Incorporating Microstructure

    NASA Astrophysics Data System (ADS)

    Saksala, Timo

    2016-10-01

    This paper deals with numerical modelling of rock fracture under dynamic loading. To this end, a combined continuum damage-embedded discontinuity model is applied in finite element modelling of crack propagation in rock. In this model, the strong loading rate sensitivity of rock is captured by the rate-dependent continuum scalar damage model that controls the pre-peak nonlinear hardening part of rock behaviour. The post-peak exponential softening part of the rock behaviour is governed by the embedded displacement discontinuity model describing the mode I, mode II and mixed mode fracture of rock. Rock heterogeneity is incorporated in the present approach by random description of the rock mineral texture based on the Voronoi tessellation. The model performance is demonstrated in numerical examples where the uniaxial tension and compression tests on rock are simulated. Finally, the dynamic three-point bending test of a semicircular disc is simulated in order to show that the model correctly predicts the strain rate-dependent tensile strengths as well as the failure modes of rock in this test. Special emphasis is laid on modelling the loading rate sensitivity of tensile strength of Laurentian granite.

  12. An "Emergent Model" for Rate of Change

    ERIC Educational Resources Information Center

    Herbert, Sandra; Pierce, Robyn

    2008-01-01

    Does speed provide a "model for" rate of change in other contexts? Does JavaMathWorlds (JMW), animated simulation software, assist in the development of the "model for" rate of change? This project investigates the transference of understandings of rate gained in a motion context to a non-motion context. Students were 27 14-15 year old students at…

  13. A review of air exchange rate models for air pollution exposure assessments.

    PubMed

    Breen, Michael S; Schultz, Bradley D; Sohn, Michael D; Long, Thomas; Langstaff, John; Williams, Ronald; Isaacs, Kristin; Meng, Qing Yu; Stallings, Casson; Smith, Luther

    2014-11-01

    A critical aspect of air pollution exposure assessments is estimation of the air exchange rate (AER) for various buildings where people spend their time. The AER, which is the rate of exchange of indoor air with outdoor air, is an important determinant for entry of outdoor air pollutants and for removal of indoor-emitted air pollutants. This paper presents an overview and critical analysis of the scientific literature on empirical and physically based AER models for residential and commercial buildings; the models highlighted here are feasible for exposure assessments as extensive inputs are not required. Models are included for the three types of airflows that can occur across building envelopes: leakage, natural ventilation, and mechanical ventilation. Guidance is provided to select the preferable AER model based on available data, desired temporal resolution, types of airflows, and types of buildings included in the exposure assessment. For exposure assessments with some limited building leakage or AER measurements, strategies are described to reduce AER model uncertainty. This review will facilitate the selection of AER models in support of air pollution exposure assessments.
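
    As a concrete example of the physically based leakage models the review covers, the sketch below implements a simplified residential infiltration calculation of the LBL type, in which leakage airflow is driven by the indoor-outdoor temperature difference and wind speed. The default stack and wind coefficients are illustrative values only; real assessments take them from published tables for the building's height and shielding.

```python
import numpy as np

def aer_lbl(leakage_cm2, volume_m3, t_in_c, t_out_c, wind_ms,
            cs=0.000145, cw=0.000104):
    """Air exchange rate (1/h) from an LBL-type infiltration model:
    Q [L/s] = A_L [cm^2] * sqrt(Cs*|dT| + Cw*U^2).
    cs, cw are illustrative stack/wind coefficients; real assessments
    take them from published tables for building height and shielding."""
    dT = abs(t_in_c - t_out_c)
    q_ls = leakage_cm2 * np.sqrt(cs * dT + cw * wind_ms ** 2)  # L/s
    return 3.6 * q_ls / volume_m3   # 1 L/s = 3.6 m^3/h

# 500 cm^2 leakage area, 300 m^3 house, 21 C in, 0 C out, 4 m/s wind
print(f"AER = {aer_lbl(500, 300, 21, 0, 4):.2f} h^-1")
```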

  14. Exploring Latent Class Based on Growth Rates in Number Sense Ability

    ERIC Educational Resources Information Center

    Kim, Dongil; Shin, Jaehyun; Lee, Kijyung

    2013-01-01

    The purpose of this study was to explore latent class based on growth rates in number sense ability by using latent growth class modeling (LGCM). LGCM is one of the noteworthy methods for identifying growth patterns of the progress monitoring within the response to intervention framework in that it enables us to analyze latent sub-groups based not…

  15. Performability modeling based on real data: A casestudy

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1987-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
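
    A minimal sketch of the reward computation is given below, with invented numbers: given an embedded jump chain, mean holding times, and per-state reward rates, the time-stationary expected reward rate of a semi-Markov process weights each state's reward by its visit probability and holding time.

```python
import numpy as np

# Hypothetical states: 0 = normal, 1 = transient error, 2 = degraded.
# P is the embedded jump chain, h[i] the mean holding time in state i,
# r[i] the reward rate (useful service per unit time) in state i.
P = np.array([[0.0, 0.8, 0.2],
              [0.9, 0.0, 0.1],
              [1.0, 0.0, 0.0]])
h = np.array([100.0, 2.0, 10.0])
r = np.array([1.0, 0.3, 0.5])

# Stationary distribution of the embedded chain: pi = pi P
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi /= pi.sum()

# Long-run expected reward rate weights each state by pi_i * h_i
expected_reward = (pi * h * r).sum() / (pi * h).sum()
print(f"expected reward rate: {expected_reward:.3f}")
```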

  16. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
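
    For reference, the sketch below evaluates the conditional intensity of the ETAS model named above, in which each past earthquake adds an aftershock rate that decays with time according to the modified Omori law. Parameter values are illustrative, not fitted to any catalog.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu=0.2, K=0.05,
                   alpha=1.0, c=0.01, p=1.2, m0=3.0):
    """Conditional intensity of the ETAS model (Ogata, 1988):
    lambda(t) = mu + sum_{t_i < t} K*exp(alpha*(m_i - m0)) / (t - t_i + c)^p
    All parameter values here are illustrative, not fitted."""
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m0))
                       / (dt + c) ** p)

# Toy catalog: occurrence times (days) and magnitudes
times = np.array([1.0, 3.5, 3.6, 10.0])
mags = np.array([4.0, 3.2, 3.1, 4.5])
print(f"lambda(11.0) = {etas_intensity(11.0, times, mags):.3f} events/day")
```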

  17. Source mass eruption rate retrieved from satellite-based data using statistical modelling

    NASA Astrophysics Data System (ADS)

    Gouhier, Mathieu; Guillin, Arnaud; Azzaoui, Nourddine; Eychenne, Julia; Valade, Sébastien

    2015-04-01

    Ash clouds emitted during volcanic eruptions have long been recognized as a major hazard likely to have dramatic consequences on aircraft, environment and people. Thus, the International Civil Aviation Organization (ICAO) established nine Volcanic Ash Advisory Centers (VAACs) around the world, whose mission is to forecast the location and concentration of ash clouds over hours to days, using volcanic ash transport and dispersion models (VATDs). Those models use input parameters such as plume height (PH), particle size distribution (PSD), and mass eruption rate (MER), the latter being a key parameter as it directly controls the amount of ash injected into the atmosphere. The MER can be obtained rather accurately from detailed ground deposit studies, but this method does not match the operational requirements in case of a volcanic crisis. Thus, VAACs use empirical laws to determine the MER from the estimation of the plume height. In some cases, this method can be difficult to apply, either because plume height data are not available or because uncertainties related to this method are too large. We propose here an alternative method based on the utilization of satellite data to assess the MER at the source during explosive eruptions. Satellite-based techniques allow fine ash cloud loading to be quantitatively retrieved far from the source vent. Those measurements can be carried out in a systematic and real-time fashion, in particular using geostationary satellites. We tested here the relationship likely to exist between the amount of fine ash dispersed in the atmosphere and that of coarser tephra deposited on the ground, the sum of both contributions yielding an estimate of the MER. For this purpose we examined 19 eruptions (of known duration) in detail for which both (i) the amount of fine ash dispersed in the atmosphere, and (ii) the mass of tephra deposited on the ground have been estimated and published. We combined these data with contextual information that may

  18. Measurement and Modeling of Respiration Rate of Tomato (Cultivar Roma) for Modified Atmosphere Storage.

    PubMed

    Kandasamy, Palani; Moitra, Ranabir; Mukherjee, Souti

    2015-01-01

    Experiments were conducted to determine the respiration rate of tomato at 10, 20 and 30 °C using a closed respiration system. Oxygen depletion and carbon dioxide accumulation in the system containing tomato were monitored. Respiration rate was found to decrease with increasing CO2 and decreasing O2 concentration. A Michaelis-Menten type model based on enzyme kinetics was evaluated using the experimental data generated for predicting the respiration rate. The model parameters obtained from the respiration rate at different O2 and CO2 concentration levels were used to fit the model against the storage temperatures. The fit was fair (R² = 0.923 to 0.970) when the respiration rate was expressed as a function of O2 concentration. Since the inhibition constant for CO2 tended towards negative values, the model was modified to be a function of O2 concentration only. The modified model was fitted to the experimental data and showed good agreement (R² = 0.998) with the experimentally estimated respiration rate.
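
    A minimal sketch of fitting such a modified model is shown below, with invented data points standing in for the measurements: the respiration rate R = Vm·[O2]/(Km + [O2]) is fitted by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented illustration data: O2 concentration (%) and respiration
# rate (e.g., mL CO2 kg^-1 h^-1); the paper's values are not used here.
o2 = np.array([2.0, 4.0, 6.0, 10.0, 15.0, 21.0])
resp = np.array([4.1, 6.8, 8.6, 10.9, 12.4, 13.5])

def michaelis_menten(o2, vm, km):
    """Respiration rate as a function of O2 only: R = Vm*O2/(Km + O2)."""
    return vm * o2 / (km + o2)

(vm, km), _ = curve_fit(michaelis_menten, o2, resp, p0=(15.0, 5.0))
print(f"Vm = {vm:.2f}, Km = {km:.2f}")
```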

  19. An enhanced rate-based emission trading program for NOX: the Dutch model.

    PubMed

    Sholtz, A M; Van Amburg, B; Wochnick, V K

    2001-12-01

    Since 1997 government and industry in The Netherlands have been engaged in intensive policy discussions on how to design an emission trading program that would satisfy the Government's policy objectives within the national and international regulatory framework and accommodate industry's need for a flexible and cost-effective approach. Early on in the discussion the most promising solution was a rate-based approach, which dynamically allocated saleable emission credits based on a performance standard rate and actual energy used by facilities. All industrial facilities above a threshold of 20 MWth would be judged on their ability to meet this performance rate. Those "cleaner" than the standard can sell excess credits to others with an allocation that is less than their actual NOX emission. With some changes in law, such a design could be made to fit well into the national and EU legislative framework while at the same time uniquely meeting industry's requirement of flexibility toward economic growth and facility expansion. (An analysis of the legislative changes required will be given in a separate paper by Chris Dekkers.) However, the environmental outcome of such a system is not as certain as under an absolute emission cap. At the request of the Netherlands Ministry of Housing, Spatial Planning and the Environment (VROM), Automated Credit Exchange (ACE), in close cooperation with the working group of government and industry representatives introduced a number of features into the Dutch NOX program allowing full exploitation of market mechanisms while allowing intermediate adjustments in the performance standard rates. The design is geared toward meeting environmental targets without jeopardizing the trading market the program intends to create. The paper discusses the genesis of the two-tier credit system ACE helped to design, explains the differences between primary (fixed) and secondary (variable) credits, and outlines how the Dutch system is expected to

  20. Did case-based payment influence surgical readmission rates in France? A retrospective study

    PubMed Central

    Vuagnat, Albert; Yilmaz, Engin; Roussot, Adrien; Rodwin, Victor; Gadreau, Maryse; Bernard, Alain; Creuzot-Garcher, Catherine; Quantin, Catherine

    2018-01-01

    Objectives To determine whether implementation of a case-based payment system changed all-cause readmission rates in the 30 days following discharge after surgery, we analysed all surgical procedures performed in all hospitals in France before (2002–2004), during (2005–2008) and after (2009–2012) its implementation. Setting Our study is based on claims data for all surgical procedures performed in all acute care hospitals with >300 surgical admissions per year (740 hospitals) in France over 11 years (2002–2012; n=51.6 million admissions). Interventions We analysed all-cause 30-day readmission rates after surgery using a logistic regression model and an interrupted time series analysis. Results The overall 30-day all-cause readmission rate following discharge after surgery increased from 8.8% to 10.0% (P<0.001) for the public sector and from 5.9% to 8.6% (P<0.001) for the private sector. Interrupted time series models revealed a significant linear increase in readmission rates over the study period in all types of hospitals. However, the implementation of case-based payment was only associated with a significant increase in rehospitalisation rates for private hospitals (P<0.001). Conclusion In France, the increase in the readmission rate appears to be relatively steady in both the private and public sector but appears not to have been affected by the introduction of a case-based payment system after accounting for changes in care practices in the public sector. PMID:29391376

  1. An Improved Computing Method for 3D Mechanical Connectivity Rates Based on a Polyhedral Simulation Model of Discrete Fracture Network in Rock Masses

    NASA Astrophysics Data System (ADS)

    Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye

    2018-06-01

    Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and the traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectiveness and validity of the improved method were verified indirectly.

  2. Development of a QTL-environment-based predictive model for node addition rate in common bean.

    PubMed

    Zhang, Li; Gezan, Salvador A; Eduardo Vallejos, C; Jones, James W; Boote, Kenneth J; Clavijo-Michelangeli, Jose A; Bhakta, Mehul; Osorno, Juan M; Rao, Idupulapati; Beebe, Stephen; Roman-Paoli, Elvin; Gonzalez, Abiezer; Beaver, James; Ricaurte, Jaumer; Colbert, Raphael; Correll, Melanie J

    2017-05-01

    This work reports the effects of the genetic makeup, the environment and the genotype by environment interactions on node addition rate in an RIL population of common bean. This information was used to build a predictive model for node addition rate. To select a plant genotype that will thrive in targeted environments it is critical to understand the genotype by environment interaction (GEI). In this study, multi-environment QTL analysis was used to characterize node addition rate (NAR, nodes day^-1) on the main stem of the common bean (Phaseolus vulgaris L.). This analysis was carried out with field data of 171 recombinant inbred lines that were grown at five sites (Florida, Puerto Rico, 2 sites in Colombia, and North Dakota). Four QTLs (Nar1, Nar2, Nar3 and Nar4) were identified, one of which had significant QTL by environment interactions (QEI), that is, Nar2 with temperature. Temperature was identified as the main environmental factor affecting NAR while day length and solar radiation played a minor role. Integration of sites as covariates into a QTL mixed site-effect model, and further replacing the site component with explanatory environmental covariates (i.e., temperature, day length and solar radiation), yielded a model that explained 73% of the phenotypic variation for NAR with a root mean square error of 16.25% of the mean. The QTL consistency and stability were examined through a tenfold cross validation with different sets of genotypes, and these four QTLs were always detected with 50-90% probability. The final model was evaluated using the leave-one-site-out method to assess the influence of site on node addition rate. These analyses provided a quantitative measure of the effects on NAR of common beans exerted by the genetic makeup, the environment and their interactions.

  3. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel

  4. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel
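
    The trade-off described here can be reproduced with a few lines of simulation. The sketch below uses a scalar LMS-style update (a simple stand-in for the paper's adaptive Bayesian filters, with invented parameters) to show that larger learning rates shorten convergence time at the cost of higher steady-state error, and vice versa.

```python
import numpy as np

rng = np.random.default_rng(0)

# Adaptive estimation of a scalar encoding weight w*: each step updates
# w from a noisy observation. Larger learning rates converge faster but
# fluctuate more in steady state; smaller rates do the opposite.
w_true, n_steps = 2.0, 4000

for lr in (0.001, 0.01, 0.1):
    w = 0.0
    history = np.empty(n_steps)
    for k in range(n_steps):
        x = rng.normal()                     # input (e.g., brain state)
        y = w_true * x + 0.5 * rng.normal()  # noisy neural observation
        w += lr * (y - w * x) * x            # gradient step on squared error
        history[k] = w
    conv = np.argmax(np.abs(history - w_true) < 0.1)  # first step near w*
    sse = history[-1000:].std()                       # steady-state spread
    print(f"lr={lr:5}: ~converged at step {conv}, steady-state std {sse:.4f}")
```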

  5. Base Rates, Contingencies, and Prediction Behavior

    ERIC Educational Resources Information Center

    Kareev, Yaakov; Fiedler, Klaus; Avrahami, Judith

    2009-01-01

    A skew in the base rate of upcoming events can often provide a better cue for accurate predictions than a contingency between signals and events. The authors study prediction behavior and test people's sensitivity to both base rate and contingency; they also examine people's ability to compare the benefits of both for prediction. They formalize…

  6. Modeling of the interest rate policy of the central bank of Russia

    NASA Astrophysics Data System (ADS)

    Shelomentsev, A. G.; Berg, D. B.; Detkov, A. A.; Rylova, A. P.

    2017-11-01

    This paper investigates interactions among money supply, exchange rates, inflation, and nominal interest rates, which are regulating parameters of central bank policy. The study is based on Russian data for 2002-2016. The major findings are: 1) the interest rate demonstrates almost no relation with inflation; 2) the ties between money supply and the nominal interest rate are strong; 3) money supply and inflation show meaningful relations only when compared in terms of their growth rates. We have developed a dynamic model, which can be used in forecasting of macroeconomic processes.

  7. Stationarity test with a direct test for heteroskedasticity in exchange rate forecasting models

    NASA Astrophysics Data System (ADS)

    Khin, Aye Aye; Chau, Wong Hong; Seong, Lim Chee; Bin, Raymond Ling Leh; Teng, Kevin Low Lock

    2017-05-01

    Global economic growth has been decreasing in recent years, manifested by greater exchange rate volatility in the international commodity market. This study attempts to analyze some prominent exchange rate forecasting models for Malaysian commodity trading: univariate ARIMA, ARCH and GARCH models, in conjunction with a stationarity test and direct testing of heteroskedasticity in the residual diagnosis. All forecasting models utilized monthly data from 1990 to 2015. Given a total of 312 observations, the data were used to forecast both short-term and long-term exchange rates. The forecasting power statistics suggested that the forecasting performance of the ARIMA (1, 1, 1) model is more efficient than that of the ARCH (1) and GARCH (1, 1) models. For the ex-post forecast, the exchange rate increased from RM 3.50 per USD in January 2015 to RM 4.47 per USD in December 2015 based on the baseline data. For the short-term ex-ante forecast, the analysis results indicate a decrease in the exchange rate in June 2016 (RM 4.27 per USD) as compared with December 2015. A more appropriate forecasting method for exchange rates is vital to aid the decision-making process and planning on sustainable commodity production in the world economy.
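
    A minimal sketch of the ARIMA(1,1,1) fit-and-forecast step is given below using statsmodels; the series is synthetic (a random walk standing in for the monthly MYR/USD observations), so the numbers are illustrative only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series standing in for the 312 MYR/USD observations
# (1990-2015); in practice, load the actual exchange rate data instead.
rng = np.random.default_rng(1)
rate = pd.Series(3.0 + np.cumsum(rng.normal(0.004, 0.03, 312)),
                 index=pd.date_range("1990-01-01", periods=312, freq="MS"))

model = ARIMA(rate, order=(1, 1, 1))   # the specification the study prefers
fitted = model.fit()
print(fitted.forecast(steps=6))        # six-month ex-ante forecast
```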

  8. Influence of the formation- and passivation rate of boron-oxygen defects for mitigating carrier-induced degradation in silicon within a hydrogen-based model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallam, Brett, E-mail: brett.hallam@unsw.edu.au; Abbott, Malcolm; Nampalli, Nitin

    2016-02-14

    A three-state model is used to explore the influence of defect formation- and passivation rates of carrier-induced degradation related to boron-oxygen complexes in boron-doped p-type silicon solar cells within a hydrogen-based model. The model highlights that the inability to effectively mitigate carrier-induced degradation at elevated temperatures in previous studies is due to the limited availability of defects for hydrogen passivation, rather than being limited by the defect passivation rate. An acceleration of the defect formation rate is also observed to increase both the effectiveness and speed of carrier-induced degradation mitigation, whereas increases in the passivation rate do not lead to a substantial acceleration of the hydrogen passivation process. For high-throughput mitigation of such carrier-induced degradation on finished solar cell devices, two key factors were found to be required: high-injection conditions (such as by using high intensity illumination) to enable an acceleration of defect formation whilst simultaneously enabling a rapid passivation of the formed defects, and a high temperature to accelerate both defect formation and defect passivation whilst still ensuring an effective mitigation of carrier-induced degradation.

  9. Genome Informed Trait-Based Models

    NASA Astrophysics Data System (ADS)

    Karaoz, U.; Cheng, Y.; Bouskill, N.; Tang, J.; Beller, H. R.; Brodie, E.; Riley, W. J.

    2013-12-01

    Trait-based approaches are powerful tools for representing microbial communities across both spatial and temporal scales within ecosystem models. Trait-based models (TBMs) represent the diversity of microbial taxa as stochastic assemblages with a distribution of traits constrained by trade-offs between these traits. Such representation with its built-in stochasticity allows the elucidation of the interactions between the microbes and their environment by reducing the complexity of microbial community diversity into a limited number of functional 'guilds' and letting them emerge across spatio-temporal scales. From the biogeochemical/ecosystem modeling perspective, the emergent properties of the microbial community can be directly translated into predictions of biogeochemical reaction rates and microbial biomass. The accuracy of TBMs depends on the identification of key traits of the microbial community members and on the parameterization of these traits. Current approaches to inform TBM parameterization are empirical (i.e., based on literature surveys). Advances in omic technologies (such as genomics, metagenomics, metatranscriptomics, and metaproteomics) pave the way to better initialize models that can be constrained in a generic or site-specific fashion. Here we describe the coupling of metagenomic data to the development of a TBM representing the dynamics of metabolic guilds from an organic carbon stimulated groundwater microbial community. Illumina paired-end metagenomic data were collected from the community as it transitioned successively through electron-accepting conditions (nitrate-, sulfate-, and Fe(III)-reducing), and used to inform estimates of growth rates and the distribution of metabolic pathways (i.e., aerobic and anaerobic oxidation, fermentation) across a spatially resolved TBM. We use this model to evaluate the emergence of different metabolisms and predict rates of biogeochemical processes over time. We compare our results to observational

  10. Cure rate model with interval censored data.

    PubMed

    Kim, Yang-Jin; Jhun, Myoungshic

    2008-01-15

    In cancer trials, a significant fraction of patients can be cured, that is, the disease is completely eliminated, so that it never recurs. In general, treatments are developed both to increase the patients' chances of being cured and to prolong the survival time among non-cured patients. A cure rate model represents a combination of a cure fraction and a survival model, and can be applied to many clinical studies over several types of cancer. In this article, the cure rate model is considered for interval censored data composed of two time points that bracket the event time of interest. Interval censored data commonly occur in studies of diseases that often progress without symptoms, requiring clinical evaluation for detection (Encyclopedia of Biostatistics. Wiley: New York, 1998; 2090-2095). In our study, an approximate likelihood approach suggested by Goetghebeur and Ryan (Biometrics 2000; 56:1139-1144) is used to derive the likelihood for interval censored data. In addition, a frailty model is introduced to characterize the association between the cure fraction and the survival model. In particular, the positive association between the cure fraction and the survival time is incorporated by imposing a common normal frailty effect. The EM algorithm is used to estimate parameters, and a multiple imputation based on the profile likelihood is adopted for variance estimation. The approach is applied to a smoking cessation study in which the event of interest is a smoking relapse and several covariates, including an intensive care treatment, are evaluated for their effect on both the occurrence of relapse and the non-smoking duration.

  11. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

  12. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    NASA Astrophysics Data System (ADS)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

    The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect their system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of this data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with
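
    The two-stage structure described here (rating curve plus ARMA residual model) is easy to sketch. The code below uses synthetic flow and turbidity series in place of the Esopus Creek data: it fits a log-log rating curve, models the residuals as ARMA(1,2), and combines the two components for a one-step-ahead prediction.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily log-flow and log-turbidity series; the real analysis
# used USGS gauge data near Esopus Creek.
rng = np.random.default_rng(2)
n = 500
log_q = np.cumsum(rng.normal(0, 0.1, n)) + 6.0
log_t = -4.0 + 1.2 * log_q + rng.normal(0, 0.3, n)

# Step 1: log-log rating curve via linear regression
b, a = np.polyfit(log_q, log_t, 1)          # slope, intercept
resid = pd.Series(log_t - (a + b * log_q))

# Step 2: model the rating-curve residuals as ARMA(1,2)
arma = ARIMA(resid, order=(1, 0, 2)).fit()

# Step 3: one-step-ahead prediction combines both components
log_q_next = log_q[-1]                      # e.g., from a streamflow forecast
pred = a + b * log_q_next + arma.forecast(1).iloc[0]
print(f"predicted turbidity: {np.exp(pred):.1f} NTU")
```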

  13. Estimating roadside encroachment rates with the combined strengths of accident- and encroachment-based approaches

    DOT National Transportation Integrated Search

    2001-09-01

    In two recent studies by Miaou, he proposed a method to estimate vehicle roadside encroachment rates using accident-based models. He further illustrated the use of this method to estimate roadside encroachment rates for rural two-lane undivided roads...

  14. Estimation of dioxin and furan elimination rates with a pharmacokinetic model.

    PubMed

    Van der Molen, G W; Kooijman, B A; Wittsiepe, J; Schrey, P; Flesch-Janys, D; Slob, W

    2000-01-01

    Quantitative description of the pharmacokinetics of dioxins and furans in humans can be of great help for the assessment of health risks posed by these compounds. To that end, the elimination rates of sixteen 2,3,7,8-chlorinated dibenzodioxins and dibenzofurans are estimated from both a longitudinal and a cross-sectional data set using the model of Van der Molen et al. [Van der Molen G.W., Kooijman S.A.L.M., and Slob W. A generic toxicokinetic model for persistent lipophilic compounds in humans: an application to TCDD. Fundam Appl Toxicol 1996: 31: 83-94]. In this model the elimination rate is given by the (constant) specific elimination rate multiplied by the ratio between the lipid weight of the liver and the total body lipid weight. Body composition, body weight and intake are assumed to depend on age. The elimination rate is, therefore, not constant. For 49-year-old males, the elimination rate estimates range between 0.03 per year for 1,2,3,6,7,8-hexaCDF and 1.0 per year for octaCDF. The elimination rates of the most toxic congeners, 2,3,7,8-tetraCDD, 1,2,3,7,8-pentaCDD, and 2,3,4,7,8-pentaCDF, were estimated at 0.09, 0.06, and 0.07, respectively, based on the cross-sectional data, and 0.11, 0.09, and 0.09 based on the longitudinal data. The elimination rates of dioxins decrease with age by between 0.0011 per year for 1,2,3,6,7,8-hexaCDD and 0.0035 per year for 1,2,3,4,6,7,8-heptaCDD. For furans the average decrease is 0.0033 per year. The elimination rates were estimated both from a longitudinal and a cross-sectional data set, and agreed quite well with each other after taking account of historical changes in average intake levels.
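
    The model's central relation is simple enough to state in code. The sketch below (with assumed lipid masses and specific rate, not values from the paper) computes the age-dependent elimination rate as the constant specific elimination rate scaled by the liver-to-total-body lipid ratio, and the resulting first-order half-life.

```python
import numpy as np

def elimination_rate(k_specific, liver_lipid_kg, body_lipid_kg):
    """Elimination rate (1/yr) in a Van der Molen-type model: a constant
    specific elimination rate times the liver-to-total-body lipid ratio.
    Because body composition changes with age, so does the rate."""
    return k_specific * liver_lipid_kg / body_lipid_kg

# Assumed illustrative values (not taken from the paper)
k_specific = 1.5          # specific elimination rate, 1/yr
liver_lipid = 0.9         # kg lipid in liver
body_lipid = 15.0         # kg total body lipid

k = elimination_rate(k_specific, liver_lipid, body_lipid)
print(f"elimination rate: {k:.3f} per year, "
      f"half-life: {np.log(2) / k:.1f} years")
```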

  15. Does overall reinforcer rate affect discrimination of time-based contingencies?

    PubMed

    Cowie, Sarah; Davison, Michael; Blumhardt, Luca; Elliffe, Douglas

    2016-05-01

    Overall reinforcer rate appears to affect choice. The mechanism for such an effect is uncertain, but may relate to reinforcer rate changing the discrimination of the relation between stimuli and reinforcers. We assessed whether a quantitative model based on a stimulus-control approach could be used to account for the effects of overall reinforcer rate on choice under changing time-based contingencies. On a two-key concurrent schedule, the likely availability of a reinforcer reversed when a fixed time had elapsed since the last reinforcer, and the overall reinforcer rate was varied across conditions. Changes in the overall reinforcer rate produced a change in response bias, and some indication of a change in discrimination. These changes in bias and discrimination always occurred quickly, usually within the first session of a condition. The stimulus-control approach provided an excellent account of the data, suggesting that changes in overall reinforcer rate affect choice because they alter the frequency of reinforcers obtained at different times, or in different stimulus contexts, and thus change the discriminated relation between stimuli and reinforcers. These findings support the notion that temporal and spatial discriminations can be understood in terms of discrimination of reinforcers across time and space.

  16. Unprecedented rates of land-use transformation in modeled climate change mitigation pathways

    NASA Astrophysics Data System (ADS)

    Turner, P. A.; Field, C. B.; Lobell, D. B.; Sanchez, D.; Mach, K. J.

    2017-12-01

    Integrated assessment models (IAMs) generate climate change mitigation scenarios consistent with global temperature targets. To limit warming to 2°, stylized cost-effective mitigation pathways rely on extensive deployments of carbon dioxide (CO2) removal (CDR) technologies, including multi-gigatonne yearly carbon removal from the atmosphere through bioenergy with carbon capture and storage (BECCS) and afforestation/reforestation. These assumed CDR deployments keep ambitious temperature limits in reach, but the associated rates of land-use transformation have not been evaluated. For IAM scenarios from the IPCC Fifth Assessment Report, we compare rates of modeled land-use conversion to recent observed commodity crop expansions. In scenarios with a likely chance of limiting warming to 2° in 2100, the rate of energy cropland expansion supporting BECCS exceeds past commodity crop rates by several fold. In some cases, mitigation scenarios include abrupt reversal of deforestation, paired with massive afforestation/reforestation. Specifically, energy cropland in <2° scenarios expands, on average, by 8.2 Mha yr^-1 and 11.7% p.a. across scenarios. This rate exceeds, by more than 3-fold, the observed expansion of soybean, the most rapidly expanding commodity crop. If energy cropland instead increases at rates equal to recent soybean and oil palm expansions, the scale of CO2 removal possible with BECCS is 2.6- to 10-fold lower, respectively, than the deployments <2° IAM scenarios rely upon in 2100. IAM mitigation pathways may favor multi-gigatonne biomass-based CDR given undervalued sociopolitical and techno-economic deployment barriers. Heroic modeled rates of land-use transformation imply that large-scale biomass-based CDR is not an easy solution to the climate challenge.

  17. A global reference for caesarean section rates (C-Model): a multicountry cross-sectional study.

    PubMed

    Souza, J P; Betran, A P; Dumont, A; de Mucio, B; Gibbs Pickens, C M; Deneux-Tharaux, C; Ortiz-Panozo, E; Sullivan, E; Ota, E; Togoobaatar, G; Carroli, G; Knight, H; Zhang, J; Cecatti, J G; Vogel, J P; Jayaratne, K; Leal, M C; Gissler, M; Morisaki, N; Lack, N; Oladapo, O T; Tunçalp, Ö; Lumbiganon, P; Mori, R; Quintana, S; Costa Passos, A D; Marcolin, A C; Zongo, A; Blondel, B; Hernández, B; Hogue, C J; Prunet, C; Landman, C; Ochir, C; Cuesta, C; Pileggi-Castro, C; Walker, D; Alves, D; Abalos, E; Moises, Ecd; Vieira, E M; Duarte, G; Perdona, G; Gurol-Urganci, I; Takahiko, K; Moscovici, L; Campodonico, L; Oliveira-Ciabati, L; Laopaiboon, M; Danansuriya, M; Nakamura-Pereira, M; Costa, M L; Torloni, M R; Kramer, M R; Borges, P; Olkhanud, P B; Pérez-Cuevas, R; Agampodi, S B; Mittal, S; Serruya, S; Bataglia, V; Li, Z; Temmerman, M; Gülmezoglu, A M

    2016-02-01

    To generate a global reference for caesarean section (CS) rates at health facilities. Cross-sectional study. Health facilities from 43 countries. Thirty-eight thousand three hundred and twenty-four women giving birth from 22 countries for model building and 10,045,875 women giving birth from 43 countries for model testing. We hypothesised that mathematical models could determine the relationship between clinical-obstetric characteristics and CS. These models generated probabilities of CS that could be compared with the observed CS rates. We devised a three-step approach to generate the global benchmark of CS rates at health facilities: creation of a multi-country reference population, building mathematical models, and testing these models. Area under the ROC curves, diagnostic odds ratio, expected CS rate, observed CS rate. According to the different versions of the model, areas under the ROC curves suggested a good discriminatory capacity of C-Model, with summary estimates ranging from 0.832 to 0.844. The C-Model was able to generate expected CS rates adjusted for the case-mix of the obstetric population. We have also prepared an e-calculator to facilitate use of C-Model (www.who.int/reproductivehealth/publications/maternal_perinatal_health/c-model/en/). This article describes the development of a global reference for CS rates. Based on maternal characteristics, this tool was able to generate an individualised expected CS rate for health facilities or groups of health facilities. With C-Model, obstetric teams, health system managers, health facilities, health insurance companies, and governments can produce a customised reference CS rate for assessing use (and overuse) of CS. The C-Model provides a customised benchmark for caesarean section rates in health facilities and systems.

  18. Evidence-Based Adequacy Model for School Funding: Success Rates in Illinois Schools that Meet Targets

    ERIC Educational Resources Information Center

    Murphy, Gregory J.

    2012-01-01

    This quantitative study explores the 2010 recommendation of the Educational Funding Advisory Board to consider the Evidence-Based Adequacy model of school funding in Illinois. This school funding model identifies and costs research-based practices necessary in a prototypical school and sets funding levels based upon those practices. This study…

  19. Rate based failure detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

    This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.

  20. Equivalence of interest rate models and lattice gases.

    PubMed

    Pirjol, Dan

    2012-04-01

    We consider the class of short rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure, r(t) = a(t)exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t1,t2) = -Cov[x(t1),x(t2)]. We consider in some detail the Black-Karasinski model with x(t) as an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x,y) = -α(e^(-γ|x-y|) - e^(-γ(x+y))). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.

  1. Equivalence of interest rate models and lattice gases

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan

    2012-04-01

    We consider the class of short rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure, r(t) = a(t)exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t1,t2) = -Cov[x(t1),x(t2)]. We consider in some detail the Black-Karasinski model with x(t) as an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x,y) = -α(e^(-γ|x-y|) - e^(-γ(x+y))). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.
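
    The identification of the two-body potential with the negative covariance can be checked numerically: for an Ornstein-Uhlenbeck process dx = -γx dt + σ dW started at x(0) = 0, Cov[x(t1),x(t2)] = (σ^2/2γ)(e^(-γ|t2-t1|) - e^(-γ(t1+t2))), matching the quoted V(x,y) with α = σ^2/(2γ). A minimal Monte Carlo sketch with assumed parameter values:

        import numpy as np

        # Monte Carlo check of the covariance that plays the role of the
        # two-body potential, V(t1,t2) = -Cov[x(t1), x(t2)], for an
        # Ornstein-Uhlenbeck process started at x(0) = 0 (parameters assumed).
        gamma, sigma, dt, n_steps, n_paths = 1.0, 0.5, 1e-3, 2000, 20000
        rng = np.random.default_rng(1)

        x = np.zeros(n_paths)
        snapshots = {}
        for step in range(1, n_steps + 1):
            x += -gamma * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
            if step in (500, 1500):                # record x(t1), x(t2)
                snapshots[step] = x.copy()

        t1, t2 = 0.5, 1.5
        cov_mc = np.cov(snapshots[500], snapshots[1500])[0, 1]
        alpha = sigma**2 / (2 * gamma)
        cov_exact = alpha * (np.exp(-gamma * (t2 - t1)) - np.exp(-gamma * (t2 + t1)))
        print(cov_mc, cov_exact)   # should agree to Monte Carlo accuracy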

  2. Universal Rate Model Selector: A Method to Quickly Find the Best-Fit Kinetic Rate Model for an Experimental Rate Profile

    DTIC Science & Technology

    2017-08-01

    …processes to find a kinetic rate model that provides a high degree of correlation with experimental data. Furthermore, the use of kinetic rate…

  3. A new lattice hydrodynamic model based on control method considering the flux change rate and delay feedback signal

    NASA Astrophysics Data System (ADS)

    Qin, Shunda; Ge, Hongxia; Cheng, Rongjun

    2018-02-01

    In this paper, a new lattice hydrodynamic model is proposed by taking the delay feedback and flux change rate effects into account in a single lane. The linear stability condition of the new model is derived via control theory. Using nonlinear analysis, the mKdV equation near the critical point is deduced to describe the traffic congestion. Numerical simulations are carried out to demonstrate the advantage of the new model in suppressing traffic jams when the flux change rate effect is considered in the delay feedback model.

  4. Shilling attack detection for recommender systems based on credibility of group users and rating time series.

    PubMed

    Zhou, Wei; Wen, Junhao; Qu, Qiang; Zeng, Jun; Cheng, Tian

    2018-01-01

    Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is therefore of great significance for maintaining the fairness and sustainability of recommender systems. Current studies suffer from poor universality of algorithms, difficulty in selecting user profile attributes, and the lack of an optimization mechanism. In this paper, a shilling behaviour detection structure based on abnormal group user discovery and rating time-series analysis is proposed. This paper adds to the current understanding in the field by studying in depth a credibility evaluation model based on the rating prediction model, used to derive proximity-based predictions. A method for detecting suspicious ratings based on suspicious time windows and target item analysis is proposed. Suspicious rating time segments are determined by constructing a time series, and data streams of the rating items are examined and suspicious rating segments are checked. To analyse the features of shilling attacks through group users' credibility, an abnormal group user discovery method based on time series and time windows is proposed. Standard testing datasets are used to verify the effect of the proposed method.

  5. Shilling attack detection for recommender systems based on credibility of group users and rating time series

    PubMed Central

    Wen, Junhao; Qu, Qiang; Zeng, Jun; Cheng, Tian

    2018-01-01

    Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is therefore of great significance for maintaining the fairness and sustainability of recommender systems. Current studies suffer from poor universality of algorithms, difficulty in selecting user profile attributes, and the lack of an optimization mechanism. In this paper, a shilling behaviour detection structure based on abnormal group user discovery and rating time-series analysis is proposed. This paper adds to the current understanding in the field by studying in depth a credibility evaluation model based on the rating prediction model, used to derive proximity-based predictions. A method for detecting suspicious ratings based on suspicious time windows and target item analysis is proposed. Suspicious rating time segments are determined by constructing a time series, and data streams of the rating items are examined and suspicious rating segments are checked. To analyse the features of shilling attacks through group users' credibility, an abnormal group user discovery method based on time series and time windows is proposed. Standard testing datasets are used to verify the effect of the proposed method. PMID:29742134

  6. A 1DVAR-based snowfall rate retrieval algorithm for passive microwave radiometers

    NASA Astrophysics Data System (ADS)

    Meng, Huan; Dong, Jun; Ferraro, Ralph; Yan, Banghua; Zhao, Limin; Kongoli, Cezar; Wang, Nai-Yu; Zavodsky, Bradley

    2017-06-01

    Snowfall rate retrieval from spaceborne passive microwave (PMW) radiometers has gained momentum in recent years. PMW can be so utilized because of its ability to sense in-cloud precipitation. A physically based, overland snowfall rate (SFR) algorithm has been developed using measurements from the Advanced Microwave Sounding Unit-A/Microwave Humidity Sounder sensor pair and the Advanced Technology Microwave Sounder. Currently, these instruments are aboard five polar-orbiting satellites, namely, NOAA-18, NOAA-19, Metop-A, Metop-B, and Suomi-NPP. The SFR algorithm relies on a separate snowfall detection algorithm that is composed of a satellite-based statistical model and a set of numerical weather prediction model-based filters. There are four components in the SFR algorithm itself: cloud properties retrieval, computation of ice particle terminal velocity, ice water content adjustment, and the determination of snowfall rate. The retrieval of cloud properties is the foundation of the algorithm and is accomplished using a one-dimensional variational (1DVAR) model. An existing model is adopted to derive ice particle terminal velocity. Since no measurement of cloud ice distribution is available when SFR is retrieved in near real time, such distribution is implicitly assumed by deriving an empirical function that adjusts retrieved SFR toward radar snowfall estimates. Finally, SFR is determined numerically from a complex integral. The algorithm has been validated against both radar and ground observations of snowfall events from the contiguous United States with satisfactory results. Currently, the SFR product is operationally generated at the National Oceanic and Atmospheric Administration and can be obtained from that organization.

  7. Forecasting the mortality rates using Lee-Carter model and Heligman-Pollard model

    NASA Astrophysics Data System (ADS)

    Ibrahim, R. I.; Ngataman, N.; Abrisam, W. N. A. Wan Mohd

    2017-09-01

    Improvement in life expectancies has driven further declines in mortality. The sustained reduction in mortality rates and its systematic underestimation has been attracting significant interest from researchers in recent years because of its potential impact on population size and structure, social security systems, and (from an actuarial perspective) the life insurance and pensions industry worldwide. Among all forecasting methods, the Lee-Carter model has been widely accepted by the actuarial community, and the Heligman-Pollard model has been widely used by researchers in modelling and forecasting future mortality. Therefore, this paper focuses only on the Lee-Carter and Heligman-Pollard models. The main objective of this paper is to investigate how accurately these two models perform on Malaysian data. Since these models involve nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 8.0 (MATLAB 8.0) software is used to estimate the parameters of the models. An Autoregressive Integrated Moving Average (ARIMA) procedure is applied to forecast the parameters of both models, and the forecasted mortality rates are obtained from the forecasted parameter values. To investigate the accuracy of the estimation, the forecasted results are compared against actual mortality data. The results indicate that both models provide better results for the male population. However, for the elderly female population, the Heligman-Pollard model tends to underestimate the mortality rates while the Lee-Carter model tends to overestimate them.
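
    For readers unfamiliar with the Lee-Carter structure, log m(x,t) = a_x + b_x k_t, the classical fit extracts a_x as row means and (b_x, k_t) from the leading singular vectors, with k_t then forecast as a random walk with drift. The sketch below uses synthetic rates as a stand-in for the Malaysian data (Python assumed; not the paper's MATLAB implementation):

        import numpy as np

        # Minimal Lee-Carter fit via SVD. `m` is a hypothetical age-by-year
        # matrix of central death rates m(x,t); synthetic values stand in here.
        # Model: log m(x,t) = a_x + b_x * k_t, with the usual constraint sum(b)=1.
        rng = np.random.default_rng(2)
        ages, years = 20, 30
        m = np.exp(-8 + 0.09 * np.arange(ages)[:, None]
                   - 0.02 * np.arange(years)[None, :]) * rng.lognormal(0, 0.02, (ages, years))

        A = np.log(m)
        a = A.mean(axis=1)                      # a_x: average log-rate by age
        U, s, Vt = np.linalg.svd(A - a[:, None], full_matrices=False)
        b = U[:, 0] / U[:, 0].sum()             # b_x: age response, normalised
        k = s[0] * Vt[0, :] * U[:, 0].sum()     # k_t: mortality index

        # Forecast k_t as a random walk with drift (the usual ARIMA(0,1,0) choice)
        drift = np.diff(k).mean()
        k_forecast = k[-1] + drift * np.arange(1, 11)
        log_m_forecast = a[:, None] + b[:, None] * k_forecast[None, :]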

  8. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    NASA Astrophysics Data System (ADS)

    Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno

    2017-03-01

    This paper presents statistical analyses of rock engineering properties and the measured penetration rate of a tunnel boring machine (TBM) based on data from an actual project. The aim of this study is to analyze the influence of rock engineering properties, including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness, on the TBM rate of penetration (ROP). Four statistical regression models (two linear and two nonlinear) are built to predict the ROP of the TBM. Finally, a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimates and can be applied to predict TBM performance. The R-squared value (R^2) of the fuzzy logic model is the highest, at 0.714, compared with 0.667 for the runner-up, the multiple-variable nonlinear regression model.

  9. Modeling a SI epidemic with stochastic transmission: hyperbolic incidence rate.

    PubMed

    Christen, Alejandra; Maulén-Yañez, M Angélica; González-Olivares, Eduardo; Curé, Michel

    2018-03-01

    In this paper a stochastic susceptible-infectious (SI) epidemic model is analysed, based on the model proposed by Roberts and Saha (Appl Math Lett 12: 37-41, 1999) and considering a hyperbolic-type nonlinear incidence rate. Assuming the proportion of infected population varies with time, our new model is described by an ordinary differential equation that is analogous to the equation describing the double Allee effect. The limit of the solution of this equation (the deterministic model) is found as time tends to infinity. Then, the asymptotic behaviour of a stochastic fluctuation due to environmental variation in the coefficient of disease transmission is studied. Thus a stochastic differential equation (SDE) is obtained and the existence of a unique solution is proved. Moreover, the SDE is analysed through the associated Fokker-Planck equation to obtain the invariant measure when the proportion of the infected population reaches steady state. An explicit expression for the invariant measure is found and we study some of its properties. The long-time behaviour of the deterministic and stochastic models is compared by simulations. To our knowledge this incidence rate has not previously been used for this type of epidemic model.
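
    A minimal Euler-Maruyama simulation of an SI equation of this flavour is sketched below. The specific hyperbolic incidence used here, βI(1 - I)/(1 + αI) with environmental noise entering through β, and all parameter values are assumptions for illustration, not the forms from Roberts and Saha or this paper:

        import numpy as np

        # Euler-Maruyama sketch of an SI model with a hyperbolic-type incidence
        # rate and environmental noise on the transmission coefficient.
        # Functional forms and parameters are illustrative placeholders.
        beta, alpha, sigma = 0.8, 2.0, 0.3
        dt, T = 1e-3, 50.0
        n = int(T / dt)
        rng = np.random.default_rng(3)

        I = 0.05                      # proportion infected
        path = np.empty(n)
        for j in range(n):
            drift = beta * I * (1 - I) / (1 + alpha * I)
            diff = sigma * I * (1 - I) / (1 + alpha * I)   # noise enters via beta
            I += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
            I = min(max(I, 0.0), 1.0)  # keep the proportion in [0, 1]
            path[j] = I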

  10. [Establishing and applying of autoregressive integrated moving average model to predict the incidence rate of dysentery in Shanghai].

    PubMed

    Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An

    2010-01-01

    To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated through the unconditional least squares method, the structure was determined according to criteria of residual un-correlation and conclusion, and goodness-of-fit was assessed through the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008 and to evaluate the validity of the model by comparing the predicted incidence rate with the actual one. The incidence rate of dysentery in 2010 was predicted by the ARIMA model based on the incidence rate from January 1990 to June 2009. The model ARIMA(1,1,1)(0,1,2)_12 fitted the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), moving average coefficient (MA1 = 0.806), and seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131 respectively, and the prediction error was white noise. The fitted model was (1 - 0.443B)(1 - B)(1 - B^12)Z_t = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^(2×12))μ_t. The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rate from January 1990 to June 2009, was 9.390 per 100,000. The ARIMA model can be used to fit changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a prediction model of high precision for short-term forecasting.
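
    A model of the reported structure can be reproduced with standard tooling; the sketch below fits a seasonal ARIMA(1,1,1)(0,1,2)_12 with statsmodels on a synthetic monthly series standing in for the Shanghai data (the original parameters were estimated with unconditional least squares, so fitted values will differ):

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # Fitting a seasonal ARIMA of the reported form, ARIMA(1,1,1)(0,1,2)_12,
        # to a monthly incidence series (synthetic stand-in for the real data).
        rng = np.random.default_rng(4)
        months = np.arange(234)                      # Jan 1990 - Jun 2009
        y = (10 + 3 * np.sin(2 * np.pi * months / 12)
             - 0.02 * months + rng.normal(0, 0.5, months.size))

        model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(0, 1, 2, 12))
        result = model.fit(disp=False)
        print(result.aic)                            # compare candidate structures
        forecast = result.get_forecast(steps=18)     # through end of 2010
        print(forecast.predicted_mean[-12:].mean())  # mean predicted 2010 rate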

  11. 77 FR 24198 - Notice of Revocation of Market-Based Rate Authority and Termination of Market-Based Rate Tariffs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-23

    DEPARTMENT OF ENERGY, Federal Energy Regulatory Commission. Notice of Revocation of Market-Based Rate Authority and Termination of Market-Based Rate Tariffs. Docket Nos.: Aleph One, Inc (ER04-686-000); Alpha Domestic Power Trading, LLC (ER08-14-000); American Power Exchange, Inc (ER94-1578-000); CBA…

  12. A frictional population model of seismicity rate change

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.; Cocco, M.; Belardinelli, M.E.

    2005-01-01

    We study models of seismicity rate changes caused by the application of a static stress perturbation to a population of faults and discuss our results with respect to the model proposed by Dieterich (1994). These models assume a distribution of nucleation sites (e.g., faults) obeying rate-state frictional relations that fails at a constant rate under tectonic loading alone, and predict that a positive static stress step at time t0 will cause an immediately increased seismicity rate that decays according to Omori's law. We show one way in which the Dieterich model may be constructed from simple general ideas, illustrated using numerically computed synthetic seismicity and a mathematical formulation. We show that the seismicity rate changes predicted by these models (1) depend on the particular relationship between the clock-advanced failure and fault maturity, (2) are largest for the faults closest to failure at t0, (3) depend strongly on which state evolution law the faults obey, and (4) are insensitive to some types of population heterogeneity. We also find that if individual faults fail repeatedly and populations are finite, at timescales much longer than typical aftershock durations, quiescence follows a seismicity rate increase regardless of the specific frictional relations. For the examined models the quiescence duration is comparable to the ratio of stress change to stressing rate, Δτ/τ̇; it occurs after a time comparable to the average recurrence interval of the individual faults in the population and repeats in the absence of any new load perturbations. This simple model may partly explain observations of repeated clustering of earthquakes. Copyright 2005 by the American Geophysical Union.
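
    For reference, the Dieterich (1994) model these population simulations are compared against has a closed-form rate response to a positive static stress step; a short sketch with illustrative parameter values:

        import numpy as np

        # Dieterich (1994) closed form for the seismicity rate after a static
        # stress step, the model the abstract builds on:
        #   R(t)/r = 1 / (1 + (exp(-dtau/(A*sigma)) - 1) * exp(-t/t_a)),
        # with aftershock duration t_a = A*sigma / stressing_rate.
        # Parameter values below are illustrative.
        A_sigma = 0.05            # MPa (constitutive parameter times normal stress)
        dtau = 0.1                # MPa, positive static stress step
        stressing_rate = 1e-4     # MPa/yr, background tectonic loading
        t_a = A_sigma / stressing_rate

        t = np.logspace(-3, 3, 200)                  # years after the step
        rate_ratio = 1.0 / (1.0 + (np.exp(-dtau / A_sigma) - 1.0) * np.exp(-t / t_a))
        print(rate_ratio[0], rate_ratio[-1])  # ~exp(dtau/A_sigma) early, ->1 for t >> t_a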

  13. A measurement-based performability model for a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.

  14. Noise models for low counting rate coherent diffraction imaging.

    PubMed

    Godard, Pierre; Allain, Marc; Chamard, Virginie; Rodenburg, John

    2012-11-05

    Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high-quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte Carlo studies, we show that any physical interpretation drawn from a CDI iterative technique requires a detailed understanding of the relationship between the noise model and the inversion method used. We observe that iterative algorithms often implicitly assume a noise model. For low counting rates, each noise model behaves differently. Moreover, the optimization strategy used introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.

  15. A sediment graph model based on SCS-CN method

    NASA Astrophysics Data System (ADS)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on coupling of popular and extensively used methods, viz., the Nash model based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the Power law. These models vary in their complexity and this paper tests their performance using data of the Nagwan watershed (area = 92.46 km^2) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the Power law, β, is more sensitive than other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
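
    The SCS-CN component of these models is the standard curve-number runoff relation, which is compact enough to state in full (the IUSG convolution and Power-law sediment routing are not reproduced here); a sketch in Python:

        def scs_cn_runoff(p_mm: float, cn: float, lam: float = 0.2) -> float:
            """SCS-CN direct runoff depth (mm) for storm rainfall p_mm.

            Standard curve-number relations (metric form):
                S  = 25400 / CN - 254            # potential maximum retention, mm
                Ia = lam * S                     # initial abstraction (lam = 0.2 classic)
                Q  = (P - Ia)**2 / (P - Ia + S)  for P > Ia, else 0
            """
            s = 25400.0 / cn - 254.0
            ia = lam * s
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm - ia + s)

        # Example: 60 mm storm on a watershed with CN = 75
        print(scs_cn_runoff(60.0, 75.0))   # ~14.5 mm of direct runoff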

  16. Modeling the Declining Positivity Rates for Human Immunodeficiency Virus Testing in New York State.

    PubMed

    Martin, Erika G; MacDonald, Roderick H; Smith, Lou C; Gordon, Daniel E; Lu, Tao; O'Connell, Daniel A

    2015-01-01

    New York health care providers have experienced declining percentages of positive human immunodeficiency virus (HIV) tests among patients. Furthermore, observed positivity rates are lower than expected on the basis of the national estimate that one-fifth of HIV-infected residents are unaware of their infection. We used mathematical modeling to evaluate whether this decline could be a result of declining numbers of HIV-infected persons who are unaware of their infection, a quantity that cannot be measured directly. A stock-and-flow mathematical model of HIV incidence, testing, and diagnosis was developed. The model includes stocks for uninfected, infected and unaware (in 4 disease stages), and diagnosed individuals. Inputs came from published literature and time series (2006-2009) for estimated new infections, newly diagnosed HIV cases, living diagnosed cases, mortality, and diagnosis rates in New York. Primary model outcomes were the percentage of HIV-infected persons unaware of their infection and the percentage of HIV tests with a positive result (HIV positivity rate). In the base case, the estimated percentage of unaware HIV-infected persons declined from 14.2% in 2006 (range, 11.9%-16.5%) to 11.8% in 2010 (range, 9.9%-13.1%). The HIV positivity rate, assuming testing occurred independent of risk, was 0.12% in 2006 (range, 0.11%-0.15%) and 0.11% in 2010 (range, 0.10%-0.13%). The observed HIV positivity rate was more than 4 times the expected positivity rate based on the model. HIV test positivity is a readily available indicator, but it cannot distinguish the causes of underlying changes. Findings suggest that the percentage of unaware HIV-infected New Yorkers is lower than the national estimate and that the observed HIV test positivity rate is greater than would be expected if infected and uninfected individuals tested at the same rate, indicating that testing efforts are appropriately targeting undiagnosed cases.
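
    The stock-and-flow structure described above reduces, in its simplest form, to bookkeeping of inflows and outflows per stock. The sketch below collapses the four disease stages into a single unaware stock and uses invented rates, so it illustrates only the mechanics, not the New York calibration:

        import numpy as np

        # Minimal stock-and-flow sketch in the spirit of the abstract's model:
        # stocks for infected-unaware and diagnosed persons, with inflow of new
        # infections and outflows via diagnosis and mortality. All rates are
        # illustrative placeholders, not the published New York inputs.
        years = np.arange(2006, 2011)
        new_infections = 4200.0      # per year (assumed constant here)
        diagnosis_rate = 0.35        # fraction of unaware diagnosed per year
        mortality_rate = 0.02        # per year, applied to both stocks

        unaware, diagnosed = 18000.0, 110000.0
        for _ in years:
            diagnoses = diagnosis_rate * unaware
            unaware += new_infections - diagnoses - mortality_rate * unaware
            diagnosed += diagnoses - mortality_rate * diagnosed
            pct_unaware = unaware / (unaware + diagnosed)
            print(f"unaware share: {pct_unaware:.1%}")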

  17. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1988-01-01

    Described is a measurement-based performability model based on error and resource-usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.

  18. Hybrid attacks on model-based social recommender systems

    NASA Astrophysics Data System (ADS)

    Yu, Junliang; Gao, Min; Rong, Wenge; Li, Wentao; Xiong, Qingyu; Wen, Junhao

    2017-10-01

    With the growing popularity of online social platforms, social network based approaches to recommendation have emerged. However, because of the open nature of rating systems and social networks, social recommender systems are susceptible to malicious attacks. In this paper, we present a novel attack, which inherits characteristics of the rating attack and the relation attack, and term it the hybrid attack. Further, we explore the impact of the hybrid attack on model-based social recommender systems in multiple aspects. The experimental results show that the hybrid attack is more destructive than the rating attack in most cases. In addition, users and items with fewer ratings are influenced more when attacked. Last but not least, the findings suggest that spammers do not depend on feedback links from normal users to become more powerful; unilateral links alone can make the hybrid attack effective. Since unilateral links are much cheaper, the hybrid attack will be a great threat to model-based social recommender systems.

  19. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).

  20. Triple-α reaction rate constrained by stellar evolution models

    NASA Astrophysics Data System (ADS)

    Suda, Takuma; Hirschi, Raphael; Fujimoto, Masayuki Y.

    2012-11-01

    We investigate the quantitative constraint on the triple-α reaction rate based on stellar evolution theory, motivated by the recent significant revision of the rate proposed by nuclear physics calculations. Targeted stellar models were computed in order to investigate the impact of that rate in the mass range 0.8 ≤ M/Msolar ≤ 25 and in the metallicity range between Z = 0 and Z = 0.02. The revised rate has a significant impact on the evolution of low- and intermediate-mass stars, while its influence on the evolution of massive stars (M > 10 Msolar) is minimal. We find that employing the revised rate suppresses helium shell flashes on the AGB phase for stars in the initial mass range 0.8 ≤ M/Msolar ≤ 6, which contradicts what is observed. The absence of helium shell flashes is due to the weak temperature dependence of the revised triple-α reaction cross section at the temperatures involved. Our models suggest that the temperature dependence of the cross section should have at least ν > 10 at T = 1-1.2×10^8 K, where the cross section is proportional to T^ν. We also derive the helium ignition curve to estimate the maximum cross section that retains the low-mass first red giants. The semi-analytically derived ignition curves suggest that the reaction rate should be less than ~10^-29 cm^6 s^-1 mole^-2 at ~10^7.8 K, which is about three orders of magnitude larger than that of the NACRE compilation.
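
    The constraint is phrased through the local temperature exponent ν = d ln(rate)/d ln(T). Given any tabulated rate, ν can be estimated numerically; a toy power-law rate is used below purely to show the mechanics (values are neither NACRE nor the revised rate):

        import numpy as np

        # Numerical estimate of the local temperature exponent
        #   nu = d ln(rate) / d ln(T)
        # from a tabulated rate(T). A toy power-law rate is used for illustration.
        T = np.logspace(7.9, 8.2, 50)                 # K, around 1e8 K
        rate = 1e-40 * (T / 1e8) ** 20.0              # toy rate, not real values
        nu = np.gradient(np.log(rate), np.log(T))
        print(nu[len(nu) // 2])                       # ~20 for this toy input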

  1. Predation rates by North Sea cod (Gadus morhua) - Predictions from models on gastric evacuation and bioenergetics

    USGS Publications Warehouse

    Hansson, S.; Rudstam, L. G.; Kitchell, J.F.; Hilden, M.; Johnson, B.L.; Peppard, P.E.

    1996-01-01

    We compared four different methods for estimating predation rates by North Sea cod (Gadus morhua). Three estimates, based on gastric evacuation rates, came from an ICES multispecies working group and the fourth from a bioenergetics model. The bioenergetics model was developed from a review of literature on cod physiology. The three gastric evacuation rate models produced very different prey consumption estimates for small (2 kg) fish. For most size and age classes, the bioenergetics model predicted food consumption rates intermediate to those predicted by the gastric evacuation models. Using the standard ICES model and the average population abundance and age structure for 1974-1989, annual prey consumption by the North Sea cod population (age greater than or equal to 1) was 840 kilotons. The other two evacuation rate models produced estimates of 1020 and 1640 kilotons, respectively. The bioenergetics model estimate was 1420 kilotons. The major differences between models were due to consumption rate estimates for younger age groups of cod. (C) 1996 International Council for the Exploration of the Sea

  2. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: Comparison and implementation.

    PubMed

    Augustin, Moritz; Ladenbauer, Josef; Baumann, Fabian; Obermayer, Klaus

    2017-06-01

    The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. In particular, the cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, when input fluctuations are not too strong and fast, however, the best performing model is based on the spectral decomposition. The low-dimensional models also well reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low but the implementation complexity differs between the different model variants. Therefore we have made available implementations that allow one to numerically integrate the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation in efficient ways for
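
    The underlying microscopic model being reduced is an adaptive (exponential) integrate-and-fire neuron. A minimal forward-Euler simulation of a single such neuron is sketched below; the parameters follow common aEIF conventions but are otherwise illustrative, and none of the paper's reduction machinery is reproduced:

        import numpy as np

        # Forward-Euler simulation of an adaptive exponential integrate-and-fire
        # neuron, the kind of model the reduced spike rate descriptions
        # approximate. Parameter values are illustrative.
        C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0   # pF, nS, mV, mV, mV
        a, b, tau_w = 2.0, 60.0, 200.0                       # nS, pA, ms
        V_reset, V_spike = -58.0, 0.0                        # mV

        dt, T = 0.05, 1000.0                                 # ms
        n = int(T / dt)
        rng = np.random.default_rng(5)
        I = 500.0 + 150.0 * rng.standard_normal(n)           # noisy input, pA

        V, w, spikes = EL, 0.0, 0
        for k in range(n):
            dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I[k]) / C
            dw = (a * (V - EL) - w) / tau_w
            V += dt * dV
            w += dt * dw
            if V >= V_spike:                 # spike: reset and increment adaptation
                V = V_reset
                w += b
                spikes += 1
        print(f"rate: {spikes / (T / 1e3):.1f} Hz")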

  3. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: Comparison and implementation

    PubMed Central

    Baumann, Fabian; Obermayer, Klaus

    2017-01-01

    The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. In particular, the cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, when input fluctuations are not too strong and fast, however, the best performing model is based on the spectral decomposition. The low-dimensional models also well reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low but the implementation complexity differs between the different model variants. Therefore we have made available implementations that allow one to numerically integrate the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation in efficient ways for

  4. Tailoring drug release rates in hydrogel-based therapeutic delivery applications using graphene oxide

    PubMed Central

    Zhi, Z. L.; Craster, R. V.

    2018-01-01

    Graphene oxide (GO) is increasingly used for controlling mass diffusion in hydrogel-based drug delivery applications. On the macro-scale, the density of GO in the hydrogel is a critical parameter for modulating drug release. Here, we investigate the diffusion of a peptide drug through a network of GO membranes and GO-embedded hydrogels, modelled as porous matrices resembling both laminated and 'house of cards' structures. Our experiments use a therapeutic peptide and show a tunable nonlinear dependence of the peptide concentration upon time. We establish models using numerical simulations with a diffusion equation accounting for the photo-thermal degradation of fluorophores, and an effective percolation model, to simulate the experimental data. The modelling yields an interpretation of the control of drug diffusion through GO membranes, which is extended to the diffusion of the peptide in GO-embedded agarose hydrogels. Varying the density of micron-sized GO flakes allows fine control of the drug diffusion. We further show that both GO density and size influence the drug release rate. The ability to tune the density of hydrogel-like GO membranes to control drug release rates has exciting implications, offering guidelines for tailoring release profiles in hydrogel-based therapeutic delivery applications. PMID:29445040
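
    At its simplest, drug transport through a GO-loaded slab can be caricatured with an explicit finite-difference solution of the 1-D diffusion equation, treating increased GO density as a reduced effective diffusivity. This illustrates only the general mechanism, not the paper's percolation-coupled model:

        import numpy as np

        # Explicit finite-difference sketch of 1-D drug diffusion through a
        # GO-loaded hydrogel slab. Increasing GO density is represented here
        # simply as a reduced effective diffusivity (an assumption made for
        # illustration; the paper couples this to a percolation description).
        L, nx = 1e-3, 101                 # slab thickness 1 mm, grid points
        D_eff = 1e-10                     # m^2/s, effective diffusivity with GO
        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / D_eff          # satisfies the explicit stability limit
        c = np.zeros(nx)

        released = []
        for step in range(20000):
            c[1:-1] += D_eff * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c[0], c[-1] = 1.0, 0.0        # reservoir on one face, sink on the other
            released.append(D_eff * (c[-2] - c[-1]) / dx)   # outgoing flux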

  5. Regression rate behaviors of HTPB-based propellant combinations for hybrid rocket motor

    NASA Astrophysics Data System (ADS)

    Sun, Xingliang; Tian, Hui; Li, Yuelong; Yu, Nanjia; Cai, Guobiao

    2016-02-01

    The purpose of this paper is to characterize the regression rate behavior of hybrid rocket motor propellant combinations, using hydrogen peroxide (HP), gaseous oxygen (GOX), and nitrous oxide (N2O) as oxidizers and hydroxyl-terminated polybutadiene (HTPB) as the base fuel. In order to carry out this research by experiment and simulation, a hybrid rocket motor test system and a numerical simulation model are established. A series of hybrid rocket motor firing tests is conducted burning different propellant combinations, and several of those are used as references for numerical simulations. The numerical simulation model is developed by combining the Navier-Stokes equations with a turbulence model, a one-step global reaction model, and a solid-gas coupling model. The distribution of the regression rate along the axis is determined by applying the simulation model to predict the combustion process and heat transfer inside the hybrid rocket motor. The time-space averaged regression rate shows good agreement between the numerical values and experimental data. The results indicate that the N2O/HTPB and GOX/HTPB propellant combinations have higher regression rates, and the enhancement effect of the latter is significant due to its higher flame temperature. Furthermore, the inclusion of aluminum (Al) and/or ammonium perchlorate (AP) in the grain does enhance the regression rate, mainly due to the additional energy released inside the chamber and heat feedback to the grain surface from aluminum combustion.

  6. Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive.

    PubMed

    Richardson, Magnus J E

    2007-08-01

    Integrate-and-fire models are mainstays of the study of single-neuron response properties and emergent states of recurrent networks of spiking neurons. They also provide an analytical base for perturbative approaches that treat important biological details, such as synaptic filtering, synaptic conductance increase, and voltage-activated currents. Steady-state firing rates of both linear and nonlinear integrate-and-fire models, receiving fluctuating synaptic drive, can be calculated from the time-independent Fokker-Planck equation. The dynamic firing-rate response is less easy to extract, even at the first-order level of a weak modulation of the model parameters, but is an important determinant of neuronal response and network stability. For the linear integrate-and-fire model the response to modulations of current-based synaptic drive can be written in terms of hypergeometric functions. For the nonlinear exponential and quadratic models no such analytical forms for the response are available. Here it is demonstrated that a rather simple numerical method can be used to obtain the steady-state and dynamic response for both linear and nonlinear models to parameter modulation in the presence of current-based or conductance-based synaptic fluctuations. To complement the full numerical solution, generalized analytical forms for the high-frequency response are provided. A special case is also identified--time-constant modulation--for which the response to an arbitrarily strong modulation can be calculated exactly.

  7. A Simple and Accurate Rate-Driven Infiltration Model

    NASA Astrophysics Data System (ADS)

    Cui, G.; Zhu, J.

    2017-12-01

    In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly nonlinear Richards equation or simply modeling with empirical parameters. RDIMOD employs the infiltration rate as model input to simulate the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolution of the wetting front, infiltration rate, and cumulative infiltration on any surface slope, including vertical and horizontal directions. Compared with results from the Richards equation for both vertical and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Taking into account its accuracy, capability, and computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
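
    The abstract does not reproduce RDIMOD's governing equation, so the sketch below shows only the general rate-driven idea under a crude piston-flow assumption: a prescribed infiltration rate i(t) advances a sharp wetting front as dz/dt = i(t)/(θs - θi). All functions and values are illustrative:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Rate-driven caricature (NOT the paper's RDIMOD equations): prescribe
        # the surface infiltration rate i(t) and advance a sharp wetting front
        # by simple piston flow, dz/dt = i(t) / (theta_s - theta_i).
        theta_s, theta_i = 0.45, 0.15          # saturated / initial water content

        def infiltration_rate(t):              # cm/h, decaying toward a steady rate
            return 1.0 + 4.0 * np.exp(-t / 2.0)

        sol = solve_ivp(lambda t, z: infiltration_rate(t) / (theta_s - theta_i),
                        t_span=(0.0, 12.0), y0=[0.0], dense_output=True)
        t = np.linspace(0, 12, 7)
        print(np.round(sol.sol(t)[0], 2))      # wetting front depth (cm) vs time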

  8. Models for financial crisis detection in Indonesia based on bank deposits, real exchange rate and terms of trade indicators

    NASA Astrophysics Data System (ADS)

    Sugiyanto; Zukhronah, Etik; Nur Aini, Anis

    2017-12-01

    Indonesia has faced financial crises several times, but the crisis that occurred in 1997 had a tremendous impact on the economy and national stability. During that crisis the rupiah exchange rate fell sharply against the dollar, so a financial crisis detection system is needed. Data on bank deposits, the real exchange rate, and terms-of-trade indicators are used in this paper. Data from January 1990 until December 2016 are used to form three-state models. Combinations of volatility and Markov switching models are used to model the data. The results suggest that the appropriate model for bank deposits and terms of trade is SWARCH(3,1), and for the real exchange rate is SWARCH(3,2).

  9. An agent-based model of tsetse fly response to seasonal climatic drivers: Assessing the impact on sleeping sickness transmission rates.

    PubMed

    Alderton, Simon; Macleod, Ewan T; Anderson, Neil E; Palmer, Gwen; Machila, Noreen; Simuunza, Martin; Welburn, Susan C; Atkinson, Peter M

    2018-02-01

    This paper presents the development of an agent-based model (ABM) to incorporate climatic drivers which affect tsetse fly (G. m. morsitans) population dynamics, and ultimately disease transmission. The model was used to gain a greater understanding of how tsetse populations fluctuate seasonally and to investigate the response observed in Trypanosoma brucei rhodesiense human African trypanosomiasis (rHAT) disease transmission, with a view to understanding disease dynamics more fully. Such an understanding is essential for the development of appropriate, well-targeted mitigation strategies in the future. The ABM was developed to model rHAT incidence at a fine spatial scale along a 75 km transect in the Luangwa Valley, Zambia. The model incorporates climatic factors that affect pupal mortality, pupal development, birth rate, and death rate. In combination with fine-scale demographic data such as ethnicity, age and gender for the human population in the region, as well as an animal census and a sample of daily routines, we create a detailed, plausible simulation model to explore tsetse population and disease transmission dynamics. The seasonally-driven model suggests that the number of infections reported annually in the simulation is likely to be a reasonable representation of reality, taking into account the high levels of under-detection observed. Similar infection rates were observed in human (0.355 per 1000 person-years (SE = 0.013)) and cattle (0.281 per 1000 cattle-years (SE = 0.025)) populations, likely due to the sparsity of cattle close to the tsetse interface. The model suggests that immigrant tribes and school children are at greatest risk of infection, a result that derives from the bottom-up nature of the ABM and conditioning on multiple constraints. This result could not be inferred using alternative population-level modelling approaches. In producing a model which represents the tsetse population at a very fine resolution, we were able to analyse

  10. Vehicle-specific emissions modeling based upon on-road measurements.

    PubMed

    Frey, H Christopher; Zhang, Kaishan; Rouphail, Nagui M

    2010-05-01

    Vehicle-specific microscale fuel use and emissions rate models are developed based upon real-world hot-stabilized tailpipe measurements made using a portable emissions measurement system. Consecutive averaging periods of one to three multiples of the response time are used to compare two semiempirical physically based modeling schemes. One scheme is based on internally observable variables (IOVs), such as engine speed and manifold absolute pressure, while the other is based on externally observable variables (EOVs), such as speed, acceleration, and road grade. For NO, HC, and CO emission rates, the average R^2 ranged from 0.41 to 0.66 for the former and from 0.17 to 0.30 for the latter. The EOV models have R^2 for CO2 of 0.43 to 0.79, versus 0.99 for the IOV models. The models are sensitive to episodic events in driving cycles such as high acceleration. Intervehicle and fleet-average modeling approaches are compared; the former accounts for microscale variations that might be useful for some types of assessments. EOV-based models have practical value for traffic management or simulation applications since IOVs usually are not available or not used for emission estimation.

  11. The Modellers' Halting Foray into Ecological Theory: Or, What is This Thing Called 'Growth Rate'?

    PubMed

    Deveau, Michael; Karsten, Richard; Teismann, Holger

    2015-06-01

    This discussion paper describes the attempt of an imagined group of non-ecologists ("Modellers") to determine the population growth rate from field data. The Modellers wrestle with the multiple definitions of the growth rate available in the literature and the fact that, in their modelling, it appears to be drastically model-dependent, which seems to throw into question the very concept itself. Specifically, they observe that six representative models used to capture the data produce growth-rate values, which differ significantly. Almost ready to concede that the problem they set for themselves is ill-posed, they arrive at an alternative point of view that not only preserves the identity of the concept of the growth rate, but also helps discriminate between competing models for capturing the data. This is accomplished by assessing how robustly a given model is able to generate growth-rate values from randomized time-series data. This leads to the proposal of an iterative approach to ecological modelling in which the definition of theoretical concepts (such as the growth rate) and model selection complement each other. The paper is based on high-quality field data of mites on apple trees and may be called a "data-driven opinion piece".

  12. Calibrating reaction rates for the CREST model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline A.; Christie, Michael A.

    2017-01-01

    The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.
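
    A minimal particle swarm optimiser of the kind used for such calibrations is sketched below. The objective here is a toy stand-in; in the paper's setting the loss would measure misfit between hydrocode simulations and shock-initiation/detonation data, and the bounds would cover the CREST coefficients:

        import numpy as np

        # Minimal particle swarm optimiser (objective and bounds are
        # placeholders standing in for the hydrocode misfit calculation).
        def pso(loss, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(6)
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = rng.uniform(lo, hi, (n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([loss(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # Toy objective standing in for "misfit between simulated and measured data"
        coeffs, misfit = pso(lambda p: np.sum((p - 0.3) ** 2),
                             bounds=np.array([[0.0, 1.0]] * 3))
        print(coeffs, misfit)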

  13. Monitoring, modeling, and management: why base avian management on vital rates and how should it be done?

    Treesearch

    David F. DeSante; M. Philip Nott; Danielle R. Kaschube

    2005-01-01

    In this paper we argue that effective management of landbirds should be based on assessing and monitoring their vital rates (primary demographic parameters) as well as population trends. This is because environmental stressors and management actions affect vital rates directly and usually without time lags, and because monitoring vital rates provides a) information on...

  14. 9 CFR 592.510 - Base time rate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Base time rate. 592.510 Section 592.510 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Fees and Charges § 592.510 Base time rate. The...

  15. Disturbance Distance: Using a process based ecosystem model to estimate and map potential thresholds in disturbance rates that would give rise to fundamentally altered ecosystems

    NASA Astrophysics Data System (ADS)

    Dolan, K. A.; Hurtt, G. C.; Fisk, J.; Flanagan, S.; LePage, Y.; Sahajpal, R.

    2014-12-01

    Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to: carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. As recent studies highlight novel disturbance regimes resulting from pollution, invasive pests and climate change, there is a need to include these alterations in predictions of future forest function and structure. The Ecosystem Demography (ED) model is a mechanistic model of forest ecosystem dynamics in which individual-based forest dynamics can be efficiently implemented over regional to global scales due to advanced scaling methods. We utilize ED to characterize the sensitivity of potential vegetation structure and function to changes in rates of density-independent mortality. The disturbance rate within ED can be altered either directly or through the development of sub-models. Disturbance sub-models in ED currently include fire, land use, and hurricanes. We use a tiered approach to understand the sensitivity of North American ecosystems to changes in background density-independent mortality. Our first analyses were conducted at half-degree spatial resolution with a constant rate of disturbance in space and time, which was altered between runs. Annual climate was held constant at the site level and the land-use and fire sub-models were turned off. Results showed an ~30% increase in non-forest area across the US when disturbance rates were changed from 0.6% a year to 1.2% a year, and a more than 3.5-fold increase in non-forest area when disturbance rates doubled again from 1.2% to 2.4%. Continued runs altered natural background disturbance rates with the existing fire and hurricane sub-models turned on, as well as historic and future land use. By quantifying differences between model outputs that characterize ecosystem structure and function related to the carbon cycle across the US, we

  16. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    PubMed

    Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao

    2018-02-01

    Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high frame rate videos produced by FRUC are confronted with higher bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, so that interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and visual quality are taken into account. Due to the absence of the original frames, the distortion model for interpolated frames is established according to motion vector reliability and coding quantization error. Experimental results demonstrate that the proposed framework can achieve a 21%-42% reduction in BDBR, when compared with traditional methods of FRUC cascaded with coding.
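
    The core rate-distortion decision can be illustrated in a few lines: among candidate ways of producing an intermediate frame, choose the one minimising J = D + λR. The candidate set, distortion values, bit costs, and λ below are invented placeholders, not HEVC internals:

        # Rate-distortion mode decision of the kind the abstract describes:
        # pick the candidate minimising J = D + lambda * R.
        candidates = [
            {"mode": "interpolate_only", "distortion": 41.0, "bits": 120},
            {"mode": "code_residual",    "distortion": 25.0, "bits": 2600},
            {"mode": "skip",             "distortion": 55.0, "bits": 16},
        ]
        lam = 0.01   # Lagrange multiplier tying bitrate to distortion

        best = min(candidates, key=lambda c: c["distortion"] + lam * c["bits"])
        print(best["mode"])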

  17. Modeling High Rate Phosphorus and Nitrogen Removal in a Vertical Flow Alum Sludge based Constructed Wetlands

    NASA Astrophysics Data System (ADS)

    Jeyakumar, Lordwin; Zhao, Yaqian

    2014-05-01

    Increased awareness of the impacts of diffuse pollution and their intensification has pushed forward the need for the development of low-cost wastewater treatment techniques. One such effort is the use of novel DASC (Dewatered Alum Sludge Cake) based constructed wetlands (CWs) for removing nutrients, organics, trace elements and other pollutants from wastewater. Understanding the processes in CWs requires a numerical model that describes the biochemical transformation and degradation processes in subsurface vertical flow (VF) CWs. Therefore, this research focuses on the development of a process-based model for phosphorus (P) and nitrogen (N) removal to achieve stable performance by using DASC as a substrate in a CW treatment system. An object-oriented modelling tool known as "STELLA", which works on the principle of system dynamics, is used for the development of the P and N model. The core objective of the modelling work is oriented towards understanding the processes in DASC-based CWs and optimizing design criteria. The P and N dynamic model is developed for DASC-based CWs. The P model developed exclusively for the DASC-based CW was able to simulate the effluent P concentration leaving the system satisfactorily. Moreover, the developed P dynamic model identified the major P pathways as adsorption (72%), followed by plant uptake (20%) and microbial uptake (7%), in a single-stage laboratory-scale DASC-based CW. Similarly, a P dynamic simulation model was developed to simulate the four-stage laboratory-scale DASC-based CWs. It was found that simulated and observed values of P removal were in good agreement. The fate of P in all four stages clearly shows that adsorption played a pivotal role in each stage of the system due to the use of DASC as a substrate. P adsorption by the wetland substrate/DASC represents 59-75% of total P reduction. Plant uptake and microbial uptake play a lesser role in P removal compared with adsorption. With regard

  18. Prediction of indoor radon/thoron concentration in a model room from exhalation rates of building materials for different ventilation rates

    NASA Astrophysics Data System (ADS)

    Kumar, Manish; Sharma, Navjeet; Sarin, Amit

    2018-05-01

    Studies have confirmed that elevated levels of radon/thoron in human environments can substantially increase the risk of lung cancer in the general population. Building materials are the second-largest contributor to indoor radon/thoron after the soil and bedrock beneath dwellings. In the present investigation, the exhalation rates of radon/thoron from different building-material samples have been analysed using an active technique. Radon/thoron concentrations in a model room have been predicted based on the exhalation rates from the walls, floor, and roof. The indoor concentrations show significant variations depending upon the ventilation rate and the type of building materials used.
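
    Predictions of this kind typically rest on a steady-state single-zone mass balance, C = sum_i(E_i·A_i) / (V·(λ_Rn + λ_v)), where E_i are surface exhalation rates, A_i surface areas, V the room volume, λ_Rn the radon decay constant, and λ_v the ventilation rate. A sketch with assumed room geometry and exhalation values:

        import numpy as np

        # Steady-state single-zone mass balance for a "model room":
        #   C = sum_i(E_i * A_i) / (V * (lambda_Rn + lambda_v))
        # Surface areas (m^2) and exhalation rates (Bq m^-2 h^-1) are assumed
        # placeholders, not the paper's measured values.
        surfaces = {"walls": (62.0, 1.2), "floor": (20.0, 0.8), "roof": (20.0, 1.0)}
        V = 50.0                            # room volume, m^3
        lam_rn = np.log(2) / (3.82 * 24.0)  # Rn-222 decay constant, 1/h
        source = sum(area * e for area, e in surfaces.values())  # Bq/h

        for lam_v in (0.1, 0.5, 1.0):       # ventilation, air changes per hour
            c = source / (V * (lam_rn + lam_v))   # indoor concentration, Bq/m^3
            print(f"ACH {lam_v}: {c:.1f} Bq/m^3")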

  19. Modeling and predicting historical volatility in exchange rate markets

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2017-04-01

    Volatility modeling and forecasting of currency exchange rates is an important task in several business risk management settings, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators used as inputs to artificial neural networks (ANN). To show the effectiveness of the proposed approach, it was applied to forecast US/Canada and US/Euro exchange rate volatilities. The forecasting results show that our simple approach outperformed the conventional GARCH and EGARCH with different distribution assumptions, and also the hybrid GARCH and EGARCH with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.

  20. Probabilistic models and uncertainty quantification for the ionization reaction rate of atomic Nitrogen

    NASA Astrophysics Data System (ADS)

    Miki, K.; Panesi, M.; Prudencio, E. E.; Prudhomme, S.

    2012-05-01

    The objective of this paper is to analyze some stochastic models for estimating the ionization reaction rate constant of atomic Nitrogen (N + e- → N+ + 2e-). Parameters of the models are identified by means of Bayesian inference using spatially resolved absolute radiance data obtained from the Electric Arc Shock Tube (EAST) wind tunnel. The proposed methodology accounts for uncertainties in the model parameters as well as physical model inadequacies, providing estimates of the rate constant that reflect both types of uncertainty. We present four different probabilistic models, varying the error structure (either additive or multiplicative) and choosing different descriptions of the statistical correlation among data points. In order to assess the validity of our methodology, we first present calibration results obtained with manufactured data and then proceed with experimental data collected at the EAST facility. In order to simulate the radiative signature emitted in the shock-heated air plasma, we use a one-dimensional flow solver with Park's two-temperature model, which accounts for non-equilibrium effects. We also discuss the implications of the choice of stochastic model for the estimation of the reaction rate and its uncertainties. Our analysis shows that the stochastic models based on correlated multiplicative errors are the most plausible of the four models proposed in this study. The rate of atomic Nitrogen ionization is found to be (6.2 ± 3.3) × 10^11 cm^3 mol^-1 s^-1 at 10,000 K.
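
    The contrast between additive and multiplicative error structures can be illustrated with a minimal random-walk Metropolis calibration against manufactured data. The forward model below is a simple stand-in, not the paper's nonequilibrium flow solver, and the noise levels and step sizes are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(k, x):
    # Stand-in forward model mapping a rate constant to "radiance" along x;
    # the study itself uses a 1-D flow solver with Park's two-temperature model.
    return k * np.exp(-x)

x = np.linspace(0.0, 3.0, 40)
k_true = 2.0
data = forward(k_true, x) * rng.lognormal(0.0, 0.1, x.size)  # multiplicative noise

def loglike_add(k, sigma=0.1):
    # additive Gaussian errors: d = f(k) + eps
    return -0.5 * np.sum((data - forward(k, x)) ** 2) / sigma**2

def loglike_mult(k, sigma=0.1):
    # multiplicative lognormal errors: log d = log f(k) + eps
    return -0.5 * np.sum((np.log(data) - np.log(forward(k, x))) ** 2) / sigma**2

def metropolis(loglike, k0=1.0, steps=20000, step=0.05):
    k, ll, chain = k0, loglike(k0), []
    for _ in range(steps):
        prop = k + rng.normal(0.0, step)
        if prop > 0:
            llp = loglike(prop)
            if np.log(rng.uniform()) < llp - ll:
                k, ll = prop, llp
        chain.append(k)
    return np.array(chain[steps // 2:])     # discard burn-in

for name, ll in [("additive", loglike_add), ("multiplicative", loglike_mult)]:
    c = metropolis(ll)
    print(f"{name}: k = {c.mean():.3f} +/- {c.std():.3f}")
```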

  1. Chemistry resolved kinetic flow modeling of TATB based explosives

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Laurence E.; William, Howard; Levesque, George; Souers, P. Clark

    2012-03-01

    Detonation waves in insensitive, TATB-based explosives are believed to have multiple time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, a significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. In the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. We use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. We term our model chemistry resolved kinetic flow, since CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on those concentrations. We present here two variants of our new rate model and comparisons with hot, ambient, and cold experimental data for PBX 9502.

  2. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P A; Fried, L E; Howard, W M

    2011-07-21

    Detonation waves in insensitive, TATB-based explosives are believed to have multiple time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, a significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. In the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. They use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. They term their model chemistry resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. A HE-validation suite of model simulations compared to experiments at ambient, hot, and cold temperatures has been developed. They present here a new rate model and comparison with experimental data.

  3. What explains usage of mobile physician-rating apps? Results from a web-based questionnaire.

    PubMed

    Bidmon, Sonja; Terlutter, Ralf; Röttl, Johanna

    2014-06-11

    Consumers are increasingly accessing health-related information via mobile devices. Recently, several apps to rate and locate physicians have been released in the United States and Germany. However, knowledge about what kinds of variables explain usage of mobile physician-rating apps is still lacking. This study analyzes factors influencing the adoption of and willingness to pay for mobile physician-rating apps. A structural equation model was developed based on the Technology Acceptance Model and the literature on health-related information searches and usage of mobile apps. Relationships in the model were analyzed for moderating effects of physician-rating website (PRW) usage. A total of 1006 German patients who had visited a general practitioner at least once in the 3 months before the beginning of the survey were randomly selected and surveyed. A total of 958 usable questionnaires were analyzed by partial least squares path modeling and moderator analyses. The suggested model yielded a high model fit. We found that perceived ease of use (PEOU) of the Internet to gain health-related information, the sociodemographic variables age and gender, and the psychographic variables digital literacy, feelings about the Internet and other Web-based applications in general, patients' value of health-related knowledgeability, as well as the information-seeking behavior variables regarding the amount of daily private Internet use for health-related information, frequency of using apps for health-related information in the past, and attitude toward PRWs significantly affected the adoption of mobile physician-rating apps. The sociodemographic variable age, but not gender, and the psychographic variables feelings about the Internet and other Web-based applications in general and patients' value of health-related knowledgeability, but not digital literacy, were significant predictors of willingness to pay. Frequency of using apps for health-related information

  4. Volatility modeling for IDR exchange rate through APARCH model with student-t distribution

    NASA Astrophysics Data System (ADS)

    Nugroho, Didit Budi; Susanto, Bambang

    2017-08-01

    The aim of this study is to empirically investigate the performance of the APARCH(1,1) volatility model with the Student-t error distribution on five foreign currency selling rates against the Indonesian rupiah (IDR), namely the Swiss franc (CHF), the Euro (EUR), the British pound (GBP), the Japanese yen (JPY), and the US dollar (USD). Six years of daily closing rates over the period January 2010 to December 2016, for a total of 1722 observations, were analysed. Bayesian inference using the efficient independence chain Metropolis-Hastings and adaptive random walk Metropolis methods in a Markov chain Monte Carlo (MCMC) scheme was applied to estimate the parameters of the model. According to the DIC criterion, this study found that the APARCH(1,1) model under the Student-t distribution is a better fit than the model under the normal distribution for every observed rate return series. The 95% highest posterior density intervals supported the APARCH models for modelling the IDR/JPY and IDR/USD volatilities. In particular, the IDR/JPY and IDR/USD returns show significant negative and positive leverage effects, respectively. Meanwhile, the optimal power coefficient of volatility was found to be statistically different from 2 for all rate return series except the IDR/EUR series.
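
    The APARCH(1,1) conditional volatility recursion (Ding, Granger and Engle) underlying the study is σ_t^δ = ω + α(|ε_{t-1}| − γ ε_{t-1})^δ + β σ_{t-1}^δ, where γ captures the leverage effect and δ is the power coefficient. A minimal filtering sketch follows; the parameter values are illustrative, not the paper's posterior estimates.

```python
import numpy as np

def aparch_volatility(eps, omega, alpha, gamma, beta, delta):
    """Filter APARCH(1,1) conditional volatility:
    sigma_t^delta = omega + alpha*(|eps_{t-1}| - gamma*eps_{t-1})**delta
                    + beta*sigma_{t-1}^delta
    gamma in (-1, 1) captures leverage; delta is the power term the study
    finds to differ significantly from 2 for most IDR rate returns."""
    sd = np.empty(len(eps))                  # sigma^delta series
    sd[0] = omega / (1.0 - beta)             # crude initialization
    for t in range(1, len(eps)):
        sd[t] = (omega
                 + alpha * (abs(eps[t - 1]) - gamma * eps[t - 1]) ** delta
                 + beta * sd[t - 1])
    return sd ** (1.0 / delta)

# Illustrative parameters on a synthetic daily return series.
rng = np.random.default_rng(2)
eps = rng.normal(0.0, 0.01, 1722)            # stand-in daily IDR returns
sigma = aparch_volatility(eps, omega=1e-6, alpha=0.08, gamma=0.3,
                          beta=0.9, delta=1.5)
print(sigma[-5:])
```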

  5. Vehicle anti-rollover control strategy based on load transferring rate

    NASA Astrophysics Data System (ADS)

    Dai, W. T.; Du, H. Q.; Zhang, L.

    2018-03-01

    When a vehicle is driven on a low-adhesion road, or at high speed through a sharp turn, it is prone to lateral stability problems such as sideslip or rollover. In order to improve vehicle anti-rollover stability under these limit conditions, an SUV vehicle model with a high mass centre was built in the CarSim software, and a rollover stability controller was designed using a static threshold on the lateral load transfer rate (LTR). The simulations show that the anti-rollover stability of the SUV model under limit conditions is improved.
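
    The lateral load transfer rate is conventionally defined as LTR = (F_z,right − F_z,left)/(F_z,right + F_z,left), so |LTR| → 1 as one side of the vehicle approaches wheel lift-off. A minimal static-threshold sketch of the controller logic follows; the 0.8 threshold and the intervention mapping are illustrative placeholders, not the paper's calibrated design.

```python
def load_transfer_rate(fz_left, fz_right):
    """LTR = (Fz_right - Fz_left) / (Fz_right + Fz_left);
    |LTR| -> 1 means one side of the vehicle is about to lift off."""
    return (fz_right - fz_left) / (fz_right + fz_left)

def anti_rollover_command(fz_left, fz_right, threshold=0.8):
    """Static-threshold logic as described in the abstract: request an
    intervention (e.g. differential braking / torque reduction) once
    |LTR| crosses the threshold. Threshold and side mapping are
    illustrative, not the paper's values."""
    ltr = load_transfer_rate(fz_left, fz_right)
    if abs(ltr) >= threshold:
        return {"ltr": round(ltr, 3), "intervene": True}
    return {"ltr": round(ltr, 3), "intervene": False}

# Hypothetical vertical wheel loads (N) during a sharp turn:
print(anti_rollover_command(fz_left=800.0, fz_right=9000.0))
```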

  6. Modeling analysis of pulsed magnetization process of magnetic core based on inverse Jiles-Atherton model

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang

    2018-05-01

    The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
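
    The parameter-identification step can be sketched with any global optimizer fitting model parameters to a measured magnetization curve. The paper uses particle swarm optimization; the sketch below substitutes scipy's differential evolution (a different but related population-based optimizer) and a toy anhysteretic curve M(H) = Ms·tanh(H/a) in place of the full inverse J-A equations, so both the model function and the data are placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
H = np.linspace(-5e3, 5e3, 200)                  # field sweep (A/m)
# Synthetic "measured" magnetization with noise (A/m).
M_meas = 1.2e6 * np.tanh(H / 800.0) + rng.normal(0.0, 5e3, H.size)

def model(params, H):
    # Toy stand-in for the inverse J-A model (which would carry
    # parameters Ms, a, k, c, alpha).
    Ms, a = params
    return Ms * np.tanh(H / a)

def cost(params):
    return np.mean((model(params, H) - M_meas) ** 2)

res = differential_evolution(cost, bounds=[(1e5, 2e6), (100.0, 5e3)],
                             seed=0, tol=1e-8)
print("fitted Ms, a:", res.x)
```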

  7. Cosmogenic Ne-21 Production Rates in H-Chondrites Based on Cl-36 - Ar-36 Ages

    NASA Technical Reports Server (NTRS)

    Leya, I.; Graf, Th.; Nishiizumi, K.; Guenther, D.; Wieler, R.

    2000-01-01

    We measured Ne-21 production rates in 14 H-chondrites and found them to be in good agreement with model calculations. The production rates are based on Ne-21 concentrations measured on bulk samples or the non-magnetic fraction, and on Cl-36 - Ar-36 ages determined from the metal phase.

  8. Analysis of Factors that Influence Infiltration Rates using the HELP Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyer, J.; Shipmon, J.

    The Hydrologic Evaluation of Landfill Performance (HELP) model is used by Savannah River National Laboratory (SRNL) in conjunction with PORFLOW groundwater flow simulation software to make long-term predictions of the fate and transport of radionuclides in the environment at radiological waste sites. The work summarized in this report supports preparation of the planned 2018 Performance Assessment for the E-Area Low-Level Waste Facility (LLWF) at the Savannah River Site (SRS). More specifically, this project focused on conducting a sensitivity analysis of infiltration (i.e., the rate at which water travels vertically in soil) through the proposed E-Area LLWF closure cap. A sensitivity analysis was completed using HELP v3.95D to identify the cap design and material property parameters that most impact infiltration rates through the proposed closure cap for a 10,000-year simulation period. The results of the sensitivity analysis indicate that saturated hydraulic conductivity (Ksat) for select cap layers, precipitation rate, surface vegetation type, and geomembrane layer defect density are the dominant factors limiting infiltration rate. Interestingly, calculated infiltration rates were substantially influenced by changes in the saturated hydraulic conductivity of the Upper Foundation and Lateral Drainage layers. For example, an order-of-magnitude decrease in Ksat for the Upper Foundation layer lowered the maximum infiltration rate from a base-case 11 inches per year to only two inches per year. Conversely, an order-of-magnitude increase in Ksat led to an increase in infiltration rate from 11 to 15 inches per year. This work and its results provide a framework for quantifying uncertainty in the radionuclide transport and dose models for the planned 2018 E-Area Performance Assessment. Future work will focus on the development of a nonlinear regression model for infiltration rate using Minitab 17® to facilitate execution of probabilistic simulations in the Gold

  9. A Simultaneous Equation Demand Model for Block Rates

    NASA Astrophysics Data System (ADS)

    Agthe, Donald E.; Billings, R. Bruce; Dobra, John L.; Raffiee, Kambiz

    1986-01-01

    This paper examines the problem of simultaneous-equations bias in the estimation of the water demand function under an increasing block rate structure. The Hausman specification test is used to detect the presence of simultaneous-equations bias, arising from correlation of the price measures with the regression error term, in the results of a previously published study of water demand in Tucson, Arizona. An alternative simultaneous equation model is proposed for estimating the elasticity of demand in the presence of block rate pricing structures and availability-of-service charges. This model is used to reestimate the price and rate premium elasticities of demand in Tucson, Arizona, for both the usual long-run static model and a simple short-run demand model. The results from these simultaneous equation models are consistent with a priori expectations and are unbiased.

  10. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    PubMed

    Yu, Kezi; Quirk, J Gerald; Djurić, Petar M

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for the classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.

  11. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models

    PubMed Central

    Yu, Kezi; Quirk, J. Gerald

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for the classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting. PMID:28953927

  12. A cloud model-radiative model combination for determining microwave TB-rain rate relations

    NASA Technical Reports Server (NTRS)

    Szejwach, Gerard; Adler, Robert F.; Jobard, Esabelle; Mack, Robert A.

    1986-01-01

    The development of a cloud model-radiative transfer model combination for computing average brightness temperature, T(B), is discussed. The cloud model and radiative transfer model used in this study are described. The relations between rain rate, cloud and rain water, cloud and precipitation ice, and upwelling radiance are investigated. The effects of the rain rate relations on T(B) under different climatological conditions are examined. The model-derived T(B) results are compared to the 92 and 183 GHz aircraft observations of Hakkarinen and Adler (1984, 1986) and to the radar-estimated rain rates of Hakkarinen and Adler (1986); good agreement between the datasets is found.

  13. Disturbance Distance: Combining a process based ecosystem model and remote sensing data to map the vulnerability of U.S. forested ecosystems to potentially altered disturbance rates

    NASA Astrophysics Data System (ADS)

    Dolan, K. A.

    2015-12-01

    Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. In addition, recent studies suggest that disturbance rates may increase in the future under altered climate and land use scenarios. Thus, understanding how vulnerable forested ecosystems are to potential changes in disturbance rates is of high importance. This study calculated the theoretical threshold rate of disturbance above which forest ecosystems could no longer be sustained (λ*) across the conterminous U.S. using an advanced process-based ecosystem model (ED). Published rates of disturbance (λ) at 50 study sites were obtained from the North American Forest Disturbance (NAFD) program. The disturbance distance (λ* - λ) was calculated for each site by differencing the model-based threshold under current climate conditions and the average observed rate of disturbance over the last quarter century. Preliminary results confirm that all sampled forest sites have current average rates of disturbance below λ*, but there were interesting patterns in the recorded disturbance distances. In general, western sites had much smaller disturbance distances, suggesting higher vulnerability to change, while eastern sites showed larger buffers. Ongoing work is assessing the vulnerability of these sites to potential future changes by propagating scenarios of future climate and land-use change through the analysis.

  14. A research model--forecasting incident rates from optimized safety program intervention strategies.

    PubMed

    Iyer, P S; Haight, J M; Del Castillo, E; Tink, B W; Hawkins, P W

    2005-01-01

    INTRODUCTION/PROBLEM: Property damage incidents, workplace injuries, and the safety programs designed to prevent them are expensive aspects of doing business in contemporary industry. The National Safety Council (2002) estimated that workplace injuries cost $146.6 billion per year. Because companies are resource limited, optimizing intervention strategies to decrease incidents with less costly programs can contribute to improved productivity. Systematic data collection methods were employed, and the forecasting ability of a time-lag relationship between interventions and incident rates was studied using various statistical methods (an intervention is not expected to have an immediate or an infinitely lasting effect on the incident rate). As a follow-up to the initial work, researchers developed two models designed to forecast incident rates: one based on past incident rate performance and the other on the configuration and level of effort applied to the safety and health program. Researchers compared actual incident performance to the prediction capability of each model over 18 months in the forestry operations of an electricity distribution company and found that the models allowed accurate prediction of incident rates. These models potentially have powerful implications as a business-planning tool for human resource allocation and for designing an optimized safety and health intervention program to minimize incidents. Depending on the mathematical relationship, one can determine which interventions to apply, where and how much to apply them, and when to increase or reduce human resource input as indicated by the forecasted performance.

  15. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  16. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2014-05-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. The hazardous consequences extend to national and continental scales. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents - both for research purposes and, more importantly, to determine the immediate threat to the population. However, the assessments of regional radionuclide activity concentrations and of the individual exposure to radiation dose are subject to several uncertainties, for example the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to realise a feasible estimation of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are combined with atmospheric transport models. The release rates of radionuclides at the accident site are then obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method, and hence of the resulting source term, depends among others on the availability, reliability and the resolution in time and space of the observations. Radionuclide activity concentrations are observed on a
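
    The core inversion step can be sketched as a regularized nonnegative least-squares problem: the transport model supplies a source-receptor matrix M (modelled observation i per unit release in interval j), and release rates x ≥ 0 are recovered by fitting M·x to the observations y. The matrix, data, and regularization weight below are synthetic illustrations of that structure, not this paper's formulation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n_obs, n_rel = 60, 12
M = rng.uniform(0.0, 1.0, (n_obs, n_rel))         # synthetic source-receptor matrix
x_true = np.zeros(n_rel)
x_true[3:6] = [2.0, 5.0, 1.0]                     # short release pulse
y = M @ x_true + rng.normal(0.0, 0.05, n_obs)     # synthetic observations

# Tikhonov-style smoothing: penalize jumps between adjacent release intervals.
eps = 0.5
D = np.diff(np.eye(n_rel), axis=0)                # first-difference operator
A = np.vstack([M, eps * D])
b = np.concatenate([y, np.zeros(n_rel - 1)])

x_hat, _ = nnls(A, b)                             # nonnegative release rates
print(np.round(x_hat, 2))
```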

  17. A Comparison of Moment Rates for the Eastern Mediterranean Region from Competitive Kinematic Models

    NASA Astrophysics Data System (ADS)

    Klein, E. C.; Ozeren, M. S.; Shen-Tu, B.; Galgana, G. A.

    2017-12-01

    Relatively continuous, complex, and long-lived episodes of tectonic deformation gradually shaped the lithosphere of the eastern Mediterranean region into its present state. This large geodynamically interconnected and seismically active region absorbs, accumulates and transmits strains arising from stresses associated with: (1) steady northward convergence of the Arabian and African plates; (2) differences in lithospheric gravitational potential energy; and (3) basal tractions exerted by subduction along the Hellenic and Cyprus Arcs. Over the last twenty years, numerous kinematic models have been built using a variety of assumptions to take advantage of the extensive and dense GPS observations made across the entire region resulting in a far better characterization of the neotectonic deformation field than ever previously achieved. In this study, three separate horizontal strain rate field solutions obtained from three, region-wide, GPS only based kinematic models (i.e., a regional block model, a regional continuum model, and global continuum model) are utilized to estimate the distribution and uncertainty of geodetic moment rates within the eastern Mediterranean region. The geodetic moment rates from each model are also compared with seismic moment release rates gleaned from historic earthquake data. Moreover, kinematic styles of deformation derived from each of the modeled horizontal strain rate fields are examined for their degree of correlation with earthquake rupture styles defined by proximal centroid moment tensor solutions. This study suggests that significant differences in geodetically obtained moment rates from competitive kinematic models may introduce unforeseen bias into regularly updated, geodetically constrained, regional seismic hazard assessments.
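
    One common recipe for converting a geodetic strain-rate field into a scalar moment rate, following Savage and Simpson (1997), is Ṁ0 = 2μHA·max(|ε̇1|, |ε̇2|, |ε̇1 + ε̇2|) per grid cell. A minimal sketch follows; the shear modulus, seismogenic thickness, and strain rates are illustrative assumptions, and this is only one of several conventions the compared models might use.

```python
MU = 3.0e10        # shear modulus (Pa), illustrative
H_SEIS = 15e3      # assumed seismogenic thickness (m)

def moment_rate(e1, e2, area):
    """Scalar geodetic moment rate per cell after Savage & Simpson (1997):
    M0_dot = 2 * mu * H * A * max(|e1|, |e2|, |e1 + e2|),
    with e1, e2 the principal horizontal strain rates (1/yr) and
    A the cell area (m^2). Returns N*m per year."""
    return 2.0 * MU * H_SEIS * area * max(abs(e1), abs(e2), abs(e1 + e2))

# Hypothetical 50 km x 50 km cell straining at tens of nanostrain per year:
print(f"{moment_rate(5e-8, -2e-8, 50e3 * 50e3):.3e} N*m/yr")
```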

  18. An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution

    NASA Astrophysics Data System (ADS)

    Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan

    2013-04-01

    The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies, including smoothed seismicity approaches. Smoothed seismicity represents an alternative concept for expressing the degree of spatial stationarity of seismicity, and provides results that are objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subduction zones. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: the first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density); the second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude, assuming that (1) the occurrence of past seismicity is a good proxy to forecast occurrence of future seismicity and (2) future large-magnitude events occur more likely in the vicinity of known faults. Consequently
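
    The basic ingredient of such a model is a kernel density built from past epicentres. The sketch below uses a fixed-bandwidth Gaussian kernel on a planar grid; the SHARE model itself uses variable (adaptive) kernels and additionally smooths fault moment rates, and the coordinates, bandwidth, and events here are synthetic.

```python
import numpy as np

def smoothed_rate(epicenters, grid_x, grid_y, bandwidth_km=30.0):
    """Gaussian kernel density of past epicentres on a grid (coordinates
    treated as planar km for simplicity). Fixed bandwidth stands in for
    the adaptive kernels used in the SHARE model."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    dens = np.zeros_like(gx)
    for ex, ey in epicenters:
        d2 = (gx - ex) ** 2 + (gy - ey) ** 2
        dens += np.exp(-0.5 * d2 / bandwidth_km**2)
    dens /= 2.0 * np.pi * bandwidth_km**2 * len(epicenters)
    return dens / dens.sum()          # normalize to a location probability

rng = np.random.default_rng(5)
events = rng.normal(100.0, 20.0, (200, 2))     # synthetic epicentres (km)
grid = np.linspace(0.0, 200.0, 81)
p = smoothed_rate(events, grid, grid)
print(p.max(), p.sum())
```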

  19. Liver cancer mortality rate model in Thailand

    NASA Astrophysics Data System (ADS)

    Sriwattanapongse, Wattanavadee; Prasitwattanaseree, Sukon

    2013-09-01

    Liver cancer has been a leading cause of death in Thailand. The purpose of this study was to model and forecast the liver cancer mortality rate in Thailand using death certificate reports. A retrospective analysis of the liver cancer mortality rate was conducted. A total of 123,280 liver cancer deaths were obtained from the national vital registration database for the 10-year period from 2000 to 2009, provided by the Ministry of Interior and coded as cause of death using ICD-10 by the Ministry of Public Health. A multivariate regression model was used for modeling and forecasting age-specific liver cancer mortality rates in Thailand. Liver cancer mortality increased with increasing age for each sex and was also higher in the North East provinces. The trends in liver cancer mortality remained stable in most age groups over the ten-year period (2000 to 2009), with increases in the Northern and Southern regions. Liver cancer mortality was higher in males and increased with increasing age. Liver cancer control measures need to be maintained on a sustained, long-term basis given Thailand's high liver cancer burden.

  20. Rate/state Coulomb stress transfer model for the CSEP Japan seismicity forecast

    NASA Astrophysics Data System (ADS)

    Toda, Shinji; Enescu, Bogdan

    2011-03-01

    Numerous studies have retrospectively found that the seismicity rate jumps (drops) in response to coseismic Coulomb stress increases (decreases). The Collaboratory for the Study of Earthquake Predictability (CSEP) instead provides us an opportunity for prospective testing of the Coulomb hypothesis. Here we adapt our stress transfer model, incorporating the rate- and state-dependent friction law, to the CSEP Japan seismicity forecast. We demonstrate how to compute the forecast rates of large shocks in 2009 using the large earthquakes of the past 120 years. The time-dependent impact of the coseismic stress perturbations explains qualitatively well the occurrence of recent moderate-sized shocks. This ability is partly similar to that of statistical earthquake clustering models. However, our model differs from them as follows: the off-fault aftershock zones can be simulated using finite fault sources; the regional areal patterns of triggered seismicity are modified by the dominant mechanisms of the potential sources; and the stresses imparted by large earthquakes produce stress shadows that lead to a reduction in the forecasted number of earthquakes. Although the model relies on several unknown parameters, it is the first physics-based model submitted to the CSEP Japan test center and has the potential to be tuned for short-term earthquake forecasts.
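
    The rate-and-state response to a Coulomb stress step is commonly written following Dieterich (1994) as R(t) = r / [1 + (exp(−ΔCFF/Aσ) − 1)·exp(−t/t_a)], producing a transient rate jump for positive steps and a stress shadow for negative ones. The sketch below evaluates this expression; the parameter values are illustrative, not the CSEP-Japan calibration.

```python
import numpy as np

def dieterich_rate(t, dcff, r=1.0, a_sigma=0.04, t_a=10.0):
    """Seismicity rate after a coseismic Coulomb stress step (Dieterich 1994):
    R(t) = r / (1 + (exp(-dCFF / (A*sigma)) - 1) * exp(-t / t_a))
    r       : background rate, dcff : stress change (MPa),
    a_sigma : A*sigma (MPa),   t_a  : aftershock decay time (yr).
    Positive dcff -> rate jump relaxing back to r; negative -> shadow."""
    return r / (1.0 + (np.exp(-dcff / a_sigma) - 1.0) * np.exp(-t / t_a))

t = np.array([0.01, 0.1, 1.0, 10.0, 50.0])       # years after the mainshock
print("stress increase:", np.round(dieterich_rate(t, +0.1), 2))
print("stress shadow:  ", np.round(dieterich_rate(t, -0.1), 3))
```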

  1. The effect of learning models and emotional intelligence toward students learning outcomes on reaction rate

    NASA Astrophysics Data System (ADS)

    Sutiani, Ani; Silitonga, Mei Y.

    2017-08-01

    This research focused on the effect of learning models and emotional intelligence on students' chemistry learning outcomes for the reaction rate topic. In order to achieve the objectives of the research, a 2x2 factorial design was used. Two factors were tested, namely the learning model (factor A) and emotional intelligence (factor B). Two learning models were used: problem-based learning/PBL (A1) and project-based learning/PjBL (A2), while emotional intelligence was divided into higher and lower types. The population consisted of six classes containing 243 grade X students of SMAN 10 Medan, Indonesia. Fifteen students from each class were chosen as the sample by applying a purposive sampling technique. The data were analyzed by applying two-way analysis of variance (2x2) at the significance level α = 0.05. Based on hypothesis testing, there was an interaction between learning models and emotional intelligence in students' chemistry learning outcomes. The findings showed that the reaction rate learning outcomes of students with higher emotional intelligence taught using PBL were higher than those of students taught using PjBL. There was no significant difference between students with lower emotional intelligence taught using PBL and those taught using PjBL. Based on the findings, students with lower emotional intelligence found it quite hard to interact with other students in group discussions.

  2. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio's market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.
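
    The noise-removal ingredient of such a pipeline can be sketched with a standard discrete-wavelet denoising step using PyWavelets: decompose, soft-threshold the detail coefficients, reconstruct. The db4 wavelet, decomposition level, and universal threshold are common textbook choices, not necessarily the authors', and the series is synthetic.

```python
import numpy as np
import pywt

rng = np.random.default_rng(6)
n = 1024
signal = np.cumsum(rng.normal(0.0, 1.0, n))      # stand-in oil-price path
noisy = signal + rng.normal(0.0, 2.0, n)

coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale from finest level
thresh = sigma * np.sqrt(2.0 * np.log(n))        # universal threshold
coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[:n]

print(f"residual std before/after: {np.std(noisy - signal):.2f} / "
      f"{np.std(denoised - signal):.2f}")
```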

  3. On a sparse pressure-flow rate condensation of rigid circulation models

    PubMed Central

    Schiavazzi, D. E.; Hsia, T. Y.; Marsden, A. L.

    2015-01-01

    Cardiovascular simulation has shown potential value in clinical decision-making, providing a framework to assess changes in hemodynamics produced by physiological and surgical alterations. State-of-the-art predictions are provided by deterministic multiscale numerical approaches coupling 3D finite element Navier-Stokes simulations to lumped parameter circulation models governed by ODEs. Development of next-generation stochastic multiscale models whose parameters can be learned from available clinical data under uncertainty constitutes a research challenge made more difficult by the high computational cost typically associated with the solution of these models. We present a methodology for constructing reduced representations that condense the behavior of 3D anatomical models using outlet pressure-flow polynomial surrogates, based on multiscale model solutions spanning several heart cycles. Relevance vector machine regression is compared with maximum likelihood estimation, showing that sparse pressure/flow rate approximations offer superior performance in producing working surrogate models to be included in lumped circulation networks. Sensitivities of outlet flow rates are also quantified through a Sobol' decomposition of their total variance encoded in the orthogonal polynomial expansion. Finally, we show that augmented lumped parameter models including the proposed surrogates accurately reproduce the response of the multiscale models they were derived from. In particular, results are presented for models of the coronary circulation with closed-loop boundary conditions and the abdominal aorta with open-loop boundary conditions. PMID:26671219
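
    A minimal sketch of the surrogate idea: fit a sparse polynomial map from outlet pressure to flow rate. scikit-learn does not ship a relevance vector machine, so ARDRegression (a closely related sparse Bayesian regressor) stands in here, and the training data that would come from 3D multiscale simulations are synthesized.

```python
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)
pressure = rng.uniform(70.0, 120.0, 300)          # hypothetical outlet pressures
flow = 0.8 * pressure - 3e-3 * pressure**2 + rng.normal(0.0, 0.5, 300)

# Degree-4 polynomial features of pressure; the sparse prior prunes
# unneeded terms automatically.
X = PolynomialFeatures(degree=4, include_bias=False).fit_transform(
    pressure.reshape(-1, 1))
surrogate = ARDRegression().fit(X, flow)

print("retained coefficients:", np.round(surrogate.coef_, 4))
```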

  4. On a problematic procedure to manipulate response biases in recognition experiments: the case of "implied" base rates.

    PubMed

    Bröder, Arndt; Malejka, Simone

    2017-07-01

    The experimental manipulation of response biases in recognition-memory tests is an important means for testing recognition models and for estimating their parameters. The textbook manipulations for binary-response formats either vary the payoff scheme or the base rate of targets in the recognition test, with the latter being the more frequently applied procedure. However, some published studies reverted to implying different base rates by instruction rather than actually changing them. Aside from unnecessarily deceiving participants, this procedure may lead to cognitive conflicts that prompt response strategies unknown to the experimenter. To test our objection, implied base rates were compared to actual base rates in a recognition experiment followed by a post-experimental interview to assess participants' response strategies. The behavioural data show that recognition-memory performance was estimated to be lower in the implied base-rate condition. The interview data demonstrate that participants used various second-order response strategies that jeopardise the interpretability of the recognition data. We thus advise researchers against substituting actual base rates with implied base rates.

  5. Evolution of the rate of biological aging using a phenotype based computational model.

    PubMed

    Kittas, Aristotelis

    2010-10-07

    In this work I introduce a simple model to study how natural selection acts upon aging, which focuses on the viability of each individual. It is able to reproduce the Gompertz law of mortality and can make predictions about the relation between the level of mutation rates (beneficial/deleterious/neutral), age at reproductive maturity and the degree of biological aging. With no mutations, a population with low age at reproductive maturity R stabilizes at higher density values, while with mutations it reaches its maximum density, because even for large pre-reproductive periods each individual evolves to survive to maturity. Species with very short pre-reproductive periods can only tolerate a small number of detrimental mutations. The probabilities of detrimental (P(d)) or beneficial (P(b)) mutations are demonstrated to greatly affect the process. High absolute values produce peaks in the viability of the population over time. Mutations combined with low selection pressure move the system towards weaker phenotypes. For low values of the ratio P(d)/P(b), the speed at which aging occurs is almost independent of R, while higher values significantly favor species with high R. The value of R is critical to whether the population survives or dies out. The aging rate is controlled by P(d) and P(b), which modify the viability of each individual, with neutral mutations allowing the system more "room" to evolve. The process of aging in this simple model is revealed to be fairly complex, yielding a rich variety of results. © 2010 Elsevier Ltd. All rights reserved.

  6. A simplified model for predicting malaria entomologic inoculation rates based on entomologic and parasitologic parameters relevant to control.

    PubMed

    Killeen, G F; McKenzie, F E; Foy, B D; Schieffelin, C; Billingsley, P F; Beier, J C

    2000-05-01

    Malaria transmission intensity is modeled from the starting perspective of individual vector mosquitoes and is expressed directly as the entomologic inoculation rate (EIR). The potential of individual mosquitoes to transmit malaria during their lifetime is presented graphically as a function of their feeding cycle length and survival, human biting preferences, and the parasite sporogonic incubation period. The EIR is then calculated as the product of 1) the potential of individual vectors to transmit malaria during their lifetime, 2) vector emergence rate relative to human population size, and 3) the infectiousness of the human population to vectors. Thus, impacts on more than one of these parameters will amplify each other's effects. The EIRs transmitted by the dominant vector species at four malaria-endemic sites from Papua New Guinea, Tanzania, and Nigeria were predicted using field measurements of these characteristics together with human biting rate and human reservoir infectiousness. This model predicted EIRs (+/- SD) that are 1.13 +/- 0.37 (range = 0.84-1.59) times those measured in the field. For these four sites, mosquito emergence rate and lifetime transmission potential were more important determinants of the EIR than human reservoir infectiousness. This model and the input parameters from the four sites allow the potential impacts of various control measures on malaria transmission intensity to be tested under a range of endemic conditions. The model has potential applications for the development and implementation of transmission control measures and for public health education.
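
    The abstract's factorization lends itself to a one-line computation: EIR is the product of the lifetime transmission potential of individual vectors, the vector emergence rate relative to human population size, and the infectiousness of the human population. The sketch below encodes that product with simplified units; the numbers are hypothetical, not the paper's field estimates.

```python
def eir(lifetime_potential, emergence_rate, human_infectiousness):
    """Entomologic inoculation rate as the product in the abstract:
    EIR = P * E * kappa, where
      P     : infectious bites delivered per emerging vector over its lifetime,
      E     : vector emergence rate per person per year,
      kappa : proportion of vector blood meals that infect the vector.
    Because EIR is a product, halving any one factor halves the EIR,
    so interventions on several factors amplify each other."""
    return lifetime_potential * emergence_rate * human_infectiousness

# Hypothetical endemic-setting numbers:
print(f"EIR = {eir(0.05, 2.0e4, 0.05):.0f} infectious bites/person/year")
```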

  7. A SIMPLIFIED MODEL FOR PREDICTING MALARIA ENTOMOLOGIC INOCULATION RATES BASED ON ENTOMOLOGIC AND PARASITOLOGIC PARAMETERS RELEVANT TO CONTROL

    PubMed Central

    KILLEEN, GERRY F.; McKENZIE, F. ELLIS; FOY, BRIAN D.; SCHIEFFELIN, CATHERINE; BILLINGSLEY, PETER F.; BEIER, JOHN C.

    2008-01-01

    Malaria transmission intensity is modeled from the starting perspective of individual vector mosquitoes and is expressed directly as the entomologic inoculation rate (EIR). The potential of individual mosquitoes to transmit malaria during their lifetime is presented graphically as a function of their feeding cycle length and survival, human biting preferences, and the parasite sporogonic incubation period. The EIR is then calculated as the product of 1) the potential of individual vectors to transmit malaria during their lifetime, 2) vector emergence rate relative to human population size, and 3) the infectiousness of the human population to vectors. Thus, impacts on more than one of these parameters will amplify each other’s effects. The EIRs transmitted by the dominant vector species at four malaria-endemic sites from Papua New Guinea, Tanzania, and Nigeria were predicted using field measurements of these characteristics together with human biting rate and human reservoir infectiousness. This model predicted EIRs (± SD) that are 1.13 ± 0.37 (range = 0.84–1.59) times those measured in the field. For these four sites, mosquito emergence rate and lifetime transmission potential were more important determinants of the EIR than human reservoir infectiousness. This model and the input parameters from the four sites allow the potential impacts of various control measures on malaria transmission intensity to be tested under a range of endemic conditions. The model has potential applications for the development and implementation of transmission control measures and for public health education. PMID:11289661

  8. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    NASA Astrophysics Data System (ADS)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys based on a rotating panel design have been conducted over time in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed for estimating parameters only at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using cross-sectional methods only, despite the fact that the data are collected under a rotating panel design. The purpose of this study is to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application of, and comparison between, the Rao-Yu model and a dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although it was reduced over time.

  9. Extending rule-based methods to model molecular geometry and 3D model resolution.

    PubMed

    Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia

    2016-08-01

    Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models

  10. What Explains Usage of Mobile Physician-Rating Apps? Results From a Web-Based Questionnaire

    PubMed Central

    Terlutter, Ralf; Röttl, Johanna

    2014-01-01

    Background Consumers are increasingly accessing health-related information via mobile devices. Recently, several apps to rate and locate physicians have been released in the United States and Germany. However, knowledge about what kinds of variables explain usage of mobile physician-rating apps is still lacking. Objective This study analyzes factors influencing the adoption of and willingness to pay for mobile physician-rating apps. A structural equation model was developed based on the Technology Acceptance Model and the literature on health-related information searches and usage of mobile apps. Relationships in the model were analyzed for moderating effects of physician-rating website (PRW) usage. Methods A total of 1006 German patients who had visited a general practitioner at least once in the 3 months before the beginning of the survey were randomly selected and surveyed. A total of 958 usable questionnaires were analyzed by partial least squares path modeling and moderator analyses. Results The suggested model yielded a high model fit. We found that perceived ease of use (PEOU) of the Internet to gain health-related information, the sociodemographic variables age and gender, and the psychographic variables digital literacy, feelings about the Internet and other Web-based applications in general, patients’ value of health-related knowledgeability, as well as the information-seeking behavior variables regarding the amount of daily private Internet use for health-related information, frequency of using apps for health-related information in the past, and attitude toward PRWs significantly affected the adoption of mobile physician-rating apps. The sociodemographic variable age, but not gender, and the psychographic variables feelings about the Internet and other Web-based applications in general and patients’ value of health-related knowledgeability, but not digital literacy, were significant predictors of willingness to pay. Frequency of

  11. Distributed Fair Auto Rate Medium Access Control for IEEE 802.11 Based WLANs

    NASA Astrophysics Data System (ADS)

    Zhu, Yanfeng; Niu, Zhisheng

    Much research has shown that a carefully designed auto rate medium access control can utilize the underlying multi-rate physical layer to exploit the time variation of the channel. In this paper, we develop a simple analytical model to elucidate the rule that maximizes the throughput of RTS/CTS based multi-rate wireless local area networks. Based on the discovered rule, we propose two distributed fair auto rate medium access control schemes, called FARM and FARM+, from the viewpoints of throughput fairness and time-share fairness, respectively. With the proposed schemes, after receiving an RTS frame, the receiver selectively returns the CTS frame to inform the transmitter of the maximum feasible rate probed from the signal-to-noise ratio of the received RTS frame. The key feature of the proposed schemes is that they are capable of maintaining throughput/time-share fairness in asymmetric situations where the SNR distribution varies across stations. Extensive simulation results show that the proposed schemes outperform existing throughput/time-share fair auto rate schemes under time-varying channel conditions.
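
    The receiver-side rate probing reduces to a threshold lookup: map the SNR measured on the incoming RTS to the highest PHY rate that SNR can support, and advertise it back in the CTS. The 802.11a/g rate set below is real, but the SNR thresholds are illustrative placeholders, not FARM's calibrated values.

```python
RATE_TABLE = [   # (min SNR in dB, PHY rate in Mbit/s) -- thresholds hypothetical
    (25.0, 54.0), (22.0, 48.0), (18.0, 36.0), (13.0, 24.0),
    (11.0, 18.0), (8.0, 12.0), (6.0, 9.0), (4.0, 6.0),
]

def select_rate(rts_snr_db):
    """Return the maximum feasible PHY rate probed from the RTS SNR;
    the receiver advertises it to the sender in the returned CTS."""
    for min_snr, rate in RATE_TABLE:
        if rts_snr_db >= min_snr:
            return rate
    return None   # below sensitivity: withhold the CTS

for snr in (3.0, 12.0, 23.0):
    print(f"SNR {snr:4.1f} dB -> {select_rate(snr)}")
```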

  12. Sphaleron rate in the minimal standard model.

    PubMed

    D'Onofrio, Michela; Rummukainen, Kari; Tranberg, Anders

    2014-10-03

    We use large-scale lattice simulations to compute the rate of baryon number violating processes (the sphaleron rate), the Higgs field expectation value, and the critical temperature in the standard model across the electroweak phase transition temperature. While there is no true phase transition between the high-temperature symmetric phase and the low-temperature broken phase, the crossover is sharp and located at temperature T_c = (159.5 ± 1.5) GeV. The sphaleron rate in the symmetric phase (T > T_c) is Γ/T^4 = (18 ± 3)α_W^5, and in the broken phase in the physically interesting temperature range 130 GeV < T < T_c it can be parametrized as log(Γ/T^4) = (0.83 ± 0.01)T/GeV − (147.7 ± 1.9). The freeze-out temperature in the early Universe, where the Hubble rate wins over the baryon number violation rate, is T_* = (131.7 ± 2.3) GeV. These values, beyond being intrinsic properties of the standard model, are relevant for, e.g., low-scale leptogenesis scenarios.

  13. A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.

    PubMed

    Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger

    2018-04-19

    Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients and along the genome depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take into account the fine scale of the explanatory variables, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting mutation rate for both regional based models and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than regional based models. We use a forward selection procedure to identify the most important explanatory variables. The procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors for the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the regional based models. The possibility of including genomic variables on different scales and patient specific variables makes it a versatile framework for studying different mutational mechanisms. Our model can serve as the neutral null model
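
    The core of the site-specific model is a multinomial logistic regression from per-position genomic annotations to mutation-class probabilities. The sketch below fits such a model on synthetic data with three simplified classes (no mutation and two mutation types); the covariates mimic the study's top predictors (conservation, replication timing, expression), but coefficients and scales are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 20000
X = np.column_stack([
    rng.normal(0.0, 1.0, n),     # phyloP-like conservation score
    rng.uniform(0.0, 1.0, n),    # replication timing
    rng.exponential(1.0, n),     # expression level
])

# Synthetic ground truth: mutations are rare, with annotation-dependent odds.
logits = np.column_stack([np.zeros(n),
                          -4.0 + 0.5 * X[:, 1] - 0.3 * X[:, 0],
                          -4.5 + 0.2 * X[:, 2]])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=pi) for pi in p])

model = LogisticRegression(max_iter=1000).fit(X, y)   # multinomial softmax fit
print("per-class coefficients:\n", np.round(model.coef_, 2))
```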

  14. [NDVI difference rate recognition model of deciduous broad-leaved forest based on HJ-CCD remote sensing data].

    PubMed

    Wang, Yan; Tian, Qing-Jiu; Huang, Yan; Wei, Hong-Wei

    2013-04-01

    The present paper takes Chuzhou in Anhui Province as the research area and deciduous broad-leaved forest as the research object. A recognition model for deciduous broad-leaved forest was constructed using the NDVI difference rate between the leaf-expansion stage and the flowering and fruit-bearing stage, and the model was applied to HJ-CCD remote sensing images from April 1, 2012 and May 4, 2012. Finally, the spatial distribution map of deciduous broad-leaved forest was extracted effectively, and the extraction results were verified and evaluated. The results show the validity of the NDVI difference rate extraction method proposed in this paper and also verify the applicability of HJ-CCD data for vegetation classification and recognition.

  15. Phytoplankton growth rate modelling: can spectroscopic cell chemotyping be superior to physiological predictors?

    PubMed

    Fanesi, Andrea; Wagner, Heiko; Wilhelm, Christian

    2017-02-08

    Climate change has a strong impact on phytoplankton communities and water quality. However, the development of robust techniques to assess phytoplankton growth is still in progress. In this study, the growth rate of phytoplankton cells grown at different temperatures was modelled based on conventional physiological traits (e.g. chlorophyll, carbon and photosynthetic parameters) using the partial least squares regression (PLSR) algorithm, and compared with a new approach combining Fourier transform infrared spectroscopy and PLSR. In this second model, it is assumed that the macromolecular composition of phytoplankton cells represents an intracellular marker for growth. The models have comparably high predictive power (R^2 > 0.8) and low error in predicting new observations. Interestingly, not all of the predictors carry the same weight in the modelling of growth rate. A set of specific parameters, such as non-photochemical fluorescence quenching (NPQ) and the quantum yield of carbon production in the first model, and lipid, protein and carbohydrate contents in the second one, strongly covary with cell growth rate regardless of the taxonomic position of the phytoplankton species investigated. This reflects a set of specific physiological adjustments covarying with growth rate, conserved among taxonomically distant algal species, that might be used as guidelines for the improvement of modern primary production models. The high predictive power of both sets of cellular traits for growth rate is of great importance for applied phycological studies. Our approach may find application as a quality control tool for the monitoring of phytoplankton populations in natural communities or in photobioreactors. © 2017 The Author(s).
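
    The second model's structure, regressing growth rate on a macromolecular chemotype via PLSR, can be sketched with scikit-learn's PLSRegression. The "spectral" features and their relation to growth below are entirely synthetic stand-ins for FTIR-derived lipid, protein, and carbohydrate signals.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 120
traits = rng.normal(0.0, 1.0, (n, 30))        # 30 synthetic spectral features
growth = (0.6 * traits[:, 0] - 0.4 * traits[:, 5]
          + 0.3 * traits[:, 12] + rng.normal(0.0, 0.3, n))

# A few latent components usually suffice when features are collinear.
pls = PLSRegression(n_components=3)
r2 = cross_val_score(pls, traits, growth, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```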

  16. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    PubMed

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the time, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.

  17. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates

    PubMed Central

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H. Irene

    2016-01-01

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given that no previous positive test has been obtained prior to the start of the interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases, stratified by the year of HIV infection, are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate that takes into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection Metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891

  18. Multi-scale Modeling of the Impact Response of a Strain Rate Sensitive High-Manganese Austenitic Steel

    NASA Astrophysics Data System (ADS)

    Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan

    2014-09-01

    A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.

  19. Error rate performance of atmospheric laser communication based on bubble model

    NASA Astrophysics Data System (ADS)

    Xu, Ke; Wang, Jin; Li, Yan

    2009-08-01

    Free-Space Optics (FSO) can provide effective line-of-sight, wireless communication with high bandwidth over a short distance. As a promising field of wireless communication, FSO is being accepted as an alternative to the more expensive fiber-optic based solutions. Despite the advantages of FSO, atmospheric turbulence has a significant impact on a laser beam propagating through the atmospheric channel over a long distance. Turbulent eddies of various sizes and refractive indices result in intensity scintillation and phase wander, which can severely impair the quality of an FSO communication system. In this paper, a new geometrical model is used to assess the effects of turbulence on a laser beam along its propagation path. The atmosphere along the transmission path is modeled as being filled with spatially distributed spherical bubbles. The size and refractive index discontinuity of each bubble are K-distributed. This Monte Carlo technique allows us to estimate the fluctuations of intensity and phase shifts along the path. A pair of uncollimated rays arrives at the receiver through different paths, producing an optical path difference. This difference causes a delay between the two rays. At the receiver, as the two rays are superposed, the delay ultimately affects the bit decision. In the simulation, we assume that a bit error is possible when the delay exceeds half of the bit width, whereas no bit error occurs when the delay is less than half of the bit width. Based on this assumption, we calculate the BER under different conditions, and the results are further analyzed.
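
    The bit-error criterion above lends itself to a compact Monte Carlo estimate. A hedged sketch: the per-bit optical path difference is drawn from a stand-in exponential distribution rather than the paper's K-distributed bubble model, and the bit rate is illustrative.

    ```python
    # Monte Carlo BER from the half-bit-width delay criterion; the delay
    # distribution here is a stand-in, not the K-distributed bubble model.
    import numpy as np

    rng = np.random.default_rng(2)
    c = 3e8                          # speed of light (m/s)
    bit_width = 1.0 / 1e9            # 1 Gb/s link -> 1 ns bit width (illustrative)

    path_diff = rng.exponential(scale=0.05, size=1_000_000)   # metres, hypothetical
    delay = path_diff / c
    print(f"estimated BER: {np.mean(delay > bit_width / 2):.2e}")
    ```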

  20. Forecasting the mortality rates of Malaysian population using Heligman-Pollard model

    NASA Astrophysics Data System (ADS)

    Ibrahim, Rose Irnawaty; Mohd, Razak; Ngataman, Nuraini; Abrisam, Wan Nur Azifah Wan Mohd

    2017-08-01

    Actuaries, demographers and other professionals have always been aware of the critical importance of mortality forecasting due to the declining trend of mortality and continuous increases in life expectancy. The Heligman-Pollard model was introduced in 1980 and has been widely used by researchers in modelling and forecasting future mortality. This paper aims to estimate an eight-parameter model based on Heligman and Pollard's law of mortality. Since the model involves nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 7.0 (MATLAB 7.0) software will be used in order to estimate the parameters. The Statistical Package for the Social Sciences (SPSS) will be applied to forecast all the parameters using Autoregressive Integrated Moving Average (ARIMA) models. Empirical data sets for the Malaysian population of both genders over the period 1981 to 2015 will be considered, of which the period 1981 to 2010 will be used as the "training set" and the period 2011 to 2015 as the "testing set". In order to investigate the accuracy of the estimation, the forecast results will be compared against actual mortality rate data. The results show that the Heligman-Pollard model fits the male population well at all ages, while the model appears to underestimate the mortality rates for the female population at older ages.
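
    For reference, the eight-parameter Heligman-Pollard law expresses the odds of death at age x as A^((x+B)^C) + D·exp(-E·(ln(x/F))²) + G·H^x. The sketch below evaluates it with purely illustrative parameter values, not the fitted Malaysian ones.

    ```python
    # Heligman-Pollard mortality law; parameter values are illustrative only.
    import numpy as np

    def hp_qx(x, A, B, C, D, E, F, G, H):
        odds = A ** ((x + B) ** C) + D * np.exp(-E * np.log(x / F) ** 2) + G * H ** x
        return odds / (1.0 + odds)          # convert odds q/(1-q) to q

    ages = np.arange(1, 91)
    qx = hp_qx(ages, A=5e-4, B=0.02, C=0.1, D=1e-3, E=10.0, F=20.0, G=5e-5, H=1.1)
    print(qx[[0, 30, 60, 89]])              # spot-check a few ages
    ```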

  1. A multi-scale model of dislocation plasticity in α-Fe: Incorporating temperature, strain rate and non-Schmid effects

    DOE PAGES

    Lim, H.; Hale, L. M.; Zimmerman, J. A.; ...

    2015-01-05

    In this study, we develop an atomistically informed crystal plasticity finite element (CP-FE) model for body-centered-cubic (BCC) α-Fe that incorporates non-Schmid stress dependent slip with temperature and strain rate effects. Based on recent insights obtained from atomistic simulations, we propose a new constitutive model that combines a generalized non-Schmid yield law with aspects from a line tension (LT) model for describing activation enthalpy required for the motion of dislocation kinks. Atomistic calculations are conducted to quantify the non-Schmid effects while both experimental data and atomistic simulations are used to assess the temperature and strain rate effects. The parameterized constitutive equation is implemented into a BCC CP-FE model to simulate plastic deformation of single and polycrystalline Fe which is compared with experimental data from the literature. This direct comparison demonstrates that the atomistically informed model accurately captures the effects of crystal orientation, temperature and strain rate on the flow behavior of single crystal Fe. Furthermore, our proposed CP-FE model exhibits temperature and strain rate dependent flow and yield surfaces in polycrystalline Fe that deviate from conventional CP-FE models based on Schmid's law.

  2. Modeling Long-Term Corn Yield Response to Nitrogen Rate and Crop Rotation

    PubMed Central

    Puntel, Laila A.; Sawyer, John E.; Barker, Daniel W.; Dietzel, Ranae; Poffenbarger, Hanna; Castellano, Michael J.; Moore, Kenneth J.; Thorburn, Peter; Archontoulis, Sotirios V.

    2016-01-01

    Improved prediction of optimal N fertilizer rates for corn (Zea mays L.) can reduce N losses and increase profits. We tested the ability of the Agricultural Production Systems sIMulator (APSIM) to simulate corn and soybean (Glycine max L.) yields and the economic optimum N rate (EONR), using a 16-year field-experiment dataset from central Iowa, USA that included two crop sequences (continuous corn and soybean-corn) and five N fertilizer rates (0, 67, 134, 201, and 268 kg N ha-1) applied to corn. Our objectives were to: (a) quantify model prediction accuracy before and after calibration, and report calibration steps; (b) compare crop model-based techniques in estimating optimal N rate for corn; and (c) utilize the calibrated model to explain factors causing year to year variability in yield and optimal N. Results indicated that the model simulated long-term crop yield response to N well (relative root mean square error, RRMSE, of 19.6% before and 12.3% after calibration), which provided strong evidence that important soil and crop processes were accounted for in the model. The prediction of EONR was more complex and had greater uncertainty than the prediction of crop yield (RRMSE of 44.5% before and 36.6% after calibration). For long-term site mean EONR predictions, both calibrated and uncalibrated versions can be used, as the 16-year mean differences in EONRs were within the historical N rate error range (40–50 kg N ha-1). However, for accurate year-by-year simulation of EONR the calibrated version should be used. Model analysis revealed that higher EONR values in years with above normal spring precipitation were caused by an exponential increase in N loss (denitrification and leaching) with precipitation. We concluded that long-term experimental data were valuable in testing and refining APSIM predictions. The model can be used as a tool to assist N management guidelines in the US Midwest and we identified five avenues on how the model can add value toward
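
    For intuition on what an EONR computation involves, here is a minimal sketch that fits a quadratic yield response to the five N rates listed above and solves for the rate where marginal yield equals the N:grain price ratio. The yields and price ratio are made up, and APSIM derives the response mechanistically rather than from a fitted polynomial.

    ```python
    # Quadratic-response EONR sketch; yields and price ratio are hypothetical.
    import numpy as np

    n_rates = np.array([0.0, 67.0, 134.0, 201.0, 268.0])        # kg N/ha
    yields = np.array([7.2, 9.8, 11.5, 12.1, 12.2]) * 1000.0    # kg grain/ha

    b2, b1, b0 = np.polyfit(n_rates, yields, deg=2)
    price_ratio = 5.6                  # kg grain per kg N (hypothetical prices)
    eonr = (price_ratio - b1) / (2.0 * b2)   # where dY/dN equals the price ratio
    print(f"EONR ~ {eonr:.0f} kg N/ha")
    ```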

  3. Modeling Long-Term Corn Yield Response to Nitrogen Rate and Crop Rotation.

    PubMed

    Puntel, Laila A; Sawyer, John E; Barker, Daniel W; Dietzel, Ranae; Poffenbarger, Hanna; Castellano, Michael J; Moore, Kenneth J; Thorburn, Peter; Archontoulis, Sotirios V

    2016-01-01

    Improved prediction of optimal N fertilizer rates for corn (Zea mays L.) can reduce N losses and increase profits. We tested the ability of the Agricultural Production Systems sIMulator (APSIM) to simulate corn and soybean (Glycine max L.) yields and the economic optimum N rate (EONR), using a 16-year field-experiment dataset from central Iowa, USA that included two crop sequences (continuous corn and soybean-corn) and five N fertilizer rates (0, 67, 134, 201, and 268 kg N ha-1) applied to corn. Our objectives were to: (a) quantify model prediction accuracy before and after calibration, and report calibration steps; (b) compare crop model-based techniques in estimating optimal N rate for corn; and (c) utilize the calibrated model to explain factors causing year to year variability in yield and optimal N. Results indicated that the model simulated long-term crop yield response to N well (relative root mean square error, RRMSE, of 19.6% before and 12.3% after calibration), which provided strong evidence that important soil and crop processes were accounted for in the model. The prediction of EONR was more complex and had greater uncertainty than the prediction of crop yield (RRMSE of 44.5% before and 36.6% after calibration). For long-term site mean EONR predictions, both calibrated and uncalibrated versions can be used, as the 16-year mean differences in EONRs were within the historical N rate error range (40-50 kg N ha-1). However, for accurate year-by-year simulation of EONR the calibrated version should be used. Model analysis revealed that higher EONR values in years with above normal spring precipitation were caused by an exponential increase in N loss (denitrification and leaching) with precipitation. We concluded that long-term experimental data were valuable in testing and refining APSIM predictions. The model can be used as a tool to assist N management guidelines in the US Midwest and we identified five avenues on how the model can add value toward

  4. Male sexual strategies modify ratings of female models with specific waist-to-hip ratios.

    PubMed

    Brase, Gary L; Walker, Gary

    2004-06-01

    Female waist-to-hip ratio (WHR) has generally been an important general predictor of ratings of physical attractiveness and related characteristics. Individual differences in ratings do exist, however, and may be related to differences in the reproductive tactics of the male raters such as pursuit of short-term or long-term relationships and adjustments based on perceptions of one's own quality as a mate. Forty males, categorized according to sociosexual orientation and physical qualities (WHR, Body Mass Index, and self-rated desirability), rated female models on both attractiveness and likelihood they would approach them. Sociosexually restricted males were less likely to approach females rated as most attractive (with 0.68-0.72 WHR), as compared with unrestricted males. Males with lower scores in terms of physical qualities gave ratings indicating more favorable evaluations of female models with lower WHR. The results indicate that attractiveness and willingness to approach are overlapping but distinguishable constructs, both of which are influenced by variations in characteristics of the raters.

  5. On Optimizing H. 264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics

    NASA Astrophysics Data System (ADS)

    Zhu, Zhongjie; Wang, Yuer; Bai, Yongqiang; Jiang, Gangyi

    2010-12-01

    The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved in two respects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on existing physiological and psychological research findings on human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS) such as contrast sensitivity, multichannel theory, and masking effects. Experiments are conducted, and the results show that the improved algorithm can simultaneously enhance the overall subjective visual quality and effectively improve the rate control precision.
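
    The quadratic R-D model at the heart of JVT-G012-style rate control relates bits to quantization step as R = c1·MAD/Q + c2·MAD/Q². A hedged sketch of solving it for Q given a target bit budget; the coefficients and MAD value are illustrative, not trained model values.

    ```python
    # Solve the quadratic R-D model R = c1*MAD/Q + c2*MAD/Q^2 for Qstep.
    import math

    def qstep_for_bits(target_bits, mad, c1, c2):
        # target*Q^2 - c1*mad*Q - c2*mad = 0, take the positive root
        disc = (c1 * mad) ** 2 + 4.0 * target_bits * c2 * mad
        return (c1 * mad + math.sqrt(disc)) / (2.0 * target_bits)

    q = qstep_for_bits(target_bits=24000, mad=6.5, c1=1200.0, c2=28000.0)
    print(f"Qstep ~ {q:.2f}")
    ```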

  6. Temperature-dependent rate models of vascular cambium cell mortality

    Treesearch

    Matthew B. Dickinson; Edward A. Johnson

    2004-01-01

    We use two rate-process models to describe cell mortality at elevated temperatures as a means of understanding vascular cambium cell death during surface fires. In the models, cell death is caused by irreversible damage to cellular molecules that occurs at rates that increase exponentially with temperature. The models differ in whether cells show cumulative effects of...

  7. Modeling the Endogenous Sunlight Inactivation Rates of Laboratory Strain and Wastewater E. coli and Enterococci Using Biological Weighting Functions.

    PubMed

    Silverman, Andrea I; Nelson, Kara L

    2016-11-15

    Models that predict sunlight inactivation rates of bacteria are valuable tools for predicting the fate of pathogens in recreational waters and designing natural wastewater treatment systems to meet disinfection goals. We developed biological weighting function (BWF)-based numerical models to estimate the endogenous sunlight inactivation rates of E. coli and enterococci. BWF-based models allow the prediction of inactivation rates under a range of environmental conditions that shift the magnitude or spectral distribution of sunlight irradiance (e.g., different times, latitudes, water absorbances, depth). Separate models were developed for laboratory strain bacteria cultured in the laboratory and indigenous organisms concentrated directly from wastewater. Wastewater bacteria were found to be 5-7 times less susceptible to full-spectrum simulated sunlight than the laboratory bacteria, highlighting the importance of conducting experiments with bacteria sourced directly from wastewater. The inactivation rate models fit experimental data well and were successful in predicting the inactivation rates of wastewater E. coli and enterococci measured in clear marine water by researchers from a different laboratory. Additional research is recommended to develop strategies to account for the effects of elevated water pH on predicted inactivation rates.
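
    The BWF idea reduces to a weighted spectral sum: the first-order inactivation rate is k = Σ P(λ)·E(λ)·Δλ over the sunlight spectrum. A minimal sketch with a hypothetical, exponentially decaying weighting function and a made-up irradiance spectrum:

    ```python
    # BWF-weighted inactivation rate; spectrum and weights are hypothetical.
    import numpy as np

    wavelengths = np.arange(290.0, 700.0, 10.0)              # nm
    irradiance = 1.2e-3 * (wavelengths / 400.0) ** 2         # spectral irradiance, made up
    weights = 8.0 * np.exp(-(wavelengths - 290.0) / 25.0)    # BWF sensitivity, made up

    k = np.sum(weights * irradiance) * 10.0                  # 10 nm bin width
    print(f"first-order inactivation rate: {k:.3f} (units set by the assumed BWF)")
    ```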

  8. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  9. Estimation of an optimal chemotherapy utilisation rate for cancer: setting an evidence-based benchmark for quality cancer care.

    PubMed

    Jacob, S A; Ng, W L; Do, V

    2015-02-01

    There is wide variation in the proportion of newly diagnosed cancer patients who receive chemotherapy, indicating the need for a benchmark rate of chemotherapy utilisation. This study describes an evidence-based model that estimates the proportion of new cancer patients in whom chemotherapy is indicated at least once (defined as the optimal chemotherapy utilisation rate). The optimal chemotherapy utilisation rate can act as a benchmark for measuring and improving the quality of care. Models of optimal chemotherapy utilisation were constructed for each cancer site based on indications for chemotherapy identified from evidence-based treatment guidelines. Data on the proportion of patient- and tumour-related attributes for which chemotherapy was indicated were obtained, using population-based data where possible. Treatment indications and epidemiological data were merged to calculate the optimal chemotherapy utilisation rate. Monte Carlo simulations and sensitivity analyses were used to assess the effect of controversial chemotherapy indications and variations in epidemiological data on our model. Chemotherapy is indicated at least once in 49.1% (95% confidence interval 48.8-49.6%) of all new cancer patients in Australia. The optimal chemotherapy utilisation rates for individual tumour sites ranged from a low of 13% in thyroid cancers to a high of 94% in myeloma. The optimal chemotherapy utilisation rate can serve as a benchmark for planning chemotherapy services on a population basis. The model can be used to evaluate service delivery by comparing the benchmark rate with patterns of care data. The overall estimate for other countries can be obtained by substituting the relevant distribution of cancer types. It can also be used to predict future chemotherapy workload and can be easily modified to take into account future changes in cancer incidence, presentation stage or chemotherapy indications. Copyright © 2014 The Royal College of Radiologists.

  10. The Use of Leaf Functional Traits for Modeling the Timing and Rate of Canopy Development

    NASA Astrophysics Data System (ADS)

    Savoy, P.; Mackay, D. S.

    2015-12-01

    Leaves vary in their habit, with some being short lived and possessing high intrinsic photosynthetic rates and others being long lived with lower photosynthetic capacity. Longer lived leaves will thus tend to cost more to produce and be able to assimilate carbon over a longer period of time. The timing and seasonality of forest canopies is a cost-benefit strategy for the exploitation of favorable environmental conditions and avoidance of unfavorable conditions. Because of the selective pressure for plants to gather a return on leaf investment in relation to their leaf habit, we propose that there is a relationship between plant functional traits and the timing and rate of canopy development. In a recent study it was shown that errors in predicted canopy dynamics could be reduced via a single parameter (τ) which modified the timing and rate of canopy development (Savoy & Mackay 2015). If τ is related to underlying mechanisms of plant physiology, then it should vary predictably. To test this we will first examine the relationship between τ and observable biophysical variables which vary in ecologically meaningful ways. Then we will develop a model based on leaf traits which will regulate the timing and rate at which vegetation reaches peak rates of assimilation. The model will then be tested at eddy covariance sites which span a range of environmental conditions. Preliminary results demonstrate a strong relationship (R2 = 0.58) between estimated values of τ and leaf carbon to nitrogen ratio, which is important for representing the costs of leaf construction and nitrogen investment into the photosynthetic machinery of leaves. By developing a canopy seasonality model based on plant functional traits and rooted in the framework of leaf economics, it is possible to have a more flexible and generalized model. Such a model will be more adept at making predictions under novel environmental conditions than purely correlative empirical models.

  11. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Lawrence; Howard, Mike; Levesque, George; Souers, Clark

    2011-06-01

    Detonation waves in insensitive, TATB-based explosives are believed to have multi-time scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release in energy is believed to occur due to diffusion-limited growth of carbon. In the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic rate controlled. We use the thermo-chemical code CHEETAH linked to ALE hydrodynamics codes to model detonations. We term our model chemistry-resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on those concentrations. A validation suite of model simulations compared to recent high fidelity metal push experiments at ambient and cold temperatures has been developed. We present here a study of multi-time scale kinetic rate effects for these experiments. Prepared by LLNL under Contract DE-AC52-07NA27344.

  12. Word-level language modeling for P300 spellers based on discriminative graphical models

    NASA Astrophysics Data System (ADS)

    Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat

    2015-04-01

    Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
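
    The gain from word-level priors can be seen with a toy posterior computation: multiply per-letter classifier likelihoods along each vocabulary word and weight by the word prior. This naive-Bayes sketch only illustrates the idea; the paper uses a discriminative graphical model with efficient inference, not this enumeration, and the vocabulary, priors and likelihoods below are made up.

    ```python
    # Toy word-level decoding; vocabulary, priors and likelihoods are made up.
    vocab = ["yes", "no", "nod"]
    prior = {"yes": 0.5, "no": 0.3, "nod": 0.2}
    # hypothetical P(EEG evidence | letter) at each typed position
    letter_lik = [{"y": 0.6, "n": 0.4},
                  {"e": 0.5, "o": 0.5},
                  {"s": 0.55, "d": 0.45}]

    def score(word):
        p = prior[word]
        for pos, ch in enumerate(word):
            p *= letter_lik[pos].get(ch, 0.05)   # small floor for unlisted letters
        return p

    scores = {w: score(w) for w in vocab}
    total = sum(scores.values())
    print({w: round(s / total, 3) for w, s in scores.items()})
    ```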

  13. A rabbit ventricular action potential model replicating cardiac dynamics at rapid heart rates.

    PubMed

    Mahajan, Aman; Shiferaw, Yohannes; Sato, Daisuke; Baher, Ali; Olcese, Riccardo; Xie, Lai-Hua; Yang, Ming-Jim; Chen, Peng-Sheng; Restrepo, Juan G; Karma, Alain; Garfinkel, Alan; Qu, Zhilin; Weiss, James N

    2008-01-15

    Mathematical modeling of the cardiac action potential has proven to be a powerful tool for illuminating various aspects of cardiac function, including cardiac arrhythmias. However, no currently available detailed action potential model accurately reproduces the dynamics of the cardiac action potential and intracellular calcium (Ca(i)) cycling at rapid heart rates relevant to ventricular tachycardia and fibrillation. The aim of this study was to develop such a model. Using an existing rabbit ventricular action potential model, we modified the L-type calcium (Ca) current (I(Ca,L)) and Ca(i) cycling formulations based on new experimental patch-clamp data obtained in isolated rabbit ventricular myocytes, using the perforated patch configuration at 35-37 degrees C. Incorporating a minimal seven-state Markovian model of I(Ca,L) that reproduced Ca- and voltage-dependent kinetics in combination with our previously published dynamic Ca(i) cycling model, the new model replicates experimentally observed action potential duration and Ca(i) transient alternans at rapid heart rates, and accurately reproduces experimental action potential duration restitution curves obtained by either dynamic or S1S2 pacing.

  14. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2015-04-01

    Severe accidents in nuclear power plants such as the historical accident in Chernobyl 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011 have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase in regional radionuclide activity concentrations that results from nuclear accidents is subject to a multiplicity of uncertainties. One of the most significant is the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant. Precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends, amongst others, on the availability, reliability and the resolution in time and space of the observations used. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution, and therefore provide a wider basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach, which combines an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides.
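
    Stripped to its core, this kind of source-term estimation is a regularized linear inversion: dose-rate observations y relate to unknown release rates x through a model-derived source-receptor matrix M. The sketch below uses a random M and Tikhonov-regularized least squares; in practice M comes from FLEXPART runs, and all scales here are dimensionless stand-ins.

    ```python
    # Generic regularized inversion sketch; M is random here, model-derived in
    # practice, and all magnitudes are dimensionless stand-ins.
    import numpy as np

    rng = np.random.default_rng(3)
    n_obs, n_intervals = 120, 24
    M = rng.uniform(0.0, 1.0, size=(n_obs, n_intervals))   # source-receptor matrix
    x_true = rng.uniform(0.0, 1.0, size=n_intervals)       # release rate per interval
    y = M @ x_true + rng.normal(scale=0.1, size=n_obs)     # noisy dose-rate data

    lam = 1e-2                                             # Tikhonov weight
    A = np.vstack([M, np.sqrt(lam) * np.eye(n_intervals)])
    b = np.concatenate([y, np.zeros(n_intervals)])
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```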

  15. Model-Based Knowing: How Do Students Ground Their Understanding About Climate Systems in Agent-Based Computer Models?

    NASA Astrophysics Data System (ADS)

    Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.

    2017-12-01

    This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.

  16. A Minimalistic Resource Allocation Model to Explain Ubiquitous Increase in Protein Expression with Growth Rate

    PubMed Central

    Keren, Leeat; Segal, Eran; Milo, Ron

    2016-01-01

    Most proteins show changes in level across growth conditions. Many of these changes seem to be coordinated with the specific growth rate rather than the growth environment or the protein function. Although cellular growth rates, gene expression levels and gene regulation have been at the center of biological research for decades, there are only a few models giving a baseline prediction of the dependence of the proteome fraction occupied by a gene on the specific growth rate. We present a simple model that predicts a widely coordinated increase in the fraction of many proteins out of the proteome, proportionally with the growth rate. The model reveals how passive redistribution of resources, due to active regulation of only a few proteins, can have proteome-wide effects that are quantitatively predictable. Our model provides a potential explanation for why and how such a coordinated response of a large fraction of the proteome to the specific growth rate arises under different environmental conditions. The simplicity of our model can also be useful by serving as a baseline null hypothesis in the search for active regulation. We exemplify the usage of the model by analyzing the relationship between growth rate and proteome composition for the model microorganism E. coli as reflected in recent proteomics data sets spanning various growth conditions. We find that the fraction out of the proteome of a large number of proteins, and from different cellular processes, increases proportionally with the growth rate. Notably, ribosomal proteins, which have been previously reported to increase in fraction with growth rate, are only a small part of this group of proteins. We suggest that, although the fractions of many proteins change with the growth rate, such changes may be partially driven by a global effect, not necessarily requiring specific cellular control mechanisms. PMID:27073913

  17. Analytical Modeling of the High Strain Rate Deformation of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Roberts, Gary D.; Gilat, Amos

    2003-01-01

    The results presented here are part of an ongoing research program to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. State variable constitutive equations originally developed for metals have been modified in order to model the nonlinear, strain rate dependent deformation of polymeric matrix materials. To account for the effects of hydrostatic stresses, which are significant in polymers, the classical J2 plasticity theory definitions of effective stress and effective plastic strain are modified by applying variations of the Drucker-Prager yield criterion. To verify the revised formulation, the shear and tensile deformation of a representative toughened epoxy is analyzed across a wide range of strain rates (from quasi-static to high strain rates) and the results are compared to experimentally obtained values. For the analyzed polymers, both the tensile and shear stress-strain curves computed using the analytical model correlate well with values obtained through experimental tests. The polymer constitutive equations are implemented within a strength of materials based micromechanics method to predict the nonlinear, strain rate dependent deformation of polymer matrix composites. In the micromechanics, the unit cell is divided up into a number of independently analyzed slices, and laminate theory is then applied to obtain the effective deformation of the unit cell. The composite mechanics are verified by analyzing the deformation of a representative polymer matrix composite (composed using the representative polymer analyzed for the correlation of the polymer constitutive equations) for several fiber orientation angles across a variety of strain rates. The computed values compare favorably to experimentally obtained results.

  18. Cybernetic modeling based on pathway analysis for Penicillium chrysogenum fed-batch fermentation.

    PubMed

    Geng, Jun; Yuan, Jingqi

    2010-08-01

    A macrokinetic model employing cybernetic methodology is proposed to describe mycelium growth and penicillin production. Based on the primordial and complete metabolic network of Penicillium chrysogenum found in the literature, the modeling procedure is guided by metabolic flux analysis and the cybernetic modeling framework. The abstracted cybernetic model describes the transients of the consumption rates of the substrates, the assimilation rates of intermediates, the biomass growth rate, as well as the penicillin formation rate. Combined with the bioreactor model, these reaction rates are linked with the most important state variables, i.e., mycelium, substrate and product concentrations. The simplex method is used to estimate the sensitive parameters of the model. Finally, validation of the model is carried out with 20 batches of industrial-scale penicillin cultivation.

  19. Rate Theory Modeling and Simulations of Silicide Fuel at LWR Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Yinbin; Ye, Bei; Mei, Zhigang

    Uranium silicide (U3Si2) fuel has higher thermal conductivity and higher uranium density, making it a promising candidate for the accident-tolerant fuel (ATF) used in light water reactors (LWRs). However, previous studies on the fuel performance of U3Si2, including both experimental and computational approaches, have focused on the irradiation conditions in research reactors, which usually involve low operation temperatures and high fuel burnups. Thus, it is important to examine the fuel performance of U3Si2 at typical LWR conditions so as to evaluate the feasibility of replacing conventional uranium dioxide fuel with this silicide fuel material. As in-reactor irradiation experiments involve significant time and financial cost, it is appropriate to utilize modeling tools to estimate the behavior of U3Si2 in LWRs based on the available research reactor experimental references and state-of-the-art density functional theory (DFT) calculation capabilities at this early development stage. Hence, in this report, a comprehensive investigation of the fission gas swelling behavior of U3Si2 at LWR conditions is introduced. The modeling efforts mentioned in this report were based on the rate theory (RT) model of fission gas bubble evolution that has been successfully applied to a variety of fuel materials at various reactor conditions. Both existing experimental data and DFT-calculated results were used for the optimization of the parameters adopted by the RT model. Meanwhile, the fuel-cladding interaction was captured by coupling the RT model with simplified mechanical correlations. Therefore, the swelling behavior of U3Si2 fuel and its consequent interaction with cladding in LWRs was predicted by the rate theory modeling, providing valuable information for the development of U3Si2 fuel as an accident-tolerant alternative to uranium dioxide.

  20. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  1. A simulation model for the determination of tabarru' rate in a family takaful

    NASA Astrophysics Data System (ADS)

    Ismail, Hamizun bin

    2014-06-01

    The concept of tabarru' that is incorporated in family takaful serves to eliminate the element of uncertainty in the contract, as a participant agrees to relinquish, as a donation, a certain portion of his contribution. The most important feature of family takaful is that it does not guarantee a definite return on a participant's contribution, unlike its conventional counterpart, where a premium is paid in return for a guaranteed amount of insurance benefit. In other words, investment returns on the funds contributed by the participants are based on actual investment experience. The objective of this study is to set up a framework for the determination of the tabarru' rate by simulation. The model is based on a binomial death process. Specifically, a linear tabarru' rate and a flat tabarru' rate are introduced. The results of the simulation trials show that the linear assumption on the tabarru' rate has an advantage over the flat counterpart as far as the risk of the investment accumulation at maturity is concerned.

  2. Empirical Study of User Preferences Based on Rating Data of Movies

    PubMed Central

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings. PMID:26735847

  3. Empirical Study of User Preferences Based on Rating Data of Movies.

    PubMed

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings.

  4. Modeling Strain Rate Effect of Heterogeneous Materials Using SPH Method

    NASA Astrophysics Data System (ADS)

    Ma, G. W.; Wang, X. J.; Li, Q. M.

    2010-11-01

    The strain rate effect on the dynamic compressive failure of heterogeneous material based on the smoothed particle hydrodynamics (SPH) method is studied. The SPH method employs a rate-insensitive elasto-plastic damage model incorporating a Weibull distribution law to reflect the mechanical behavior of heterogeneous rock-like materials. A series of simulations are performed for heterogeneous specimens by applying axial velocity conditions, which induce different strain-rate loadings to the specimen. A detailed failure process of the specimens, in terms of microscopic crack activities and the macro-mechanical response, is discussed. Failure mechanisms between the low and high strain rate cases are compared. The result shows that the strain-rate effects on the rock strength are mainly caused by the changing internal pressure due to the inertial effects as well as the material heterogeneity. It also demonstrates that the inertial effect becomes significant only when the induced strain rate exceeds a threshold, below which the dynamic strength enhancement can be explained by the heterogeneities in the material. It also shows that the dynamic strength is affected more significantly for a relatively more heterogeneous specimen, which coincides with experimental results showing that the poor-quality specimen had a relatively larger increase in dynamic strength.
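
    The heterogeneity treatment can be mimicked by drawing particle strengths from a Weibull law, where a lower Weibull modulus m yields a more heterogeneous specimen. A small sketch with illustrative numbers:

    ```python
    # Weibull-distributed particle strengths; mean strength and moduli made up.
    import numpy as np

    rng = np.random.default_rng(4)
    mean_strength = 120.0                      # MPa, hypothetical
    for m in (3.0, 10.0, 30.0):                # low m -> more heterogeneous
        s = mean_strength * rng.weibull(m, size=100_000)
        print(f"m={m:5.1f}  mean={s.mean():6.1f} MPa  cv={s.std() / s.mean():.2f}")
    ```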

  5. The role of climate in the global patterns of ecosystem carbon turnover rates - contrasts between data and models

    NASA Astrophysics Data System (ADS)

    Carvalhais, N.; Forkel, M.; Khomik, M.; Bellarby, J.; Migliavacca, M.; Thurner, M.; Beer, C.; Jung, M.; Mu, M.; Randerson, J. T.; Saatchi, S. S.; Santoro, M.; Reichstein, M.

    2012-12-01

    The turnover rates of carbon in terrestrial ecosystems and their sensitivity to climate are instrumental properties for diagnosing the interannual variability and forecasting trends of biogeochemical processes and carbon-cycle-climate feedbacks. We propose to look globally at the spatial distribution of carbon turnover rates to explore the association between bioclimatic regimes and the rates at which carbon cycles in terrestrial ecosystems. Based on data-driven approaches to ecosystem carbon fluxes and data-based estimates of ecosystem carbon stocks, it is possible to build fully observationally supported diagnostics. These data-driven diagnostics support the benchmarking of CMIP5 model outputs (Coupled Model Intercomparison Project Phase 5) against observationally based estimates. The models' performance is addressed by confronting spatial patterns of carbon fluxes and stocks with data, as well as the global and regional sensitivities of turnover rates to climate. Our results show strong latitudinal gradients globally, mostly controlled by temperature, which are not always paralleled by CMIP5 simulations. Northern, colder regions are also where the largest differences in temperature sensitivity between models and data occur. Interestingly, there seem to be two different statistical populations in the data (some with high, others with low apparent temperature sensitivity of carbon turnover rates), with individual models seeming to describe only one population or the other. Additionally, the comparisons within bioclimatic classes can even show opposite patterns between turnover rates and temperature in water-limited regions. Overall, our analysis emphasizes the role of finding patterns and intrinsic properties instead of plain magnitudes of fluxes for diagnosing the sensitivities of terrestrial biogeochemical cycles to climate. Further, our regional analysis suggests a significant gap in addressing the partial influence of water in the ecosystem carbon
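
    The underlying diagnostic is simple: apparent whole-ecosystem carbon turnover time is the stock divided by the influx, and its climate association can be screened by regression. A sketch on synthetic data; the functional forms and coefficients below are invented for illustration, not the study's data.

    ```python
    # Turnover time = stock / flux, with a crude temperature regression;
    # all relationships below are synthetic stand-ins.
    import numpy as np

    rng = np.random.default_rng(5)
    temp = rng.uniform(-10.0, 25.0, size=500)                              # deg C
    gpp = np.maximum(0.1, 0.08 * (temp + 15.0) + rng.normal(0, 0.3, 500))  # kgC m-2 yr-1
    stock = np.exp(3.5 - 0.06 * temp + rng.normal(0, 0.3, 500))            # kgC m-2

    turnover = stock / gpp                                                 # years
    slope, _ = np.polyfit(temp, np.log(turnover), 1)
    print(f"apparent sensitivity: {slope:.3f} ln(yr) per deg C")
    ```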

  6. A Bayesian Hierarchical Modeling Scheme for Estimating Erosion Rates Under Current Climate Conditions

    NASA Astrophysics Data System (ADS)

    Lowman, L.; Barros, A. P.

    2014-12-01

    Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over the 14-year period between 1998 and 2011. We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-seconds) digital elevation map (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations and 1-2 orders of magnitude larger than most millennial and million-year timescale estimates from thermochronology and cosmogenic nuclides.
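
    The deterministic core of the framework is the stream power erosion law, E = K·A^m·S^n. A minimal sketch evaluating it over synthetic channels with a precipitation-dependent erodibility; K, the exponents and all distributions are illustrative.

    ```python
    # Stream power erosion law over synthetic channels; parameters illustrative.
    import numpy as np

    rng = np.random.default_rng(6)
    area = rng.lognormal(mean=10.0, sigma=1.5, size=1000)   # upstream area (m^2)
    slope = rng.uniform(0.01, 0.4, size=1000)               # channel slope (m/m)
    precip = rng.gamma(2.0, 0.5, size=1000)                 # annual precip (m/yr)

    K, m, n = 2e-6, 0.5, 1.0
    erosion = K * precip * area ** m * slope ** n           # erosion-rate proxy
    print(f"median erosion-rate proxy: {np.median(erosion):.4f}")
    ```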

  7. A grid of MHD models for stellar mass loss and spin-down rates of solar analogs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, O.; Drake, J. J.

    2014-03-01

    Stellar winds are believed to be the dominant factor in the spin-down of stars over time. However, stellar winds of solar analogs are poorly constrained due to observational challenges. In this paper, we present a grid of magnetohydrodynamic models to study and quantify the values of stellar mass loss and angular momentum loss rates as a function of the stellar rotation period, magnetic dipole component, and coronal base density. We derive simple scaling laws for the loss rates as a function of these parameters, and constrain the possible mass loss rate of stars with thermally driven winds. Despite the success of our scaling law in matching the results of the model, we find a deviation between the 'solar dipole' case and a real case based on solar observations that overestimates the actual solar mass loss rate by a factor of three. This implies that the model for stellar fields might require further investigation with additional complexity. Mass loss rates in general are largely controlled by the magnetic field strength, with the wind density varying in proportion to the confining magnetic pressure B^2. We also find that the mass loss rates obtained using our grid models drop much faster with the increase in rotation period than scaling laws derived using observed stellar activity. For main-sequence solar-like stars, our scaling law for angular momentum loss versus poloidal magnetic field strength retrieves the well-known Skumanich decline of angular velocity with time, Ω_* ∝ t^(-1/2), if the large-scale poloidal magnetic field scales with rotation rate as B_p ∝ Ω_*^2.

  8. Decision-case mix model for analyzing variation in cesarean rates.

    PubMed

    Eldenburg, L; Waller, W S

    2001-01-01

    This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.

  9. Arduino-based noise robust online heart-rate detection.

    PubMed

    Das, Sangita; Pal, Saurabh; Mitra, Madhuchhanda

    2017-04-01

    This paper introduces a noise-robust, real-time heart rate detection system based on electrocardiogram (ECG) data. An online data acquisition system is developed to collect ECG signals from human subjects. Heart rate is detected using a window-based autocorrelation peak localisation technique. A low-cost Arduino UNO board is used to implement the complete automated process. The performance of the system is compared with a PC-based heart rate detection technique. The accuracy of the system is validated on simulated noisy ECG data with various levels of signal to noise ratio (SNR). The mean percentage error of the detected heart rate is found to be 0.72% for the noisy database with five different noise levels.
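
    The detection principle is easy to prototype off-board. A hedged sketch on a synthetic pulse train: subtract the mean, autocorrelate a window, and convert the dominant lag within a plausible 40-200 bpm band to beats per minute. The waveform, sampling rate and band edges are illustrative; the paper's version runs on the Arduino itself with a live front end.

    ```python
    # Window-based autocorrelation heart-rate estimate on a synthetic signal.
    import numpy as np

    fs = 250.0                                   # sampling rate (Hz), illustrative
    t = np.arange(0.0, 8.0, 1.0 / fs)
    hr_true = 72.0                               # bpm used to synthesize the signal
    phase = t % (60.0 / hr_true)
    ecg = np.exp(-((phase - 0.02) ** 2) / (2 * 0.006 ** 2))       # pulse train
    ecg += 0.1 * np.random.default_rng(7).normal(size=t.size)    # additive noise

    x = ecg - ecg.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]            # autocorrelation
    lo, hi = int(fs * 60 / 200), int(fs * 60 / 40)               # 40-200 bpm band
    lag = lo + int(np.argmax(ac[lo:hi]))
    print(f"detected heart rate: {60.0 * fs / lag:.1f} bpm")
    ```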

  10. Success rate evaluation of clinical governance implementation in teaching hospitals in Kerman (Iran) based on nine steps of Karsh's model.

    PubMed

    Vali, Leila; Mastaneh, Zahra; Mouseli, Ali; Kardanmoghadam, Vida; Kamali, Sodabeh

    2017-07-01

    One of the ways to improve the quality of services in the health system is through clinical governance. This method aims to create a framework for clinical service providers to be accountable in return for continuing improvement of quality and maintaining standards of services. To evaluate the success rate of clinical governance implementation in Kerman teaching hospitals based on the 9 steps of Karsh's Model. This cross-sectional study was conducted in 2015 on 94 people including chief executive officers (CEOs), nursing managers, clinical governance managers and experts, head nurses and nurses. The required data were collected through a researcher-made questionnaire containing 38 questions with a three-point Likert scale (good, moderate, and weak). The Karsh's Model consists of nine steps including top management commitment to change, accountability for change, creating a structured approach for change, training, pilot implementation, communication, feedback, simulation, and end-user participation. Data analysis using descriptive statistics and the Mann-Whitney-Wilcoxon test was done with SPSS software version 16. About 81.9% of respondents were female and 74.5% held a Bachelor of Nursing (BN) degree. In general, the status of clinical governance implementation in the studied hospitals based on the 9 steps of the model was 44% (moderate). A significant relationship was observed between accountability and organizational position (p=0.0012) and field of study (p=0.000). There were also significant relationships between the structured approach and organizational position (p=0.007), between communication and demographic characteristics (p=0.000), and between end-user participation and organizational position (p=0.03). Clinical governance should be implemented through correct needs assessment and the participation of all stakeholders, to ensure its enforcement in practice and to enhance the quality of services.

  11. Comparison of statistical models to estimate parasite growth rate in the induced blood stage malaria model.

    PubMed

    Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise

    2017-08-25

    The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in subjects treated with a test vaccine with that in controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data were fitted using 12 statistical models: log-linear, and sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower, by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24), compared with non-fixed intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates. The analyses presented in this report confirm that fixing the intercept to the inoculum size influences parasite growth estimates. The most appropriate statistical model to estimate the growth rate of blood-stage parasites
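
    The fixed- versus free-intercept contrast is easy to reproduce on synthetic data: ordinary least squares on log10 parasitaemia against time, versus a regression forced through a (possibly mis-specified) inoculum value. All numbers below are invented.

    ```python
    # Free- vs fixed-intercept log-linear growth-rate fits; synthetic data.
    import numpy as np

    rng = np.random.default_rng(8)
    t = np.arange(0.0, 8.0)                               # days post-inoculation
    log_p = 2.0 + 0.75 * t + rng.normal(0, 0.2, t.size)   # log10 parasites/mL

    rate_free, _ = np.polyfit(t, log_p, 1)                # free intercept (OLS)

    log_inoc = 1.6                                        # assumed inoculum, mis-specified
    rate_fixed = np.sum(t * (log_p - log_inoc)) / np.sum(t * t)   # forced intercept

    print(f"free  intercept rate: {rate_free:.2f} log10/mL/day")
    print(f"fixed intercept rate: {rate_fixed:.2f} log10/mL/day")
    ```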

  12. Beyond The Blueprint: Development Of Genome-Informed Trait-Based Models For Prediction Of Microbial Dynamics And Biogeochemical Rates

    NASA Astrophysics Data System (ADS)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Johnson, J. N.; Bouskill, N.; Hug, L. A.; Thomas, B. C.; Castelle, C. J.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.

    2014-12-01

    In soils and sediments, microorganisms perform essential ecosystem services through their roles in regulating the stability of carbon, the flux of nutrients, and the purification of water. But these are complex systems in which the physical, chemical and biological components are all intimately connected. Components of this complexity are gradually being uncovered, and our understanding of the extent of microbial functional diversity in particular has been enhanced greatly by the development of cultivation-independent approaches. However, we have not moved far beyond a descriptive and correlative use of this powerful resource. As the ability to reconstruct thousands of genomes from microbial populations using metagenomic techniques gains momentum, the challenge will be to develop an understanding of how these metabolic blueprints serve to influence the fitness of organisms within these complex systems and how populations emerge and impact the physical and chemical properties of their environment. In this presentation we will discuss the development of a trait-based model of microbial activity that simulates coupled guilds of microorganisms parameterized with traits extracted from large-scale metagenomic data. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor and acceptor reactions to predict the energy available for respiration, biomass development and exo-enzyme production. Each group within a functional guild is parameterized with a unique combination of traits governing organism fitness under dynamic environmental conditions. This presentation will address our latest developments in the estimation of trait values related to growth rate and the identification and linkage of key fitness traits associated with respiratory and fermentative pathways, macromolecule depolymerization enzymes and nitrogen fixation from metagenomic data. We are testing model sensitivity to initial microbial composition and intra

  13. Uncertainty estimation with bias-correction for flow series based on rating curve

    NASA Astrophysics Data System (ADS)

    Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta

    2014-03-01

    Streamflow discharge constitutes one of the fundamental data sets required to perform water balance studies and develop hydrological models. A rating curve, designed from a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available, even though it has a significant impact on hydrological modelling. In this paper, we identify the discrepancy of the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction, which, like the traditional rating curve, takes the form of a power function. To obtain the uncertainty estimation, we propose a further both-side Box-Cox transformation to bring the regression residuals as close to a normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to gauging stations in the Flinders and Gilbert rivers in north-west Queensland, Australia.
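
    The approach lends itself to a short sketch: fit a power-function rating curve, apply a common (both-side) Box-Cox transformation to observed and fitted discharges, and perturb in the transformed space to draw ensemble members. The data values, the fixed transformation parameter lambda and the fitting details below are illustrative assumptions, not the paper's calibrated quantities.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical concurrent stage (m) and gauged discharge (m^3/s) pairs.
stage = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.3, 4.1])
discharge = np.array([5.2, 12.0, 27.5, 58.0, 110.0, 205.0, 340.0])

def rating(h, a, h0, b):
    """Traditional power-function rating curve Q = a*(h - h0)**b."""
    return a * (h - h0) ** b

(a, h0, b), _ = curve_fit(rating, stage, discharge, p0=[10.0, 0.3, 2.0],
                          bounds=([0.01, -2.0, 0.5], [1e4, 0.79, 5.0]))

# Both-side Box-Cox: transform observed and fitted discharges with a common
# lambda so residuals are closer to Gaussian, then perturb in transformed
# space and back-transform to build one ensemble member.
lam = 0.3  # assumed transformation parameter; in practice estimated from data
z_obs = (discharge**lam - 1.0) / lam
z_fit = (rating(stage, a, h0, b)**lam - 1.0) / lam
sigma = np.std(z_obs - z_fit, ddof=1)

z_sample = z_fit + np.random.normal(0.0, sigma, size=z_fit.size)
q_sample = (lam * z_sample + 1.0) ** (1.0 / lam)  # back-transformed discharges
```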

  14. Improved community model for social networks based on social mobility

    NASA Astrophysics Data System (ADS)

    Lu, Zhe-Ming; Wu, Zhen; Luo, Hao; Wang, Hao-Xian

    2015-07-01

    This paper proposes an improved community model for social networks based on social mobility. The relationship between the group distribution and the community size is investigated in terms of communication rate and turnover rate. The degree distributions, clustering coefficients, average distances and diameters of networks are analyzed. Experimental results demonstrate that the proposed model possesses the small-world property and can reproduce social networks effectively and efficiently.

  15. An Agent-Based Modeling Approach for Determining Corn Stover Removal Rate and Transboundary Effects

    NASA Astrophysics Data System (ADS)

    Gan, Jianbang; Langeveld, J. W. A.; Smith, C. T.

    2014-02-01

    Bioenergy production involves different agents with potentially different objectives, and an agent's decision often has transboundary impacts on other agents along the bioenergy value chain. Understanding and estimating these transboundary impacts is essential to portraying the interactions among the different agents and to finding the optimal configuration of the bioenergy value chain. We develop an agent-based model to mimic the decision making of feedstock producers and feedstock-to-biofuel conversion plant operators, and propose multipliers (i.e., ratios of economic values accruing to different segments and associated agents in the value chain) for assessing the transboundary impacts. Our approach is generic and thus applicable to a variety of bioenergy production systems at different sites and geographic scales. We apply it to the case of producing ethanol from corn stover in Iowa, USA. The results from the case study indicate that the stover removal rate is site specific and varies considerably with soil type, as well as with other factors such as stover price and harvesting cost. In addition, ethanol production using corn stover in the study region would have strong positive ripple effects, with the values of the multipliers varying with the greenhouse gas price and the national energy security premium. The relatively high multiplier values suggest that a large portion of the value associated with corn stover ethanol production would accrue to the downstream end of the value chain rather than to stover producers.

  16. Rate of Learning Models, Mental Models, and Item Response Theory

    NASA Astrophysics Data System (ADS)

    Pritchard, David E.; Lee, Y.; Bao, L.

    2006-12-01

    We present three learning models that make different assumptions about how the rate of a student's learning depends on the amount that they know already. These are motivated by the mental models of Tabula Rasa, Constructivist, and Tutoring theories. These models predict the postscore for a given prescore after a given period of instruction. Constructivist models show a close connection with Item Response Theory. Comparison with data from both Hake and MIT shows that the Tabula Rasa models not only fit incomparably better, but fit the MIT data within error across a wide range of pretest scores. We discuss the implications of this finding.
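
    One plausible formalization of the three hypotheses, assumed here for illustration rather than taken from the paper, treats the knowledge fraction k in [0, 1] as obeying a different rate law under each mental model:

```python
import numpy as np

# Assumed rate laws for the three mental models (not the paper's equations):
#   Tabula Rasa:    dk/dt = c          (rate independent of prior knowledge)
#   Constructivist: dk/dt = c * k      (the more you know, the faster you learn)
#   Tutoring:       dk/dt = c * (1-k)  (instruction fills the remaining gaps)
def postscore(pre, c, t=1.0, model="tabula_rasa"):
    if model == "tabula_rasa":
        return np.minimum(pre + c * t, 1.0)
    if model == "constructivist":
        return np.minimum(pre * np.exp(c * t), 1.0)
    if model == "tutoring":
        return 1.0 - (1.0 - pre) * np.exp(-c * t)
    raise ValueError(model)

pre = np.linspace(0.2, 0.8, 4)  # hypothetical pretest scores
for m in ("tabula_rasa", "constructivist", "tutoring"):
    print(m, np.round(postscore(pre, 0.3, model=m), 2))
```

    Each rate law implies a different postscore-versus-prescore curve after a fixed period of instruction, which is what allows data such as Hake's to discriminate among the models.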

  17. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
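
    The following toy sketch shows adaptive kernel smoothing in the spirit described above: each epicenter is smoothed with a Gaussian whose bandwidth is its distance to a k-th nearest neighbor, with a floor distance standing in for the stabilizing modification mentioned for low-seismicity regions. Coordinates, k and the floor value are invented for illustration.

```python
import numpy as np

def adaptive_smoothed_rate(epicenters, grid, k=3, d_min=5.0):
    """Adaptive smoothed-seismicity rate: each earthquake is spread with a
    2-D Gaussian kernel whose width is the distance (km) to its k-th nearest
    neighbor, floored at d_min to stabilize sparse regions."""
    rates = np.zeros(len(grid))
    for x in epicenters:
        d = np.sort(np.linalg.norm(epicenters - x, axis=1))  # d[0] is self
        h = max(d[min(k, len(d) - 1)], d_min)                # adaptive bandwidth
        r2 = np.sum((grid - x) ** 2, axis=1)
        rates += np.exp(-r2 / (2 * h**2)) / (2 * np.pi * h**2)
    return rates / rates.sum()                               # spatial PDF

# Hypothetical epicenters and forecast grid in projected km coordinates.
quakes = np.random.rand(50, 2) * 500.0
cells = np.stack(np.meshgrid(np.arange(0, 500, 25.0),
                             np.arange(0, 500, 25.0)), -1).reshape(-1, 2)
pdf = adaptive_smoothed_rate(quakes, cells)
```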

  18. Statistical inference, the bootstrap, and neural-network modeling with application to foreign exchange rates.

    PubMed

    White, H; Racine, J

    2001-01-01

    We propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural-network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.
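
    A schematic of the general idea, not the authors' exact procedure: compare a network's fit with and without the candidate input, and build a null distribution for that improvement by resampling with the candidate input permuted. The data, network size and resampling scheme below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical data: the target depends on input 0 only; we test input 1.
X = rng.normal(size=(300, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

def fit_mse(X, y):
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    return np.mean((net.fit(X, y).predict(X) - y) ** 2)

# Test statistic: in-sample fit improvement from including input 1.
stat = fit_mse(X[:, [0]], y) - fit_mse(X, y)

# Null distribution via pairs bootstrap with the candidate input permuted,
# destroying any true association between input 1 and the target.
null = []
for _ in range(19):  # small B for illustration; use hundreds in practice
    idx = rng.integers(0, len(y), len(y))
    Xb, yb = X[idx].copy(), y[idx]
    Xb[:, 1] = rng.permutation(Xb[:, 1])
    null.append(fit_mse(Xb[:, [0]], yb) - fit_mse(Xb, yb))
p_value = (1 + np.sum(np.array(null) >= stat)) / (1 + len(null))
print(f"irrelevance test p-value for input 1: {p_value:.2f}")
```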

  19. Implications of two Holocene time-dependent geomagnetic models for cosmogenic nuclide production rate scaling

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel

    2016-01-01

    The geomagnetic field is a major influence on in situ cosmogenic nuclide production rates at a given location (in addition to atmospheric pressure and, to a lesser extent, solar modulation effects). A better understanding of how past fluctuations in these influences affected production rates should allow more accurate application of cosmogenic nuclides. As such, this work explores the cosmogenic nuclide production rate scaling implications of two recent time-dependent spherical harmonic geomagnetic models spanning the Holocene. Korte and Constable (2011, Phys. Earth Planet. Inter. 188, 247-259) and Korte et al. (2011, Earth Planet. Sci. Lett. 312, 497-505) recently updated earlier spherical harmonic paleomagnetic models with new paleomagnetic data from sediment cores in addition to new archeomagnetic and volcanic data. These updated models offer improved resolution and accuracy over the previous versions, in part due to increased temporal and spatial data coverage. In addition, Pavón-Carrasco et al. (2014, Earth Planet. Sci. Lett. 388, 98-109) developed another time-dependent spherical harmonic model of the Holocene geomagnetic field, based solely on archeomagnetic and volcanic paleomagnetic data from the same underlying paleomagnetic database as the Korte et al. models, but extending to 14 ka. With the new models as input, trajectory-traced estimates of effective vertical cutoff rigidity (RC, the standard method for ordering cosmic ray data) yield significantly different time-integrated scaling predictions when compared to each other and to results using the earlier models. In addition, predictions of each new model using RC are tested empirically using recently published production rate calibration data for both ¹⁰Be and ³He, and compared to predictions using corresponding time-varying geocentric dipolar RC formulations and a static geocentric axial dipole (GAD) model. Results for the few calibration sites from geomagnetically sensitive regions suggest that the

  20. Identifiability of altimetry-based rating curve parameters in function of river morphological parameters

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; André Garambois, Pierre; Calmant, Stéphane; Paiva, Rodrigo; Walter, Collischonn; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Bonnet, Marie-Paule; Seyler, Frédérique; Monnier, Jérôme

    2016-04-01

    Estimating river discharge for ungauged river reaches from satellite measurements is not straightforward, given the nonlinearity of flow behavior with respect to measurable and non-measurable hydraulic parameters. As a matter of fact, current satellite datasets do not give access to key parameters such as river bed topography and roughness. A unique set of almost one thousand altimetry-based rating curves was built by fitting ENVISAT and Jason-2 water stages to discharges obtained from the MGB-IPH rainfall-runoff model in the Amazon basin. These rated discharges were successfully validated against simulated discharges (Ens = 0.70) and in-situ discharges (Ens = 0.71) and are not mission-dependent. The rating curve takes the form Q = a(Z-Z0)^b sqrt(S), with Z the water surface elevation and S its slope, both gained from satellite altimetry, a and b the power law coefficient and exponent, and Z0 the river bed elevation such that Q(Z0) = 0. For several river reaches in the Amazon basin where ADCP measurements are available, the Z0 values are fairly well validated, with a relative error lower than 10%. The present contribution aims at relating the identifiability and the physical meaning of a, b and Z0 given various hydraulic and geomorphologic conditions. Synthetic river bathymetries sampling a wide range of rivers and inflow discharges are used to perform twin experiments. A shallow water model is run to generate synthetic satellite observations, and rating curve parameters are then determined for each river section with an MCMC algorithm. These twin experiments show that a rating curve formulation including the water surface slope, i.e. closer to the Manning equation form, improves parameter identifiability. The compensation between parameters is limited, especially for reaches with little water surface variability. Rating curve parameters are analyzed for riffles and pools, for small to large rivers, and for different river slopes and cross-section shapes. It is shown that the river bed
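
    A compact random-walk Metropolis sketch for the rating curve Q = a(Z-Z0)^b sqrt(S) is given below; the stages, slopes, discharges, priors, starting point and step sizes are all invented for illustration and stand in for the study's MCMC setup.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical altimetric stages Z (m), slopes S (-) and model discharges Q.
Z = np.array([10.2, 11.0, 12.1, 13.4, 14.0])
S = np.array([4.0e-5, 5.0e-5, 6.0e-5, 7.0e-5, 7.5e-5])
Q = np.array([800.0, 1400.0, 2600.0, 4600.0, 5800.0])

def log_post(theta):
    a, b, z0, sig = theta
    if a <= 0 or b <= 0 or sig <= 0 or z0 >= Z.min():
        return -np.inf                      # flat priors with support bounds
    q_mod = a * (Z - z0) ** b * np.sqrt(S)
    resid = np.log(Q) - np.log(q_mod)       # multiplicative (log) errors
    return -0.5 * np.sum(resid**2) / sig**2 - len(Q) * np.log(sig)

theta = np.array([3.0e4, 1.8, 8.0, 0.2])    # starting point
chain = []
for _ in range(20000):                      # random-walk Metropolis
    prop = theta + rng.normal(0.0, [1500.0, 0.05, 0.1, 0.02])
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta.copy())
post = np.array(chain[5000:])               # posterior sample after burn-in
print(post.mean(axis=0))                    # point estimates of a, b, Z0, sigma
```

    The posterior spread of Z0 directly yields the credibility interval that twin experiments use to judge identifiability.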

  1. Estimating Rates of Permafrost Degradation and their Impact on Ecosystems across Alaska and Northwest Canada using the Process-based Permafrost Dynamics Model GIPL as a Component of the Integrated Ecosystem Model (IEM)

    NASA Astrophysics Data System (ADS)

    Marchenko, S. S.; Genet, H.; Euskirchen, E. S.; Breen, A. L.; McGuire, A. D.; Rupp, S. T.; Romanovsky, V. E.; Bolton, W. R.; Walsh, J. E.

    2016-12-01

    The impact of climate warming on permafrost and the potential for climate feedbacks resulting from permafrost thawing have recently received a great deal of attention. Permafrost temperature has increased at most locations in the Arctic and Sub-Arctic during the past 30-40 years, with a typical increase of 1-3°C. The process-based permafrost dynamics model GIPL, developed in the Geophysical Institute Permafrost Lab and serving as the permafrost module of the Integrated Ecosystem Model (IEM), has been used to quantify the nature and rate of permafrost degradation and its impact on ecosystems, infrastructure, CO2 and CH4 fluxes, and net C storage following permafrost thaw across Alaska and Northwest Canada. The IEM project is a multi-institutional and multi-disciplinary effort aimed at understanding potential landscape, habitat and ecosystem change across the IEM domain. The project also ties three scientific models together, the Terrestrial Ecosystem Model (TEM), ALFRESCO (ALaska FRame-based EcoSystem Code) and GIPL, so that they exchange data at run-time. The models produce forecasts of future fire, vegetation, organic matter, permafrost and hydrology regimes. The climate forcing data are based on the historical CRU3.1 data set for the retrospective analysis period (1901-2009) and the CMIP3 CCCMA-CGCM3.1 and MPI-ECHAM5/MPI-OM climate models for the future period (2009-2100). All data sets were downscaled to a 1 km resolution using a differencing methodology (i.e., a delta method) and the Parameter-elevation Regressions on Independent Slopes Model (PRISM) climatology. We estimated the dynamics of permafrost temperature, active layer thickness, area occupied by permafrost, and volume of thawed soils across the IEM domain. The modeling results indicate how different types of ecosystems affect the thermal state of permafrost and its stability. Although the rate of soil warming and permafrost degradation in peatland areas are slower than

  2. Individual-Based Completion Rates for Apprentices. Technical Paper

    ERIC Educational Resources Information Center

    Karmel, Tom

    2011-01-01

    Low completion rates for apprentices and trainees have received considerable attention recently and it has been argued that NCVER seriously understates completion rates. In this paper Tom Karmel uses NCVER data on recommencements to estimate individual-based completion rates. It is estimated that around one-quarter of trade apprentices swap…

  3. MicroRNAfold: pre-microRNA secondary structure prediction based on modified NCM model with thermodynamics-based scoring strategy.

    PubMed

    Han, Dianwei; Zhang, Jun; Tang, Guiliang

    2012-01-01

    An accurate prediction of pre-microRNA secondary structure is important in miRNA informatics. Based on nucleotide cyclic motifs (NCM), a recently proposed model for predicting RNA secondary structure, we propose and implement a Modified NCM (MNCM) model with a physics-based scoring strategy to tackle the problem of pre-microRNA folding. Our microRNAfold is implemented using a global optimal algorithm built on bottom-up local optimal solutions. Our experimental results show that microRNAfold outperforms the current leading prediction tools in terms of true negative rate, false negative rate, specificity, and Matthews correlation coefficient.

  4. High rate constitutive modeling of aluminium alloy tube

    NASA Astrophysics Data System (ADS)

    Salisbury, C. P.; Worswick, M. J.; Mayer, R.

    2006-08-01

    As the need for fuel-efficient automobiles increases, car designers are investigating light-weight materials for automotive bodies that will reduce the overall automobile weight. Aluminium alloy tube is a desirable material for automotive bodies due to its light weight. However, aluminium suffers from lower formability than steel, and its energy absorption ability in a crash event after a forming operation is largely unknown. As part of a larger study on the relationship between crashworthiness and forming processes, constitutive models for 3 mm AA5754 aluminium tube were developed. A nominal strain rate of 100/s is often used to characterize overall automobile crash events, whereas strain rates on the order of 1000/s can occur locally. Therefore, tests were performed at quasi-static rates using an Instron test fixture and at strain rates of 500/s to 1500/s using a tensile split Hopkinson bar. High rate testing was conducted at rates of 500/s, 1000/s and 1500/s at 21°C, 150°C and 300°C. The generated data were then used to determine the constitutive parameters for the Johnson-Cook and Zerilli-Armstrong material models.
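
    For reference, the Johnson-Cook flow stress mentioned above has the standard form sigma = (A + B*eps^n)(1 + C*ln(eps_dot/eps0))(1 - T*^m); the sketch below evaluates it with illustrative constants, not the fitted AA5754 parameters from this work.

```python
import numpy as np

def johnson_cook(strain, strain_rate, T, A=250.0, B=300.0, n=0.35,
                 C=0.015, m=1.1, eps0=1.0, T_room=294.0, T_melt=880.0):
    """Johnson-Cook flow stress (MPa) for an AA5754-like alloy.
    A, B, n, C, m are illustrative values, not the paper's fitted constants."""
    T_star = (T - T_room) / (T_melt - T_room)           # homologous temperature
    return ((A + B * strain**n)                          # strain hardening
            * (1.0 + C * np.log(strain_rate / eps0))     # strain-rate hardening
            * (1.0 - T_star**m))                         # thermal softening

# Flow stress at 10% strain, 1000/s, 150 °C (423 K):
print(johnson_cook(0.10, 1000.0, 423.0))
```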

  5. Rate and timing cues associated with the cochlear amplifier: level discrimination based on monaural cross-frequency coincidence detection.

    PubMed

    Heinz, M G; Colburn, H S; Carney, L H

    2001-10-01

    The perceptual significance of the cochlear amplifier was evaluated by predicting level-discrimination performance based on stochastic auditory-nerve (AN) activity. Performance was calculated for three models of processing: the optimal all-information processor (based on discharge times), the optimal rate-place processor (based on discharge counts), and a monaural coincidence-based processor that uses a non-optimal combination of rate and temporal information. An analytical AN model included compressive magnitude and level-dependent-phase responses associated with the cochlear amplifier, and high-, medium-, and low-spontaneous-rate (SR) fibers with characteristic frequencies (CFs) spanning the AN population. The relative contributions of nonlinear magnitude and nonlinear phase responses to level encoding were compared by using four versions of the model, which included and excluded the nonlinear gain and phase responses in all possible combinations. Nonlinear basilar-membrane (BM) phase responses are robustly encoded in near-CF AN fibers at low frequencies. Strongly compressive BM responses at high frequencies near CF interact with the high thresholds of low-SR AN fibers to produce large dynamic ranges. Coincidence performance based on a narrow range of AN CFs was robust across a wide dynamic range at both low and high frequencies, and matched human performance levels. Coincidence performance based on all CFs demonstrated the "near-miss" to Weber's law at low frequencies and the high-frequency "mid-level bump." Monaural coincidence detection is a physiologically realistic mechanism that is extremely general in that it can utilize AN information (average-rate, synchrony, and nonlinear-phase cues) from all SR groups.

  6. The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification

    ERIC Educational Resources Information Center

    Petrov, Alexander A.; Anderson, John R.

    2005-01-01

    A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…

  7. A Systems Modeling Approach to Forecast Corn Economic Optimum Nitrogen Rate.

    PubMed

    Puntel, Laila A; Sawyer, John E; Barker, Daniel W; Thorburn, Peter J; Castellano, Michael J; Moore, Kenneth J; VanLoocke, Andrew; Heaton, Emily A; Archontoulis, Sotirios V

    2018-01-01

    Historically, crop models have been used to evaluate crop yield responses to nitrogen (N) rates after harvest, when it is too late for the farmers to make in-season adjustments. We hypothesize that the use of a crop model as an in-season forecast tool will improve current N decision-making. To explore this, we used the Agricultural Production Systems sIMulator (APSIM) calibrated with long-term experimental data for central Iowa, USA (16 years in continuous corn and 15 years in soybean-corn rotation) combined with actual weather data up to a specific crop stage and historical weather data thereafter. The objectives were to: (1) evaluate the accuracy and uncertainty of corn yield and economic optimum N rate (EONR) predictions at four forecast times (planting time, 6th and 12th leaf, and silking phenological stages); (2) determine whether the use of analogous historical weather years based on precipitation and temperature patterns as opposed to using a 35-year dataset could improve the accuracy of the forecast; and (3) quantify the value added by the crop model in predicting annual EONR and yields using the site-mean EONR and the yield at the EONR to benchmark predicted values. Results indicated that the mean corn yield predictions at planting time (R2 = 0.77) using 35 years of historical weather were close to the observed and predicted yield at maturity (R2 = 0.81). Across all forecasting times, the EONR predictions were more accurate in corn-corn than soybean-corn rotation (relative root mean square error, RRMSE, of 25 vs. 45%, respectively). At planting time, the APSIM model predicted the direction of optimum N rates (above, below or at average site-mean EONR) in 62% of the cases examined (n = 31) with an average error range of ±38 kg N ha−1 (22% of the average N rate). Across all forecast times, prediction error of EONR was about three times higher than yield predictions. The use of the 35-year weather record was better than using selected historical weather

  8. A Systems Modeling Approach to Forecast Corn Economic Optimum Nitrogen Rate

    PubMed Central

    Puntel, Laila A.; Sawyer, John E.; Barker, Daniel W.; Thorburn, Peter J.; Castellano, Michael J.; Moore, Kenneth J.; VanLoocke, Andrew; Heaton, Emily A.; Archontoulis, Sotirios V.

    2018-01-01

    Historically crop models have been used to evaluate crop yield responses to nitrogen (N) rates after harvest when it is too late for the farmers to make in-season adjustments. We hypothesize that the use of a crop model as an in-season forecast tool will improve current N decision-making. To explore this, we used the Agricultural Production Systems sIMulator (APSIM) calibrated with long-term experimental data for central Iowa, USA (16-years in continuous corn and 15-years in soybean-corn rotation) combined with actual weather data up to a specific crop stage and historical weather data thereafter. The objectives were to: (1) evaluate the accuracy and uncertainty of corn yield and economic optimum N rate (EONR) predictions at four forecast times (planting time, 6th and 12th leaf, and silking phenological stages); (2) determine whether the use of analogous historical weather years based on precipitation and temperature patterns as opposed to using a 35-year dataset could improve the accuracy of the forecast; and (3) quantify the value added by the crop model in predicting annual EONR and yields using the site-mean EONR and the yield at the EONR to benchmark predicted values. Results indicated that the mean corn yield predictions at planting time (R2 = 0.77) using 35-years of historical weather was close to the observed and predicted yield at maturity (R2 = 0.81). Across all forecasting times, the EONR predictions were more accurate in corn-corn than soybean-corn rotation (relative root mean square error, RRMSE, of 25 vs. 45%, respectively). At planting time, the APSIM model predicted the direction of optimum N rates (above, below or at average site-mean EONR) in 62% of the cases examined (n = 31) with an average error range of ±38 kg N ha−1 (22% of the average N rate). Across all forecast times, prediction error of EONR was about three times higher than yield predictions. The use of the 35-year weather record was better than using selected historical weather years
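
    Since both records describe the same analysis, one worked example may help fix ideas: under an assumed quadratic yield response, the economic optimum N rate has a closed form where the marginal value of yield equals the fertilizer price. The response coefficients and prices below are hypothetical.

```python
# Economic optimum N rate (EONR) from a quadratic yield response:
#   yield(N) = b0 + b1*N + b2*N^2, with b2 < 0.
# At the optimum, corn_price * (b1 + 2*b2*N) = n_price, so
#   EONR = (n_price/corn_price - b1) / (2*b2).
b0, b1, b2 = 8000.0, 25.0, -0.055   # illustrative response (kg grain/ha)
corn_price = 0.15                    # $/kg grain (assumed)
n_price = 0.9                        # $/kg N (assumed)

eonr = (n_price / corn_price - b1) / (2.0 * b2)
yield_at_eonr = b0 + b1 * eonr + b2 * eonr**2
print(f"EONR = {eonr:.0f} kg N/ha, yield = {yield_at_eonr:.0f} kg/ha")
```

    In a forecasting setting, the coefficients b0, b1, b2 would be re-estimated from crop-model runs that splice observed weather to date with historical weather thereafter, which is what makes the EONR prediction update through the season.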

  9. A Microstructure-Based Constitutive Model for Superplastic Forming

    NASA Astrophysics Data System (ADS)

    Jafari Nedoushan, Reza; Farzin, Mahmoud; Mashayekhi, Mohammad; Banabic, Dorel

    2012-11-01

    A constitutive model is proposed for simulations of hot metal forming processes. The model is constructed from the dominant mechanisms that take part in hot forming: intergranular deformation, grain boundary sliding, and grain boundary diffusion. A Taylor-type polycrystalline model is used to predict intergranular deformation. Previous works on grain boundary sliding and grain boundary diffusion are extended to derive three-dimensional macroscopic stress-strain rate relationships for each mechanism. In these relationships, the effect of grain size is also taken into account. The proposed model is first used to simulate step strain-rate tests, and the results are compared with experimental data. It is shown that the model can be used to predict flow stresses for various grain sizes and strain rates. The yield locus is then predicted for multiaxial stress states, and it is observed to be very close to the von Mises yield criterion. It is also shown that the proposed model can be directly used to simulate hot forming processes. A bulge forming process and gas pressure tray forming are simulated, and the results are compared with experimental data.

  10. Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory

    NASA Astrophysics Data System (ADS)

    Yan, Daqin; Wang, Fuzhong; Wang, Shuo

    2017-12-01

    Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in adverse channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, aimed at reducing the bit error rate of 2DPSK signals under coherent demodulation. According to SR theory, a nonlinear receiver model is established to receive 2DPSK signals under small signal-to-noise ratio (SNR) conditions (between -15 dB and 5 dB), and it is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared with the conventional model: it is reduced by 86.15% when the input SNR equals -7 dB, and the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more easily detected and the accuracy can be greatly improved.
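
    A minimal sketch of a bistable stochastic-resonance front end of the kind described, assuming the classic overdamped double-well dynamics dx/dt = a*x - b*x^3 + input; the waveform, parameters and the simplified phase mapping are illustrative, not the paper's receiver design.

```python
import numpy as np

a, b = 1.0, 1.0                            # double-well shape parameters
fs, f0 = 1000.0, 5.0                       # sample rate, carrier frequency (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
bits = np.repeat([0, 1, 1, 0], len(t) // 4)
phase = np.pi * bits                       # simplified differential phase map
signal = 0.3 * np.cos(2 * np.pi * f0 * t + phase)
noisy = signal + 1.0 * np.random.randn(len(t))  # weak signal, strong noise

# Euler integration of the bistable system driven by the noisy waveform;
# for suitable (a, b) the noise helps the state hop coherently with the
# signal, raising the output SNR (stochastic resonance).
x = np.zeros(len(t))
dt = 1.0 / fs
for i in range(len(t) - 1):
    x[i + 1] = x[i] + dt * (a * x[i] - b * x[i] ** 3 + noisy[i])
```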

  11. SCS-CN based time-distributed sediment yield model

    NASA Astrophysics Data System (ADS)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

    A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of the rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using data from seven watersheds in India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential for field application.
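
    For reference, the SCS-CN rainfall-excess computation at the core of the model follows the standard form below; the curve number, storm depth and the usual initial abstraction ratio of 0.2 are illustrative conventions, not values from the paper.

```python
def scs_cn_rainfall_excess(P, CN):
    """SCS-CN rainfall excess (mm): Q = (P - Ia)^2 / (P - Ia + S),
    with potential maximum retention S = 25400/CN - 254 (mm) and the
    customary initial abstraction Ia = 0.2*S."""
    S = 25400.0 / CN - 254.0
    Ia = 0.2 * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# A 60 mm storm on a watershed with curve number 75 yields ~14.5 mm of excess:
print(scs_cn_rainfall_excess(60.0, 75))
```

    The paper's sediment-excess then follows from the analogous proportionality with the potential maximum erosion A, the A/S ratio being treated as a watershed constant.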

  12. Patient navigation based on predictive modeling decreases no-show rates in cancer care.

    PubMed

    Percac-Lima, Sanja; Cronin, Patrick R; Ryan, David P; Chabner, Bruce A; Daly, Emily A; Kimball, Alexandra B

    2015-05-15

    Patient adherence to appointments is key to improving outcomes in health care. "No-show" appointments contribute to suboptimal resource use. Patient navigation and telephone reminders have been shown to improve cancer care and adherence, particularly in disadvantaged populations, but may not be cost-effective if not targeted at the appropriate patients. In 5 clinics within a large academic cancer center, patients who were considered to be likely (the top 20th percentile) to miss a scheduled appointment without contacting the clinic ahead of time ("no-shows") were identified using a predictive model and then randomized to an intervention versus a usual-care group. The intervention group received telephone calls from a bilingual patient navigator 7 days before and 1 day before the appointment. Over a 5-month period, of the 40,075 appointments scheduled, 4425 patient appointments were deemed to be at high risk of a "no-show" event. After the patient navigation intervention, the no-show rate in the intervention group was 10.2% (167 of 1631), compared with 17.5% in the control group (280 of 1603) (P<.001). Reaching a patient or family member was associated with a significantly lower no-show rate (5.9% and 3.0%, respectively; P<.001 and .006, respectively) compared with leaving a message (14.7%: P = .117) or no contact (no-show rate, 21.6%: P = .857). Telephone navigation targeted at those patients predicted to be at high risk of visit nonadherence was found to effectively and substantially improve patient adherence to cancer clinic appointments. Further studies are needed to determine the long-term impact on patient outcomes, but short-term gains in the optimization of resources can be recognized immediately. © 2015 American Cancer Society.

  13. A Case-Based Learning Model in Orthodontics.

    ERIC Educational Resources Information Center

    Engel, Francoise E.; Hendricson, William D.

    1994-01-01

    A case-based, student-centered instructional model designed to mimic orthodontic problem solving and decision making in dental general practice is described. Small groups of students analyze case data, then record and discuss their diagnoses and treatments. Students and instructors rated the seminars positively, and students reported improved…

  14. Modeling the intracellular pathogen-immune interaction with cure rate

    NASA Astrophysics Data System (ADS)

    Dubey, Balram; Dubey, Preeti; Dubey, Uma S.

    2016-09-01

    Many common and emergent infectious diseases like influenza, SARS, hepatitis and Ebola are caused by viral pathogens. These infections can be controlled or prevented by understanding the dynamics of the pathogen-immune interaction in vivo. In this paper, the interaction of pathogens with uninfected and infected cells, in the presence or absence of an immune response, is considered in four different cases. In the first case, the model considers a saturated nonlinear infection rate and linear cure rate, without absorption of pathogens into uninfected cells and without immune response. The next model considers the effect of absorption of pathogens into uninfected cells, while all other terms are the same as in the first case. The third model incorporates an innate immune response, a humoral immune response and a cytotoxic T lymphocyte (CTL) mediated immune response, with cure rate and without absorption of pathogens into uninfected cells. The last model extends the third model by considering the absorption of pathogens into uninfected cells. Positivity and boundedness of solutions are established to ensure the well-posedness of the problem. All four models have two equilibria: a pathogen-free equilibrium point and a pathogen-present equilibrium point. In each case, the stability of each equilibrium point is investigated. The pathogen-free equilibrium is globally asymptotically stable when the basic reproduction number is less than or equal to unity, implying that control or prevention of the infection is independent of the initial concentrations of uninfected cells, infected cells, pathogens and immune responses in the body. The proposed models show that the introduction of an immune response and a cure rate strongly affects the stability behavior of the system. Further, on computing the basic reproduction number, it is found to be lowest for the fourth model vis-a-vis the other models. The analytical findings of each model have been exemplified by
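
    A minimal sketch of the first model variant (saturated nonlinear infection rate, linear cure rate, no absorption, no immune response), with assumed notation and parameter values chosen only for illustration:

```python
from scipy.integrate import solve_ivp

def model(t, u, lam=10.0, d=0.1, beta=0.01, alpha=0.05,
          delta=0.2, mu=0.5, k=20.0, c=3.0):
    """Saturated infection rate beta*x*v/(1 + alpha*v) and linear cure
    rate delta*y (infected cells revert to the uninfected pool).
    All parameter values are illustrative, not taken from the paper."""
    x, y, v = u                                   # uninfected, infected, pathogen
    infection = beta * x * v / (1.0 + alpha * v)
    dx = lam - d * x - infection + delta * y      # uninfected cells
    dy = infection - mu * y - delta * y           # infected cells
    dv = k * y - c * v                            # free pathogens
    return [dx, dy, dv]

sol = solve_ivp(model, (0, 200), [100.0, 0.0, 1.0])
print(sol.y[:, -1])  # long-run state: which equilibrium is approached
```

    Varying delta (the cure rate) in such a sketch shows directly how cure terms shift the basic reproduction number and hence which equilibrium is stable.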

  15. Improved model for the angular dependence of excimer laser ablation rates in polymer materials

    NASA Astrophysics Data System (ADS)

    Pedder, J. E. A.; Holmes, A. S.; Dyer, P. E.

    2009-10-01

    Measurements of the angle-dependent ablation rates of polymers that have applications in microdevice fabrication are reported. A simple model based on Beer's law, including plume absorption, is shown to give good agreement with the experimental findings for polycarbonate and SU8, ablated using the 193 and 248 nm excimer lasers, respectively. The modeling forms a useful tool for designing masks needed to fabricate complex surface relief by ablation.
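
    A Beer's-law etch-rate expression with a crude plume-absorption factor, of the general kind the abstract describes, can be sketched as follows; all constants, and the simple cos(theta) fluence projection, are assumptions for illustration.

```python
import numpy as np

def ablation_rate(theta_deg, F0=500.0, F_th=40.0, alpha=2.0e4, beta=0.5):
    """Beer's-law ablation depth per pulse (cm) versus incidence angle:
    d = (1/alpha) * ln(F_eff / F_th) above threshold, else zero.
    The effective fluence drops as cos(theta) for oblique incidence and is
    further attenuated by an assumed plume-absorption factor exp(-beta)."""
    theta = np.radians(theta_deg)
    F_eff = F0 * np.cos(theta) * np.exp(-beta)
    return np.where(F_eff > F_th, np.log(F_eff / F_th) / alpha, 0.0)

# Depth per pulse at 0, 30 and 60 degrees incidence (illustrative units):
print(ablation_rate(np.array([0.0, 30.0, 60.0])))
```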

  16. Rate Theory Modeling and Simulation of Silicide Fuel at LWR Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Yinbin; Ye, Bei; Hofman, Gerard

    As a promising candidate for the accident tolerant fuel (ATF) used in light water reactors (LWRs), the fuel performance of uranium silicide (U3Si2) at LWR conditions needs to be well understood. In this report, a rate theory model was developed based on existing experimental data and density functional theory (DFT) calculations to predict fission gas behavior in U3Si2 at LWR conditions. The fission gas behavior of U3Si2 can be divided into three temperature regimes. During steady-state operation, the majority of the fission gas stays in intragranular bubbles, whereas the dominance of intergranular bubbles and fission gas release only occurs beyond 1000 K. The steady-state rate theory model was also used as a reference to establish a gaseous swelling correlation of U3Si2 for the BISON code. Meanwhile, an overpressurized bubble model was developed so that fission gas behavior during a loss-of-coolant accident (LOCA) can be simulated. LOCA simulation showed that intragranular bubbles are still dominant after a 70 second LOCA, resulting in controllable gaseous swelling. The fission gas behavior of U3Si2 at LWR conditions is benign according to the rate theory predictions at both steady-state and LOCA conditions, which provides important references for the qualification of U3Si2 as a LWR fuel material with excellent fuel performance and enhanced accident tolerance.

  17. Analysis of an algae-based CELSS. I - Model development

    NASA Technical Reports Server (NTRS)

    Holtzapple, Mark T.; Little, Frank E.; Makela, Merry E.; Patterson, C. O.

    1989-01-01

    A steady state chemical model and computer program have been developed for a life support system and applied to trade-off studies. The model is based on human demand for food and oxygen determined from crew metabolic needs. The model includes modules for water recycle, waste treatment, CO2 removal and treatment, and food production. The computer program calculates rates of use and material balance for food, O2, the recycle of human waste and trash, H2O, N2, and food production/supply. A simple noniterative solution for the model has been developed using the steady state rate equations for the chemical reactions. The model and program have been used in system sizing and subsystem trade-off studies of a partially closed life support system.

  18. Mechanism-based model for tumor drug resistance.

    PubMed

    Kuczek, T; Chan, T C

    1992-01-01

    The development of tumor resistance to cytotoxic agents has important implications in the treatment of cancer. If supported by experimental data, mathematical models of resistance can provide useful information on the underlying mechanisms and aid in the design of therapeutic regimens. We report on the development of a model of tumor-growth kinetics based on the assumption that the rates of cell growth in a tumor are normally distributed. We further assumed that the growth rate of each cell is proportional to its rate of total pyrimidine synthesis (de novo plus salvage). Using an ovarian carcinoma cell line (2008) and resistant variants selected by chronic exposure to a pyrimidine antimetabolite, N-phosphonacetyl-L-aspartate (PALA), we derived a simple and specific analytical form describing the growth curves generated in 72 h growth assays. The model assumes that the rate of de novo pyrimidine synthesis, denoted alpha, is shifted down by an amount proportional to the log10 PALA concentration, and that cells whose rate of pyrimidine synthesis falls below a critical level, denoted alpha0, can no longer grow. This is described by the equation: P(growth) = P(alpha0 < alpha - c × log10[PALA]), where c is a constant. This model predicts that when growth curves are plotted on probit paper, they will produce straight lines. This prediction is in agreement with the data we obtained for the 2008 cells. Another prediction of this model is that the same probit plots for the resistant variants should shift to the right in a parallel fashion. Probit plots of the dose-response data obtained for each resistant 2008 line following chronic exposure to PALA again confirmed this prediction. Correlation of the rightward shift of dose responses to uridine transport (r = 0.99) also suggests that salvage metabolism plays a key role in tumor-cell resistance to PALA. Furthermore, the slope of the regression lines enables the detection of synergy such as that observed between

  19. Removal rate model for magnetorheological finishing of glass.

    PubMed

    Degroote, Jessica E; Marino, Anne E; Wilson, John P; Bishop, Amy L; Lambropoulos, John C; Jacobs, Stephen D

    2007-11-10

    Magnetorheological finishing (MRF) is a deterministic subaperture polishing process. The process uses a magnetorheological (MR) fluid that consists of micrometer-sized, spherical, magnetic carbonyl iron (CI) particles, nonmagnetic polishing abrasives, water, and stabilizers. Material removal occurs when the CI and nonmagnetic polishing abrasives shear material off the surface being polished. We introduce a new MRF material removal rate model for glass. This model contains terms for the near surface mechanical properties of glass, drag force, polishing abrasive size and concentration, chemical durability of the glass, MR fluid pH, and the glass composition. We introduce quantitative chemical predictors for the first time, to the best of our knowledge, into an MRF removal rate model. We validate individual terms in our model separately and then combine all of the terms to show the whole MRF material removal model compared with experimental data. All of our experimental data were obtained using nanodiamond MR fluids and a set of six optical glasses.

  20. Removal Rate Model for Magnetorheological Finishing of Glass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeGroote, J.E.; Marino, A.E.; WIlson, J.P.

    2007-11-14

    Magnetorheological finishing (MRF) is a deterministic subaperture polishing process. The process uses a magnetorheological (MR) fluid that consists of micrometer-sized, spherical, magnetic carbonyl iron (CI) particles, nonmagnetic polishing abrasives, water, and stabilizers. Material removal occurs when the CI and nonmagnetic polishing abrasives shear material off the surface being polished. We introduce a new MRF material removal rate model for glass. This model contains terms for the near surface mechanical properties of glass, drag force, polishing abrasive size and concentration, chemical durability of the glass, MR fluid pH, and the glass composition. We introduce quantitative chemical predictors for the first time, to the best of our knowledge, into an MRF removal rate model. We validate individual terms in our model separately and then combine all of the terms to show the whole MRF material removal model compared with experimental data. All of our experimental data were obtained using nanodiamond MR fluids and a set of six optical glasses.

  1. American option pricing in Gauss-Markov interest rate models

    NASA Astrophysics Data System (ADS)

    Galluccio, Stefano

    1999-07-01

    In the context of Gaussian non-homogeneous interest-rate models, we study the problem of American bond option pricing. In particular, we show how to efficiently compute the exercise boundary in these models in order to decompose the price as a sum of a European option and an American premium. Generalizations to coupon-bearing bonds and jump-diffusion processes for the interest rates are also discussed.

  2. A Constant Rate of Spontaneous Mutation in DNA-Based Microbes

    NASA Astrophysics Data System (ADS)

    Drake, John W.

    1991-08-01

    In terms of evolution and fitness, the most significant spontaneous mutation rate is likely to be that for the entire genome (or its nonfrivolous fraction). Information is now available to calculate this rate for several DNA-based haploid microbes, including bacteriophages with single- or double-stranded DNA, a bacterium, a yeast, and a filamentous fungus. Their genome sizes vary by ≈6500-fold. Their average mutation rates per base pair vary by ≈16,000-fold, whereas their mutation rates per genome vary by only ≈2.5-fold, apparently randomly, around a mean value of 0.0033 per DNA replication. The average mutation rate per base pair is inversely proportional to genome size. Therefore, a nearly invariant microbial mutation rate appears to have evolved. Because this rate is uniform in such diverse organisms, it is likely to be determined by deep general forces, perhaps by a balance between the usually deleterious effects of mutation and the physiological costs of further reducing mutation rates.

  3. Building a better methane generation model: Validating models with methane recovery rates from 35 Canadian landfills.

    PubMed

    Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E

    2009-07-01

    The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 rather than 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models, with respective mean absolute errors relative to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89% at 0.50 DOC(f), and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates, and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by 10, consistently underestimated them; this comparison suggested that setting the waste divisor in the Scholl Canyon model between one and ten could improve its accuracy. At 0.50 DOC(f) and 0.77 DOC(f), the modified model had the lowest mean absolute error with divisors of 1.5 (63 ± 45%) and 2.3 (57 ± 47%), respectively. These modified models reduced error and variability substantially, and both have a strong correlation of r = 0.92.
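
    For orientation, all of these models share the first-order-decay form of the Scholl Canyon equation; the sketch below exposes the waste divisor discussed above as a parameter. The decay constant, methane generation potential and disposal history are hypothetical.

```python
import numpy as np

def methane_generation(waste_by_year, year, k=0.05, L0=100.0, divisor=1.0):
    """First-order-decay methane generation (m^3/yr), Scholl Canyon form:
    Q(year) = sum_i k * L0 * (M_i / divisor) * exp(-k * (year - t_i)).
    divisor=1 gives Scholl Canyon; LandGEM v2.01 effectively divides waste
    by 10; the abstract suggests roughly 1.5-2.3 minimizes error."""
    q = 0.0
    for t_i, m_i in waste_by_year.items():
        age = year - t_i
        if age >= 0:
            q += k * L0 * (m_i / divisor) * np.exp(-k * age)
    return q

# Hypothetical 1990-2008 disposal history of 50,000 t/yr, evaluated in 2009:
history = {yr: 50000.0 for yr in range(1990, 2009)}
print(methane_generation(history, 2009, divisor=1.5))
```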

  4. Satellite altimetry based rating curves throughout the entire Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.

    2013-05-01

    The Amazon basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the amount of data available is poor, over both time and space scales, due to factors like the basin's size and difficulty of access. One of the major obstacles is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation throughout the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2010. The stage dataset is made of ~800 altimetry series at ENVISAT and JASON-2 virtual stations, spanning 2002 to 2010. In the present work we present the benefits of using stochastic methods instead of deterministic ones to determine a dataset of rating curve parameters that is consistent throughout the entire Amazon basin. The rating curve parameters were computed using a parameter optimization technique based on a Markov Chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, and also their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. Also included in the rating curve determination is the error over discharge estimates from the MGB-IPH model. These MGB-IPH errors come from either errors in the discharge derived from the gage readings or errors in the satellite rainfall estimates. The present

  5. Self-rated health, multimorbidity and depression in Mexican older adults: Proposal and evaluation of a simple conceptual model.

    PubMed

    Bustos-Vázquez, Eduardo; Fernández-Niño, Julián Alfredo; Astudillo-Garcia, Claudia Iveth

    2017-04-01

    Self-rated health is an individual and subjective conceptualization involving the intersection of biological, social and psychological factors. It provides an invaluable and unique evaluation of a person's general health status. Our aim was to propose and evaluate a simple conceptual model to understand self-rated health and its relationship to multimorbidity, disability and depressive symptoms in Mexican older adults. We conducted a cross-sectional study based on a nationally representative sample of 8,874 adults 60 years of age and older. Self-perception of a positive health status was determined according to a Likert-type scale based on the question: "What do you think is your current health status?" Intermediate variables included multimorbidity, disability and depressive symptoms, as well as dichotomous exogenous variables (sex, having a partner, participation in decision-making and poverty). The proposed conceptual model was validated using a general structural equation model with a logit link function for positive self-rated health. A direct association was found between multimorbidity and positive self-rated health (OR=0.48; 95% CI: 0.42-0.55), disability and positive self-rated health (OR=0.35; 95% CI: 0.30-0.40), and depressive symptoms and positive self-rated health (OR=0.38; 95% CI: 0.34-0.43). The model also validated indirect associations between disability and depressive symptoms (OR=2.25; 95% CI: 2.01-2.52), multimorbidity and depressive symptoms (OR=1.79; 95% CI: 1.61-2.00), and multimorbidity and disability (OR=1.98; 95% CI: 1.78-2.20). A parsimonious theoretical model was empirically evaluated, enabling the identification of direct and indirect associations with positive self-rated health.

  6. A Latent Class Unfolding Model for Analyzing Single Stimulus Preference Ratings.

    ERIC Educational Resources Information Center

    De Soete, Geert; Heiser, Willem J.

    1993-01-01

    A latent class unfolding model is developed for single stimulus preference ratings. One advantage is the possibility of testing the spatial unfolding model against the unconstrained latent class model for rating data. The model is applied to data about party preferences of members of the Dutch parliament. (SLD)

  7. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    NASA Astrophysics Data System (ADS)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is carried out based on the results.

  8. A simple reaction-rate model for turbulent diffusion flames

    NASA Technical Reports Server (NTRS)

    Bangert, L. H.

    1975-01-01

    A simple reaction rate model is proposed for turbulent diffusion flames in which the reaction rate is proportional to the turbulence mixing rate. The reaction rate is also dependent on the mean mass fraction and the mean square fluctuation of mass fraction of each reactant. Calculations are compared with experimental data and are generally successful in predicting the measured quantities.

  9. Prospective Evaluation of the Global Earthquake Activity Rate Model (GEAR1) Earthquake Forecast: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schorlemmer, Danijel; Beutin, Thomas

    2017-04-01

    The Global Earthquake Activity Rate Model (GEAR1) is a hybrid seismicity model, constructed from a loglinear combination of smoothed seismicity from the Global Centroid Moment Tensor (CMT) earthquake catalog and geodetic strain rates (Global Strain Rate Map, version 2.1). For the 2005-2012 retrospective evaluation period, GEAR1 outperformed both its parent strain rate and smoothed seismicity forecasts. Since 1 October 2015, GEAR1 has been prospectively evaluated by the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center. Here, we present initial one-year test results for GEAR1, GSRM and GSRM2.1, as well as a localized evaluation of GEAR1 performance. The models were evaluated on the consistency in number (N-test), spatial (S-test) and magnitude (M-test) distribution of forecasted and observed earthquakes, as well as overall data consistency (CL-, L-tests). Performance at target earthquake locations was compared between models using the classical paired T-test and its non-parametric equivalent, the W-test, to determine whether one model could be rejected in favor of another at the 0.05 significance level. For the evaluation period from 1 October 2015 to 1 October 2016, the GEAR1, GSRM and GSRM2.1 forecasts pass all CSEP likelihood tests. Comparative test results show statistically significant improvement of GEAR1 performance over both strain rate-based forecasts, both of which can be rejected in favor of GEAR1. Using point process residual analysis, we investigate the spatial distribution of differences in GEAR1, GSRM and GSRM2 model performance, to identify regions where the GEAR1 model should be adjusted in ways that could not be inferred from CSEP test results. Furthermore, we investigate whether the optimal combination of smoothed seismicity and strain rates remains stable over space and time.

  10. Ads' click-through rates predicting based on gated recurrent unit neural networks

    NASA Astrophysics Data System (ADS)

    Chen, Qiaohong; Guo, Zixuan; Dong, Wen; Jin, Lingzi

    2018-05-01

    In order to improve the effect of online advertising and increase advertising revenue, a gated recurrent unit (GRU) neural network model is used to predict ads' click-through rates (CTR). Combining the characteristics of the gated unit structure with the time-sequential nature of the data, the model is trained with the BPTT algorithm. Furthermore, by optimizing the step-length algorithm of the gated recurrent unit network, the model reaches the optimum better and faster, in fewer training iterations. The experimental results show that the model based on gated recurrent unit neural networks, together with the optimized step-length algorithm, performs better on ads' CTR prediction, helping advertisers, media and audience achieve a win-win, mutually beneficial situation in the three-sided game.
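
    A minimal GRU click-through-rate predictor along these lines might look as follows (PyTorch is assumed here; the paper does not specify a framework, and the feature and layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class GRUCTRModel(nn.Module):
    """Minimal GRU click-through-rate predictor: a sequence of impression
    feature vectors is encoded by a GRU and the last hidden state is mapped
    to a click probability. Dimensions are illustrative."""
    def __init__(self, n_features=16, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, h = self.gru(x)                 # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

model = GRUCTRModel()
x = torch.randn(8, 10, 16)                 # 8 users, 10 time steps each
clicks = torch.randint(0, 2, (8,)).float()
loss = nn.BCELoss()(model(x), clicks)
loss.backward()                            # gradients via BPTT, as in the paper
```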

  11. [A systematic review of worldwide natural history models of colorectal cancer: classification, transition rate and a recommendation for developing Chinese population-specific model].

    PubMed

    Li, Z F; Huang, H Y; Shi, J F; Guo, C G; Zou, S M; Liu, C C; Wang, Y; Wang, L; Zhu, S L; Wu, S L; Dai, M

    2017-02-10

    Objective: To review worldwide studies on natural history models of colorectal cancer (CRC), and to inform the building of a Chinese population-specific CRC model and the development of a platform for further evaluation of CRC screening and other interventions in the Chinese population. Methods: A structured literature search was conducted in PubMed, with target publication dates from January 1995 to December 2014. Information about classification systems for both colorectal cancer and precancerous lesions, and the corresponding transition rates, was extracted and summarized. Indicators were mainly expressed as the medians and ranges of annual progression or regression rates. Results: A total of 24 studies were extracted from 1,022 studies; most were from America (n=9), but 2 were from China, including 1 from the mainland area, and most were based on Markov models (n=22). Classification systems for adenomas included progression risk (n=9) and adenoma size (n=13, divided in two ways), as follows: 1) Based on studies where adenoma was risk-dependent, the median annual transition rates from 'normal status' to 'non-advanced adenoma', from 'non-advanced' to 'advanced', and from 'advanced adenoma' to CRC were 0.0160 (range: 0.0022-0.0200), 0.020 (range: 0.002-0.177) and 0.044 (range: 0.005-0.063), respectively. 2) Based on studies where adenomas were classified by size into <10 mm and ≥10 mm (n=7), the median annual transition rates from 'normal' to adenoma <10 mm, from adenoma <10 mm to adenoma ≥10 mm, and from adenoma ≥10 mm to CRC were 0.0167 (range: 0.0150-0.0370), 0.020 (range: 0.015-0.035) and 0.0400 (range: 0.0085-0.0500), respectively. 3) Based on studies where adenomas were classified by size into diminutive (≤5 mm), small (6-9 mm) and large (≥10 mm) adenomas (n=6), the median annual transition rates from 'normal' to diminutive adenoma, 'diminutive' to 'small', 'small' to 'large', and large adenoma to CRC were 0.013 (range: 0.009-0.019), 0.043 (range: 0

  12. INDIVIDUAL-BASED MODELS: POWERFUL OR POWER STRUGGLE?

    PubMed

    Willem, L; Stijven, S; Hens, N; Vladislavleva, E; Broeckhove, J; Beutels, P

    2015-01-01

    Individual-based models (IBMs) offer endless possibilities to explore various research questions, but come with high model complexity and computational burden. Large-scale IBMs have become feasible, but the novel hardware architectures require adapted software. The increased model complexity also requires systematic exploration to gain thorough system understanding. We elaborate on the development of IBMs for vaccine-preventable infectious diseases and on model exploration with active learning. Investment in IBM simulator code can lead to significant runtime reductions. We found large performance differences due to data locality: sorting the population once reduced simulation time by a factor of two, and storing person attributes separately instead of using person objects also proved more efficient. Next, we improved model performance by up to 70% by structuring potential contacts based on health status before processing disease transmission. The active learning approach we present is based on iterative surrogate modelling and model-guided experimentation. Symbolic regression is used for nonlinear response surface modelling with automatic feature selection. We illustrate our approach using an IBM for influenza vaccination. After optimizing the parameter space, we observed an inverse relationship between vaccination coverage and the clinical attack rate, reinforced by herd immunity. These insights can be used to focus and optimise research activities, and to reduce both dimensionality and decision uncertainty.

  13. Modeling the shear rate and pressure drop in a hydrodynamic cavitation reactor with experimental validation based on KI decomposition studies.

    PubMed

    Badve, Mandar P; Alpar, Tibor; Pandit, Aniruddha B; Gogate, Parag R; Csoka, Levente

    2015-01-01

    A mathematical model describing the shear rate and pressure variation in the complex flow field created in a hydrodynamic cavitation reactor (stator and rotor assembly) is presented in this study. The design of the reactor is such that the rotor is provided with surface indentations, and cavitational events are expected to occur on the surface of the rotor as well as within the indentations. The flow characteristics of the fluid have been investigated using high-accuracy compact difference schemes applied to the Navier-Stokes equations. The evolution of streamline structures during rotation, the pressure field and the shear rate of a Newtonian fluid flow have been numerically established. The simulation results suggest that the characteristics of the shear rate and pressure fields differ considerably depending on the magnitude of the rotation velocity of the rotor. It was observed that the area of the high-shear zone at the indentation leading edge shrinks with an increase in the rotational speed of the rotor, although the magnitude of the shear rate increases linearly. It is therefore concluded that higher rotational speeds of the rotor tend to stabilize the flow, which in turn results in less cavitational activity than that observed around 2200-2500 RPM. Experiments were carried out with an initial KI concentration of 2000 ppm. A maximum of 50 ppm of liberated iodine was observed at 2200 RPM. Experimental as well as simulation results indicate that the maximum cavitational activity occurs when the rotation speed is around 2200-2500 RPM. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Nonlinearity and Strain-Rate Dependence in the Deformation Response of Polymer Matrix Composites Modeled

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    2000-01-01

    There has been no accurate procedure for modeling the high-speed impact of composite materials, but such an analytical capability will be required in designing reliable lightweight engine-containment systems. The majority of the models in use assume a linear elastic material response that does not vary with strain rate. However, for containment systems, polymer matrix composites incorporating ductile polymers are likely to be used. For such a material, the deformation response is likely to be nonlinear and to vary with strain rate. An analytical model has been developed at the NASA Glenn Research Center at Lewis Field that incorporates both of these features. A set of constitutive equations that was originally developed to analyze the viscoplastic deformation of metals (Ramaswamy-Stouffer equations) was modified to simulate the nonlinear, rate-dependent deformation of polymers. Specifically, the effects of hydrostatic stresses on the inelastic response, which can be significant in polymers, were accounted for by a modification of the definition of the effective stress. The constitutive equations were then incorporated into a composite micromechanics model based on the mechanics of materials theory. This theory predicts the deformation response of a composite material from the properties and behavior of the individual constituents. In this manner, the nonlinear, rate-dependent deformation response of a polymer matrix composite can be predicted.

  15. Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G. P.

    Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.

  16. Modification Of Learning Rate With Lvq Model Improvement In Learning Backpropagation

    NASA Astrophysics Data System (ADS)

    Tata Hardinata, Jaya; Zarlis, Muhammad; Budhiarti Nababan, Erna; Hartama, Dedy; Sembiring, Rahmat W.

    2017-12-01

    One type of artificial neural network is backpropagation. This algorithm is trained on a given network architecture and should provide correct outputs for inputs that are similar to, but not identical with, those used during training. The selection of appropriate parameters also affects the outcome; the learning rate is one of the parameters that influence the training process, as it determines the speed of learning for the network architecture. If the learning rate is set too large, the algorithm becomes unstable; if it is set too small, the algorithm converges only over a very long period of time. This study was therefore carried out to determine the value of the learning rate for the backpropagation algorithm. The LVQ learning-rate model is one of the models used to determine the learning-rate value in the LVQ algorithm; here, this LVQ model is modified and applied to the backpropagation algorithm. The experimental results show that with the modified LVQ learning-rate model applied to backpropagation, the learning process becomes faster (fewer epochs).
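
    A sketch of the idea of transplanting an LVQ-style learning-rate rule into gradient training. It assumes the OLVQ1-style recursion alpha_t = alpha_{t-1} / (1 + s_t * alpha_{t-1}), with s_t = +1 after a correct prediction and -1 after a wrong one; the tiny single-unit network, the toy data and the clamping band are all hypothetical, not the authors' setup.

      import numpy as np

      def lvq_learning_rate(alpha, correct):
          """OLVQ1-style recursion: shrink alpha after a correct prediction,
          grow it after a wrong one (s = +1 / -1)."""
          s = 1.0 if correct else -1.0
          return alpha / (1.0 + s * alpha)

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 2))
      y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy separable problem
      w, b, alpha = np.zeros(2), 0.0, 0.3

      for epoch in range(20):
          for xi, yi in zip(X, y):
              p = 1.0 / (1.0 + np.exp(-(xi @ w + b)))  # single sigmoid unit
              grad = p - yi
              w -= alpha * grad * xi
              b -= alpha * grad
              alpha = lvq_learning_rate(alpha, (p > 0.5) == bool(yi))
              alpha = min(max(alpha, 1e-4), 0.5)   # keep alpha in a sane band
      acc = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y.astype(bool))
      print("final alpha:", round(alpha, 4), "train accuracy:", acc)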

  17. A Nonlinear Dynamic Inversion Predictor-Based Model Reference Adaptive Controller for a Generic Transport Model

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Kaneshige, John T.

    2010-01-01

    Presented here is a Predictor-Based Model Reference Adaptive Control (PMRAC) architecture for a generic transport aircraft. At its core, this architecture features a three-axis, non-linear, dynamic-inversion controller. Command inputs for this baseline controller are provided by pilot roll-rate, pitch-rate, and sideslip commands. This paper first presents the baseline controller in detail, followed by a description of the PMRAC adaptive augmentation to this control system. Results are presented via a full-scale, nonlinear simulation of NASA's Generic Transport Model (GTM).

  18. O₂migration rates in [NiFe] hydrogenases. A joint approach combining free-energy calculations and kinetic modeling.

    PubMed

    Topin, Jérémie; Diharce, Julien; Fiorucci, Sébastien; Antonczak, Serge; Golebiowski, Jérôme

    2014-01-23

    Hydrogenases are promising candidates for the catalytic production of green energy by biological means. The major impediment to such production is rooted in their inhibition under aerobic conditions. In this work, we model dioxygen migration rates in mutants of a hydrogenase of Desulfovibrio fructusovorans. The approach relies on the calculation of the whole potential of mean force for O2 migration within the wild-type as well as in the V74M, V74F, and V74Q mutant channels. The three free-energy barriers along the entire migration pathway are converted into chemical rates through modeling based on Transition State Theory. This model recovers the trend of O2 migration rates across the series.
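
    The barrier-to-rate conversion mentioned above takes, in the standard Eyring/Transition State Theory form, the shape k = (k_B*T/h) * exp(-ΔG‡/(R*T)). A short sketch (Python; the barrier heights are placeholders, not the computed PMF barriers) illustrates how free-energy differences translate into rate differences.

      import numpy as np

      KB = 1.380649e-23    # Boltzmann constant, J/K
      H  = 6.62607015e-34  # Planck constant, J*s
      R  = 8.314462618     # gas constant, J/(mol*K)

      def tst_rate(dG_kJ_per_mol, T=300.0):
          """Eyring/TST rate constant for a given free-energy barrier."""
          return (KB * T / H) * np.exp(-dG_kJ_per_mol * 1e3 / (R * T))

      # Placeholder barriers (kJ/mol), e.g. wild-type vs. a bulkier mutant.
      for name, dG in [("WT-like", 20.0), ("mutant-like", 35.0)]:
          print(name, f"{tst_rate(dG):.3e} 1/s")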

  19. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. The particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models, which combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast daily interest rate variations, as they provide good forecasting performance.
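
    A minimal sketch of the decompose-predict-recombine idea (pure NumPy): a moving-average pyramid stands in for the wavelet/EMD/VMD decompositions, and an ordinary least-squares autoregressor per component stands in for the PSO-tuned neural network; the series is synthetic. All of these substitutions are assumptions for illustration.

      import numpy as np

      def decompose(x, windows=(4, 16)):
          """Split x into detail components plus a smooth residual trend."""
          comps, resid = [], x.copy()
          for w in windows:
              smooth = np.convolve(resid, np.ones(w) / w, mode="same")
              comps.append(resid - smooth)
              resid = smooth
          comps.append(resid)
          return comps                      # components sum back to x

      def ar_one_step(c, p=5):
          """Fit an AR(p) by least squares, return next-step prediction."""
          X = np.column_stack([c[i:len(c) - p + i] for i in range(p)])
          coef, *_ = np.linalg.lstsq(X, c[p:], rcond=None)
          return c[-p:] @ coef

      rng = np.random.default_rng(2)
      series = np.cumsum(rng.normal(0, 0.05, 400))  # synthetic daily variations
      pred = sum(ar_one_step(c) for c in decompose(series))
      print("next-day prediction:", round(pred, 4))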

  20. Comprehensive Analyses of Ventricular Myocyte Models Identify Targets Exhibiting Favorable Rate Dependence

    PubMed Central

    Bugana, Marco; Severi, Stefano; Sobie, Eric A.

    2014-01-01

    Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of AP duration

  1. Comprehensive analyses of ventricular myocyte models identify targets exhibiting favorable rate dependence.

    PubMed

    Cummins, Megan A; Dalal, Pavan J; Bugana, Marco; Severi, Stefano; Sobie, Eric A

    2014-03-01

    Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of AP duration

  2. Modeling temporal sequences of cognitive state changes based on a combination of EEG-engagement, EEG-workload, and heart rate metrics

    PubMed Central

    Stikic, Maja; Berka, Chris; Levendowski, Daniel J.; Rubio, Roberto F.; Tan, Veasna; Korszen, Stephanie; Barba, Douglas; Wurzer, David

    2014-01-01

    The objective of this study was to investigate the feasibility of physiological metrics such as ECG-derived heart rate and EEG-derived cognitive workload and engagement as potential predictors of performance on different training tasks. An unsupervised approach based on a self-organizing neural network (NN) was utilized to model cognitive state changes over time. The feature vector comprised EEG-engagement, EEG-workload, and heart rate metrics, all self-normalized to account for individual differences. During the competitive training process, a linear topology was developed in which feature vectors similar to each other activated the same NN nodes. The NN model was trained and auto-validated on combat marksmanship training data from 51 participants who were required to make "deadly force decisions" in challenging combat scenarios. The trained NN model was cross-validated using 10-fold cross-validation. It was also validated on a golf study in which an additional 22 participants were asked to complete 10 sessions of 10 putts each. Temporal sequences of the activated nodes for both studies followed the same pattern of changes, demonstrating the generalization capabilities of the approach. Most node transitions were local, but important events typically caused significant changes in the physiological metrics, as evidenced by larger state changes. This was investigated by calculating a transition score as the sum of subsequent state transitions between the activated NN nodes. Correlation analysis demonstrated statistically significant correlations between the transition scores and subjects' performances in both studies. This paper explored the hypothesis that temporal sequences of physiological changes comprise the discriminative patterns for performance prediction. These physiological markers could be utilized in future training improvement systems (e.g., through neurofeedback), and applied across a variety of training environments. PMID:25414629
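
    The transition score described above can be computed directly from the sequence of activated nodes on the self-organizing map's linear (1-D) topology. A minimal sketch (Python; the node sequences are invented) follows.

      def transition_score(nodes):
          """Sum of absolute jumps between consecutively activated NN nodes
          on a linear topology; large jumps indicate large state changes."""
          return sum(abs(b - a) for a, b in zip(nodes, nodes[1:]))

      # Hypothetical activated-node indices over time for two sessions.
      calm_session = [3, 3, 4, 4, 5, 5, 5, 6]
      eventful_session = [3, 9, 2, 8, 1, 9, 2, 8]
      print(transition_score(calm_session), transition_score(eventful_session))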

  3. Characterization of continuously distributed cortical water diffusion rates with a stretched-exponential model.

    PubMed

    Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S

    2003-10-01

    Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
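
    The stretched-exponential signal model has the form S(b) = S0 * exp(-(b*DDC)^alpha), where alpha measures intravoxel heterogeneity (alpha = 1 recovers monoexponential decay). A fitting sketch with SciPy on synthetic data (the noise level and parameter values are placeholders) shows how DDC and alpha would be extracted per voxel.

      import numpy as np
      from scipy.optimize import curve_fit

      def stretched_exp(b, S0, ddc, alpha):
          """Diffusion signal decay with a distributed diffusion coefficient."""
          return S0 * np.exp(-(b * ddc) ** alpha)

      b = np.linspace(500, 6500, 13)            # s/mm^2, as in the study
      true = (1.0, 0.8e-3, 0.7)                 # S0, DDC [mm^2/s], alpha
      rng = np.random.default_rng(3)
      signal = stretched_exp(b, *true) + rng.normal(0, 0.005, b.size)

      p, _ = curve_fit(stretched_exp, b, signal, p0=(1.0, 1e-3, 0.9),
                       bounds=([0, 1e-5, 0.1], [2, 5e-3, 1.0]))
      print("fitted S0, DDC, alpha:", np.round(p, 5))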

  4. Experimental Study and Modelling of Poly (Methyl Methacrylate) and Polycarbonate Compressive Behavior from Low to High Strain Rates

    NASA Astrophysics Data System (ADS)

    El-Qoubaa, Z.; Colard, L.; Matadi Boumbimba, R.; Rusinek, A.

    2018-06-01

    This paper concerns an experimental investigation of polycarbonate (PC) and poly(methyl methacrylate) (PMMA) compressive behavior from low to high strain rates. Experiments were conducted from 0.001/s to ≈ 5000/s for PC and from 0.001/s to ≈ 2000/s for PMMA. The true stress-strain behavior is established and analyzed at various strain rates. The mechanical behavior of both PC and PMMA appears, as is known, to be strain-rate and temperature dependent. The DSGZ model is selected for modelling the stress-strain curves, while the yield stress is reproduced using the cooperative model and a modified Eyring equation based on Eyring's first-process theory. The predictions of all three models are in agreement with the experiments performed on PC and PMMA.

  5. Experimental Study and Modelling of Poly (Methyl Methacrylate) and Polycarbonate Compressive Behavior from Low to High Strain Rates

    NASA Astrophysics Data System (ADS)

    El-Qoubaa, Z.; Colard, L.; Matadi Boumbimba, R.; Rusinek, A.

    2018-03-01

    This paper concerns an experimental investigation of polycarbonate (PC) and poly(methyl methacrylate) (PMMA) compressive behavior from low to high strain rates. Experiments were conducted from 0.001/s to ≈ 5000/s for PC and from 0.001/s to ≈ 2000/s for PMMA. The true stress-strain behavior is established and analyzed at various strain rates. The mechanical behavior of both PC and PMMA appears, as is known, to be strain-rate and temperature dependent. The DSGZ model is selected for modelling the stress-strain curves, while the yield stress is reproduced using the cooperative model and a modified Eyring equation based on Eyring's first-process theory. The predictions of all three models are in agreement with the experiments performed on PC and PMMA.

  6. An alternative derivation of the stationary distribution of the multivariate neutral Wright-Fisher model for low mutation rates with a view to mutation rate estimation from site frequency data.

    PubMed

    Schrempf, Dominik; Hobolth, Asger

    2017-04-01

    Recently, Burden and Tang (2016) provided an analytical expression for the stationary distribution of the multivariate neutral Wright-Fisher model with low mutation rates. In this paper we present a simple, alternative derivation that illustrates the approximation. Our proof is based on the discrete multivariate boundary mutation model which has three key ingredients. First, the decoupled Moran model is used to describe genetic drift. Second, low mutation rates are assumed by limiting mutations to monomorphic states. Third, the mutation rate matrix is separated into a time-reversible part and a flux part, as suggested by Burden and Tang (2016). An application of our result to data from several great apes reveals that the assumption of stationarity may be inadequate or that other evolutionary forces like selection or biased gene conversion are acting. Furthermore we find that the model with a reversible mutation rate matrix provides a reasonably good fit to the data compared to the one with a non-reversible mutation rate matrix. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  7. Coupling impervious surface rate derived from satellite remote sensing with distributed hydrological model for highly urbanized watershed flood forecasting

    NASA Astrophysics Data System (ADS)

    Dong, L.

    2017-12-01

    Abstract: Rapid urbanization has greatly altered the original urban surface structure, substantially increasing the impermeable area and putting great pressure on city flood control and drainage. The Songmushan reservoir basin, which has a high degree of urbanization, is taken as an example. Landsat pixels are decomposed with a linear spectral mixture model, and the proportion of urban area within each pixel is taken as its impervious rate. Based on impervious rate data from before and after urbanization, a physically based distributed hydrological model, the Liuxihe Model, is used to simulate the hydrological processes. The research shows that flood forecasting for highly urbanized areas carried out with the Liuxihe Model performs well and can meet the accuracy requirements of city flood control and drainage. The increase in impervious area speeds up the confluence, increases the peak flow, advances the time of the peak flow and increases the runoff coefficient. Key words: Liuxihe Model; Impervious rate; City flood control and drainage; Urbanization; Songmushan reservoir basin
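
    A sketch of per-pixel linear spectral unmixing (Python/SciPy; the two endmember spectra and band values are invented toy numbers), in which the recovered urban fraction serves as the impervious rate.

      import numpy as np
      from scipy.optimize import nnls

      # Hypothetical endmember spectra (rows: bands; cols: urban, veg, water).
      E = np.array([[0.30, 0.05, 0.02],
                    [0.25, 0.40, 0.03],
                    [0.28, 0.30, 0.01],
                    [0.26, 0.45, 0.02]])

      def impervious_rate(pixel):
          """Non-negative least-squares unmixing, fractions normalized to 1."""
          frac, _ = nnls(E, pixel)
          frac = frac / frac.sum()
          return frac[0]                 # urban fraction = impervious rate

      mixed = 0.6 * E[:, 0] + 0.35 * E[:, 1] + 0.05 * E[:, 2]
      print("impervious rate:", round(impervious_rate(mixed), 3))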

  8. Metabolically Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates (Final Report, 2009)

    EPA Science Inventory

    EPA announced the availability of the final report, Metabolically Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates. This report provides a revised approach for calculating an individual's ventilation rate directly from their oxygen c...

  9. A reconsideration of negative ratings for network-based recommendation

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Ren, Liang; Lin, Wenbin

    2018-01-01

    Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of sparsity of data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.
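
    A compact sketch of two-step network-based inference (ProbS-style mass diffusion) on a user-item bipartite network, extended so that negatively rated items seed negative resource. The toy rating matrix and the negative-seed weight are assumptions; the paper's specific weighting of negative ratings may differ.

      import numpy as np

      # Toy ratings: +1 liked, -1 disliked, 0 unrated (rows: users, cols: items).
      R = np.array([[ 1,  1,  0, -1],
                    [ 1,  0,  1,  0],
                    [ 0,  1,  1,  0]])

      def nbi_scores(R, target_user, neg_weight=-0.5):
          """Two-step resource diffusion item -> user -> item. Negative
          ratings inject negative initial resource (weight is an assumption)."""
          A = (R != 0).astype(float)                 # who rated what
          f0 = np.where(R[target_user] > 0, 1.0,
               np.where(R[target_user] < 0, neg_weight, 0.0))
          k_item = A.sum(axis=0); k_user = A.sum(axis=1)
          f_user = A @ (f0 / np.maximum(k_item, 1))        # items -> users
          f_item = A.T @ (f_user / np.maximum(k_user, 1))  # users -> items
          f_item[R[target_user] != 0] = -np.inf            # skip rated items
          return f_item

      print("recommendation scores:", nbi_scores(R, target_user=0))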

  10. A multiscale strength model for tantalum over an extended range of strain rates

    NASA Astrophysics Data System (ADS)

    Barton, N. R.; Rhee, M.

    2013-09-01

    A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].

  11. A continuous damage model based on stepwise-stress creep rupture tests

    NASA Technical Reports Server (NTRS)

    Robinson, D. N.

    1985-01-01

    A creep damage accumulation model is presented that makes use of the Kachanov damage rate concept, with a provision accounting for damage that results from a variable stress history. This is accomplished through the introduction of an additional term in the Kachanov rate equation that is linear in the stress rate. Specification of the material functions and parameters in the model requires a data base constituted of two types of tests: (1) standard constant-stress creep rupture tests, and (2) a sequence of two-step creep rupture tests.
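
    A sketch of such a damage evolution law in the hedged form dω/dt = A*(σ/(1-ω))^n + B*dσ/dt, integrated with forward Euler over a two-step stress history. The functional form is the generic Kachanov rate plus a stress-rate term; the constants A, n, B and the stress history are placeholders, not the report's calibrated values.

      def integrate_damage(stress_fn, t_end, dt=1.0, A=1e-13, n=4.0, B=2e-4):
          """Kachanov-type damage growth with an added stress-rate term."""
          t, omega = 0.0, 0.0
          while t < t_end and omega < 1.0:
              s0, s1 = stress_fn(t), stress_fn(t + dt)
              omega += dt * A * (s0 / (1.0 - omega)) ** n + B * (s1 - s0)
              t += dt
          return t, min(omega, 1.0)       # omega = 1 means creep rupture

      # Two-step creep history: 100 MPa for 2000 h, then 140 MPa.
      two_step = lambda t: 100.0 if t < 2000.0 else 140.0
      print("time [h], damage:", integrate_damage(two_step, t_end=10000.0))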

  12. MR- versus CT-based high-dose-rate interstitial brachytherapy for vaginal recurrence of endometrial cancer.

    PubMed

    Kamran, Sophia C; Manuel, Matthias M; Catalano, Paul; Cho, Linda; Damato, Antonio L; Lee, Larissa J; Schmidt, Ehud J; Viswanathan, Akila N

    To compare clinical outcomes of MR-based versus CT-based high-dose-rate interstitial brachytherapy (ISBT) for vaginal recurrence of endometrioid endometrial cancer (EC). We reviewed 66 patients with vaginal recurrent EC; 18 had MR-based ISBT on a prospective clinical trial and 48 had CT-based treatment. Kaplan-Meier survival modeling was used to generate estimates for local control (LC), disease-free interval (DFI), and overall survival (OS), and multivariate Cox modeling was used to assess prognostic factors. Toxicities were evaluated and compared. Median followup was 33 months (CT 30 months, MR 35 months). Median cumulative equivalent dose in 2-Gy fractions was 75.5 Gy for MR-ISBT and 73.8 Gy for CT-ISBT (p = 0.58). MR patients were older (p = 0.03) and had larger tumor size (>4 cm vs. ≤ 4 cm) compared to CT patients (p = 0.04). For MR-based versus CT-based ISBT, 3-year KM rate for local control was 100% versus 78% (p = 0.04), DFI was 69% versus 55% (p = 0.1), and OS was 63% versus 75% (p = 0.81), respectively. On multivariate analysis, tumor Grade 3 was associated with worse OS (HR 3.57, 95% CI 1.25, 11.36) in a model with MR-ISBT (HR 0.56, 95% CI 0.16, 1.89). Toxicities were not significantly different between the two modalities. Despite worse patient prognostic features, MR-ISBT was associated with a significantly better (100%) 3-year local control, comparable survival, and improved DFI rates compared to CT. Toxicities did not differ compared to CT-ISBT patients. Tumor grade contributed as the most significant predictor for survival. Larger prospective studies are needed to assess the impact of MR-ISBT on survival outcomes. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  13. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.

  14. Statistically Based Morphodynamic Modeling of Tracer Slowdown

    NASA Astrophysics Data System (ADS)

    Borhani, S.; Ghasemi, A.; Hill, K. M.; Viparelli, E.

    2017-12-01

    Tracer particles are used to study bedload transport in gravel-bed rivers. One of the advantages associated with using of tracer particles is that they allow for direct measures of the entrainment rates and their size distributions. The main issue in large scale studies with tracer particles is the difference between tracer stone short term and long term behavior. This difference is due to the fact that particles undergo vertical mixing or move to less active locations such as bars or even floodplains. For these reasons the average virtual velocity of tracer particle decreases in time, i.e. the tracer slowdown. In summary, tracer slowdown can have a significant impact on the estimation of bedload transport rate or long term dispersal of contaminated sediment. The vast majority of the morphodynamic models that account for the non-uniformity of the bed material (tracer and not tracer, in this case) are based on a discrete description of the alluvial deposit. The deposit is divided in two different regions; the active layer and the substrate. The active layer is a thin layer in the topmost part of the deposit whose particles can interact with the bed material transport. The substrate is the part of the deposit below the active layer. Due to the discrete representation of the alluvial deposit, active layer models are not able to reproduce tracer slowdown. In this study we try to model the slowdown of tracer particles with the continuous Parker-Paola-Leclair morphodynamic framework. This continuous, i.e. not layer-based, framework is based on a stochastic description of the temporal variation of bed surface elevation, and of the elevation specific particle entrainment and deposition. Particle entrainment rates are computed as a function of the flow and sediment characteristics, while particle deposition is estimated with a step length formulation. Here we present one of the first implementation of the continuum framework at laboratory scale, its validation against

  15. Modeling of the pyruvate production with Escherichia coli: comparison of mechanistic and neural networks-based models.

    PubMed

    Zelić, B; Bolf, N; Vasić-Racki, D

    2006-06-01

    Three different models, namely the unstructured mechanistic black-box model, the input-output neural network-based model and the externally recurrent neural network model, were used to describe the pyruvate production process from glucose and acetate using the genetically modified Escherichia coli YYC202 ldhA::Kan strain. The experimental data were taken from the recently described batch and fed-batch experiments [Zelić B, Study of the process development for Escherichia coli-based pyruvate production. PhD Thesis, University of Zagreb, Faculty of Chemical Engineering and Technology, Zagreb, Croatia, July 2003 (in English); Zelić et al. Bioproc Biosyst Eng 26:249-258 (2004); Zelić et al. Eng Life Sci 3:299-305 (2003); Zelić et al. Biotechnol Bioeng 85:638-646 (2004)]. The neural networks were built from the experimental data obtained in the fed-batch pyruvate production experiments with a constant glucose feed rate. Model validation was performed using the experimental results obtained from the batch and fed-batch pyruvate production experiments with a constant acetate feed rate. The dynamics of substrate and product concentration changes were estimated using two neural network-based models for biomass and pyruvate. It was shown that neural networks can be used for modeling complex microbial fermentation processes, even under conditions in which mechanistic unstructured models cannot be applied.

  16. Item Response Models for Local Dependence among Multiple Ratings

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Su, Chi-Ming; Qiu, Xue-Lan

    2014-01-01

    Ratings given to the same item response may have a stronger correlation than those given to different item responses, especially when raters interact with one another before giving ratings. The rater bundle model was developed to account for such local dependence by forming multiple ratings given to an item response as a bundle and assigning…

  17. Deformed shell model study of event rates for WIMP-73Ge scattering

    NASA Astrophysics Data System (ADS)

    Sahu, R.; Kota, V. K. B.

    2017-12-01

    The event detection rates for Weakly Interacting Massive Particles (WIMPs, a dark matter candidate) are calculated with 73Ge as the detector. The calculations are performed within the deformed shell model (DSM) based on Hartree-Fock states. First, the energy levels and the magnetic moments of the ground state and two low-lying positive-parity states of this nucleus are calculated and compared with experiment; the agreement is quite satisfactory. The nuclear wave functions are then used to investigate the elastic and inelastic scattering of WIMPs from 73Ge; inelastic scattering, especially for the 9/2+ → 5/2+ transition, is studied for the first time. The nuclear structure factors, which are independent of the supersymmetric model, are also calculated as a function of WIMP mass. The event rates are calculated for a given set of nucleonic current parameters. The calculation shows that 73Ge is a good detector for detecting dark matter.

  18. The scaling of contact rates with population density for the infectious disease models.

    PubMed

    Hu, Hao; Nigmatulina, Karima; Eckhoff, Philip

    2013-08-01

    Contact rates and patterns among individuals in a geographic area drive transmission of directly-transmitted pathogens, making it essential to understand and estimate contacts for simulation of disease dynamics. Under the uniform mixing assumption, one of two mechanisms is typically used to describe the relation between contact rate and population density: density-dependent or frequency-dependent. Based on existing evidence of population threshold and human mobility patterns, we formulated a spatial contact model to describe the appropriate form of transmission with initial growth at low density and saturation at higher density. We show that the two mechanisms are extreme cases that do not capture real population movement across all scales. Empirical data of human and wildlife diseases indicate that a nonlinear function may work better when looking at the full spectrum of densities. This estimation can be applied to large areas with population mixing in general activities. For crowds with unusually large densities (e.g., transportation terminals, stadiums, or mass gatherings), the lack of organized social contact structure deviates the physical contacts towards a special case of the spatial contact model - the dynamics of kinetic gas molecule collision. In this case, an ideal gas model with van der Waals correction fits well; existing movement observation data and the contact rate between individuals is estimated using kinetic theory. A complete picture of contact rate scaling with population density may help clarify the definition of transmission rates in heterogeneous, large-scale spatial systems. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
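
    A sketch contrasting the three scalings (Python): the density-dependent and frequency-dependent extremes, and a saturating form with initial growth at low density and saturation at high density. The Michaelis-Menten-like saturating function and its half-saturation constant are illustrative choices, not the paper's fitted function.

      import numpy as np

      def density_dependent(rho, beta=0.2):
          return beta * rho                    # contacts grow linearly

      def frequency_dependent(rho, c=8.0):
          return np.full_like(rho, c)          # contacts independent of density

      def saturating(rho, c_max=8.0, rho_half=40.0):
          """Initial growth at low density, saturation at high density."""
          return c_max * rho / (rho_half + rho)

      rho = np.array([1.0, 10.0, 100.0, 1000.0])   # people per km^2
      for f in (density_dependent, frequency_dependent, saturating):
          print(f.__name__, np.round(f(rho), 2))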

  19. Modelling present-day basal melt rates for Antarctic ice shelves using a parametrization of buoyant meltwater plumes

    NASA Astrophysics Data System (ADS)

    Lazeroms, Werner M. J.; Jenkins, Adrian; Hilmar Gudmundsson, G.; van de Wal, Roderik S. W.

    2018-01-01

    Basal melting below ice shelves is a major factor in mass loss from the Antarctic Ice Sheet, which can contribute significantly to possible future sea-level rise. Therefore, it is important to have an adequate description of the basal melt rates for use in ice-dynamical models. Most current ice models use rather simple parametrizations based on the local balance of heat between ice and ocean. In this work, however, we use a recently derived parametrization of the melt rates based on a buoyant meltwater plume travelling upward beneath an ice shelf. This plume parametrization combines a non-linear ocean temperature sensitivity with an inherent geometry dependence, which is mainly described by the grounding-line depth and the local slope of the ice-shelf base. For the first time, this type of parametrization is evaluated on a two-dimensional grid covering the entire Antarctic continent. In order to apply the essentially one-dimensional parametrization to realistic ice-shelf geometries, we present an algorithm that determines effective values for the grounding-line depth and basal slope in any point beneath an ice shelf. Furthermore, since detailed knowledge of temperatures and circulation patterns in the ice-shelf cavities is sparse or absent, we construct an effective ocean temperature field from observational data with the purpose of matching (area-averaged) melt rates from the model with observed present-day melt rates. Our results qualitatively replicate large-scale observed features in basal melt rates around Antarctica, not only in terms of average values, but also in terms of the spatial pattern, with high melt rates typically occurring near the grounding line. The plume parametrization and the effective temperature field presented here are therefore promising tools for future simulations of the Antarctic Ice Sheet requiring a more realistic oceanic forcing.

  20. Mathematical model of cycad cones' thermogenic temperature responses: inverse calorimetry to estimate metabolic heating rates.

    PubMed

    Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I

    2012-12-21

    A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two-week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter-estimation-based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the bio-physics of thermogenic plants. Copyright © 2012 Elsevier Ltd. All rights reserved.
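
    A sketch of the inverse-calorimetry idea under a lumped energy balance, m*c*dT/dt = Q_met(t) - h*A*(T - T_amb), recovering the metabolic heating rate from a temperature record. The cone model in the paper includes more heat-transfer terms; the heat capacity, conductance and synthetic temperature trace below are placeholders.

      import numpy as np

      m_c = 50.0     # lumped heat capacity of the cone, J/K (placeholder)
      hA  = 0.5      # convective conductance, W/K (placeholder)

      t = np.linspace(0, 24 * 3600, 500)               # one day, in seconds
      T_amb = 25.0 + 3.0 * np.sin(2 * np.pi * t / 86400)
      # Synthetic cone temperature with a midday thermogenic bump:
      T = T_amb + 10.0 * np.exp(-((t - 13 * 3600) / 7200.0) ** 2)

      # Inverse calorimetry: Q_met = m*c*dT/dt + h*A*(T - T_amb)
      Q_met = m_c * np.gradient(T, t) + hA * (T - T_amb)
      print("peak metabolic heating (W):", round(Q_met.max(), 2))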

  1. Visual Perception Based Rate Control Algorithm for HEVC

    NASA Astrophysics Data System (ADS)

    Feng, Zeqi; Liu, PengYu; Jia, Kebin

    2018-01-01

    For HEVC, rate control is an indispensable video coding technology that alleviates the contradiction between video quality and limited encoding resources during video communication. However, the HEVC benchmark rate control algorithm ignores subjective visual perception: for key focus regions, LCU-level bit allocation is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bitrate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality under various video applications.
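
    A sketch of weight-based LCU bit allocation (Python). The luminance/motion weighting below is a stand-in for the paper's perceptual weights, and the proportional split of the frame budget is the generic form used in weight-based rate control; both are assumptions.

      import numpy as np

      def allocate_bits(frame_budget, luma_contrast, motion):
          """Give each LCU a share of the frame budget proportional to a
          perceptual weight built from luminance contrast and motion."""
          w = 1.0 + 0.5 * luma_contrast + 1.0 * motion   # illustrative weights
          return frame_budget * w / w.sum()

      rng = np.random.default_rng(4)
      n_lcu = 8
      bits = allocate_bits(frame_budget=20000,
                           luma_contrast=rng.uniform(0, 1, n_lcu),
                           motion=rng.uniform(0, 1, n_lcu))
      print("per-LCU bit targets:", bits.astype(int), "sum:", int(bits.sum()))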

  2. Hindered disulfide bonds to regulate release rate of model drug from mesoporous silica.

    PubMed

    Nadrah, Peter; Maver, Uroš; Jemec, Anita; Tišler, Tatjana; Bele, Marjan; Dražić, Goran; Benčina, Mojca; Pintar, Albin; Planinšek, Odon; Gaberšček, Miran

    2013-05-01

    With the advancement of drug delivery systems based on mesoporous silica nanoparticles (MSNs), a simple and efficient method of regulating the drug release kinetics is needed. We developed redox-responsive release systems with three levels of hindrance around the disulfide bond. A model drug (rhodamine B dye) was loaded into the MSNs' mesoporous voids. The pore opening was capped with β-cyclodextrin in order to prevent leakage of the drug. Indeed, in the absence of a reducing agent the systems exhibited little leakage, while the addition of dithiothreitol cleaved the disulfide bonds and enabled the release of the cargo. The release rate and the amount of released dye were tuned by the level of hindrance around the disulfide bonds, with increased hindrance causing a decrease in the release rate as well as in the amount of released drug. Thus, we demonstrated the ability of the present mesoporous systems to intrinsically control the release rate and the amount of released cargo through only minor structural variations. Furthermore, an in vivo experiment on zebrafish confirmed that the present model delivery system is non-teratogenic.

  3. Strengthened PAN-based carbon fibers obtained by slow heating rate carbonization

    PubMed Central

    Kim, Min-A; Jang, Dawon; Tejima, Syogo; Cruz-Silva, Rodolfo; Joh, Han-Ik; Kim, Hwan Chul; Lee, Sungho; Endo, Morinobu

    2016-01-01

    Large efforts have been made over the last 40 years to increase the mechanical strength of polyacrylonitrile (PAN)-based carbon fibers (CFs) using a variety of chemical or physical protocols. In this paper, we report a new method to increase CFs mechanical strength using a slow heating rate during the carbonization process. This new approach increases both the carbon sp3 bonding and the number of nitrogen atoms with quaternary bonding in the hexagonal carbon network. Theoretical calculations support a crosslinking model promoted by the interstitial carbon atoms located in the graphitic interlayer spaces. The improvement in mechanical performance by a controlled crosslinking between the carbon hexagonal layers of the PAN based CFs is a new concept that can contribute further in the tailoring of CFs performance based on the understanding of their microstructure down to the atomic scale. PMID:27004752

  4. Strengthened PAN-based carbon fibers obtained by slow heating rate carbonization.

    PubMed

    Kim, Min-A; Jang, Dawon; Tejima, Syogo; Cruz-Silva, Rodolfo; Joh, Han-Ik; Kim, Hwan Chul; Lee, Sungho; Endo, Morinobu

    2016-03-23

    Large efforts have been made over the last 40 years to increase the mechanical strength of polyacrylonitrile (PAN)-based carbon fibers (CFs) using a variety of chemical or physical protocols. In this paper, we report a new method to increase CFs mechanical strength using a slow heating rate during the carbonization process. This new approach increases both the carbon sp(3) bonding and the number of nitrogen atoms with quaternary bonding in the hexagonal carbon network. Theoretical calculations support a crosslinking model promoted by the interstitial carbon atoms located in the graphitic interlayer spaces. The improvement in mechanical performance by a controlled crosslinking between the carbon hexagonal layers of the PAN based CFs is a new concept that can contribute further in the tailoring of CFs performance based on the understanding of their microstructure down to the atomic scale.

  5. Do Physicians Respond to Web-Based Patient Ratings? An Analysis of Physicians’ Responses to More Than One Million Web-Based Ratings Over a Six-Year Period

    PubMed Central

    Sauter, Lisa; Jablonski, Lisa; Sander, Uwe; Taheri-Zadeh, Fatemeh

    2017-01-01

    Background Physician-rating websites (PRWs) may lead to quality improvements in case they enable and establish a peer-to-peer communication between patients and physicians. Yet, we know little about whether and how physicians respond on the Web to patient ratings. Objective The objective of this study was to describe trends in physicians’ Web-based responses to patient ratings over time, to identify what physician characteristics influence Web-based responses, and to examine the topics physicians are likely to respond to. Methods We analyzed physician responses to more than 1 million patient ratings displayed on the German PRW, jameda, from 2010 to 2015. Quantitative analysis contained chi-square analyses and the Mann-Whitney U test. Quantitative content techniques were applied to determine the topics physicians respond to based on a randomly selected sample of 600 Web-based ratings and corresponding physician responses. Results Overall, physicians responded to 1.58% (16,640/1,052,347) of all Web-based ratings, with an increasing trend over time from 0.70% (157/22,355) in 2010 to 1.88% (6377/339,919) in 2015. Web-based ratings that were responded to had significantly worse rating results than ratings that were not responded to (2.15 vs 1.74, P<.001). Physicians who respond on the Web to patient ratings differ significantly from nonresponders regarding several characteristics such as gender and patient recommendation results (P<.001 each). Regarding scaled-survey rating elements, physicians were most likely to respond to the waiting time within the practice (19.4%, 99/509) and the time spent with the patient (18.3%, 110/600). Almost one-third of topics in narrative comments were answered by the physicians (30.66%, 382/1246). Conclusions So far, only a minority of physicians have taken the chance to respond on the Web to patient ratings. This is likely because of (1) the low awareness of PRWs among physicians, (2) the fact that only a few PRWs enable physicians to

  6. Oxygen consumption rates by different oenological tannins in a model wine solution.

    PubMed

    Pascual, Olga; Vignault, Adeline; Gombau, Jordi; Navarro, Maria; Gómez-Alonso, Sergio; García-Romero, Esteban; Canals, Joan Miquel; Hermosín-Gutíerrez, Isidro; Teissedre, Pierre-Louis; Zamora, Fernando

    2017-11-01

    The kinetics of oxygen consumption by different oenological tannins were measured in a model wine solution using a non-invasive method based on luminescence. The results indicate that the oxygen consumption rate follows second-order kinetics, depending on the tannin and oxygen concentrations. They also confirm that the oxygen consumption rate is influenced by temperature in accordance with the Arrhenius law. The indications are that ellagitannins are the fastest oxygen consumers among the different oenological tannins, followed in decreasing order by quebracho tannins, skin tannins, seed tannins and finally gallotannins. This methodology can therefore be proposed as an index for determining the effectiveness of different commercial tannins in protecting wines against oxidation. Copyright © 2017 Elsevier Ltd. All rights reserved.
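
    A sketch of the reported kinetics, -d[O2]/dt = k(T)*[tannin]*[O2], with k(T) following the Arrhenius law. The reference rate constant, activation energy, concentrations and 1:1 stoichiometry are placeholders, not the measured values.

      import numpy as np

      R = 8.314462618   # gas constant, J/(mol*K)

      def k_second_order(T, k_ref=1e-5, T_ref=298.15, Ea=60e3):
          """Arrhenius scaling of a second-order rate constant (mM^-1 s^-1);
          k_ref and Ea are placeholders, not the measured values."""
          return k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

      def oxygen_remaining(T=308.15, tannin0=1.0, o2_0=0.25, dt=60.0, n=5000):
          """Euler integration of -d[O2]/dt = k [tannin][O2] (mM, seconds)."""
          k, o2, tannin = k_second_order(T), o2_0, tannin0
          for _ in range(n):
              rate = k * tannin * o2
              o2 -= dt * rate
              tannin -= dt * rate   # assume 1:1 consumption stoichiometry
          return o2

      print("O2 remaining after ~83 h (mM):", round(oxygen_remaining(), 4))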

  7. Growth rate in the dynamical dark energy models.

    PubMed

    Avsajanishvili, Olga; Arkhipova, Natalia A; Samushia, Lado; Kahniashvili, Tina

    Dark energy models with a slowly rolling cosmological scalar field provide a popular alternative to the standard, time-independent cosmological constant model. We study the simultaneous evolution of background expansion and growth in the scalar field model with the Ratra-Peebles self-interaction potential. We use recent measurements of the linear growth rate and the baryon acoustic oscillation peak positions to constrain the model parameter α that describes the steepness of the scalar field potential.

  8. An agent-based modelling framework to explore the role of social media and stubborn people on evacuation rates during flooding events

    NASA Astrophysics Data System (ADS)

    Du, E.; Cai, X.; Minsker, B. S.; Sun, Z.

    2017-12-01

    Flood warnings from various information sources are important for individuals making evacuation decisions during a flood event. In this study, we develop a general opinion dynamics model to simulate how individuals update their flood hazard awareness when exposed to multiple information sources, including global broadcast, social media, and observations of neighbors' actions. The opinion dynamics model is coupled with a traffic model to simulate the evacuation processes of a residential community over a given transportation network. Through various scenarios, we investigate how social media affect the opinion dynamics and evacuation processes. We find that stronger social media make evacuation processes more sensitive to changes in global broadcast and neighbor observations and thus impose larger uncertainty on evacuation rates (i.e., a larger range of evacuation rates across information sources). For instance, evacuation rates are lower when social media become more influential and individuals have less trust in global broadcast. Stubborn individuals can significantly affect the opinion dynamics and reduce evacuation rates. In addition, evacuation rates respond to the percentage of stubborn agents in a non-linear manner: above a threshold, the impact of stubborn agents is intensified by stronger social media. These results highlight the role of social media in flood evacuation processes and the need to monitor social media so that misinformation can be corrected in a timely manner. The joint impacts of social media, quality of flood warnings and transportation capacity on evacuation rates are also discussed.
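
    A minimal sketch of the coupled opinion dynamics (Python). The influence weights, evacuation threshold, random neighborhood and relaxation rate are illustrative assumptions, and the traffic model is omitted; it only shows how stubborn agents depress the evacuation rate.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 400
      awareness = rng.uniform(0, 0.3, n)         # initial hazard awareness
      stubborn = rng.random(n) < 0.10            # 10% never update
      neighbors = [rng.choice(n, 6, replace=False) for _ in range(n)]

      w_broadcast, w_social, w_neigh = 0.5, 0.4, 0.1   # influence weights
      threshold = 0.7                                  # evacuate above this

      for step in range(50):
          broadcast = 1.0                              # official flood warning
          social = awareness.mean() + rng.normal(0, 0.05)  # noisy social media
          evacuated = awareness > threshold
          for i in range(n):
              if stubborn[i] or evacuated[i]:
                  continue                         # evacuees and stubborn stay put
              neigh_signal = evacuated[neighbors[i]].mean()
              target = (w_broadcast * broadcast + w_social * social
                        + w_neigh * neigh_signal)
              awareness[i] += 0.2 * (target - awareness[i])

      print("evacuation rate:", round((awareness > threshold).mean(), 3))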

  9. Improving Video Based Heart Rate Monitoring.

    PubMed

    Lin, Jian; Rozado, David; Duenser, Andreas

    2015-01-01

    Non-contact measurement of the cardiac pulse can provide robust measurement of heart rate (HR) without the annoyance of attaching electrodes to the body. In this paper we explore a novel and reliable method to carry out video-based HR estimation and propose various performance improvements over existing approaches. The investigated method uses Independent Component Analysis (ICA) to detect the underlying HR signal from the mixed source signal present in the RGB channels of the image. The original ICA algorithm was implemented, and several modifications were explored in order to determine which could be optimal for accurate HR estimation. Using statistical analysis, we compared the cardiac pulse rate estimates of the different methods on the extracted videos against a commercially available oximeter. We found that some of these methods are quite effective and efficient in terms of improving the accuracy and latency of the system. We have made the code of our algorithms openly available to the scientific community so that other researchers can explore how to integrate video-based HR monitoring into novel health technology applications. We conclude by noting that recent advances in video-based HR monitoring permit computers to be aware of a user's psychophysiological status in real time.
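
    A sketch of the ICA-plus-spectral-peak pipeline (Python with scikit-learn's FastICA). The synthetic RGB traces stand in for face-region pixel averages from video; the mixing gains, noise levels and HR band limits are assumptions.

      import numpy as np
      from sklearn.decomposition import FastICA

      fs = 30.0                                    # video frame rate, Hz
      t = np.arange(0, 30, 1 / fs)                 # 30 s of frames
      pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)   # 72 bpm cardiac signal
      rng = np.random.default_rng(6)
      # Synthetic RGB channel means: pulse mixed with noise and slow drift.
      rgb = np.column_stack([pulse * g + 0.05 * rng.normal(size=t.size)
                             + 0.1 * np.sin(2 * np.pi * 0.05 * t)
                             for g in (0.7, 1.0, 0.5)])

      sources = FastICA(n_components=3, random_state=0).fit_transform(rgb)

      # Pick the component whose spectral peak lies in a plausible HR band.
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      band = (freqs > 0.75) & (freqs < 4.0)        # 45-240 bpm
      best_bpm, best_power = 0.0, 0.0
      for s in sources.T:
          spec = np.abs(np.fft.rfft(s - s.mean())) ** 2
          i = np.argmax(spec * band)               # peak restricted to band
          if spec[i] > best_power:
              best_power, best_bpm = spec[i], freqs[i] * 60
      print("estimated heart rate: %.1f bpm" % best_bpm)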

  10. Not that neglected! Base rates influence related and unrelated judgments.

    PubMed

    Białek, Michał

    2017-06-01

    It is claimed that people are unable (or unwilling) to incorporate prior probabilities into posterior assessments, such as their estimation of the likelihood of a person with characteristics typical of an engineer actually being an engineer given that they are drawn from a sample including a very small number of engineers. This paper shows that base rates are incorporated in classifications (Experiment 1) and, moreover, that base rates also affect unrelated judgments, such as how well a provided description of a person fits a stereotypical engineer (Experiment 2). Finally, Experiment 3 shows that individuals who make both types of assessments - though using base rates to the same extent in the former judgments - are able to decrease the extent to which they incorporate base rates in the latter judgments. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Study of tissue oxygen supply rate in a macroscopic photodynamic therapy singlet oxygen model

    NASA Astrophysics Data System (ADS)

    Zhu, Timothy C.; Liu, Baochang; Penjweini, Rozhin

    2015-03-01

    An appropriate expression for the oxygen supply rate (Γs) is required for the macroscopic modeling of the complex mechanisms of photodynamic therapy (PDT). It is unrealistic to model the actual heterogeneous tumor microvascular networks coupled with the PDT processes because of the large computational requirement. In this study, a theoretical microscopic model based on uniformly distributed Krogh cylinders is used to calculate Γs=g (1-[O]/[]0) that can replace the complex modeling of blood vasculature while maintaining a reasonable resemblance to reality; g is the maximum oxygen supply rate and [O]/[]0 is the volume-average tissue oxygen concentration normalized to its value prior to PDT. The model incorporates kinetic equations of oxygen diffusion and convection within capillaries and oxygen saturation from oxyhemoglobin. Oxygen supply to the tissue is via diffusion from the uniformly distributed blood vessels. Oxygen can also diffuse along the radius and the longitudinal axis of the cylinder within tissue. The relations of Γs to [3O2]/] are examined for a biologically reasonable range of the physiological parameters for the microvasculature and several light fluence rates (ϕ). The results show a linear relationship between Γs and [3O2]/], independent of ϕ and photochemical parameters; the obtained g ranges from 0.4 to 1390 μM/s.

  12. Modeling and quantification of repolarization feature dependency on heart rate.

    PubMed

    Minchole, A; Zacur, E; Pueyo, E; Laguna, P

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize the rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent optimization methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where the rate adaptation of the QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize the rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing the rate adaptation of repolarization features has been proposed; the Tpe interval has been shown to be rate related, with a shorter memory lag than the QT interval.

  13. Selective visualization of fluorescent sterols in Caenorhabditis elegans by bleach-rate-based image segmentation.

    PubMed

    Wüstner, Daniel; Landt Larsen, Ane; Faergeman, Nils J; Brewer, Jonathan R; Sage, Daniel

    2010-04-01

The nematode Caenorhabditis elegans is a genetically tractable model organism for investigating sterol transport. In vivo imaging of the fluorescent sterol, dehydroergosterol (DHE), is challenged by C. elegans' high autofluorescence in the same spectral region as the emission of DHE. We present a method to detect DHE selectively, based on its rapid bleaching kinetics compared to cellular autofluorescence. Worms were repeatedly imaged on an ultraviolet-sensitive wide-field (UV-WF) microscope, and bleaching kinetics of DHE were fitted on a pixel basis to mathematical models describing the intensity decay. Bleach-rate constants were determined for DHE in vivo and confirmed in model membranes. Using this method, we could detect enrichment of DHE in specific tissues like the nerve ring, the spermatheca, and oocytes. We confirm these results in C. elegans gut-granule-loss (glo) mutants with reduced autofluorescence and compare our method with three-photon excitation microscopy of sterol in selected tissues. Bleach-rate-based UV-WF imaging is a useful tool for genetic screening experiments on sterol transport, as exemplified by RNA interference against the rme-2 gene coding for the yolk receptor and against worm homologues of Niemann-Pick C disease proteins. Our approach is generally useful for identifying fluorescent probes in the presence of high cellular autofluorescence.

  14. Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach

    NASA Astrophysics Data System (ADS)

    Tsai, Bi-Huei; Chang, Chih-Huei

    2009-08-01

Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift according to economic prosperity, rather than remaining fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further focuses on macroeconomic factors and applies the rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using the training sample from 1987 to 2004, and their levels of accuracy are compared on the test sample from 2005 to 2007. As for the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than the one without them, suggesting that accuracy is not improved for one-stage models that pool firm-specific and macroeconomic factors together. As regards the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. When the two-stage models employ the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error becomes lower than that of the one-stage models. The two-stage models presented in this paper therefore have incremental usefulness in predicting financial distress.

  15. The European style arithmetic Asian option pricing with stochastic interest rate based on Black Scholes model

    NASA Astrophysics Data System (ADS)

    Winarti, Yuyun Guna; Noviyanti, Lienda; Setyanto, Gatot R.

    2017-03-01

Stock investment is a high-risk investment, so derivative securities exist to reduce these risks; the Asian option is one of them. The most fundamental problem for any option is its pricing. The main factors determining an option's price are the underlying asset price, strike price, maturity date, volatility, risk-free interest rate, and dividends. Option pricing models usually assume that the risk-free interest rate is constant, while in reality it is a stochastic process. The distribution of the arithmetic average of asset prices has no closed form, so the arithmetic Asian option is priced using a modified Black-Scholes model; in this research, the modification uses Curran's approximation. This research focuses on arithmetic Asian option pricing without dividends. The data used are the daily closing prices of Telkom stock from January 1, 2016 to June 30, 2016. Finally, the resulting option prices can be used in an option trading strategy.
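
    Since Curran's closed-form approximation is somewhat involved, a Monte Carlo benchmark is often used to check it. The sketch below is an assumption-laden illustration, not the paper's method: constant risk-free rate, no dividends, geometric Brownian motion:

        # Monte Carlo pricing of an arithmetic-average Asian call under
        # Black-Scholes dynamics with a constant risk-free rate.
        import numpy as np

        def asian_call_mc(s0, k, r, sigma, t, n_steps=126, n_paths=100_000,
                          seed=1):
            rng = np.random.default_rng(seed)
            dt = t / n_steps
            z = rng.standard_normal((n_paths, n_steps))
            drift = (r - 0.5 * sigma**2) * dt
            log_paths = np.cumsum(drift + sigma * np.sqrt(dt) * z, axis=1)
            avg = np.mean(s0 * np.exp(log_paths), axis=1)  # arithmetic average
            payoff = np.maximum(avg - k, 0.0)
            return np.exp(-r * t) * payoff.mean()

        print(asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=0.5))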

  16. Microscopic modeling of gas-surface scattering. I. A combined molecular dynamics-rate equation approach

    NASA Astrophysics Data System (ADS)

    Filinov, A.; Bonitz, M.; Loffhagen, D.

    2018-06-01

A combination of first-principles molecular dynamics (MD) simulations with a rate-equation model (the MD-RE approach) is presented to study the trapping and scattering of rare gas atoms from metal surfaces. The temporal evolution of the atom fractions that are either adsorbed or scattered into the continuum is investigated in detail. We demonstrate that this description requires trapped, quasi-trapped and scattering states, and present an energetic definition of these states. The rate equations contain the transition probabilities between the states. We demonstrate how these rate equations can be derived from kinetic theory. Moreover, we present a rigorous way to determine the transition probabilities from a microscopic analysis of the particle trajectories generated by MD simulations. Once the system reaches quasi-equilibrium, the rates converge to stationary values, and the subsequent thermal adsorption/desorption dynamics is completely described by the rate equations without the need to perform further time-consuming MD simulations. As a proof of concept of our approach, MD simulations for argon atoms interacting with a platinum (111) surface are presented. A detailed deterministic trajectory analysis is performed, and the transition rates are constructed. The dependence of the rates on the incidence conditions and the lattice temperature is analyzed. Based on this example, we analyze the time scale on which the gas-surface system approaches the quasi-stationary state. The MD-RE model is highly relevant for plasma-surface modeling, as it makes it possible to extend accurate simulations to long, experimentally relevant time scales. Its application to the computation of atomic sticking probabilities is given in the second part (paper II).
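
    The rate-equation (RE) stage can be illustrated with a three-state system. In the sketch below the transition rates are invented placeholders; in the paper they are extracted from MD trajectory analysis:

        # Trapped (T), quasi-trapped (Q) and continuum (C) populations evolving
        # under first-order rate equations; C is treated as absorbing here.
        import numpy as np
        from scipy.integrate import solve_ivp

        K = {"TQ": 0.02, "QT": 0.05, "TC": 0.001, "QC": 0.10}  # assumed, 1/ps

        def rhs(t, y):
            T, Q, C = y
            dT = -(K["TQ"] + K["TC"]) * T + K["QT"] * Q
            dQ = K["TQ"] * T - (K["QT"] + K["QC"]) * Q
            dC = K["TC"] * T + K["QC"] * Q
            return [dT, dQ, dC]

        sol = solve_ivp(rhs, (0.0, 200.0), [0.7, 0.3, 0.0],
                        t_eval=np.linspace(0.0, 200.0, 5))
        print(sol.y[:, -1])   # long-time fractions; atoms end up scattered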

  17. Forecasting induced seismicity rate and Mmax using calibrated numerical models

    NASA Astrophysics Data System (ADS)

    Dempsey, D.; Suckale, J.

    2016-12-01

At Groningen, The Netherlands, several decades of induced seismicity from gas extraction have culminated in an M 3.6 event (mid-2012). From a public safety and commercial perspective, it is desirable to anticipate future seismicity outcomes at Groningen. One way to quantify earthquake risk is Probabilistic Seismic Hazard Analysis (PSHA), which requires an estimate of the future seismicity rate and its magnitude-frequency distribution (MFD). This approach is effective at quantifying risk from tectonic events because the seismicity rate, once measured, is almost constant over timescales of interest. In contrast, rates of induced seismicity vary significantly over building lifetimes, largely in response to changes in injection or extraction. Thus, the key to extending PSHA to induced earthquakes is to estimate future changes of the seismicity rate in response to some proposed operating schedule. Numerical models can describe the physical link between fluid pressure, effective stress change, and the earthquake process (triggering and propagation). However, models with predictive potential for individual earthquakes face the difficulty of characterizing specific heterogeneity - stress, strength, roughness, etc. - at locations of interest. Modeling catalogs of earthquakes provides a means of averaging over this uncertainty, focusing instead on the collective features of the seismicity, e.g., its rate and MFD. The model we use incorporates fluid pressure and stress changes to describe nucleation and crack-like propagation of earthquakes on stochastically characterized 1D faults. This enables simulation of synthetic catalogs of induced seismicity from which the seismicity rate, location and MFD are extracted. A probability distribution for Mmax - the largest event in some specified time window - is also computed. Because the model captures the physics linking seismicity to changes in the reservoir, earthquake observations and operating information can be used to calibrate a

  18. A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.

    PubMed

    Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu

    2015-12-01

    Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.

  19. Accounting- versus economic-based rates of return: implications for profitability measures in the pharmaceutical industry.

    PubMed

    Skrepnek, Grant H

    2004-01-01

Accounting-based profits have indicated that pharmaceutical firms have achieved greater returns relative to other sectors. However, partially due to the theoretically inappropriate reporting of research and development (R&D) expenditures according to generally accepted accounting principles, evidence suggests that a substantial upward bias is present in accounting-based rates of return for corporations with high levels of intangible assets. Given the intensity of R&D in pharmaceutical firms, accounting-based profit metrics in the drug sector may be affected to a greater extent than in other industries. The aim of this work was to address measurement issues associated with corporate performance and the factors that contribute to the bias within accounting-based rates of return. Seminal and broadly cited works on the subject of accounting- versus economic-based rates of return were reviewed from the economic and finance literature, with an emphasis placed on issues and scientific evidence directly related to the drug development process and pharmaceutical industry. With international convergence and harmonization of accounting standards being imminent, stricter adherence to theoretically sound economic principles is advocated, particularly those based on discounted cash-flow methods. Researchers, financial analysts, and policy makers must be cognizant of the biases and limitations present within numerous corporate performance measures. Furthermore, the development of more robust and valid economic models of the pharmaceutical industry is required to capture the unique dimensions of risk and return of the drug development process. Empirical work has illustrated that estimates of economic-based rates of return range from approximately 2 to 11 percentage points below various accounting-based rates of return for drug companies. Because differences in the nature of risk and uncertainty borne by drug manufacturers versus other sectors make comparative assessments

  20. Modeling decay rates of dead wood in a neotropical forest.

    PubMed

    Hérault, Bruno; Beauchêne, Jacques; Muller, Félix; Wagner, Fabien; Baraloto, Christopher; Blanc, Lilian; Martin, Jean-Michel

    2010-09-01

Variation of dead wood decay rates among tropical trees remains one source of uncertainty in global models of the carbon cycle. Taking advantage of a broad forest plot network surveyed for tree mortality over a 23-year period, we measured the remaining fraction of boles from 367 dead trees from 26 neotropical species varying widely in wood density (0.23-1.24 g/cm³) and tree circumference at the time of death (31.5-272.0 cm). We modeled decay rates within a Bayesian framework, assuming a first-order differential equation to model the decomposition process, and tested for the effects of forest management (selective logging vs. unexploited), mode of death (standing vs. downed), and topographical level (bottomlands vs. hillsides vs. hilltops) on wood decay rates. The general decay model predicts the observed remaining fraction of dead wood (R² = 60%) with only two biological predictors: tree circumference at the time of death and wood specific density. Neither selective logging nor local topography had a differential effect on wood decay rates. Including the mode of death in the model revealed that standing dead trees decomposed faster than downed dead trees, but the gain in model accuracy remains rather marginal. Overall, these results suggest that the release of carbon from tropical dead trees to the atmosphere can be estimated simply using tree circumference at the time of death and wood density.
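
    The underlying model is compact enough to state in a few lines. The sketch below integrates the first-order decay equation; the coefficients linking the rate constant to wood density and circumference are made up for illustration, not the fitted Bayesian estimates:

        # First-order decay dm/dt = -k m, with k depending log-linearly on
        # wood specific density and circumference at the time of death.
        import numpy as np

        def decay_rate(density_g_cm3, circumference_cm,
                       b0=-1.5, b1=-1.0, b2=-0.005):   # hypothetical values
            return np.exp(b0 + b1 * density_g_cm3 + b2 * circumference_cm)

        def remaining_fraction(years, k):
            return np.exp(-k * years)                  # exact ODE solution

        k = decay_rate(density_g_cm3=0.6, circumference_cm=100.0)
        print(k, remaining_fraction(23.0, k))          # after the 23-yr survey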

  1. The Rasch Rating Model and the Disordered Threshold Controversy

    ERIC Educational Resources Information Center

    Adams, Raymond J.; Wu, Margaret L.; Wilson, Mark

    2012-01-01

    The Rasch rating (or partial credit) model is a widely applied item response model that is used to model ordinal observed variables that are assumed to collectively reflect a common latent variable. In the application of the model there is considerable controversy surrounding the assessment of fit. This controversy is most notable when the set of…

  2. Category Rating Is Based on Prototypes and Not Instances: Evidence from Feedback-Dependent Context Effects

    ERIC Educational Resources Information Center

    Petrov, Alexander A.

    2011-01-01

    Context effects in category rating on a 7-point scale are shown to reverse direction depending on feedback. Context (skewed stimulus frequencies) was manipulated between and feedback within subjects in two experiments. The diverging predictions of prototype- and exemplar-based scaling theories were tested using two representative models: ANCHOR…

  3. Evaluation of rate law approximations in bottom-up kinetic models of metabolism.

    PubMed

    Du, Bin; Zielinski, Daniel C; Kavvas, Erol S; Dräger, Andreas; Tan, Justin; Zhang, Zhen; Ruggiero, Kayla E; Arzumanyan, Garri A; Palsson, Bernhard O

    2016-06-06

    The mechanistic description of enzyme kinetics in a dynamic model of metabolism requires specifying the numerical values of a large number of kinetic parameters. The parameterization challenge is often addressed through the use of simplifying approximations to form reaction rate laws with reduced numbers of parameters. Whether such simplified models can reproduce dynamic characteristics of the full system is an important question. In this work, we compared the local transient response properties of dynamic models constructed using rate laws with varying levels of approximation. These approximate rate laws were: 1) a Michaelis-Menten rate law with measured enzyme parameters, 2) a Michaelis-Menten rate law with approximated parameters, using the convenience kinetics convention, 3) a thermodynamic rate law resulting from a metabolite saturation assumption, and 4) a pure chemical reaction mass action rate law that removes the role of the enzyme from the reaction kinetics. We utilized in vivo data for the human red blood cell to compare the effect of rate law choices against the backdrop of physiological flux and concentration differences. We found that the Michaelis-Menten rate law with measured enzyme parameters yields an excellent approximation of the full system dynamics, while other assumptions cause greater discrepancies in system dynamic behavior. However, iteratively replacing mechanistic rate laws with approximations resulted in a model that retains a high correlation with the true model behavior. Investigating this consistency, we determined that the order of magnitude differences among fluxes and concentrations in the network were greatly influential on the network dynamics. We further identified reaction features such as thermodynamic reversibility, high substrate concentration, and lack of allosteric regulation, which make certain reactions more suitable for rate law approximations. Overall, our work generally supports the use of approximate rate laws when
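
    The contrast between the first and fourth approximations can be made concrete for a single irreversible reaction. The sketch below (illustrative parameter values) integrates S -> P under a Michaelis-Menten law and under a pure mass-action law with the enzyme removed:

        import numpy as np
        from scipy.integrate import solve_ivp

        VMAX, KM = 1.0, 0.5
        K_MA = VMAX / KM                 # mass-action rate matching the
                                         # MM law at low substrate
        def michaelis_menten(t, y):
            s, p = y
            v = VMAX * s / (KM + s)
            return [-v, v]

        def mass_action(t, y):
            s, p = y
            v = K_MA * s                 # enzyme plays no role here
            return [-v, v]

        for rhs in (michaelis_menten, mass_action):
            sol = solve_ivp(rhs, (0.0, 5.0), [2.0, 0.0], t_eval=[5.0])
            print(rhs.__name__, sol.y[0, -1])  # the laws diverge at high S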

  4. Using Google Earth to Explore Strain Rate Models of Southern California

    NASA Astrophysics Data System (ADS)

    Richard, G. A.; Bell, E. A.; Holt, W. E.

    2007-12-01

    A series of strain rate models for the Transverse Ranges of southern California were developed based on Quaternary fault slip data and geodetic data from high precision GPS stations in southern California. Pacific-North America velocity boundary conditions are applied for all models. Topography changes are calculated using the model dilatation rates, which predict crustal thickness changes under the assumption of Airy isostasy and a specified rate of crustal volume loss through erosion. The models were designed to produce graphical and numerical output representing the configuration of the region from 3 million years ago to 3 million years into the future at intervals of 50 thousand years. Using a North American reference frame, graphical output for the topography and faults and numerical output for locations of faults and points on the crust marked by the locations on cities were used to create data in KML format that can be used in Google Earth to represent time intervals of 50 thousand years. As markers familiar to students, the cities provide a geographic context that can be used to quantify crustal movement, using the Google Earth ruler tool. By comparing distances that markers for selected cities have moved in various parts of the region, students discover that the greatest amount of crustal deformation has occurred in the vicinity of the boundary between the North American and Pacific plates. Students can also identify areas of compression or extension by finding pairs of city markers that have converged or diverged, respectively, over time. The Google Earth layers also reveal that faults that are not parallel to the plate boundary have tended to rotate clockwise due to the right lateral motion along the plate boundary zone. KML TimeSpan markup was added to two versions of the model, enabling the layers to be displayed in an automatic sequenced loop for a movie effect. The data is also available as QuickTime (.mov) and Graphics Interchange Format (.gif
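
    The KML time-animation trick is simple to reproduce. The sketch below writes a minimal KML document in which each model frame is wrapped in a TimeSpan; the city, coordinates, and the mapping of 50-kyr model steps onto calendar years are invented for illustration (KML TimeSpans expect calendar dates, so geological times must be rescaled):

        # Generate a KML document whose placemarks carry TimeSpan tags so
        # Google Earth's time slider steps through the model frames.
        def placemark(name, lon, lat, begin, end):
            return (f"  <Placemark>\n    <name>{name}</name>\n"
                    f"    <TimeSpan><begin>{begin}</begin>"
                    f"<end>{end}</end></TimeSpan>\n"
                    f"    <Point><coordinates>{lon},{lat},0</coordinates>"
                    f"</Point>\n  </Placemark>")

        frames = [placemark("Los Angeles (model step %d)" % i,
                            -118.24 + 0.01 * i, 34.05,
                            begin=2000 + i, end=2001 + i)   # stand-in years
                  for i in range(3)]
        kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
               '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
               + "\n".join(frames) + "\n</Document>\n</kml>")
        print(kml)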

  5. Pregnancy rates in HIV-positive women using contraceptives and efavirenz-based or nevirapine-based antiretroviral therapy in Kenya: a retrospective cohort study.

    PubMed

    Patel, Rena C; Onono, Maricianah; Gandhi, Monica; Blat, Cinthia; Hagey, Jill; Shade, Starley B; Vittinghoff, Eric; Bukusi, Elizabeth A; Newmann, Sara J; Cohen, Craig R

    2015-11-01

Concerns have been raised about efavirenz reducing the effectiveness of contraceptive implants. We aimed to establish whether pregnancy rates differ between HIV-positive women who use various contraceptive methods and either efavirenz-based or nevirapine-based antiretroviral therapy (ART) regimens. We did this retrospective cohort study of HIV-positive women aged 15-45 years enrolled in 19 HIV care facilities supported by Family AIDS Care and Education Services in western Kenya between Jan 1, 2011, and Dec 31, 2013. Our primary outcome was incident pregnancy diagnosed clinically. The primary exposure was a combination of contraceptive method and efavirenz-based or nevirapine-based ART regimen. We used Poisson models, adjusting for repeated measures, and demographic, behavioural, and clinical factors, to compare pregnancy rates among women receiving different contraceptive and ART combinations. 24,560 women contributed 37,635 years of follow-up with 3337 incident pregnancies. In women using implants, adjusted pregnancy incidence was 1.1 per 100 person-years (95% CI 0.72-1.5) for nevirapine-based ART users and 3.3 per 100 person-years (1.8-4.8) for efavirenz-based ART users (adjusted incidence rate ratio [IRR] 3.0, 95% CI 1.3-4.6). In women using depot medroxyprogesterone acetate, adjusted pregnancy incidence was 4.5 per 100 person-years (95% CI 3.7-5.2) for nevirapine-based ART users and 5.4 per 100 person-years (4.0-6.8) for efavirenz-based ART users (adjusted IRR 1.2, 95% CI 0.91-1.5). Women using other contraceptive methods, except for intrauterine devices and permanent methods, had 3.1-4.1 times higher rates of pregnancy than did those using implants, with 1.6-2.8 times higher rates in women using efavirenz-based ART. Although HIV-positive women using implants and efavirenz-based ART had a three-times higher risk of contraceptive failure than did those using nevirapine-based ART, these women still had lower contraceptive failure rates than did those receiving all other

  6. Modeling rate sensitivity of exercise transient responses to limb motion.

    PubMed

    Yamashiro, Stanley M; Kato, Takahide

    2014-10-01

    Transient responses of ventilation (V̇e) to limb motion can exhibit predictive characteristics. In response to a change in limb motion, a rapid change in V̇e is commonly observed with characteristics different than during a change in workload. This rapid change has been attributed to a feed-forward or adaptive response. Rate sensitivity was explored as a specific hypothesis to explain predictive V̇e responses to limb motion. A simple model assuming an additive feed-forward summation of V̇e proportional to the rate of change of limb motion was studied. This model was able to successfully account for the adaptive phase correction observed during human sinusoidal changes in limb motion. Adaptation of rate sensitivity might also explain the reduction of the fast component of V̇e responses previously reported following sudden exercise termination. Adaptation of the fast component of V̇e response could occur by reduction of rate sensitivity. Rate sensitivity of limb motion was predicted by the model to reduce the phase delay between limb motion and V̇e response without changing the steady-state response to exercise load. In this way, V̇e can respond more quickly to an exercise change without interfering with overall feedback control. The asymmetry between responses to an incremental and decremental ramp change in exercise can also be accounted for by the proposed model. Rate sensitivity leads to predicted behavior, which resembles responses observed in exercise tied to expiratory reserve volume. Copyright © 2014 the American Physiological Society.
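
    The additive feed-forward model is easy to state. In the sketch below, ventilation is a steady-state gain times limb motion plus a term proportional to its rate of change; the gains and the sinusoidal drive are illustrative, not fitted values:

        # V_E = g_ss * u + g_rate * du/dt: the derivative term advances the
        # phase of the response without altering the steady-state gain.
        import numpy as np

        t = np.linspace(0.0, 120.0, 1201)             # time (s)
        u = 1.0 + 0.5 * np.sin(2 * np.pi * t / 60.0)  # limb motion (a.u.)
        g_ss, g_rate = 10.0, 40.0
        ve = g_ss * u + g_rate * np.gradient(u, t)    # ventilation (a.u.)
        lead_samples = np.argmax(u) - np.argmax(ve)   # peak shift
        print(lead_samples * (t[1] - t[0]), "s phase lead")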

  7. Reconfigurable radio receiver with fractional sample rate converter and multi-rate ADC based on LO-derived sampling clock

    NASA Astrophysics Data System (ADS)

    Park, Sungkyung; Park, Chester Sungchung

    2018-03-01

A composite radio receiver back-end and digital front-end, made up of a delta-sigma analogue-to-digital converter (ADC) with a high-speed low-noise sampling clock generator and a fractional sample rate converter (FSRC), is proposed and designed for a multi-mode reconfigurable radio. The proposed radio receiver architecture contributes to saving chip area and thus lowering the design cost. To enable inter-radio-access-technology handover and ultimately software-defined radio reception, a reconfigurable radio receiver consisting of a multi-rate ADC with its sampling clock derived from a local oscillator, followed by a rate-adjustable FSRC for decimation, is designed. Clock phase noise and timing jitter are examined to support the effectiveness of the proposed radio receiver. An FSRC is modelled and simulated with a cubic polynomial interpolator based on the Lagrange method, and its spectral-domain view is examined in order to verify its effect on aliasing, nonlinearity and signal-to-noise ratio, giving insight into the design of the decimation chain. The sampling clock path and the radio receiver back-end data path are designed in a 90-nm CMOS process technology with a 1.2 V supply.
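
    The cubic Lagrange interpolator at the heart of such an FSRC can be sketched directly. The code below resamples a signal at a fractional ratio using the standard 4-point Lagrange basis; it illustrates the algorithm only, not the paper's hardware design:

        import numpy as np

        def lagrange_resample(x, ratio):
            """Resample x with `ratio` output samples per input interval."""
            n_out = int((len(x) - 3) * ratio)
            y = np.empty(n_out)
            for m in range(n_out):
                pos = m / ratio + 1.0     # one-sample margin on each side
                k = int(pos)              # input sample left of the target
                mu = pos - k              # fractional offset in [0, 1)
                s = x[k - 1:k + 3]        # four neighbouring samples
                c = [-mu * (mu - 1) * (mu - 2) / 6,
                     (mu + 1) * (mu - 1) * (mu - 2) / 2,
                     -(mu + 1) * mu * (mu - 2) / 2,
                     (mu + 1) * mu * (mu - 1) / 6]
                y[m] = np.dot(c, s)
            return y

        sig = np.sin(2 * np.pi * np.arange(64) / 16)
        print(lagrange_resample(sig, ratio=0.75)[:4])   # 3/4-rate decimation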

  8. Modeling heart rate variability by stochastic feedback

    NASA Technical Reports Server (NTRS)

    Amaral, L. A.; Goldberger, A. L.; Stanley, H. E.

    1999-01-01

    We consider the question of how the cardiac rhythm spontaneously self-regulates and propose a new mechanism as a possible answer. We model the neuroautonomic regulation of the heart rate as a stochastic feedback system and find that the model successfully accounts for key characteristics of cardiac variability, including the 1/f power spectrum, the functional form and scaling of the distribution of variations of the interbeat intervals, and the correlations in the Fourier phases which indicate nonlinear dynamics.

  9. Forecasting selected specific age mortality rate of Malaysia by using Lee-Carter model

    NASA Astrophysics Data System (ADS)

    Shukri Kamaruddin, Halim; Ismail, Noriszura

    2018-03-01

Observing mortality patterns and trends is important for any country that seeks to maintain a sound socio-economy over the coming projection years. A declining mortality trend gives a good impression of what a government has done for its citizens at the macro level. Selecting a particular mortality model can be tricky, depending on the method adopted. The Lee-Carter model is adopted here because of its simplicity and the reliability of its results under a regression approach. Implementations of Lee-Carter for fitting and projecting mortality have been used worldwide in much of the mortality research in developed countries. This paper studies Malaysia's past mortality pattern using the original Lee-Carter (1992) model, together with cross-sectional observation for a single age. The data, supplied by the Department of Statistics Malaysia, are indexed by age at death and year of death from 1984 to 2012. The models are fitted using RStudio, and the analysis focuses on the trend and projection of the mortality rate and age-specific mortality rates in the future. This work can be extended to different variant extensions of Lee-Carter or to any stochastic mortality tool, using the Malaysian mortality experience as the central case.
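
    The original Lee-Carter recipe, log m(x,t) = a_x + b_x k_t fitted by a singular value decomposition of the centred log-mortality matrix, fits in a dozen lines. The sketch below runs on synthetic data standing in for the Malaysian tables:

        import numpy as np

        rng = np.random.default_rng(2)
        ages, years = 20, 29                    # e.g. age groups x 1984-2012
        log_m = (-8.0 + 0.09 * np.arange(ages)[:, None]
                 - 0.02 * np.arange(years)[None, :]
                 + 0.05 * rng.standard_normal((ages, years)))

        a_x = log_m.mean(axis=1)                           # age pattern
        u, s, vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
        b_x = u[:, 0] / u[:, 0].sum()                      # sum(b_x) = 1
        k_t = s[0] * vt[0] * u[:, 0].sum()                 # mortality index

        print(b_x[:3], k_t[:3])   # k_t would then be projected, e.g. by ARIMA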

  10. A pheromone-rate-based analysis on the convergence time of ACO algorithm.

    PubMed

    Huang, Han; Wu, Chun-Guo; Hao, Zhi-Feng

    2009-08-01

Ant colony optimization (ACO) has been widely applied to solving combinatorial optimization problems in recent years. There are few studies, however, on its convergence time, which reflects how many iterations an ACO algorithm needs to converge to the optimal solution. Based on the absorbing Markov chain model, we analyze the ACO convergence time in this paper. First, we present a general result for the estimation of convergence time to reveal the relationship between convergence time and pheromone rate. This general result is then extended to a two-step analysis of the convergence time, which includes the following: 1) the iteration time that the pheromone rate spends on reaching the objective value and 2) the convergence time that is calculated with the objective pheromone rate in expectation. Furthermore, four brief ACO algorithms are investigated by using the proposed theoretical results as case studies. Finally, the conclusions of the case studies, namely that the pheromone rate and its deviation determine the expected convergence time, are numerically verified with the experimental results of four one-ant ACO algorithms and four ten-ant ACO algorithms.
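
    The absorbing-Markov-chain machinery behind such analyses is standard: with Q the transient-to-transient block of the transition matrix, the expected numbers of steps to absorption solve (I - Q)t = 1. The toy chain below illustrates the computation only; it is not an ACO algorithm:

        import numpy as np

        Q = np.array([[0.6, 0.3, 0.0],     # transient block; the missing
                      [0.1, 0.6, 0.2],     # probability mass in each row
                      [0.0, 0.2, 0.5]])    # leads to the absorbing state
        t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
        print(t)   # expected iterations to absorption from each state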

  11. Estimating the personal cure rate of cancer patients using population-based grouped cancer survival data.

    PubMed

    Binbing Yu; Tiwari, Ram C; Feuer, Eric J

    2011-06-01

Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches, namely the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and the differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is then extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race, and American Joint Committee on Cancer stage.

  12. Do Physicians Respond to Web-Based Patient Ratings? An Analysis of Physicians' Responses to More Than One Million Web-Based Ratings Over a Six-Year Period.

    PubMed

    Emmert, Martin; Sauter, Lisa; Jablonski, Lisa; Sander, Uwe; Taheri-Zadeh, Fatemeh

    2017-07-26

Physician-rating websites (PRWs) may lead to quality improvements if they enable and establish peer-to-peer communication between patients and physicians. Yet we know little about whether and how physicians respond on the Web to patient ratings. The objective of this study was to describe trends in physicians' Web-based responses to patient ratings over time, to identify which physician characteristics influence Web-based responses, and to examine the topics physicians are likely to respond to. We analyzed physician responses to more than 1 million patient ratings displayed on the German PRW, jameda, from 2010 to 2015. The quantitative analysis comprised chi-square analyses and the Mann-Whitney U test. Quantitative content-analysis techniques were applied to determine the topics physicians respond to, based on a randomly selected sample of 600 Web-based ratings and corresponding physician responses. Overall, physicians responded to 1.58% (16,640/1,052,347) of all Web-based ratings, with an increasing trend over time from 0.70% (157/22,355) in 2010 to 1.88% (6377/339,919) in 2015. Web-based ratings that were responded to had significantly worse rating results than ratings that were not responded to (2.15 vs 1.74, P<.001). Physicians who respond on the Web to patient ratings differ significantly from nonresponders regarding several characteristics such as gender and patient recommendation results (P<.001 each). Regarding scaled-survey rating elements, physicians were most likely to respond to the waiting time within the practice (19.4%, 99/509) and the time spent with the patient (18.3%, 110/600). Almost one-third of the topics in narrative comments were answered by the physicians (30.66%, 382/1246). So far, only a minority of physicians have taken the chance to respond on the Web to patient ratings. This is likely because of (1) the low awareness of PRWs among physicians, (2) the fact that only a few PRWs enable physicians to respond on the Web to patient ratings, and (3) the

  13. A comparison of recharge rates in aquifers of the United States based on groundwater-age data

    USGS Publications Warehouse

    McMahon, P.B.; Plummer, Niel; Böhlke, J.K.; Shapiro, S.D.; Hinkle, S.R.

    2011-01-01

    An overview is presented of existing groundwater-age data and their implications for assessing rates and timescales of recharge in selected unconfined aquifer systems of the United States. Apparent age distributions in aquifers determined from chlorofluorocarbon, sulfur hexafluoride, tritium/helium-3, and radiocarbon measurements from 565 wells in 45 networks were used to calculate groundwater recharge rates. Timescales of recharge were defined by 1,873 distributed tritium measurements and 102 radiocarbon measurements from 27 well networks. Recharge rates ranged from < 10 to 1,200 mm/yr in selected aquifers on the basis of measured vertical age distributions and assuming exponential age gradients. On a regional basis, recharge rates based on tracers of young groundwater exhibited a significant inverse correlation with mean annual air temperature and a significant positive correlation with mean annual precipitation. Comparison of recharge derived from groundwater ages with recharge derived from stream base-flow evaluation showed similar overall patterns but substantial local differences. Results from this compilation demonstrate that age-based recharge estimates can provide useful insights into spatial and temporal variability in recharge at a national scale and factors controlling that variability. Local age-based recharge estimates provide empirical data and process information that are needed for testing and improving more spatially complete model-based methods.
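
    For unconfined aquifers, the exponential age gradient mentioned above is often the classic Vogel model, t(z) = (nH/R) ln[H/(H - z)], which inverts directly for recharge. The sketch below uses made-up field values; it illustrates the age-based approach, not the paper's exact calculations:

        import math

        def recharge_from_age(age_yr, depth_m, thickness_m, porosity):
            """Recharge (m/yr) under an exponential age-gradient assumption."""
            return (porosity * thickness_m / age_yr) * math.log(
                thickness_m / (thickness_m - depth_m))

        r = recharge_from_age(age_yr=25.0, depth_m=10.0,
                              thickness_m=30.0, porosity=0.3)
        print(f"{r * 1000:.0f} mm/yr")   # within the reported <10-1200 range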

  14. Comparison of entrainment rates from a tank experiment with results using the one-dimensional turbulence model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerstein, Alan R.; Sayler, B. J.; Wunsch, S.

    2010-05-01

Recent work suggests that cloud effects remain one of the largest sources of uncertainty in model-based estimates of climate sensitivity. In particular, the entrainment rate in stratocumulus-topped mixed layers needs better models. More than thirty years ago a clever laboratory experiment was conducted by McEwan and Paltridge to examine an analog of the entrainment process at the top of stratiform clouds. Sayler and Breidenthal extended this pioneering work and determined the effect of the Richardson number on the dimensionless entrainment rate. The experiments gave hints that the interaction between molecular effects and the one-sided turbulence seems to be crucial for understanding entrainment. From the numerical point of view, large-eddy simulation (LES) does not allow explicitly resolving all the fine-scale processes at the entrainment interface. Direct numerical simulation (DNS) is limited by the Reynolds number and is not the tool of choice for parameter studies. Therefore it is useful to investigate new modeling strategies, such as stochastic turbulence models, which allow sufficient resolution at least in one dimension while having acceptable run times. We will present results of the One-Dimensional Turbulence stochastic simulation model applied to the experimental setup of Sayler and Breidenthal. The results on radiatively induced entrainment follow quite well the scaling of the entrainment rate with the Richardson number that was experimentally found for a set of trials. Moreover, we investigate the influence of molecular effects, the fluids' optical properties, and the artifact of parasitic turbulence experimentally observed in the laminar layer. In the simulations the parameters are varied systematically over even larger ranges than in the experiment. Based on the obtained results, a more complex parameterization of the entrainment rate than currently discussed in the literature seems to be necessary.

  15. A physically based analytical spatial air temperature and humidity model

    Treesearch

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  16. Structure analysis of tax revenue and inflation rate in Banda Aceh using vector error correction model with multiple alpha

    NASA Astrophysics Data System (ADS)

    Sofyan, Hizir; Maulia, Eva; Miftahuddin

    2017-11-01

A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues in Indonesia's State Budget comes from the tax sector, while the inflation rate can be used as an indicator of the economic problems facing the country. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha values, and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship between tax revenue and inflation in Banda Aceh. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City using alpha 0.01 is a VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is a VECM with optimal lag 3. The VECM model with alpha 0.01 yielded four significant models: the income tax model and the models for the overall, health, and education inflation rates in Banda Aceh. The VECM models with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on these VECM models, two IRF structural analyses were formed to examine the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
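
    A comparable analysis can be sketched with statsmodels (the class and method names below are assumed from recent statsmodels releases, and the two series are synthetic stand-ins for the Banda Aceh data):

        import numpy as np
        from statsmodels.tsa.vector_ar.vecm import VECM

        rng = np.random.default_rng(3)
        n = 120
        common = np.cumsum(rng.standard_normal(n))   # shared stochastic trend
        data = np.column_stack([
            common + rng.standard_normal(n),         # "tax revenue"
            0.5 * common + rng.standard_normal(n),   # "inflation rate"
        ])

        res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
        print(res.alpha)                  # adjustment (loading) coefficients
        irf = res.irf(periods=10)         # impulse responses, as in the IRF
        print(irf.irfs.shape)             # analysis described above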

  17. Division-Based, Growth Rate Diversity in Bacteria

    PubMed Central

    Gangwe Nana, Ghislain Y.; Ripoll, Camille; Cabin-Flaman, Armelle; Gibouin, David; Delaune, Anthony; Janniere, Laurent; Grancher, Gerard; Chagny, Gaelle; Loutelier-Bourhis, Corinne; Lentzen, Esther; Grysan, Patrick; Audinot, Jean-Nicolas; Norris, Vic

    2018-01-01

    To investigate the nature and origins of growth rate diversity in bacteria, we grew Escherichia coli and Bacillus subtilis in liquid minimal media and, after different periods of 15N-labeling, analyzed and imaged isotope distributions in individual cells with Secondary Ion Mass Spectrometry. We find a striking inter- and intra-cellular diversity, even in steady state growth. This is consistent with the strand-dependent, hyperstructure-based hypothesis that a major function of the cell cycle is to generate coherent, growth rate diversity via the semi-conservative pattern of inheritance of strands of DNA and associated macromolecular assemblies. We also propose quantitative, general, measures of growth rate diversity for studies of cell physiology that include antibiotic resistance. PMID:29867792

  18. Spray Combustion Modeling with VOF and Finite-Rate Chemistry

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Liaw, Paul; Wang, Ten-See

    1996-01-01

A spray atomization and combustion model is developed based on the volume-of-fluid (VOF) transport equation with a finite-rate chemistry model. The gas-liquid interface mass, momentum and energy conservation laws are modeled by continuum surface force mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed range flows. The objectives of the present study are: (1) to develop and verify the fractional volume-of-fluid (VOF) cell partitioning approach into a predictor-corrector algorithm to deal with multiphase (gas-liquid) free surface flow problems; (2) to implement the developed unified algorithm in a general purpose computational fluid dynamics (CFD) code, Finite Difference Navier-Stokes (FDNS), with droplet dynamics and finite-rate chemistry models; and (3) to demonstrate the effectiveness of the present approach by simulating benchmark problems of jet breakup/spray atomization and combustion. Modeling multiphase fluid flows poses a significant challenge because a required boundary condition must be applied to a transient, irregular surface that is discontinuous, and the flow regimes considered can range from incompressible to high-speed compressible flows. The flow-process modeling is further complicated by surface tension, interfacial heat and mass transfer, spray formation and turbulence, and their interactions. The major contribution of the present method is to combine the novel feature of the Volume of Fluid (VOF) method and the Eulerian/Lagrangian method into a unified algorithm for efficient noniterative, time-accurate calculations of multiphase free surface flows valid at all speeds. The proposed method reformulated the VOF equation to strongly couple two distinct phases (liquid and gas), and tracks droplets in a Lagrangian frame when a spray model is required, using a unified predictor-corrector technique to account for the non-linear linkages through the convective contributions of VOF. The discontinuities within the

  19. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    EPA Science Inventory

A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  20. Continuum modeling of rate-dependent granular flows in SPH

    DOE PAGES

    Hurley, Ryan C.; Andrade, José E.

    2016-09-13

In this paper, we discuss a constitutive law for modeling rate-dependent granular flows that has been implemented in smoothed particle hydrodynamics (SPH). We model granular materials using a viscoplastic constitutive law that produces a Drucker-Prager-like yield condition in the limit of vanishing flow. A friction law for non-steady flows, incorporating rate-dependence and dilation, is derived and implemented within the constitutive law. We compare our SPH simulations with experimental data, demonstrating that they can capture both steady and non-steady dynamic flow behavior, notably including transient column collapse profiles. This technique may therefore be attractive for modeling the time-dependent evolution of natural and industrial flows.
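
    A widely used rate-dependent friction law of this family is the mu(I) rheology, in which friction grows with the inertial number. The paper derives its own non-steady law with dilation, so the sketch below, with typical literature constants, is only a representative of the class:

        import math

        MU_S, MU_2, I0 = 0.38, 0.64, 0.279    # typical literature values

        def inertial_number(shear_rate, grain_d, pressure, grain_density):
            return shear_rate * grain_d / math.sqrt(pressure / grain_density)

        def mu(I):
            # friction rises from MU_S toward MU_2 as the flow speeds up
            return MU_S + (MU_2 - MU_S) / (1.0 + I0 / I) if I > 0 else MU_S

        I = inertial_number(shear_rate=10.0, grain_d=1e-3,
                            pressure=1e3, grain_density=2500.0)
        print(I, mu(I))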

  1. Eigen model with general fitness functions and degradation rates

    NASA Astrophysics Data System (ADS)

    Hu, Chin-Kun; Saakian, David B.

    2006-03-01

We present an exact solution of Eigen's quasispecies model with a general degradation rate and fitness functions, including a square-root decrease of fitness with increasing Hamming distance from the wild type. The behavior found for the model with a degradation rate is analogous to a viral quasispecies under attack by the host's immune system. Our exact solutions also revise the known results on neutral networks in quasispecies theory. To explain the existence of mutants with large Hamming distances from the wild type, we propose three different modifications of the Eigen model: mutation landscape, multiple adjacent mutations, and frequency-dependent fitness, in which the steady-state solution shows a multi-center behavior.
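
    The steady state of such models is the leading eigenvector of the mutation-selection matrix W = Q diag(f) - diag(d). The toy sketch below uses binary sequences of length 4, a per-site mutation rate, the square-root fitness decline mentioned above, and a uniform degradation rate; all numbers are illustrative:

        import numpy as np
        from itertools import product

        L, u, d_rate = 4, 0.05, 0.1
        seqs = list(product([0, 1], repeat=L))
        ham = lambda a, b: sum(x != y for x, y in zip(a, b))

        f = np.array([2.0 - 0.4 * np.sqrt(ham(s, seqs[0])) for s in seqs])
        Q = np.array([[u ** ham(a, b) * (1 - u) ** (L - ham(a, b))
                       for b in seqs] for a in seqs])   # mutation kernel
        W = Q @ np.diag(f) - d_rate * np.eye(2 ** L)

        vals, vecs = np.linalg.eig(W)
        lead = np.abs(vecs[:, np.argmax(vals.real)])
        print(lead[0] / lead.sum())   # wild-type share of the quasispecies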

  2. Clinical comparison of two linear-quadratic model-based isoeffect fractionation schemes of high-dose-rate intracavitary brachytherapy for cervical cancer.

    PubMed

    Wang, Chong-Jong; Huang, Eng-Yen; Sun, Li-Min; Chen, Hui-Chun; Fang, Fu-Min; Hsu, Hsuan-Chih; Changchien, Chan-Chao; Leung, Stephen Wan

    2004-05-01

    Two linear-quadratic model-based isoeffect fractionation schemes of high-dose-rate intracavitary brachytherapy (HDR-IC) were used to treat cervical cancer in two consecutive periods. Patient outcomes and complications were analyzed and compared. Between November 1987 and December 1996, a total of 541 women diagnosed with cervical cancer were treated with curative-intent radiotherapy. Patients were categorized into two groups according to the two isoeffect schemes used. Group 1 consisted of 254 patients treated with external beam radiotherapy (EBRT) plus 7.2 Gy HDR-IC to Point A for three fractions in the first period. Group 2 consisted of 284 patients treated with EBRT plus 4.8 Gy HDR-IC for five fractions in the second period. The goal of the new scheme for the latter group was to deliver an isoeffect dose that maintained similar tumor control but reduced normal tissue complications. The calculated biologically effective dose (BED(10), assuming an alpha/beta ratio = 10) of EBRT plus HDR-IC for tumor and acute responding tissue in Groups 1 and 2 was 90 Gy(10) (52.8 + 37.2 Gy) and 88.6 Gy(10) (53.1 + 35.5 Gy), respectively. The corresponding BED(3) for late responding tissue (assuming an alpha/beta ratio = 3) in Groups 1 and 2 was 146.7 Gy(3) (73.3 + 73.4 Gy) and 134.4 Gy(3) (72 + 62.4 Gy), respectively. Patients were followed for 6.1-15.2 years (median, 9.8 years). Overall, 66 patients (12.2%) developed pelvic recurrence. Of these, 53 patients had central recurrence. Of the 53 patients with central recurrence, 24 (9.4%) were in Group 1 and 29 (10.1%) in Group 2 (p = 0.722). The actuarial pelvic control rate for Groups 1 and 2 was 88.2% and 86.3% at 5 years and 87.3% and 85.5% at 10 years, respectively (p = 0.504). The actuarial overall survival rate for Groups 1 and 2 was 63.5% and 56.1% at 5 years and 47.8% and 49.3% at 10 years, respectively (p = 0.734). The actuarial proctitis rate for Groups 1 and 2 was 49.7% and 32.7% at 5 years and 50.5% and 32.7% at 10 years
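
    The isoeffect arithmetic in this abstract follows from the standard linear-quadratic formula BED = nd(1 + d/(alpha/beta)); the sketch below reproduces the quoted HDR-IC numbers:

        def bed(n_fractions, dose_per_fraction, alpha_beta):
            """Biologically effective dose for n fractions of d Gy."""
            return n_fractions * dose_per_fraction * (
                1.0 + dose_per_fraction / alpha_beta)

        print(bed(3, 7.2, 10.0))   # ~37.2 Gy10, group 1 tumour component
        print(bed(5, 4.8, 10.0))   # ~35.5 Gy10, group 2 tumour component
        print(bed(3, 7.2, 3.0))    # ~73.4 Gy3,  group 1 late tissue
        print(bed(5, 4.8, 3.0))    #  62.4 Gy3,  group 2 late tissue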

3. How Hot Precursors Modify Island Nucleation: A Rate-Equation Model

    NASA Astrophysics Data System (ADS)

    Morales-Cifuentes, Josue; Einstein, T. L.; Pimpinelli, Alberto

    2015-03-01

We describe the analysis, based on rate equations, of the hot precursor model mentioned in the previous talk. Two key parameters are the competing times of ballistic monomers decaying into thermalized monomers vs. being captured by an island, which naturally define a "thermalization" scale for the system. We interpret the energies and dimensionless parameters used in the model, and provide both an implicit analytic solution and a convenient asymptotic approximation. Further analysis reveals novel scaling regimes and nonmonotonic crossovers between them. To test our model, we applied it to experiments on parahexaphenyl (6P) on sputtered mica. With the resulting parameters, the curves derived from our analytic treatment account very well for the data at the four different temperatures. The fit shows that the high-flux regime corresponds not to ALA (attachment-limited aggregation) or HMA (hot monomer aggregation) but rather to an intermediate scaling regime related to DLA (diffusion-limited aggregation). We hope this work stimulates further experimental investigations. Work at UMD supported by NSF CHE 13-05892.

  4. Comparing emission rates derived from a model with a plume-based approach and quantifying the contribution of vehicle classes to on-road emissions and air quality.

    PubMed

    Xu, Junshi; Wang, Jonathan; Hilker, Nathan; Fallah-Shorshani, Masoud; Saleh, Marc; Tu, Ran; Wang, An; Minet, Laura; Stogios, Christos; Evans, Greg; Hatzopoulou, Marianne

    2018-06-05

This study presents a comparison of fleet-averaged emission factors (EFs) derived from a traffic emission model with EFs estimated using plume-based measurements, including an investigation of the contribution of vehicle classes to carbon monoxide (CO), nitrogen oxides (NOx), and elemental carbon (EC) along an urban corridor. To this end, a field campaign was conducted over one week in June 2016 on an arterial road in Toronto, Canada. Traffic data were collected using a traffic camera and a radar, while air quality was characterized using two monitoring stations: one located at ground level and another at the rooftop of a four-storey building. A traffic simulation model was calibrated and validated, and second-by-second speed profiles for all vehicle trajectories were extracted to model emissions. In addition, dispersion modelling was conducted to identify the extent to which differences in emissions translate to differences in near-road concentrations. Our results indicate that modelled EFs for CO and NOx are twice as high as plume-based EFs. Modelled results also indicate that transit bus emissions accounted for 60% and 70% of the total emissions of NOx and EC, respectively. Transit bus emission rates in g/passenger-km for NOx and EC were up to 8 and 22 times the emission rates of passenger cars. In contrast, the Toronto streetcars, which are electrically powered, were found to improve near-road air quality despite their negative impact on traffic speeds. Finally, we observe that the difference in estimated concentrations derived from the two methods is not as large as the difference in estimated emissions, owing to the influence of meteorology and the urban background, given that the study network is located in a busy downtown area.

  5. Can low-resolution airborne laser scanning data be used to model stream rating curves?

    USGS Publications Warehouse

    Lyon, Steve; Nathanson, Marcus; Lam, Norris; Dahlke, Helen; Rutzinger, Martin; Kean, Jason W.; Laudon, Hjalmar

    2015-01-01

This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which translate stream water depth into discharge and are thus integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.
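
    A physics-based rating curve of the kind ALS geometry can support is, in its simplest form, Manning's equation applied to the scanned cross-section. The sketch below assumes a rectangular channel with width and slope from the scan and a guessed roughness coefficient; it illustrates the idea, not the study's actual resistance formulation:

        import numpy as np

        def manning_discharge(depth_m, width_m, slope, n=0.05):
            """Discharge (m3/s) via Manning's equation, rectangular channel."""
            area = depth_m * width_m
            hydraulic_radius = area / (width_m + 2.0 * depth_m)
            return ((1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0)
                    * np.sqrt(slope))

        for stage in (0.25, 0.5, 1.0):                    # stage (m)
            print(stage, manning_discharge(stage, width_m=8.0, slope=0.002))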

  6. Millimeter wave attenuation prediction using a piecewise uniform rain rate model

    NASA Technical Reports Server (NTRS)

    Persinger, R. R.; Stutzman, W. L.; Bostian, C. W.; Castle, R. E., Jr.

    1980-01-01

A piecewise uniform rain rate distribution model is introduced as a quasi-physical model of real rain along earth-space millimeter wave propagation paths. It permits calculation of the total attenuation from the specific attenuation in a simple fashion. The model predictions are verified by comparison with direct attenuation measurements for several frequencies, elevation angles, and locations. Also, coupled with the Rice-Holmberg rain rate model, attenuation statistics are predicted from rainfall accumulation data.
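
    The piecewise-uniform idea reduces to summing the power-law specific attenuation over path segments of locally uniform rain rate. In the sketch below the a and b coefficients are illustrative stand-ins for the frequency-dependent regression values:

        # Total attenuation (dB) = sum over segments of a * R^b * L,
        # with R in mm/h and segment length L in km.
        def path_attenuation_db(segments, a=0.187, b=1.021):
            """segments: list of (rain_rate_mm_per_h, length_km) pairs."""
            return sum(a * rate ** b * length for rate, length in segments)

        # three segments along the slant path, a heavy cell in the middle
        print(path_attenuation_db([(5.0, 2.0), (40.0, 1.0), (10.0, 1.5)]))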

  7. Model cerebellar granule cells can faithfully transmit modulated firing rate signals

    PubMed Central

    Rössert, Christian; Solinas, Sergio; D'Angelo, Egidio; Dean, Paul; Porrill, John

    2014-01-01

    A crucial assumption of many high-level system models of the cerebellum is that information in the granular layer is encoded in a linear manner. However, granule cells are known for their non-linear and resonant synaptic and intrinsic properties that could potentially impede linear signal transmission. In this modeling study we analyse how electrophysiological granule cell properties and spike sampling influence information coded by firing rate modulation, assuming no signal-related, i.e., uncorrelated inhibitory feedback (open-loop mode). A detailed one-compartment granule cell model was excited in simulation by either direct current or mossy-fiber synaptic inputs. Vestibular signals were represented as tonic inputs to the flocculus modulated at frequencies up to 20 Hz (approximate upper frequency limit of vestibular-ocular reflex, VOR). Model outputs were assessed using estimates of both the transfer function, and the fidelity of input-signal reconstruction measured as variance-accounted-for. The detailed granule cell model with realistic mossy-fiber synaptic inputs could transmit information faithfully and linearly in the frequency range of the vestibular-ocular reflex. This was achieved most simply if the model neurons had a firing rate at least twice the highest required frequency of modulation, but lower rates were also adequate provided a population of neurons was utilized, especially in combination with push-pull coding. The exact number of neurons required for faithful transmission depended on the precise values of firing rate and noise. The model neurons were also able to combine excitatory and inhibitory signals linearly, and could be replaced by a simpler (modified) integrate-and-fire neuron in the case of high tonic firing rates. These findings suggest that granule cells can in principle code modulated firing-rate inputs in a linear manner, and are thus consistent with the high-level adaptive-filter model of the cerebellar microcircuit. PMID:25352777

  8. Population decay time and distribution of exciton states analyzed by rate equations based on theoretical phononic and electron-collisional rate coefficients

    NASA Astrophysics Data System (ADS)

    Oki, Kensuke; Ma, Bei; Ishitani, Yoshihiro

    2017-11-01

Population distributions and transition fluxes of the A exciton in bulk GaN are theoretically analyzed using rate equations for states of principal quantum number n up to 5 and the continuum. These rate equations consist of the terms of radiative, electron-collisional, and phononic processes. The dependence of the rate coefficients on temperature is revealed on the basis of the collisional-radiative model of hydrogen plasma for the electron-collisional processes and of a theoretical formulation using Fermi's "golden rule" for the phononic processes. The respective effects of variations in the electron, exciton, and lattice temperatures are exhibited. This analysis forms a basis for the discussion of nonthermal-equilibrium states of carrier-exciton-phonon dynamics. It is found that exciton dissociation is enhanced even below 150 K, mainly by the increase in the lattice temperature. When the thermal-equilibrium temperature increases, the population fluxes between the states of n > 1 and the continuum become more dominant. Below 20 K, a severe deviation from the Saha-Boltzmann distribution occurs owing to the interband excitation flux being higher than the excitation flux from the 1S state. The population decay time of the 1S state at 300 K is more than ten times longer than the recombination lifetime of excitons with kinetic energy but without the upper levels (n > 1 and the continuum). This phenomenon is caused by a shift of the population distribution to the upper levels. This phonon-exciton-radiation model gives insights into the limitations of conventional analyses such as the ABC model, the Arrhenius plot, the two-level model (n = 1 and the continuum), and the neglect of the upper levels.

  9. Comprehensive model of microalgae photosynthesis rate as a function of culture conditions in photobioreactors.

    PubMed

    Costache, T A; Acién Fernández, F Gabriel; Morales, M M; Fernández-Sevilla, J M; Stamatin, I; Molina, E

    2013-09-01

In this paper, the influence of culture conditions (irradiance, temperature, pH, and dissolved oxygen) on the photosynthesis rate of Scenedesmus almeriensis cultures is analyzed. Short-run experiments were performed to study the cell response to variations in culture conditions such as take place in changing environments like outdoor photobioreactors. Experiments were performed by subjecting diluted samples of cells to different levels of irradiance, temperature, pH, and dissolved oxygen concentration. The results demonstrate the existence of photoinhibition phenomena at irradiances higher than 1,000 μE/m² s, in addition to reduced photosynthesis rates at inadequate temperatures or pH, the optimal values being 35 °C and 8, respectively. Moreover, photosynthesis rate reduction at dissolved oxygen concentrations above 20 mg/l is demonstrated. The data have been used to develop an integrated model that considers the simultaneous influence of irradiance, temperature, pH, and dissolved oxygen. The model fits the experimental results in the range of culture conditions tested, and it was validated using data obtained by the simultaneous variation of two of the modified variables. Furthermore, the model fits experimental results obtained from an outdoor culture of S. almeriensis performed in an open raceway reactor. The results demonstrate that photosynthetic efficiency changes as a function of culture conditions and can be used to determine the proximity of culture conditions to optimal values. The optimal conditions found (T = 35 °C, pH = 8, dissolved oxygen concentration < 20 mg/l) allow the cells to maximize their use of light. The developed model is a powerful tool for the optimal design and management of microalgae-based processes, especially outdoors, where cultures are subject to daily variations in culture conditions.
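
    Integrated models of this type are commonly multiplicative: each culture condition scales the maximum photosynthesis rate through its own response curve. The functional forms and constants in the sketch below are illustrative, not the fitted ones:

        import math

        def photosynthesis_rate(irr, temp_c, ph, do_mg_l, p_max=1.0,
                                irr_opt=1000.0, t_opt=35.0, ph_opt=8.0,
                                do_crit=20.0):
            f_irr = (irr / irr_opt) * math.exp(1.0 - irr / irr_opt)  # photoinhibition
            f_t = math.exp(-((temp_c - t_opt) / 10.0) ** 2)          # thermal optimum
            f_ph = math.exp(-((ph - ph_opt) / 1.5) ** 2)             # pH optimum
            f_do = 1.0 / (1.0 + (do_mg_l / do_crit) ** 4)            # O2 inhibition
            return p_max * f_irr * f_t * f_ph * f_do

        print(photosynthesis_rate(irr=1000.0, temp_c=35.0, ph=8.0, do_mg_l=5.0))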

  10. An administrative claims model for profiling hospital 30-day mortality rates for pneumonia patients.

    PubMed

    Bratzler, Dale W; Normand, Sharon-Lise T; Wang, Yun; O'Donnell, Walter J; Metersky, Mark; Han, Lein F; Rapp, Michael T; Krumholz, Harlan M

    2011-04-12

    Outcome measures for patients hospitalized with pneumonia may complement process measures in characterizing quality of care. We sought to develop and validate a hierarchical regression model using Medicare claims data that produces hospital-level, risk-standardized 30-day mortality rates useful for public reporting for patients hospitalized with pneumonia. Retrospective study of fee-for-service Medicare beneficiaries age 66 years and older with a principal discharge diagnosis of pneumonia. Candidate risk-adjustment variables included patient demographics, administrative diagnosis codes from the index hospitalization, and all inpatient and outpatient encounters from the year before admission. The model derivation cohort included 224,608 pneumonia cases admitted to 4,664 hospitals in 2000, and validation cohorts included cases from each of years 1998-2003. We compared model-derived state-level standardized mortality estimates with medical record-derived state-level standardized mortality estimates using data from the Medicare National Pneumonia Project on 50,858 patients hospitalized from 1998-2001. The final model included 31 variables and had an area under the receiver operating characteristic curve of 0.72. In each administrative claims validation cohort, model fit was similar to the derivation cohort. The distribution of standardized mortality rates among hospitals ranged from 13.0% to 23.7%, with 25th, 50th, and 75th percentiles of 16.5%, 17.4%, and 18.3%, respectively. Comparing model-derived risk-standardized state mortality rates with medical record-derived estimates, the correlation coefficient was 0.86 (standard error = 0.032). An administrative claims-based model for profiling hospitals for pneumonia mortality performs consistently over several years and produces hospital estimates close to those using a medical record model.
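
    A minimal sketch of the risk-standardization step, assuming the common indirect-standardization form (predicted over expected, times the overall rate) and substituting plain logistic regression for the paper's hierarchical model; all data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for patient risk factors and outcomes.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 5))           # patient risk-adjustment variables
hospital = rng.integers(0, 20, 5000)     # hospital index per case
y = rng.binomial(1, 0.17, 5000)          # 30-day mortality (0/1)

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]         # patient-level expected risk

overall = y.mean()
for h in range(3):
    mask = hospital == h
    expected = p[mask].sum()
    observed = y[mask].sum()   # stands in for the random-effect "predicted"
    rsmr = observed / expected * overall
    print(f"hospital {h}: risk-standardized mortality = {rsmr:.3f}")
```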

  11. Prediction of terrestrial gamma dose rate based on geological formations and soil types in the Johor State, Malaysia.

    PubMed

    Saleh, Muneer Aziz; Ramli, Ahmad Termizi; bin Hamzah, Khaidzir; Alajerami, Yasser; Moharib, Mohammed; Saeed, Ismael

    2015-10-01

    This study aims to predict and estimate unmeasured terrestrial gamma dose rates (TGDR) using statistical analysis methods to derive a model from actual measurements based on geological formation and soil type. The measurements of TGDR were conducted in the state of Johor at a total of 3873 measured points, covering all geological formations, soil types and districts. The measurements were taken 1 m above the soil surface using a NaI(Tl) detector. The measured gamma dose rates ranged from 9 nGy h(-1) to 1237 nGy h(-1) with a mean value of 151 nGy h(-1). The data were normalized to fit a normal distribution. Tests of significance were conducted among all geological formations and soil types using unbalanced one-way ANOVA. The results indicated strong significant differences due to the different geological formations and soil types present in Johor State. Pearson correlation was used to measure the relation between the gamma dose rate based on geological formation and soil type (D(G,S)) and the gamma dose rate based on geological formation (D(G)) or soil type (D(S)). A very good correlation was found between D(G,S) and D(G) and between D(G,S) and D(S). A total of 118 pairs of geological formations and soil types were used to derive the statistical contributions of geological formations and soil types to gamma dose rates. The contributions of geological formation and soil type to the gamma dose rate were found to be 0.594 and 0.399, respectively. The null hypotheses were accepted for 83% of the examined data; therefore, the model can be used to predict gamma dose rates based on geological formation and soil type information. Copyright © 2015 Elsevier Ltd. All rights reserved.
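
    The abstract reports contributions of 0.594 (geological formation) and 0.399 (soil type); reading these as weights in a linear combination gives the following sketch. The linear form is our assumption, not the paper's published equation.

```python
# Weights as reported in the abstract; the combination below is our
# reading of how they enter the prediction, not the paper's equation.
W_GEOLOGY, W_SOIL = 0.594, 0.399

def predict_dose(mean_dose_geology: float, mean_dose_soil: float) -> float:
    """Predicted terrestrial gamma dose rate (nGy/h) for a
    (geological formation, soil type) pair from the two group means."""
    return W_GEOLOGY * mean_dose_geology + W_SOIL * mean_dose_soil

# Example: formation mean 180 nGy/h, soil-type mean 120 nGy/h.
print(predict_dose(180.0, 120.0))   # ~154.8 nGy/h
```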

  12. A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks

    PubMed Central

    Schaffer, Evan S.; Ostojic, Srdjan; Abbott, L. F.

    2013-01-01

    Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons. PMID:24204236
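
    For reference, the "traditional" single-time-constant firing-rate model that the complex-valued model generalizes can be simulated in a few lines; the threshold-linear f and all parameter values are illustrative.

```python
import numpy as np

def simulate_rate(I, tau=0.01, dt=1e-4, f=lambda x: np.maximum(x, 0.0)):
    """Traditional firing-rate model with a single time constant:
        tau * dr/dt = -r + f(I(t))
    This is the baseline the complex-valued model improves on; it
    cannot capture synchronization effects by construction.
    """
    r = np.zeros_like(I)
    for k in range(1, len(I)):
        r[k] = r[k - 1] + dt / tau * (-r[k - 1] + f(I[k - 1]))
    return r

t = np.arange(0.0, 0.2, 1e-4)
I = 20.0 * (t > 0.05)            # step input at 50 ms
r = simulate_rate(I)
print(r[-1])                     # settles near f(I) = 20
```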

  13. Application of a novel grey self-memory coupling model to forecast the incidence rates of two notifiable diseases in China: dysentery and gonorrhea.

    PubMed

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    In this study, a novel grey self-memory coupling model was developed to forecast the incidence rates of two notifiable infectious diseases (dysentery and gonorrhea); the effectiveness and applicability of this model were assessed based on its ability to predict the epidemiological trend of infectious diseases in China. The linear model, the conventional GM(1,1) model and the GM(1,1) model with self-memory principle (SMGM(1,1) model) were used to predict the incidence rates of the two notifiable infectious diseases based on statistical incidence data. Both simulation accuracy and prediction accuracy were assessed to compare the predictive performances of the three models. The best-fit model was applied to predict future incidence rates. Simulation results show that the SMGM(1,1) model can take full advantage of the systematic multi-time historical data and possesses superior predictive performance compared with the linear model and the conventional GM(1,1) model. By applying the novel SMGM(1,1) model, we obtained the possible incidence rates of the two representative notifiable infectious diseases in China. The disadvantages of the conventional grey prediction model, such as sensitivity to initial value, can be overcome by the self-memory principle. The novel grey self-memory coupling model can predict the incidence rates of infectious diseases more accurately than the conventional model, and may provide useful references for making decisions involving infectious disease prevention and control.
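
    For reference, the conventional GM(1,1) baseline (without the self-memory coupling) is short enough to write out in full; the incidence figures below are invented.

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Conventional GM(1,1) grey model (no self-memory term).

    Fits dx1/dt + a*x1 = b on the accumulated series x1 = cumsum(x0)
    and forecasts future values of the original series x0.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])   # back to increments
    x0_hat[0] = x0[0]
    return x0_hat[len(x0):]                       # forecasts beyond the data

incidence = [38.9, 35.1, 31.2, 28.7, 25.6]        # made-up yearly rates
print(gm11_forecast(incidence))
```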

  14. Application of a Novel Grey Self-Memory Coupling Model to Forecast the Incidence Rates of Two Notifiable Diseases in China: Dysentery and Gonorrhea

    PubMed Central

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Objective In this study, a novel grey self-memory coupling model was developed to forecast the incidence rates of two notifiable infectious diseases (dysentery and gonorrhea); the effectiveness and applicability of this model were assessed based on its ability to predict the epidemiological trend of infectious diseases in China. Methods The linear model, the conventional GM(1,1) model and the GM(1,1) model with self-memory principle (SMGM(1,1) model) were used to predict the incidence rates of the two notifiable infectious diseases based on statistical incidence data. Both simulation accuracy and prediction accuracy were assessed to compare the predictive performances of the three models. The best-fit model was applied to predict future incidence rates. Results Simulation results show that the SMGM(1,1) model can take full advantage of the systematic multi-time historical data and possesses superior predictive performance compared with the linear model and the conventional GM(1,1) model. By applying the novel SMGM(1,1) model, we obtained the possible incidence rates of the two representative notifiable infectious diseases in China. Conclusion The disadvantages of the conventional grey prediction model, such as sensitivity to initial value, can be overcome by the self-memory principle. The novel grey self-memory coupling model can predict the incidence rates of infectious diseases more accurately than the conventional model, and may provide useful references for making decisions involving infectious disease prevention and control. PMID:25546054

  15. Equipment management risk rating system based on engineering endpoints.

    PubMed

    James, P J

    1999-01-01

    The equipment management risk rating system outlined here offers two significant departures from current practice: risk classifications are based on intrinsic device risks, and the risk rating system is based on engineering endpoints. Intrinsic device risks are categorized as physical, clinical and technical, and these flow from the incoming equipment assessment process. Engineering risk management is based on verification of engineering endpoints such as clinical measurements or energy delivery. This practice eliminates the ambiguity associated with ranking risk in terms of physiologic and higher-level outcome endpoints such as no significant hazards, low significance, injury, or mortality.

  16. Evaluating the Controls on Magma Ascent Rates Through Numerical Modelling

    NASA Astrophysics Data System (ADS)

    Thomas, M. E.; Neuberg, J. W.

    2015-12-01

    The estimation of the magma ascent rate is a key factor in predicting styles of volcanic activity and relies on understanding how strongly the ascent rate is controlled by different magmatic parameters. The ability to link potential changes in such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters, such as the magmatic water content, temperature or bulk magma composition, on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. The results indicate that potential changes to conduit geometry and excess pressure in the magma chamber are amongst the dominant controlling variables that affect ascent rate, but the single most important parameter is the volatile content (assumed in this case to be only water). Modelling this parameter across a range of reported values causes changes in the calculated ascent velocities of up to 800%, triggering fluctuations in ascent rates that span the potential threshold between effusive and explosive eruptions.

  17. Sensitivity of mineral dissolution rates to physical weathering : A modeling approach

    NASA Astrophysics Data System (ADS)

    Opolot, Emmanuel; Finke, Peter

    2015-04-01

    There is continued interest in the accurate estimation of natural weathering rates owing to their importance in soil formation, nutrient cycling, estimation of acidification in soils, rivers and lakes, and in understanding the role of silicate weathering in carbon sequestration. At the same time, a challenge exists in reconciling discrepancies between laboratory-determined weathering rates and natural weathering rates. Studies have consistently reported laboratory rates to be orders of magnitude faster than natural weathering rates (White, 2009). These discrepancies have mainly been attributed to (i) changes in fluid composition, (ii) changes in primary mineral surfaces (reactive sites) and (iii) the formation of secondary phases, all of which could slow natural weathering rates. It is indeed difficult to measure, in laboratory experiments, the interactive effect of the intrinsic factors (e.g. mineral composition, surface area) and extrinsic factors (e.g. solution composition, climate, bioturbation) occurring in the natural setting. A modeling approach could be useful in this case. A number of geochemical models (e.g. PHREEQC, EQ3/EQ6) already exist and are capable of estimating mineral dissolution/precipitation rates as a function of time and mineral mass. However, most of these approaches assume a constant surface area in a given volume of water (White, 2009). This assumption may become invalid, especially at long time scales. One of the widely used weathering models is the PROFILE model (Sverdrup and Warfvinge, 1993). The PROFILE model takes into account the mineral composition, solution composition and surface area in determining dissolution/precipitation rates. However, there is little coupling with other processes (e.g. physical weathering, clay migration, bioturbation) which could directly or indirectly influence dissolution/precipitation rates. We propose in this study a coupling between the chemical weathering mechanism (defined as a function of reactive area

  18. A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication.

    PubMed

    Yang, Ching-Han; Chang, Chin-Chun; Liang, Deron

    2018-03-28

    All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we proposed a novel Gaussian mixture model (GMM)-based method that can improve the traditional GMM in modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication, an equal error rate (EER) of 4.62% in the simulated environment and an EER of 7.86% in the real-traffic environment, confirm the feasibility of this approach.
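
    A minimal enrollment-and-verification sketch of the GMM approach; features, component count, and threshold are placeholders rather than the paper's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Enroll a driver by fitting a GMM to feature vectors derived from the
# smartwatch accelerometer and orientation sensor, then accept or
# reject a new trip by its average log-likelihood. Synthetic features
# stand in for the real feature-extraction step.
rng = np.random.default_rng(1)
enroll = rng.normal(0.0, 1.0, size=(500, 6))       # enrolled driver
impostor = rng.normal(1.5, 1.0, size=(200, 6))     # a different driver

gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(enroll)

THRESHOLD = -9.0   # in practice tuned on held-out data (e.g., at the EER)

def authenticate(samples: np.ndarray) -> bool:
    """Accept if the mean log-likelihood exceeds the threshold."""
    return gmm.score(samples) > THRESHOLD

print(authenticate(enroll[:100]), authenticate(impostor))
```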

  19. Evaluating terrain based criteria for snow avalanche exposure ratings using GIS

    NASA Astrophysics Data System (ADS)

    Delparte, Donna; Jamieson, Bruce; Waters, Nigel

    2010-05-01

    Snow avalanche terrain in backcountry regions of Canada is increasingly being assessed based upon the Avalanche Terrain Exposure Scale (ATES). ATES is a terrain based classification introduced in 2004 by Parks Canada to identify "simple", "challenging" and "complex" backcountry areas. The ATES rating system has been applied to well over 200 backcountry routes, has been used in guidebooks, trailhead signs and maps and is part of the trip planning component of the AVALUATOR™, a simple decision-support tool for backcountry users. Geographic Information Systems (GIS) offers a means to model and visualize terrain based criteria through the use of digital elevation model (DEM) and land cover data. Primary topographic variables such as slope, aspect and curvature are easily derived from a DEM and are compatible with the equivalent evaluation criteria in ATES. Other components of the ATES classification are difficult to extract from a DEM as they are not strictly terrain based. An overview is provided of the terrain variables that can be generated from DEM and land cover data; criteria from ATES which are not clearly terrain based are identified for further study or revision. The second component of this investigation was the development of an algorithm for inputting suitable ATES criteria into a GIS, thereby mimicking the process avalanche experts use when applying the ATES classification to snow avalanche terrain. GIS based classifications were compared to existing expert assessments for validity. The advantage of automating the ATES classification process through GIS is to assist avalanche experts with categorizing and mapping remote backcountry terrain.
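
    The "easily derived" primary topographic variables reduce to finite differences on the DEM; a minimal sketch follows (production GIS tools use windowed operators such as Horn's method, and the aspect convention depends on raster orientation).

```python
import numpy as np

def slope_aspect(dem: np.ndarray, cell: float):
    """Slope (degrees) and aspect (degrees, 0 = +y direction) from a
    regular-grid DEM with square cells of size `cell` (same units as
    elevation). A simple central-difference sketch."""
    dz_dy, dz_dx = np.gradient(dem, cell)          # rows = y, cols = x
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = (np.degrees(np.arctan2(-dz_dx, dz_dy)) + 360.0) % 360.0
    return slope, aspect

dem = np.outer(np.arange(5), np.ones(5)) * 10.0    # uniformly dipping plane
s, a = slope_aspect(dem, cell=30.0)
print(s[2, 2], a[2, 2])                            # ~18.4 degrees slope
```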

  20. Rate and State Friction Relation for Nanoscale Contacts: Thermally Activated Prandtl-Tomlinson Model with Chemical Aging

    NASA Astrophysics Data System (ADS)

    Tian, Kaiwen; Goldsby, David L.; Carpick, Robert W.

    2018-05-01

    Rate and state friction (RSF) laws are widely used empirical relationships that describe macroscale to microscale frictional behavior. They entail a linear combination of the direct effect (the increase of friction with sliding velocity due to the reduced influence of thermal excitations) and the evolution effect (the change in friction with changes in contact "state," such as the real contact area or the degree of interfacial chemical bonds). Recent atomic force microscope (AFM) experiments and simulations found that nanoscale single-asperity amorphous silica-silica contacts exhibit logarithmic aging (increasing friction with time) over several decades of contact time, due to the formation of interfacial chemical bonds. Here we establish a physically based RSF relation for such contacts by combining the thermally activated Prandtl-Tomlinson (PTT) model with an evolution effect based on the physics of chemical aging. This thermally activated Prandtl-Tomlinson model with chemical aging (PTTCA), like the PTT model, uses the loading point velocity for describing the direct effect, not the tip velocity (as in conventional RSF laws). Also, in the PTTCA model, the combination of the evolution and direct effects may be nonlinear. We present AFM data consistent with the PTTCA model whereby in aging tests, for a given hold time, static friction increases with the logarithm of the loading point velocity. Kinetic friction also increases with the logarithm of the loading point velocity at sufficiently high velocities, but at a different increasing rate. The discrepancy between the rates of increase of static and kinetic friction with velocity arises from the fact that appreciable aging during static contact changes the energy landscape. Our approach extends the PTT model, originally used for crystalline substrates, to amorphous materials. It also establishes how conventional RSF laws can be modified for nanoscale single-asperity contacts to provide a physically based friction

  1. Analysis of an algae-based CELSS. Part 1: model development

    NASA Technical Reports Server (NTRS)

    Holtzapple, M. T.; Little, F. E.; Makela, M. E.; Patterson, C. O.

    1989-01-01

    A steady state chemical model and computer program have been developed for a life support system and applied to trade-off studies. The model is based on human demand for food and oxygen determined from crew metabolic needs. The model includes modules for water recycle, waste treatment, CO2 removal and treatment, and food production. The computer program calculates rates of use and material balance for food, O2, H2O, N2, the recycle of human waste and trash, and food production supply. A simple non-iterative solution for the model has been developed using the steady state rate equations for the chemical reactions. The model and program have been used in system sizing and subsystem trade-off studies of a partially closed life support system.

  2. dK/da effects on the SCC growth rates of nickel base alloys in high-temperature water

    NASA Astrophysics Data System (ADS)

    Chen, Kai; Wang, Jiamei; Du, Donghai; Andresen, Peter L.; Zhang, Lefu

    2018-05-01

    The effect of dK/da on crack growth behavior of nickel base alloys has been studied by conducting stress corrosion cracking tests under positive and negative dK/da loading conditions on Alloys 690, 600 and X-750 in high temperature water. Results indicate that positive dK/da accelerates the SCC growth rates, and the accelerating effect increases with dK/da and the initial CGR. The FRI model was found to underestimate the dK/da effect by ∼100X, especially for strain hardening materials, and this underscores the need for improved insight and models for crack tip strain rate. The effect of crack tip strain rate and dK/dt in particular can explain the dK/da accelerating effect.

  3. Developing a java android application of KMV-Merton default rate model

    NASA Astrophysics Data System (ADS)

    Yusof, Norliza Muhamad; Anuar, Aini Hayati; Isa, Norsyaheeda Natasha; Zulkafli, Sharifah Nursyuhada Syed; Sapini, Muhamad Luqman

    2017-11-01

    This paper presents a java android application developed for the KMV-Merton model in predicting the default rate of a firm. Predicting the default rate is essential in the risk management area, as default risk can be immediately transmitted from one entity to another; this is the reason default risk is known as a global risk. Although several efforts, instruments and methods are used to manage the risk, they are said to be insufficient. To the best of our knowledge, there has been limited innovation in developing default risk mathematical models into mobile applications. Therefore, through this study, default risk is predicted quantitatively using the KMV-Merton model. The KMV-Merton model has been integrated in the form of a java program using the Android Studio software. The developed java android application is tested by predicting the levels of default risk of three differently rated companies. It is found that the levels of default risk are equivalent to the ratings of the respective companies. This shows that the default rate predicted by the KMV-Merton model using the developed java android application can be a significant tool in the risk management field. The developed java android application gives users an alternative way to predict the level of default risk with fewer procedures.
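
    The core KMV-Merton computation is compact. The sketch below assumes the asset value and asset volatility are already known; in practice they are solved iteratively from the observed equity value and equity volatility via the Merton equations.

```python
import numpy as np
from scipy.stats import norm

def kmv_default_probability(V, sigma_V, D, mu, T=1.0):
    """Distance to default (DD) and default probability (PD) in the
    KMV-Merton framework:
        DD = (ln(V/D) + (mu - sigma_V**2 / 2) * T) / (sigma_V * sqrt(T))
        PD = N(-DD)
    V: asset value, sigma_V: asset volatility, D: default point (debt),
    mu: asset drift, T: horizon in years.
    """
    dd = (np.log(V / D) + (mu - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    return dd, norm.cdf(-dd)

# Example: assets 120, asset volatility 25%, debt 80, drift 5%.
dd, pd = kmv_default_probability(V=120.0, sigma_V=0.25, D=80.0, mu=0.05)
print(f"DD = {dd:.2f}, PD = {pd:.4%}")
```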

  4. Energy minimization of mobile video devices with a hardware H.264/AVC encoder based on energy-rate-distortion optimization

    NASA Astrophysics Data System (ADS)

    Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min

    2014-09-01

    In mobile video systems powered by battery, reducing the encoder's compression energy consumption is critical to prolonging its lifetime. Previous energy-rate-distortion (E-R-D) optimization methods based on a software codec are not suitable for practical mobile camera systems because the energy consumption is too large and the encoding rate is too low. In this paper, we propose an E-R-D model for the hardware codec based on a gate-level simulation framework to measure the switching activity and the energy consumption. From the proposed E-R-D model, an energy minimizing algorithm for mobile video camera sensors has been developed with the GOP (Group of Pictures) size and QP (Quantization Parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% energy consumption savings while satisfying the rate and distortion constraints.

  5. A Comprehensive Prediction Model of Hydraulic Extended-Reach Limit Considering the Allowable Range of Drilling Fluid Flow Rate in Horizontal Drilling.

    PubMed

    Li, Xin; Gao, Deli; Chen, Xuyue

    2017-06-08

    The hydraulic extended-reach limit (HERL) model of a horizontal extended-reach well (ERW) can predict the well's maximum measured depth (MMD). The HERL refers to the well's MMD when drilling fluid cannot be normally circulated by the drilling pump. The previous model analyzed two constraint conditions: drilling pump rated pressure and rated power. However, the effects of the allowable range of drilling fluid flow rate (Qmin ≤ Q ≤ Qmax) were not considered. In this study, three cases of the HERL model are proposed according to the relationship between the allowable range of drilling fluid flow rate and the rated flow rate of the drilling pump (Qr). A horizontal ERW is analyzed to predict its HERL, especially its horizontal-section limit (Lh). Results show that when Qmin ≤ Qr ≤ Qmax (Case I), Lh depends on both the horizontal-section limit based on rated pump pressure (Lh1) and the horizontal-section limit based on rated pump power (Lh2); when Qmin < Qmax < Qr (Case II), Lh is exclusively controlled by Lh1; while Lh is determined only by Lh2 when Qr < Qmin < Qmax (Case III). Furthermore, Lh1 first increases and then decreases with increasing drilling fluid flow rate, while Lh2 keeps decreasing as the drilling fluid flow rate increases. The comprehensive model provides a more accurate prediction of the HERL.
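
    The case logic itself is a few lines; Lh1 and Lh2 come from the full hydraulics model, and taking the binding (smaller) limit in Case I is our reading of the abstract.

```python
def horizontal_limit(Q_min, Q_max, Q_r, L_h1, L_h2):
    """Select the horizontal-section limit L_h per the three cases.

    L_h1: limit from rated pump pressure; L_h2: limit from rated pump
    power (both computed upstream). Case I is interpreted here as the
    binding (smaller) of the two limits.
    """
    if Q_min <= Q_r <= Q_max:        # Case I: both constraints matter
        return min(L_h1, L_h2)
    if Q_max < Q_r:                  # Case II: pressure-controlled
        return L_h1
    return L_h2                      # Case III (Q_r < Q_min): power-controlled

print(horizontal_limit(Q_min=1.5, Q_max=4.0, Q_r=3.0,
                       L_h1=5200.0, L_h2=4800.0))   # illustrative values
```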

  6. A rate insensitive linear viscoelastic model for soft tissues

    PubMed Central

    Zhang, Wei; Chen, Henry Y.; Kassab, Ghassan S.

    2012-01-01

    It is well known that many biological soft tissues behave as viscoelastic materials with hysteresis curves being nearly independent of strain rate when loading frequency is varied over a large range. In this work, the rate insensitive feature of biological materials is taken into account by a generalized Maxwell model. To minimize the number of model parameters, it is assumed that the characteristic frequencies of Maxwell elements form a geometric series. As a result, the model is characterized by five material constants: μ0, τ, m, ρ and β, where μ0 is the relaxed elastic modulus, τ the characteristic relaxation time, m the number of Maxwell elements, ρ the gap between characteristic frequencies, and β = μ1/μ0 with μ1 being the elastic modulus of the Maxwell body that has relaxation time τ. The physical basis of the model is motivated by the microstructural architecture of typical soft tissues. The novel model shows excellent fit of relaxation data on the canine aorta and captures the salient features of vascular viscoelasticity with significantly fewer model parameters. PMID:17512585
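
    A sketch of the resulting relaxation modulus follows; assigning the same modulus mu1 = beta*mu0 to every element is an illustrative assumption, since the abstract defines beta only for the element with relaxation time tau.

```python
import numpy as np

def relaxation_modulus(t, mu0, tau, m, rho, beta):
    """Generalized Maxwell relaxation modulus with the characteristic
    times of the m elements forming a geometric series:
        G(t) = mu0 + sum_i mu_i * exp(-t / tau_i),  tau_i = tau * rho**(i-1)
    Here every mu_i is set to beta*mu0 (an assumption for illustration).
    """
    t = np.atleast_1d(np.asarray(t, dtype=float))
    taus = tau * rho ** np.arange(m)
    return mu0 + beta * mu0 * np.exp(-t[:, None] / taus[None, :]).sum(axis=1)

t = np.logspace(-2, 3, 6)   # times spanning the relaxation spectrum
print(relaxation_modulus(t, mu0=1.0, tau=1.0, m=5, rho=10.0, beta=0.3))
```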

  7. An empirical model of the phytoplankton chlorophyll : carbon ratio-the conversion factor between productivity and growth rate

    USGS Publications Warehouse

    Cloern, James E.; Grenz, Christian; Vidergar-Lucas, Lisa

    1995-01-01

    We present an empirical model that describes the ratio of phytoplankton chlorophyll a to carbon, Chl:C, as a function of temperature, daily irradiance, and nutrient-limited growth rate. Our model is based on 219 published measurements of algal cultures exposed to light-limited or nutrient-limited growth conditions. We illustrate an approach for using this estimator of Chl:C to calculate phytoplankton population growth rate from measured primary productivity. This adaptive Chl:C model gives rise to interactive light-nutrient effects in which growth efficiency increases with nutrient availability under low-light conditions. One implication of this interaction is the enhancement of phytoplankton growth efficiency, in addition to enhancement of biomass yield, as a response to eutrophication.
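
    The conversion the abstract describes is a one-liner once Chl:C is known; the Chl:C estimator itself is left as a placeholder here because the fitted coefficients are in the paper, not the abstract.

```python
def growth_rate_from_productivity(P, chl, chl_to_c):
    """Convert primary productivity to population growth rate.

    P        : primary productivity (mg C m-3 d-1)
    chl      : chlorophyll a concentration (mg Chl m-3)
    chl_to_c : Chl:C ratio (mg Chl per mg C); in the paper this comes
               from an empirical function of temperature, irradiance
               and nutrient-limited growth rate (placeholder here)
    Carbon biomass is C = chl / chl_to_c, so growth rate mu = P / C.
    """
    carbon_biomass = chl / chl_to_c
    return P / carbon_biomass

# Example: P = 50 mg C m-3 d-1, Chl = 2 mg m-3, Chl:C = 0.02.
print(growth_rate_from_productivity(50.0, 2.0, 0.02))   # 0.5 per day
```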

  8. Aseismic and seismic slip induced by fluid injection from poroelastic and rate-state friction modeling

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Deng, K.; Harrington, R. M.; Clerc, F.

    2016-12-01

    Solid matrix stress changes and pore pressure diffusion caused by fluid injection have been postulated as key factors for inducing earthquakes and aseismic slip on pre-existing faults. In this study, we have developed a numerical model that simulates aseismic and seismic slip in a rate-and-state friction framework with poroelastic stress perturbations from multi-stage hydraulic fracturing scenarios. We apply the physics-based model to the 2013-2015 earthquake sequences near Fox Creek, Alberta, Canada, where three magnitude 4.5 earthquakes were potentially induced by nearby hydraulic fracturing activity. In particular, we use the relocated December 2013 seismicity sequence to approximate the fault orientation, and find that the seismicity migration correlates spatiotemporally with the positive Coulomb stress changes calculated from the poroelastic model. When the poroelastic stress changes are introduced into the rate-state friction model, we find that slip on the fault evolves from aseismic to seismic in a manner similar to the onset of seismicity. For a 15-stage hydraulic fracturing operation that lasted 10 days, the modeled fault slip rate starts to accelerate after 3 days of fracturing and rapidly develops into a seismic event, which also coincides temporally with the onset of induced seismicity. The poroelastic stress perturbation, and consequently the fault slip rate, continues to evolve and remains high for several weeks after hydraulic fracturing has stopped, which may explain the continued seismicity after shut-in. In a comparison numerical experiment, the fault slip rate quickly decreases to the interseismic level when stress perturbations are instantaneously returned to zero at shut-in. Furthermore, when stress perturbations are removed just a few hours after the fault slip rate starts to accelerate (that is, when hydraulic fracturing is shut down prematurely), only aseismic slip is observed in the model. Our preliminary results thus suggest the design of fracturing duration and flow
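
    A minimal spring-slider sketch of the rate-and-state ingredient (aging law with quasi-static force balance) is given below. Parameters are generic laboratory-scale values, not those of the Fox Creek study; the injection effect would enter as a perturbation to the effective normal stress.

```python
import numpy as np

def simulate_rsf(v_load=1e-5, t_end=3600.0, dt=0.01,
                 a=0.015, b=0.012, d_c=1e-5, mu0=0.6, v0=1e-6,
                 k=1e7, sigma_eff=30e6):
    """Spring-slider rate-and-state friction with the aging law:
        d(theta)/dt = 1 - v*theta/d_c
        mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/d_c)
    Velocity-strengthening parameters (a > b) keep this explicit
    integrator stable; unstable (a < b) cases need adaptive solvers.
    Injection effects would perturb sigma_eff over time.
    """
    v, theta = v0, d_c / v0                        # steady sliding at v0
    delta, delta_load = 0.0, mu0 * sigma_eff / k   # pre-stressed spring
    for _ in range(int(t_end / dt)):
        delta_load += v_load * dt
        delta += v * dt
        theta += (1.0 - v * theta / d_c) * dt
        mu = k * (delta_load - delta) / sigma_eff  # quasi-static balance
        v = v0 * np.exp((mu - mu0 - b * np.log(v0 * theta / d_c)) / a)
    return v

print(simulate_rsf())   # slip rate evolves toward the loading rate
```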

  9. Estimating time-based instantaneous total mortality rate based on the age-structured abundance index

    NASA Astrophysics Data System (ADS)

    Wang, Yingbin; Jiao, Yan

    2015-05-01

    The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasts, and fisheries management. A catch curve-based method for estimating time-based Z and its trend of change from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not require the assumption of a constant Z over the whole period; instead, the Z values within n consecutive years are assumed constant, and the Z values in different n-year periods are estimated using the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations in both Z and recruitment can affect the estimates of the Z value and the trend of Z. The most appropriate value of n can differ depending on the effects of different factors; therefore, the appropriate value of n for different fisheries should be determined through a simulation analysis as demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z remain close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
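
    The windowed catch-curve idea can be sketched as follows, assuming Z constant within each n-year window and pooling the log-linear regression over ages; this is a simplification of the paper's estimator, and the data are synthetic.

```python
import numpy as np

def z_by_window(cpue_by_year_age, ages, n=3):
    """Estimate Z in each n-year window from age-structured CPUE,
    using the catch-curve relation ln(CPUE) ~ -Z * age."""
    years = sorted(cpue_by_year_age)
    z_est = {}
    for start in range(0, len(years) - n + 1):
        window = years[start:start + n]
        xs, ys = [], []
        for yr in window:
            for age in ages:
                xs.append(age)
                ys.append(np.log(cpue_by_year_age[yr][age]))
        slope, _ = np.polyfit(xs, ys, 1)   # slope of log CPUE vs age
        z_est[tuple(window)] = -slope
    return z_est

# Toy data: true Z = 0.5, CPUE of 100 at age 1, five years of surveys.
data = {yr: {a: 100.0 * np.exp(-0.5 * (a - 1)) for a in range(1, 6)}
        for yr in range(1990, 1995)}
print(z_by_window(data, ages=range(1, 6), n=3))   # ~0.5 in each window
```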

  10. Program documentation: Surface heating rate of thin skin models (THNSKN)

    NASA Technical Reports Server (NTRS)

    Mcbryde, J. D.

    1975-01-01

    Program THNSKN computes the mean heating rate at a maximum of 100 locations on the surface of thin-skin transient heating rate models. Output is printed in tabular form and consists of time-history tabulations of temperatures, average temperatures, heat loss without conduction correction, mean heating rates, least-squares heating rates, and the percent standard error of the least-squares heating rates. The input tape used is produced by the program EHTS03.
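
    The underlying data reduction is standard thin-skin calorimetry; a sketch assuming the least-squares slope of the temperature trace is shown below (material values are placeholders, and conduction and radiation corrections are ignored).

```python
import numpy as np

def thin_skin_heating_rate(t, T, rho, c, thickness):
    """Thin-skin calorimetry: q = rho * c * thickness * dT/dt.

    The temperature-time slope is taken by least squares, analogous to
    the program's 'least squares heating rate' output.
    t: times (s), T: temperatures (K), rho: kg/m3, c: J/(kg K),
    thickness: skin thickness (m). Returns W/m2.
    """
    dTdt, _ = np.polyfit(t, T, 1)          # K/s by least squares
    return rho * c * thickness * dTdt

t = np.linspace(0.0, 2.0, 50)              # s
T = 300.0 + 25.0 * t                        # synthetic temperature trace
print(thin_skin_heating_rate(t, T, rho=8900.0, c=385.0, thickness=0.0008))
```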

  11. Does childhood cancer affect parental divorce rates? A population-based study.

    PubMed

    Syse, Astri; Loge, Jon H; Lyngstad, Torkild H

    2010-02-10

    PURPOSE Cancer in children may profoundly affect parents' personal relationships in terms of psychological stress and an increased care burden. This could hypothetically elevate divorce rates. Few studies on divorce occurrence exist, so the effect of childhood cancers on parental divorce rates was explored. PATIENTS AND METHODS Data on the entire Norwegian married population, age 17 to 69 years, with children age 0 to 20 years in 1974 to 2001 (N = 977,928 couples) were retrieved from the Cancer Registry, the Central Population Register, the Directorate of Taxes, and population censuses. Divorce rates for 4,590 couples who were parenting a child with cancer were compared with those of otherwise similar couples by discrete-time hazard regression models. RESULTS Cancer in a child was not associated with an increased risk of parental divorce overall. An increased divorce rate was observed with Wilms tumor (odds ratio [OR], 1.52) but not with any of the other common childhood cancers. The child's age at diagnosis, time elapsed from diagnosis, and death from cancer did not influence divorce rates significantly. Increased divorce rates were observed for couples in whom the mothers had an education greater than high school level (OR, 1.16); the risk was particularly high shortly after diagnosis, for CNS cancers and Wilms tumors, for couples with children 0 to 9 years of age at diagnosis, and after a child's death. CONCLUSION This large, registry-based study shows that cancer in children is not associated with an increased parental divorce rate, except with Wilms tumors. Couples in whom the wife is highly educated appear to face increased divorce rates after a child's cancer, and this may warrant additional study.

  12. A High-Rate, Single-Crystal Model for Cyclotrimethylene Trinitramine including Phase Transformations and Plastic Slip

    DOE PAGES

    Addessio, Francis L.; Luscher, Darby Jon; Cawkwell, Marc Jon; ...

    2017-05-14

    A continuum model for the high-rate, thermo-mechanical deformation of single-crystal cyclotrimethylene trinitramine (RDX) is developed. The model includes the effects of anisotropy, large deformations, nonlinear thermo-elasticity, phase transformations, and plastic slip. A multiplicative decomposition of the deformation gradient is used. The volumetric elastic component of the deformation is accounted for through a free-energy based equation of state for the low- (α) and high-pressure (γ) polymorphs of RDX. Crystal plasticity is addressed using a phenomenological thermal activation model. The deformation gradient for the phase transformation is based on an approach that has been applied to martensitic transformations. Simulations were conducted and compared to high-rate, impact loading of oriented RDX single crystals. The simulations considered multiple orientations of the crystal relative to the direction of shock loading and multiple sample thicknesses. Thirteen slip systems, which were inferred from indentation and x-ray topography, were used to model the α-polymorph. It is shown that by increasing the number of slip systems from the previously considered number of six (6) to thirteen (13) in the α-polymorph, better comparisons with data may be obtained. Simulations of impact conditions in the vicinity of the α- to γ-polymorph transformation (3.8 GPa) are considered. Eleven of the simulations, which were at pressures below the transformation value (3.0 GPa), were compared to experimental data. Comparison of the model was also made with available data for one experiment above the transformation pressure (4.4 GPa). Also, simulations are provided for a nominal pressure of 7.5 GPa to demonstrate the effect of the transformation kinetics on the deformation of a high-rate plate impact problem.

  13. A High-Rate, Single-Crystal Model for Cyclotrimethylene Trinitramine including Phase Transformations and Plastic Slip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addessio, Francis L.; Luscher, Darby Jon; Cawkwell, Marc Jon

    A continuum model for the high-rate, thermo-mechanical deformation of single-crystal cyclotrimethylene trinitramine (RDX) is developed. The model includes the effects of anisotropy, large deformations, nonlinear thermo-elasticity, phase transformations, and plastic slip. A multiplicative decomposition of the deformation gradient is used. The volumetric elastic component of the deformation is accounted for through a free-energy based equation of state for the low- (α) and high-pressure (γ) polymorphs of RDX. Crystal plasticity is addressed using a phenomenological thermal activation model. The deformation gradient for the phase transformation is based on an approach that has been applied to martensitic transformations. Simulations were conducted and compared to high-rate, impact loading of oriented RDX single crystals. The simulations considered multiple orientations of the crystal relative to the direction of shock loading and multiple sample thicknesses. Thirteen slip systems, which were inferred from indentation and x-ray topography, were used to model the α-polymorph. It is shown that by increasing the number of slip systems from the previously considered number of six (6) to thirteen (13) in the α-polymorph, better comparisons with data may be obtained. Simulations of impact conditions in the vicinity of the α- to γ-polymorph transformation (3.8 GPa) are considered. Eleven of the simulations, which were at pressures below the transformation value (3.0 GPa), were compared to experimental data. Comparison of the model was also made with available data for one experiment above the transformation pressure (4.4 GPa). Also, simulations are provided for a nominal pressure of 7.5 GPa to demonstrate the effect of the transformation kinetics on the deformation of a high-rate plate impact problem.

  14. Development of building energy asset rating using stock modelling in the USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Na; Goel, Supriya; Makhmalbaf, Atefe

    2016-01-29

    The US Building Energy Asset Score helps building stakeholders quickly gain insight into the efficiency of building systems (envelope, electrical and mechanical systems). A robust, easy-to-understand 10-point scoring system was developed to facilitate an unbiased comparison of similar building types across the country. The Asset Score does not rely on a database or specific building baselines to establish a rating. Rather, distributions of energy use intensity (EUI) for various building use types were constructed using Latin hypercube sampling and converted to a series of stepped linear scales to score buildings. A score is calculated based on the modelled source EUI after adjusting for climate. A web-based scoring tool, which incorporates an analytical engine and a simulation engine, was developed to standardize energy modelling and reduce implementation cost. This paper discusses the methodology used to perform several hundred thousand building simulation runs and develop the scoring scales.

  15. Energy transfer model and its applications of ultrasonic gas flow-meter under static and dynamic flow rates

    NASA Astrophysics Data System (ADS)

    Fang, Min; Xu, Ke-Jun; Zhu, Wen-Jiao; Shen, Zi-Wen

    2016-01-01

    Most ultrasonic gas flow-meters measure the gas flow rate by calculating the ultrasonic transmission time difference between the downstream and upstream directions. Ultrasonic energy attenuation occurs in the processes of ultrasonic generation, conversion, transmission, and reception. In addition, the gas flow itself affects the ultrasonic propagation during the measurement, which results in ultrasonic energy attenuation and an offset of the ultrasonic propagation path. Thus, the ultrasonic energy received by the transducer is weaker. When the gas flow rate increases, this effect becomes more apparent; it reduces the measurement accuracy and narrows the measurement range. An energy transfer model of the ultrasonic gas flow-meter, both without and with gas flow, is established by statistical analysis and curve fitting based on a large amount of experimental data. The static sub-model, without gas flow, expresses the energy conversion efficiency of the ultrasonic gas transducers, and the dynamic sub-model, with gas flow, reflects the energy attenuation pattern as the flow rate varies. The mathematical model can be used to determine the minimum energy of the excitation signal that meets the requirement of a specific measurement range, and to predict the maximum measurable flow rate for a fixed excitation signal energy. Based on the above studies, a method to enhance the excitation signal energy under a finite output power of the transmitting circuit is proposed, so as to extend the measurement range of the ultrasonic gas flow-meter.
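
    The transit-time principle mentioned in the first sentence reduces to one formula; the geometry values below are illustrative.

```python
import math

def axial_velocity(t_up, t_down, path_length, angle_deg):
    """Transit-time difference method for an ultrasonic flow-meter.

    From t_down = L/(c + v*cos(a)) and t_up = L/(c - v*cos(a)):
        v = L * (t_up - t_down) / (2 * cos(a) * t_up * t_down)
    which conveniently eliminates the sound speed c.
    """
    a = math.radians(angle_deg)
    return path_length * (t_up - t_down) / (2.0 * math.cos(a) * t_up * t_down)

# Example: 0.1 m path at 45 degrees, ~290 microsecond transit times.
print(axial_velocity(t_up=292.0e-6, t_down=288.0e-6,
                     path_length=0.1, angle_deg=45.0))   # ~3.4 m/s
```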

  16. A simplified 137Cs transport model for estimating erosion rates in undisturbed soil.

    PubMed

    Zhang, Xinbao; Long, Yi; He, Xiubin; Fu, Jiexiong; Zhang, Yunqi

    2008-08-01

    (137)Cs is an artificial radionuclide with a half-life of 30.12 years, which was released into the environment as a result of atmospheric testing of thermo-nuclear weapons, primarily during the 1950s-1970s, with the maximum rate of (137)Cs fallout from the atmosphere in 1963. (137)Cs fallout is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil when it falls to the ground, mostly with precipitation. Its subsequent redistribution is associated with movements of the soil or sediment particles. The (137)Cs nuclide tracing technique has been used for assessment of soil losses for both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the (137)Cs depth distribution in the profile, where the maximum (137)Cs occurs in the surface horizon and decreases exponentially with depth. The model implied that the total (137)Cs fallout amount was deposited on the earth surface in 1963 and that the (137)Cs profile shape has not changed with time. The model has been widely used for assessment of soil losses on undisturbed land. However, temporal variations of the (137)Cs depth distribution in undisturbed soils after its deposition on the ground, due to downward transport processes, are not considered in the previous simple profile-shape model; thus, soil losses are overestimated by that model. On the basis of the erosion assessment model developed by Walling, D.E., He, Q. [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the (137)Cs transport process in the eroded soil profile, make some simplifications to the model, and develop a method to estimate the soil erosion rate more expediently. To compare the soil erosion rates calculated by the simple profile-shape model and the simple transport model, the soil losses related to different (137)Cs loss proportions of the reference inventory at the Kaixian
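
    For orientation, the simple profile-shape estimate that the paper refines can be written directly; the exponential inventory profile is the model's assumption, and the numbers below are invented.

```python
import numpy as np

def erosion_depth(A_measured, A_ref, h0):
    """Profile-shape estimate of cumulative erosion depth.

    Assumes the 137Cs inventory below depth x falls off as
    A(x) = A_ref * exp(-x / h0), so a measured inventory A gives
    depth = h0 * ln(A_ref / A). Depth and h0 are in the same units
    (often mass depth, kg/m2). This ignores the downward-transport
    correction the paper adds.
    """
    return h0 * np.log(A_ref / np.asarray(A_measured, dtype=float))

# Example: 20% inventory loss relative to the reference, h0 = 4 kg/m2.
print(erosion_depth(0.8 * 2500.0, 2500.0, 4.0))   # ~0.89 kg/m2
```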

  17. Estimation of Eddy Dissipation Rates from Mesoscale Model Simulations

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2012-01-01

    The Eddy Dissipation Rate is an important metric for representing the intensity of atmospheric turbulence and is used as an input parameter for predicting the decay of aircraft wake vortices. In this study, forecasts of eddy dissipation rates obtained from a current state-of-the-art mesoscale model are evaluated for terminal area applications. The Weather Research and Forecast mesoscale model is used to simulate the planetary boundary layer at high horizontal and vertical mesh resolutions. The Bougeault-Lacarrère and Mellor-Yamada-Janjić schemes implemented in the Weather Research and Forecast model are evaluated against data collected during the National Aeronautics and Space Administration's Memphis Wake Vortex Field Experiment. Comparisons with other observations are included as well.

  18. A fugacity-based indoor residential pesticide fate model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Deborah H.; Furtaw, Edward J.; McKone, Thomas E.

    Dermal and non-dietary pathways are potentially significant exposure pathways to pesticides used in residences. Exposure pathways include dermal contact with residues on surfaces, ingestion from hand- and object-to-mouth activities, and absorption of pesticides into food. A limited amount of data has been collected on pesticide concentrations in various residential compartments following an application, but models are needed to interpret these data and make predictions about other pesticides based on chemical properties. In this paper, we propose a mass-balance compartment model based on fugacity principles. We include air (both gas phase and aerosols), carpet, smooth flooring, and walls as model compartments. Pesticide concentrations on furniture and toys, and in food, are being added to the model as data become available. We determine the compartmental fugacity capacity and mass transfer-rate coefficient for wallboard as an example. We also present the framework and equations needed for a dynamic mass-balance model.
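
    A two-compartment sketch of the fugacity bookkeeping such a model rests on is shown below; all D values, fugacity capacities and volumes are invented for illustration, and the paper derives its own per compartment.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two compartments (air, carpet) with dM_i/dt = sum_j D_ji*f_j - f_i*sum_j D_ij
# and fugacity f = M / (V * Z).
V = np.array([300.0, 3.0])          # m3: air, carpet (bulk)
Z = np.array([4e-4, 10.0])          # mol m-3 Pa-1: fugacity capacities
D = np.array([[0.0, 5e-3],          # D[i, j]: transfer i -> j, mol Pa-1 h-1
              [1e-3, 0.0]])
loss = np.array([2e-3, 1e-4])       # ventilation / degradation D values

def dmdt(t, M):
    f = M / (V * Z)                 # compartment fugacities (Pa)
    gain = D.T @ f                  # inflow to each compartment
    out = f * (D.sum(axis=1) + loss)
    return gain - out

sol = solve_ivp(dmdt, (0.0, 168.0), y0=[1e-3, 0.0], rtol=1e-8)  # one week
print(sol.y[:, -1])                 # moles of pesticide in air, carpet
```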

  19. Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates.

    DTIC Science & Technology

    1984-03-01

    Research Report CCS 481: Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates, by P. Brockett and B. Golany, March 1984. The tests are applied to U.S. dollar to Japanese yen foreign exchange rates; conclusions and discussion are given in Section VI.

  20. Characterization of exchange rate regimes based on scaling and correlation properties of volatility for ASEAN-5 countries

    NASA Astrophysics Data System (ADS)

    Muniandy, Sithi V.; Uning, Rosemary

    2006-11-01

    Foreign currency exchange rate policies of ASEAN member countries have undergone tremendous changes following the 1997 Asian financial crisis. In this paper, we study the fractal and long-memory characteristics in the volatility of the five ASEAN founding members' exchange rates with respect to the US dollar. The impact of the exchange rate policies implemented by the ASEAN-5 countries on currency fluctuations during the pre-, mid- and post-crisis periods is briefly discussed. The time series considered are daily price returns, absolute returns and aggregated absolute returns, each partitioned into three segments based on the crisis regimes. These time series are then modeled using fractional Gaussian noise, the fractionally integrated ARFIMA(0,d,0) model and the generalized Cauchy process. The first two stationary models describe long-range dependence through the Hurst and fractional differencing parameters, respectively. Meanwhile, the generalized Cauchy process offers independent estimation of the fractal dimension and the long-memory exponent. In comparison, among the three models we found that the generalized Cauchy process showed greater sensitivity to the transitions between exchange rate regimes implemented by the ASEAN-5 countries.
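
    As a simple nonparametric point of comparison with the three fitted models, the Hurst exponent can also be estimated by the aggregated-variance method.

```python
import numpy as np

def hurst_aggregated_variance(x, scales=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst exponent H.

    For a self-similar series, the variance of the m-aggregated mean
    scales as m**(2H - 2); H follows from the log-log slope. This is a
    standard estimator, not one of the three models fitted in the paper.
    """
    x = np.asarray(x, dtype=float)
    logs_m, logs_v = [], []
    for m in scales:
        k = len(x) // m
        agg = x[:k * m].reshape(k, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(agg.var()))
    slope, _ = np.polyfit(logs_m, logs_v, 1)
    return 1.0 + slope / 2.0

rng = np.random.default_rng(2)
returns = rng.normal(size=4096)                      # i.i.d. noise
print(hurst_aggregated_variance(np.abs(returns)))    # expect H near 0.5
```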

  1. High-Strain Rate Failure Modeling Incorporating Shear Banding and Fracture

    DTIC Science & Technology

    2017-11-22

    High Strain Rate Failure Modeling Incorporating Shear Banding and Fracture. Report as of 05-Dec-2017; Agreement Number: W911NF-13-1-0238; Organization: Columbia University. The views, opinions and/or findings contained in the report are those of the authors.

  2. Measurement of the volume growth rate of single budding yeast with the MOSFET-based microfluidic Coulter counter

    PubMed Central

    Sun, Jiashu; Stowers, Chris C.; Boczko, Erik M.

    2012-01-01

    We report on measurements of the volume growth rate of ten individual budding yeast cells using a recently developed MOSFET-based microfluidic Coulter counter. The MOSFET-based microfluidic Coulter counter is very sensitive, provides signals that are immune from baseline drift, and can work with cell culture media of complex composition. These desirable features allow us to directly measure the volume growth rate of single cells of Saccharomyces cerevisiae LYH3865 strain budding yeast in YNB culture media over a whole cell cycle. Results indicate that all budding yeast follow a sigmoid volume growth profile with reduced growth rates at the initial stage, before the bud emerges, and at the final stage, after the daughter cell matures. Analysis of the data indicates that even though piecewise linear, Gompertz, and Hill's function models can all fit the global growth profile equally well, the data strongly support a local exponential growth phenomenon. Accurate volume growth measurements are important for applications in systems biology where quantitative parameters are required for modeling and simulation. PMID:20717618

  3. Measurement of the volume growth rate of single budding yeast with the MOSFET-based microfluidic Coulter counter.

    PubMed

    Sun, Jiashu; Stowers, Chris C; Boczko, Erik M; Li, Deyu

    2010-11-07

    We report on measurements of the volume growth rate of ten individual budding yeast cells using a recently developed MOSFET-based microfluidic Coulter counter. The MOSFET-based microfluidic Coulter counter is very sensitive, provides signals that are immune from baseline drift, and can work with cell culture media of complex composition. These desirable features allow us to directly measure the volume growth rate of single cells of Saccharomyces cerevisiae LYH3865 strain budding yeast in YNB culture media over a whole cell cycle. Results indicate that all budding yeast follow a sigmoid volume growth profile with reduced growth rates at the initial stage, before the bud emerges, and at the final stage, after the daughter cell matures. Analysis of the data indicates that even though piecewise linear, Gompertz, and Hill's function models can all fit the global growth profile equally well, the data strongly support a local exponential growth phenomenon. Accurate volume growth measurements are important for applications in systems biology where quantitative parameters are required for modeling and simulation.
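
    Since the abstract compares piecewise linear, Gompertz and Hill-type fits, here is a minimal Gompertz fit to a synthetic single-cell volume trace; this uses one common initial-value parameterization, and the data and parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, v0, a, b):
    """Gompertz volume growth: V(t) = v0 * exp(a/b * (1 - exp(-b*t)))."""
    return v0 * np.exp(a / b * (1.0 - np.exp(-b * t)))

# Synthetic single-cell volume trace (fL) over one cell cycle; real
# traces would come from the MOSFET-based Coulter counter.
t = np.linspace(0.0, 90.0, 30)                     # minutes
v_true = gompertz(t, 30.0, 0.02, 0.03)
v_obs = v_true + np.random.default_rng(6).normal(0.0, 0.5, t.size)

params, _ = curve_fit(gompertz, t, v_obs, p0=[30.0, 0.01, 0.01])
print(params)   # fitted v0, a, b
```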

  4. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and sampling with a more realistic inverse-J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample size or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
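
    The Jensen's-inequality effect is easy to reproduce with a toy two-stage projection matrix; the matrix and vital rates below are invented, not the study's.

```python
import numpy as np

rng = np.random.default_rng(3)

def lam(survival, growth, fertility):
    """Dominant eigenvalue (lambda) of a toy 2-stage projection matrix."""
    A = np.array([[survival * (1 - growth), fertility],
                  [survival * growth,       0.9]])
    return np.max(np.abs(np.linalg.eigvals(A)))

s_true, g_true, f_true = 0.5, 0.3, 1.2
print("true lambda:", lam(s_true, g_true, f_true))

# Estimate survival from n sampled individuals, recompute lambda, and
# look at the mean bias: it shrinks as the sample size grows.
for n in (10, 50, 500):
    lams = [lam(rng.binomial(n, s_true) / n, g_true, f_true)
            for _ in range(2000)]
    print(n, np.mean(lams) - lam(s_true, g_true, f_true))
```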

  5. Vehicle crashworthiness ratings in Australia.

    PubMed

    Cameron, M; Mach, T; Neiger, D; Graham, A; Ramsay, R; Pappas, M; Haley, J

    1994-08-01

    The paper reviews the published vehicle safety ratings based on mass crash data from the United States, Sweden, and Great Britain. It then describes the development of vehicle crashworthiness ratings based on injury compensation claims and police accident reports from Victoria and New South Wales, the two most populous states in Australia. Crashworthiness was measured by a combination of injury severity (of injured drivers) and injury risk (of drivers involved in crashes). Injury severity was based on 22,600 drivers injured in crashes in the two states. Injury risk was based on 70,900 drivers in New South Wales involved in crashes after which a vehicle was towed away. Injury risk measured in this way was compared with the "relative injury risk" of particular model cars involved in two car crashes in Victoria (where essentially only casualty crashes are reported), which was based on the method developed by Folksam Insurance in Sweden from Evans' double-pair comparison method. The results include crashworthiness ratings for the makes and models crashing in Australia in sufficient numbers to measure their crash performance adequately. The ratings were normalised for the driver sex and speed limit at the crash location, the two factors found to be strongly related to injury risk and/or severity and to vary substantially across makes and models of Australian crash-involved cars. This allows differences in crashworthiness of individual models to be seen, uncontaminated by major crash exposure differences.

  6. Metabolically-Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates (External Review Draft)

    EPA Science Inventory

    EPA has released a draft report entitled, Metabolically-Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates, for independent external peer review and public comment. NCEA published the Exposure Factors Handbook in 1997. This comprehens...

  7. A neural circuit transforming temporal periodicity information into a rate-based representation in the mammalian auditory system.

    PubMed

    Dicke, Ulrike; Ewert, Stephan D; Dau, Torsten; Kollmeier, Birger

    2007-01-01

    Periodic amplitude modulations (AMs) of an acoustic stimulus are presumed to be encoded in temporal activity patterns of neurons in the cochlear nucleus. Physiological recordings indicate that this temporal AM code is transformed into a rate-based periodicity code along the ascending auditory pathway. The present study suggests a neural circuit for the transformation from the temporal to the rate-based code. Due to the neural connectivity of the circuit, bandpass shaped rate modulation transfer functions are obtained that correspond to recorded functions of inferior colliculus (IC) neurons. In contrast to previous modeling studies, the present circuit does not employ a continuously changing temporal parameter to obtain different best modulation frequencies (BMFs) of the IC bandpass units. Instead, different BMFs are yielded from varying the number of input units projecting onto different bandpass units. In order to investigate the compatibility of the neural circuit with a linear modulation filterbank analysis as proposed in psychophysical studies, complex stimuli such as tones modulated by the sum of two sinusoids, narrowband noise, and iterated rippled noise were processed by the model. The model accounts for the encoding of AM depth over a large dynamic range and for modulation frequency selective processing of complex sounds.

  8. Discovery of novel mGluR1 antagonists: a multistep virtual screening approach based on an SVM model and a pharmacophore hypothesis significantly increases the hit rate and enrichment factor.

    PubMed

    Li, Guo-Bo; Yang, Ling-Ling; Feng, Shan; Zhou, Jian-Ping; Huang, Qi; Xie, Huan-Zhang; Li, Lin-Li; Yang, Sheng-Yong

    2011-03-15

    Development of glutamate non-competitive antagonists of mGluR1 (metabotropic glutamate receptor subtype 1) has attracted increasing attention in recent years due to their potential therapeutic application for various nervous disorders. Since no crystal structure has been reported for mGluR1, ligand-based virtual screening (VS) methods, typically pharmacophore-based VS (PB-VS), are often used for the discovery of mGluR1 antagonists. Nevertheless, PB-VS usually suffers from a lower hit rate and enrichment factor. In this investigation, we established a multistep ligand-based VS approach that is based on a support vector machine (SVM) classification model and a pharmacophore model. Performance evaluation of these methods in virtual screening against a large independent test set, M-MDDR, shows that the multistep VS approach significantly increases the hit rate and enrichment factor compared with the individual SB-VS and PB-VS methods. The multistep VS approach was then used to screen several large chemical libraries, including PubChem, Specs, and Enamine. Finally, a total of 20 compounds were selected from the top-ranking compounds and shifted to subsequent in vitro and in vivo studies, the results of which will be reported in the near future. Copyright © 2011 Elsevier Ltd. All rights reserved.
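
    The multistep idea, a cheap classifier filter followed by the slower pharmacophore match, can be sketched as follows; the descriptors, data, and pharmacophore check are all placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Train an SVM on known actives/inactives (synthetic descriptors here),
# prune the library with it, then apply the pharmacophore filter only
# to the survivors.
rng = np.random.default_rng(5)
X_train = rng.normal(size=(400, 50))          # molecular descriptors
y_train = rng.integers(0, 2, 400)             # 1 = mGluR1 antagonist

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
svm.fit(X_train, y_train)

def pharmacophore_match(descriptor) -> bool:
    """Placeholder for the pharmacophore-hypothesis filter."""
    return bool(descriptor[0] > 0.0)

library = rng.normal(size=(10000, 50))
svm_pass = library[svm.predict(library) == 1]           # step 1: SVM
hits = [m for m in svm_pass if pharmacophore_match(m)]  # step 2: pharmacophore
print(len(svm_pass), len(hits))
```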

  9. Development of a reactive burn model based on an explicit viscoplastic pore collapse model

    NASA Astrophysics Data System (ADS)

    Bouton, E.; Lefrançois, A.; Belmas, R.

    2017-01-01

    The aim of this study is to develop a reactive burn model based upon a microscopic hot-spot model to compute the shock initiation of pressed TATB high explosives. Such a model has been implemented in a Lagrangian hydrodynamic code. In our calculations, 8 pore radii, ranging from 40 nm to 0.63 μm, have been taken into account, and the porosity fraction associated with each void radius has been deduced from Ultra-Small-Angle X-ray Scattering (USAXS) measurements for PBX-9502. The last parameter of our model is a burn rate that depends on three variables. The first two are the reaction progress variable and the lead shock pressure; the last one is the number of chemical reaction sites produced in the flow, calculated by the microscopic model. This burn rate has been calibrated by fitting pressure profiles, velocity profiles and run distances to detonation. As the computed results are in close agreement with the measured ones, this model is able to perform a wide variety of numerical simulations including single and double shock waves and the desensitization phenomenon.

  10. Identifying High-Rate Flows Based on Sequential Sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Fang, Binxing; Luo, Hao

    We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security such as the detection of distributed denial-of-service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows needs correspondingly large high-speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted, which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time and low memory and processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on a fixed sample size test (FSST), which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore a second, novel method based on the truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove the low-rate flows and identify the high-rate flows at an early stage, which reduces the memory cost and identification time, respectively. According to the way the parameters in TSPRT are determined, two versions of TSPRT are proposed: TSPRT-M, which is suitable when low memory cost is preferred, and TSPRT-T, which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement as compared to previously proposed methods.
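
    A minimal sketch of the sequential probability ratio test at the core of TSPRT, assuming a per-flow Bernoulli model of sampled packets; p0, p1, the error rates, and the truncation point below are illustrative, not the paper's TSPRT-M/TSPRT-T parameter choices.

```python
# Sequential probability ratio test (SPRT) deciding whether a flow's share of
# sampled packets exceeds a high-rate threshold, with truncation at n_max.
import math
import random

class FlowSPRT:
    def __init__(self, p0=0.001, p1=0.01, alpha=0.01, beta=0.01, n_max=10000):
        self.llr = 0.0            # running log-likelihood ratio
        self.n = 0
        self.n_max = n_max        # truncation: force a decision eventually
        self.upper = math.log((1 - beta) / alpha)   # accept "high-rate"
        self.lower = math.log(beta / (1 - alpha))   # accept "low-rate", drop flow
        self.step_hit = math.log(p1 / p0)
        self.step_miss = math.log((1 - p1) / (1 - p0))

    def observe(self, packet_belongs_to_flow: bool) -> str:
        """Update with one sampled packet; return 'high', 'low' or 'undecided'."""
        self.n += 1
        self.llr += self.step_hit if packet_belongs_to_flow else self.step_miss
        if self.llr >= self.upper:
            return "high"
        if self.llr <= self.lower or self.n >= self.n_max:
            return "low"          # low-rate flows are removed early, saving memory
        return "undecided"

# Toy usage: a flow carrying ~2% of sampled traffic is flagged as high-rate.
random.seed(1)
t = FlowSPRT()
verdict = "undecided"
while verdict == "undecided":
    verdict = t.observe(random.random() < 0.02)
print(verdict, "after", t.n, "sampled packets")
```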

  11. The dynamic compressive behavior and constitutive modeling of D1 railway wheel steel over a wide range of strain rates and temperatures

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Su, Xingya; Zhao, Longmao

    The dynamic compressive behavior of D1 railway wheel steel at high strain rates was investigated using a split Hopkinson pressure bar (SHPB) apparatus. Three types of specimens, which were derived from different positions (i.e., the rim, web and hub) of a railway wheel, were tested over a wide range of strain rates from 10⁻³ s⁻¹ to 2.4 × 10³ s⁻¹ and temperatures from 213 K to 973 K. The influences of strain rate and temperature on the flow stress are discussed, and rate- and temperature-dependent constitutive relationships were assessed with the Cowper-Symonds model, the Johnson-Cook model and a physically-based model, respectively. The experimental results show that the compressive true stress versus true strain response of D1 wheel steel is strain-rate dependent, and the strain hardening rate during the plastic flow stage decreases with increasing strain rate. In addition, the D1 wheel steel displays obvious temperature dependence, and the third-type strain aging (3rd SA) occurs in the temperature region of 673-973 K at a strain rate of ∼1500 s⁻¹. Comparisons of experimental results with theoretical predictions indicate that the physically-based model has a better prediction capability for the 3rd SA characteristic of the tested D1 wheel steel.
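
    Of the constitutive relationships assessed, the Johnson-Cook model has a compact closed form, sketched below; the material constants are generic placeholders, not fitted D1-steel values.

```python
# Johnson-Cook flow stress:
# sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m),
# where T* is the homologous temperature. Constants are illustrative only.
import numpy as np

def johnson_cook(eps, rate, T, A=350e6, B=600e6, n=0.3, C=0.02, m=1.0,
                 rate0=1e-3, T_ref=293.0, T_melt=1800.0):
    """Flow stress (Pa) for plastic strain eps, strain rate (1/s), temperature (K)."""
    T_star = (T - T_ref) / (T_melt - T_ref)      # homologous temperature
    return (A + B * eps**n) * (1.0 + C * np.log(rate / rate0)) * (1.0 - T_star**m)

# Rate sensitivity at 10% plastic strain, room temperature:
for rate in (1e-3, 1e0, 1.5e3):
    print(f"{rate:8.0e} 1/s -> {johnson_cook(0.1, rate, 293.0) / 1e6:6.1f} MPa")
```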

  12. Modeling and estimating the jump risk of exchange rates: Applications to RMB

    NASA Astrophysics Data System (ADS)

    Wang, Yiming; Tong, Hanfei

    2008-11-01

    In this paper we propose a new type of continuous-time stochastic volatility model, SVDJ, for the spot exchange rate of RMB and other foreign currencies. In the model, we assume that the change of the exchange rate can be decomposed into two components. One is the normally small-scale innovation driven by the diffusion motion; the other is a large drop or rise engendered by the Poisson counting process. Furthermore, we develop an MCMC method to estimate our model. Empirical results indicate the significant existence of jumps in the exchange rate. Jump components explain a large proportion of the exchange rate change.
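
    A minimal simulation in the spirit of this decomposition, holding volatility constant for brevity (the SVDJ model lets it vary stochastically); all parameter values are illustrative, whereas the paper estimates them by MCMC.

```python
# Euler discretization of a jump-diffusion log-rate: a Brownian "small"
# innovation plus Poisson-driven jumps. Parameters are invented for
# illustration only.
import numpy as np

rng = np.random.default_rng(42)
T, dt = 1000, 1.0 / 250                 # trading days, daily step in years
mu, sigma = 0.01, 0.05                  # drift and (constant) diffusion volatility
lam = 5.0                               # jump intensity (jumps per year)
jump_mu, jump_sigma = 0.0, 0.02         # jump-size distribution

log_s = np.zeros(T)
for t in range(1, T):
    diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal()
    n_jumps = rng.poisson(lam * dt)     # Poisson counting process
    jumps = rng.normal(jump_mu, jump_sigma, n_jumps).sum()
    log_s[t] = log_s[t - 1] + diffusion + jumps

rate = np.exp(log_s)                    # simulated exchange-rate path
print(rate[:5])
```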

  13. A citation-based assessment of the performance of U.S. boiling water reactors following extended power up-rates

    NASA Astrophysics Data System (ADS)

    Heidrich, Brenden J.

    multiple independent variables on the event rate. Both the Cox and Weibull formulations were tested. The Cox formulation is more commonly used in survival analysis because of its flexibility. The best Cox model included fixed effects at the multi-reactor site level. The Weibull parametric formulation has the same base hazard rate as the Crow/AMSAA model. This theoretical connection was confirmed through a series of tests that demonstrated both models predicted the same base hazard rates. The Weibull formulation produced a model with most of the same statistically significant variables as the Cox model. The beneficial effect of extended power up-rates was predicted by the proportional hazards models as well as the Crow/AMSAA model. The Weibull model also indicated an effect that can be traced back to a plant's construction. Performance was also found to improve in plants that had been divested from their original owners. This research developed a consistent evaluation toolkit for nuclear power plant performance, using either a univariate method, with simple graphical evaluation at its heart, or a more complex multivariate method that includes the effects of several independent variables, with data that are available from public sources. Utilities or regulators with access to proprietary data may be able to expand upon this research with additional data that are not readily available to an academic researcher. Even without access to special data, the methods developed are valuable tools for evaluating and predicting nuclear power plant reliability performance.

  14. Fnk Model of Cracking Rate Calculus for a Variable Asymmetry Coefficient

    NASA Astrophysics Data System (ADS)

    Roşca, Vâlcu; Miriţoiu, Cosmin Mihai

    2017-12-01

    In the study of materials fracture, a very important parameter is the crack growth rate da/dN. This paper proposes a comparative analysis of the cracking rate using four mathematical models: (1) the polynomial method, using successive iterations according to the ASTM E647 standard; (2) the Paris formula; (3) the Walker formula; (4) the NASGRO model, or Forman-Newman-Konig equation, abbreviated as the FNK model, which is used in NASA program studies. For the tests, CT-type specimens were made from stainless steel, class V2A, grade 10TiNiCr175, and loaded in variable axial-eccentric tension with asymmetry coefficients R = 0.1, 0.3 and 0.5, at a temperature of 213 K (-60°C). The variations of the cracking rate according to the above models are analyzed, especially through the FNK method, highlighting the variation with the asymmetry factor.
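
    For orientation, the three closed-form laws compared here can be sketched as follows; the material constants, threshold and critical stress-intensity values are placeholders, not the calibrated 10TiNiCr175 values.

```python
# Paris, Walker, and Forman-Newman-Konig (NASGRO) crack growth rates da/dN
# as functions of the stress-intensity range dK and asymmetry coefficient R.
# All constants are illustrative placeholders.
import numpy as np

def paris(dK, C=1e-11, m=3.0):
    return C * dK**m

def walker(dK, R, C=1e-11, m=3.0, gamma=0.5):
    return C * (dK / (1.0 - R)**(1.0 - gamma))**m

def fnk(dK, R, C=1e-11, n=3.0, p=0.5, q=0.5, f=0.3, dK_th=2.0, K_crit=60.0):
    """FNK/NASGRO rate; dK, dK_th, K_crit in MPa*sqrt(m), f is the opening ratio."""
    K_max = dK / (1.0 - R)
    base = C * (((1.0 - f) / (1.0 - R)) * dK)**n
    return base * (1.0 - dK_th / dK)**p / (1.0 - K_max / K_crit)**q

dK = 10.0  # MPa*sqrt(m)
for R in (0.1, 0.3, 0.5):  # the asymmetry coefficients used in the tests
    print(f"R={R}: Paris {paris(dK):.2e}, Walker {walker(dK, R):.2e}, "
          f"FNK {fnk(dK, R):.2e} m/cycle")
```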

  15. Modeling marine oily wastewater treatment by a probabilistic agent-based approach.

    PubMed

    Jing, Liang; Chen, Bing; Zhang, Baiyu; Ye, Xudong

    2018-02-01

    This study developed a novel probabilistic agent-based approach for modeling marine oily wastewater treatment processes. It begins by constructing a probability-based agent simulation model, followed by a global sensitivity analysis and a genetic algorithm-based calibration. The proposed modeling approach was tested through a case study of the removal of naphthalene from marine oily wastewater using UV irradiation. The removal of naphthalene was described by an agent-based simulation model using 8 types of agents and 11 reactions. Each reaction was governed by a probability parameter that determines its occurrence. The modeling results showed that the root mean square errors between modeled and observed removal rates were 8.73 and 11.03% for calibration and validation runs, respectively. Reaction competition was analyzed by comparing agent-based reaction probabilities, while the agents' heterogeneity was visualized by plotting their real-time spatial distribution, showing a strong potential for reactor design and process optimization. Copyright © 2017 Elsevier Ltd. All rights reserved.
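
    A toy probability-gated reaction step in the spirit of this approach is sketched below; the species, counts and the 0.15 occurrence probability are invented for illustration and merely stand in for the study's 8 agent types and 11 reactions.

```python
# Each reaction fires only if a Bernoulli draw against its probability
# parameter succeeds and its reactant agents are available.
import random

random.seed(0)
counts = {"naphthalene": 1000, "UV_photon": 5000, "product": 0}
reactions = [
    # (reactants, products, occurrence probability per encounter)
    ((("naphthalene", 1), ("UV_photon", 1)), (("product", 1),), 0.15),
]

for _ in range(5000):                        # simulated agent encounters
    reactants, products, prob = random.choice(reactions)
    if all(counts[s] >= n for s, n in reactants) and random.random() < prob:
        for s, n in reactants:
            counts[s] -= n
        for s, n in products:
            counts[s] += n

print(f"modeled naphthalene removal: {counts['product'] / 1000:.1%}")
```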

  16. A semi-empirical model for the complete orientation dependence of the growth rate for vapor phase epitaxy - Chloride VPE of GaAs

    NASA Technical Reports Server (NTRS)

    Seidel-Salinas, L. K.; Jones, S. H.; Duva, J. M.

    1992-01-01

    A semi-empirical model has been developed to determine the complete crystallographic orientation dependence of the growth rate for vapor phase epitaxy (VPE). Previous researchers have been able to determine this dependence for a limited range of orientations; however, our model yields relative growth rate information for any orientation. This model for diamond and zincblende structure materials is based on experimental growth rate data, gas phase diffusion, and surface reactions. Data for GaAs chloride VPE is used to illustrate the model. The resulting growth rate polar diagrams are used in conjunction with Wulff constructions to simulate epitaxial layer shapes as grown on patterned substrates. In general, this model can be applied to a variety of materials and vapor phase epitaxy systems.

  17. Can Sample-Specific Simulations Help Detect Low Base-Rate Taxonicity?

    ERIC Educational Resources Information Center

    Beach, Steven R. H.; Amir, Nader; Bau, Jinn Jonp

    2005-01-01

    The authors examined the role of the sample-specific simulations (SSS; A. M. Ruscio & J. Ruscio, 2002; J. Ruscio & A. M. Ruscio, 2004) procedure in detecting low base-rate taxa that might otherwise prove elusive. The procedure preserved key distributional characteristics for moderate to high base-rate taxa, but it performed inadequately for low…

  18. Economic policy optimization based on both one stochastic model and the parametric control theory

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

    A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated by the Bayesian approach, based on its log-linearization. The nonlinear model is verified by retroprognosis, by estimating stability indicators of the mappings specified by the model, and by estimating the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators computed from the estimated nonlinear model and from its log-linearization. On the basis of the nonlinear model, parametric control problems for economic growth and for the volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).

  19. A crystallographic model for nickel base single crystal alloys

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Stouffer, D. C.

    1988-01-01

    The purpose of this research is to develop a tool for the mechanical analysis of nickel-base single-crystal superalloys, specifically Rene N4, used in gas turbine engine components. This objective is achieved by developing a rate-dependent anisotropic constitutive model and implementing it in a nonlinear three-dimensional finite-element code. The constitutive model is developed from metallurgical concepts utilizing a crystallographic approach. An extension of Schmid's law is combined with the Bodner-Partom equations to model the inelastic tension/compression asymmetry and orientation-dependence in octahedral slip. Schmid's law is used to approximate the inelastic response of the material in cube slip. The constitutive equations model the tensile behavior, creep response and strain-rate sensitivity of the single-crystal superalloys. Methods for deriving the material constants from standard tests are also discussed. The model is implemented in a finite-element code, and the computed and experimental results are compared for several orientations and loading conditions.

  20. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    NASA Astrophysics Data System (ADS)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum, E_app(ε̇), for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates (ε̇). The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon-dot method based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited to the analysis of bulk compression and nano-indentation data of soft (bio)materials.
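
    The sketch below illustrates the single-curve fitting workflow under simplifying assumptions: it reads off a secant (apparent) modulus from a one-branch GM constant-rate response rather than the paper's tangent-modulus-derived expression, and fits synthetic data with scipy. All parameter values are invented.

```python
# Fit E_inf, E1, tau1 of a one-branch Generalized Maxwell model to apparent
# moduli measured at several strain rates. Under constant rate, the GM stress
# is sigma(eps) = E_inf*eps + E1*rate*tau1*(1 - exp(-eps/(rate*tau1))).
import numpy as np
from scipy.optimize import curve_fit

EPS_REF = 0.1   # reference strain at which the apparent modulus is read off

def e_app(rate, E_inf, E1, tau1):
    """Apparent (secant) modulus vs strain rate for a single-branch GM model."""
    stress = E_inf * EPS_REF + E1 * rate * tau1 * (1 - np.exp(-EPS_REF / (rate * tau1)))
    return stress / EPS_REF

# Synthetic "experimental" data at several strain rates (1/s):
rates = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])
true = (10e3, 40e3, 5.0)   # Pa, Pa, s (invented ground truth)
data = e_app(rates, *true) * (1 + 0.02 * np.random.default_rng(0).normal(size=rates.size))

params, _ = curve_fit(e_app, rates, data, p0=(5e3, 20e3, 1.0))
print("E_inf, E1, tau1 =", params)
```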

  1. A kinetic model for estimating net photosynthetic rates of cos lettuce leaves under pulsed light.

    PubMed

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2015-04-01

    Time-averaged net photosynthetic rate (Pn) under pulsed light (PL) is known to be affected by the PL frequency and duty ratio, even though the time-averaged photosynthetic photon flux density (PPFD) is unchanged. This phenomenon can be explained by considering that photosynthetic intermediates (PIs) are pooled during light periods and then consumed by partial photosynthetic reactions during dark periods. In this study, we developed a kinetic model to estimate Pn of cos lettuce (Lactuca sativa L. var. longifolia) leaves under PL based on the dynamics of the amount of pooled PIs. The model inputs are average PPFD, duty ratio, and frequency; the output is Pn. The rates of both PI accumulation and consumption at a given moment are assumed to be dependent on the amount of pooled PIs at that point. Required model parameters and three explanatory variables (average PPFD, frequency, and duty ratio) were determined for the simulation using Pn values under PL based on several combinations of the three variables. The model simulation for various PL levels with a wide range of time-averaged PPFDs, frequencies, and duty ratios further demonstrated that Pn under PL with high frequencies and duty ratios was comparable to, but did not exceed, Pn under continuous light, and also showed that Pn under PL decreased as either frequency or duty ratio was decreased.
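
    A toy realization of the pooled-intermediates idea, assuming the pool fills during light periods at a rate that saturates near capacity and is consumed in proportion to its level; the rate constants and capacity below are invented, not the paper's fitted parameters.

```python
# Time-averaged net photosynthetic rate (arbitrary units) under pulsed light,
# from forward-Euler integration of a single intermediate pool.
def mean_pn(ppfd_avg, freq, duty, k_in=0.02, k_out=5.0, p_max=1.0,
            cycles=200, steps_per_cycle=1000):
    period = 1.0 / freq
    dt = period / steps_per_cycle
    ppfd_on = ppfd_avg / duty          # same time-averaged PPFD for every duty ratio
    pool, fixed = 0.0, 0.0
    for step in range(cycles * steps_per_cycle):
        phase = (step % steps_per_cycle) / steps_per_cycle
        light = ppfd_on if phase < duty else 0.0
        accumulation = k_in * light * (1.0 - pool / p_max)   # fills during light
        consumption = k_out * pool                            # drains continuously
        pool += (accumulation - consumption) * dt
        fixed += consumption * dt
    return fixed / (cycles * period)

# Pn rises toward the continuous-light value as the duty ratio increases:
for duty in (0.25, 0.5, 0.75, 1.0):
    print(f"duty {duty:.2f}: Pn ~ {mean_pn(500, freq=10.0, duty=duty):.3f}")
```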

  2. Acoustically based fetal heart rate monitor

    NASA Technical Reports Server (NTRS)

    Baker, Donald A.; Zuckerwar, Allan J.

    1991-01-01

    The acoustically based fetal heart rate monitor permits an expectant mother to perform the fetal Non-Stress Test in her home. The potential market would include the one million U.S. pregnancies per year requiring this type of prenatal surveillance. The monitor uses polyvinylidene fluoride (PVF2) piezoelectric polymer film for the acoustic sensors, which are mounted in a seven-element array on a cummerbund. Evaluation of the sensor output signals utilizes a digital signal processor, which performs a linear prediction routine in real time. Clinical tests reveal that the acoustically based monitor provides Non-Stress Test records which are comparable to those obtained with a commercial ultrasonic transducer.

  3. Constraints based analysis of extended cybernetic models.

    PubMed

    Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M

    2015-11-01

    The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints-based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints-based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. Effects of the DRG-based prospective payment system operated by the voluntarily participating providers on the cesarean section rates in Korea.

    PubMed

    Lee, Kwangsoo; Lee, Sangil

    2007-05-01

    This study explored the effects of the diagnosis-related group (DRG)-based prospective payment system (PPS) operated by voluntarily participating organizations on the cesarean section (CS) rates, and analyzed whether the participating health care organizations had similar CS rates despite the varied participation periods. The study sample included delivery claims data from the Korean national health insurance program for the year 2003. Risk factors were identified and used in the adjustment model to distinguish the main reason for CS. Their risk-adjusted CS rates were compared by the reimbursement methods, and the organizations' internal and external environments were controlled. The final risk-adjustment model for the CS rates meets the criteria for an effective model. There were no significant differences of CS rates between providers in the DRG and fee-for-service system after controlling for organizational variables. The CS rates did not vary significantly depending on the providers' DRG participation periods. The results provide evidence that the DRG payment system operated by volunteering health care organizations had no impact on the CS rates, which can lower the quality of care. Although the providers joined the DRG system in different years, there were no differences in the CS rates among the DRG providers. These results support the future expansion of the DRG-based PPS plan to all health care services in Korea.

  5. New Constraints on Models for Time-Variable Displacement Rates on the San Jacinto Fault Zone, Southern California

    NASA Astrophysics Data System (ADS)

    Anderson, M.; Bennett, R.; Matti, J.

    2004-12-01

    Existing geodetic, geomorphic, and geologic studies yield apparently conflicting estimates of fault displacement rates over the last 1.5 m.y. in the greater San Andreas fault (SAF) system of southern California. Do these differences reflect biases in one or more of the inference methods, or is fault displacement really temporally variable? Arguments have been presented for both cases. We investigate the plausibility of variable-rate fault models by combining basin deposit provenance, fault trenching, seismicity, gravity, and magnetic data sets from the San Bernardino basin. These data allow us to trace the path and broad timing of strike-slip fault displacements in buried basement rocks, which in turn allows us to test whether variable-fault-rate models fit the displacement path and rate data through the basin. The San Bernardino basin lies between the San Jacinto fault (SJF) and the SAF. Isostatic gravity signatures show a 2 km deep graben centered directly over the modern strand of the SJF, whereas the basin is shallow and asymmetric next to the SAF. This observation indicates that the stresses necessary to create the basin have been centered on the SJF for most of the basin's history. Linear magnetic anomalies, used as geologic markers, are offset ˜25 km across the northernmost strands of the SJF, which matches offset estimates south of the basin. These offset anomalies indicate that the SJF and SAF are discrete fault systems that do not directly interact south of the San Gabriel Mountains; therefore spatial slip variability combined with sparse sampling cannot explain the conflicting rate data. Furthermore, analyses of basin deposits indicate that movement on the SJF began between 1.3 and 1.5 Ma, yielding an overall average displacement rate in the range of 17 to 19 mm/yr, which is higher than some shorter-term estimates based on geodesy and geomorphology. Average displacement rates over this same time period for the San Bernardino strand of the SAF, on the

  6. Modeling heart rate variability including the effect of sleep stages

    NASA Astrophysics Data System (ADS)

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking the sleep architecture into account is crucial for modeling human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that, in comparison with real data, the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow modeling of heart rate variability in sleep disorders. This possibility is briefly discussed.

  7. An Administrative Claims Model for Profiling Hospital 30-Day Mortality Rates for Pneumonia Patients

    PubMed Central

    Bratzler, Dale W.; Normand, Sharon-Lise T.; Wang, Yun; O'Donnell, Walter J.; Metersky, Mark; Han, Lein F.; Rapp, Michael T.; Krumholz, Harlan M.

    2011-01-01

    Background Outcome measures for patients hospitalized with pneumonia may complement process measures in characterizing quality of care. We sought to develop and validate a hierarchical regression model using Medicare claims data that produces hospital-level, risk-standardized 30-day mortality rates useful for public reporting for patients hospitalized with pneumonia. Methodology/Principal Findings Retrospective study of fee-for-service Medicare beneficiaries age 66 years and older with a principal discharge diagnosis of pneumonia. Candidate risk-adjustment variables included patient demographics, administrative diagnosis codes from the index hospitalization, and all inpatient and outpatient encounters from the year before admission. The model derivation cohort included 224,608 pneumonia cases admitted to 4,664 hospitals in 2000, and validation cohorts included cases from each of years 1998–2003. We compared model-derived state-level standardized mortality estimates with medical record-derived state-level standardized mortality estimates using data from the Medicare National Pneumonia Project on 50,858 patients hospitalized from 1998–2001. The final model included 31 variables and had an area under the Receiver Operating Characteristic curve of 0.72. In each administrative claims validation cohort, model fit was similar to the derivation cohort. The distribution of standardized mortality rates among hospitals ranged from 13.0% to 23.7%, with 25th, 50th, and 75th percentiles of 16.5%, 17.4%, and 18.3%, respectively. Comparing model-derived risk-standardized state mortality rates with medical record-derived estimates, the correlation coefficient was 0.86 (Standard Error = 0.032). Conclusions/Significance An administrative claims-based model for profiling hospitals for pneumonia mortality performs consistently over several years and produces hospital estimates close to those using a medical record model. PMID:21532758

  8. Adaptation of hidden Markov models for recognizing speech of reduced frame rate.

    PubMed

    Lee, Lee-Min; Jean, Fu-Rong

    2013-12-01

    The frame rate of the observation sequence in distributed speech recognition applications may be reduced to suit a resource-limited front-end device. In order to use models trained using full-frame-rate data in the recognition of reduced frame-rate (RFR) data, we propose a method for adapting the transition probabilities of hidden Markov models (HMMs) to match the frame rate of the observation. Experiments on the recognition of clean and noisy connected digits are conducted to evaluate the proposed method. Experimental results show that the proposed method can effectively compensate for the frame-rate mismatch between the training and the test data. Using our adapted model to recognize the RFR speech data, one can significantly reduce the computation time and achieve the same level of accuracy as that of a method, which restores the frame rate using data interpolation.
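
    One natural reading of this adaptation (hedged; the paper's exact recipe may differ): with 1-out-of-k frame decimation, the hidden chain is observed only every k steps, so the full-frame-rate transition matrix A is replaced by its k-step power, as sketched below.

```python
# Adapt an HMM transition matrix to reduced-frame-rate observations by taking
# the k-step transition probabilities A^k. The 3-state matrix is illustrative.
import numpy as np

A = np.array([[0.90, 0.10, 0.00],      # full-frame-rate transition matrix
              [0.00, 0.85, 0.15],
              [0.05, 0.00, 0.95]])

k = 3                                   # 1-out-of-3 frame decimation
A_rfr = np.linalg.matrix_power(A, k)    # k-step transition probabilities

print(A_rfr.round(3))
print("rows still sum to 1:", np.allclose(A_rfr.sum(axis=1), 1.0))
```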

  9. Fault-based PSHA of an active tectonic region characterized by low deformation rates: the case of the Lower Rhine Graben

    NASA Astrophysics Data System (ADS)

    Vanneste, Kris; Vleminckx, Bart; Camelbeeck, Thierry

    2016-04-01

    The Lower Rhine Graben (LRG) is one of the few regions in intraplate NW Europe where seismic activity can be linked to active faults, yet probabilistic seismic hazard assessments of this region have hitherto been based on area-source models, in which the LRG is modeled as a single or a small number of seismotectonic zones with uniform seismicity. While fault-based PSHA has become common practice in more active regions of the world (e.g., California, Japan, New Zealand, Italy), knowledge of active faults has been lagging behind in other regions, due to an incomplete tectonic inventory, a low level of seismicity, a lack of systematic fault parameterization, or a combination thereof. In the past few years, efforts have increasingly been directed to the inclusion of fault sources in PSHA in these regions as well, in order to predict hazard on a more physically sound basis. In Europe, the EC project SHARE ("Seismic Hazard Harmonization in Europe", http://www.share-eu.org/) represented an important step forward in this regard. In the frame of this project, we previously compiled the first parameterized fault model for the LRG that can be applied in PSHA. We defined 15 fault sources based on major stepovers, bifurcations, gaps, and important changes in strike, dip direction or slip rate. Based on the available data, we were able to place reasonable bounds on the parameters required for time-independent PSHA: length, width, strike, dip, rake, slip rate, and maximum magnitude. With long-term slip rates remaining below 0.1 mm/yr, the LRG can be classified as a low-deformation-rate structure. Information on recurrence interval and elapsed time since the last major earthquake is lacking for most faults, impeding time-dependent PSHA. We consider different models to construct the magnitude-frequency distribution (MFD) of each fault: a slip-rate constrained form of the classical truncated Gutenberg-Richter MFD (Anderson & Luco, 1983) versus a characteristic MFD following Youngs
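
    A minimal sketch of the slip-rate constraint on a truncated Gutenberg-Richter MFD: scale the annual event rate so that the summed seismic moment release balances the fault's moment accumulation. The fault geometry and b-value below are generic placeholders, not parameters of any of the 15 LRG fault sources, and the Anderson & Luco (1983) formulation itself is more refined than this.

```python
# Balance a truncated Gutenberg-Richter MFD against the moment accumulation
# rate mu * A * slip_rate of a low-deformation-rate fault.
import numpy as np

mu = 3.0e10                    # crustal rigidity, Pa
area = 40e3 * 15e3             # fault area, m^2 (40 km x 15 km, placeholder)
slip_rate = 0.1e-3             # m/yr (0.1 mm/yr)
b, m_min, m_max = 1.0, 4.0, 6.7

moment_rate = mu * area * slip_rate        # N*m of moment to release per year

dm = 0.01
mags = np.arange(m_min, m_max + dm, dm)
beta = b * np.log(10.0)
pdf = beta * np.exp(-beta * (mags - m_min))
pdf /= pdf.sum() * dm                      # truncated exponential magnitude pdf
m0 = 10.0 ** (1.5 * mags + 9.05)           # seismic moment per event, N*m

rate = moment_rate / ((m0 * pdf).sum() * dm)   # events/yr with M >= m_min
tail = pdf[mags >= 6.2].sum() * dm             # fraction of events with M >= 6.2
print(f"rate(M>={m_min}) = {rate:.3f}/yr; "
      f"return period of M>=6.2 events: {1 / (rate * tail):,.0f} yr")
```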

  10. Model-based intensification of a fed-batch microbial process for the maximization of polyhydroxybutyrate (PHB) production rate.

    PubMed

    Penloglou, Giannis; Vasileiadou, Athina; Chatzidoukas, Christos; Kiparissides, Costas

    2017-08-01

    An integrated metabolic-polymerization-macroscopic model, describing the microbial production of polyhydroxybutyrate (PHB) in Azohydromonas lata bacteria, was developed and validated using a comprehensive series of experimental measurements. The model accounted for biomass growth, biopolymer accumulation, carbon and nitrogen sources utilization, oxygen mass transfer and uptake rates and average molecular weights of the accumulated PHB, produced under batch and fed-batch cultivation conditions. Model predictions were in excellent agreement with experimental measurements. The validated model was subsequently utilized to calculate optimal operating conditions and feeding policies for maximizing PHB productivity for desired PHB molecular properties. More specifically, two optimal fed-batch strategies were calculated and experimentally tested: (1) a nitrogen-limited fed-batch policy and (2) a nitrogen sufficient one. The calculated optimal operating policies resulted in a maximum PHB content (94% g/g) in the cultivated bacteria and a biopolymer productivity of 4.2 g/(l h), respectively. Moreover, it was demonstrated that different PHB grades with weight average molecular weights of up to 1513 kg/mol could be produced via the optimal selection of bioprocess operating conditions.

  11. A transverse isotropic constitutive model for the aortic valve tissue incorporating rate-dependency and fibre dispersion: Application to biaxial deformation.

    PubMed

    Anssari-Benam, Afshin; Tseng, Yuan-Tsan; Bucchi, Andrea

    2018-05-26

    This paper presents a continuum-based transverse isotropic model incorporating rate-dependency and fibre dispersion, applied to the planar biaxial deformation of aortic valve (AV) specimens under various stretch rates. The rate dependency of the mechanical behaviour of the AV tissue under biaxial deformation, the (pseudo-) invariants of the right Cauchy-Green deformation-rate tensor Ċ associated with fibre dispersion, and a new fibre orientation density function motivated by fibre kinematics are presented for the first time. It is shown that the model captures the experimentally observed deformation of the specimens, and characterises a shear-thinning behaviour associated with the dissipative (viscous) kinematics of the matrix and the fibres. The application of the model for predicting the deformation behaviour of the AV under physiological rates is illustrated and an example of the predicted σ-λ curves is presented. While the development of the model was principally motivated by the AV biomechanics requisites, the comprehensive theoretical approach employed in the study renders the model suitable for application to other fibrous soft tissues that possess similar rate-dependent and structural attributes. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.

  12. Spatial and directional variation of growth rates in Arabidopsis root apex: a modelling study.

    PubMed

    Nakielski, Jerzy; Lipowczan, Marcin

    2013-01-01

    Growth and cellular organization of the Arabidopsis root apex have been investigated in various respects, but still little is known about the spatial and directional variation of growth rates in the very apical part of the apex, especially in 3D. The present paper aims to fill this gap with the aid of computer modelling based on the growth tensor method. A root apex with a typical shape and cellular pattern is considered. Previously, on the basis of two types of empirical data, the published velocity profile along the root axis and the dimensions of cell packets formed in the lateral part of the root cap, the displacement velocity field for the root apex was determined. Here this field is adopted to calculate the linear growth rate at different points and in different directions. The results are interpreted taking principal growth directions into account. The root apex manifests a significant anisotropy of the linear growth rate. The directional preferences depend on the position within the root apex. In the root proper the rate in the periclinal direction predominates everywhere, while in the root cap the predominating direction varies with distance from the quiescent centre. The rhizodermis is distinguished from the neighbouring tissues (cortex, root cap) by a relatively high contribution of the growth rate in the anticlinal direction. The degree of growth anisotropy calculated for planes defined by principal growth directions and exemplary cell walls may be as high as 25. The changes in the growth rate variation are modelled.

  13. [Comparison of Flu Outbreak Reporting Standards Based on Transmission Dynamics Model].

    PubMed

    Yang, Guo-jing; Yi, Qing-jie; Li, Qin; Zeng, Qing

    2016-05-01

    To compare the two current flu outbreak reporting standards for the purpose of better prevention and control of flu outbreaks, a susceptible-exposed-infectious/asymptomatic-removed (SEIAR) model without interventions was set up first, followed by a model with interventions based on the real situation. Simulated interventions were developed based on the two reporting standards and evaluated by the estimated duration of outbreaks, cumulative new cases, cumulative morbidity rates, the decline in percentage of morbidity rates, and cumulative secondary cases. The basic reproductive number of the outbreak was estimated as 8.2. The simulation produced results similar to the real situation. The effect of interventions based on reporting standard one (10 accumulated new cases in a week) was better than that of interventions based on reporting standard two (30 accumulated new cases in a week). Reporting standard one (10 accumulated new cases in a week) is more effective for the prevention and control of flu outbreaks.
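
    A minimal SEIAR integration consistent with this model class, without interventions; the rate constants below are illustrative rather than the fitted outbreak values (the study estimated the basic reproductive number at 8.2).

```python
# Forward-Euler integration of a Susceptible-Exposed-Infectious/Asymptomatic-
# Removed (SEIAR) model; asymptomatics are assumed half as infectious.
import numpy as np

def seiar(days=60, dt=0.01, N=1000, beta=1.6, kappa=1 / 1.9,
          p_sym=0.7, gamma=1 / 5.0):
    S, E, I, A, R = N - 1.0, 1.0, 0.0, 0.0, 0.0
    history = []
    for step in range(int(days / dt)):
        force = beta * (I + 0.5 * A) / N        # force of infection
        dS = -force * S
        dE = force * S - kappa * E
        dI = p_sym * kappa * E - gamma * I      # symptomatic branch
        dA = (1 - p_sym) * kappa * E - gamma * A
        dR = gamma * (I + A)
        S += dS * dt; E += dE * dt; I += dI * dt; A += dA * dt; R += dR * dt
        history.append((step * dt, S, E, I, A, R))
    return np.array(history)

out = seiar()
peak = out[out[:, 3].argmax()]
print(f"symptomatic peak {peak[3]:.0f} on day {peak[0]:.1f}; "
      f"final size {out[-1, 5]:.0f}")
```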

  14. A high-rate PCI-based telemetry processor system

    NASA Astrophysics Data System (ADS)

    Turri, R.

    2002-07-01

    The high performance reached by satellite on-board telemetry generation and transmission will consequently require the design of ground facilities with higher processing capabilities at low cost, to allow wide adoption of these ground stations. The equipment normally used is based on complex, proprietary bus and computing architectures that prevent the systems from exploiting the continuous and rapid increase in computing power available on the market. PCI bus systems now allow processing of high-rate data streams in a standard PC system. At the same time, the Windows NT operating system supports multitasking and symmetric multiprocessing, giving the capability to process high-data-rate signals. In addition, high-speed networking, 64-bit PCI-bus technologies and the increase in processor power and software allow creating a system based on COTS products (which in future may be easily and inexpensively upgraded). In the frame of the EUCLID RTP 9.8 project, a specific work element was dedicated to developing the architecture of a system able to acquire telemetry data at up to 600 Mbps. Laben S.p.A. - a Finmeccanica Company - entrusted with this work, has designed a PCI-based telemetry system making possible the communication between a satellite down-link and a wide area network at the required rate.

  15. A growth inhibitory model with SOx influenced effective growth rate for estimation of algal biomass concentration under flue gas atmosphere

    USDA-ARS?s Scientific Manuscript database

    A theoretical model for the prediction of biomass concentration under real flue gas emission has been developed. The model considers the CO2 mass transfer rate, the critical SOx concentration and its role on pH based inter-conversion of bicarbonate in model building. The calibration and subsequent v...

  16. Development and validation of a climate-based ensemble prediction model for West Nile Virus infection rates in Culex mosquitoes, Suffolk County, New York.

    PubMed

    Little, Eliza; Campbell, Scott R; Shaman, Jeffrey

    2016-08-09

    West Nile Virus (WNV) is an endemic public health concern in the United States that produces periodic seasonal epidemics. Underlying these outbreaks is the enzootic cycle of WNV between mosquito vectors and bird hosts. Identifying the key environmental conditions that facilitate and accelerate this cycle can be used to inform effective vector control. Here, we model and forecast WNV infection rates among mosquito vectors in Suffolk County, New York using readily available meteorological and hydrological conditions. We first validate a statistical model built with surveillance data between 2001 and 2009 (m09) and specify a set of new statistical models using surveillance data from 2001 to 2012 (m12). This ensemble of new models is then used to make predictions for 2013-2015, and multimodel inference is employed to provide a formal probabilistic interpretation across the disparate individual model predictions. The findings of the m09 and m12 models align; with the ensemble of m12 models indicating an association between warm, dry early spring (April) conditions and increased annual WNV infection rates in Culex mosquitoes. This study shows that real-time climate information can be used to predict WNV infection rates in Culex mosquitoes prior to its seasonal peak and before WNV spillover transmission risk to humans is greatest.

  17. An Agent-Based Model for Studying Child Maltreatment and Child Maltreatment Prevention

    NASA Astrophysics Data System (ADS)

    Hu, Xiaolin; Puddy, Richard W.

    This paper presents an agent-based model that simulates the dynamics of child maltreatment and child maltreatment prevention. The developed model follows the principles of complex systems science and explicitly models a community and its families with multi-level factors and interconnections across the social ecology. This makes it possible to experiment how different factors and prevention strategies can affect the rate of child maltreatment. We present the background of this work and give an overview of the agent-based model and show some simulation results.

  18. SLS Model Based Design: A Navigation Perspective

    NASA Technical Reports Server (NTRS)

    Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin

    2018-01-01

    The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.

  19. Study of high strain rate plastic deformation of low carbon microalloyed steels using experimental observation and computational modeling

    NASA Astrophysics Data System (ADS)

    Majta, J.; Zurek, A. K.; Trujillo, C. P.; Bator, A.

    2003-09-01

    This work presents validation of an integrated computer model to predict the impact of microstructure evolution on the mechanical behavior of niobium-microalloyed steels under dynamic loading conditions. Microstructurally based constitutive equations describing the mechanical behavior of the mixed α and γ phases are proposed. It is shown that, for a given finishing temperature and strain, the Nb steel exhibits a strong influence of strain rate on the flow stress and final structure. This tendency is also observed in the calculated results obtained using the proposed modeling procedures. High strain rates influence the deformation mechanism and reduce the extent of recovery occurring during and after deformation, which in turn increases the driving force for transformation. On the other hand, the ratio of nucleation rate to growth rate increases for lower strain rates (due to the higher number of nuclei that can be produced during an extended loading time), leading to a refined ferrite structure. However, as expected, such behavior produces higher inhomogeneity in the final product. Multistage quasistatic compression tests and tests using the Hopkinson pressure bar, under different temperature, strain, and strain rate conditions, are used for verification of the proposed models.

  20. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    NASA Astrophysics Data System (ADS)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

    A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume-of-fluid approach is employed to simulate the top blowing, while the bottom blowing is simulated with a Lagrange-based discrete phase model that embeds the local volume change of rising bubbles. A filter-based turbulence method based on the local mesh resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the underlying reasons for the mixing results are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and the change of the bath flow structure.

  1. Simulating CRN derived erosion rates in a transient Andean catchment using the TTLEM model

    NASA Astrophysics Data System (ADS)

    Campforts, Benjamin; Vanacker, Veerle; Herman, Frédéric; Schwanghart, Wolfgang; Tenrorio Poma, Gustavo; Govers, Gerard

    2017-04-01

    Assessing the impact of mountain building and erosion on the Earth's surface is key to reconstructing and predicting terrestrial landscape evolution. Landscape evolution models (LEMs) are an essential tool in this research effort, as they allow us to integrate our growing understanding of the physical processes governing erosion and the transport of mass across the surface. The recent development of several LEMs opens up new areas of research in landscape evolution. Here, we seize this opportunity to answer a fundamental research question: can a model designed to simulate landscape evolution over geological timescales also simulate spatially varying erosion rates at a millennial timescale? We selected the highly transient Paute catchment in the southeastern Ecuadorian Andes as a study area. We found that our model (TTLEM) is capable of better explaining the spatial patterns of ca. 30 Cosmogenic Radio Nuclide (CRN) derived catchment-wide erosion rates than a classical, statistical approach. Thus, the use of process-based landscape evolution models may be of great help not only in understanding long-term landscape evolution but also in understanding spatial and temporal variations in sediment fluxes at the millennial time scale.

  2. Moving base simulation evaluation of translational rate command systems for STOVL aircraft in hover

    NASA Technical Reports Server (NTRS)

    Franklin, James A.; Stortz, Michael W.

    1996-01-01

    Using a generalized simulation model, a moving-base simulation of a lift-fan short takeoff/vertical landing fighter aircraft has been conducted on the Vertical Motion Simulator at Ames Research Center. Objectives of the experiment were to determine the influence of system bandwidth and phase delay on flying qualities for translational rate command and vertical velocity command systems. Assessments were made for precision hover control and for landings aboard an LPH type amphibious assault ship in the presence of winds and rough seas. Results obtained define the boundaries between satisfactory and adequate flying qualities for these design features for longitudinal and lateral translational rate command and for vertical velocity command.

  3. [Design of Oxygen Saturation, Heart Rate, Respiration Rate Detection System Based on Smartphone of Android Operating System].

    PubMed

    Zhu, Mingshan; Zeng, Bixin

    2015-03-01

    In this paper, we designed an oxygen saturation, heart rate and respiration rate monitoring system based on a smartphone running the Android operating system, with the physiological signals acquired by an MSP430 microcontroller and transmitted via a Bluetooth module.

  4. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network.

    PubMed

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering the characteristics of wireless sensor networks, this paper combines anomaly and misuse detection and proposes an integrated intrusion detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false-alarm rate. The Adaboost algorithm with hierarchical structures is used for anomaly detection of sensor nodes, cluster-head nodes and sink nodes. A Back Propagation network optimized by the Cultural Algorithm and the Artificial Fish Swarm Algorithm is applied to misuse detection at the sink node. Extensive simulation demonstrates that this integrated model has strong intrusion detection performance.

  5. A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate

    NASA Astrophysics Data System (ADS)

    Lin, Ruixing; Xu, Lin; Zheng, Xian

    2018-03-01

    Harmonic source detection at the point of common coupling is an essential step for harmonic contribution determination and harmonic mitigation. In this paper, the harmonic distortion power rate index, based on IEEE Std 1459-2010, is proposed for harmonic source location. A method based only on the harmonic distortion power is not suitable when the background harmonic level is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power is larger than the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results for typical harmonic loads verified the effectiveness of the proposed method.
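
    The decision rule reduces to a threshold test on the measured harmonic distortion power; a minimal sketch, with a made-up threshold value.

```python
# Attribute the dominant harmonic source at the point of common coupling (PCC)
# by comparing the harmonic distortion power against a prior-derived threshold.
def locate_harmonic_source(distortion_power_w: float, threshold_w: float) -> str:
    if distortion_power_w > threshold_w:
        return "customer side"    # large distortion power -> load is the source
    return "utility side"         # otherwise background (utility) harmonics dominate

# Toy usage with an assumed threshold of 120 W:
for p in (45.0, 310.0):
    print(p, "W ->", locate_harmonic_source(p, threshold_w=120.0))
```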

  6. TOPICAL REVIEW: Predictions for the rates of compact binary coalescences observable by ground-based gravitational-wave detectors

    NASA Astrophysics Data System (ADS)

    Abadie, J.; Abbott, B. P.; Abbott, R.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Ajith, P.; Allen, B.; Allen, G.; Amador Ceron, E.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Antonucci, F.; Aoudia, S.; Arain, M. A.; Araya, M.; Aronsson, M.; Arun, K. G.; Aso, Y.; Aston, S.; Astone, P.; Atkinson, D. E.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barker, D.; Barnum, S.; Barone, F.; Barr, B.; Barriga, P.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Bauchrowitz, J.; Bauer, Th S.; Behnke, B.; Beker, M. G.; Belczynski, K.; Benacquista, M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bigotta, S.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birindelli, S.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Blomberg, A.; Boccara, C.; Bock, O.; Bodiya, T. P.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bork, R.; Born, M.; Bose, S.; Bosi, L.; Boyle, M.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Breyer, J.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Budzyński, R.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet-Castell, J.; Burmeister, O.; Buskulic, D.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campagna, E.; Campsie, P.; Cannizzo, J.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C.; Carbognani, F.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chassande Mottin, E.; Chelkowski, S.; Chen, Y.; Chincarini, A.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Clark, D.; Clark, J.; Clayton, J. H.; Cleva, F.; Coccia, E.; Colacino, C. N.; Colas, J.; Colla, A.; Colombini, M.; Conte, R.; Cook, D.; Corbitt, T. R.; Corda, C.; Cornish, N.; Corsi, A.; Costa, C. A.; Coulon, J. P.; Coward, D.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Culter, R. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Danilishin, S. L.; Dannenberg, R.; D'Antonio, S.; Danzmann, K.; Dari, A.; Das, K.; Dattilo, V.; Daudert, B.; Davier, M.; Davies, G.; Davis, A.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; DeBra, D.; Degallaix, J.; del Prete, M.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Devanka, P.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Emilio, M. Di Paolo; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doomes, E. E.; Dorsher, S.; Douglas, E. S. D.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Dueck, J.; Dumas, J. C.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Engel, R.; Etzel, T.; Evans, M.; Evans, T.; Fafone, V.; Fairhurst, S.; Fan, Y.; Farr, B. F.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Flaminio, R.; Flanigan, M.; Flasch, K.; Foley, S.; Forrest, C.; Forsi, E.; Fotopoulos, N.; Fournier, J. D.; Franc, J.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gammaitoni, L.; Garofoli, J. A.; Garufi, F.; Gemme, G.; Genin, E.; Gennai, A.; Gholami, I.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gill, C.; Goetz, E.; Goggin, L. M.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. 
S.; Gretarsson, A. M.; Greverie, C.; Grosso, R.; Grote, H.; Grunewald, S.; Guidi, G. M.; Gustafson, E. K.; Gustafson, R.; Hage, B.; Hall, P.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Haughian, K.; Hayama, K.; Heefner, J.; Heitmann, H.; Hello, P.; Heng, I. S.; Heptonstall, A.; Hewitson, M.; Hild, S.; Hirose, E.; Hoak, D.; Hodge, K. A.; Holt, K.; Hosken, D. J.; Hough, J.; Howell, E.; Hoyland, D.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Jaranowski, P.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kanner, J.; Katsavounidis, E.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, C.; Kim, H.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Kowalska, I.; Kozak, D.; Krause, T.; Kringel, V.; Krishnamurthy, S.; Krishnan, B.; Królak, A.; Kuehn, G.; Kullman, J.; Kumar, R.; Kwee, P.; Landry, M.; Lang, M.; Lantz, B.; Lastzka, N.; Lazzarini, A.; Leaci, P.; Leong, J.; Leonor, I.; Leroy, N.; Letendre, N.; Li, J.; Li, T. G. F.; Lin, H.; Lindquist, P. E.; Lockerbie, N. A.; Lodhia, D.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lu, P.; Luan, J.; Lubiński, M.; Lucianetti, A.; Lück, H.; Lundgren, A.; Machenschalk, B.; MacInnis, M.; Mackowski, J. M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Mak, C.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIvor, G.; McKechan, D. J. A.; Meadors, G.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Merill, L.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mino, Y.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. P.; Moraru, D.; Moreau, J.; Moreno, G.; Morgado, N.; Morgia, A.; Morioka, T.; Mors, K.; Mosca, S.; Moscatelli, V.; Mossavi, K.; Mours, B.; MowLowry, C.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murray, P. G.; Nash, T.; Nawrodt, R.; Nelson, J.; Neri, I.; Newton, G.; Nishizawa, A.; Nocera, F.; Nolting, D.; Ochsner, E.; O'Dell, J.; Ogin, G. H.; Oldenburg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Pagliaroli, G.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Pardi, S.; Pareja, M.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pedraza, M.; Pekowsky, L.; Penn, S.; Peralta, C.; Perreca, A.; Persichetti, G.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Postiglione, F.; Prato, M.; Predoi, V.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Raab, F. J.; Rabaste, O.; Rabeling, D. S.; Radke, T.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. 
M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, P.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Röver, C.; Rogstad, S.; Rolland, L.; Rollins, J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sakata, S.; Sakosky, M.; Salemi, F.; Sammut, L.; Sancho de la Jordana, L.; Sandberg, V.; Sannibale, V.; Santamaría, L.; Santostasi, G.; Saraf, S.; Sassolas, B.; Sathyaprakash, B. S.; Sato, S.; Satterthwaite, M.; Saulson, P. R.; Savage, R.; Schilling, R.; Schnabel, R.; Schofield, R.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Singer, A.; Sintes, A. M.; Skelton, G.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Stein, A. J.; Stein, L. C.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Toncelli, A.; Tonelli, M.; Torres, C.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Trias, M.; Trummer, J.; Tseng, K.; Ugolini, D.; Urbanek, K.; Vahlbruch, H.; Vaishnav, B.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Veggel, A. A.; Vass, S.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A.; Vinet, J.-Y.; Vocca, H.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Wanner, A.; Ward, R. L.; Was, M.; Wei, P.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, L.; Willke, B.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Woan, G.; Wooley, R.; Worden, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yeaton-Massey, D.; Yoshida, S.; Yu, P. P.; Yvert, M.; Zanolin, M.; Zhang, L.; Zhang, Z.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2010-09-01

    We present an up-to-date, comprehensive summary of the rates for all types of compact binary coalescence sources detectable by the initial and advanced versions of the ground-based gravitational-wave detectors LIGO and Virgo. Astrophysical estimates for compact-binary coalescence rates depend on a number of assumptions and unknown model parameters and are still uncertain. The most confident among these estimates are the rate predictions for coalescing binary neutron stars which are based on extrapolations from observed binary pulsars in our galaxy. These yield a likely coalescence rate of 100 Myr⁻¹ per Milky Way Equivalent Galaxy (MWEG), although the rate could plausibly range from 1 Myr⁻¹ MWEG⁻¹ to 1000 Myr⁻¹ MWEG⁻¹ (Kalogera et al 2004 Astrophys. J. 601 L179; Kalogera et al 2004 Astrophys. J. 614 L137 (erratum)). We convert coalescence rates into detection rates based on data from the LIGO S5 and Virgo VSR2 science runs and projected sensitivities for our advanced detectors. Using the detector sensitivities derived from these data, we find a likely detection rate of 0.02 per year for Initial LIGO-Virgo interferometers, with a plausible range between 2 × 10⁻⁴ and 0.2 per year. The likely binary neutron-star detection rate for the Advanced LIGO-Virgo network increases to 40 events per year, with a range between 0.4 and 400 per year.
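
    The conversion from a per-galaxy rate to a detection rate is essentially volume scaling, and a back-of-envelope sketch makes the abstract's numbers easy to reproduce. The sketch below is a hypothetical Python illustration: the MWEG number density and the ~2.26 angle-averaging factor are commonly quoted values in this literature, while the horizon distances are assumed round numbers, not the paper's calibrated inputs.

```python
# Back-of-envelope conversion from a coalescence rate per MWEG to a
# detection rate. The MWEG number density (0.0116 Mpc^-3) and the ~2.26
# sky/orientation averaging factor are commonly quoted values; the horizon
# distances below are illustrative assumptions, not the paper's inputs.
import math

MWEG_DENSITY = 0.0116            # MWEG per Mpc^3
RATE_PER_MWEG = 100e-6           # likely BNS rate: 100 / Myr / MWEG -> per yr

def detections_per_year(horizon_mpc, rate_per_mweg=RATE_PER_MWEG):
    """Expected detections/yr assuming sources uniform in volume."""
    reach = horizon_mpc / 2.26                   # angle-averaged range
    volume = 4.0 / 3.0 * math.pi * reach ** 3    # Mpc^3
    return rate_per_mweg * volume * MWEG_DENSITY

# Assumed horizon distances, roughly initial-era vs advanced-era scales:
for label, horizon in [("initial-era", 33.0), ("advanced-era", 445.0)]:
    print(f"{label}: ~{detections_per_year(horizon):.2g} detections/yr")
```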

  7. Gas ultrasonic flow rate measurement through genetic-ant colony optimization based on the ultrasonic pulse received signal model

    NASA Astrophysics Data System (ADS)

    Hou, Huirang; Zheng, Dandan; Nie, Laixiao

    2015-04-01

    For gas ultrasonic flowmeters, the signals received by ultrasonic sensors are susceptible to noise interference. If signals are mingled with noise, triggering mistakes in the traditional double-threshold method can cause large errors in flow measurement. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, processing only the first three cycles of the received signal rather than the whole signal is proposed. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, the double-threshold method and enveloped zero-crossing. Local convergence does not appear with the GACO algorithm until -10 dB. For the GACO algorithm, the convergence accuracy and speed as well as the amount of computation are further improved when using only the first three cycles (called GACO-3cycles). Experimental results involving actual received signals show that the accuracy of gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.
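
    A minimal sketch of the fitting idea follows: a parametric received-pulse model is fitted to only the first few carrier cycles of a noisy signal by a global optimizer. SciPy's differential evolution stands in for the paper's genetic-ant colony optimizer, and the envelope-times-carrier model, carrier frequency, sampling rate and parameter values are all assumptions for illustration, not the paper's exact model.

```python
# Sketch: estimate onset parameters by fitting a received-signal model to
# the first three cycles of a noisy ultrasonic pulse. differential_evolution
# stands in for the paper's genetic-ant colony optimizer (GACO); the
# envelope model is a common textbook choice, not the paper's exact one.
import numpy as np
from scipy.optimize import differential_evolution

FC = 200e3                       # assumed carrier frequency (Hz)
FS = 10e6                        # assumed sampling rate (Hz)

def pulse(t, amp, t0, tau, phase):
    """Rising-exponential envelope times a sinusoidal carrier."""
    u = np.clip(t - t0, 0.0, None)
    return amp * (u / tau) ** 2 * np.exp(-u / tau) * np.sin(2 * np.pi * FC * u + phase)

# Synthetic "received" signal with additive noise
t = np.arange(0, 60e-6, 1 / FS)
true = dict(amp=1.0, t0=10e-6, tau=8e-6, phase=0.3)
rng = np.random.default_rng(0)
y = pulse(t, **true) + 0.05 * rng.standard_normal(t.size)

# Restrict the fit to roughly the first three carrier cycles after onset
window = (t >= 9e-6) & (t <= 9e-6 + 3 / FC)

def cost(p):
    amp, t0, tau, phase = p
    return np.sum((y[window] - pulse(t[window], amp, t0, tau, phase)) ** 2)

bounds = [(0.1, 2.0), (5e-6, 15e-6), (1e-6, 20e-6), (-np.pi, np.pi)]
res = differential_evolution(cost, bounds, seed=1, tol=1e-8)
print("estimated onset t0 = %.3f us (true %.3f us)" % (res.x[1] * 1e6, true["t0"] * 1e6))
```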

  8. Agent-Based vs. Equation-based Epidemiological Models: A Model Selection Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Nutaro, James J

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm to another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
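
    A sketch of the model-selection bookkeeping such a comparison ends with: given each paradigm's fit to the same outbreak curve, an information criterion like AIC trades goodness of fit against parameter count. The residual sums of squares and parameter counts below are placeholders, not the paper's results.

```python
# Model-selection bookkeeping: compare two fits to the same epidemic curve
# with AIC, assuming i.i.d. Gaussian residuals so AIC can be written in
# terms of the residual sum of squares (RSS). Numbers are placeholders.
import math

def aic_from_rss(rss, n_obs, n_params):
    """AIC for a least-squares fit with unknown noise variance."""
    return n_obs * math.log(rss / n_obs) + 2 * (n_params + 1)

n_obs = 120                       # e.g. daily incidence points
candidates = {
    "equation-based (SEIR)": dict(rss=2.4e4, k=4),   # placeholder RSS, params
    "agent-based":           dict(rss=1.9e4, k=9),
}
for name, m in candidates.items():
    print(name, "AIC =", round(aic_from_rss(m["rss"], n_obs, m["k"]), 1))
```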

  9. INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...

    EPA Pesticide Factsheets

    Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optimums. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec

  10. Magma ascent and lava flow emplacement rates during the 2011 Axial Seamount eruption based on CO2 degassing

    NASA Astrophysics Data System (ADS)

    Jones, M. R.; Soule, S. A.; Gonnermann, H. M.; Le Roux, V.; Clague, D. A.

    2018-07-01

    Quantitative metrics for eruption rates at mid-ocean ridges (MORs) would improve our understanding of the structure and formation of the uppermost oceanic crust and would provide a means to link volcanic processes with the conditions of the underlying magmatic system. However, these metrics remain elusive because no MOR eruptions have been directly observed. The possibility of disequilibrium degassing in mid-ocean ridge basalts (MORB), due to high eruptive depressurization rates, makes the analysis of volatile concentrations in MORB glass a promising method for evaluating eruption rates. In this study, we estimate magma ascent and lava flow emplacement rates during the 2011 eruption of Axial Seamount based on numerical modeling of diffusion-controlled bubble growth and new measurements of dissolved volatiles, vesicularity, and vesicle size distributions in erupted basalts. This dataset provides a unique view of the variability in magma ascent (∼0.02-1.2 m/s) and lava flow rates (∼0.1-0.7 m/s) during a submarine MOR eruption based on 50 samples collected from a >10 km long fissure system and three individual lava flow lobes. Samples from the 2011 eruption display an unprecedented range in dissolved CO2 concentrations, nearly spanning the full range observed on the global MOR system. The variable vesicularity and dissolved CO2 concentrations in these samples can be explained by differences in the extent of degassing, dictated by flow lengths and velocities during both vertical ascent and horizontal flow along the seafloor. Our results document, for the first time, the variability in magma ascent rates during a submarine eruption (∼0.02-1.2 m/s), which spans the global range previously proposed based on CO2 degassing. The slowest ascent rates are associated with hummocky flows while faster ascent rates produce channelized sheet flows. This study corroborates degassing-based models for eruption rates using comparisons with independent methods and documents the

  11. SLS Navigation Model-Based Design Approach

    NASA Technical Reports Server (NTRS)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  12. Predicting changes in volcanic activity through modelling magma ascent rate.

    NASA Astrophysics Data System (ADS)

    Thomas, Mark; Neuberg, Jurgen

    2013-04-01

    It is a simple fact that changes in volcanic activity happen, and in retrospect they are easy to spot; the dissimilar eruption dynamics of an effusive and an explosive event are hard to miss. However, predicting such changes is a much more complicated process. For the style of activity to alter, we know that some part, or combination of parts, within the system must vary with time: if there is no physical change within the system, why would the change in eruptive activity occur? What is unknown is which parts, or how big a change, is needed. We present the results of a suite of conduit flow models that aim to answer these questions by assessing the influence of individual model parameters such as the dissolved water content or magma temperature. By altering these variables in a systematic manner we measure the effect of the changes by observing the modelled ascent rate. We use the ascent rate because we believe it is a very important indicator that can control the style of eruptive activity. In particular, we found the sensitivity of the ascent rate to small changes in model parameters surprising. Linking these changes to observable monitoring data in a way that these data could be used as a predictive tool is the ultimate goal of this work. We will show that changes in ascent rate can be estimated from a particular type of seismicity. Low-frequency seismicity, thought to be caused by the brittle failure of melt, is often linked with the movement of magma within a conduit. We show that an acceleration in the rate of low-frequency seismicity can correspond to an increase in the rate of magma movement and be used as an indicator of potential changes in eruptive activity.

  13. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  14. COMBINING RATE-BASED AND CAP-AND-TRADE EMISSIONS POLICIES. (R828628)

    EPA Science Inventory

    Rate-based emissions policies (like tradable performance standards, TPS) fix average emissions intensity, while cap-and-trade (CAT) policies fix total emissions. This paper shows that unfettered trade between rate-based and cap-and-trade programs always raises combined emissio...

  15. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates

    PubMed Central

    Wagner, Peter J.

    2012-01-01

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution. PMID:21795266
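
    The comparison can be sketched as fitting candidate rate distributions by maximum likelihood and ranking them with an information criterion. The snippet below uses synthetic lognormal draws in place of compatibility-derived rates, so the lognormal winning is by construction; only the mechanics of the comparison are the point.

```python
# Sketch of the distribution comparison: fit single-rate (exponential),
# gamma, and lognormal models to a sample of per-character rates by
# maximum likelihood and compare AIC. The data here are synthetic,
# standing in for rates inferred from character compatibility.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
rates = rng.lognormal(mean=-1.0, sigma=0.8, size=200)

models = {
    "single-rate (exponential)": (stats.expon, {}),
    "gamma": (stats.gamma, {"floc": 0}),
    "lognormal": (stats.lognorm, {"floc": 0}),
}
for name, (dist, fitkw) in models.items():
    params = dist.fit(rates, **fitkw)
    loglik = np.sum(dist.logpdf(rates, *params))
    k = len(params) - len(fitkw)      # parameters actually estimated
    print(f"{name}: AIC = {2 * k - 2 * loglik:.1f}")
```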

  16. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates.

    PubMed

    Wagner, Peter J

    2012-02-23

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.

  17. A Transfer Hamiltonian Model for Devices Based on Quantum Dot Arrays

    PubMed Central

    Illera, S.; Prades, J. D.; Cirera, A.; Cornet, A.

    2015-01-01

    We present a model of electron transport through a random distribution of interacting quantum dots embedded in a dielectric matrix to simulate realistic devices. The method underlying the model depends only on fundamental parameters of the system and it is based on the Transfer Hamiltonian approach. A set of noncoherent rate equations can be written and the interaction between the quantum dots and between the quantum dots and the electrodes is introduced by transition rates and capacitive couplings. A realistic modelization of the capacitive couplings, the transmission coefficients, the electron/hole tunneling currents, and the density of states of each quantum dot have been taken into account. The effects of the local potential are computed within the self-consistent field regime. While the description of the theoretical framework is kept as general as possible, two specific prototypical devices, an arbitrary array of quantum dots embedded in a matrix insulator and a transistor device based on quantum dots, are used to illustrate the kind of unique insight that numerical simulations based on the theory are able to provide. PMID:25879055

  18. A transfer hamiltonian model for devices based on quantum dot arrays.

    PubMed

    Illera, S; Prades, J D; Cirera, A; Cornet, A

    2015-01-01

    We present a model of electron transport through a random distribution of interacting quantum dots embedded in a dielectric matrix to simulate realistic devices. The method underlying the model depends only on fundamental parameters of the system and it is based on the Transfer Hamiltonian approach. A set of noncoherent rate equations can be written and the interaction between the quantum dots and between the quantum dots and the electrodes is introduced by transition rates and capacitive couplings. A realistic modelization of the capacitive couplings, the transmission coefficients, the electron/hole tunneling currents, and the density of states of each quantum dot have been taken into account. The effects of the local potential are computed within the self-consistent field regime. While the description of the theoretical framework is kept as general as possible, two specific prototypical devices, an arbitrary array of quantum dots embedded in a matrix insulator and a transistor device based on quantum dots, are used to illustrate the kind of unique insight that numerical simulations based on the theory are able to provide.

  19. Finite driving rate and anisotropy effects in landslide modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piegari, E.; Cataudella, V.; Di Maio, R.

    2006-02-15

    In order to characterize landslide frequency-size distributions and individuate hazard scenarios and their possible precursors, we investigate a cellular automaton where the effects of a finite driving rate and the anisotropy are taken into account. The model is able to reproduce observed features of landslide events, such as power-law distributions, as experimentally reported. We analyze the key role of the driving rate and show that, as it is increased, a crossover from power-law to non-power-law behaviors occurs. Finally, a systematic investigation of the model on varying its anisotropy factors is performed and the full diagram of its dynamical behaviors is presented.

  20. Particle emission rates during electrostatic spray deposition of TiO2 nanoparticle-based photoactive coating.

    PubMed

    Koivisto, Antti J; Jensen, Alexander C Ø; Kling, Kirsten I; Kling, Jens; Budtz, Hans Christian; Koponen, Ismo K; Tuinman, Ilse; Hussein, Tareq; Jensen, Keld A; Nørgaard, Asger; Levin, Marcus

    2018-01-05

    Here, we studied the particle release rate during electrostatic spray deposition of anatase (TiO₂)-based photoactive coating onto tiles and wallpaper using a commercially available electrostatic spray device. Spraying was performed in a 20.3 m³ test chamber while measuring concentrations of 5.6 nm to 31 μm size particles and volatile organic compounds (VOC), as well as particle deposition onto room surfaces and on the spray gun user's hand. The particle emission and deposition rates were quantified using aerosol mass balance modelling. The geometric mean particle number emission rate was 1.9×10¹⁰ s⁻¹ and the mean mass emission rate was 381 μg s⁻¹. The respirable mass emission rate was 65% lower than observed for the entire measured size range. The mass emission rates were linearly scalable (± ca. 20%) to the process duration. The particle deposition rates were up to 15 h⁻¹ for <1 μm size particles, and the deposited particles consisted mainly of TiO₂, TiO₂ mixed with Cl and/or Ag, TiO₂ particles coated with carbon, and Ag particles with sizes ranging from 60 nm to ca. 5 μm. As expected, no significant VOC emissions were observed as a result of spraying. Finally, we provide recommendations for exposure model parameterization. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
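
    The mass-balance inversion can be sketched in a few lines: for a well-mixed chamber the concentration obeys dC/dt = S/V − L·C, which is linear in the unknown emission rate S. In the sketch below the loss rate, observation times and noise level are assumptions; only the chamber volume and the rough emission-rate scale come from the abstract.

```python
# Sketch of the aerosol mass-balance idea used to back out emission rates:
# in a well-mixed chamber, dC/dt = S/V - L*C, where S is the emission rate,
# V the chamber volume, and L the total loss rate (ventilation + deposition).
import numpy as np

V = 20.3            # chamber volume, m^3 (from the abstract)
L = 1.5 / 3600.0    # assumed total loss rate: 1.5 per hour -> per second

def concentration(t, S):
    """Analytic solution of the mass balance with C(0) = 0."""
    return S / (V * L) * (1.0 - np.exp(-L * t))

# Pretend we measured C at several times during spraying, then recover S
t_obs = np.array([60.0, 120.0, 300.0, 600.0])          # s
S_true = 380e-6                                        # g/s (~381 ug/s scale)
rng = np.random.default_rng(1)
C_obs = concentration(t_obs, S_true) * (1 + 0.03 * rng.standard_normal(4))

# Least-squares estimate of S (the model is linear in S)
basis = concentration(t_obs, 1.0)
S_hat = float(np.dot(basis, C_obs) / np.dot(basis, basis))
print(f"estimated emission rate ~ {S_hat * 1e6:.0f} ug/s")
```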

  1. A cure-rate model for the Shuttle filament-wound case

    NASA Technical Reports Server (NTRS)

    Cagliostro, D. E.; Islas, A.; Hsu, Ming-Ta

    1987-01-01

    An epoxy and carbon fiber composite has been used to produce a light-weight rocket case for the Space Shuttle. A kinetic model is developed which can predict the extent of epoxy conversion during the winding and curing of the case. The model accounts for both chemical and physical kinetics. In the model, chemical kinetics occur exclusively up to the time the transition temperature equals the reaction temperature. At this point the resin begins to solidify and the rate of this process limits the rate of epoxy conversion. A comparison of predicted and actual epoxy conversion is presented for isothermal and temperature programmed cure schedules.
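
    A toy version of the two-regime kinetics is sketched below: conversion follows Arrhenius chemical kinetics until the glass-transition temperature, which rises with conversion, reaches the cure temperature, after which the rate is throttled to mimic diffusion control. The rate constants, the Tg(α) relation and the slowdown factor are all invented for illustration, not the paper's calibrated values.

```python
# Two-regime cure sketch: chemical kinetics until Tg(alpha) catches up
# with the cure temperature, then a diffusion-limited slowdown. All
# constants below are assumed, for illustration only.
import numpy as np

def cure(T_kelvin, dt=1.0, t_end=8 * 3600):
    A, Ea, R, n = 2.0e5, 6.0e4, 8.314, 1.5      # assumed Arrhenius kinetics
    alpha, out = 0.0, []
    for _ in range(int(t_end / dt)):
        Tg = 250.0 + 150.0 * alpha              # assumed linear Tg(alpha), K
        k = A * np.exp(-Ea / (R * T_kelvin))
        if Tg >= T_kelvin:
            k *= 0.01                           # diffusion-limited slowdown
        alpha = min(1.0, alpha + k * (1 - alpha) ** n * dt)
        out.append(alpha)
    return np.array(out)

# Isothermal cures: the hotter cure vitrifies later, reaching higher conversion
for T in (350.0, 390.0):
    a = cure(T)
    print(f"T = {T:.0f} K: conversion after 8 h = {a[-1]:.2f}")
```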

  2. Polar Ozone Loss Rates: Comparison Of Match Observations With Simulations Of 3-D Chemical Transport Model And Box Model

    NASA Astrophysics Data System (ADS)

    Tripathi, O. P.; Godin-Beekmann, S.; Lefevre, F.; Marchand, M.; Pazmino, A.; Hauchecorne, A.

    2005-12-01

    Model simulations of ozone loss rates during recent Arctic and Antarctic winters are compared with the observed ozone loss rates from the match technique. The Arctic winters 1994/1995, 1999/2000 and 2002/2003 and the Antarctic winter 2003 were considered for the analysis. We use the high-resolution chemical transport model MIMOSA-CHIM and the REPROBUS box model for the calculation of ozone loss rates. Trajectory model calculations show that the ozone loss rates depend on the initialization fields: when chemical fields are initialized with UCAM fields (University of Cambridge SLIMCAT model simulated fields) the loss rates are underestimated by a factor of two, whereas when they are initialized with UL (University of Leeds) fields the modelled loss rates are in very good agreement with match loss rates at lower levels. The study shows very good agreement between the MIMOSA-CHIM simulation and match observations in the 1999/2000 winter at both the 450 and 500 K levels, except for a slight underestimation in March at 500 K; in January the agreement is very good. This is also true for 1994/1995 when the simulated ozone loss rate is considered in view of the ECMWF wind deficiency, assuming that match observations were not made on isolated trajectories. Sensitivity tests performed for the Arctic winter 1999/2000, changing the JCl2O2 value, particle number density and heating rates, show that we need to improve our understanding of particle number density and of the heating rate calculation mechanism. The Burkholder JCl2O2 value improved the comparison of MIMOSA-CHIM model results with observations (Tripathi et al., 2005). In the same study, the comparison results were shown to improve when changing heating rates and number density through NAT particle sedimentation.

  3. Very narrow band model calculations of atmospheric fluxes and cooling rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, L.S.; Berk, A.; Acharya, P.K.

    1996-10-15

    A new very narrow band model (VNBM) approach has been developed and incorporated into the MODTRAN atmospheric transmittance-radiance code. The VNBM includes a computational spectral resolution of 1 cm⁻¹, a single-line Voigt equivalent width formalism that is based on the Rodgers-Williams approximation and accounts for the finite spectral width of the interval, explicit consideration of line tails, a statistical line overlap correction, a new sublayer integration approach that treats the effect of the sublayer temperature gradient on the path radiance, and the Curtis-Godson (CG) approximation for inhomogeneous paths. A modified procedure for determining the line density parameter 1/d is introduced, which reduces its magnitude. This results in a partial correction of the VNBM tendency to overestimate the interval equivalent widths. The standard two-parameter CG approximation is used for H₂O and CO₂, while the Goody three-parameter CG approximation is used for O₃. Atmospheric flux and cooling rate predictions using a research version of MODTRAN, MODR, are presented for H₂O (with and without the continuum), CO₂, and O₃ for several model atmospheres. The effect of doubling the CO₂ concentration is also considered. These calculations are compared to line-by-line (LBL) model calculations using the AER, GLA, GFDL, and GISS codes. The MODR predictions fall within the spread of the LBL results. The effects of decreasing the band model spectral resolution are illustrated using CO₂ cooling rate and flux calculations. 36 refs., 18 figs., 1 tab.

  4. Solar Energy Deposition Rates in the Mesosphere Derived from Airglow Measurements: Implications for the Ozone Model Deficit Problem

    NASA Technical Reports Server (NTRS)

    Mlynczak, Martin G.; Garcia, Rolando R.; Roble, Raymond G.; Hagan, Maura

    2000-01-01

    We derive rates of energy deposition in the mesosphere due to the absorption of solar ultraviolet radiation by ozone. The rates are derived directly from measurements of the 1.27-micron oxygen dayglow emission, independent of knowledge of the ozone abundance, the ozone absorption cross sections, and the ultraviolet solar irradiance in the ozone Hartley band. Fifty-six months of airglow data taken between 1982 and 1986 by the near-infrared spectrometer on the Solar-Mesosphere Explorer satellite are analyzed. The energy deposition rates exhibit altitude-dependent annual and semiannual variations. We also find a positive correlation between temperatures and energy deposition rates near 90 km at low latitudes. This correlation is largely due to the semiannual oscillation in temperature and ozone and is consistent with model calculations. There is also a suggestion of possible tidal enhancement of this correlation based on recent theoretical and observational analyses. The airglow-derived rates of energy deposition are then compared with those computed by multidimensional numerical models. The observed and modeled deposition rates typically agree to within 20%. This agreement in energy deposition rates implies the same agreement exists between measured and modeled ozone volume mixing ratios in the mesosphere. Only in the upper mesosphere at midlatitudes during winter do we derive energy deposition rates (and hence ozone mixing ratios) consistently and significantly larger than the model calculations. This result is contrary to previous studies that have shown a large model deficit in the ozone abundance throughout the mesosphere. The climatology of solar energy deposition and heating presented in this paper is available to the community at the Middle Atmosphere Energy Budget Project web site at http://heat-budget.gats-inc.com.

  5. Predicting key malaria transmission factors, biting and entomological inoculation rates, using modelled soil moisture in Kenya.

    PubMed

    Patz, J A; Strzepek, K; Lele, S; Hedden, M; Greene, S; Noden, B; Hay, S I; Kalkstein, L; Beier, J C

    1998-10-01

    While malaria transmission varies seasonally, large inter-annual heterogeneity of malaria incidence occurs. Variability in entomological parameters, biting rates and entomological inoculation rates (EIR) have been strongly associated with attack rates in children. The goal of this study was to assess the weather's impact on weekly biting and EIR in the endemic area of Kisian, Kenya. Entomological data collected by the U.S. Army from March 1986 through June 1988 at Kisian, Kenya were analysed with concurrent weather data from nearby Kisumu airport. A soil moisture model of surface-water availability was used to combine multiple weather parameters with landcover and soil features to improve disease prediction. Modelling soil moisture substantially improved prediction of biting rates compared to rainfall; soil moisture lagged two weeks explained up to 45% of An. gambiae biting variability, compared to 8% for raw precipitation. For An. funestus, soil moisture explained 32% of the variability, peaking after a 4-week lag. The interspecies difference in response to soil moisture was significant (P < 0.00001). A satellite normalized differential vegetation index (NDVI) of the study site yielded a similar correlation (r = 0.42 for An. gambiae). Modelled soil moisture accounted for up to 56% of the variability of An. gambiae EIR, peaking at a lag of six weeks. The relationship between temperature and An. gambiae biting rates was less robust; maximum temperature r2 = -0.20, and minimum temperature r2 = 0.12 after lagging one week. Benefits of hydrological modelling are compared to raw weather parameters and to satellite NDVI. These findings can improve both current malaria risk assessments and those based on El Niño forecasts or global climate change model projections.
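
    The lag analysis reduces to correlating the biting series against a shifted soil-moisture series and reporting r² at each lag, as in the sketch below. The series are synthetic, with the response built in at a two-week lag, so the code only illustrates the mechanics of the analysis.

```python
# Sketch of the lag analysis: regress weekly biting rates on soil moisture
# shifted by 0..6 weeks and report the variance explained (r^2) at each lag.
# Synthetic series stand in for the Kisian entomological and weather data.
import numpy as np

rng = np.random.default_rng(7)
weeks = 120
soil = np.convolve(rng.random(weeks + 10), np.ones(5) / 5, mode="valid")[:weeks]
# Biting responds to soil moisture two weeks earlier, plus noise
biting = 3.0 * np.roll(soil, 2) + 0.3 * rng.standard_normal(weeks)

def r2_at_lag(x, y, lag):
    """Squared Pearson correlation between y[t] and x[t - lag]."""
    xl, yl = x[: weeks - lag], y[lag:]
    r = np.corrcoef(xl, yl)[0, 1]
    return r * r

for lag in range(0, 7):
    print(f"lag {lag} wk: r^2 = {r2_at_lag(soil, biting, lag):.2f}")
```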

  6. Model analysis of the link between interest rates and crashes

    NASA Astrophysics Data System (ADS)

    Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft

    2016-09-01

    We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that those bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas lack of liquidity tends to be the primary cause of failures under lower rates.

  7. Comparing large-scale computational approaches to epidemic modeling: agent-based versus structured metapopulation models.

    PubMed

    Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro

    2010-06-29

    In recent years, large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact patterns of the approaches. The age breakdown analysis shows that similar attack rates are

  8. Probabilistic seismic hazard study based on active fault and finite element geodynamic models

    NASA Astrophysics Data System (ADS)

    Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco

    2016-04-01

    We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active faults and geodynamic finite element input models, with seismic catalogues used only in a posterior comparison. We applied the developed model in the External Dinarides, a slow-deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters together with estimates of its slip rate. By default in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters through constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results and to what extent. The results of this comparison show that the deformation model and

  9. Estimating reaction rate coefficients within a travel-time modeling framework.

    PubMed

    Gong, R; Lu, C; Wu, W-M; Cheng, H; Gu, B; Watson, D; Jardine, P M; Brooks, S C; Criddle, C S; Kitanidis, P K; Luo, J

    2011-01-01

    A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
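
    For the first-order case the approach has a particularly transparent sketch: if the conservative BTC samples the travel-time density f(t), the reactive BTC is f(t)·exp(−kt), so the log-ratio of the two BTCs is linear in time with slope −k. The travel-time density, noise level and rate constant below are assumed values, not the field data.

```python
# Travel-time sketch for a first-order reaction: the reactive BTC equals
# the conservative BTC times exp(-k*t), so k falls out of a linear
# regression on the log-ratio. Synthetic BTCs use an assumed lognormal
# travel-time density; k_true is the value we try to recover.
import numpy as np

t = np.linspace(1.0, 200.0, 400)                     # days
mu, sigma = np.log(40.0), 0.5                        # assumed travel-time density
f = np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma**2)) / (t * sigma * np.sqrt(2 * np.pi))

k_true = 0.02                                        # 1/day
btc_cons = f
btc_reac = f * np.exp(-k_true * t)

# Add a little multiplicative noise, then recover k by linear regression
rng = np.random.default_rng(3)
noisy = btc_reac * np.exp(0.02 * rng.standard_normal(t.size))
mask = btc_cons > 1e-4 * btc_cons.max()              # avoid log of tiny tails
slope = np.polyfit(t[mask], np.log(noisy[mask] / btc_cons[mask]), 1)[0]
print(f"estimated k = {-slope:.4f} per day (true {k_true})")
```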

  10. Estimating Reaction Rate Coefficients Within a Travel-Time Modeling Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, R; Lu, C; Luo, Jian

    A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics.

  11. A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets.

    PubMed

    Chen, Jie-Hao; Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong

    2017-01-01

    In recent years, with the rapid development of the mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, used to achieve accurate advertisement delivery for the best benefits in the three-sided game between media, advertisers, and audiences. Current research on CTR estimation mainly uses machine learning methods and models, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationships between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the simplicity of traditional Logistic Regression models. Based on a training dataset with the information of over 40 million mobile advertisements over a period of 10 days, our experiments show that our new model has better estimation accuracy than the classic Logistic Regression (LR) model by 5.57% and the Support Vector Regression (SVR) model by 5.80%.
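
    The flavor of the architecture can be sketched with scikit-learn: an unsupervised RBM feature layer feeding logistic regression, a one-layer stand-in for the paper's Deep Belief Net (which stacks several RBMs and trains on real ad logs). The features, labels and hyperparameters below are placeholders.

```python
# One-layer stand-in for a DBN: a BernoulliRBM feature extractor feeding
# a logistic-regression output head. Data are random placeholders shaped
# like one-hot ad/context features with a low click rate.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = (rng.random((5000, 64)) < 0.1).astype(float)    # sparse binary features
w = rng.standard_normal(64)
p = 1 / (1 + np.exp(-(X @ w - 3.0)))                # synthetic click probability
y = (rng.random(5000) < p).astype(int)

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05,
                         n_iter=15, random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
ctr_hat = model.predict_proba(X[:5])[:, 1]          # predicted click-through rates
print(np.round(ctr_hat, 3))
```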

  12. A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets

    PubMed Central

    Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong

    2017-01-01

    In recent years, with the rapid development of the mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, used to achieve accurate advertisement delivery for the best benefits in the three-sided game between media, advertisers, and audiences. Current research on CTR estimation mainly uses machine learning methods and models, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationships between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the simplicity of traditional Logistic Regression models. Based on a training dataset with the information of over 40 million mobile advertisements over a period of 10 days, our experiments show that our new model has better estimation accuracy than the classic Logistic Regression (LR) model by 5.57% and the Support Vector Regression (SVR) model by 5.80%. PMID:29209363

  13. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network

    PubMed Central

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false-alarm rate. An Adaboost algorithm with hierarchical structures is used for anomaly detection at sensor nodes, cluster-head nodes and Sink nodes. A Cultural-Algorithm and Artificial-Fish-Swarm-Algorithm optimized Back Propagation network is applied to misuse detection at the Sink node. Extensive simulations demonstrate that this integrated model has strong intrusion detection performance. PMID:26447696
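
    The anomaly-detection layer can be sketched with an off-the-shelf AdaBoost classifier on per-node traffic statistics; the hierarchical structure and the CA/AFSA-optimized back-propagation misuse detector at the Sink node are not reproduced here. Features and labels below are synthetic placeholders.

```python
# Sketch of the anomaly-detection stage: AdaBoost trained on per-node
# traffic features. Normal and attack feature vectors are synthetic.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(4)
n = 1000
normal = rng.normal(0.0, 1.0, (n, 4))
attack = rng.normal(2.0, 1.5, (n // 10, 4))          # shifted traffic statistics
X = np.vstack([normal, attack])
y = np.array([0] * n + [1] * (n // 10))

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
detect = clf.predict(attack).mean()                  # true-positive fraction
false = clf.predict(normal).mean()                   # false-alarm fraction
print(f"detection rate {detect:.2%}, false-alarm rate {false:.2%}")
```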

  14. A fault‐based model for crustal deformation in the western United States based on a combined inversion of GPS and geologic inputs

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2017-01-01

    We develop a crustal deformation model to determine fault-slip rates for the western United States (WUS) using the Zeng and Shen (2014) method that is based on a combined inversion of Global Positioning System (GPS) velocities and geological slip-rate constraints. The model consists of six blocks with boundaries aligned along major faults in California and the Cascadia subduction zone, which are represented as buried dislocations in the Earth. Faults distributed within blocks have their geometrical structure and locking depths specified by the Uniform California Earthquake Rupture Forecast, version 3 (UCERF3) and the 2008 U.S. Geological Survey National Seismic Hazard Map Project model. Faults slip beneath a predefined locking depth, except for a few segments where shallow creep is allowed. The slip rates are estimated using a least-squares inversion. The model resolution analysis shows that the resulting model is influenced heavily by geologic input, which fits the UCERF3 geologic bounds on California B faults and ±one-half of the geologic slip rates for most other WUS faults. The modeled slip rates for the WUS faults are consistent with the observed GPS velocity field. Our fit to these velocities is measured in terms of a normalized chi-square, which is 6.5. This updated model fits the data better than most other geodetic-based inversion models. Major discrepancies between well-resolved GPS inversion rates and geologic-consensus rates occur along some of the northern California A faults, the Mojave to San Bernardino segments of the San Andreas fault, the western Garlock fault, the southern segment of the Wasatch fault, and other faults. Off-fault strain-rate distributions are consistent with regional tectonics, with total off-fault moment rates of 7.2×10¹⁸ N·m/year for California and 8.5×10¹⁸ N·m/year for the WUS outside California, respectively.
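
    The combined inversion can be sketched as a stacked linear least-squares problem: fault slip rates should reproduce GPS velocities through elastic Green's functions while staying close to geologic estimates. In the sketch below the Green's-function matrix, velocities and geologic rates are random placeholders; the real model builds them from buried dislocations and the UCERF3/NSHMP inputs.

```python
# Stacked least squares: fit GPS velocities (v = G s) while penalizing
# departure from geologic slip-rate estimates s_geo. All inputs are
# random placeholders standing in for the real elastic Green's functions.
import numpy as np

rng = np.random.default_rng(11)
n_sites, n_faults = 40, 6
G = rng.standard_normal((n_sites, n_faults))          # placeholder Green's functions
s_true = np.array([5.0, 2.0, 8.0, 1.0, 3.0, 0.5])     # mm/yr
v_gps = G @ s_true + 0.5 * rng.standard_normal(n_sites)
s_geo = s_true + rng.normal(0, 1.0, n_faults)         # geologic estimates with error

lam = 1.0                                             # weight on geologic constraints
A = np.vstack([G, lam * np.eye(n_faults)])
b = np.concatenate([v_gps, lam * s_geo])
s_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated slip rates (mm/yr):", np.round(s_hat, 2))
```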

  15. Pile-up correction algorithm based on successive integration for high count rate medical imaging and radiation spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-07-01

    In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to maintain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one-by-one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
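
    The model-based recovery step can be sketched as fitting the bi-exponential pulse shape to a noisy event. SciPy's iterative curve_fit stands in here for the paper's fast non-iterative successive-integration fit, and the time constants below are illustrative.

```python
# Sketch: recover the amplitude and onset of a single detector pulse by
# fitting the bi-exponential model A*(exp(-t/tau_d) - exp(-t/tau_r)).
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, t0, tau_r, tau_d):
    """Bi-exponential pulse starting at t0 (zero before onset)."""
    u = np.clip(t - t0, 0.0, None)
    return A * (np.exp(-u / tau_d) - np.exp(-u / tau_r))

t = np.arange(0.0, 10.0, 0.01)          # microseconds
rng = np.random.default_rng(5)
y = biexp(t, 1.0, 1.0, 0.05, 1.5) + 0.01 * rng.standard_normal(t.size)

p0 = [0.8, 0.9, 0.1, 1.0]               # rough initial guess
popt, _ = curve_fit(biexp, t, y, p0=p0)
print("fitted amplitude A = %.3f, onset t0 = %.3f us" % (popt[0], popt[1]))
```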

  16. Using variable rate models to identify genes under selection in sequence pairs: their validity and limitations for EST sequences.

    PubMed

    Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H

    2007-02-01

    Using likelihood-based variable selection models, we determined if positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs due to the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower show four of five of the positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for identification of positively selected genes.
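
    The underlying test is compact enough to state in a few lines: twice the log-likelihood gain of the selection model over the neutral model is referred to a chi-squared distribution. The log-likelihood values and degrees of freedom below are placeholders for the outputs of codeml-style model fits.

```python
# Likelihood-ratio test bookkeeping for detecting positive selection.
from scipy.stats import chi2

lnL_null = -2412.7      # e.g. nearly-neutral model fit -- placeholder value
lnL_alt = -2405.1       # e.g. positive-selection model fit -- placeholder value
df = 2                  # extra free parameters in the alternative model

lrt = 2.0 * (lnL_alt - lnL_null)
p_value = chi2.sf(lrt, df)
print(f"LRT = {lrt:.2f}, p = {p_value:.4f}")
```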

  17. Clearance Rate and BP-ANN Model in Paraquat Poisoned Patients Treated with Hemoperfusion

    PubMed Central

    Hu, Lufeng; Hong, Guangliang; Ma, Jianshe; Wang, Xianqin; Lin, Guanyang; Zhang, Xiuhua; Lu, Zhongqiu

    2015-01-01

    In order to investigate the effect of hemoperfusion (HP) on the clearance rate of paraquat (PQ) and develop a clearance model, 41 patients with acute PQ intoxication received HP treatment. PQ concentrations were determined by high performance liquid chromatography (HPLC). According to initial PQ concentration, study subjects were divided into two groups: a Low-PQ group (0.05–1.0 μg/mL) and a High-PQ group (1.0–10 μg/mL). After initial HP treatment, PQ concentrations decreased in both groups. However, in the High-PQ group, PQ levels remained in excess of 0.05 μg/mL and increased when the second HP treatment was initiated. Based on the PQ concentrations before and after HP treatment, the mean clearance rate of PQ was calculated to be 73 ± 15%. We also established a backpropagation artificial neural network (BP-ANN) model, which takes PQ concentrations before HP treatment as input data and concentrations after HP treatment as output data. When used to predict PQ concentration after HP treatment, high prediction accuracy (R = 0.9977) can be obtained with this model. In conclusion, HP is an effective way to clear PQ from the blood, and the PQ concentration after HP treatment can be predicted by the BP-ANN model. PMID:25695058
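
    The clearance computation, and a small backpropagation network in the spirit of the paper's BP-ANN, can be sketched as below. The concentration pairs are synthetic, generated around the reported ~73% mean clearance; the network size and training settings are assumptions.

```python
# Clearance-rate bookkeeping plus a small backprop network mapping
# pre-treatment to post-treatment concentration. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
c_before = rng.uniform(0.05, 10.0, size=200)                  # ug/mL
clearance = np.clip(rng.normal(0.73, 0.15, size=200), 0.2, 0.95)
c_after = c_before * (1.0 - clearance)

print("mean clearance rate: %.0f%%" % (100 * np.mean((c_before - c_after) / c_before)))

# Backpropagation-trained network, fit on log concentrations
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(np.log(c_before).reshape(-1, 1), np.log(c_after))
pred = np.exp(net.predict(np.log([[2.0]])))
print("predicted post-HP concentration for 2.0 ug/mL: %.2f ug/mL" % pred[0])
```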

  18. Constitutive modeling of polycarbonate over a wide range of strain rates and temperatures

    NASA Astrophysics Data System (ADS)

    Wang, Haitao; Zhou, Huamin; Huang, Zhigao; Zhang, Yun; Zhao, Xiaoxuan

    2017-02-01

    The mechanical behavior of polycarbonate was experimentally investigated over a wide range of strain rates (10⁻⁴ to 5×10³ s⁻¹) and temperatures (293 to 353 K). Compression tests under these conditions were performed using a SHIMADZU universal testing machine and a split Hopkinson pressure bar. Falling weight impact testing was carried out on an Instron Dynatup 9200 drop tower system. The rate- and temperature-dependent deformation behavior of polycarbonate was discussed in detail. Dynamic mechanical analysis (DMA) tests were utilized to observe the glass (α) transition and the secondary (β) transition of polycarbonate. The DMA results indicate that the α and β transitions have a dramatic influence on the mechanical behavior of polycarbonate. The decompose/shift/reconstruct (DSR) method was utilized to decompose the storage modulus into the α and β components and extrapolate the entire modulus, the α-component modulus and the β-component modulus. Based on three previous models, namely, Mulliken-Boyce, G'Sell-Jonas and DSGZ, an adiabatic model is proposed to predict the mechanical behavior of polycarbonate. The model considers the contributions of both the α and β transitions to the mechanical behavior, and it has been implemented in ABAQUS/Explicit through a user material subroutine VUMAT. The model predictions are proven to essentially coincide with the experimental results during compression testing and falling weight impact testing.

  19. Identifying critical nitrogen application rate for maize yield and nitrate leaching in a Haplic Luvisol soil using the DNDC model.

    PubMed

    Zhang, Yitao; Wang, Hongyuan; Liu, Shen; Lei, Qiuliang; Liu, Jian; He, Jianqiang; Zhai, Limei; Ren, Tianzhi; Liu, Hongbin

    2015-05-01

    Identification of the critical nitrogen (N) application rate can provide management support for ensuring grain yield and reducing the amount of nitrate leaching to ground water. A five-year (2008-2012) field lysimeter (1 m × 2 m × 1.2 m) experiment with three N treatments (0, 180 and 240 kg N ha⁻¹) was conducted to quantify maize yields and the amount of nitrate leaching from a Haplic Luvisol soil in the North China Plain. The experimental data were used to calibrate and validate the process-based model of Denitrification-Decomposition (DNDC). After this, the model was used to simulate maize yield production and the amount of nitrate leaching under a series of N application rates and to identify the critical N application rate based on acceptable yield and amount of nitrate leaching for this cropping system. The results of model calibration and validation indicated that the model could correctly simulate maize yield and the amount of nitrate leaching, with satisfactory values of the RMSE-observation standard deviation ratio, model efficiency and determination coefficient. The model simulations confirmed the measurements that N application increased maize yield compared with the control, but the high N rate (240 kg N ha⁻¹) did not produce more yield than the low one (180 kg N ha⁻¹), and the amount of nitrate leaching increased with increasing N application rate. The simulation results suggested that the optimal N application rate lies in a range between 150 and 240 kg N ha⁻¹, which would keep the amount of nitrate leaching below 18.4 kg NO₃⁻-N ha⁻¹ while maintaining an acceptable maize yield above 9410 kg ha⁻¹. Furthermore, 180 kg N ha⁻¹ produced the highest yield (9837 kg ha⁻¹) and a comparatively low amount of nitrate leaching (10.0 kg NO₃⁻-N ha⁻¹). This study provides a valuable reference for determining the optimal N application rate (or range) in other crop systems and regions in China. Copyright © 2015 Elsevier B.V. All rights reserved.
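
    Reading off a critical rate from a calibrated model amounts to sweeping application rates and keeping those that satisfy both thresholds, as sketched below with assumed response curves (saturating yield, convex leaching); only the yield floor and leaching ceiling come from the abstract, the curves themselves are not DNDC output.

```python
# Sweep N application rates and keep those meeting both the yield floor
# and the leaching ceiling. Response curves are simple assumed forms.
import numpy as np

n_rates = np.arange(0, 301, 10)                       # kg N/ha
yield_kg = 9900 * (1 - np.exp(-n_rates / 60.0))       # assumed saturating yield
leach_kg = 4.0 + 2.5e-4 * n_rates ** 2                # assumed convex leaching

ok = (yield_kg >= 9410) & (leach_kg <= 18.4)          # thresholds from the abstract
acceptable = n_rates[ok]
print("acceptable N window: %d-%d kg N/ha" % (acceptable.min(), acceptable.max()))
```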

  20. Quantitative evaluation of specific vulnerability to nitrate for groundwater resource protection based on process-based simulation model.

    PubMed

    Huan, Huan; Wang, Jinsheng; Zhai, Yuanzheng; Xi, Beidou; Li, Juan; Li, Mingxiao

    2016-04-15

    It has been proved that groundwater vulnerability assessment is an effective tool for groundwater protection. Nowadays, quantitative assessment methods for specific vulnerability are scarce due to limited understanding of the complicated contaminant fate and transport processes in the groundwater system. In this paper, a process-based simulation model for specific vulnerability to nitrate, using a 1D flow and solute transport model in the unsaturated vadose zone, is presented for groundwater resource protection. For this case study in Jilin City of northeast China, rate constants of denitrification and nitrification as well as adsorption constants of ammonium and nitrate in the vadose zone were acquired by laboratory experiments. The transfer time to the groundwater table, t50, was taken as the specific vulnerability indicator. Finally, overall vulnerability was assessed by establishing the relationship between groundwater net recharge, layer thickness and t50. The results suggested that the most vulnerable regions of Jilin City were mainly distributed in the floodplain of the Songhua River and Mangniu River. The least vulnerable areas mostly appear in the second terrace and the back of the first terrace. The overall area of low, relatively low and moderate vulnerability accounted for 76% of the study area, suggesting a relatively low possibility of nitrate contamination. In addition, the sensitivity analysis showed that the most sensitive factors of specific vulnerability in the vadose zone included the groundwater net recharge rate, the physical properties of the soil medium and the rate constant of nitrate denitrification. By validating the suitability of the process-based simulation model for specific vulnerability and comparing it with an index-based method using a group of integrated indicators, more realistic and accurate specific vulnerability mapping could be acquired with the process-based simulation model. In addition, the advantages, disadvantages, constraint conditions and