Science.gov

Sample records for rate model based

  1. Model-based Heart rate prediction during Lokomat walking.

    PubMed

    Koenig, Alexander C; Somaini, Luca; Pulfer, Michael; Holenstein, Thomas; Omlin, Ximena; Wieser, Martin; Riener, Robert

    2009-01-01

We implemented a model for prediction of heart rate during Lokomat walking. Using this model, we can predict potential overstressing of the patient and adapt the physical load accordingly. Current models for treadmill-based heart rate control neglect the fact that the interaction torques between the Lokomat and the human can have a significant effect on heart rate. Tests with five healthy subjects led to a model of sixth order with walking speed and power expenditure as inputs and heart rate prediction as output. Recordings with five different subjects were used for model validation. Future work includes model identification and predictive heart rate control with spinal cord injured and stroke patients. PMID:19963765
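
    A minimal sketch of how such a predictor could be set up, assuming a sixth-order ARX structure with the two reported inputs (the record specifies only the model order and inputs; the function names and data layout here are illustrative):

```python
import numpy as np

def fit_arx6(hr, speed, power):
    """Least-squares fit of a hypothetical 6th-order ARX model:
    HR[t] = a1*HR[t-1] + ... + a6*HR[t-6] + b1*speed[t] + b2*power[t].
    The record reports only the order and the two inputs; this exact
    structure is an assumption."""
    hr, speed, power = (np.asarray(x, float) for x in (hr, speed, power))
    X = np.array([np.concatenate([hr[t-6:t][::-1], [speed[t], power[t]]])
                  for t in range(6, len(hr))])
    y = hr[6:]
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta  # [a1..a6, b1, b2]

def predict_next(theta, hr_hist, speed_now, power_now):
    """One-step-ahead heart rate prediction from the last six samples."""
    x = np.concatenate([np.asarray(hr_hist, float)[-6:][::-1],
                        [speed_now, power_now]])
    return float(x @ theta)
```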

  2. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.
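
    The rate-based idea, propagating derivative (velocity-form) dynamics across the prediction horizon, can be sketched as follows. A single fixed linearization is assumed here, whereas the record's model is linear parameter-varying with matrices updated along the horizon:

```python
import numpy as np

def velocity_form_prediction(Ad, Bd, x0, dx0, du_seq):
    """Propagate state increments (a discrete stand-in for the state
    derivatives of the rate-based model) across the horizon:
    dx[k+1] = Ad @ dx[k] + Bd @ du[k],  x[k+1] = x[k] + dx[k+1].
    Working with increments keeps the prediction valid in transients,
    which is the point of the rate-based formulation."""
    x, dx = np.asarray(x0, float), np.asarray(dx0, float)
    traj = [x.copy()]
    for du in du_seq:
        dx = Ad @ dx + Bd @ np.atleast_1d(du)
        x = x + dx
        traj.append(x.copy())
    return np.array(traj)
```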

  3. Rate-based degradation modeling of lithium-ion cells

    SciTech Connect

    E.V. Thomas; I. Bloom; J.P. Christophersen; V.S. Battaglia

    2012-05-01

    Accelerated degradation testing is commonly used as the basis to characterize battery cell performance over a range of stress conditions (e.g., temperatures). Performance is measured by some response that is assumed to be related to the state of health of the cell (e.g., discharge resistance). Often, the ultimate goal of such testing is to predict cell life at some reference stress condition, where cell life is defined to be the point in time where performance has degraded to some critical level. These predictions are based on a degradation model that expresses the expected performance level versus the time and conditions under which a cell has been aged. Usually, the degradation model relates the accumulated degradation to the time at a constant stress level. The purpose of this article is to present an alternative framework for constructing a degradation model that focuses on the degradation rate rather than the accumulated degradation. One benefit of this alternative approach is that prediction of cell life is greatly facilitated in situations where the temperature exposure is not isothermal. This alternative modeling framework is illustrated via a family of rate-based models and experimental data acquired during calendar-life testing of high-power lithium-ion cells.
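
    A minimal sketch of the rate-based framework under an assumed Arrhenius rate law (the article's actual model family and calibrated constants are not reproduced here): accumulated degradation is obtained by integrating the rate over an arbitrary temperature history, which is what makes non-isothermal life prediction straightforward:

```python
import numpy as np

R_GAS = 8.314  # J/(mol*K)

def accumulated_degradation(time_h, temp_K, A=1.0e6, Ea=60.0e3):
    """Integrate a rate-based degradation model, r(T) = A*exp(-Ea/(R*T)),
    over a possibly non-isothermal temperature history. A and Ea are
    hypothetical constants, not the calibrated values from the article."""
    time_h, temp_K = np.asarray(time_h, float), np.asarray(temp_K, float)
    rate = A * np.exp(-Ea / (R_GAS * temp_K))
    # trapezoidal integration of the rate gives accumulated degradation
    return float(np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(time_h)))

# two-step exposure: 500 h at 45 C, then 500 h at 55 C
t = np.linspace(0.0, 1000.0, 1001)
T = np.where(t < 500.0, 318.15, 328.15)
print(accumulated_degradation(t, T))
```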

  4. Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

Global Earthquake Activity Rate (GEAR) models have usually been based either on relative tectonic motion (fault slip rates and/or distributed strain rates) or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. The other forecast component is the smoothed-seismicity ('S') forecast model of Kagan & Jackson [1994, JGR; 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of three methods: (a) taking the greater of S or T; (b) a simple weighted average of S and T; or (c) taking the log of the forecast rate as a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S
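
    The three hybridization rules translate directly into code. A minimal sketch, assuming strictly positive per-cell rates; the array names and example values are invented:

```python
import numpy as np

def hybrid_forecast(S, T, method="log", w=0.6):
    """Blend a smoothed-seismicity forecast S and a tectonic forecast T
    (earthquake rates per grid cell) by the three methods above; w is the
    fractional contribution from S in methods (b) and (c)."""
    S, T = np.asarray(S, float), np.asarray(T, float)
    if method == "max":            # (a) greater of S or T
        H = np.maximum(S, T)
    elif method == "avg":          # (b) weighted average
        H = w * S + (1.0 - w) * T
    elif method == "log":          # (c) weighted average of the logs
        H = np.exp(w * np.log(S) + (1.0 - w) * np.log(T))
    else:
        raise ValueError(method)
    return H * (S.sum() / H.sum())  # normalize to the same global rate

S = np.array([0.02, 0.10, 0.30, 0.05])   # hypothetical per-cell rates
T = np.array([0.08, 0.06, 0.40, 0.01])
print(hybrid_forecast(S, T, "log", w=0.6))
```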

  5. Nonparametric Hammerstein model based model predictive control for heart rate regulation.

    PubMed

    Su, Steven W; Huang, Shoudong; Wang, Lu; Celler, Branko G; Savkin, Andrey V; Guo, Ying; Cheng, Teddy

    2007-01-01

This paper proposes a novel nonparametric model-based model predictive control approach for the regulation of heart rate during treadmill exercise. As the model structure of the human cardiovascular system is often hard to determine, nonparametric modelling is a more realistic way to describe the complex behaviour of the cardiovascular system. This paper presents a new nonparametric Hammerstein model identification approach for modelling the heart rate response. Based on pseudo-random binary sequence experiment data, we decouple the identification of the linear dynamic part and the input nonlinearity of the Hammerstein system. Correlation analysis is applied to acquire the step response of the linear dynamic component. Support Vector Regression is adopted to obtain a nonparametric description of the inverse of the input static nonlinearity, which is utilized to form an approximate linear model of the Hammerstein system. Based on the established model, a model predictive controller under predefined speed and acceleration constraints is designed to achieve safer treadmill exercise. Simulation results show that the proposed control algorithm can achieve optimal heart rate tracking performance under the predefined constraints.
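
    A small sketch of the inverse-nonlinearity step, assuming a hypothetical tanh-type input nonlinearity (the paper identifies the inverse nonparametrically from PRBS data; only the use of Support Vector Regression is taken from the abstract):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical static input nonlinearity of the Hammerstein system; the
# paper learns its inverse nonparametrically from experiment data.
u = np.linspace(0.0, 8.0, 200)            # exercise input (e.g., speed)
v = np.tanh(0.4 * u)                      # assumed saturating nonlinearity

# Fit SVR to the reversed mapping v -> u, i.e. the inverse nonlinearity.
inverse = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(v.reshape(-1, 1), u)

# Pre-compensating the plant input with this inverse makes the cascade
# approximately linear, so a linear MPC design applies.
print(inverse.predict(np.array([[0.2], [0.5], [0.9]])))
```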

  6. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, notably including the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. The EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.

  7. Mixture of a seismicity model based on the rate-and-state friction and ETAS model

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2015-12-01

Currently the ETAS model [Ogata, 1988, JASA] is considered to be a standard model of seismicity. Because the ETAS model is a purely statistical one, however, the physics-based seismicity model derived from the rate-and-state friction law (hereafter referred to as the Dieterich model) [Dieterich, 1994, JGR] is frequently examined. The original version of the Dieterich model has several problems in its application to real earthquake sequences, and modifications have therefore been made in previous studies. Iwata [2015, Pageoph] is one such study and shows that the Dieterich model is significantly improved by including the effect of secondary aftershocks (i.e., aftershocks caused by previous aftershocks); still, the performance of the ETAS model remains superior to that of the improved Dieterich model. For further improvement, a mixture of the Dieterich and ETAS models is examined in this study. To achieve the mixture, the seismicity rate is represented as a sum of the ETAS and Dieterich models, whose weights are given as k and 1-k, respectively. This mixture model is applied to the aftershock sequences of the 1995 Kobe and 2004 Mid-Niigata earthquakes, which were analyzed in Iwata [2015]. Additionally, the sequence of the Matsushiro earthquake swarm in central Japan, 1965-1970, is analyzed. The value of k and the parameters of the ETAS and Dieterich models are estimated by the maximum likelihood method, and model performance is assessed on the basis of AIC. For the two aftershock sequences, the AIC values of the ETAS model are around 3-9 smaller (i.e., better) than those of the mixture model. On the contrary, for the Matsushiro swarm, the AIC value of the mixture model is 5.8 smaller than that of the ETAS model, indicating that the mixture of the two models yields a significant improvement of the seismicity model.
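
    A minimal sketch of the mixture and its AIC-based comparison, with hypothetical per-bin rates standing in for the fitted ETAS and Dieterich intensities (the study estimates all parameters jointly by maximum likelihood; only the single mixing weight k is searched here):

```python
import numpy as np

def mixture_rate(lam_etas, lam_diet, k):
    """Seismicity rate of the mixture model: k*ETAS + (1-k)*Dieterich."""
    return k * lam_etas + (1.0 - k) * lam_diet

def poisson_loglik(lam, counts):
    # log-likelihood of observed counts per bin under rate lam (up to a const.)
    return float(np.sum(counts * np.log(lam) - lam))

def aic(loglik, n_params):
    return 2.0 * n_params - 2.0 * loglik

lam_e = np.array([2.0, 1.2, 0.7, 0.4])   # hypothetical ETAS rates per bin
lam_d = np.array([1.5, 1.4, 0.6, 0.5])   # hypothetical Dieterich rates per bin
counts = np.array([3, 1, 1, 0])          # hypothetical observed counts
best = max((poisson_loglik(mixture_rate(lam_e, lam_d, k), counts), k)
           for k in np.linspace(0.0, 1.0, 101))
print("best k:", best[1], " AIC:", aic(best[0], n_params=1))
```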

  8. Rate-based process modeling study of CO2 capture with aqueous monoethanolamine solution

    SciTech Connect

    Zhang, Y.; Chen, H.; Chen, C.C.; Plaza, J.M.; Dugas, R.; Rochelle, G.T.

    2009-10-15

Rate-based process modeling technology has matured and is increasingly gaining acceptance over traditional equilibrium-stage modeling approaches. Recently, comprehensive pilot plant data for carbon dioxide (CO2) capture with aqueous monoethanolamine (MEA) solution have become available from the University of Texas at Austin. The pilot plant data cover key process variables including CO2 concentration in the gas stream, CO2 loading in lean MEA solution, liquid to gas ratio, and packing type. In this study, we model the pilot plant operation with Aspen RateSep, a second-generation rate-based multistage separation unit operation model in Aspen Plus. After a brief review of rate-based modeling, thermodynamic and kinetic models for CO2 absorption with the MEA solution, and transport property models, we show an excellent match of the rate-based model predictions against the comprehensive pilot plant data, and we validate the superiority of the rate-based models over the traditional equilibrium-stage models. We further examine the impacts of key rate-based modeling options, i.e., film discretization options and flow model options. The rate-based model provides excellent predictive capability, and it should be very useful for design and scale-up of CO2 capture processes.

  9. Acid-base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer.

    PubMed

    Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L; Eisele, Fred L; Siepmann, J Ilja; Hanson, David R; Zhao, Jun; McMurry, Peter H

    2012-11-13

    Climate models show that particles formed by nucleation can affect cloud cover and, therefore, the earth's radiation budget. Measurements worldwide show that nucleation rates in the atmospheric boundary layer are positively correlated with concentrations of sulfuric acid vapor. However, current nucleation theories do not correctly predict either the observed nucleation rates or their functional dependence on sulfuric acid concentrations. This paper develops an alternative approach for modeling nucleation rates, based on a sequence of acid-base reactions. The model uses empirical estimates of sulfuric acid evaporation rates obtained from new measurements of neutral molecular clusters. The model predicts that nucleation rates equal the sulfuric acid vapor collision rate times a prefactor that is less than unity and that depends on the concentrations of basic gaseous compounds and preexisting particles. Predicted nucleation rates and their dependence on sulfuric acid vapor concentrations are in reasonable agreement with measurements from Mexico City and Atlanta. PMID:23091030

  11. A model-based technique for predicting pilot opinion ratings for large commercial transports

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1982-01-01

    A model-based technique for predicting pilot opinion ratings is described. Features of this procedure, which is based on the optimal-control model for pilot/vehicle systems, include (1) capability to treat "unconventional" aircraft dynamics, (2) a relatively free-form pilot model, (3) a simple scalar metric for attentional workload, and (4) a straightforward manner of proceeding from descriptions of the flight task environment and requirements to a prediction of pilot opinion rating. The method was able to provide a good match to a set of pilot opinion ratings obtained in a manned simulation study of large commercial aircraft in landing approach.

  13. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, ∼1 mm/yr higher than the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10¹⁹ N·m/yr, with fast-straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10¹⁹ N·m/yr.

  14. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter.

    PubMed

    Zhang, Zhen; Ma, Yaopeng

    2016-02-06

A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and are connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of the GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: the GPO-based adaptive filter, the Volterra filter, the backlash filter and a linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively.
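
    A sketch of the core construction, using the classical play operator as a stand-in for the generalized play operator and a sample-by-sample LMS update (the tapped-delay-line input of the paper is simplified away; thresholds and signals are invented):

```python
import numpy as np

def play_operator(u, r, y0=0.0):
    """Classical play operator with threshold r; the paper's generalized
    play operator replaces the two boundary lines with envelope
    functions, which this sketch omits."""
    y, prev = np.empty_like(u), y0
    for i, ui in enumerate(u):
        prev = min(ui + r, max(ui - r, prev))
        y[i] = prev
    return y

def lms_fit(u, d, thresholds, mu=0.05, epochs=3):
    """Adapt the weights of a linear combination of play operators with
    the LMS rule so the output tracks the measured hysteresis d."""
    X = np.stack([play_operator(u, r) for r in thresholds], axis=1)
    w = np.zeros(len(thresholds))
    for _ in range(epochs):
        for x, target in zip(X, d):      # sample-by-sample LMS update
            w += mu * (target - x @ w) * x
    return w

# toy identification: reproduce a single play operator with r = 0.3
u = np.sin(np.linspace(0.0, 12.0, 600))
d = play_operator(u, 0.3)
print(np.round(lms_fit(u, d, thresholds=[0.1, 0.3, 0.5]), 2))
```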

  16. Mathematical modeling of high-rate Anammox UASB reactor based on granular packing patterns.

    PubMed

    Tang, Chong-Jian; He, Rui; Zheng, Ping; Chai, Li-Yuan; Min, Xiao-Bo

    2013-04-15

A novel mathematical model was developed to estimate the volumetric nitrogen conversion rates of a high-rate Anammox UASB reactor based on the packing patterns of granular sludge. A series of relationships among granular packing density, sludge concentration, hydraulic retention time and volumetric conversion rate were constructed to correlate Anammox reactor performance with granular packing patterns. It was suggested that the Anammox granules pack in an equivalent simple cubic pattern in the high-rate UASB reactor, with a packing density of 50-55%, which not only accommodates a high concentration of sludge inside the reactor but also provides a large pore volume, thus prolonging the actual substrate conversion time. Results also indicated that it was necessary to improve Anammox reactor performance by enhancing substrate loading when the sludge concentration was higher than 37.8 gVSS/L. The established model was carefully calibrated and verified, and it simulated the performance of the granule-based high-rate Anammox UASB reactor well.

  17. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    DOE PAGES

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; Weinberger, Christopher R.

    2016-06-14

In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects, while the pressure-dependent yield is obtained through the pressure-dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure-dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.

  18. Dynamics of a network-based SIS epidemic model with nonmonotone incidence rate

    NASA Astrophysics Data System (ADS)

    Li, Chun-Hsien

    2015-06-01

This paper studies the dynamics of a network-based SIS epidemic model with a nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological effect of certain diseases spreading in a contact network at high infective levels. We first find a threshold value for the transmission rate. This value completely determines the dynamics of the model; interestingly, the threshold does not depend on the functional form of the nonlinear incidence rate. If the transmission rate is less than or equal to the threshold value, the disease dies out; otherwise, it is permanent. Numerical experiments are given to illustrate the theoretical results. We also consider the effect of the nonlinear incidence on the epidemic dynamics.
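
    A common choice for such a nonmonotone incidence is g(I) = βI/(1 + αI²); the sketch below (a well-mixed simplification of the network model, with hypothetical parameters) illustrates the threshold behavior described above:

```python
def incidence(I, beta, alpha):
    """Nonmonotone incidence g(I) = beta*I/(1 + alpha*I**2): the infection
    force rises, then falls at high infective levels (psychological effect).
    This particular functional form is a common choice, assumed here."""
    return beta * I / (1.0 + alpha * I ** 2)

def endemic_level(beta, alpha, gamma, I0=0.01, steps=20000, dt=0.01):
    """Forward-Euler iteration of the well-mixed (mean-field) dynamics
    dI/dt = g(I)*(1 - I) - gamma*I; returns the long-run infected fraction."""
    I = I0
    for _ in range(steps):
        I += dt * (incidence(I, beta, alpha) * (1.0 - I) - gamma * I)
    return I

print(endemic_level(beta=0.8, alpha=10.0, gamma=0.3))  # above threshold: endemic
print(endemic_level(beta=0.2, alpha=10.0, gamma=0.3))  # below threshold: dies out
```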

  19. Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium

    SciTech Connect

    Crist, K.C.

    1984-10-01

An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models, based on the dissolution rates, of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. The predictions made by the two models were found to differ considerably, a difference attributed to the way each model represents clearance from the pulmonary region. 33 references, 11 figures, 9 tables.

  20. Robust rate-control for wavelet-based image coding via conditional probability models.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-03-01

Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented using two invocations of the coder that estimates the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes and wavelet coders. Because of extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution, as well. With respect to achieving a target rate, the proposed approach and associated fast approximation yield average percentage errors around 0.5% and 1.0% on images in the test set. By comparison, 2-coding-pass ρ-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits-per-pixel (bpp) that decrease down to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation approach are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.
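
    The two-invocation idea can be illustrated with a toy R-Q characterization: assume the rate is locally log-linear in the quantization step, estimate the slope from two coder runs, and invert for the step size that hits the target rate (the power-law form and all numbers here are invented, not the paper's probability model):

```python
import numpy as np

def step_for_target_rate(q1, r1, q2, r2, r_target):
    """Assume the R-Q curve is locally log-linear, r ~ c * q**(-s); estimate
    (s, c) from two coder invocations at steps q1, q2 and invert for the
    quantization step expected to hit r_target."""
    s = np.log(r1 / r2) / np.log(q2 / q1)
    c = r1 * q1 ** s
    return (c / r_target) ** (1.0 / s)

# e.g. 1.2 bpp at step 8 and 0.6 bpp at step 16 -> step for a 1.0 bpp target
print(step_for_target_rate(8.0, 1.2, 16.0, 0.6, 1.0))   # about 9.6
```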

  1. Modelling heart rate kinetics.

    PubMed

    Zakynthinaki, Maria S

    2015-01-01

The objective of the present study was to formulate a simple and at the same time effective mathematical model of heart rate kinetics in response to movement (exercise). Based on an existing model, a system of two coupled differential equations which give the rate of change of heart rate and the rate of change of exercise intensity is used. The modifications introduced to the existing model are justified and discussed in detail, while models of blood lactate accumulation with respect to time and exercise intensity are also presented. The main modification is that the proposed model now has only one parameter, which reflects the overall cardiovascular condition of the individual. The time elapsed after the beginning of the exercise, the intensity of the exercise, as well as blood lactate are also taken into account. Application of the model provides information regarding the individual's cardiovascular condition and is able to detect possible changes in it across the data recording periods. To demonstrate examples of successful numerical fit of the model, constant-intensity experimental heart rate data sets of two individuals were selected and numerical optimization was implemented. In addition, numerical simulations provided predictions for various exercise intensities and various cardiovascular condition levels. The proposed model can serve as a powerful tool for comprehensive heart rate analysis, not only in exercise physiology (for efficiently designing training sessions for healthy subjects) but also in the areas of cardiovascular health and rehabilitation (including application in population groups for which direct heart rate recordings at intense exercise are not possible or not allowed, such as the elderly or pregnant women).
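
    A toy version of such a coupled-ODE formulation, with a single condition parameter D and a deliberately simplified right-hand side (the paper's actual equations differ and also involve blood lactate):

```python
import numpy as np
from scipy.integrate import solve_ivp

def hr_kinetics(t, y, D):
    """Toy stand-in for the coupled equations: y = (hr, v) with hr the
    heart rate and v the exercise intensity in [0, 1]; D is the single
    cardiovascular-condition parameter. Here heart rate simply relaxes
    toward a demand set by the intensity; the constant-intensity protocol
    of the fitted data sets means dv/dt = 0."""
    hr, v = y
    hr_demand = 60.0 + v * (190.0 - 60.0)
    return [D * (hr_demand - hr), 0.0]

sol = solve_ivp(hr_kinetics, (0.0, 600.0), [70.0, 0.5], args=(0.02,),
                t_eval=np.linspace(0.0, 600.0, 61))
print(round(sol.y[0, -1], 1))   # approaches 125 bpm for v = 0.5
```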

  3. A multi-rate decay model to predict energy-based acoustic parameters in churches (L).

    PubMed

    Martellotta, Francesco

    2009-03-01

    Multi-rate decays are sometimes observed in room acoustics, appearing when markedly different volumes are coupled together and resulting in nonlinear decay curves. Such behavior appears in several churches at the very beginning of the decay process, although in conditions which cannot be explicitly referred to as coupling phenomena. Consequently, multi-rate exponential decays may be suitable to model energy distribution in this group of buildings, providing a more elegant and easily applicable set of equations in place of a previously defined "linear" model, used to adapt Barron's revised theory. The paper shows that the multi-rate approach ensures ease of calculation, without significant loss in accuracy in predicting energy-based acoustic parameters.
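
    A multi-rate decay can be written as a sum of exponentials, one per decay rate; a small sketch with hypothetical level splits and decay times:

```python
import numpy as np

def multi_rate_decay(t, levels_db, decay_times):
    """Energy decay built as a sum of exponentials, one per decay rate:
    E(t) = sum_i E_i * exp(-13.82 * t / T_i), where 13.82 = ln(10**6)
    gives a 60 dB drop over each decay time T_i."""
    E0 = 10.0 ** (np.asarray(levels_db) / 10.0)
    t = np.asarray(t)[:, None]
    return (E0 * np.exp(-13.82 * t / np.asarray(decay_times))).sum(axis=1)

# early steep decay (T = 1.2 s) feeding a slower reverberant tail (T = 4 s)
t = np.linspace(0.0, 3.0, 301)
E = multi_rate_decay(t, levels_db=[0.0, -8.0], decay_times=[1.2, 4.0])
print(10.0 * np.log10(E[-1] / E[0]))   # total level drop after 3 s, in dB
```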

  4. Acid-base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L.; Eisele, Fred L.; Siepmann, J. Ilja; Hanson, David R.; Zhao, Jun; McMurry, Peter H.

    2013-05-01

Measurements of aerosol number distributions down to one molecule have provided information that we have used to develop a new approach for modeling atmospheric nucleation rates. Measurements were carried out with the Cluster Chemical Ionization Mass Spectrometer (Cluster CIMS), the scanning mobility spectrometer using a diethylene glycol condensation particle counter as detector (DEG SMPS), and an ambient pressure proton transfer mass spectrometer for ammonia and amines (AmPMS). The model explains nucleation as a result of cluster evolution due to a sequence of acid-base reactions. We conclude that the smallest stable cluster contains four sulfuric acid molecules. The model leads to a simple analytic expression for nucleation rates that is reasonably consistent (i.e., within a factor of 10) with atmospheric observations. The model predicts that nucleation rates are equal to a prefactor, P < 1, times the sulfuric acid vapor collision rate, i.e., J = P·0.5·k₁₁·[H₂SO₄]².
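
    A numeric illustration of the quoted expression, with hypothetical values for the collision rate coefficient and prefactor chosen only to show the orders of magnitude involved:

```python
# J = P * 0.5 * k11 * [H2SO4]**2, with illustrative (not measured) values.
k11 = 5.0e-10      # cm^3 s^-1, typical molecular collision rate coefficient
P = 1.0e-3         # dimensionless prefactor, < 1 as the model predicts
h2so4 = 1.0e7      # molecules cm^-3, a polluted-boundary-layer level

J = P * 0.5 * k11 * h2so4 ** 2
print(f"J = {J:.1e} cm^-3 s^-1")   # nucleation rate, here 2.5e+01
```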

  5. Time-dependent sleep stage transition model based on heart rate variability.

    PubMed

    Takeda, Toki; Mizuno, Osamu; Tanaka, Tomohiro

    2015-01-01

A new model is proposed to automatically classify sleep stages using heart rate variability (HRV). The generative model, built on the observation that the distribution and the transition probabilities of sleep stages depend on the elapsed time from the beginning of sleep, infers the sleep stage with a Gibbs sampler. Experiments were conducted using a public data set consisting of 45 healthy subjects, and the model's classification accuracy was evaluated for three sleep stages: wake state, rapid eye movement (REM) sleep, and non-REM sleep. Experimental results demonstrated that the model provides more accurate sleep stage classification than conventional models (naive Bayes and Support Vector Machine) that do not take the above characteristics into account. Our study contributes to improving the quality of sleep monitoring in daily life using easy-to-wear HRV sensors. PMID:26736763

  6. MRI-based anatomical model of the human head for specific absorption rate mapping

    PubMed Central

    Makris, Nikos; Angelone, Leonardo; Tulloch, Seann; Sorg, Scott; Kaiser, Jonathan; Kennedy, David

    2009-01-01

In this study, we present a magnetic resonance imaging (MRI)-based, high-resolution, numerical model of the head of a healthy human subject. In order to formulate the model, we performed quantitative volumetric segmentation on the human head, using T1-weighted MRI. The high spatial resolution used (1 × 1 × 1 mm³) allowed for the precise computation and visualization of a higher number of anatomical structures than provided by previous models. Furthermore, the high spatial resolution allowed us to study individual thin anatomical structures of clinical relevance not visible by the standard model currently adopted in computational bioelectromagnetics. When we computed the electromagnetic field and specific absorption rate (SAR) at 7 Tesla MRI using this high-resolution model, we were able to obtain a detailed visualization of such fine anatomical structures as the epidermis/dermis, bone structures, bone marrow, white matter and nasal and eye structures. PMID:18985401

  7. Research and realization of ultrasonic gas flow rate measurement based on ultrasonic exponential model.

    PubMed

    Zheng, Dandan; Hou, Huirang; Zhang, Tao

    2016-04-01

For ultrasonic gas flow rate measurement based on an ultrasonic exponential model, when the noise frequency is close to that of the desired signals (called similar-frequency noise), or when the received signal amplitude is small and unstable at high flow rates, local convergence of the genetic-ant colony optimization-3cycles algorithm may appear and measurement accuracy may be affected. Therefore, an improved method, energy genetic-ant colony optimization-3cycles (EGACO-3cycles), is proposed to solve this problem. By judging the maximum-energy position of the signal, the initial parameter range of the exponential model can be narrowed and local convergence can thus be avoided. Moreover, a DN100 flow rate measurement system with the EGACO-3cycles method is established based on an NI PCI-6110 board and a personal computer. A series of experiments is carried out to test the new method and the measurement system. It is shown that local convergence does not appear with the EGACO-3cycles method when similar-frequency noise exists and the flow rate is high, so the correct time of flight can be obtained. Furthermore, through flow calibration on this system, a measurement range ratio of 500:1 is achieved, and the measurement accuracy is 0.5% with a low transition velocity of 0.3 m/s.

  9. Gurson-type elastic-plastic damage model based on strain-rate plastic potential

    NASA Astrophysics Data System (ADS)

    Balan, Tudor; Cazacu, Oana

    2013-12-01

Ductile damage is generally described by stress-space analytical potentials. In this contribution, it is shown that strain-rate potentials, which are the exact conjugates of the stress-based potentials, can equally be used to describe the dilatational response of porous metals. This framework is particularly appropriate for porous materials whose matrix is described by complex yield criteria for which a closed-form expression of the stress-based potential is not available. The new approach is illustrated for porous metals containing randomly distributed spherical voids in a von Mises elasto-plastic matrix. Furthermore, a general time integration algorithm for simulating the mechanical response using this new formulation is developed and implemented in Abaqus/Standard. The proposed model and algorithm are validated with respect to the Abaqus built-in GTN model, which is based on a stress potential, through the simulation of a tensile test on a round bar.

  10. Model-based melt rate control during vacuum arc remelting of alloy 718

    NASA Astrophysics Data System (ADS)

    Williamson, Rodney L.; Melgaard, David K.; Shelmidine, Gregory J.; Beaman, Joseph J.; Morrison, Robert

    2004-02-01

Vacuum arc remelting (VAR) is used widely throughout the specialty metals industry to produce superalloy and titanium alloy cast ingots. Optimum VAR casting requires that the electrode melting rate be controlled at all times during the process. This is especially difficult when process conditions are such that the temperature distribution in the electrode has not achieved, or has been driven away from, steady state. This condition is encountered during the beginning and closing stages of the VAR process, and also during some process disturbances, such as when the melt zone passes through a transverse crack. To address these transient melting situations, a new method of VAR melt rate control has been developed that incorporates an accurate, low-order melting model to continually estimate the temperature distribution in the electrode. This method of model-based control was tested at Carpenter Technology Corporation. In the first test, two 0.43-m-diameter alloy 718 electrodes were melted into 0.51-m-diameter ingots. Aggressive start-up and hot-top procedures were used to test the dynamic capabilities of the control technique. Additionally, a transverse cut was placed in each electrode with an abrasive saw to mimic an electrode crack. Accurate melt rate control was demonstrated throughout each melt. The second test used an electrode size and grade proprietary to the host company. Because it was not stress relieved after the primary casting process, the electrode was known to possess multiple cracks that made accurate melt rate control impossible using standard VAR controller technology. This electrode was also successfully melted with good melt rate control using the model-based controller.

  12. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) of space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (error/particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533

  13. Littoral transport rates in the Santa Barbara Littoral Cell: a process-based model analysis

    USGS Publications Warehouse

    Elias, E. P. L.; Barnard, Patrick L.; Brocatus, John

    2009-01-01

Identification of sediment transport patterns and pathways is essential for sustainable coastal zone management of the heavily modified coastline of Santa Barbara and Ventura County (California, USA). A process-based model application, based on Delft3D Online Morphology, is used to investigate the littoral transport potential along the Santa Barbara Littoral Cell (between Point Conception and Mugu Canyon). An advanced optimization procedure is applied to enable annual sediment transport computations by reducing the ocean wave climate to 10 wave height-direction classes. Modeled littoral transport rates compare well with observed dredging volumes, and erosion or sedimentation hotspots coincide with the modeled divergence and convergence of the transport gradients. Sediment transport rates are strongly dependent on the alongshore variation in wave height due to wave sheltering, diffraction and focusing by the Northern Channel Islands, and on the local orientation of the geologically-controlled coastline. Local transport gradients exceed the net eastward littoral transport, and are considered a primary driver for hot-spot erosion.

  14. An extension to the constructivist coding hypothesis as a learning model for selective feedback when the base rate is high.

    PubMed

    Ghaffarzadegan, Navid; Stewart, Thomas R

    2011-07-01

    Elwin, Juslin, Olsson, and Enkvist (2007) and Henriksson, Elwin, and Juslin (2010) offered the constructivist coding hypothesis to describe how people code the outcomes of their decisions when availability of feedback is conditional on the decision. They provided empirical evidence only for the .5 base rate condition. This commentary argues that the constructivist coding hypothesis imposes an ever-declining selection rate and overestimates base rate bias for high base rate conditions. We provide support based on a simulation model of learning under selective feedback with different base rates. Then we discuss possible extensions to constructivist coding that can help overcome the problem. PMID:21728470

  15. Tests of Rating Models

    ERIC Educational Resources Information Center

    Masin, Sergio Cesare; Busetto, Martina

    2010-01-01

    The study reports empirical tests of Anderson's, Haubensak's, Helson's, and Parducci's rating models when two end anchors are used for rating. The results show that these models cannot predict the judgment effect called here the Dai Pra effect. It is shown that an extension of Anderson's model is consistent with this effect. The results confirm…

  16. Mechanical properties of methacrylate-based model dentin adhesives: effect of loading rate and moisture exposure.

    PubMed

    Singh, Viraj; Misra, Anil; Parthasarathy, Ranganathan; Ye, Qiang; Park, Jonggu; Spencer, Paulette

    2013-11-01

    The aim of this study is to investigate the mechanical behavior of model methacrylate-based dentin adhesives under conditions that simulate the wet oral environment. A series of monotonic and creep experiments were performed on rectangular beam samples of dentin adhesive in three-point bending configuration under different moisture conditions. The monotonic test results show a significant effect of loading rate on the failure strength and the linear limit (yield point) of the stress-strain response. In addition, these tests show that the failure strength is low, and the failure occurs at a smaller deformation when the test is performed under continuously changing moisture conditions. The creep test results show that under constant moisture conditions, the model dentin adhesives can have a viscoelastic response under certain low loading levels. However, when the moisture conditions vary under the same low loading levels, the dentin adhesives have an anomalous creep response accompanied by large secondary creep and high strain accumulation. PMID:23744598

  17. A novel epidemic spreading model with decreasing infection rate based on infection times

    NASA Astrophysics Data System (ADS)

    Huang, Yunhan; Ding, Li; Feng, Yun

    2016-02-01

A new epidemic spreading model in which individuals can be infected repeatedly is proposed in this paper. The infection rate decreases with the number of times an individual has been infected before, a phenomenon that may be caused by immunity or by the heightened alertness of individuals. We introduce a new parameter, called the decay factor, to quantify the decrease of the infection rate. Through this parameter, our model bridges the Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Recovered (SIR) models. The proposed model has been studied by Monte Carlo numerical simulation. It is found that the initial infection rate has a greater impact on the peak value than the decay factor. The effect of the decay factor on the final density and the outbreak threshold is dominant, but weakens significantly when birth and death rates are considered. Besides, simulation results show that the influence of birth and death rates on the final density is non-monotonic in some circumstances.
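
    A minimal Monte Carlo sketch of the mechanism, for a well-mixed population with hypothetical parameters: each individual's susceptibility is scaled by the decay factor raised to the number of prior infections:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=1000, beta0=0.3, delta=0.5, gamma=0.1, steps=300):
    """Each susceptible individual i is infected per step with probability
    beta0 * delta**k[i] * (infected fraction), where k[i] counts previous
    infections and delta is the decay factor: delta = 1 recovers SIS-like
    behavior, delta = 0 gives SIR-like one-shot immunity."""
    infected = np.zeros(n, dtype=bool)
    infected[:10] = True                   # initial seed
    k = np.zeros(n)                        # prior infection counts
    k[:10] = 1
    prevalence = []
    for _ in range(steps):
        p = beta0 * delta ** k * infected.mean()
        new = (~infected) & (rng.random(n) < p)
        rec = infected & (rng.random(n) < gamma)
        infected = (infected | new) & ~rec
        k[new] += 1
        prevalence.append(infected.mean())
    return prevalence

print(max(simulate(delta=0.9)), max(simulate(delta=0.1)))  # peak prevalence
```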

  18. Estimation of the occurrence rate of strong earthquakes based on hidden semi-Markov models

    NASA Astrophysics Data System (ADS)

    Votsi, I.; Limnios, N.; Tsaklidis, G.; Papadimitriou, E.

    2012-04-01

The present paper applies hidden semi-Markov models (HSMMs) in an attempt to reveal key features of earthquake generation associated with the actual stress field, which is not accessible to direct observation. The models generalize hidden Markov models by considering the hidden process to form a semi-Markov chain. Considering that the states of the models correspond to levels of the actual stress field, the stress field level at the occurrence time of each strong event is revealed. The dataset concerns a well-catalogued, seismically active region incorporating a variety of tectonic styles. More specifically, the models are applied to Greece and its surrounding lands, using a complete data sample of strong (M ≥ 6.5) earthquakes that occurred in the study area from 1845 to the present. The earthquakes are grouped according to their magnitudes, and the cases of two and three magnitude ranges, with a corresponding number of states, are examined. The parameters of the HSMMs are estimated and their confidence intervals are calculated based on their asymptotic behavior. The rate of earthquake occurrence is introduced through the proposed HSMMs and its maximum likelihood estimator is calculated. The asymptotic properties of the estimator are studied, including uniform strong consistency and asymptotic normality, and the confidence interval for the proposed estimator is given. We assume the state spaces of both the observable and the hidden process to be finite, the hidden Markov chain to be homogeneous and stationary, and the observations to be conditionally independent. The hidden states at the occurrence time of each strong event are revealed and the rate of occurrence of an anticipated earthquake is estimated on the basis of the proposed HSMMs. Moreover, the mean time to the first occurrence of a strong anticipated earthquake is estimated and its confidence interval is calculated.

  20. Learning to maximize reward rate: a model based on semi-Markov decision processes

    PubMed Central

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R.

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to more outcome but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of the several possible “conditions.” A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each “condition” being a “state” and the value of decision thresholds being the “actions” taken in those states. The problem of finding the optimal decision thresholds then is cast as the stochastic optimal control problem of taking actions in each state in the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model choses high values of decision threshold which lead to sub-optimal performance. With experience, however, the model learns to lower the value of decision thresholds till finally it finds the optimal values. PMID:24904252

  1. Logarithmic rate based elasto-viscoplastic cyclic constitutive model for soft biological tissues.

    PubMed

    Zhu, Yilin; Kang, Guozheng; Yu, Chao; Poh, Leong Hien

    2016-08-01

Based on the logarithmic rate and piecewise linearization theory, a thermodynamically consistent elasto-viscoplastic constitutive model is developed in the framework of finite deformations to describe the nonlinear time-dependent biomechanical performance of soft biological tissues, such as nonlinear anisotropic monotonic stress-strain responses, stress relaxation, creep and ratchetting. In the proposed model, the soft biological tissue is treated as a typical composite consisting of an isotropic matrix and an anisotropic fiber aggregation. Accordingly, the free energy function and stress tensor are divided into two parts related to the matrix and the fiber aggregation, respectively. The nonlinear biomechanical responses of the tissues are described by the piecewise linearization theory with hypo-elastic relations for the fiber aggregation. The evolution equations of viscoplasticity are formulated from the dissipation inequalities by the co-directionality hypotheses. Anisotropy is considered in the hypo-elastic relations and viscoplastic flow rules by introducing material parameters dependent on the loading direction. The capability of the proposed model to describe the nonlinear time-dependent deformation of soft biological tissues is then verified by comparing the predictions with the corresponding experimental results for three tissues. The predicted monotonic stress-strain responses, stress relaxation, creep and ratchetting of soft biological tissues are in good agreement with the corresponding experimental ones. PMID:27108349

  2. Evidence-Based Adequacy Model for School Funding: Success Rates in Illinois Schools that Meet Targets

    ERIC Educational Resources Information Center

    Murphy, Gregory J.

    2012-01-01

    This quantitative study explores the 2010 recommendation of the Educational Funding Advisory Board to consider the Evidence-Based Adequacy model of school funding in Illinois. This school funding model identifies and costs research-based practices necessary in a prototypical school and sets funding levels based upon those practices. This study…

  3. Stage-discharge rating curves based on satellite altimetry and modeled discharge in the Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; Dias de Paiva, Rodrigo; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stephane; Garambois, Pierre-André; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frederique

    2016-05-01

    In this study, rating curves (RCs) were determined by applying satellite altimetry to a poorly gauged basin. This study demonstrates the synergistic application of remote sensing and watershed modeling to capture the dynamics and quantity of flow in the Amazon River Basin, respectively. Three major advancements for estimating basin-scale patterns in river discharge are described. The first advancement is the preservation of the hydrological meanings of the parameters expressed by Manning's equation to obtain a data set containing the elevations of the river beds throughout the basin. The second advancement is the provision of parameter uncertainties and, therefore, the uncertainties in the rated discharge. The third advancement concerns estimating the discharge while considering backwater effects. We analyzed the Amazon Basin using nearly one thousand series that were obtained from ENVISAT and Jason-2 altimetry for more than 100 tributaries. Discharge values and related uncertainties were obtained from the rainfall-discharge MGB-IPH model. We used a global optimization algorithm based on Markov chain Monte Carlo and a Bayesian framework to determine the rating curves. The data were randomly allocated into 80% calibration and 20% validation subsets. A comparison with the validation samples produced a Nash-Sutcliffe efficiency (Ens) of 0.68. When the MGB discharge uncertainties were less than 5%, the Ens value increased to 0.81 (mean). A comparison with the in situ discharge resulted in an Ens value of 0.71 for the validation samples (and 0.77 for calibration). The Ens values at the mouths of the rivers that experienced backwater effects significantly improved when the mean monthly slope was included in the RC. Our RCs were not mission-dependent, and the Ens value was preserved when applying ENVISAT rating curves to Jason-2 altimetry at crossovers. The cease-to-flow parameter of our RCs provided a good proxy for determining river bed elevation. This proxy was validated
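
    The Bayesian rating-curve idea lends itself to a compact sketch: fit Q = a·(h − z0)^b to stage-discharge pairs with a Metropolis sampler, so that the posterior of the cease-to-flow stage z0 (the river-bed proxy mentioned above) comes with uncertainty bounds. The code below uses synthetic data and lognormal errors as assumptions; it is not the MGB-IPH tool chain.

      import numpy as np

      rng = np.random.default_rng(0)
      h = np.linspace(12.0, 18.0, 40)                   # altimetric stages (m)
      true_a, true_b, true_z0 = 85.0, 1.6, 10.0
      Q = true_a * (h - true_z0) ** true_b * rng.lognormal(0.0, 0.05, h.size)

      def log_post(theta):
          a, b, z0 = theta
          if a <= 0 or b <= 0 or z0 >= h.min():         # flat priors with bounds
              return -np.inf
          resid = np.log(Q) - np.log(a * (h - z0) ** b) # lognormal errors
          return -0.5 * np.sum((resid / 0.05) ** 2)

      theta = np.array([50.0, 1.0, 9.0])
      lp = log_post(theta)
      samples = []
      for _ in range(20000):                            # Metropolis random walk
          prop = theta + rng.normal(0.0, [2.0, 0.02, 0.05])
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta.copy())

      post = np.array(samples[5000:])                   # discard burn-in
      print("posterior mean of (a, b, z0):", post.mean(axis=0).round(2))
      print("z0 95% interval:", np.percentile(post[:, 2], [2.5, 97.5]).round(2))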

  4. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate greater uncertainty in the fault slip-rate parameter that controls earthquake-activity rates than was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  5. Estimation of inlet flow rates for image-based aneurysm CFD models: where and how to begin?

    PubMed

    Valen-Sendstad, Kristian; Piccinelli, Marina; KrishnankuttyRema, Resmi; Steinman, David A

    2015-06-01

    Patient-specific flow rates are rarely available for image-based computational fluid dynamics models. Instead, flow rates are often assumed to scale according to the diameters of the arteries of interest. Our goal was to determine how choice of inlet location and scaling law affect such model-based estimation of inflow rates. We focused on 37 internal carotid artery (ICA) aneurysm cases from the Aneurisk cohort. An average ICA flow rate of 245 mL min⁻¹ was assumed from the literature, and then rescaled for each case according to its inlet diameter squared (assuming a fixed velocity) or cubed (assuming a fixed wall shear stress). Scaling was based on diameters measured at various consistent anatomical locations along the models. Choice of location introduced a modest 17% average uncertainty in model-based flow rate, but within individual cases estimated flow rates could vary by >100 mL min⁻¹. A square law was found to be more consistent with physiological flow rates than a cube law. Although impact of parent artery truncation on downstream flow patterns is well studied, our study highlights a more insidious and potentially equal impact of truncation site and scaling law on the uncertainty of assumed inlet flow rates and thus, potentially, downstream flow patterns.
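
    The two candidate scaling laws can be written down directly; in the sketch below the 245 mL/min reference flow comes from the abstract, while the reference diameter and the example diameters are invented for illustration.

      REF_FLOW_ML_MIN = 245.0   # mean ICA flow rate taken from the literature
      REF_DIAMETER_MM = 4.7     # assumed population-average ICA inlet diameter

      def scaled_inflow(d_mm, exponent):
          """Rescale the reference flow by (d/d_ref)^n: n=2 preserves the mean
          inlet velocity (square law); n=3 preserves wall shear stress (cube law)."""
          return REF_FLOW_ML_MIN * (d_mm / REF_DIAMETER_MM) ** exponent

      for d in (3.5, 4.7, 6.0):  # inlet diameters measured at different sites (mm)
          print(f"d={d} mm: square law {scaled_inflow(d, 2):6.1f} mL/min, "
                f"cube law {scaled_inflow(d, 3):6.1f} mL/min")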

  6. An Agent-Based Modeling Approach for Determining Corn Stover Removal Rate and Transboundary Effects

    NASA Astrophysics Data System (ADS)

    Gan, Jianbang; Langeveld, J. W. A.; Smith, C. T.

    2014-02-01

    Bioenergy production involves different agents with potentially different objectives, and an agent's decision often has transboundary impacts on other agents along the bioenergy value chain. Understanding and estimating the transboundary impacts is essential to portraying the interactions among the different agents and in the search for the optimal configuration of the bioenergy value chain. We develop an agent-based model to mimic the decision making by feedstock producers and feedstock-to-biofuel conversion plant operators and propose multipliers (i.e., ratios of economic values accruing to different segments and associated agents in the value chain) for assessing the transboundary impacts. Our approach is generic and thus applicable to a variety of bioenergy production systems at different sites and geographic scales. We apply it to the case of producing ethanol using corn stover in Iowa, USA. The results from the case study indicate that stover removal rate is site specific and varies considerably with soil type, as well as other factors, such as stover price and harvesting cost. In addition, ethanol production using corn stover in the study region would have strong positive ripple effects, with the values of multipliers varying with greenhouse gas price and national energy security premium. The relatively high multiplier values suggest that a large portion of the value associated with corn stover ethanol production would accrue to the downstream end of the value chain instead of stover producers.

  7. Meta-Analytic Connectivity Modelling Revisited: Controlling for Activation Base Rates

    PubMed Central

    Langner, Robert; Rottschy, Claudia; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.

    2014-01-01

    Co-activation of distinct brain regions is a measure of functional interaction, or connectivity, between those regions. The co-activation pattern of a given region can be investigated using seed-based activation likelihood estimation meta-analysis of functional neuroimaging data stored in databases such as BrainMap. This method reveals inter-regional functional connectivity by determining brain regions that are consistently co-activated with a given region of interest (the “seed”) across a broad range of experiments. In current implementations of this meta-analytic connectivity modelling (MACM), significant spatial convergence (i.e. consistent co-activation) is distinguished from noise by comparing it against an unbiased null-distribution of random spatial associations between experiments according to which all grey-matter voxels have the same chance of convergence. As the a priori probability of finding activation in different voxels markedly differs across the brain, computing such a quasi-rectangular null-distribution renders the detection of significant convergence more likely in those voxels that are frequently activated. Here, we propose and test a modified MACM approach that takes this activation frequency bias into account. In this new specific co-activation likelihood estimation (SCALE) algorithm, a null-distribution is generated that reflects the base rate of reporting activation in any given voxel and thus equalizes the a priori chance of finding across-study convergence in each voxel of the brain. Using four exemplary seed regions (right visual area V4, left anterior insula, right intraparietal sulcus, and subgenual cingulum), our tests corroborated the enhanced specificity of the modified algorithm, indicating that SCALE may be especially useful for delineating distinct core networks of co-activation. PMID:24945668
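
    The base-rate correction at the heart of SCALE can be illustrated in a few lines: null co-activation foci are drawn in proportion to each voxel's empirical activation frequency rather than uniformly over grey matter. The toy Python example below (a one-dimensional "brain" with synthetic counts and invented parameters, not the BrainMap/ALE implementation) shows the resulting voxelwise test.

      import numpy as np

      rng = np.random.default_rng(1)
      n_vox, n_exp = 1000, 500

      base_rate = rng.gamma(0.5, 1.0, n_vox)       # uneven reporting frequencies
      base_rate /= base_rate.sum()

      # Synthetic observed co-activation counts with some seed region
      observed = rng.binomial(n_exp, base_rate * 3.0)

      # Null: redistribute the same total number of foci according to base rates,
      # so frequently reported voxels are expected to co-activate often by chance
      null = rng.binomial(observed.sum(), base_rate, size=(5000, n_vox))

      p = (null >= observed).mean(axis=0)          # voxelwise p-values
      print("voxels significant at p < 0.001:", int((p < 0.001).sum()))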

  8. Interest rates factor model

    NASA Astrophysics Data System (ADS)

    Lee, Sangwook; Kim, Min Jae; Kim, Soo Yong

    2011-07-01

    Interdependence of the interest rates of the US, the UK, and Japan is analyzed in this work by means of spectral analysis and network methods. The predominant factor in the interest rate market is the country issuing the bond, and a minor factor is the time to maturity of the bonds. Power-law cross-correlation among the different countries is analyzed by the detrended cross-correlation analysis method. Long-range cross-correlation is found between the first factors of the interest rates, while there is no cross-correlation between some of the second factors. Tail dependency is indicated by tail indices from Archimedean copulas, including an empirical copula. In contrast to other pairs, the US-UK first-factor pair has tail dependencies in both the upper and lower tails. The dynamic properties of the interest rates are modeled by a stochastic volatility model. The properties of mean reversion and volatility clustering are observed and reflected in this model. The proposed simulation method combines the dependence structures and the factor dynamics model; it simultaneously describes the interest rates of the different countries.

  9. A Self-Organized Model for Cell-Differentiation Based on Variations of Molecular Decay Rates

    PubMed Central

    Hanel, Rudolf; Pöchacker, Manfred; Schölling, Manuel; Thurner, Stefan

    2012-01-01

    Systemic properties of living cells are the result of molecular dynamics governed by so-called genetic regulatory networks (GRN). These networks capture all possible features of cells and are responsible for the immense levels of adaptation characteristic to living systems. At any point in time only small subsets of these networks are active. Any active subset of the GRN leads to the expression of particular sets of molecules (expression modes). The subsets of active networks change over time, leading to the observed complex dynamics of expression patterns. Understanding these dynamics becomes increasingly important in systems biology and medicine. While the importance of transcription rates and catalytic interactions has been widely recognized in modeling genetic regulatory systems, the understanding of the role of degradation of biochemical agents (mRNA, protein) in regulatory dynamics remains limited. Recent experimental data suggest that there exists a functional relation between mRNA and protein decay rates and expression modes. In this paper we propose a model for the dynamics of successions of sequences of active subnetworks of the GRN. The model is able to reproduce key characteristics of molecular dynamics, including homeostasis, multi-stability, periodic dynamics, alternating activity, differentiability, and self-organized critical dynamics. Moreover, the model allows a natural understanding of the mechanism behind the relation between decay rates and expression modes. The model explains recent experimental observations that decay rates (or turnovers) vary between differentiated tissue classes at a general systemic level and highlights the role of intracellular decay rate control mechanisms in cell differentiation. PMID:22693554

  10. Source mass eruption rate retrieved from satellite-based data using statistical modelling

    NASA Astrophysics Data System (ADS)

    Gouhier, Mathieu; Guillin, Arnaud; Azzaoui, Nourddine; Eychenne, Julia; Valade, Sébastien

    2015-04-01

    Ash clouds emitted during volcanic eruptions have long been recognized as a major hazard likely to have dramatic consequences for aircraft, the environment and people. Thus, the International Civil Aviation Organization (ICAO) established nine Volcanic Ash Advisory Centers (VAACs) around the world, whose mission is to forecast the location and concentration of ash clouds over hours to days, using volcanic ash transport and dispersion models (VATDs). Those models use input parameters such as plume height (PH), particle size distribution (PSD), and mass eruption rate (MER), the latter being a key parameter as it directly controls the amount of ash injected into the atmosphere. The MER can be obtained rather accurately from detailed ground deposit studies, but this method does not match the operational requirements in case of a volcanic crisis. Thus, VAACs use empirical laws to determine the MER from the estimation of the plume height. In some cases, this method can be difficult to apply, either because plume height data are not available or because uncertainties related to this method are too large. We propose here an alternative method based on the utilization of satellite data to assess the MER at the source during explosive eruptions. Satellite-based techniques allow fine ash cloud loading to be quantitatively retrieved far from the source vent. Those measurements can be carried out in a systematic and real-time fashion using geostationary satellites, in particular. We tested here the relationship likely to exist between the amount of fine ash dispersed in the atmosphere and the amount of coarser tephra deposited on the ground, the sum of both contributions yielding an estimate of the MER. For this purpose we examined in detail 19 eruptions (of known duration) for which both (i) the amount of fine ash dispersed in the atmosphere and (ii) the mass of tephra deposited on the ground have been estimated and published. We combined these data with contextual information that may
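
    The mass balance tested here reduces to a one-line estimate: MER = (airborne fine ash + ground tephra) / eruption duration. A trivial sketch, with made-up numbers in place of the 19 published eruption datasets:

      def mass_eruption_rate(fine_ash_kg, tephra_kg, duration_s):
          """Source MER in kg/s from satellite ash load plus deposit mass."""
          return (fine_ash_kg + tephra_kg) / duration_s

      fine_ash = 2.0e8          # kg, fine ash retrieved from satellite imagery
      tephra = 1.1e10           # kg, coarser tephra mapped on the ground
      duration = 6 * 3600.0     # s, known eruption duration

      print(f"MER = {mass_eruption_rate(fine_ash, tephra, duration):.2e} kg/s")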

  11. The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification

    ERIC Educational Resources Information Center

    Petrov, Alexander A.; Anderson, John R.

    2005-01-01

    A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…

  12. A metahillslope model based on an analytical solution to a linearized Boussinesq equation for temporally variable recharge rates

    NASA Astrophysics Data System (ADS)

    Pauwels, Valentijn R. N.; Verhoest, Niko E. C.; de Troch, François P.

    2002-12-01

    In hydrology the slow, subsurface component of the discharge is usually referred to as base flow. One method to model base flow is the conceptual approach, in which the complex physical reality is simplified using hypotheses and assumptions, and the various physical processes are described mathematically. The purpose of this paper is to develop and validate a conceptual method, based on hydraulic theory, to calculate the base flow of a catchment, under observed precipitation rates. The governing groundwater equation, the Boussinesq equation, valid for a unit width sloping aquifer, is linearized and solved for a temporally variable recharge rate. The solution allows the calculation of the transient water table profile in and the outflow from an aquifer under temporally variable recharge rates. When a catchment is considered a metahillslope, the solution can be used, when coupled to a routing model, to calculate the catchment base flow. The model is applied to the Zwalm catchment and four subcatchments in Belgium. The results suggest that it is possible to model base flow at the catchment scale, using a Boussinesq-based metahillslope model. The results further indicate that it is sufficient to use a relatively simple formulation of the infiltration, overland flow, and base flow processes to obtain reasonable estimates of the total catchment discharge.
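
    Because the linearized Boussinesq equation makes the aquifer behave as a linear system, its outflow under a temporally variable recharge series can be sketched as a convolution of recharge with a unit response function. The exponential response and all numbers below are illustrative assumptions, not the paper's analytical solution.

      import numpy as np

      dt = 1.0                              # time step (days)
      t = np.arange(0.0, 200.0, dt)
      recharge = np.zeros_like(t)           # recharge rate (mm/day)
      recharge[10:15] = 5.0
      recharge[60:62] = 12.0

      k = 25.0                              # aquifer response time constant (days)
      unit_response = np.exp(-t / k) / k    # normalized exponential response

      baseflow = np.convolve(recharge, unit_response)[: t.size] * dt
      i = int(baseflow.argmax())
      print("peak base flow %.2f mm/day on day %d" % (baseflow[i], t[i]))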

  13. Evaluation of Finite-Rate Gas-Surface Interaction Models for a Carbon Based Ablator

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Goekcen, Tahir

    2015-01-01

    Two sets of finite-rate gas-surface interaction models between air and the carbon surface are studied. The first set is an engineering model with one-way chemical reactions, and the second set is a more detailed model with two-way chemical reactions. These two proposed models are intended to cover the carbon surface ablation conditions including the low temperature rate-controlled oxidation, the mid-temperature diffusion-controlled oxidation, and the high temperature sublimation. The prediction of carbon surface recession is achieved by coupling a material thermal response code and a Navier-Stokes flow code. The material thermal response code used in this study is the Two-dimensional Implicit Thermal-response and Ablation Program, which predicts charring material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting full Navier-Stokes equations using the Data Parallel Line Relaxation method. Recession analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities with heat fluxes ranging from 45 to 1100 W/cm2 are performed and compared with data for model validation. The ablating material used in these arc-jet tests is Phenolic Impregnated Carbon Ablator. Additionally, computational predictions of surface recession and shape change are in good agreement with measurements for arc-jet conditions of the Small Probe Reentry Investigation for Thermal Protection System Engineering.

  14. Prediction of PWSCC in nickel base alloys using crack growth rate models

    SciTech Connect

    Thompson, C.D.; Krasodomski, H.T.; Lewis, N.; Makar, G.L.

    1995-12-31

    The Ford/Andresen slip-dissolution SCC model, originally developed for stainless steel components in BWR environments, has been applied to Alloy 600 and Alloy X-750 tested in deaerated pure water chemistry. A method is described whereby the crack growth rates measured in compact tension specimens can be used to estimate crack growth in a component. Good agreement was found between model predictions and measured SCC in X-750 threaded fasteners over a wide range of temperatures, stresses, and material conditions. Most data support the basic assumption of this model that cracks initiate early in life. The evidence supporting a particular SCC mechanism is mixed. Electrochemical repassivation data and estimates of oxide fracture strain indicate that the slip-dissolution model can account for the observed crack growth rates, provided primary rather than secondary creep rates are used. However, approximately 100 cross-sectional TEM foils of SCC cracks including crack tips reveal no evidence of enhanced plasticity or unique dislocation patterns at the crack tip or along the crack to support a classic slip-dissolution mechanism. No voids, hydrides, or microcracks are found in the vicinity of the crack tips, casting doubt on classic hydrogen-related mechanisms. The bulk oxide films exhibit a surface oxide which is often different from the oxides found within a crack. Although bulk chromium concentration affects the rate of SCC, analytical data indicate that the mechanism does not result from chromium depletion at the grain boundaries. The overall findings support a corrosion/dissolution mechanism, but not one necessarily related to slip at the crack tip.

  15. The effects of composition on glass dissolution rates: The application of four models to a data base

    SciTech Connect

    Geldart, R.W.; Kindle, C.H.

    1988-01-01

    Four models have been applied to a data base to relate glass dissolution in distilled water to composition. The data base is used to compare the precisions obtained from the models in fitting actual data. The usefulness of the data base in formulating a model is also demonstrated. Two related models in which the composite or pH-adjusted free energy of hydration of the glass is the correlating parameter are compared with experimental data. In a structural model, the nonbridging oxygen content of the glasses is used to correlate glass dissolution rate to composition. In a model formulated for this report, the cation valence and the oxygen content of the glass are compared with observed dissolution rates. The models were applied to the 28-day normalized silica release at 90 °C for over 285 glass compositions with surface area to volume ratios of 10 m⁻¹ (Materials Characterization Center MCC-1 glass durability test using distilled water). These glasses included the nonradioactive analogs of WV205 and SRL-165, as well as SRL-131, PNL 76-68, and a European glass, UK209. Predicted glass dissolution rates show similar fits to the data for all four models. The predictions of the models were also plotted for two subsets of the glasses: waste glasses and Savannah River Laboratory glasses. The model predictions fit the data for these groups much better than they fit the data for the entire set of glasses. 14 refs., 12 figs., 7 tabs.

  16. Modeling High Rate Phosphorus and Nitrogen Removal in a Vertical Flow Alum Sludge based Constructed Wetlands

    NASA Astrophysics Data System (ADS)

    Jeyakumar, Lordwin; Zhao, Yaqian

    2014-05-01

    Increased awareness of the impacts of diffuse pollution, and their intensification, has pushed forward the need for the development of low-cost wastewater treatment techniques. One such effort is the use of novel dewatered alum sludge cake (DASC) based constructed wetlands (CWs) for removing nutrients, organics, trace elements and other pollutants from wastewater. Understanding the processes in CWs requires a numerical model that describes the biochemical transformation and degradation processes in subsurface vertical flow (VF) CWs. Therefore, this research focuses on the development of a process-based model of phosphorus (P) and nitrogen (N) removal to achieve stable performance by using DASC as a substrate in a CW treatment system. An object-oriented modelling tool known as "STELLA", which works on the principle of system dynamics, is used for the development of the P and N model. The core objective of the modelling work is to understand the processes in DASC-based CWs and to optimize design criteria. The P and N dynamic model is developed for DASC-based CWs. The P model developed exclusively for the DASC-based CW was able to simulate the effluent P concentration leaving the system satisfactorily. Moreover, the developed P dynamic model identified the major P pathways as adsorption (72%), followed by plant uptake (20%) and microbial uptake (7%), in a single-stage laboratory-scale DASC-based CW. Similarly, a P dynamic simulation model was developed to simulate the four-stage laboratory-scale DASC-based CWs. It was found that simulated and observed values of P removal were in good agreement. The fate of P in all four stages clearly shows that adsorption played a pivotal role in each stage of the system due to the use of DASC as a substrate. P adsorption by the wetland substrate/DASC represents 59-75% of total P reduction. Subsequently, plant uptake and microbial uptake play a lesser role in P removal (as compared to adsorption). With regard

  17. Aftershock triggering by postseismic stresses: A study based on Coulomb rate-and-state models

    NASA Astrophysics Data System (ADS)

    Cattania, Camilla; Hainzl, Sebastian; Wang, Lifeng; Enescu, Bogdan; Roth, Frank

    2015-04-01

    The spatiotemporal clustering of earthquakes is a feature of medium- and short-term seismicity, indicating that earthquakes interact. However, controversy exists about the physical mechanism behind aftershock triggering: static stress transfer and reloading by postseismic processes have been proposed as explanations. In this work, we use a Coulomb rate-and-state model to study the role of coseismic and postseismic stress changes on aftershocks and focus on two processes: creep on the main shock fault plane (afterslip) and secondary aftershock triggering by previous aftershocks. We model the seismic response to Coulomb stress changes using the Dieterich constitutive law and focus on two events: the Parkfield, Mw = 6.0, and the Tohoku, Mw = 9.0, earthquakes. We find that modeling secondary triggering systematically improves the maximum log likelihood fit of the sequences. The effect of afterslip is more subtle and difficult to assess for near-fault events, where model errors are largest. More robust conclusions can be drawn for off-fault aftershocks: following the Tohoku earthquake, afterslip promotes shallow crustal seismicity in the Fukushima region. Simple geometrical considerations indicate that afterslip-induced stress changes may have been significant on trench parallel crustal fault systems following several of the largest recorded subduction earthquakes. Moreover, the time dependence of afterslip strongly enhances its triggering potential: seismicity triggered by an instantaneous stress change decays more quickly than seismicity triggered by gradual loading, and as a result we find afterslip to be particularly important between few weeks and few months after the main shock.
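
    The seismicity response used in such Coulomb rate-and-state models follows Dieterich's formulation; for a single coseismic stress step it reduces to a closed form, sketched below with illustrative parameter values (not those calibrated for Parkfield or Tohoku).

      import numpy as np

      A_SIGMA = 0.04e6   # Pa, constitutive parameter A times effective normal stress
      T_A = 1.0          # yr, aftershock relaxation time
      R0 = 10.0          # background seismicity rate, events/yr

      def seismicity_rate(t_yr, dcfs_pa):
          """Dieterich (1994) rate after a Coulomb stress step:
          R(t) = r / (1 + (exp(-dCFS/(A*sigma)) - 1) * exp(-t/t_a))."""
          gamma = (np.exp(-dcfs_pa / A_SIGMA) - 1.0) * np.exp(-t_yr / T_A)
          return R0 / (1.0 + gamma)

      t = np.array([0.01, 0.1, 0.5, 1.0, 5.0])   # years after the main shock
      print("rate after a +0.1 MPa step:", seismicity_rate(t, 0.1e6).round(1))
      # Gradual loading (e.g. afterslip) enters as a time-dependent stress history
      # rather than a step, which is why its triggering decays more slowly.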

  18. Boosting Degree Completion and Transfer Rates: An Examination of Counseling/Advising Using the Relationship-Based Model

    ERIC Educational Resources Information Center

    Jefferson, Trevina

    2010-01-01

    The purpose of this pilot study was to explore another way to boost degree completion and transfer rates by helping students develop informed decision-making power; specifically, counseling methods in Los Angeles Community Colleges using the Relationship-Based model were examined. As part of this study, students would develop their research…

  1. A physically based model of temperature and strain rate dependent yield in BCC metals: Implementation into crystal plasticity

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett C.; Carroll, Jay D.; Boyce, Brad L.; Weinberger, Christopher R.

    2015-01-01

    In this work, we develop a crystal plasticity finite element model (CP-FEM) that constitutively captures the temperature and strain rate dependent flow stresses in pure BCC refractory metals. This model is based on the kink-pair theory developed by Seeger (1981) and is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws that are implemented into a BCC crystal plasticity model. The model is then used to predict temperature and strain rate dependent yield stresses of single and polycrystal BCC refractory metals (molybdenum, tantalum, tungsten and niobium) and compared with existing experimental data. To connect to larger length scales, classical continuum-scale constitutive models are fit to the CP-FEM predictions of polycrystal yield stresses. The results produced by this model, based on kink-pair theory and with origins in dislocation mechanics, show excellent agreement with the Mechanical Threshold Stress (MTS) model for temperature and strain-rate dependent flow. This framework provides a method to bridge multiple length scales in modeling the deformation of BCC metals.

  2. An enhanced rate-based emission trading program for NOX: the Dutch model.

    PubMed

    Sholtz, A M; Van Amburg, B; Wochnick, V K

    2001-12-01

    Since 1997 government and industry in The Netherlands have been engaged in intensive policy discussions on how to design an emission trading program that would satisfy the Government's policy objectives within the national and international regulatory framework and accommodate industry's need for a flexible and cost-effective approach. Early on in the discussion the most promising solution was a rate-based approach, which dynamically allocated saleable emission credits based on a performance standard rate and actual energy used by facilities. All industrial facilities above a threshold of 20 MWth would be judged on their ability to meet this performance rate. Those "cleaner" than the standard can sell excess credits to others with an allocation that is less than their actual NOX emission. With some changes in law, such a design could be made to fit well into the national and EU legislative framework while at the same time uniquely meeting industry's requirement of flexibility toward economic growth and facility expansion. (An analysis of the legislative changes required will be given in a separate paper by Chris Dekkers.) However, the environmental outcome of such a system is not as certain as under an absolute emission cap. At the request of the Netherlands Ministry of Housing, Spatial Planning and the Environment (VROM), Automated Credit Exchange (ACE), in close cooperation with the working group of government and industry representatives introduced a number of features into the Dutch NOX program allowing full exploitation of market mechanisms while allowing intermediate adjustments in the performance standard rates. The design is geared toward meeting environmental targets without jeopardizing the trading market the program intends to create. The paper discusses the genesis of the two-tier credit system ACE helped to design, explains the differences between primary (fixed) and secondary (variable) credits, and outlines how the Dutch system is expected to

  3. The contagious nature of imprisonment: an agent-based model to explain racial disparities in incarceration rates

    PubMed Central

    Lum, Kristian; Swarup, Samarth; Eubank, Stephen; Hawdon, James

    2014-01-01

    We build an agent-based model of incarceration based on the susceptible–infected–susceptible (SIS) model of infectious disease propagation. Our central hypothesis is that the observed racial disparities in incarceration rates between Black and White Americans can be explained as the result of differential sentencing between the two demographic groups. We demonstrate that if incarceration can be spread through a social influence network, then even relatively small differences in sentencing can result in large disparities in incarceration rates. Controlling for effects of transmissibility, susceptibility and influence network structure, our model reproduces the observed large disparities in incarceration rates given the differences in sentence lengths for White and Black drug offenders in the USA without extensive parameter tuning. We further establish the suitability of the SIS model as applied to incarceration by demonstrating that the observed structural patterns of recidivism are an emergent property of the model. In fact, our model shows a remarkably close correspondence with California incarceration data. This work advances efforts to combine the theories and methods of epidemiology and criminology. PMID:24966237
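
    A stripped-down version of such an SIS incarceration model fits in a short script: agents on a social network are "infected" (incarcerated) at a rate scaled by incarcerated neighbours and "recover" (are released) after a sentence whose mean differs between two groups. The network, rates, and sentence lengths below are invented for illustration, not the paper's calibrated values.

      import random

      random.seed(2)
      N, STEPS, BETA = 2000, 400, 0.02       # agents, time steps, transmission prob.
      SENTENCE = {"A": 8, "B": 16}           # mean sentence length differs by group

      group = ["A" if i < N // 2 else "B" for i in range(N)]
      nbrs = [[(i + d) % N for d in (-2, -1, 1, 2)] for i in range(N)]  # ring network
      time_left = [0] * N                    # >0 means currently incarcerated
      for i in random.sample(range(N), 20):  # seed initial incarcerations
          time_left[i] = SENTENCE[group[i]]

      for _ in range(STEPS):
          newly = []
          for i in range(N):
              if time_left[i] == 0:
                  k = sum(time_left[j] > 0 for j in nbrs[i])
                  if random.random() < 1 - (1 - BETA) ** k:
                      newly.append(i)
              else:
                  time_left[i] -= 1          # serve out the sentence
          for i in newly:
              time_left[i] = SENTENCE[group[i]]

      for g in ("A", "B"):
          n_in = sum(t > 0 for i, t in enumerate(time_left) if group[i] == g)
          print(g, "incarcerated per 1000:", 1000.0 * n_in / (N // 2))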

  4. Shell Model Based Reaction Rates for rp-PROCESS Nuclei in the Mass Range A=44-63

    NASA Astrophysics Data System (ADS)

    Fisker, J. L.; Barnard, V.; Görres, J.; Langanke, K.; Martínez-Pinedo, G.; Wiescher, M. C.

    2001-11-01

    We have used large-scale shell-model diagonalization calculations to determine the level spectra, proton spectroscopic factors, and electromagnetic transition probabilities for proton rich nuclei in the mass range A=44-63. Based on these results and the available experimental data, we calculated the resonances for proton capture reactions on neutron deficient nuclei in this mass range. We also calculated the direct capture processes on these nuclei in the framework of a Woods-Saxon potential model. Taking into account both resonant and direct contributions, we determined the ground-state proton capture reaction rates for these nuclei under hot hydrogen burning conditions for temperatures between 10^8 and 10^10 K. The calculated compound-nucleus level properties and the reaction rates are presented here; the rates are also available in computer-readable format from the authors.
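
    For narrow resonances, capture rates of this kind follow the standard closed form N_A<sigma v> = 1.54e11 * (mu*T9)^(-3/2) * sum_i (omega*gamma)_i * exp(-11.605*E_i/T9) in cm^3 s^-1 mol^-1, with energies and strengths in MeV. The sketch below evaluates that formula for invented resonance parameters, not the shell-model values of the paper.

      import numpy as np

      def narrow_resonance_rate(T9, mu, resonances):
          """N_A<sigma v> in cm^3 s^-1 mol^-1; T9 in GK, mu in amu,
          resonances given as (E_r [MeV], omega_gamma [MeV]) pairs."""
          pref = 1.5399e11 / (mu * T9) ** 1.5
          return pref * sum(wg * np.exp(-11.605 * er / T9) for er, wg in resonances)

      res = [(0.35, 2.0e-8), (0.61, 5.5e-7)]     # hypothetical resonances
      for T9 in (0.3, 1.0, 3.0):
          print(f"T9={T9}: {narrow_resonance_rate(T9, 0.98, res):.3e}")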

  5. A model-based evaluation of sedimentary reconstructions of 10Be production rates

    NASA Astrophysics Data System (ADS)

    Carney, Lewis; Plancherel, Yves; Khatiwala, Samar; Henderson, Gideon

    2016-04-01

    Atmospheric production of 10Be is small when solar activity and, therefore, solar magnetic field and total solar irradiance are strong. Variations in solar activity affect climate and the production of other climate-relevant isotopes, such as 14C. Solar activity is thus an important variable to constrain. Since 10Be production is clearly related to solar activity and the cycle of beryllium is simpler than that of carbon, 10Be records in ice cores have been used to reconstruct total solar irradiance variability. Unfortunately, 10Be records in ice cores are not only affected by variations in atmospheric production, but are also modulated by changes in wind patterns since spatiotemporal atmospheric 10Be gradients are quite large. In that context, sedimentary 10Be records from the abyssal ocean could be of great interest: since the residence time of 10Be in the ocean is thought to be comparable to the overturning time-scale of the ocean, spatial 10Be gradients may be relatively weaker than those in the atmosphere. Under these conditions, regional oceanic variability should only weakly affect the distribution of 10Be in the ocean and local sedimentary 10Be records are expected to represent the global average 10Be production better than 10Be measured in ice cores. We here show results from a global ocean model of 10Be that we use to investigate the spatial variability of simulated sedimentary 10Be records and test the sensitivity of the 10Be sedimentary flux to uncertainties in the circulation field and in the particle chemistry of beryllium. Our ocean model is based on the Transport Matrix method. The surface 10Be input fluxes are taken from atmospheric model simulations. Our model experiments, constrained by available dissolved 10Be data, show that there exist regions in the ocean where the sedimentary 10Be flux is relatively insensitive to changes in input patterns and magnitudes, assumed particle chemistry and flux patterns, and ocean circulation. We submit that

  6. Strain-rate sensitivity of foam materials: A numerical study using 3D image-based finite element model

    NASA Astrophysics Data System (ADS)

    Sun, Yongle; Li, Q. M.; Withers, P. J.

    2015-09-01

    Realistic simulations are increasingly demanded to clarify the dynamic behaviour of foam materials, because, on one hand, the significant variability (e.g. 20% scatter band) of foam properties and the lack of reliable dynamic test methods for foams bring particular difficulty to accurately evaluating the strain-rate sensitivity in experiments; while on the other hand numerical models based on idealised cell structures (e.g. Kelvin and Voronoi) may not be sufficiently representative to capture the actual structural effect. To overcome these limitations, the strain-rate sensitivity of the compressive and tensile properties of closed-cell aluminium Alporas foam is investigated in this study by means of meso-scale realistic finite element (FE) simulations. The FE modelling method based on X-ray computed tomography (CT) images is introduced first, as well as its applications to foam materials. Then the compression and tension of Alporas foam at a wide variety of applied nominal strain-rates are simulated using an FE model constructed from the actual cell geometry obtained from the CT image. The strain-rate sensitivity of the compressive strength (collapse stress) and tensile strength (0.2% offset yield point) is evaluated when considering different cell-wall material properties. The numerical results show that the rate dependence of the cell-wall material is the main cause of the strain-rate hardening of the compressive and tensile strengths at low and intermediate strain-rates. When the strain-rate is sufficiently high, shock compression is initiated, which significantly enhances the stress at the loading end and has a complicated effect on the stress at the supporting end. The plastic tensile wave effect is evident at high strain-rates, but shock tension cannot develop in Alporas foam due to the softening associated with a single fracture process zone occurring in the tensile response. In all cases the micro inertia of individual cell walls subjected to localised deformation is found to

  7. Toy Stories: Modeling Rates

    ERIC Educational Resources Information Center

    Swanson, Patricia E.

    2015-01-01

    Elementary school mathematics is increasingly recognized for its crucial role in developing the foundational skills and understandings for algebra. In this article, the author uses a lesson to introduce the concept of "rates"--comparing two different types and units of measure--and how to graph them. Described is the lesson and shared…

  8. Modelling temperature-compensated physiological rates, based on the co-ordination of responses to temperature of developmental processes.

    PubMed

    Parent, B; Turc, O; Gibon, Y; Stitt, M; Tardieu, F

    2010-05-01

    Temperature fluctuates rapidly and affects all developmental and metabolic processes. This often obscures the effects of developmental trends or of other environmental conditions when temperature fluctuates naturally. A method is proposed for modelling temperature-compensated rates, based on the coordination of temperature responses of developmental processes. In a data set comprising 41 experiments in the greenhouse, growth chamber, or the field, the temperature responses in the range of 6-36 degrees C for different processes were compared in three species, maize, rice, and Arabidopsis thaliana. Germination, cell division, expansive growth rate, leaf initiation, and phenology showed coordinated temperature responses and followed common laws within each species. The activities of 10 enzymes involved in carbon metabolism exhibited monotonic exponential responses across the whole range 10-40 degrees C. Hence, the temperature dependence of developmental processes is not explained by a simple relationship to central metabolism. Temperature-compensated rates of development were calculated from the equations of the response curves, by expressing rates per unit equivalent time at 20 degrees C. This resulted in stable rates when temperatures fluctuated over a large range (for which classical thermal time was inefficient), and in time courses of leaf development which were common to several experiments with different temperature scenarios. PMID:20194927
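
    The "equivalent time at 20 degrees C" device can be sketched directly: each clock-time increment is weighted by f(T)/f(20), where f is the common temperature-response curve. An Arrhenius-type f and its activation energy are assumed below purely for illustration; the paper derives f from the coordinated responses themselves.

      import numpy as np

      R_GAS = 8.314      # J mol^-1 K^-1
      E_A = 65e3         # J mol^-1, assumed activation energy of the response

      def f(temp_c):
          """Relative developmental rate at temp_c (Arrhenius form, assumed)."""
          return np.exp(-E_A / (R_GAS * (temp_c + 273.15)))

      def equivalent_hours_at_20c(temps_c, dt_h=1.0):
          """Equivalent time at 20 degrees C for an hourly temperature series."""
          return float(np.sum(f(np.asarray(temps_c)) / f(20.0)) * dt_h)

      hourly = 15.0 + 10.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))  # one day
      print("24 clock hours = %.1f equivalent hours at 20 C"
            % equivalent_hours_at_20c(hourly))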

  9. The effect of an instructional program based on health belief model in decreasing cesarean rate among primiparous pregnant mothers

    PubMed Central

    Hassani, Laleh; Aghamolaei, Teamur; Ghanbarnejad, Amin; Dadipoor, Sakineh

    2016-01-01

    Introduction: Although cesarean section has saved many mothers’ and infants’ lives, its increasing prevalence is a problem. According to recent statistics, the current cesarean rate in Iran is in fact 3–4 times the standard rate defined by WHO. Therefore, the present study aims to estimate the effect of an instructional program based on the health belief model on reducing the cesarean rate among primiparous pregnant women. Materials and Methods: In this semi-experimental research, 60 primiparous women who had visited Bandar Abbas Healthcare Centers were selected as the subjects. They were in their 26–30th week of pregnancy. They were selected by a multi-stage cluster sampling method (a combination of clustering and simple randomization) and were divided into two groups, experimental and control. The data were gathered using a valid and reliable questionnaire. The instructional intervention was done after the completion of the pretest questionnaire, based on the sub-constructs of the health belief model, in six instructional sessions. One month after the intervention, posttest questionnaires were completed by the subjects in both groups. The data were analyzed using descriptive statistics, standard deviation, independent t-test, and paired t-test. The significance level was set at <0.05. Results: The two groups showed significant differences in awareness score, perceived sensitivity, intensity, benefits, barriers, self-efficacy, and performance (P < 0.001). In the experimental group, nine subjects (30%) had a natural delivery. Conclusion: According to the findings of the current research, an instructional program designed according to the health belief model can significantly influence pregnant women's awareness, intention, and choice of delivery type. PMID:27512693

  10. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
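
    The analytical Bayesian step that makes this tractable in real time is the Gaussian linear-model posterior: for offsets d = G m + noise with data covariance C_d and prior N(0, C_m), the posterior is Gaussian with C_post = (G^T C_d^-1 G + C_m^-1)^-1 and m_post = C_post G^T C_d^-1 d. The sketch below uses a random G in place of elastic Green's functions, and the noise levels are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      n_data, n_patches = 60, 20
      G = rng.normal(size=(n_data, n_patches))        # stand-in Green's functions
      m_true = np.maximum(rng.normal(1.0, 0.8, n_patches), 0.0)   # slip (m)
      d = G @ m_true + rng.normal(0.0, 0.01, n_data)  # static offsets + noise

      Cd_inv = np.eye(n_data) / 0.01**2               # data precision
      Cm_inv = np.eye(n_patches) / 2.0**2             # prior precision

      C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
      m_post = C_post @ G.T @ Cd_inv @ d

      print("max |m_post - m_true| = %.3f m" % np.abs(m_post - m_true).max())
      print("patch 0: %.2f +/- %.2f m" % (m_post[0], C_post[0, 0] ** 0.5))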

  11. Towards optimal design of sport footwear based on muscle activity and minimum loading rate using simplified model.

    PubMed

    Khassetarash, Arash; Hassannejad, Reza

    2015-08-01

    Recently, researchers have been looking for a model to relate the ground reaction force to the vibration parameters of sport shoes. A 4-degrees-of-freedom mass-spring-damper model and a dimensionless objective function have been used in the literature in order to simulate the functionality of the central nervous system based on the muscle tuning paradigm. In this study, a two-phase optimization method was proposed in order to better simulate the functionality of the central nervous system. A comparison between the results of this method and those in the literature demonstrates improved accuracy. Additionally, it has been reported that the loading rate in the time history of the ground reaction force plays a role in overuse injuries of the tibia and tarsals; therefore, in order to introduce an optimum design for sport shoes, a new objective function was introduced in this study and the proposed method was used to find the best possible range of shoe parameters for pre-fatigued and post-fatigued conditions. Results indicated that there is no possible solution that brings the loading rate to the desired value; however, the region in which the lowest values of the objective function were achieved was introduced as the safe region. It was shown that the size of this safe region shrinks as muscles fatigue. The results also demonstrated a high dependency of the safe region on the mass distribution and touchdown velocities of the lower extremities.

  12. The Effects of Base Rate, Selection Ratio, Sample Size, and Reliability of Predictors on Predictive Efficiency Indices Associated with Logistic Regression Models.

    ERIC Educational Resources Information Center

    Soderstrom, Irina R.; Leitner, Dennis W.

    While it is imperative that attempts be made to assess the predictive accuracy of any prediction model, traditional measures of predictive accuracy have been criticized as suffering from "the base rate problem." The base rate refers to the relative frequency of occurrence of the event being studied in the population of interest, and the problem…

  13. A Rasch Model for Continuous Ratings.

    ERIC Educational Resources Information Center

    Muller, Hans

    1987-01-01

    A unidimensional latent trait model for continuous ratings extends Andrich's rating formulation which assumes the response process at latent thresholds is based on the dichotomous Rasch model. The separability of the structural and incidental parameters is demonstrated and a procedure for estimating the parameters is outlined. (Author/GDC)

  14. Beyond The Blueprint: Development Of Genome-Informed Trait-Based Models For Prediction Of Microbial Dynamics And Biogeochemical Rates

    NASA Astrophysics Data System (ADS)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Johnson, J. N.; Bouskill, N.; Hug, L. A.; Thomas, B. C.; Castelle, C. J.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.

    2014-12-01

    In soils and sediments microorganisms perform essential ecosystem services through their roles in regulating the stability of carbon and the flux of nutrients, and the purification of water. But these are complex systems with the physical, chemical and biological components all intimately connected. Components of this complexity are gradually being uncovered and our understanding of the extent of microbial functional diversity in particular has been enhanced greatly with the development of cultivation independent approaches. However we have not moved far beyond a descriptive and correlative use of this powerful resource. As the ability to reconstruct thousands of genomes from microbial populations using metagenomic techniques gains momentum, the challenge will be to develop an understanding of how these metabolic blueprints serve to influence the fitness of organisms within these complex systems and how populations emerge and impact the physical and chemical properties of their environment. In the presentation we will discuss the development of a trait-based model of microbial activity that simulates coupled guilds of microorganisms that are parameterized including traits extracted from large-scale metagenomic data. Using a reactive transport framework we simulate the thermodynamics of coupled electron donor and acceptor reactions to predict the energy available for respiration, biomass development and exo-enzyme production. Each group within a functional guild is parameterized with a unique combination of traits governing organism fitness under dynamic environmental conditions. This presentation will address our latest developments in the estimation of trait values related to growth rate and the identification and linkage of key fitness traits associated with respiratory and fermentative pathways, macromolecule depolymerization enzymes and nitrogen fixation from metagenomic data. We are testing model sensitivity to initial microbial composition and intra

  18. A physico-chemical properties based model for estimating evaporation and absorption rates of perfumes from skin.

    PubMed

    Kasting, G B; Saiyasombati, P

    2001-02-01

    Because of their potential for inducing allergic contact dermatitis (ACD) if used improperly, perfumes are carefully assessed for dermal safety prior to incorporation into cosmetic products. Exposure assessment for these materials often involves the conservative assumption of 100% absorption of each component. This report describes an improved method to estimate the absorption and evaporation of perfume ingredients from skin, based on their physico-chemical properties. The effect of environmental variables such as temperature and wind velocity can be accounted for in a logical way. This was accomplished using a first-order kinetic approach expected to be applicable for small doses applied to skin. Skin penetration rate was calculated as a fraction of the maximum flux estimated from the compound's lipid solubility, S(lip) (represented by the product of octanol/water partition coefficient, K(oct), and water solubility, S(w)), and molecular weight, MW. Evaporation rates were estimated from a modified Henry's Law approach with a stagnant boundary layer whose thickness is a function of surface airflow, v. At a given value of v, evaporation rate was assumed proportional to the ratio P(vp)/S(lip), where P(vp) is the vapour pressure of the ingredient at skin temperature, T. The model predicts a relationship for total evaporation from skin of the form %evap = 100x/(k+x) where x = P(vp)MW(2.7)/(K(oct)S(w)) and k is a parameter which depends only on v and T. Comparison with published data on perfume evaporation from human skin in vivo showed good agreement between theory and experiment for two closely related perfume mixtures (r(2) = 0.52-0.74, s = 12-14%, n = 10). Thus, the method would seem to have a good prospect of providing skin absorption estimates suitable for use in exposure assessment and improved understanding of dose-related contact allergy.
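
    A worked numerical sketch of the evaporated-fraction relationship above; the ingredient property values and the airflow/temperature parameter k are illustrative stand-ins, not values from the paper (Python):

      def percent_evaporated(p_vp, mw, k_oct, s_w, k):
          """%evap = 100*x/(k + x), with x = P_vp * MW**2.7 / (K_oct * S_w)."""
          x = p_vp * mw ** 2.7 / (k_oct * s_w)
          return 100.0 * x / (k + x)

      # Hypothetical ingredient: vapour pressure, molecular weight,
      # octanol/water partition coefficient, water solubility; k set arbitrarily.
      print(percent_evaporated(p_vp=1.3, mw=154.0, k_oct=10 ** 3.3, s_w=150.0, k=2.0))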

  19. Event-based minimum-time control of oscillatory neuron models: phase randomization, maximal spike rate increase, and desynchronization.

    PubMed

    Danzl, Per; Hespanha, João; Moehlis, Jeff

    2009-12-01

    We present an event-based feedback control method for randomizing the asymptotic phase of oscillatory neurons. Phase randomization is achieved by driving the neuron's state to its phaseless set, a point at which its phase is undefined and is extremely sensitive to background noise. We consider the biologically relevant case of a fixed magnitude constraint on the stimulus signal, and show how the control objective can be accomplished in minimum time. The control synthesis problem is addressed using the minimum-time-optimal Hamilton-Jacobi-Bellman framework, which is quite general and can be applied to any spiking neuron model in the conductance-based Hodgkin-Huxley formalism. We also use this methodology to compute a feedback control protocol for optimal spike rate increase. This framework provides a straightforward means of visualizing isochrons, without actually calculating them in the traditional way. Finally, we present an extension of the phase randomizing control scheme that is applied at the population level, to a network of globally coupled neurons that are firing in synchrony. The applied control signal desynchronizes the population in a demand-controlled way. PMID:19911192

  1. Investigation of the mechanical behavior of kangaroo humeral head cartilage tissue by a porohyperelastic model based on the strain-rate-dependent permeability.

    PubMed

    Thibbotuwawa, Namal; Oloyede, Adekunle; Senadeera, Wijitha; Li, Tong; Gu, YuanTong

    2015-11-01

    Solid-interstitial fluid interaction, which depends on tissue permeability, is significant to the strain-rate-dependent mechanical behavior of humeral head (shoulder) cartilage. Due to anatomical and biomechanical similarities to that of the human shoulder, kangaroos present a suitable animal model. Therefore, indentation experiments were conducted on kangaroo shoulder cartilage tissues from low (10(-4)/s) to moderately high (10(-2)/s) strain-rates. A porohyperelastic model was developed based on the experimental characterization; and a permeability function that takes into account the effect of strain-rate on permeability (strain-rate-dependent permeability) was introduced into the model to investigate the effect of rate-dependent fluid flow on tissue response. The prediction of the model with the strain-rate-dependent permeability was compared with those of the models using constant permeability and strain-dependent permeability. Compared to the model with constant permeability, the models with strain-dependent and strain-rate-dependent permeability were able to better capture the experimental variation at all strain-rates (p < 0.05). Significant differences were not identified between models with strain-dependent and strain-rate-dependent permeability at strain-rate of 5 × 10(-3)/s (p = 0.179). However, at strain-rate of 10(-2)/s, the model with strain-rate-dependent permeability was significantly better at capturing the experimental results (p < 0.005). The findings thus revealed the significance of rate-dependent fluid flow on tissue behavior at large strain-rates, which provides insights into the mechanical deformation mechanisms of cartilage tissues. PMID:26275487

  2. Fault slip rates and initiation age based on diffusion equation modeling: Wasatch Fault Zone and eastern Great Basin

    NASA Astrophysics Data System (ADS)

    Mattson, Ann; Bruhn, Ronald L.

    2001-01-01

    Models of the evolution of fault scarp morphology provide the time elapsed since slip initiated on a faulted surface and may therefore provide more accurate estimates of slip rate than the rate calculated by dividing scarp offset by the age of the ruptured surface. To accomplish this task, linear and nonlinear models of sediment transport are calibrated from the morphology of Lake Bonneville shoreline scarps and fault scarps formed by multiple surface-rupturing earthquakes along the Wasatch Fault Zone (WFZ). Profile modeling of scarps formed by several events distributed through time is done using a constant slip rate (CSR) solution and yields a value of A/κ (half the slip rate divided by diffusivity). The time elapsed since slip initiated on a fault is determined by establishing a value for κ and measuring total scarp offset. CSR nonlinear modeling (κ₀ = 2.8 ± 1.1 m²/kyr, WFZ) of faults along the west slope of the Oquirrh Mountains indicates a slip rate of ~0.1 mm/yr since 50 to 65 ka, which is corroborated by cosmogenic dating (¹⁰Be/²⁶Al age = 75 ka). The slip rate along the west flank of the Stansbury Mountains varies from 0.04 to 0.2 mm/yr for time frames of 10 to >100 ka, with the most recent rupture on the northern portion of the fault zone at ~10 ka. Scarp analysis of the southern end of the Nephi segment, WFZ, suggests either temporal clustering or a variable slip rate, as indicated by differences in the short-term (1.3 mm/yr for 4.3 ka) versus long-term (0.4 mm/yr for 70 ka) slip rates.
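
    A minimal sketch of the linear-diffusion scarp-degradation idea (not the authors' nonlinear CSR code): a scarp profile is relaxed by an explicit finite-difference solution of dz/dt = κ d²z/dx², using the calibrated κ quoted above; the scarp height and initial face angle are hypothetical (Python):

      import numpy as np

      kappa = 2.8            # diffusivity (m^2/kyr), calibrated value from the abstract
      dx, dt = 0.5, 0.02     # grid spacing (m), time step (kyr); dt < dx^2/(2*kappa)
      x = np.arange(-50.0, 50.0 + dx, dx)
      z = np.clip(np.tan(np.radians(30.0)) * x, -5.0, 5.0)  # 10 m scarp, 30 deg face

      for _ in range(int(50.0 / dt)):                        # evolve for 50 kyr
          z[1:-1] += kappa * dt / dx ** 2 * (z[2:] - 2.0 * z[1:-1] + z[:-2])

      slope = np.degrees(np.arctan(np.max(np.diff(z) / dx)))
      print(f"maximum scarp slope after 50 kyr: {slope:.1f} degrees")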

  3. Gas ultrasonic flow rate measurement through genetic-ant colony optimization based on the ultrasonic pulse received signal model

    NASA Astrophysics Data System (ADS)

    Hou, Huirang; Zheng, Dandan; Nie, Laixiao

    2015-04-01

    For gas ultrasonic flowmeters, the signals received by ultrasonic sensors are susceptible to noise interference. If the signals are mingled with noise, a large flow measurement error can be caused by mistaken triggering under the traditional double-threshold method. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, processing only the first three cycles of the received signals rather than the whole signal is proposed. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, the double-threshold method and enveloped zero-crossing. Local convergence does not appear with the GACO algorithm until -10 dB. For the GACO algorithm, the convergence accuracy, convergence speed and amount of computation are further improved when using only the first three cycles (called GACO-3cycles). Experimental results involving actual received signals show that the accuracy of single-gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.

  4. The degree of heart rate asymmetry is crucial for the validity of the deceleration and acceleration capacity indices of heart rate: A model-based study.

    PubMed

    Pan, Qing; Zhou, Gongzhan; Wang, Ruofan; Yu, Yihua; Li, Feng; Fang, Luping; Yan, Jing; Ning, Gangmin

    2016-09-01

    The deceleration capacity (DC) and acceleration capacity (AC) of heart rate are a pair of indices used for evaluating the autonomic nervous system (ANS). We assessed the role of heart rate asymmetry (HRA) in defining the relative performance of DC and AC using a mathematical model, which is able to generate a realistic RR interval (RRI) time series with controlled ANS states. The simulation produced a set of RRI series with random sympathetic and vagal activities. The multi-scale DCs and ACs were computed from the RRI series, and the correlation of DC and AC with the ANS functions was analyzed to evaluate the performance of the indices. In the model, the HRA level was modified by changing the inspiration/expiration (I/E) ratio to examine the influence of HRA on the performances of DC and AC. The results show that on the conventional scales (T=1, s=2), an HRA level above 50% results in a stronger association of DC with the ANS, compared with AC. On higher scales (T=4, s=6), there was no HRA and DC showed a similar performance to AC for all I/E ratios. The data suggest that the HRA level determines which of DC or AC is the optimal index for expressing ANS functions. Future clinical applications of DC and AC should be accompanied by an HRA analysis to provide a better index for assessing ANS. PMID:27392228
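
    DC and AC are conventionally computed by phase-rectified signal averaging (PRSA, the standard Bauer et al. formulation); a minimal sketch of that standard computation on an RR-interval series, not code from this paper, is:

      import numpy as np

      def prsa_capacity(rri, decelerations=True, L=2):
          """Deceleration (or acceleration) capacity of an RR-interval series (ms)."""
          rri = np.asarray(rri, dtype=float)
          diff = np.diff(rri)
          # Anchors: beats where RRI lengthens (deceleration) or shortens (acceleration).
          comparison = diff > 0 if decelerations else diff < 0
          anchors = np.where(comparison)[0] + 1
          anchors = anchors[(anchors >= L) & (anchors <= len(rri) - L)]
          segments = np.array([rri[i - L:i + L] for i in anchors])   # X(-L)..X(L-1)
          X = segments.mean(axis=0)
          return (X[L] + X[L + 1] - X[L - 1] - X[L - 2]) / 4.0       # PRSA formula

      rng = np.random.default_rng(0)
      rri = 800.0 + 40.0 * np.sin(np.linspace(0.0, 60.0, 600)) \
            + rng.normal(0.0, 15.0, 600)                             # toy RRI series
      print("DC:", prsa_capacity(rri), "AC:", prsa_capacity(rri, decelerations=False))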

  5. An Extension to the Constructivist Coding Hypothesis as a Learning Model for Selective Feedback when the Base Rate Is High

    ERIC Educational Resources Information Center

    Ghaffarzadegan, Navid; Stewart, Thomas R.

    2011-01-01

    Elwin, Juslin, Olsson, and Enkvist (2007) and Henriksson, Elwin, and Juslin (2010) offered the constructivist coding hypothesis to describe how people code the outcomes of their decisions when availability of feedback is conditional on the decision. They provided empirical evidence only for the 0.5 base rate condition. This commentary argues that…

  6. [Prediction model of net photosynthetic rate of ginseng under forest based on optimized parameters support vector machine].

    PubMed

    Wu, Hai-wei; Yu, Hai-ye; Zhang, Lei

    2011-05-01

    Using the K-fold cross validation method, two support vector machine functions, four kernel functions, grid search, a genetic algorithm and particle swarm optimization, the authors constructed support vector machine models and selected the best penalty parameter c and the best correlation coefficient. Using information granulation technology, the authors constructed P particles and epsilon particles from the factors affecting net photosynthetic rate, and reduced the dimensions of the determinant. The P particle includes the percentages of visible spectrum ingredients. The epsilon particle includes leaf temperature, scattering radiation, air temperature, and so on. With this technology it is possible to obtain the best correlation coefficient among photosynthetically active radiation, the visible spectrum and individual net photosynthetic rate. The authors constructed a training set and a forecasting set including photosynthetically active radiation, the P particle and the epsilon particle. The results show that the epsilon-SVR-RBF-genetic algorithm model, the nu-SVR-linear-grid-search model and the nu-SVR-RBF-genetic algorithm model achieve a correlation coefficient of up to 97% on the forecasting set including photosynthetically active radiation and the P particle. The penalty parameter c of the nu-SVR-linear-grid-search model is the smallest, so that model's generalization ability is the best. The authors forecasted the forecasting set including photosynthetically active radiation, the P particle and the epsilon particle with the model, and the correlation coefficient reached 96%.
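
    A minimal sketch of the epsilon-SVR-RBF-with-grid-search ingredient of such a model, using scikit-learn on synthetic stand-ins for the radiation and microclimate inputs (the real study granulates the inputs first, which is omitted here):

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import GridSearchCV

      rng = np.random.default_rng(0)
      # Synthetic predictors: photosynthetically active radiation, leaf and air temperature.
      X = rng.uniform([0.0, 10.0, 10.0], [2000.0, 35.0, 35.0], size=(200, 3))
      y = 20.0 * X[:, 0] / (X[:, 0] + 500.0) - 0.02 * (X[:, 1] - 25.0) ** 2 \
          + rng.normal(0.0, 0.5, 200)               # toy net photosynthetic rate

      search = GridSearchCV(
          SVR(kernel="rbf"),
          param_grid={"C": [1.0, 10.0, 100.0], "gamma": ["scale", 0.01, 0.001],
                      "epsilon": [0.05, 0.1, 0.5]},
          cv=5,
      )
      search.fit(X, y)
      print("best parameters:", search.best_params_, "R^2:", search.best_score_)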

  7. Biomineralization-inspired synthesis of chitosan/hydroxyapatite biocomposites based on a novel bilayer rate-controlling model.

    PubMed

    Hu, Jing-Xiao; Ran, Jia-Bing; Chen, Si; Shen, Xin-Yu; Tong, Hua

    2015-12-01

    In order to prepare sophisticated biomaterials using a biomimetic approach, a deeper understanding of biomineralization is needed. Of particular importance is the control and regulation of the mineralization process. In this study, a novel bilayer rate-controlling model was designed to investigate the factors potentially influencing mineralization. In the absence of a rate-controlling layer, nano-scale hydroxyapatite (HA) crystallites exhibited a spherical morphology, whereas, in the presence of a rate-controlling layer, HA crystallites were homogeneously dispersed and spindle-like in structure. The mineralization rate had a significant effect on controlling the morphology of crystals. Furthermore, in vitro tests demonstrated that the reaction layer containing spindle-like HA crystallites possessed superior biological properties. These results suggest that a slow mineralization rate is required for controlling the morphology of inorganic crystallites, and consumption by the rate-controlling layer ensured that the ammonia concentration remained low. This study demonstrates that a biomimetic approach can be used to prepare novel biomaterials containing HA crystallites that have different morphologies and biological properties.

  8. [NDVI difference rate recognition model of deciduous broad-leaved forest based on HJ-CCD remote sensing data].

    PubMed

    Wang, Yan; Tian, Qing-Jiu; Huang, Yan; Wei, Hong-Wei

    2013-04-01

    The present paper takes Chuzhou in Anhui Province as the research area and deciduous broad-leaved forest as the research object. A recognition model for deciduous broad-leaved forest was constructed using the NDVI difference rate between the leaf-expansion stage and the flowering and fruit-bearing stage, and the model was applied to HJ-CCD remote sensing images acquired on April 1, 2012 and May 4, 2012. The spatial distribution map of deciduous broad-leaved forest was then extracted effectively, and the extraction results were verified and evaluated. The results show the validity of the NDVI difference rate extraction method proposed in this paper and also verify the applicability of HJ-CCD data for vegetation classification and recognition.
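
    A minimal sketch of the NDVI-difference-rate classification step on two co-registered acquisition dates; the toy arrays and the threshold window are hypothetical, and real work would read the HJ-CCD bands with a raster library (Python/NumPy):

      import numpy as np

      def ndvi(nir, red):
          return (nir - red) / (nir + red + 1e-9)

      def ndvi_difference_rate(ndvi_t1, ndvi_t2):
          """Relative NDVI change between leaf expansion (t1) and flowering (t2)."""
          return (ndvi_t2 - ndvi_t1) / (np.abs(ndvi_t1) + 1e-9)

      # Toy 2x2 "images" for the two dates.
      nir1, red1 = np.array([[0.4, 0.5], [0.3, 0.6]]), np.array([[0.1, 0.2], [0.2, 0.1]])
      nir2, red2 = np.array([[0.6, 0.5], [0.3, 0.7]]), np.array([[0.1, 0.3], [0.2, 0.1]])
      rate = ndvi_difference_rate(ndvi(nir1, red1), ndvi(nir2, red2))
      # Hypothetical class window: pixels whose NDVI rises by 10-60% are flagged.
      deciduous_mask = (rate > 0.10) & (rate < 0.60)
      print(rate.round(2), deciduous_mask, sep="\n")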

  9. Prediction of pure water stress corrosion cracking (PWSCC) in nickel base alloys using crack growth rate models

    SciTech Connect

    Thompson, C.D.; Krasodomski, H.T.; Lewis, N.; Makar, G.L.

    1995-02-22

    The Ford/Andresen slip dissolution SCC model, originally developed for stainless steel components in BWR environments, has been applied to Alloy 600 and Alloy X-750 tested in deaerated pure water chemistry. A method is described whereby the crack growth rates measured in compact tension specimens can be used to estimate crack growth in a component. Good agreement was found between model prediction and measured SCC in X-750 threaded fasteners over a wide range of temperatures, stresses, and material condition. Most data support the basic assumption of this model that cracks initiate early in life. The evidence supporting a particular SCC mechanism is mixed. Electrochemical repassivation data and estimates of oxide fracture strain indicate that the slip dissolution model can account for the observed crack growth rates, provided primary rather than secondary creep rates are used. However, approximately 100 cross-sectional TEM foils of SCC cracks including crack tips reveal no evidence of enhanced plasticity or unique dislocation patterns at the crack tip or along the crack to support a classic slip dissolution mechanism. No voids, hydrides, or microcracks are found in the vicinity of the crack tips creating doubt about classic hydrogen related mechanisms. The bulk oxide films exhibit a surface oxide which is often different than the oxides found within a crack. Although bulk chromium concentration affects the rate of SCC, analytical data indicates the mechanism does not result from chromium depletion at the grain boundaries. The overall findings support a corrosion/dissolution mechanism but not one necessarily related to slip at the crack tip.

  10. Reliable likelihood ratios for statistical model-based voice activity detector with low false-alarm rate

    NASA Astrophysics Data System (ADS)

    Kim, Younggwan; Suh, Youngjoo; Kim, Hoirin

    2011-12-01

    The role of the statistical model-based voice activity detector (SMVAD) is to detect speech regions from input signals using statistical models of noise and noisy speech. The decision rule of the SMVAD is based on the likelihood ratio test (LRT). The LRT-based decision rule may cause detection errors because of the statistical properties of noise and speech signals. In this article, we first analyze the reasons why these detection errors occur and then propose two modified decision rules using reliable likelihood ratios (LRs). We also propose an effective weighting scheme considering the spectral characteristics of noise and speech signals. In the experiments of this study, the proposed methods show significant performance improvement in various noise conditions with almost no additional computation. Experimental results also show that the proposed weighting scheme provides additional performance improvement over the two proposed SMVADs.
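
    The baseline decision statistic in this family of detectors is usually the Sohn-style per-bin log-likelihood ratio averaged over frequency; a minimal sketch of that baseline rule (not the authors' modified LRs), assuming Gaussian spectral coefficients, a known noise variance and an arbitrary threshold:

      import numpy as np

      def log_likelihood_ratio(coeffs, noise_var, xi_min=1e-3):
          """Per-bin log LR for H1 (speech present) vs H0 under a Gaussian model."""
          gamma = np.abs(coeffs) ** 2 / noise_var            # a posteriori SNR
          xi = np.maximum(gamma - 1.0, xi_min)               # crude a priori SNR estimate
          return gamma * xi / (1.0 + xi) - np.log1p(xi)

      def is_speech_frame(coeffs, noise_var, threshold=0.2):
          # Geometric-mean (average-log) decision over frequency bins.
          return log_likelihood_ratio(coeffs, noise_var).mean() > threshold

      rng = np.random.default_rng(1)
      frame = rng.normal(0.0, np.sqrt(5.0), 256)   # toy spectral coefficients, noise var 1
      print(is_speech_frame(frame, noise_var=1.0))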

  11. Further tests of a model-based scheme for predicting pilot opinion ratings for large commercial transports

    NASA Technical Reports Server (NTRS)

    Rickard, W. W.; Levison, W. H.

    1981-01-01

    A methodology was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft on the basis of closed-loop criteria. Six longitudinal-axis approach configurations were studied covering a range of handling quality problems that included the presence of flexible aircraft modes. Using closed-loop performance requirements derived from task analyses and pilot interviews, predictions of performance/workload tradeoffs were obtained using an analytical pilot/vehicle model. A subsequent manned simulation study yielded objective performance measures and Cooper-Harper pilot ratings that were largely consistent with each other and with analytic predictions.

  12. A fuzzy-logic-based model to predict biogas and methane production rates in a pilot-scale mesophilic UASB reactor treating molasses wastewater.

    PubMed

    Turkdogan-Aydinol, F Ilter; Yetilmezsoy, Kaan

    2010-10-15

    A MIMO (multiple-input, multiple-output) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables, namely volumetric organic loading rate (OLR), volumetric total chemical oxygen demand (TCOD) removal rate (R(V)), influent alkalinity, influent pH and effluent pH, were fuzzified using an artificial-intelligence-based approach. Trapezoidal membership functions with eight levels were constructed for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in the IF-THEN format. The product (prod) and the centre of gravity (COG, centroid) methods were employed as the inference operator and defuzzification method, respectively. Fuzzy-logic predicted results were compared with the outputs of two exponential non-linear regression models derived in this study. The UASB reactor showed a remarkable performance in the treatment of molasses wastewater, with an average TCOD removal efficiency of 93 (+/-3)% and an average volumetric TCOD removal rate of 6.87 (+/-3.93) kg TCOD(removed)/m(3)-day. Findings of this study clearly indicated that, compared to the non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited a superior predictive performance in forecasting both biogas and methane production rates, with satisfactory determination coefficients over 0.98.
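
    A single-input, two-rule toy version of the Mamdani machinery described above (trapezoidal memberships, product implication, centroid defuzzification); the real model has five inputs, eight-level subsets and 134 rules, and all membership points and units here are hypothetical:

      import numpy as np

      def trapmf(x, a, b, c, d):
          """Trapezoidal membership function on breakpoints a <= b <= c <= d."""
          x = np.asarray(x, dtype=float)
          rising = (x - a) / (b - a) if b > a else (x >= a).astype(float)
          falling = (d - x) / (d - c) if d > c else (x <= d).astype(float)
          return np.clip(np.minimum(rising, falling), 0.0, 1.0)

      y = np.linspace(0.0, 60.0, 601)            # output universe: biogas rate (L/h)
      # IF OLR is low THEN biogas is low; IF OLR is high THEN biogas is high.
      rules = [
          (lambda olr: trapmf(olr, 0.0, 0.0, 2.0, 4.0), trapmf(y, 0.0, 0.0, 15.0, 30.0)),
          (lambda olr: trapmf(olr, 2.0, 4.0, 8.0, 8.0), trapmf(y, 25.0, 40.0, 60.0, 60.0)),
      ]

      def infer(olr):
          agg = np.zeros_like(y)
          for antecedent, consequent in rules:
              agg = np.maximum(agg, antecedent(olr) * consequent)  # prod implication,
          return np.sum(y * agg) / np.sum(agg)                     # max aggregation, COG

      print(f"predicted biogas rate at OLR = 5 kg COD/m3-day: {infer(5.0):.1f} L/h")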

  13. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base...

  14. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base...

  15. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base...

  16. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base...

  17. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base...

  18. Machine learning and linear regression models to predict catchment-level base cation weathering rates across the southern Appalachian Mountain region, USA

    NASA Astrophysics Data System (ADS)

    Povak, Nicholas A.; Hessburg, Paul F.; McDonnell, Todd C.; Reynolds, Keith M.; Sullivan, Timothy J.; Salter, R. Brion; Cosby, Bernard J.

    2014-04-01

    Accurate estimates of soil mineral weathering are required for regional critical load (CL) modeling to identify ecosystems at risk of the deleterious effects from acidification. Within a correlative modeling framework, we used modeled catchment-level base cation weathering (BCw) as the response variable to identify key environmental correlates and predict a continuous map of BCw within the southern Appalachian Mountain region. More than 50 initial candidate predictor variables were submitted to a variety of conventional and machine learning regression models. Predictors included aspects of the underlying geology, soils, geomorphology, climate, topographic context, and acidic deposition rates. Low BCw rates were predicted in catchments with low precipitation, siliceous lithology, low soil clay, nitrogen and organic matter contents, and relatively high levels of canopy cover in mixed deciduous and coniferous forest types. Machine learning approaches, particularly random forest modeling, significantly improved model prediction of catchment-level BCw rates over traditional linear regression, with higher model accuracy and lower error rates. Our results confirmed findings from other studies, but also identified several influential climatic predictor variables, interactions, and nonlinearities among the predictors. Results reported here will be used to support regional sulfur critical loads modeling to identify areas impacted by industrially derived atmospheric S inputs. These methods are readily adapted to other regions where accurate CL estimates are required over broad spatial extents to inform policy and management decisions.
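
    A minimal sketch of the random-forest-versus-linear-regression comparison reported above, using scikit-learn on a synthetic nonlinear stand-in for the catchment predictors (precipitation, soil clay content, a siliceous-lithology flag) rather than the actual BCw data:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(42)
      n = 500
      precip = rng.uniform(800.0, 2200.0, n)       # mm/yr
      clay = rng.uniform(5.0, 40.0, n)             # percent
      siliceous = rng.integers(0, 2, n)            # lithology indicator
      # Toy BCw with a nonlinearity and an interaction, as the study reports.
      bcw = (0.002 * precip * (1 - 0.5 * siliceous) + 0.3 * np.sqrt(clay)
             + rng.normal(0.0, 0.3, n))
      X = np.column_stack([precip, clay, siliceous])

      for name, model in [("linear", LinearRegression()),
                          ("random forest", RandomForestRegressor(n_estimators=300,
                                                                  random_state=0))]:
          r2 = cross_val_score(model, X, bcw, cv=5, scoring="r2").mean()
          print(f"{name:>13s}: cross-validated R^2 = {r2:.3f}")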

  19. A process-based model to estimate gas exchange and monoterpene emission rates in the mediterranean maquis - comparisons between modelled and measured fluxes at different scales

    NASA Astrophysics Data System (ADS)

    Vitale, M.; Matteucci, G.; Fares, S.; Davison, B.

    2009-02-01

    This paper concerns the application of a process-based model (MOCA, Modelling of Carbon Assessment) as a useful tool for estimating gas exchange, integrating empirical algorithms for the calculation of monoterpene fluxes, in a Mediterranean maquis of central Italy (Castelporziano, Rome). Simulations were carried out for a range of hypothetical but realistic canopies of the evergreen Quercus ilex (holm oak), Arbutus unedo (strawberry tree) and Phillyrea latifolia. Moreover, the dependence of canopy-scale monoterpene fluxes on total leaf area and leaf distribution was considered in the algorithms. Simulation of the gas exchange rates showed higher values for P. latifolia and A. unedo (2.39±0.30 and 3.12±0.27 gC m⁻² d⁻¹, respectively) than for Q. ilex (1.67±0.08 gC m⁻² d⁻¹) during the measuring campaign (May-June). The average simulated Gross Primary Production (GPP) values agreed well with those measured by eddy covariance (7.98±0.20 and 6.00±1.46 gC m⁻² d⁻¹, respectively, in May-June), although some differences (of about 30%) were evident in a point-to-point comparison. These differences can be explained by the non-uniformity of the measuring site, where diurnal winds blowing from the S-SW direction affect calculations of CO2 and water fluxes. The introduction of some structural parameters into the algorithms for monoterpene calculation made it possible to simulate monoterpene emission rates and fluxes in accord with those measured (6.50±2.25 vs. 9.39±4.5 μg g⁻¹ DW h⁻¹ for Q. ilex, and 0.63±0.207 vs. 0.98±0.30 μg g⁻¹ DW h⁻¹ for P. latifolia). Some constraints of the MOCA model are discussed, but it is demonstrated to be a useful tool for simulating physiological processes and BVOC fluxes across very complicated plant distributions and environmental conditions, while requiring only a small number of input data.

  20. Base Rates: Both Neglected and Intuitive

    ERIC Educational Resources Information Center

    Pennycook, Gordon; Trippas, Dries; Handley, Simon J.; Thompson, Valerie A.

    2014-01-01

    Base-rate neglect refers to the tendency for people to underweight base-rate probabilities in favor of diagnostic information. It is commonly held that base-rate neglect occurs because effortful (Type 2) reasoning is required to process base-rate information, whereas diagnostic information is accessible to fast, intuitive (Type 1) processing…

  1. Disturbance Distance: Combining a process based ecosystem model and remote sensing data to map the vulnerability of U.S. forested ecosystems to potentially altered disturbance rates

    NASA Astrophysics Data System (ADS)

    Dolan, K. A.

    2015-12-01

    Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to: carbon storage, biodiversity habitat, water quality and flow, and land atmosphere exchanges of energy and water. In addition, recent studies suggest that disturbance rates may increase in the future under altered climate and land use scenarios. Thus understanding how vulnerable forested ecosystems are to potential changes in disturbance rates is of high importance. This study calculated the theoretical threshold rate of disturbance for which forest ecosystems could no longer be sustained (λ*) across the Coterminous U.S. using an advanced process based ecosystem model (ED). Published rates of disturbance (λ) in 50 study sites were obtained from the North American Forest Disturbance (NAFD) program. Disturbance distance (λ* - λ) was calculated for each site by differencing the model based threshold under current climate conditions and average observed rates of disturbance over the last quarter century. Preliminary results confirm all sample forest sites have current average rates of disturbance below λ*, but there were interesting patterns in the recorded disturbance distances. In general western sites had much smaller disturbance distances, suggesting higher vulnerability to change, while eastern sites showed larger buffers. Ongoing work is being conducted to assess the vulnerability of these sites in the context of potential future changes by propagating scenarios of future climate and land-use change through the analysis.

  2. [A model-based meta-analysis to compare urate-lowering response rate of febuxostat and allopurinol in gout patient].

    PubMed

    Sun, Yi; Li, Liang; Zhou, Tian-Yan; Lu, Wei

    2014-12-01

    This study aims to compare the urate-lowering response rates of febuxostat and allopurinol in gout patients using a model-based meta-analysis. The literature search identified 22 clinical trials of gout with a total of 43 unique treatment arms that met our inclusion criteria, and a total of 6,365 gout patients were included in the study. The response rates of allopurinol and febuxostat were characterized by a Tmax model and an Emax model, respectively, and the effects of baseline serum uric acid (sUA) and patient type on the drug effect were tested. The results showed that allopurinol can reach an average maximum response rate of 50.8%, while febuxostat can reach a 100% response rate within a very short time, with an ED50 of 34.3 mg. Covariate analysis revealed that baseline sUA has a negative effect on the response rate of allopurinol and a positive effect on the predicted ED50 of febuxostat. For patients who had shown inadequate response to prior allopurinol treatment, the average response rate was about half that of the allopurinol-responder patients.
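
    A minimal curve-fitting sketch of the Emax dose-response component (response = Emax*dose/(ED50 + dose)) on synthetic febuxostat-like data; the dose-response pairs below are illustrative, not the study data:

      import numpy as np
      from scipy.optimize import curve_fit

      def emax_model(dose, e_max, ed50):
          return e_max * dose / (ed50 + dose)

      dose = np.array([10.0, 20.0, 40.0, 80.0, 120.0, 240.0])      # mg/day
      response = np.array([22.0, 37.0, 54.0, 70.0, 78.0, 87.0])    # % reaching sUA target
      (e_max, ed50), _ = curve_fit(emax_model, dose, response, p0=[100.0, 30.0])
      print(f"Emax = {e_max:.1f}%, ED50 = {ed50:.1f} mg")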

  3. Isolating global and specific factors in developmental dyslexia: a study based on the rate and amount model (RAM).

    PubMed

    Zoccolotti, Pierluigi; De Luca, Maria; Judica, Anna; Spinelli, Donatella

    2008-04-01

    Using the reading-age match approach, research on developmental dyslexia focuses on specific (e.g., phonological) deficits and disregards the possible role of global influences in determining the disturbance. In the present study, we set out to investigate the role of both global and specific factors in Italian developmental dyslexics using the rate-amount model (RAM; Faust et al. in Psychol Bull 125:777-799, 1999). Vocal reaction times (RT) in naming pictures, words and non-words of varying length were measured in a group of 26 sixth- to eighth-grade dyslexics and 81 age-matched control readers. Dyslexics' raw RTs showed greater lexicality (longer RTs to non-words than words) and length (longer RTs to long stimuli than short ones) effects than controls'. We found that one global factor predicted most individual variation in naming words and non-words, but not pictures. When data transformations, effective in controlling for the global factor, were applied to the data, the greater lexicality effect in dyslexics vanished, due to the influence of the global factor and not a specific failure in the non-lexical reading procedure. Conversely, the greater length effect in dyslexics persisted. Overall, dyslexics' reading performance was best explained as due to the influence of both a global factor for processing orthographic material prelexically and to the specific influence of stimulus length. This conceptualisation appears more promising for bridging the gap between behavioural and functional imaging studies than traditional approaches, which focus on the detection of specific reading deficits. It is concluded that RAM is a useful tool for disentangling the components that are impaired in reading and for defining the characteristics of the global factor, because the paradigm is more powerful for studying developmental dyslexia than the reading-age match method.

  4. Simulating Brain Tumor Heterogeneity with a Multiscale Agent-Based Model: Linking Molecular Signatures, Phenotypes and Expansion Rate

    PubMed Central

    Zhang, Le; Strouthos, Costas G.; Wang, Zhihui; Deisboeck, Thomas S.

    2008-01-01

    We have extended our previously developed 3D multi-scale agent-based brain tumor model to simulate cancer heterogeneity and to analyze its impact across the scales of interest. While our algorithm continues to employ an epidermal growth factor receptor (EGFR) gene-protein interaction network to determine the cells’ phenotype, it now adds an implicit treatment of tumor cell adhesion related to the model’s biochemical microenvironment. We simulate a simplified tumor progression pathway that leads to the emergence of five distinct glioma cell clones with different EGFR density and cell ‘search precisions’. The in silico results show that microscopic tumor heterogeneity can impact the tumor system’s multicellular growth patterns. Our findings further confirm that EGFR density results in the more aggressive clonal populations switching earlier from proliferation-dominated to a more migratory phenotype. Moreover, analyzing the dynamic molecular profile that triggers the phenotypic switch between proliferation and migration, our in silico oncogenomics data display spatial and temporal diversity in documenting the regional impact of tumorigenesis, and thus support the added value of multi-site and repeated assessments in vitro and in vivo. Potential implications from this in silico work for experimental and computational studies are discussed. PMID:20047002

  5. Disturbance Distance: Using a process based ecosystem model to estimate and map potential thresholds in disturbance rates that would give rise to fundamentally altered ecosystems

    NASA Astrophysics Data System (ADS)

    Dolan, K. A.; Hurtt, G. C.; Fisk, J.; Flanagan, S.; LePage, Y.; Sahajpal, R.

    2014-12-01

    Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to: carbon storage, biodiversity habitat, water quality and flow, and land atmosphere exchanges of energy and water. As recent studies highlight novel disturbance regimes resulting from pollution, invasive pests and climate change, there is a need to include these alterations in predictions of future forest function and structure. The Ecosystem Demography (ED) model is a mechanistic model of forest ecosystem dynamics in which individual-based forest dynamics can be efficiently implemented over regional to global scales due to advanced scaling methods. We utilize ED to characterize the sensitivity of potential vegetation structure and function to changes in rates of density-independent mortality. The disturbance rate within ED can either be altered directly or through the development of sub-models. Disturbance sub-models in ED currently include fire, land use and hurricanes. We use a tiered approach to understand the sensitivity of North American ecosystems to changes in background density-independent mortality. Our first analyses were conducted at half-degree spatial resolution with a constant rate of disturbance in space and time, which was altered between runs. Annual climate was held constant at the site level and the land use and fire sub-models were turned off. Results showed an ~30% increase in non-forest area across the US when disturbance rates were changed from 0.6% a year to 1.2% a year, and a more than 3.5-fold increase in non-forest area when disturbance rates doubled again from 1.2% to 2.4%. Continued runs altered natural background disturbance rates with the existing fire and hurricane sub-models turned on, as well as historic and future land use. By quantifying differences between model outputs that characterize ecosystem structure and function related to the carbon cycle across the US, we

  6. A flexible cure rate model for spatially correlated survival data based on generalized extreme value distribution and Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Dey, Dipak K

    2016-09-01

    Our present work proposes a new survival model in a Bayesian context to analyze right-censored survival data for populations with a surviving fraction, assuming that the log failure time follows a generalized extreme value distribution. Many applications require a more flexible modeling of covariate information than a simple linear or parametric form for all covariate effects. It is also necessary to include the spatial variation in the model, since it is sometimes unexplained by the covariates considered in the analysis. Therefore, the nonlinear covariate effects and the spatial effects are incorporated into the systematic component of our model. Gaussian processes (GPs) provide a natural framework for modeling potentially nonlinear relationship and have recently become extremely powerful in nonlinear regression. Our proposed model adopts a semiparametric Bayesian approach by imposing a GP prior on the nonlinear structure of continuous covariate. With the consideration of data availability and computational complexity, the conditionally autoregressive distribution is placed on the region-specific frailties to handle spatial correlation. The flexibility and gains of our proposed model are illustrated through analyses of simulated data examples as well as a dataset involving a colon cancer clinical trial from the state of Iowa. PMID:27225466

  8. QSAR models for oxidation of organic micropollutants in water based on ozone and hydroxyl radical rate constants and their chemical classification.

    PubMed

    Sudhakaran, Sairam; Amy, Gary L

    2013-03-01

    Ozonation is an oxidation process for the removal of organic micropollutants (OMPs) from water, and the chemical reaction is governed by second-order kinetics. An advanced oxidation process (AOP), wherein hydroxyl radicals (OH radicals) are generated, is more effective in removing a wider range of OMPs from water than direct ozonation. Second-order rate constants (k(OH) and k(O3)) are good indices for estimating oxidation efficiency, where higher rate constants indicate more rapid oxidation. In this study, quantitative structure activity relationship (QSAR) models for O(3) and AOP processes were developed, and the rate constants, k(OH) and k(O3), were predicted based on target compound properties. The k(O3) and k(OH) values ranged from 5 × 10(-4) to 10(5) M(-1)s(-1) and 0.04 to 18 × 10(9) M(-1)s(-1), respectively. Several molecular descriptors which potentially influence O(3) and OH radical oxidation were identified and studied. The QSAR-defining descriptors were double bond equivalence (DBE), ionisation potential (IP), electron affinity (EA) and the weakly-polar component of solvent accessible surface area (WPSA), and the chemical and statistical significance of these descriptors was discussed. Multiple linear regression was used to build the QSAR models, resulting in high goodness-of-fit, r(2) (>0.75). The models were validated by internal and external validation along with residual plots. PMID:23260175
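
    A minimal ordinary-least-squares sketch in the spirit of the QSAR above, regressing log k(OH) on the four named descriptors; the descriptor matrix is random stand-in data, not the study's compounds (Python/NumPy):

      import numpy as np

      rng = np.random.default_rng(7)
      n = 60
      # Stand-in descriptors: DBE, ionisation potential, electron affinity, WPSA.
      X = np.column_stack([rng.uniform(0, 10, n), rng.uniform(8, 12, n),
                           rng.uniform(-1, 3, n), rng.uniform(0, 200, n)])
      log_k_oh = (9.5 + 0.05 * X[:, 0] - 0.12 * X[:, 1] - 0.08 * X[:, 2]
                  + 0.001 * X[:, 3] + rng.normal(0.0, 0.05, n))

      A = np.column_stack([np.ones(n), X])              # add intercept column
      beta, *_ = np.linalg.lstsq(A, log_k_oh, rcond=None)
      residuals = log_k_oh - A @ beta
      r2 = 1.0 - residuals.var() / log_k_oh.var()
      print("coefficients:", beta.round(4), "r^2 =", round(r2, 3))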

  9. Medicare and Medicaid Programs; CY 2016 Home Health Prospective Payment System Rate Update; Home Health Value-Based Purchasing Model; and Home Health Quality Reporting Requirements. Final rule.

    PubMed

    2015-11-01

    This final rule will update Home Health Prospective Payment System (HH PPS) rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor under the Medicare prospective payment system for home health agencies (HHAs), effective for episodes ending on or after January 1, 2016. As required by the Affordable Care Act, this rule implements the 3rd year of the 4-year phase-in of the rebasing adjustments to the HH PPS payment rates. This rule updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking and provides a clarification regarding the use of the "initial encounter'' seventh character applicable to certain ICD-10-CM code categories. This final rule will also finalize reductions to the national, standardized 60-day episode payment rate in CY 2016, CY 2017, and CY 2018 of 0.97 percent in each year to account for estimated case-mix growth unrelated to increases in patient acuity (nominal case-mix growth) between CY 2012 and CY 2014. In addition, this rule implements a HH value-based purchasing (HHVBP) model, beginning January 1, 2016, in which all Medicare-certified HHAs in selected states will be required to participate. Finally, this rule finalizes minor changes to the home health quality reporting program and minor technical regulations text changes.

  10. An "Emergent Model" for Rate of Change

    ERIC Educational Resources Information Center

    Herbert, Sandra; Pierce, Robyn

    2008-01-01

    Does speed provide a "model for" rate of change in other contexts? Does JavaMathWorlds (JMW), animated simulation software, assist in the development of the "model for" rate of change? This project investigates the transference of understandings of rate gained in a motion context to a non-motion context. Students were 27 14-15 year old students at…

  11. Predicting energy expenditure from photo-plethysmographic measurements of heart rate under beta blocker therapy: Data driven personalization strategies based on mixed models.

    PubMed

    Bonomi, Alberto G; Goldenberg, Sharon; Papini, Gabriele; Kraal, Jos; Stut, Wim; Sartor, Francesco; Kemps, Hareld

    2015-08-01

    Energy expenditure has often been estimated using computational models based on heart rate (HR), with appropriate personalization strategies to account for users' cardio-respiratory characteristics. However, medications like beta blockers, which are prescribed to treat several cardiac conditions, have a direct influence on the cardiovascular system and may alter the relationship between HR and energy expenditure during physical activity (AEE). This study proposes to estimate AEE from HR using mixed models (MIX-REG), introducing a novel method to personalize the prediction equation. As features to represent the individual random effect in the MIX-REG model, we selected those subject characteristics which minimized both the estimation error (RMSE) and the variability of the between-subjects error bias. Data from 17 patients post-myocardial infarction were collected during a laboratory protocol. AEE was measured using indirect calorimetry, and HR using an innovative wrist-worn activity monitor equipped with the Philips Cardio and Motion Monitoring Module (CM3-Generation-1), an integrated module including photo-plethysmographic and accelerometer sensors. The presented method showed high AEE estimation accuracy (RMSE = 1.35 kcal/min), which was comparable to that of models personalized using data from laboratory calibration protocols (HR-FLEX) and superior to multi-linear regression and MIX-REG models trained using a stepwise feature-selection procedure. PMID:26738062
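
    A minimal sketch of the mixed-model idea with statsmodels: a linear mixed model of AEE on HR with per-subject random intercepts and slopes, fitted on synthetic subject data (the paper additionally maps subject characteristics onto the random effects, which is omitted here; all parameter values are hypothetical):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      rows = []
      for subject in range(17):
          slope = 0.09 + rng.normal(0.0, 0.02)     # individual kcal/min per bpm
          intercept = -6.0 + rng.normal(0.0, 1.0)
          hr = rng.uniform(70.0, 160.0, 40)
          aee = intercept + slope * hr + rng.normal(0.0, 0.4, 40)
          rows += [{"subject": subject, "hr": h, "aee": a} for h, a in zip(hr, aee)]
      df = pd.DataFrame(rows)

      # Random intercept and random HR slope per subject.
      model = smf.mixedlm("aee ~ hr", df, groups=df["subject"], re_formula="~hr")
      result = model.fit()
      rmse = np.sqrt(np.mean(result.resid ** 2))
      print(result.params, f"\nin-sample RMSE = {rmse:.2f} kcal/min")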

  12. Geodesy-based estimates of loading rates on faults beneath the Los Angeles basin with a new, computationally efficient method to model dislocations in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Rollins, C.; Argus, D. F.; Avouac, J. P.; Landry, W.; Barbot, S.

    2015-12-01

    North-south compression across the Los Angeles basin is accommodated by slip on thrust faults beneath the basin that may present significant seismic hazard to Los Angeles. Previous geodesy-based efforts to constrain the distributions and rates of elastic strain accumulation on these faults [Argus et al 2005, 2012] have found that the elastic model used has a first-order impact on the inferred distribution of locking and creep, underlining the need to accurately incorporate the laterally heterogeneous elastic structure and complex fault geometries of the Los Angeles basin into this analysis. We are using Gamra [Landry and Barbot, in prep.], a newly developed adaptive-meshing finite-difference solver, to compute elastostatic Green's functions that incorporate the full 3D regional elastic structure provided by the SCEC Community Velocity Model. Among preliminary results from benchmarks, forward models and inversions, we find that: 1) for a modeled creep source on the edge dislocation geometry from Argus et al [2005], the use of the SCEC CVM material model produces surface velocities in the hanging wall that are up to ~50% faster than those predicted in an elastic halfspace model; 2) in sensitivity-modulated inversions of the Argus et al [2005] GPS velocity field for slip on the same dislocation source, the use of the CVM deepens the inferred locking depth by ≥3 km compared to an elastic halfspace model; 3) when using finite-difference or finite-element models with Dirichlet boundary conditions (except for the free surface) for problems of this scale, it is necessary to set the boundaries at least ~100 km away from any slip source or data point to guarantee convergence within 5% of analytical solutions (a result which may be applicable to other static dislocation modeling problems and which may scale with the size of the area of interest). Here we will present finalized results from inversions of an updated GPS velocity field [Argus et al, AGU 2015] for the inferred

  13. A Bottom-Up Whole-Body Physiologically Based Pharmacokinetic Model to Mechanistically Predict Tissue Distribution and the Rate of Subcutaneous Absorption of Therapeutic Proteins.

    PubMed

    Gill, Katherine L; Gardner, Iain; Li, Linzhong; Jamei, Masoud

    2016-01-01

    The ability to predict subcutaneous (SC) absorption rate and tissue distribution of therapeutic proteins (TPs) using a bottom-up approach is highly desirable early in the drug development process, prior to clinical data being available. A whole-body physiologically based pharmacokinetic (PBPK) model requiring only a few drug parameters has been developed to predict plasma and interstitial fluid concentrations of TPs in humans after intravenous and subcutaneous dosing. Movement of TPs between vascular and interstitial spaces was described by considering both convection and diffusion processes using a 2-pore framework. The model was optimised using a variety of literature sources, such as tissue lymph/plasma concentration ratios in humans and animals, information on the percentage of dose absorbed following SC dosing via lymph in animals, and data showing loss of radiolabelled IgG from the SC dosing site in humans. The resultant model was used to predict tmax and plasma concentration profiles for 12 TPs (molecular weight 8-150 kDa) following SC dosing. The predicted plasma concentration profiles were generally comparable to observed data. tmax was predicted within 3-fold of reported values, with one third of the predictions within 0.8-1.25-fold. There was no systematic bias in simulated Cmax values, although a general trend for underprediction of tmax was observed. No clear trend between the prediction accuracy of tmax and TP isoelectric point or molecular size was apparent. The mechanistic whole-body PBPK model described here can be applied to predict the absorption rate of TPs into blood and movement into target tissues following SC dosing.

  14. Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.

    PubMed

    Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth

    2015-01-01

    This paper investigates the volatility of and conditional relationships among inflation rates, exchange rates and interest rates, and constructs models using the multivariate GARCH DCC and BEKK frameworks with Ghana data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi against the US dollar from 1990 to 2013 was 7,010.2%, and the yearly weighted depreciation of the cedi against the US dollar for the period was 20.4%. There was evidence that a stable inflation rate does not mean that exchange rates and interest rates can be expected to be stable. Rather, when the cedi performs well on the forex market, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates. The DCC model is robust for modelling the conditional and unconditional correlations among inflation rates, exchange rates and interest rates. The BEKK model, which forecasted high exchange rate volatility for the year 2014, is very robust for modelling the exchange rates in Ghana. The mean equation of the DCC model is also robust for forecasting inflation rates in Ghana. PMID:25741459
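
    The univariate GARCH(1,1) building block underneath DCC/BEKK estimation can be sketched with the Python arch package (the multivariate DCC/BEKK stage itself is not in arch and is omitted); the series below is a synthetic stand-in for monthly cedi/USD depreciation rates, not the study data:

      import numpy as np
      from arch import arch_model

      rng = np.random.default_rng(11)
      # Synthetic monthly depreciation series (percent), Jan 1990 - Dec 2013.
      returns = 1.7 + rng.standard_t(df=6, size=288) * 2.5

      am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
      res = am.fit(disp="off")
      print(res.params)                         # mu, omega, alpha[1], beta[1]
      forecast = res.forecast(horizon=12)
      print(forecast.variance.iloc[-1])         # 12-month-ahead variance path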

  16. Associations of Perfluoroalkyl Substances (PFAS) with Lower Birth Weight: An Evaluation of Potential Confounding by Glomerular Filtration Rate Using a Physiologically Based Pharmacokinetic Model (PBPK)

    PubMed Central

    Loccisano, Anne E.; Morken, Nils-Halvdan; Yoon, Miyoung; Wu, Huali; McDougall, Robin; Maisonet, Mildred; Marcus, Michele; Kishi, Reiko; Miyashita, Chihiro; Chen, Mei-Huei; Hsieh, Wu-Shiun; Andersen, Melvin E.; Clewell, Harvey J.; Longnecker, Matthew P.

    2015-01-01

    Background: Prenatal exposure to perfluoroalkyl substances (PFAS) has been associated with lower birth weight in epidemiologic studies. This association could be attributable to glomerular filtration rate (GFR), which is related to PFAS concentration and birth weight.
    Objectives: We used a physiologically based pharmacokinetic (PBPK) model of pregnancy to assess how much of the PFAS-birth weight association observed in epidemiologic studies might be attributable to GFR.
    Methods: We modified a PBPK model to reflect the association of GFR with birth weight (estimated from three studies of GFR and birth weight) and used it to simulate PFAS concentrations in maternal and cord plasma. The model was run 250,000 times, with variation in parameters, to simulate a population. Simulated data were analyzed to evaluate the association between PFAS levels and birth weight due to GFR. We compared simulated estimates with those from a meta-analysis of epidemiologic data.
    Results: The reduction in birth weight for each 1-ng/mL increase in simulated cord plasma for perfluorooctane sulfonate (PFOS) was 2.72 g (95% CI: -3.40, -2.04), and for perfluorooctanoic acid (PFOA) was 7.13 g (95% CI: -8.46, -5.80); results based on maternal plasma at term were similar. Results were sensitive to variations in PFAS level distributions and the strength of the GFR-birth weight association. In comparison, our meta-analysis of epidemiologic studies suggested that each 1-ng/mL increase in prenatal PFOS and PFOA levels was associated with 5.00 g (95% CI: -8.92, -1.09) and 14.72 g (95% CI: -21.66, -7.78) reductions in birth weight, respectively.
    Conclusion: Results of our simulations suggest that a substantial proportion of the association between prenatal PFAS and birth weight may be attributable to confounding by GFR and that confounding by GFR may be more important in studies with sample collection later in pregnancy.
    Citation: Verner MA, Loccisano AE, Morken NH, Yoon M, Wu H, Mc

  17. Modeling temporal sequences of cognitive state changes based on a combination of EEG-engagement, EEG-workload, and heart rate metrics.

    PubMed

    Stikic, Maja; Berka, Chris; Levendowski, Daniel J; Rubio, Roberto F; Tan, Veasna; Korszen, Stephanie; Barba, Douglas; Wurzer, David

    2014-01-01

    The objective of this study was to investigate the feasibility of physiological metrics such as ECG-derived heart rate and EEG-derived cognitive workload and engagement as potential predictors of performance on different training tasks. An unsupervised approach based on a self-organizing neural network (NN) was utilized to model cognitive state changes over time. The feature vector comprised EEG-engagement, EEG-workload, and heart rate metrics, all self-normalized to account for individual differences. During the competitive training process, a linear topology was developed where feature vectors similar to each other activated the same NN nodes. The NN model was trained and auto-validated on combat marksmanship training data from 51 participants who were required to make "deadly force decisions" in challenging combat scenarios. The trained NN model was cross-validated using 10-fold cross-validation. It was also validated on a golf study in which an additional 22 participants were asked to complete 10 sessions of 10 putts each. Temporal sequences of the activated nodes for both studies followed the same pattern of changes, demonstrating the generalization capabilities of the approach. Most node transition changes were local, but important events typically caused significant changes in the physiological metrics, as evidenced by larger state changes. This was investigated by calculating a transition score as the sum of subsequent state transitions between the activated NN nodes. Correlation analysis demonstrated statistically significant correlations between the transition scores and subjects' performances in both studies. This paper explored the hypothesis that temporal sequences of physiological changes comprise the discriminative patterns for performance prediction. These physiological markers could be utilized in future training improvement systems (e.g., through neurofeedback), and applied across a variety of training environments. PMID:25414629
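
    A compact sketch of the two ingredients described above, under stated assumptions: a one-dimensional (linear-topology) self-organizing map trained on self-normalized (engagement, workload, heart-rate) feature vectors, and a transition score computed as the summed index distance between successively activated nodes. Map size, training schedule, and data are illustrative.

        # 1-D self-organizing map (linear topology) and a transition score equal
        # to the summed distance between successively activated nodes.
        import numpy as np

        def train_som_1d(X, n_nodes=20, epochs=50, lr0=0.5, radius0=5.0, seed=0):
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(n_nodes, X.shape[1]))       # node weight vectors
            for ep in range(epochs):
                lr = lr0 * (1 - ep / epochs)                 # decaying learning rate
                radius = max(radius0 * (1 - ep / epochs), 1.0)
                for x in rng.permutation(X):
                    bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
                    h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * radius ** 2))
                    W += lr * h[:, None] * (x - W)           # pull neighbours toward x
            return W

        def transition_score(X, W):
            nodes = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
            return np.abs(np.diff(nodes)).sum()              # sum of state transitions

        # Hypothetical session: columns = EEG-engagement, EEG-workload, heart rate
        # (z-scored per subject).
        X = np.random.default_rng(1).normal(size=(300, 3))
        print("transition score:", transition_score(X, train_som_1d(X)))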

  18. Rate-based modeling of reactive absorption of CO{sub 2} and H{sub 2}S into aqueous methyldiethanolamine

    SciTech Connect

    Pacheco, M.A.; Rochelle, G.T.

    1998-10-01

    A general framework was developed to model the transport processes that take place during reactive absorption when both rate- and equilibrium-controlled reactions occur in the liquid phase. This framework was applied to the selective absorption of H{sub 2}S from fuel gas containing CO{sub 2} using aqueous methyldiethanolamine. A rate-based distillation column module was used for the column integration. The Maxwell-Stefan and enhancement factor theories were utilized. In packed columns, CO{sub 2} absorption is controlled by diffusion with fast chemical reactions; in trayed columns it is controlled primarily by physical absorption. Gas-film resistance is never significant for CO{sub 2} absorption. For H{sub 2}S absorption, gas- and liquid-film resistances are important, and diffusion of bisulfide controls the liquid-film resistance. Heat effects produce temperature bulges that can cause equilibrium pinches at the maximum temperature. This phenomenon gives rise to an optimum packing height for H{sub 2}S removal. Trayed columns are more selective than packed columns for H{sub 2}S removal, primarily because of the larger number of liquid-film mass transfer units.
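
    A minimal numerical illustration of the liquid-film picture described above, with hypothetical property values: CO2 flux through the liquid film computed with a pseudo-first-order enhancement factor, so that fast chemistry rather than the gas film controls absorption.

        # Liquid-film CO2 absorption flux with a pseudo-first-order enhancement
        # factor (two-film theory); all property values are hypothetical.
        import math

        def co2_flux(c_interface, c_bulk, k_l, k2, amine, diff_co2):
            """Flux (mol/m^2/s) through the liquid film."""
            ha = math.sqrt(k2 * amine * diff_co2) / k_l   # Hatta number
            enhancement = ha / math.tanh(ha)              # pseudo-first-order E
            return enhancement * k_l * (c_interface - c_bulk)

        flux = co2_flux(c_interface=30.0, c_bulk=0.0,  # mol/m^3
                        k_l=1e-4,                      # m/s, physical coefficient
                        k2=0.005,                      # m^3/mol/s, hypothetical kinetics
                        amine=2000.0,                  # mol/m^3 of amine
                        diff_co2=1.5e-9)               # m^2/s
        print(f"CO2 flux: {flux:.3e} mol m^-2 s^-1")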

  19. Minority Utility Rate Design Assessment Model

    SciTech Connect

    Poyer, David A.; Butler, John G.

    2003-01-20

    An econometric model that simulates consumer demand response to various user-supplied, two-part tariff electricity rate designs and assesses their economic welfare impact on Black, Hispanic, poor, and majority households.

  1. Reaction rate modeling of PBXN-110

    NASA Astrophysics Data System (ADS)

    Miller, P. J.; Sutherland, G. T.

    1996-05-01

    The reactive rate model for Navy explosive PBXN-110 has been determined. The rate parameters for the Lee-Tarver model were evaluated by comparing the results of DYNA2D hydrocode simulations to the embedded gauge data of gas-gun tests in which the shock loading is mostly one-dimensional. The model parameters were refined such that the failure diameter of the explosive could be reproduced in the calculations. The model was used to simulate a series of Navy sensitivity tests. These are reported here and include detonation curvature, detonation velocity dependency on charge diameter, Modified Gap, and Underwater Sensitivity tests.
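
    For reference, a sketch of the three-term Lee-Tarver ignition-and-growth rate law in the form commonly used in hydrocodes; the parameter values are placeholders, not the calibrated PBXN-110 set reported by the authors, and the burn-fraction cutoff logic of the full model is omitted.

        # Lee-Tarver ignition-and-growth rate law (three-term form); parameters
        # are placeholders, not the calibrated PBXN-110 values.
        def ignition_and_growth(F, rho, rho0, P,
                                I=7.4e11, b=0.667, a=0.0, x=20.0,
                                G1=3.1e-5, c=0.667, d=0.111, y=1.0,
                                G2=4.0e-4, e=0.333, g=1.0, z=2.0):
            """dF/dt for burn fraction F, densities rho/rho0, pressure P."""
            mu = rho / rho0 - 1.0                           # compression
            ignition = I * (1.0 - F) ** b * max(mu - a, 0.0) ** x
            growth1 = G1 * (1.0 - F) ** c * F ** d * P ** y
            growth2 = G2 * (1.0 - F) ** e * F ** g * P ** z
            return ignition + growth1 + growth2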

  2. Single crystal plasticity by modeling dislocation density rate behavior

    SciTech Connect

    Hansen, Benjamin L; Bronkhorst, Curt; Beyerlein, Irene; Cerreta, E. K.; Dennis-Koller, Darcie

    2010-12-23

    The goal of this work is to formulate a constitutive model for the deformation of metals over a wide range of strain rates. Damage and failure of materials frequently occur at a variety of deformation rates within the same sample. The present state of the art in single crystal constitutive models relies on thermally-activated models, which are believed to become less reliable for problems exceeding strain rates of 10{sup 4} s{sup -1}. This talk presents work in which we extend the applicability of the single crystal model to the strain rate region where dislocation drag is believed to dominate. The elastic model includes effects from volumetric change and pressure sensitive moduli. The plastic model transitions from the low-rate thermally-activated regime to the high-rate drag dominated regime. The direct use of dislocation density as a state parameter gives a measurable physical mechanism to strain hardening. Dislocation densities are separated according to type and given a systematic set of interaction rates adaptable by type. The form of the constitutive model is motivated by previously published dislocation dynamics work which articulated important behaviors unique to high-rate response in fcc systems. The proposed material model incorporates thermal coupling. The hardening model tracks the varying dislocation population with respect to each slip plane and computes the slip resistance based on those values. Comparisons can be made between the responses of single crystals and polycrystals at a variety of strain rates. The material model is fit to copper.
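
    A schematic of the low-rate/high-rate transition described above, with illustrative constants: slip rate from the Orowan relation, using a dislocation velocity that combines a thermally-activated (waiting-time) term and a phonon-drag term in series. This is a generic sketch, not the authors' calibrated copper model.

        # Schematic single-slip kinetics: Orowan relation with a velocity that
        # transitions from thermally-activated to drag-dominated glide.
        # All constants are illustrative placeholders.
        import numpy as np

        kB = 1.380649e-23   # J/K

        def slip_rate(tau, rho_m, T, b=2.56e-10, v0=1.0e3, dG0=1.0e-19,
                      tau_hat=100e6, p=0.5, q=1.5, B=5.0e-5):
            """Shear rate (1/s): stress tau in Pa, mobile density rho_m in 1/m^2."""
            s = np.clip(tau / tau_hat, 0.0, 1.0)
            v_th = v0 * np.exp(-dG0 * (1 - s ** p) ** q / (kB * T))  # wait-time limited
            v_drag = tau * b / B                                     # drag limited
            v = 1.0 / (1.0 / v_th + 1.0 / v_drag)   # series combination of regimes
            return rho_m * b * v                    # Orowan equation

        print(slip_rate(tau=50e6, rho_m=1e12, T=300.0))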

  3. Collagen thermal denaturation study for thermal angioplasty based on modified kinetic model: relation between the artery mechanical properties and collagen denaturation rate

    NASA Astrophysics Data System (ADS)

    Shimazaki, N.; Hayashi, T.; Kunio, M.; Arai, T.

    2010-02-01

    We have been developing a novel short-term heating angioplasty in which sufficient artery lumen dilatation is attained by thermal softening of the collagen fibers in the artery wall. In the present study, we investigated the relation between the mechanical properties of heated arteries and the thermal denaturation fraction of arterial collagen ex vivo. We employed the Lumry-Eyring model to estimate the temperature- and time-dependent thermal denaturation fraction of arterial collagen fibers during heating. We constructed a kinetic model of arterial collagen thermal denaturation by adjusting K and k in this model, the equilibrium constant of the reversible denaturation and the rate constant of the irreversible denaturation, respectively. Meanwhile, we demonstrated that the change in the reduced scattering coefficient of the whole artery wall during heating reflected the reversible denaturation of the collagen in the artery wall. Based on this phenomenon, K was determined experimentally by backscattered light intensity measurement (at 633 nm) of extracted porcine carotid artery during temperature elevation and descent (25 °C → 80 °C → 25 °C). We employed the value of k according to our earlier report, in which the time- and temperature-dependent irreversible denaturation amount of the arterial collagen fibers was assessed by the artery birefringence. Then, the time- and temperature-dependent reversible (irreversible) denaturation fraction, defined as the reversibly (irreversibly) denatured collagen amount divided by the total collagen amount, was calculated with the model. Thermo-mechanical analysis of the artery wall was performed to compare the arterial mechanical behaviors (softening, shrinkage) during heating with the denaturation fraction calculated with the model. At any artery temperature in the 70-80 °C range, the irreversible denaturation fraction at which the arterial thermal shrinkage started was estimated to be around 20%. On the other hand, the calculated irreversible denaturation fraction remained below
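
    A minimal numerical sketch of the Lumry-Eyring scheme N <-> U -> D used above, assuming the reversible step equilibrates quickly; the K(T) and k(T) expressions are illustrative, not the calibrated constants of the study.

        # Lumry-Eyring kinetics N <-> U -> D with fast pre-equilibrium:
        # reversibly unfolded share u = K/(1+K), and dD/dt = k * u * (1 - D).
        # K(T) and k(T) below are illustrative, not the paper's calibrated values.
        import math

        def irreversible_fraction(T_celsius, t_seconds, dt=0.1):
            K = math.exp(0.4 * (T_celsius - 65.0))            # reversible equilibrium
            k = 1e-3 * math.exp(0.15 * (T_celsius - 65.0))    # 1/s, irreversible rate
            D = 0.0
            for _ in range(int(t_seconds / dt)):
                u = (K / (1.0 + K)) * (1.0 - D)               # reversibly unfolded pool
                D += k * u * dt
            return D

        for T in (70, 75, 80):
            print(T, "degC, 30 s ->", round(irreversible_fraction(T, 30.0), 3))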

  4. Development of experimental design approach and ANN-based models for determination of Cr(VI) ions uptake rate from aqueous solution onto the solid biodiesel waste residue.

    PubMed

    Shanmugaprakash, M; Sivakumar, V

    2013-11-01

    In the present work, the predictive capacities of two optimization methodologies, RSM and ANN, were employed and compared for prediction of the Cr(VI) uptake rate using defatted pongamia oil cake (DPOC) in both batch and column modes. The influence of operating parameters was investigated through a central composite design (CCD) of RSM using Design Expert 8.0.7.1 software. The same data were fed as input to an ANN to train multilayer feed-forward networks with the back-propagation algorithm in MATLAB. The performance of the developed ANN models was compared with that of the RSM mathematical models for the Cr(VI) uptake rate in terms of the coefficient of determination (R(2)), root mean square error (RMSE) and absolute average deviation (AAD). The estimated values confirm that the ANN outperforms RSM, demonstrating the superiority of trained ANN models over RSM models in capturing the non-linear behavior of the given system. PMID:24080294
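
    The three comparison statistics used above have simple definitions; a small sketch (with hypothetical observed/predicted arrays) computing R^2, RMSE and AAD for any model's predictions:

        # Goodness-of-fit statistics used to compare RSM and ANN predictions.
        import numpy as np

        def r2(y, yhat):
            return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

        def rmse(y, yhat):
            return np.sqrt(np.mean((y - yhat) ** 2))

        def aad(y, yhat):
            return 100.0 * np.mean(np.abs((yhat - y) / y))   # percent

        y = np.array([12.1, 18.4, 25.0, 31.2])      # observed uptake (hypothetical)
        yhat = np.array([11.8, 19.1, 24.2, 31.9])   # model predictions (hypothetical)
        print(r2(y, yhat), rmse(y, yhat), aad(y, yhat))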

  5. Validation of a transparent decision model to rate drug interactions

    PubMed Central

    2012-01-01

    Background Multiple databases provide ratings of drug-drug interactions. The ratings are often based on different criteria and lack background information on the decision-making process. User acceptance of rating systems could be improved by providing a transparent decision path for each category. Methods We rated 200 randomly selected potential drug-drug interactions using a transparent decision model developed by our team. The cases were generated from ward round observations and physicians' queries from an outpatient setting. We compared our ratings to those assigned by a senior clinical pharmacologist and by a standard interaction database, and thus validated the model. Results The decision model rated consistently with the standard database and the pharmacologist in 94 and 156 cases, respectively. In two cases the model decision required correction. Following removal of systematic model construction differences, the decision model was fully consistent with the other rating systems. Conclusion The decision model reproducibly rates interactions and elucidates systematic differences. We propose to supply validated decision paths alongside the interaction rating to improve comprehensibility and to enable physicians to interpret the ratings in a clinical context. PMID:22950884

  6. Base-Rate Neglect as a Function of Base Rates in Probabilistic Contingency Learning

    ERIC Educational Resources Information Center

    Kutzner, Florian; Freytag, Peter; Vogel, Tobias; Fiedler, Klaus

    2008-01-01

    When humans predict criterion events based on probabilistic predictors, they often lend excessive weight to the predictor and insufficient weight to the base rate of the criterion event. In an operant analysis, using a matching-to-sample paradigm, Goodie and Fantino (1996) showed that humans exhibit base-rate neglect when predictors are associated…

  7. A generic high-dose rate {sup 192}Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism

    SciTech Connect

    Ballester, Facundo; Carlsson Tedgren, Åsa; Granero, Domingo; Haworth, Annette; Mourtada, Firas; Fonseca, Gabriel Paiva; Rivard, Mark J.; Siebert, Frank-André; Sloboda, Ron S.; and others

    2015-06-15

    Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) {sup 192}Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR {sup 192}Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic {sup 192}Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra{sup ®} Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS{sup TM}]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of (201){sup 3} voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR {sup 192}Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by

  8. Modeling heart rate variability by stochastic feedback

    NASA Technical Reports Server (NTRS)

    Amaral, L. A.; Goldberger, A. L.; Stanley, H. E.

    1999-01-01

    We consider the question of how the cardiac rhythm spontaneously self-regulates and propose a new mechanism as a possible answer. We model the neuroautonomic regulation of the heart rate as a stochastic feedback system and find that the model successfully accounts for key characteristics of cardiac variability, including the 1/f power spectrum, the functional form and scaling of the distribution of variations of the interbeat intervals, and the correlations in the Fourier phases which indicate nonlinear dynamics.

  9. Evolution of migration rate in a spatially realistic metapopulation model.

    PubMed

    Heino, M; Hanski, I

    2001-05-01

    We use an individual-based, spatially realistic metapopulation model to study the evolution of migration rate. We first explore the consequences of habitat change in hypothetical patch networks on a regular lattice. If the primary consequence of habitat change is an increase in local extinction risk as a result of decreased local population sizes, migration rate increases. A nonmonotonic response, with migration rate decreasing at high extinction rate, was obtained only by assuming very frequent catastrophes. If the quality of the matrix habitat deteriorates, leading to increased mortality during migration, the evolutionary response is more complex. As long as habitat patch occupancy does not decrease markedly with increased migration mortality, reduced migration rate evolves. However, once mortality becomes so high that empty patches remain uncolonized for a long time, evolution tends to increase migration rate, which may lead to an "evolutionary rescue" in a fragmented landscape. Kin competition has a quantitative effect on the evolution of migration rate in our model, but these patterns in the evolution of migration rate appear to be primarily caused by spatiotemporal variation in fitness and mortality during migration. We apply the model to real habitat patch networks occupied by two checkerspot butterfly (Melitaea) species, for which sufficient data are available to estimate rigorously most of the model parameters. The model-predicted migration rate is not significantly different from the empirically observed one. Regional variation in patch areas and connectivities leads to regional variation in the optimal migration rate, predictions that can be tested empirically. PMID:18707258

  10. The relationship between specific absorption rate and temperature elevation in anatomically based human body models for plane wave exposure from 30 MHz to 6 GHz

    NASA Astrophysics Data System (ADS)

    Hirata, Akimasa; Laakso, Ilkka; Oizumi, Takuya; Hanatani, Ryuto; Chan, Kwok Hung; Wiart, Joe

    2013-02-01

    According to the international safety guidelines/standard, the whole-body-averaged specific absorption rate (SAR; Poljak et al 2003 IEEE Trans. Electromagn. Compat. 45 141-5) and the peak spatial average SAR are used as metrics for human protection from whole-body and localized exposures, respectively. The IEEE standard (IEEE 2006 IEEE C95.1) indicates that the upper boundary frequency, over which the whole-body-averaged SAR is deemed to be the basic restriction, has been reduced from 6 to 3 GHz, because radio-wave energy is absorbed around the body surface when the frequency is increased. However, no quantitative discussion has been provided to support this description, especially from the standpoint of temperature elevation. It is of interest to investigate the maximum temperature elevation in addition to the core temperature even for a whole-body exposure. In the present study, using anatomically based human models, we computed the SAR and the temperature elevation for a plane-wave exposure from 30 MHz to 6 GHz, taking into account the thermoregulatory response. As the primary result, we found that the ratio of the core temperature elevation to the whole-body-averaged SAR is almost frequency independent for frequencies below a few gigahertz; the ratio decreases above this frequency. At frequencies higher than a few gigahertz, core temperature elevation for the same whole-body-averaged SAR becomes lower due to heat convection from the skin to air. This lower core temperature elevation is attributable to skin temperature elevation caused by the power absorption around the body surface. Then, core temperature elevation even for a whole-body-averaged SAR of 4 W kg⁻¹ with a duration of 1 h was at most 0.8 °C, which is smaller than the threshold considered in the safety guidelines/standard. Further, the peak 10 g averaged SAR is correlated with the maximum body temperature elevations without extremities and pinna over the frequencies considered. These findings were confirmed

  11. Destructive weighted Poisson cure rate models.

    PubMed

    Rodrigues, Josemar; de Castro, Mário; Balakrishnan, N; Cancho, Vicente G

    2011-07-01

    In this paper, we develop a flexible cure rate survival model by assuming the number of competing causes of the event of interest to follow a compound weighted Poisson distribution. This model is more flexible in terms of dispersion than the promotion time cure model. Moreover, it gives an interesting and realistic interpretation of the biological mechanism of the occurrence of event of interest as it includes a destructive process of the initial risk factors in a competitive scenario. In other words, what is recorded is only from the undamaged portion of the original number of risk factors.

  12. Modeling of Kashmir Aftershock Decay Based on Static Coulomb Stress Changes and Laboratory-Derived Rate-and-State Dependent Friction Law

    NASA Astrophysics Data System (ADS)

    Javed, F.; Hainzl, S.; Aoudia, A.; Qaisar, M.

    2016-05-01

    We model the spatial and temporal evolution of the October 8, 2005 Kashmir earthquake's aftershock activity using the rate-and-state dependent friction model, incorporating uncertainties in the computed coseismic stress perturbations. We estimated the best possible values of the frictional resistance Aσn, the background seismicity rate r and the coefficient of stress variation CV using the maximum log-likelihood method. For the whole Kashmir earthquake sequence, we measure a frictional resistance Aσn ≈ 0.0185 MPa, r ≈ 20 M3.7+ events/year and CV = 0.94 ± 0.01. The spatially and temporally forecasted seismicity rate of modeled aftershocks fits well the spatial and temporal distribution of observed aftershocks, which occurred in the regions of positive static stress change as well as in the apparent stress shadow region. To quantify the effect of secondary aftershock triggering, we re-ran the estimations for 100 stochastically declustered catalogs, showing that the effect of aftershock-induced secondary stress changes is minor compared to the overall uncertainties, and that the stress variability related to uncertain slip-model inversions and receiver mechanisms remains the major factor in providing a reasonable data fit.
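
    For a single coseismic stress step, the rate-and-state framework used above has the closed-form seismicity response of Dieterich (1994), which the sketch below evaluates; the study's Aσn ≈ 0.0185 MPa and r ≈ 20 events/year are reused, while the stress step and the aftershock duration t_a are hypothetical inputs.

        # Dieterich (1994) seismicity rate after a coseismic stress step dtau:
        # R(t) = r / (1 + (exp(-dtau/A_sigma) - 1) * exp(-t/t_a)).
        import numpy as np

        def seismicity_rate(t_years, dtau_mpa, a_sigma=0.0185, r=20.0, t_a=2.0):
            """r in M3.7+ events/year (from the study); t_a in years, hypothetical."""
            gamma = (np.exp(-dtau_mpa / a_sigma) - 1.0) * np.exp(-t_years / t_a)
            return r / (1.0 + gamma)

        t = np.array([0.01, 0.1, 0.5, 1.0, 5.0])
        print(seismicity_rate(t, dtau_mpa=0.5))    # positive step: aftershock burst
        print(seismicity_rate(t, dtau_mpa=-0.5))   # negative step: stress shadow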

  13. Actuarial models of life insurance with stochastic interest rate

    NASA Astrophysics Data System (ADS)

    Wei, Xiang; Hu, Ping

    2009-07-01

    On the basis of a general actuarial model of life insurance, this article studies continuous life insurance actuarial models under stochastic interest rates. It provides the net single premium for life insurance and for a life annuity due over a period, based on de Moivre's law of mortality and Makeham's law of mortality separately.

  14. Modeling rates of bank erosion in sinuous tidal channel derived from event-based terrestrial lidar surveys in the Mont Saint Michel Bay

    NASA Astrophysics Data System (ADS)

    Leroux, J.; Lague, D.

    2012-12-01

    The Mont-Saint-Michel (MSM) bay is characterized by a semi-diurnal regime with a tidal range of 14 meters. Understanding river bank migration of tidal channels in such mega-tidal salt marshes requires a precise quantification of the relative contributions of frequent and infrequent bank erosion events to the longer-term dynamics. We use a terrestrial lidar scanner (TLS), which overcomes the limitations of traditional bank measurement approaches (e.g. aerial photography, GPS measurements) with high-resolution and high-precision topographic data. We use 30 TLS measurements and traditional data sources to quantify the annual and daily dynamics of bank erosion for a sinuous salt marsh channel near the island of the MSM. We present the results of a 2-year study that began in September 2010. We compare annual bank retreat with daily surveys focused on spring tides in order to calculate "event-based" volumes of bank erosion. For active steep banks, the volume of sediment eroded is computed between two point clouds that are classified by the CANUPO algorithm to remove vegetation (Brodu and Lague, 2012). A new algorithm allows a direct comparison of point clouds in 3D based on surface normal computation and measurement of mean surface change along the normal direction. On a 5-centimeter resolution grid, the change between two bank point clouds is computed and used to calculate the volume of eroded bank. Measured rates of bank retreat varied from no detectable change to 2 m/tide, which corresponds roughly to 100 cubic meters/tide. We also document a non-homothetic pattern of bank erosion during spring tides: erosion is focused in narrow zones of the meander and shifts spatially at daily timescales. To relate bank erosion to the hydraulic characteristics of the channel, an ADCP was used to measure flow velocity during tides. The measurements highlight two main points that occur only when the tide overtops the salt marsh: (i) the ebb flow is stronger than the flood flow, with velocities

  15. Stochastic model for heart-rate fluctuations

    NASA Astrophysics Data System (ADS)

    Kuusela, Tom; Shepherd, Tony; Hietarinta, Jarmo

    2003-06-01

    A normal human heart rate shows complex fluctuations in time, which is natural, because the heart rate is controlled by a large number of different feedback control loops. These unpredictable fluctuations have been shown to display fractal dynamics, long-term correlations, and 1/f noise. These characterizations are statistical and they have been widely studied and used, but much less is known about the detailed time evolution (dynamics) of the heart-rate control mechanism. Here we show that a simple one-dimensional Langevin-type stochastic difference equation can accurately model the heart-rate fluctuations in a time scale from minutes to hours. The model consists of a deterministic nonlinear part and a stochastic part typical of Gaussian noise, and both parts can be directly determined from the measured heart-rate data. Studies of 27 healthy subjects reveal that in most cases, the deterministic part has a form typically seen in bistable systems: there are two stable fixed points and one unstable one.
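
    A sketch of the model class described above: a one-dimensional Langevin-type difference equation with a bistable drift (two stable fixed points, one unstable) plus Gaussian noise, with the deterministic part then re-estimated from the series by binned conditional mean increments, as the authors do from measured data. The drift form and constants are illustrative, not the fitted values.

        # Langevin-type difference equation x[n+1] = x[n] + f(x[n]) + noise with
        # a bistable drift; f is then recovered via binned mean increments.
        import numpy as np

        rng = np.random.default_rng(2)
        f = lambda x: 0.05 * (x - x ** 3)        # stable fixed points at +/-1

        x = np.empty(200_000)
        x[0] = 0.5
        eps = 0.1 * rng.standard_normal(len(x) - 1)
        for n in range(len(x) - 1):
            x[n + 1] = x[n] + f(x[n]) + eps[n]

        # Recover the deterministic part: mean increment per bin of x.
        bins = np.linspace(-1.5, 1.5, 31)
        idx = np.digitize(x[:-1], bins)
        dx = np.diff(x)
        drift = {i: dx[idx == i].mean() for i in range(1, len(bins))
                 if np.any(idx == i)}
        print(len(drift), "binned estimates of f(x) recovered")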

  16. Sphaleron rate in the minimal standard model.

    PubMed

    D'Onofrio, Michela; Rummukainen, Kari; Tranberg, Anders

    2014-10-01

    We use large-scale lattice simulations to compute the rate of baryon number violating processes (the sphaleron rate), the Higgs field expectation value, and the critical temperature in the standard model across the electroweak phase transition temperature. While there is no true phase transition between the high-temperature symmetric phase and the low-temperature broken phase, the crossover is sharp and located at temperature T_c = (159.5 ± 1.5) GeV. The sphaleron rate in the symmetric phase (T > T_c) is Γ/T^4 = (18 ± 3)α_W^5, and in the broken phase in the physically interesting temperature range 130 GeV < T < T_c it can be parametrized as log(Γ/T^4) = (0.83 ± 0.01)T/GeV − (147.7 ± 1.9). The freeze-out temperature in the early Universe, where the Hubble rate wins over the baryon number violation rate, is T* = (131.7 ± 2.3) GeV. These values, beyond being intrinsic properties of the standard model, are relevant for, e.g., low-scale leptogenesis scenarios. PMID:25325629
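
    The two quoted parametrizations define the rate explicitly; a small sketch evaluating them at the central values (the weak coupling α_W ≈ 0.0335 is an assumed input, not given in the abstract):

        # Sphaleron rate parametrizations quoted above (central values only).
        import math

        ALPHA_W = 0.0335      # assumed weak fine-structure constant, not from the abstract
        T_C = 159.5           # GeV, crossover temperature

        def gamma_over_T4(T_gev):
            """Gamma/T^4 in the symmetric (T > Tc) or broken (T < Tc) phase."""
            if T_gev > T_C:
                return 18.0 * ALPHA_W ** 5
            return math.exp(0.83 * T_gev - 147.7)   # fit valid for 130 GeV < T < Tc

        for T in (165.0, 155.0, 140.0, 131.7):
            print(f"T = {T:6.1f} GeV   Gamma/T^4 = {gamma_over_T4(T):.3e}")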

  17. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  18. Liver cancer mortality rate model in Thailand

    NASA Astrophysics Data System (ADS)

    Sriwattanapongse, Wattanavadee; Prasitwattanaseree, Sukon

    2013-09-01

    Liver cancer has been a leading cause of death in Thailand. The purpose of this study was to model and forecast the liver cancer mortality rate in Thailand using death certificate reports. A retrospective analysis of the liver cancer mortality rate was conducted. A total of 123,280 liver cancer deaths were obtained from the national vital registration database for the 10-year period from 2000 to 2009, provided by the Ministry of Interior and coded as cause of death using ICD-10 by the Ministry of Public Health. A multivariate regression model was used for modeling and forecasting age-specific liver cancer mortality rates in Thailand. Liver cancer mortality increased with increasing age for each sex and was also higher in the North East provinces. The trends of liver cancer mortality remained stable in most age groups, with increases over the ten-year period (2000 to 2009) in the Northern and Southern regions. Liver cancer mortality was higher in males and increased with increasing age. Liver cancer control measures need to be sustained on a long-term basis to address Thailand's high liver cancer burden.

  19. The Piecewise Linear Reactive Flow Rate Model

    SciTech Connect

    Vitello, P; Souers, P C

    2005-07-22

    Conclusions are: (1) Early calibrations of the Piecewise Linear reactive flow model have shown that it allows very accurate agreement with data for a broad range of detonation wave strengths. (2) The ability to vary the rate at specific pressures has shown that corner turning involves competition between the strong wave that travels roughly in a straight line and growth at low pressure of a new wave that turns corners sharply. (3) The inclusion of a low-pressure de-sensitization rate is essential to preserving the dead zone at large times, as is observed.

  20. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.

  1. What Will It Take to Eliminate Pediatric HIV? Reaching WHO Target Rates of Mother-to-Child HIV Transmission in Zimbabwe: A Model-Based Analysis

    PubMed Central

    Ciaranello, Andrea L.; Perez, Freddy; Keatinge, Jo; Park, Ji-Eun; Engelsmann, Barbara; Maruva, Matthews; Walensky, Rochelle P.; Dabis, Francois; Chu, Jennifer; Rusibamayila, Asinath; Mushavi, Angela; Freedberg, Kenneth A.

    2012-01-01

    Background The World Health Organization (WHO) has called for the “virtual elimination” of pediatric HIV: a mother-to-child HIV transmission (MTCT) risk of less than 5%. We investigated uptake of prevention of MTCT (PMTCT) services, infant feeding recommendations, and specific drug regimens necessary to achieve this goal in Zimbabwe. Methods and Findings We used a computer model to simulate a cohort of HIV-infected, pregnant/breastfeeding women (mean age, 24 y; mean CD4, 451/µl; breastfeeding duration, 12 mo). Three PMTCT regimens were evaluated: (1) single-dose nevirapine (sdNVP), (2) WHO 2010 guidelines' “Option A” (zidovudine in pregnancy, infant nevirapine throughout breastfeeding for women without advanced disease, lifelong combination antiretroviral therapy for women with advanced disease), and (3) WHO “Option B” (pregnancy/breastfeeding-limited combination antiretroviral drug regimens without advanced disease; lifelong antiretroviral therapy with advanced disease). We examined four levels of PMTCT uptake (proportion of pregnant women accessing and adhering to PMTCT services): reported rates in 2008 and 2009 (36% and 56%, respectively) and target goals in 2008 and 2009 (80% and 95%, respectively). The primary model outcome was MTCT risk at weaning. The 2008 sdNVP-based National PMTCT Program led to a projected 12-mo MTCT risk of 20.3%. Improved uptake in 2009 reduced projected risk to 18.0%. If sdNVP were replaced by more effective regimens, with 2009 (56%) uptake, estimated MTCT risk would be 14.4% (Option A) or 13.4% (Option B). Even with 95% uptake of Option A or B, projected transmission risks (6.1%–7.7%) would exceed the WHO goal of less than 5%. Only if the lowest published transmission risks were used for each drug regimen, or breastfeeding duration were shortened, would MTCT risks at 95% uptake fall below 5%. Conclusions Implementation of the WHO PMTCT guidelines must be accompanied by efforts to improve access to PMTCT services, retain

  2. Field Based Constraints on Reaction Rates in the Crust

    NASA Astrophysics Data System (ADS)

    Baxter, E. F.

    2004-12-01

    Modern research in plate boundary processes involving metamorphism frequently employs complex physical models. Such models require some quantification (or assumption) of the rate at which metamorphic reactions, or chemical exchange, proceed in natural systems. Here, a compilation of available quantitative field-based constraints on high temperature reaction rates will be presented. These include quantifications based on isotopic exchange, porphyroblast and reaction corona growth models, geochronology, and textural analysis. Additionally, natural strain rates provide an important upper bound on simultaneous reaction rates by virtue of a direct mechanistic link between reaction and strain that applies in most situations within the deforming crust. These data show that reaction rates attending regional metamorphism are 4-7 orders of magnitude slower than most laboratory-based predictions. A general rate law for regional metamorphic reactions has been derived which best describes these field-based data: log10(Rnet) = 0.0029 T - 9.6 (±1), where Rnet is the net reaction rate in g/cm2/yr and T is temperature (°C) (Baxter 2003, JGSL). Reaction rates attending contact metamorphism differ from laboratory-based predictions by less than 2 orders of magnitude, and are in closest agreement at higher temperatures. Regional metamorphic reaction rates may be limited by comparatively lesser (or transient) availability of aqueous fluid in the intergranular medium, slower heat input, and smaller deviations from equilibrium. Implications of slow natural metamorphic reaction rates may include a delay in the completion of metamorphic reactions which release (or take in) volatiles, and transform the mineralogy of the crust in dynamic plate boundary settings such as subduction zones.
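
    The quoted rate law is directly usable; a two-line evaluation at the central value of the ±1 intercept uncertainty (temperature in °C, net rate in g/cm2/yr):

        # Field-based regional metamorphic rate law quoted above (Baxter 2003):
        # log10(Rnet) = 0.0029*T - 9.6 (+/- 1), T in degrees C, Rnet in g/cm^2/yr.
        def rnet(T_celsius):
            return 10 ** (0.0029 * T_celsius - 9.6)

        for T in (400, 500, 600):
            print(T, "degC ->", f"{rnet(T):.2e} g/cm^2/yr")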

  3. Acoustically based fetal heart rate monitor

    NASA Technical Reports Server (NTRS)

    Baker, Donald A.; Zuckerwar, Allan J.

    1991-01-01

    The acoustically based fetal heart rate monitor permits an expectant mother to perform the fetal Non-Stress Test in her home. The potential market would include the one million U.S. pregnancies per year requiring this type of prenatal surveillance. The monitor uses polyvinylidene fluoride (PVF2) piezoelectric polymer film for the acoustic sensors, which are mounted in a seven-element array on a cummerbund. Evaluation of the sensor output signals utilizes a digital signal processor, which performs a linear prediction routine in real time. Clinical tests reveal that the acoustically based monitor provides Non-Stress Test records which are comparable to those obtained with a commercial ultrasonic transducer.

  4. Modeling of asteroidal dust production rates

    NASA Technical Reports Server (NTRS)

    Durda, Daniel D.; Dermott, Stanley F.; Gustafson, Bo A. S.

    1992-01-01

    The production rate of dust associated with the prominent Hirayama asteroid families and the background asteroidal population are modeled with the intent of using the families as a calibrator of mainbelt dust production. However, the dust production rates of asteroid families may be highly stochastic; there is probably more than an order of magnitude variation in the total area of dust associated with a family. Over 4.5 x 10(exp 9) years of collisional evolution, the volume (mass) of a family is ground down by an order of magnitude, suggesting a similar loss from the entire mainbelt population. Our collisional models show that the number of meteoroids deliverable to Earth also varies stochastically, but only by a factor of 2 to 3.

  5. Functional response models to estimate feeding rates of wading birds

    USGS Publications Warehouse

    Collazo, J.A.; Gilliam, J.F.; Miranda-Castro, L.

    2010-01-01

    Forager (predator) abundance may mediate feeding rates in wading birds. Yet, when modeled, feeding rates are typically derived from the purely prey-dependent Holling Type II (HoII) functional response model. Estimates of feeding rates are necessary to evaluate wading bird foraging strategies and their role in food webs; thus, models that incorporate predator dependence warrant consideration. Here, data collected in a mangrove swamp in Puerto Rico in 1994 were reanalyzed, reporting feeding rates for mixed-species flocks after comparing fits of the HoII model, as used in the original work, to the Beddington-DeAngelis (BD) and Crowley-Martin (CM) predator-dependent models. Model CM received most support (AICc wi = 0.44), but models BD and HoII were plausible alternatives (ΔAICc ≤ 2). Results suggested that feeding rates were constrained by predator abundance. Reductions in rates were attributed to interference, which was consistent with the independently observed increase in aggression as flock size increased (P < 0.05). Substantial discrepancies between the CM and HoII models were possible depending on flock sizes used to model feeding rates. However, inferences derived from the HoII model, as used in the original work, were sound. While Holling's Type II and other purely prey-dependent models have fostered advances in wading bird foraging ecology, evaluating models that incorporate predator dependence could lead to a more adequate description of data and processes of interest. The mechanistic bases used to derive the models used here lead to biologically interpretable results and advance understanding of wading bird foraging ecology.
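
    The three candidate models have simple closed forms; a sketch with conventional parameter names (attack rate a, handling time h, interference c; values hypothetical) showing how forager abundance P enters the BD and CM models but not Holling II. The (P - 1) interference convention follows one common formulation and is an assumption here.

        # Per-capita feeding rates: prey-dependent Holling Type II versus the
        # predator-dependent Beddington-DeAngelis and Crowley-Martin models.
        def holling2(N, a, h):
            return a * N / (1 + a * h * N)

        def beddington_deangelis(N, P, a, h, c):
            return a * N / (1 + a * h * N + c * (P - 1))

        def crowley_martin(N, P, a, h, c):
            return a * N / ((1 + a * h * N) * (1 + c * (P - 1)))

        N, a, h, c = 50.0, 0.4, 0.05, 0.3   # hypothetical prey density and parameters
        for P in (1, 5, 20):                # flock size (number of foragers)
            print(P, holling2(N, a, h),
                  beddington_deangelis(N, P, a, h, c),
                  crowley_martin(N, P, a, h, c))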

  6. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  7. A theory of exchange rate modeling

    SciTech Connect

    Alekseev, A.A.

    1995-09-01

    The article examines exchange rate modeling for two cases: (a) when the trading partners have mutual interests and (b) when the trading partners have antagonistic interests. Exchange rates in world markets are determined by supply and demand for the currency of each state, and states may control the exchange rate of their currency by changing the interest rate, the volume of credit, and product prices in both domestic and export markets. Abstracting from issues of production and technology in different countries, and ignoring various trade, institutional, and other barriers, we consider in this article only the effect of export and import prices on the exchange rate. We propose a new criterion of external trade activity: each trading partner earns a profit which is proportional to the volume of benefits enjoyed by the other partner. We consider a trading cycle that consists of four stages: (a) purchase of goods in the domestic market with the object of selling them abroad; (b) sale of the goods in foreign markets; (c) purchase of goods abroad with the object of selling them in the domestic market; (d) sale of the goods domestically.

  8. Modelling rating curves using remotely sensed LiDAR data

    USGS Publications Warehouse

    Nathanson, Marcus; Kean, Jason W.; Grabs, Thomas J.; Seibert, Jan; Laudon, Hjalmar; Lyon, Steve W.

    2012-01-01

    Accurate stream discharge measurements are important for many hydrological studies. In remote locations, however, it is often difficult to obtain stream flow information because of the difficulty in making the discharge measurements necessary to define stage-discharge relationships (rating curves). This study investigates the feasibility of defining rating curves by using a fluid mechanics-based model constrained with topographic data from airborne LiDAR scanning. The study was carried out for an 8 m wide channel in the boreal landscape of northern Sweden. LiDAR data were used to define channel geometry above a low-flow water surface along the 90-m surveyed reach. The channel topography below the water surface was estimated using the simple assumption of a flat streambed. The roughness for the modelled reach was back-calculated from a single measurement of discharge. The topographic and roughness information was then used to model a rating curve. To isolate the potential influence of the flat-bed assumption, a 'hybrid model' rating curve was developed on the basis of data combined from the LiDAR scan and a detailed ground survey. Whereas this hybrid model rating curve was in agreement with the direct measurements of discharge, the LiDAR model rating curve was equally in agreement with the medium and high flow measurements based on confidence intervals calculated from the direct measurements. The discrepancy between the LiDAR model rating curve and the low flow measurements was likely due to reduced roughness associated with unresolved submerged bed topography. Scanning during periods of low flow can help minimize this deficiency. These results suggest that combined ground surveys and LiDAR scans, or multifrequency LiDAR scans that see 'below' the water surface (bathymetric LiDAR), could be useful in generating the data needed to run such a fluid mechanics-based model. This opens a realm of possibility to remotely sense and monitor stream flows in channels in remote
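
    A simplified version of the workflow under stated assumptions: an idealized rectangular channel whose width and slope stand in for the LiDAR-derived geometry, Manning's equation for discharge, roughness back-calculated from the single gauging, then a modelled rating curve. All numbers are hypothetical.

        # Rating curve from Manning's equation for a rectangular channel, with
        # roughness n back-calculated from one measured discharge; width and
        # slope are hypothetical stand-ins for LiDAR-derived geometry.
        import numpy as np

        WIDTH, SLOPE = 8.0, 0.003                 # m and m/m

        def manning_q(depth, n):
            area = WIDTH * depth
            r_h = area / (WIDTH + 2.0 * depth)    # hydraulic radius
            return area * r_h ** (2.0 / 3.0) * SLOPE ** 0.5 / n

        q_meas, d_meas = 2.5, 0.4                 # one gauging: m^3/s at m depth
        n = manning_q(d_meas, 1.0) / q_meas       # Q scales as 1/n

        for d in np.arange(0.2, 1.61, 0.2):       # modelled rating curve
            print(f"stage {d:4.2f} m -> {manning_q(d, n):6.2f} m^3/s")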

  9. Empirical analysis of quantum finance interest rates models

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yang, Cao

    2009-07-01

    Empirical forward interest rates drive the debt markets. Libor and Euribor futures data are used to calibrate and test models of interest rates based on the formulation of quantum finance. In particular, all the model parameters, including interest rate volatilities, are obtained from market data. The random noise driving the forward interest rates is taken to be a Euclidean two-dimensional quantum field. We analyze two models, namely the bond forward interest rates, which is a linear theory, and the Libor Market Model, which is a nonlinear theory. Both models are analyzed using Libor and Euribor data, with various approximations to match the linear and nonlinear models. The results are quite good, with the linear model having an accuracy of about 99% and the nonlinear model being slightly less accurate. We extend our analysis by directly using the Zero Coupon Yield Curve (ZCYC) data for Libor and for bonds; but due to some technical difficulties we could not derive the model parameters directly from the ZCYC data.

  10. The relationship between specific absorption rate and temperature elevation in anatomically based human body models for plane wave exposure from 30 MHz to 6 GHz.

    PubMed

    Hirata, Akimasa; Laakso, Ilkka; Oizumi, Takuya; Hanatani, Ryuto; Chan, Kwok Hung; Wiart, Joe

    2013-02-21

    According to the international safety guidelines/standard, the whole-body-averaged specific absorption rate (SAR; Poljak et al 2003 IEEE Trans. Electromagn. Compat. 45 141-5) and the peak spatial average SAR are used as metrics for human protection from whole-body and localized exposures, respectively. The IEEE standard (IEEE 2006 IEEE C95.1) indicates that the upper boundary frequency, over which the whole-body-averaged SAR is deemed to be the basic restriction, has been reduced from 6 to 3 GHz, because radio-wave energy is absorbed around the body surface when the frequency is increased. However, no quantitative discussion has been provided to support this description, especially from the standpoint of temperature elevation. It is of interest to investigate the maximum temperature elevation in addition to the core temperature even for a whole-body exposure. In the present study, using anatomically based human models, we computed the SAR and the temperature elevation for a plane-wave exposure from 30 MHz to 6 GHz, taking into account the thermoregulatory response. As the primary result, we found that the ratio of the core temperature elevation to the whole-body-averaged SAR is almost frequency independent for frequencies below a few gigahertz; the ratio decreases above this frequency. At frequencies higher than a few gigahertz, core temperature elevation for the same whole-body-averaged SAR becomes lower due to heat convection from the skin to air. This lower core temperature elevation is attributable to skin temperature elevation caused by the power absorption around the body surface. Then, core temperature elevation even for a whole-body-averaged SAR of 4 W kg⁻¹ with a duration of 1 h was at most 0.8 °C, which is smaller than the threshold considered in the safety guidelines/standard. Further, the peak 10 g averaged SAR is correlated with the maximum body temperature elevations without extremities and pinna over the frequencies considered. These findings

  11. AORN's Revised Model for Evidence Appraisal and Rating.

    PubMed

    Spruce, Lisa; Van Wicklin, Sharon A; Wood, Amber

    2016-01-01

    AORN perioperative practice specialists responsible for authoring the Guidelines for Perioperative Practice have spent several years using evidence-based practice tools adapted from other organizations. AORN now has its own evidence appraisal tools and model for evidence-based practice that can be used by nurses and students to appraise research and nonresearch articles and assign an evidence rating to help inform perioperative practice decisions. The new and revised tools include a new evidence rating model, hierarchy of evidence, and expanded appraisal tools.

  12. High strain-rate model for fiber-reinforced composites

    SciTech Connect

    Aidun, J.B.; Addessio, F.L.

    1995-07-01

    Numerical simulations of dynamic uniaxial strain loading of fiber-reinforced composites are presented that illustrate the wide range of deformation mechanisms that can be captured using a micromechanics-based homogenization technique as the material model in existing continuum mechanics computer programs. Enhancements to the material model incorporate high strain-rate plastic response, elastic nonlinearity, and rate-dependent strength degradation due to material damage, fiber debonding, and delamination. These make the model relevant to designing composite structural components for crash safety, armor, and munitions applications.

  13. Diagnosis of Photochemical Ozone Production Rates and Limiting Factors based on Observation-based Modeling Approach over East Asia: Impact of Radical Chemistry Mechanism and Ozone-Control Implications

    NASA Astrophysics Data System (ADS)

    Kanaya, Y.

    2015-12-01

    Growth of tropospheric ozone, which causes health and climate impacts, is a concern over East Asia, because emissions of precursors have increased dramatically. Photochemical production rates of ozone and their limiting factors, studied primarily for urban locations, have been poorly assessed from the perspective of regional-scale air pollution over East Asia. We performed comprehensive observations of ozone precursors at several locations with regional representativeness and made such an assessment based on the observation-based modeling approach. Here, the diagnosis at Fukue Island (32.75°N, 128.68°E), a remote site in western Japan (May 2009), is highlighted: the highest 10% of hourly ozone concentrations reached 72‒118 ppb during May under the influence of Asian continental outflow. The average in-situ ozone production rate was estimated to be 6.8 ppb per day, suggesting that in-travel production was still active, while larger buildup must have occurred beforehand. Information on the chemical status of the air mass arriving in Japan is important, because it affects how further ozone production occurs after precursor addition from Japanese domestic emissions. The main limiting factor of ozone production was usually NOx, suggesting that domestic NOx emission control is important in reducing further ozone production and the incidence of warning issuance (>120 ppb). VOCs also increased the ozone production rate and occasionally (14% of the time) became dominant. This analysis implies that the recently enacted VOC reduction legislation should be effective. The uncertainty in the radical chemistry mechanism governing ozone production had a non-negligible impact, but the main policy-relevant conclusion was not altered. When chain termination was augmented by HO2-H2O + NO/NO2 reactions and by heterogeneous loss of HO2 on aerosol particle surfaces, the daily ozone production rate decreased by <24%, and the fraction of hours when the VOC-limited condition occurred varied from 14% to 13

  14. A Symmetric Time-Varying Cluster Rate of Descent Model

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2015-01-01

    A model of the time-varying rate of descent of the Orion vehicle was developed based on the observed correlation between canopy projected area and drag coefficient. This initial version of the model assumes cluster symmetry and only varies the vertical component of velocity. The cluster fly-out angle is modeled as a series of sine waves based on flight test data. The projected area of each canopy is synchronized with the primary fly-out angle mode. The sudden loss of projected area during canopy collisions is modeled at minimum fly-out angles, leading to brief increases in rate of descent. The cluster geometry is converted to drag coefficient using empirically derived constants. A more complete model is under development, which computes the aerodynamic response of each canopy to its local incidence angle.

  15. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems differs from that of low data rate systems. Three simulations were built during the development phase of Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling. The first was a model in SIMSCRIPT based upon the determination and processing of each event at each node. The second simulation, developed in C, was based upon isolating the distinct objects that can be identified as the ring, the message, the node, and the set of critical events. The third model further isolated the basic network functionality by creating a single object, the node, which includes the set of critical events that occur at the node; the ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and its features compared. It should be stated that the language used was mainly selected by the model developer because of his past familiarity. Further, the models were not built with the intent to compare either structure or language, but because the complexity of the problem and obvious errors in initial results made it necessary to build alternative models to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand the modeling complexities. Each model is described along with its features and problems. The models are compared, and concluding observations and remarks are presented.

  16. Towards a Model for Protein Production Rates

    NASA Astrophysics Data System (ADS)

    Dong, J. J.; Schmittmann, B.; Zia, R. K. P.

    2007-07-01

    In the process of translation, ribosomes read the genetic code on an mRNA and assemble the corresponding polypeptide chain. The ribosomes perform discrete directed motion which is well modeled by a totally asymmetric simple exclusion process (TASEP) with open boundaries. Using Monte Carlo simulations and a simple mean-field theory, we discuss the effect of one or two "bottlenecks" (i.e., slow codons) on the production rate of the final protein. Confirming and extending previous work by Chou and Lakatos, we find that the location and spacing of the slow codons can affect the production rate quite dramatically. In particular, we observe a novel "edge" effect, i.e., an interaction of a single slow codon with the system boundary. We focus in detail on ribosome density profiles and provide a simple explanation for the length scale which controls the range of these interactions.
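
    A minimal Monte Carlo TASEP with open boundaries and one slow site, in the spirit of the study above: particles (ribosomes) enter at rate alpha, hop right at unit rate except at the bottleneck, and exit at rate beta; the exit current measures the protein production rate. Lattice size and rates are illustrative.

        # Open-boundary TASEP with a single slow site ("slow codon"); the exit
        # current measures the protein production rate. Sizes/rates illustrative.
        import numpy as np

        def tasep_current(L=100, slow_site=50, slow_rate=0.2, alpha=0.8,
                          beta=0.8, sweeps=10_000, seed=3):
            rng = np.random.default_rng(seed)
            lattice = np.zeros(L, dtype=bool)
            exits = 0
            for _ in range(sweeps):
                for _ in range(L + 1):            # one sweep = L+1 random updates
                    i = rng.integers(-1, L)       # -1 encodes the entry move
                    if i == -1:
                        if not lattice[0] and rng.random() < alpha:
                            lattice[0] = True     # ribosome initiates
                    elif i == L - 1:
                        if lattice[-1] and rng.random() < beta:
                            lattice[-1] = False   # ribosome terminates
                            exits += 1
                    elif lattice[i] and not lattice[i + 1]:
                        rate = slow_rate if i == slow_site else 1.0
                        if rng.random() < rate:
                            lattice[i] = False    # hop one site to the right
                            lattice[i + 1] = True
            return exits / sweeps                 # proteins per unit time

        print("current with bottleneck:", tasep_current())
        print("current without:        ", tasep_current(slow_rate=1.0))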

  17. A model of clearance rate regulation in mussels

    NASA Astrophysics Data System (ADS)

    Fréchette, Marcel

    2012-10-01

    Clearance rate regulation has been modelled as an instantaneous response to food availability, independent of the internal state of the animals. This view is incompatible with latent effects during ontogeny and phenotypic flexibility in clearance rate. Internal-state regulation of clearance rate is required to account for these patterns. Here I develop a model of internal-state-based regulation of clearance rate. External factors such as suspended sediments are included in the model. To assess the relative merits of instantaneous regulation and internal-state regulation, I modelled blue mussel clearance rate and growth using a DEB model. In the usual standard feeding module, feeding is governed by a Holling Type II response to food concentration. In the internal-state feeding module, gill ciliary activity and thus clearance rate are driven by internal reserve level. Factors such as suspended sediments were not included in the simulations. The two feeding modules were compared on the basis of their ability to capture the impact of latent effects, of environmental heterogeneity in food abundance and of physiological flexibility on clearance rate and individual growth. The Holling feeding module was unable to capture the effect of any of these sources of variability. In contrast, the internal-state feeding module did so without any modification or ad hoc calibration. Latent effects, however, appeared transient. With simple annual variability in temperature and food concentration, the relationship between clearance rate and food availability predicted by the internal-state feeding module was quite similar to that observed in Norwegian fjords. I conclude that in contrast with the usual Holling feeding module, internal-state regulation of clearance rate is consistent with well-documented growth and clearance rate patterns.

  18. Modeled Estimates of Soil and Dust Ingestion Rates for Children

    EPA Science Inventory

    Daily soil/dust ingestion rates typically used in exposure and risk assessments are based on tracer element studies, which have a number of limitations and do not separate contributions from soil and dust. This article presents an alternate approach of modeling soil and dust inge...

  19. Predicting Buoyant Shear Flows Using Anisotropic Dissipation Rate Models

    NASA Technical Reports Server (NTRS)

    So, R. M. C.; Zhao, C. Y.; Gatski, T. B.

    1999-01-01

    This paper examines the modeling of two-dimensional homogeneous stratified turbulent shear flows using the Reynolds-stress and Reynolds-heat-flux equations. Several closure models have been investigated; the emphasis is placed on assessing the effect of modeling the dissipation rate tensor in the Reynolds-stress equation. Three different approaches are considered: one is an isotropic approach, while the other two are anisotropic approaches. The isotropic approach is based on Kolmogorov's hypothesis and a dissipation rate equation modified to account for vortex stretching. One of the anisotropic approaches is based on an algebraic representation of the dissipation rate tensor, while the other relies on solving a modeled transport equation for this tensor. In addition, within the former anisotropic approach, two different algebraic representations are examined: one is a function of the Reynolds-stress anisotropy tensor, and the other is a function of the mean velocity gradients. The performance of these closure models is evaluated against experimental and direct numerical simulation data of pure shear flows, pure buoyant flows, and buoyant shear flows. Calculations have been carried out over a range of Richardson numbers (Ri) and two different Prandtl numbers (Pr); thus the effect of Pr on the development of counter-gradient heat flux in a stratified shear flow can be assessed. At low Ri, the isotropic model performs well in the predictions of stratified shear flows; however, its performance deteriorates as Ri increases. At high Ri, the transport equation model for the dissipation rate tensor gives the best result. Furthermore, the results also lend credence to the algebraic dissipation rate model based on the Reynolds-stress anisotropy tensor. Finally, it is found that Pr has an effect on the development of counter-gradient heat flux. The calculations show that, under the action of shear, counter-gradient heat flux does not occur even at Ri = 1 in an air flow.

  20. Statistical inference for extinction rates based on last sightings.

    PubMed

    Nakamura, Miguel; Del Monte-Luna, Pablo; Lluch-Belda, Daniel; Lluch-Cota, Salvador E

    2013-09-21

    Rates of extinction can be estimated from sighting records and are assumed to be implicitly constant by many data analysis methods. However, historical sightings are scarce. Frequently, the only information available for inferring extinction is the date of the last sighting. In this study, we developed a probabilistic model and a corresponding statistical inference procedure based on last sightings. We applied this procedure to data on recent marine extirpations and extinctions, seeking to test the null hypothesis of a constant extinction rate. We found that over the past 500 years extirpations in the ocean have been increasing but at an uncertain rate, whereas a constant rate of global marine extinctions is statistically plausible. The small sample sizes of marine extinction records generate such high uncertainty that different combinations of model inputs can yield different outputs that fit the observed data equally well. Thus, current marine extinction trends may be idiosyncratic.
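    For readers unfamiliar with this class of inference, the snippet below implements the classic last-sighting test of Solow (1993), in which a small p-value favours extinction before the end of the record. It is shown only to illustrate the style of reasoning; the authors' probabilistic model is more general.

        def solow_p_value(sighting_times, record_end):
            # Under a constant sighting rate with no extinction, the n sighting
            # times are uniform on (0, T), so P(last sighting <= t_n) = (t_n/T)**n.
            n = len(sighting_times)
            t_last = max(sighting_times)
            return (t_last / record_end) ** n

        # example: four sightings, record ending at year 25 of observation
        print(solow_p_value([3.0, 7.5, 12.0, 14.2], record_end=25.0))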

  1. Prediction of interest rate using CKLS model with stochastic parameters

    SciTech Connect

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set of observed interest rates at the j′-th time points with j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the same four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to cover the observed future interest rates better than those based on the model with fixed parameters.
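    For reference, the CKLS dynamics take the form dr = (α + βr) dt + σ r^γ dW. The sketch below simulates one path by Euler-Maruyama with fixed, illustrative parameters; the paper's contribution is precisely to let these four parameters vary stochastically across sliding windows.

        import math, random

        def ckls_path(r0, alpha, beta, sigma, gamma, dt, n_steps):
            # Euler-Maruyama steps of dr = (alpha + beta*r)dt + sigma*r**gamma dW
            r, path = r0, [r0]
            for _ in range(n_steps):
                dW = random.gauss(0.0, math.sqrt(dt))
                r += (alpha + beta * r) * dt + sigma * max(r, 0.0) ** gamma * dW
                path.append(r)
            return path

        # illustrative parameter values, one year of daily steps
        rates = ckls_path(r0=0.05, alpha=0.01, beta=-0.2, sigma=0.1, gamma=0.5,
                          dt=1 / 252, n_steps=252)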

  2. Justification of Drug Product Dissolution Rate and Drug Substance Particle Size Specifications Based on Absorption PBPK Modeling for Lesinurad Immediate Release Tablets.

    PubMed

    Pepin, Xavier J H; Flanagan, Talia R; Holt, David J; Eidelman, Anna; Treacy, Don; Rowlings, Colin E

    2016-09-01

    In silico absorption modeling has been performed to assess the impact of in vitro dissolution on in vivo performance for ZURAMPIC (lesinurad) tablets. The dissolution profiles of lesinurad tablets generated using the quality control method were used as input to a GastroPlus model to estimate in vivo dissolution in the various parts of the GI tract and to predict human exposure. A model was set up that accounts for differences in dosage-form transit, dissolution, local pH in the GI tract, and fluid volumes available for dissolution. The predictive ability of the model was demonstrated by confirming that it can reproduce the Cmax observed in an independent clinical trial. The model also indicated that drug product batches that pass the proposed dissolution specification of Q = 80% in 30 min are anticipated to be bioequivalent to the clinical reference batch. To further explore the dissolution space, additional simulations were performed using a theoretical dissolution profile below the proposed specification. The GastroPlus modeling indicates that such a batch will also be bioequivalent to standard clinical batches despite having a dissolution profile that would fail the proposed dissolution specification of Q = 80% in 30 min. This demonstrates that the proposed dissolution specification sits comfortably within a region of dissolution performance where bioequivalence is anticipated and is not near an edge of failure for dissolution, providing additional confidence in the proposed specifications. Finally, simulations were performed using a virtual drug substance batch with a particle size distribution at the limit of the proposed specification for particle size. Based on these simulations, such a batch is also anticipated to be bioequivalent to the clinical reference, demonstrating that the proposed specification limits for particle size distribution would give products bioequivalent to the pivotal clinical batches. PMID:27438964

  4. Nonlinear modelling and control for heart rate response to exercise.

    PubMed

    Zhang, Y; Chen, W; Su, S W; Celler, B

    2012-01-01

    In order to accurately regulate the cardiovascular response to exercise for an individual exerciser, this study proposes an integrated modelling and control approach based on ε-insensitive Support Vector Regression (SVR) and a switching control strategy. First, a control-oriented modelling approach is proposed to depict the nonlinear behaviour of the cardiovascular response at both the onset and offset of treadmill exercise using support vector machine regression. Then, based on the established nonlinear time-variant model, a novel switching Model Predictive Control (MPC) algorithm is proposed for the optimisation of exercise effort. The designed controller can take into account both coefficient drift and parameter jumps by embedding the identified model coefficients into the optimiser and adopting a switching strategy during the transition between the onset and offset of exercise. The effectiveness of the proposed modelling and control approach was demonstrated by regulating the dynamic heart rate response to exercise in simulations using MATLAB.

  5. Toward the Development of a Fundamentally Based Chemical Model for Cyclopentanone: High-Pressure-Limit Rate Constants for H Atom Abstraction and Fuel Radical Decomposition.

    PubMed

    Zhou, Chong-Wen; Simmie, John M; Pitz, William J; Curran, Henry J

    2016-09-15

    Theoretical aspects of the development of a chemical kinetic model for the pyrolysis and combustion of a cyclic ketone, cyclopentanone, are considered. Calculated thermodynamic and kinetic data are presented for the first time for the principal species including 2- and 3-oxo-cyclopentyl radicals, which are in reasonable agreement with the literature. These radicals can be formed via H atom abstraction reactions by Ḣ and Ö atoms and ȮH, HȮ2, and ĊH3 radicals, the rate constants of which have been calculated. Abstraction from the β-hydrogen atom is the dominant process when ȮH is involved, but the reverse holds true for HȮ2 radicals. The subsequent β-scission of the radicals formed is also determined, and it is shown that recent tunable VUV photoionization mass spectrometry experiments can be interpreted in this light. The bulk of the calculations used the composite model chemistry G4, which was benchmarked in the simplest case with a coupled cluster treatment, CCSD(T), in the complete basis set limit. PMID:27558073

  7. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM. PMID:26737125
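    A minimal sketch of the local-fitting idea follows. The model form below is assumed from the abstract's description (an exponentially decaying cosine with frequency in the physiological range); the authors' actual estimator and parameters differ. The heart rate is read from the fitted frequency.

        import numpy as np
        from scipy.optimize import curve_fit

        def beat_model(t, amp, decay, freq, phase, offset):
            return amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase) + offset

        fs = 100.0                                   # sampling rate (Hz)
        t = np.arange(0.0, 1.0, 1.0 / fs)
        segment = np.exp(-0.5 * t) * np.cos(2 * np.pi * 1.2 * t)  # synthetic PPG window

        p0 = [1.0, 0.5, 1.0, 0.0, 0.0]               # start near 60 BPM (1 Hz)
        params, _ = curve_fit(beat_model, t, segment, p0=p0)
        print("estimated heart rate: %.1f BPM" % (params[2] * 60))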

  8. A generalized Prandtl-Ishlinskii model for characterizing the rate-independent and rate-dependent hysteresis of piezoelectric actuators.

    PubMed

    Gan, Jinqiang; Zhang, Xianmin; Wu, Heng

    2016-03-01

    In this paper, a generalized hysteresis model is developed to describe both rate-independent and rate-dependent hysteresis in piezoelectric actuators. Based on the classical Prandtl-Ishlinskii (P-I) model, the developed model adds a quadratic polynomial and makes other small changes. When it is used to describe rate-independent hysteresis, the parameters of the model are constants, which can be identified by self-adaptive particle swarm optimization. The effectiveness of this rate-independent modified P-I model is demonstrated by comparing simulation results of the developed model and the classical Prandtl-Ishlinskii model. Simulation results suggest that the rate-independent modified P-I model can describe hysteresis more precisely; compared with the classical P-I model, it reduces the modeling error by more than 50%. When the model is used to describe rate-dependent hysteresis, a one-sided operator is adopted and the parameters are functions of the input frequency. The results of the experiments and simulations show that the proposed models can accurately describe both rate-independent and rate-dependent hysteresis in piezoelectric actuators. PMID:27036808
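    A minimal sketch of the rate-independent variant described above: a weighted superposition of play operators plus a quadratic input polynomial. The thresholds, weights, and polynomial coefficients below are placeholders for values that would normally be identified (e.g., by the self-adaptive particle swarm optimization the authors use).

        def play(x_seq, r):
            # discrete play (backlash) operator with threshold r
            z, out = 0.0, []
            for x in x_seq:
                z = max(x - r, min(x + r, z))
                out.append(z)
            return out

        def modified_pi(x_seq, radii, weights, a2, a1, a0):
            plays = [play(x_seq, r) for r in radii]
            return [sum(w * p[k] for w, p in zip(weights, plays))
                    + a2 * x ** 2 + a1 * x + a0
                    for k, x in enumerate(x_seq)]

        # triangular input sweep; all parameter values are placeholders
        x = [0.1 * i for i in range(11)] + [1.0 - 0.1 * i for i in range(11)]
        y = modified_pi(x, radii=[0.0, 0.1, 0.2], weights=[0.5, 0.3, 0.2],
                        a2=0.05, a1=0.1, a0=0.0)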

  10. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total that a hypothetical satellite permanently stationed above the area would have reported; it therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM Microwave Imager (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics arising 1) from evolution of the official algorithms used to process the data and 2) from differences relative to other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  11. Ground-Based Remote Retrievals of Cumulus Entrainment Rates

    SciTech Connect

    Wagner, Timothy J.; Turner, David D.; Berg, Larry K.; Krueger, Steven K.

    2013-07-26

    While fractional entrainment rates for cumulus clouds have typically been derived from airborne observations, this limits the size and scope of available data sets. To increase the number of continental cumulus entrainment rate observations available for study, an algorithm for retrieving them from ground-based remote sensing observations has been developed. This algorithm, called the Entrainment Rate In Cumulus Algorithm (ERICA), uses the suite of instruments at the Southern Great Plains (SGP) site of the United States Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility as inputs into a Gauss-Newton optimal estimation scheme, in which an assumed guess of the entrainment rate is iteratively adjusted through intercomparison of modeled liquid water path and cloud droplet effective radius to their observed counterparts. The forward model in this algorithm is the Explicit Mixing Parcel Model (EMPM), a cloud parcel model that treats entrainment as a series of discrete entrainment events. A quantified value for measurement uncertainty is also returned as part of the retrieval. Sensitivity testing and information content analysis demonstrate the robust nature of this method for retrieving accurate observations of the entrainment rate without the drawbacks of airborne sampling. Results from a test of ERICA on three months of shallow cumulus cloud events show significant variability of the entrainment rate of clouds in a single day and from one day to the next. The mean value of 1.06 km⁻¹ for the entrainment rate in this dataset corresponds well with prior observations and simulations of the entrainment rate in cumulus clouds.

  12. Modeling Equity for Alternative Water Rate Structures

    NASA Astrophysics Data System (ADS)

    Griffin, R.; Mjelde, J.

    2011-12-01

    The rising popularity of increasing block rates for urban water runs counter to mainstream economic recommendations, yet decision makers in rate design forums are attracted to the notion of higher prices for larger users. Among economists, it is widely appreciated that uniform rates have stronger efficiency properties than increasing block rates, especially when volumetric prices incorporate intrinsic water value. Yet, except for regions where water market purchases have forced urban authorities to include water value in water rates, economic arguments have weakly penetrated policy. In this presentation, recent evidence will be reviewed regarding long-term trends in urban rate structures while observing economic principles pertaining to these choices. The main objective is to investigate the equity of increasing block rates as contrasted to uniform rates for a representative city. Using data from four Texas cities, household water demand is established as a function of marginal price, income, weather, number of residents, and property characteristics. Two alternative rate proposals are designed on the basis of recent experiences for both water and wastewater rates. After specifying a reasonable number (~200) of diverse households populating the city and parameterizing each household's characteristics, every household's consumption selections are simulated for twelve months (see the billing sketch below). This procedure is repeated for both rate systems. Monthly water and wastewater bills are also computed for each household. Most importantly, while balancing the budget of the city utility we compute the effect of switching rate structures on the welfare of households of differing types. Some of the empirical findings are as follows. Under conditions of absent water scarcity, households of opposing characters such as low versus high income do not have strong preferences regarding rate structure selection. This changes as water scarcity rises and as water's opportunity costs are allowed to
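    The core billing arithmetic being compared is simple; the sketch below contrasts a uniform volumetric rate with an increasing block rate for a few monthly consumption levels. All prices and block boundaries here are hypothetical, not the study's calibrated values.

        def uniform_bill(use, price=3.0, fixed=10.0):
            # uniform rate: one volumetric price for every unit consumed
            return fixed + price * use

        def block_bill(use, fixed=10.0,
                       blocks=((10, 2.0), (20, 4.0), (float("inf"), 6.0))):
            # increasing block rate: each successive block carries a higher price
            bill, lower = fixed, 0.0
            for upper, price in blocks:
                if use <= lower:
                    break
                bill += price * (min(use, upper) - lower)
                lower = upper
            return bill

        for monthly_kgal in (5, 15, 40):   # thousand gallons per month
            print(monthly_kgal, uniform_bill(monthly_kgal), block_bill(monthly_kgal))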

  13. Factors relating to poor survival rates of aged cervical cancer patients: a population-based study with the relative survival model in Osaka, Japan.

    PubMed

    Ioka, Akiko; Ito, Yuri; Tsukuma, Hideaki

    2009-01-01

    Poor survival of older cervical cancer patients has been reported; however, related factors, such as the extent of disease and the competing risk from aging, have not been well evaluated. We applied the relative survival model developed by Dickman et al. to resolve this issue. Study subjects were cervical cancer patients retrieved from the Osaka Cancer Registry. They were limited to the 10,048 reported cases diagnosed from 1975 to 1999, based on the quality of data collection on vital status. Age at diagnosis was categorized into <30, 30-54, 55-64, and ≥65 years. The impact of prognostic factors on 5-year survival was evaluated with the relative survival model, incorporating patients' expected survival in multivariate analysis. The age-specific relative excess risk (RER) of death was significantly higher for older groups as compared with women aged 30-54 years (RER, 1.58 at 55-64 and 2.51 at ≥65 years). The RER was decreased by 64.8% among the 55-64 year olds as an effect of cancer stage at diagnosis, and by 43.4% among those 65 years old and over. After adding adjustment for treatment modalities, the RER was no longer significantly higher among 55-64 year olds; however, it was still higher among those aged 65 years and over. Advanced stage at diagnosis was the main determinant of poor survival among the aged cervical cancer patients, although other factors such as limitations on the combination of treatment were also suggested to have an influence in those aged 65 years and over.

  14. Modeling Seismicity Rate Changes in Oklahoma and Arkansas

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Michael, A. J.

    2012-12-01

    The rate of M≥3 earthquakes in the central and eastern US increased beginning in 2009, particularly in regions such as Oklahoma and central Arkansas where fluid injection has occurred (Ellsworth et al., SSA abs, 2012; Horton, SRL, 2012). We compare rate changes observed in Oklahoma, which had a low background seismicity rate before 2009, to rate changes observed in central Arkansas, which had swarms prior to the start of wastewater injection (Chiu et al., BSSA, 1984; Horton, SRL, 2012). In both cases, stochastic Epidemic-Type Aftershock Sequence (ETAS) models (Ogata, JASA, 1988) and statistical tests demonstrate that the background rate of independent events and the aftershock productivity must increase in 2009 in order to explain the observed increase in seismicity. Productivity is lower during the earlier tectonic swarms in Arkansas. The change in aftershock productivity may provide a way to distinguish manmade from natural earthquake rate changes and could provide insights into the physical mechanisms of induced seismicity. We fit the ETAS model, which is based on empirical aftershock scaling laws such as Omori's Law and the Gutenberg-Richter magnitude distribution, to a 1973-2011 USGS PDE catalog of M≥3 Oklahoma earthquakes and a 1982-2012 ANSS catalog of M≥2.2 Arkansas earthquakes. To determine whether a rate increase is due to a change in background seismicity rate, aftershock productivity, or some combination of the two, we do the following: 1) fit the model parameters to the data, 2) convert origin times to transformed times (Ogata, JGR, 1992), and 3) use Runs and autocorrelation function tests to test the null hypothesis that the transformed times are drawn from a Poisson distribution with constant rate (as expected when no external processes trigger earthquakes besides a constant tectonic loading rate). In both cases a single set of parameters cannot fit the entire time period, suggesting that significant changes in the underlying process occurred
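    For context, the ETAS conditional intensity the authors fit has the standard Ogata (1988) form sketched below: a constant background rate plus Omori-law aftershock terms whose productivity scales exponentially with magnitude. The parameter values shown are illustrative, not the fitted values for either catalog.

        def etas_intensity(t, events, mu, K, a, c, p, m0):
            # lambda(t) = mu + sum over past events of
            #             K * 10**(a*(M_i - m0)) / (t - t_i + c)**p
            rate = mu
            for t_i, m_i in events:
                if t_i < t:
                    rate += K * 10.0 ** (a * (m_i - m0)) / (t - t_i + c) ** p
            return rate

        catalog = [(0.0, 4.5), (2.0, 3.2), (2.1, 3.0)]   # (origin time, magnitude)
        print(etas_intensity(5.0, catalog, mu=0.1, K=0.05, a=1.0, c=0.01,
                             p=1.1, m0=3.0))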

  15. An empirical model of H2O, CO2 and CO coma distributions and production rates for comet 67P/Churyumov-Gerasimenko based on ROSINA/DFMS measurements and AMPS-DSMC simulations

    NASA Astrophysics Data System (ADS)

    Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA team

    2016-10-01

    We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results which are extracted from simulations at a range of radial distances, rotation phases and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water shifting production from North to South across the inbound equinox while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single point measurements. In this presentation we will present the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.

  16. Model-based estimates of annual survival rate are preferable to observed maximum lifespan statistics for use in comparative life-history studies

    USGS Publications Warehouse

    Krementz, D.G.; Sauer, J.R.; Nichols, J.D.

    1989-01-01

    Estimates of longevity are available for many animals and are commonly used in comparative life-history analyses. We suggest that annual survival rate is a more appropriate life-history parameter for most comparative life-history analyses. Observed maximum longevities were not correlated with the annual survival rate estimates and appear to be unstable over time. We recommend that observed maximum lifespans not be used in life-history analyses.

  17. Strain Rate Dependent Material Model for Orthotropic Metals

    NASA Astrophysics Data System (ADS)

    Vignjevic, Rade

    2016-08-01

    In manufacturing processes, anisotropic metals are often exposed to loading at high strain rates in the range from 10² s⁻¹ to 10⁶ s⁻¹ (e.g., stamping, cold spraying, and explosive forming). These types of loading often involve the generation and propagation of shock waves within the material. The material behaviour under such complex loading needs to be accurately modelled in order to optimise the manufacturing process and achieve appropriate properties of the manufactured component. The presented research concerns the development and validation of a thermodynamically consistent, physically based constitutive model for metals under high-rate loading. The model is capable of modelling damage, failure, and the formation and propagation of shock waves in anisotropic metals. The model has two main parts: the strength part, which defines the material response to shear deformation, and an equation of state (EOS), which defines the material response to isotropic volumetric deformation [1]. The constitutive model was implemented into the transient nonlinear finite element code DYNA3D [2] and our in-house SPH code. Limited model validation was performed by simulating a number of high-velocity material characterisation and validation impact tests. The new damage model was developed in the framework of configurational continuum mechanics and irreversible thermodynamics with internal state variables. The use of the multiplicative decomposition of the deformation gradient makes the model applicable to arbitrary plastic and damage deformations. To account for the physical mechanisms of failure, the concept of thermally activated damage initially proposed by Tuler and Butcher [3] and Klepaczko [4] was adopted as the basis for the new damage evolution model. This makes the proposed damage/failure model compatible with the Mechanical Threshold Strength (MTS) model (Follansbee and Kocks [5]; Chen and Gray [6]), which was used to control the evolution of flow stress during plastic deformation.

  18. 9 CFR 391.2 - Base time rate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Base time rate. 391.2 Section 391.2 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE FOOD SAFETY AND... ACCREDITATION § 391.2 Base time rate. The base time rate for inspection services provided pursuant to §§...

  19. 9 CFR 592.510 - Base time rate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Base time rate. 592.510 Section 592... PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Fees and Charges § 592.510 Base time rate. The base time rate for voluntary inspection services for egg products is $47.79 per hour per...

  20. 9 CFR 391.2 - Base time rate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Base time rate. 391.2 Section 391.2 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE FOOD SAFETY AND... ACCREDITATION § 391.2 Base time rate. The base time rate for inspection services provided pursuant to §§...

  1. 9 CFR 592.510 - Base time rate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Base time rate. 592.510 Section 592... PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Fees and Charges § 592.510 Base time rate. The base time rate for voluntary inspection services for egg products is $47.79 per hour per...

  2. Modeling the nonradiative decay rate of electronically excited thioflavin T.

    PubMed

    Erez, Yuval; Liu, Yu-Hui; Amdursky, Nadav; Huppert, Dan

    2011-08-01

    A computational model of nonradiative decay is developed and applied to explain the time-dependent emission spectrum of thioflavin T (ThT). The computational model is based on a previous model developed by Glasbeek and co-workers (van der Meer, M. J.; Zhang, H.; Glasbeek, M. J. Chem. Phys. 2000, 112, 2878) for auramine O, a molecule that, like ThT, exhibits a high nonradiative rate. The nonradiative rates of both auramine O and ThT are inversely proportional to the solvent viscosity. The Glasbeek model assumes that the excited state consists of an adiabatic potential surface constructed by adiabatic coupling of emissive and dark states. For ThT, the twist angle between the benzothiazole and the aniline is responsible for the extensive mixing of the two excited states. At a twist angle of 90°, the S(1) state assumes a charge-transfer-state character with very small oscillator strength, which causes the emission intensity to be very small as well. In the ground state, the twist angle of ThT is rather small. The photoexcitation leads first to a strongly emissive state (small twist angle). As time progresses, the twist angle increases and the oscillator strength decreases. The fit of the experimental results by the model calculations is good for times longer than 3 ps. When a two-coordinate model is invoked or a solvation spectral-shift component is added, the fit to the experimental results is good at all times. PMID:21711024

  3. Development of an absorbance-based response model for monitoring the growth rates of Arcobacter butzleri as a function of temperature, pH, and NaCl concentration.

    PubMed

    Park, Shin Young; Ha, Sang-Do

    2015-01-01

    In this study, the growth of Arcobacter butzleri in poultry was evaluated as a function of storage temperature (5, 22.5, and 40°C), pH (5, 7, and 9), and NaCl concentration (0, 4, and 8%). A predictive model was developed using absorbance-based response surface methodology to describe the growth rate. The primary model predicted the growth rate with a good fit (R²≥0.95), and the secondary model was obtained by nonlinear regression analysis and calculated as follows: Growth rate = -2.267274 - 0.024181(Temp) + 0.6459384(pH) + 0.1926227(NaCl) + 0.0024661(Temp×pH) - 0.001312(Temp×NaCl) - 0.018802(pH×NaCl) + 0.000467(Temp²) - 0.041711(pH²) - 0.007426(NaCl²). Our data showed that the growth of A. butzleri can be completely inhibited at a pH of 5 (in the absence of NaCl, at 5°C) and at a pH of 9 (in the presence of 8% NaCl, at 5°C). The response surface model was statistically significant (P<0.0001), as evident from the Fisher F test and the coefficient of determination (R², 0.95). The model was also verified by the bias factor (Bf, 0.839), accuracy factor (Af, 1.343), and mean square error (MSE, 0.0138). The newly developed secondary model of growth rate for A. butzleri could be incorporated into a tertiary modeling program such as the Pathogen Modeling Program (U.S. Department of Agriculture [USDA]) or the Food Micro Model (United Kingdom), and could thereby be used to predict the growth kinetics of A. butzleri as a function of a combination of environmental factors. Ultimately, the developed model can be used to reduce A. butzleri in poultry production, processing, and distribution, thereby enhancing food safety.
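    The reported secondary model is straightforward to evaluate; the function below transcribes the quoted coefficients directly (inputs: temperature in °C, pH, and NaCl in %).

        # direct transcription of the secondary model quoted in the abstract
        def growth_rate(temp, ph, nacl):
            return (-2.267274 - 0.024181 * temp + 0.6459384 * ph + 0.1926227 * nacl
                    + 0.0024661 * temp * ph - 0.001312 * temp * nacl
                    - 0.018802 * ph * nacl + 0.000467 * temp ** 2
                    - 0.041711 * ph ** 2 - 0.007426 * nacl ** 2)

        # mid storage temperature, neutral pH, no NaCl
        print(growth_rate(22.5, 7.0, 0.0))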

  4. Tantalum strength model incorporating temperature, strain rate and pressure

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt

    Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  5. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000 during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  6. A simple reaction-rate model for turbulent diffusion flames

    NASA Technical Reports Server (NTRS)

    Bangert, L. H.

    1975-01-01

    A simple reaction rate model is proposed for turbulent diffusion flames in which the reaction rate is proportional to the turbulence mixing rate. The reaction rate is also dependent on the mean mass fraction and the mean square fluctuation of mass fraction of each reactant. Calculations are compared with experimental data and are generally successful in predicting the measured quantities.

  8. Triple-α reaction rate constrained by stellar evolution models

    NASA Astrophysics Data System (ADS)

    Suda, Takuma; Hirschi, Raphael; Fujimoto, Masayuki Y.

    2012-11-01

    We investigate the quantitative constraint on the triple-α reaction rate based on stellar evolution theory, motivated by the recent significant revision of the rate proposed by nuclear physics calculations. Targeted stellar models were computed in order to investigate the impact of that rate in the mass range of 0.8<=M/Msolar<=25 and in the metallicity range between Z = 0 and Z = 0.02. The revised rate has a significant impact on the evolution of low-and intermediate-mass stars, while its influence on the evolution of massive stars (M > 10Msolar) is minimal. We find that employing the revised rate suppresses helium shell flashes on AGB phase for stars in the initial mass range 0.8<=M/Msolar<=6, which is contradictory to what is observed. The absence of helium shell flashes is due to the weak temperature dependence of the revised triple-α reaction cross section at the temperature involved. In our models, it is suggested that the temperature dependence of the cross section should have at least ν > 10 at T = 1-1.2×108K where the cross section is proportional to Tν. We also derive the helium ignition curve to estimate the maximum cross section to retain the low-mass first red giants. The semi-analytically derived ignition curves suggest that the reaction rate should be less than ~ 10-29 cm6 s-1 mole-2 at ~ 107.8 K, which corresponds to about three orders of magnitude larger than that of the NACRE compilation.

  9. Empirical rate equation model and rate calculations of hydrogen generation for Hanford tank waste

    SciTech Connect

    HU, T.A.

    1999-07-13

    Empirical rate equations are derived to estimate hydrogen generation based on chemical reactions, radiolysis of water and organic compounds, and corrosion processes. A comparison of the generation rates observed in the field with the rates calculated for twenty-eight tanks shows agreement within a factor of two to three.

  10. Rate-based screening of pressure-dependent reaction networks

    NASA Astrophysics Data System (ADS)

    Matheu, David M.; Lada, Thomas A.; Green, William H.; Dean, Anthony M.; Grenda, Jeffrey M.

    2001-08-01

    Computer tools to automatically generate large gas-phase kinetic models find increasing use in industry. Until recently, mechanism generation algorithms have been restricted to generating kinetic models in the high-pressure limit, unless special adjustments are made for particular cases. A new approach, recently presented, allows the automated generation of pressure-dependent reaction networks for chemically and thermally activated reactions (Grenda et al., 2000; Grenda and Dean, in preparation; Grenda et al., 1998; see Refs. [1-3]). These pressure-dependent reaction networks can be quite large and can contain a large number of unimportant pathways. We thus present an algorithm for the automated screening of pressure-dependent reaction networks. It allows a computer to discover and incorporate pressure-dependent reactions in a manner consistent with the existing rate-based model generation method. The new algorithm works by using a partially-explored (or "screened") pressure-dependent reaction network to predict rate constants, and updating predictions as more parts of the network are discovered. It requires only partial knowledge of the network connectivity, and allows the user to explore only the important channels at a given temperature and pressure. Applications to vinyl + O2, 1-naphthyl + acetylene and phenylvinyl radical dissociation are presented. We show that the error involved in using a truncated pressure-dependent network to predict a rate constant is insignificant, for all channels whose yields are significantly greater than a user-specified tolerance. A bound for the truncation error is given. This work demonstrates the feasibility of using screened networks to predict pressure-dependent rate constants k(T,P).

  11. Comparison of field theory models of interest rates with market data.

    PubMed

    Baaquie, Belal E; Srikant, Marakani

    2004-03-01

    We calibrate and test several variants of field theory models of the interest rate with data from Eurodollar futures. Models based on psychological factors are seen to provide the best fit to the market. We also make a model-independent determination of the volatility function of the forward rates from market data.

  13. Eruption rates in explosive eruptions: Ground truth and models

    NASA Astrophysics Data System (ADS)

    Tumi Gudmundsson, Magnus; Durig, Tobias; Höskuldsson, Ármann; Thordarson, Thorvaldur; Larsen, Gudrún; Óladóttir, Bergrún A.; Högnadóttir, Thórdís; Oddsson, Björn; Björnsson, Halldór; Gudmundsdóttir, Esther R.

    2015-04-01

    Estimations of eruption rates in explosive eruptions are difficult, and error margins are invariably high. In small to moderate-sized eruptions the effects of wind on plume height can be large, and in larger eruptions observations are often difficult owing to masking of the source by low cloud, pyroclastic density currents, and monitoring-system saturation. Several medium-sized explosive eruptions in recent years have been important in sparking intense research on, e.g., atmosphere-plume interaction and the associated effects of wind on plume height. Other methods that do not rely on plume height include infrared satellite monitoring of the atmospheric loading of fine tephra, infrasound, and analysis of video recordings from vents; it has also been suggested that co-eruptive tilt-meter deformation data can predict eruption intensity. The eruptions of Eyjafjallajökull in 2010 and Grímsvötn in 2011 provided a wealth of data that can potentially be used to develop constraints on eruption rates in explosive eruptions. A key requirement in any such comparison between models and data is knowledge of the tephra fallout that is as detailed as possible. For both Eyjafjallajökull and Grímsvötn, intensive field efforts took place to map the deposits during and immediately after the eruptions. The resulting maps cover both individual phases and the total fallout. Comparison of these data with plume-based and other models of mass discharge rates is presently work in progress. A desirable future aim is near-real-time estimates of mass eruption rates based on several of the parameters mentioned above. This type of work is currently ongoing within the framework of the EU-funded supersite project FUTUREVOLC.

  14. Rate equation modelling and investigation of quantum cascade detector characteristics

    NASA Astrophysics Data System (ADS)

    Saha, Sumit; Kumar, Jitendra

    2016-10-01

    A simple, precise transport model based on a rate equation approach is proposed for the characterization of a quantum cascade detector. Resonant tunneling transport is incorporated in the rate equation model through a resonant tunneling current density term. All the major scattering processes are included in the rate equation model. The effect of temperature on the quantum cascade detector characteristics is examined, considering the temperature-dependent band parameters and the carrier scattering processes. Incorporation of the resonant tunneling process in the rate equation model improves the modeled detector performance appreciably and reproduces the detector characteristics within experimental accuracy.

  15. Individual-Based Completion Rates for Apprentices. Technical Paper

    ERIC Educational Resources Information Center

    Karmel, Tom

    2011-01-01

    Low completion rates for apprentices and trainees have received considerable attention recently and it has been argued that NCVER seriously understates completion rates. In this paper Tom Karmel uses NCVER data on recommencements to estimate individual-based completion rates. It is estimated that around one-quarter of trade apprentices swap…

  16. Item Response Models for Local Dependence among Multiple Ratings

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Su, Chi-Ming; Qiu, Xue-Lan

    2014-01-01

    Ratings given to the same item response may have a stronger correlation than those given to different item responses, especially when raters interact with one another before giving ratings. The rater bundle model was developed to account for such local dependence by forming multiple ratings given to an item response as a bundle and assigning…

  17. Solutions of two-factor models with variable interest rates

    NASA Astrophysics Data System (ADS)

    Li, Jinglu; Clemons, C. B.; Young, G. W.; Zhu, J.

    2008-12-01

    The focus of this work is on numerical solutions to two-factor option pricing partial differential equations with variable interest rates. Two interest rate models, the Vasicek model and the Cox-Ingersoll-Ross model (CIR), are considered. Emphasis is placed on the definition and implementation of boundary conditions for different portfolio models, and on appropriate truncation of the computational domain. An exact solution to the Vasicek model and an exact solution for the price of bonds convertible to stock at expiration under a stochastic interest rate are derived. The exact solutions are used to evaluate the accuracy of the numerical simulation schemes. For the numerical simulations the pricing solution is analyzed as the market completeness decreases from the ideal complete level to one with higher volatility of the interest rate and a slower mean-reverting environment. Simulations indicate that the CIR model yields more reasonable results than the Vasicek model in a less complete market.
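    As an example of the kind of exact solution used for benchmarking, the standard closed-form Vasicek zero-coupon bond price for dr = a(b − r) dt + σ dW is implemented below. This is a textbook result; the paper's convertible-bond solution under a stochastic rate is more involved.

        import math

        def vasicek_bond(r, tau, a, b, sigma):
            # P(t, T) = A * exp(-B * r), with tau = T - t
            B = (1.0 - math.exp(-a * tau)) / a
            A = math.exp((B - tau) * (a * a * b - sigma * sigma / 2.0) / (a * a)
                         - sigma * sigma * B * B / (4.0 * a))
            return A * math.exp(-B * r)

        # illustrative parameters: 5-year bond, 5% short rate
        print(vasicek_bond(r=0.05, tau=5.0, a=0.3, b=0.04, sigma=0.01))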

  18. Delineating the Average Rate of Change in Longitudinal Models

    ERIC Educational Resources Information Center

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…

  19. The Rasch Rating Model and the Disordered Threshold Controversy

    ERIC Educational Resources Information Center

    Adams, Raymond J.; Wu, Margaret L.; Wilson, Mark

    2012-01-01

    The Rasch rating (or partial credit) model is a widely applied item response model that is used to model ordinal observed variables that are assumed to collectively reflect a common latent variable. In the application of the model there is considerable controversy surrounding the assessment of fit. This controversy is most notable when the set of…

  20. Model based manipulator control

    NASA Technical Reports Server (NTRS)

    Petrosky, Lyman J.; Oppenheim, Irving J.

    1989-01-01

    The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.

  1. 76 FR 4569 - Market-Based Rate Affiliate Restrictions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-26

    ..., fuel procurement or resource planning may not be shared under the market- based rate affiliate..., 75 FR 20796 (Apr. 21, 2010), Notice of Proposed Rulemaking, FERC Stats. & Regs. ] 32,567 (2010). I... affiliates, i.e., affiliates whose power sales are regulated in whole or in part on a market-based rate...

  2. Bioremediation of oil-contaminated soil -- A rate model

    SciTech Connect

    Li, K.Y.; Zhang, Y.; Xu, T.

    1995-12-31

    Three rate equations, a modified Monod equation and two mass transfer rate equations, were used to calculate the biodegradation rate, oxygen transfer rate, and oil transfer rate during a bioremediation process of oil-contaminated soil. Based on experimental rate constants, these three rates were calculated and compared. It was found that the bioremediation rate of oil-contaminated soil could be controlled by the mass transfer of oil into the aqueous solution (0.12 mg BOD/(L·h)). When the oil transfer rate is enhanced by at least 10 times, the oxygen transfer process (0.1-1.0 mg BOD/(L·h)) becomes the rate-controlling step. For most cases, the biodegradation of oil in the aqueous solution is not the limiting step unless the microbial population in the aqueous solution is less than 100 mg VSS/L.
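    A toy version of the three-rate comparison follows. The functional forms are the standard Monod and two-film mass-transfer expressions; the parameter values are illustrative, not the paper's fitted constants.

        def monod_rate(S, mu_max, Ks, X):
            # modified-Monod biodegradation rate; S = substrate, X = biomass
            return mu_max * S / (Ks + S) * X

        def transfer_rate(kLa, C_sat, C):
            # two-film mass-transfer rate for oxygen or oil dissolution
            return kLa * (C_sat - C)

        rates = {
            "biodegradation": monod_rate(S=5.0, mu_max=0.1, Ks=2.0, X=150.0),
            "oxygen transfer": transfer_rate(kLa=0.05, C_sat=8.0, C=2.0),
            "oil transfer": transfer_rate(kLa=0.01, C_sat=20.0, C=8.0),
        }
        print(min(rates, key=rates.get), "is rate-limiting under these numbers")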

  3. A model for the burning rates of composite propellants

    NASA Technical Reports Server (NTRS)

    Cohen, N. S.; Strand, L. D.

    1980-01-01

    An analytical model of the steady-state burning of composite solid propellants is presented. An improved burning rate model is achieved by incorporating an improved AP monopropellant model, a separate energy balance for the binder in which a portion of the diffusion flame is used to heat the binder, proper use of the binder regression rate in the model, and a model for the combustion of the energetic binder component of CMDB propellants. Also, an improved correlation and model of aluminum agglomeration is developed which properly describes compositional trends.

  4. Satellite altimetry based rating curves throughout the entire Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.

    2013-05-01

    The Amazon basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the amount of data available is poor, over both time and space scales, owing to factors such as the basin's size and difficulty of access. One of the major obstacles is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation over the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2010. The stage dataset consists of ~800 altimetry series at ENVISAT and JASON-2 virtual stations, spanning 2002 to 2010. In the present work we show the benefits of using stochastic methods rather than deterministic ones to determine a set of rating curve parameters that is consistent throughout the entire Amazon basin. The rating curve parameters were computed using a parameter optimization technique based on a Markov Chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, as well as their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. The rating curve determination also includes the error in the discharge estimates from the MGB-IPH model, which stems from either errors in the discharge derived from the gage readings or errors in the satellite rainfall estimates. The present
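    As a minimal sketch of the stochastic approach described above, the following assumes the standard power-law rating curve Q = a(h - h0)^b, synthetic stage-discharge data, flat box priors and a random-walk Metropolis sampler; the variable names, priors and noise level are illustrative and not those of the MGB-IPH study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stage (m) and discharge (m^3/s) data, for illustration only.
h = np.linspace(2.0, 12.0, 60)
q_obs = 15.0 * (h - 1.0) ** 1.7 * (1 + 0.05 * rng.standard_normal(h.size))

def log_post(theta):
    """Log-posterior of rating-curve parameters theta = (a, h0, b),
    with flat priors on a physically plausible box."""
    a, h0, b = theta
    if not (0 < a < 100 and 0 < h0 < h.min() and 0.5 < b < 3.0):
        return -np.inf
    q_mod = a * (h - h0) ** b
    sigma = 0.05 * q_obs                   # assumed 5% discharge error
    return -0.5 * np.sum(((q_obs - q_mod) / sigma) ** 2)

# Random-walk Metropolis sampler.
theta = np.array([10.0, 0.5, 1.5])
lp = log_post(theta)
step = np.array([0.5, 0.05, 0.02])
chain = []
for _ in range(20000):
    prop = theta + step * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])             # discard burn-in

print("posterior means (a, h0, b):", chain.mean(axis=0))
print("95% credible interval for b:", np.percentile(chain[:, 2], [2.5, 97.5]))
```

    The spread of the retained chain plays the role of the posterior distribution mentioned in the abstract, from which a credibility interval for the whole rating curve follows.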

  5. Rate quantization modeling for rate control of MPEG video coding and recording

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Liu, Bede

    1995-04-01

    For MPEG video coding and recording applications, it is important to select quantization parameters at slice and macroblock levels to produce nearly constant quality images for a given bit count budget. A well designed rate control strategy can improve overall image quality for video transmission over a constant-bit-rate channel and fulfill the editing requirements of video recording, where a certain number of new pictures are encoded to replace consecutive frames on the storage media using at most the same number of bits. In this paper, we develop a feedback method with a rate-quantization model that can be adapted to changes in picture activity. The model is used for quantization parameter selection at the frame and slice level, and the extra computation needed is modest. Experiments show the accuracy of the model and the effectiveness of the proposed rate control method. A new bit allocation algorithm is then proposed for MPEG video coding.
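    The mechanism is easiest to see with the classical hyperbolic model R(Q) = X/Q, in which the constant X tracks picture activity and is re-estimated from each encoded frame. The sketch below is a toy feedback loop in that spirit, assuming hypothetical frame complexities and a stand-in encoder; the paper's actual adaptive model and update rules are more refined.

```python
def select_qp(bit_budget, complexity, qp_min=1, qp_max=31):
    """Pick a quantization parameter from a hyperbolic rate-quantization
    model R(Q) = X / Q, where X measures picture activity (complexity).
    This TM5-style form is illustrative; the paper's model differs."""
    qp = complexity / max(bit_budget, 1)
    return int(min(max(round(qp), qp_min), qp_max))

def encode_sequence(frame_complexities, channel_rate, fps):
    """Feedback rate control: after each frame, re-estimate the model
    constant X from the bits actually produced and update the buffer."""
    per_frame = channel_rate / fps
    x = frame_complexities[0]                 # initial model constant
    buffer_fullness = 0.0
    for c in frame_complexities:
        target = per_frame - 0.1 * buffer_fullness   # drain a full buffer
        qp = select_qp(target, x)
        bits_out = c / qp                     # stand-in for a real encoder
        x = 0.5 * x + 0.5 * bits_out * qp     # feedback update of X
        buffer_fullness += bits_out - per_frame
        print(f"QP={qp:2d} bits={bits_out:8.0f} buffer={buffer_fullness:9.0f}")

encode_sequence([4e5, 6e5, 5e5, 9e5, 3e5], channel_rate=1.5e6, fps=25)
```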

  6. Markov models and the ensemble Kalman filter for estimation of sorption rates.

    SciTech Connect

    Vugrin, Eric D.; McKenna, Sean Andrew; Vugrin, Kay White

    2007-09-01

    Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed: a Markov model that utilizes conditional probabilities to determine the rates, and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, the addition of the ensemble Kalman filter is shown to improve the accuracy of the rate estimation by as much as an order of magnitude.
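    For intuition, here is a generic perturbed-observation EnKF analysis step of the kind coupled to the Markov mass-transfer model in the abstract; the observation operator h_of_x, which would map sorption/desorption rates to predicted breakthrough concentrations, is left abstract, and all names are placeholders rather than the authors' implementation.

```python
import numpy as np

def enkf_update(ensemble, h_of_x, y_obs, obs_var, rng):
    """One ensemble Kalman filter analysis step.

    ensemble : (n_ens, n_state) array of rate estimates
    h_of_x   : maps a state vector to predicted observations
    y_obs    : observed concentrations at this time
    obs_var  : observation-error variance
    """
    n_ens = ensemble.shape[0]
    hx = np.array([h_of_x(x) for x in ensemble])       # predicted obs
    X = ensemble - ensemble.mean(0)
    Y = hx - hx.mean(0)
    # Cross- and innovation covariances estimated from the ensemble.
    pxy = X.T @ Y / (n_ens - 1)
    pyy = Y.T @ Y / (n_ens - 1) + obs_var * np.eye(y_obs.size)
    gain = pxy @ np.linalg.inv(pyy)
    # Perturbed-observation update.
    y_pert = y_obs + np.sqrt(obs_var) * rng.standard_normal((n_ens, y_obs.size))
    return ensemble + (y_pert - hx) @ gain.T
```

    Applied at each observation time, the update pulls every ensemble member toward the observed concentrations, which is what reduces the biased-perturbation error noted in the abstract.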

  7. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  8. A Modeling Perspective on Interpreting Rates of Change in Context

    ERIC Educational Resources Information Center

    Ärlebäck, Jonas B.; Doerr, Helen M.; O'Neil, AnnMarie H.

    2013-01-01

    Functions provide powerful tools for describing change, but research has shown that students find difficulty in using functions to create and interpret models of changing phenomena. In this study, we drew on a models and modeling perspective to design an instructional approach to develop students' abilities to describe and interpret rates of…

  9. Model analysis of the link between interest rates and crashes

    NASA Astrophysics Data System (ADS)

    Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft

    2016-09-01

    We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that those bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas lack of liquidity tends to be the primary cause of failures under lower rates.

  10. Surface analyses and modelling of rate multiplicity and instabilities

    SciTech Connect

    Harold, M.P.; Conner, W.C.

    1990-11-01

    Catalytic partial and complete oxidations of chemical species are an important class of reactions in the production of many commercial chemicals and in the elimination of environmental pollutants. This research focuses on a sub-class of oxidation reactions in which CO is a key player -- be it a reactant, intermediate, or product -- in the catalytic sequence and chemistry. The first three years of our research have followed two parallel paths which have a common destination and which together provide a framework for the proposed new research. The first path has involved the development of a combined experimental/modelling and analysis methodology for constructing feasible mechanistic sequences and their corresponding kinetic models of catalytic reactions that exhibit multiple rate behavior. The rather well-studied Pt-catalyzed CO oxidation served as the test reaction. Our approach involves the measurement of basic kinetic features (apparent reaction orders, activation energy) and multiplicity features (location of ignition and extinction points) over a wide range of conditions (catalyst temperature, total pressure, feed composition), and a kinetic modelling part, in which potential reaction sequences are constructed and screened based on their ability to predict the experimentally observed kinetic and multiplicity features over a wide range of conditions. The second path has involved the development of a spectroscopic technique under-utilized in catalysis, Fourier transform infrared emission spectroscopy (FTIRES), to monitor the surface of a catalyst during reaction. Main accomplishments from both studies are summarized.

  11. Model-Based Systems

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    2007-01-01

    Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult to maintain as several systems are combined into higher-level systems, are maintained over decades, and evolve technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data, interrelating it to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. Because the consistency and integrity of the model are assured, the consistency and integrity of the various specification documents are assured as well. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed and how those needs are being addressed by international standards writing teams.

  12. 75 FR 72581 - Assessments, Assessment Base and Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ...\\ The 2009 assessments rule established the following initial base assessment rate schedule: \\3\\ 74 FR... for Comment on Assessment Dividends, Assessment Rates and Designated Reserve Ratio, 75 FR 66271. III... adjustments. \\7\\ 74 FR 9525. Unsecured Debt Adjustment All other things equal, greater amounts of...

  13. The Random-Effect Generalized Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wu, Shiu-Lien

    2011-01-01

    Rating scale items have been widely used in educational and psychological tests. These items require people to make subjective judgments, and these subjective judgments usually involve randomness. To account for this randomness, Wang, Wilson, and Shih proposed the random-effect rating scale model in which the threshold parameters are treated as…

  14. Modeled estimates of soil and dust ingestion rates for children.

    PubMed

    Ozkaynak, Halûk; Xue, Jianping; Zartarian, Valerie G; Glen, Graham; Smith, Luther

    2011-04-01

    Daily soil/dust ingestion rates typically used in exposure and risk assessments are based on tracer element studies, which have a number of limitations and do not separate contributions from soil and dust. This article presents an alternate approach of modeling soil and dust ingestion via hand and object mouthing of children, using EPA's SHEDS model. Results for children 3 to <6 years old show that the mean and 95th percentile total ingestion of soil and dust are 68 and 224 mg/day, respectively; means for soil ingestion, hand-to-mouth dust ingestion, and object-to-mouth dust ingestion are 41 mg/day, 20 mg/day, and 7 mg/day, respectively. In general, hand-to-mouth soil ingestion was the most important pathway, followed by hand-to-mouth dust ingestion, then object-to-mouth dust ingestion. The variability results are most sensitive to inputs on surface loadings, soil-skin adherence, hand mouthing frequency, and hand washing frequency. The predicted total soil and dust ingestion fits a lognormal distribution with geometric mean = 35.7 and geometric standard deviation = 3.3. There are two uncertainty distributions, one below the 20th percentile and the other above. Modeled uncertainties ranged within a factor of 3-30. Mean modeled estimates for soil and dust ingestion are consistent with past information but lower than the central values recommended in the 2008 EPA Child-Specific Exposure Factors Handbook. This new modeling approach, which predicts soil and dust ingestion by pathway, source type, population group, geographic location, and other factors, offers a better characterization of exposures relevant to health risk assessments as compared to using a single value. PMID:21039709
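    The reported lognormal fit can be checked against the reported summary statistics with standard lognormal identities; the small mismatches below are expected, since the lognormal is only a fit to the SHEDS output distribution.

```python
import math

gm, gsd = 35.7, 3.3          # geometric mean and geometric SD (mg/day)
mu, sigma = math.log(gm), math.log(gsd)

mean = math.exp(mu + sigma**2 / 2)        # arithmetic mean of a lognormal
p95 = math.exp(mu + 1.645 * sigma)        # 95th percentile

print(f"mean ~= {mean:.0f} mg/day (abstract reports 68)")
print(f"95th percentile ~= {p95:.0f} mg/day (abstract reports 224)")
```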

  16. Stable large-scale CO2 storage in defiance of an energy system based on renewable energy - Modelling the impact of varying CO2 injection rates on reservoir behavior

    NASA Astrophysics Data System (ADS)

    Bannach, Andreas; Hauer, Rene; Martin, Streibel; Stienstra, Gerard; Kühn, Michael

    2015-04-01

    The IPCC Report 2014 strengthens the need for CO2 storage as part of CCS or BECCS to reach ambitious climate goals despite growing energy demand in the future. The further expansion of renewable energy sources is a second major pillar. As is already the case in Germany, weather becomes the controlling factor for electricity production by fossil-fuelled power plants, leading to significant fluctuations in CO2 emissions, which would be mirrored in injection rates if the CO2 were captured and stored. To analyse the impact of such changing injection rates on a CO2 storage reservoir, two reservoir simulation models are applied: (a) a smaller reservoir model, proven by decades of gas storage operation, to investigate the dynamic effects in the early stage of storage filling (initial aquifer displacement); and (b) an anticline structure big enough to accommodate a total amount of ≥ 100 megatons of CO2, to investigate the dynamic effects over the entire operational lifetime of the storage, with particular consideration of very high filling levels (highest aquifer compression); for this purpose a reservoir model was generated. The defined yearly injection rate schedule is based on a study performed on behalf of IZ Klima (DNV GL, 2014). According to this study, the exclusive consideration of a pool of coal-fired power plants causes the most intensely varying CO2 emissions and hence bounds the variations of a system that includes industry-driven CO2 production. Besides short-term changes (daily and weekly cycles), seasonal influences are also taken into account. Simulation runs cover a variation of injection points (well locations at the top vs. at the flank of the structure) and other largely unknown reservoir parameters such as aquifer size and aquifer mobility. Simulation of a 20-year storage operation is followed by a post-operational shut-in phase covering approximately 500 years to assess possible effects of changing injection rates on the long-term reservoir

  17. First principles modeling of nonlinear incidence rates in seasonal epidemics.

    PubMed

    Ponciano, José M; Capistrán, Marcos A

    2011-02-01

    In this paper we use a general stochastic processes framework to derive from first principles the incidence rate function that characterizes epidemic models. We investigate a particular case, the Liu-Hethcote-van den Driessche (LHD) incidence rate function, which results from modeling the number of successful transmission encounters as a pure birth process. This derivation also takes into account heterogeneity in the population with regard to the per-individual transmission probability. We fitted a deterministic SIRS model with both the classical and the LHD incidence rate functions to time series of the number of children infected with respiratory syncytial virus in Banjul, Gambia and Turku, Finland. We also fitted a deterministic SEIR model with both incidence rate functions to the well-known measles data sets from the UK cities of London and Birmingham. Two lines of evidence supported our conclusion that the model with the LHD incidence rate may well be a better description of the seasonal epidemic processes studied here. First, our model was repeatedly selected as best according to two different information criteria and two different likelihood formulations. The second line of evidence is qualitative in nature: contrary to what the SIRS model with classical incidence rate predicts, the solution of the deterministic SIRS model with LHD incidence rate will reach either the disease-free equilibrium or the endemic equilibrium depending on the initial conditions. These findings, along with computer-intensive simulations of the models' Poincaré map with environmental stochasticity, helped attain a clear separation of the roles of the environmental forcing and the mechanics of disease transmission in shaping seasonal epidemic dynamics.
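    To see why the choice of incidence function matters, the sketch below integrates an SIRS model with the classical bilinear incidence and with a saturating nonlinear incidence; the saturating form and all parameter values are illustrative stand-ins, not the specific LHD function derived in the paper.

```python
import numpy as np

def sirs_step(state, dt, beta, gamma, xi, incidence):
    """One Euler step of an SIRS model with a pluggable incidence rate."""
    s, i, r = state
    new_inf = incidence(beta, s, i)
    ds = -new_inf + xi * r          # xi: loss of immunity (R -> S)
    di = new_inf - gamma * i        # gamma: recovery rate
    dr = gamma * i - xi * r
    return state + dt * np.array([ds, di, dr])

classical = lambda beta, s, i: beta * s * i
# Saturating incidence of the same general family (illustrative exponents).
saturating = lambda beta, s, i: beta * s * i / (1.0 + 50.0 * i)

for name, inc in [("classical", classical), ("saturating", saturating)]:
    state = np.array([0.99, 0.01, 0.0])
    for _ in range(20000):                  # 200 time units at dt = 0.01
        state = sirs_step(state, 0.01, 0.5, 0.2, 0.05, inc)
    print(name, "endemic state (S, I, R):", np.round(state, 3))
```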

  18. The Wealth Distribution Model with the Kickback Rate

    NASA Astrophysics Data System (ADS)

    Zhang, Yujie; He, Mingfeng

    We define an asset exchange model by adding a kickback rate to the trade, and discuss the Gini index under different kickback rates. It is found that for every kickback rate the related Gini index tends to a steady value; thus the curve of Gini index against kickback rate may be obtained. Furthermore, it is shown that the Gini index decreases as the kickback rate increases, so the fairness of the social wealth distribution improves. The Gini index reaches a minimum when the kickback rate is 0.58 and then increases, as further growth of the kickback rate degrades the fairness of the wealth distribution. However, in all situations the Gini index with a kickback rate is less than the one without kickback. This means that the introduction of a kickback rate favours a fairer wealth distribution. We also define a moral index, similar to the Gini index, to weigh differences in the social moral level, and find that these differences increase with time for the model with a kickback rate.
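    The following toy simulation illustrates the kind of experiment described: random pairwise exchanges in which a fraction of the stake flows back to the loser, with the Gini index computed once it has settled. The exchange rule here is a plausible reading, not the authors' dynamics, and it reproduces only the qualitative fairness-improving effect, not the non-monotonic minimum at 0.58 they report.

```python
import numpy as np

rng = np.random.default_rng(1)

def gini(w):
    """Gini index of a wealth array (0 = perfect equality)."""
    w = np.sort(w)
    n = w.size
    return (2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())

def simulate(kickback, n_agents=1000, n_trades=100000):
    """Random pairwise exchanges; the winner returns a fraction
    `kickback` of the stake to the loser (illustrative rule)."""
    w = np.ones(n_agents)
    for _ in range(n_trades):
        i, j = rng.integers(n_agents, size=2)
        stake = 0.1 * min(w[i], w[j])
        winner, loser = (i, j) if rng.random() < 0.5 else (j, i)
        # net transfer shrinks as the kickback rate grows
        w[winner] += stake * (1 - kickback)
        w[loser] -= stake * (1 - kickback)
    return gini(w)

for k in (0.0, 0.3, 0.58, 0.8):
    print(f"kickback rate {k:.2f} -> Gini {simulate(k):.3f}")
```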

  19. Photosynthetic rates derived from satellite-based chlorophyll concentration

    SciTech Connect

    Behrenfeld, M.J.; Falkowski, P.G.

    1997-01-01

    We assembled a dataset of 14C-based productivity measurements to understand the critical variables required for accurate assessment of daily depth-integrated phytoplankton carbon fixation (PP_eu) from measurements of sea surface pigment concentrations (C_sat). From this dataset, we developed a light-dependent, depth-resolved model for carbon fixation (VGPM) that partitions environmental factors affecting primary production into those that influence the relative vertical distribution of primary production (P_z) and those that control the optimal assimilation efficiency of the productivity profile (P^B_opt). The VGPM accounted for 79% of the observed variability in P_z and 86% of the variability in PP_eu by using measured values of P^B_opt. Our results indicate that the accuracy of productivity algorithms in estimating PP_eu is dependent primarily upon the ability to accurately represent variability in P^B_opt. We developed a temperature-dependent P^B_opt model that was used in conjunction with monthly climatological images of C_sat, sea surface temperature, and cloud-corrected estimates of surface irradiance to calculate a global annual phytoplankton carbon fixation (PP_annu) rate of 43.5 Pg C yr^-1. The geographical distribution of PP_annu was distinctly different than results from previous models. Our results illustrate the importance of focusing P^B_opt model development on temporal and spatial, rather than the vertical, variability. 87 refs., 9 figs., 2 tabs.
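    For reference, the published VGPM amounts to a compact calculation once its inputs are known. The sketch below uses the commonly distributed form of the temperature-dependent P^B_opt polynomial and the core product; the euphotic depth Z_eu, normally derived from C_sat through a separate relation, is passed in directly here, and the example inputs are invented.

```python
import numpy as np

def pb_opt(sst):
    """Optimal assimilation efficiency (mg C mg Chl^-1 h^-1) as a 7th-order
    polynomial in sea-surface temperature, per the standard VGPM code."""
    if sst < -1.0:
        return 1.13
    if sst > 28.5:
        return 4.0
    c = [1.2956, 2.749e-1, 6.17e-2, -2.05e-2, 2.462e-3,
         -1.348e-4, 3.4132e-6, -3.27e-8]
    return sum(ci * sst**i for i, ci in enumerate(c))

def vgpm(chl, sst, par, day_len, z_eu):
    """Daily depth-integrated production PP_eu (mg C m^-2 day^-1):
    PP_eu = 0.66125 * Pb_opt * PAR/(PAR + 4.1) * Z_eu * Chl * D_irr."""
    return 0.66125 * pb_opt(sst) * par / (par + 4.1) * z_eu * chl * day_len

# Example: 0.5 mg Chl m^-3, 18 C, PAR 45 mol quanta m^-2 d^-1,
# 12 h daylight, 60 m euphotic depth (illustrative values).
print(f"PP_eu ~= {vgpm(0.5, 18.0, 45.0, 12.0, 60.0):.0f} mg C m^-2 day^-1")
```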

  20. Earthquake potential and magnitude limits inferred from a geodetic strain-rate model for southern Europe

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Bird, P.; Jackson, D. D.

    2016-04-01

    The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models, which were based mainly on historical earthquakes and active-fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from the tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, which indicates that the authors of SHARE were aware of the potentially higher seismic activity in the zones. For one group, the tectonic moment rate is lower than the seismic moment rates forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from tectonic moment rate, but lower than what the historical data show. For the other two

  1. Equivalence of interest rate models and lattice gases

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan

    2012-04-01

    We consider the class of short rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure, r(t) = a(t) exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t1, t2) = -Cov[x(t1), x(t2)]. We consider in some detail the Black-Karasinski model with x(t) an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x, y) = -α(e^{-γ|x-y|} - e^{-γ(x+y)}). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.

  2. On rate-state and Coulomb failure models

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified
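    A useful companion to this discussion is the closed-form rate-state seismicity response of Dieterich (1994), to which r(t)/r0 comparisons of this kind are usually referred; the sketch below evaluates it for an illustrative stress step. In the Coulomb limit the same step simply advances failure by Δt = Δτ/τ̇.

```python
import numpy as np

def rate_change(t, dtau, a_sigma, stressing_rate, r0=1.0):
    """Seismicity rate after a static stress step (Dieterich, 1994):
    R(t) = r0 / (1 + (exp(-dtau/a_sigma) - 1) * exp(-t/ta)),
    with aftershock duration ta = a_sigma / stressing_rate."""
    ta = a_sigma / stressing_rate
    return r0 / (1.0 + (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / ta))

# Illustrative numbers: 0.5 MPa step, a*sigma = 0.1 MPa, 0.01 MPa/yr loading.
t = np.logspace(-3, 2, 6)                    # years after the step
print(rate_change(t, dtau=0.5, a_sigma=0.1, stressing_rate=0.01))
```

    The response is an immediate finite rate jump followed by an Omori-like decay back to the background rate, exactly the behavior the abstract contrasts with the instantaneous, infinite jump of a pure Coulomb model.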

  3. Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2

    USGS Publications Warehouse

    Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.

    2008-01-01

    This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an unsegmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.

  4. Model Based Definition

    NASA Technical Reports Server (NTRS)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage, and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System, the Design Data Management System (DDMS), based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) system. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings as auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release, allowing the models to move through a flow of progressive maturity. The DSFT identified some 17 lessons learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high-value examples are reviewed.

  5. A rumor spreading model with variable forgetting rate

    NASA Astrophysics Data System (ADS)

    Zhao, Laijun; Xie, Wanlin; Gao, H. Oliver; Qiu, Xiaoyan; Wang, Xiaoli; Zhang, Shuhai

    2013-12-01

    A rumor spreading model that accounts for a forgetting rate changing over time is examined in small-world networks. The mean-field equations are derived to describe the dynamics of rumor spreading in small-world networks. Further, numerical solutions are computed using data from LiveJournal, an online social blogging platform, to better understand the performance of the model. Results show that the forgetting rate has a significant impact on the final size of rumor spreading: the larger the initial forgetting rate or the faster the forgetting speed, the smaller the final size of the rumor spreading. Numerical solutions also show that the final size of rumor spreading is much larger under a variable forgetting rate than under a constant forgetting rate.
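    A minimal mean-field version of such a model is sketched below, with ignorant/spreader/stifler compartments and a forgetting rate that grows in time; the functional form of delta(t) and all parameter values are illustrative assumptions, not those derived in the paper.

```python
import numpy as np

def rumor_final_size(delta0, speed, lam=0.8, alpha=0.3, dt=0.01, t_end=200):
    """Mean-field ignorant/spreader/stifler dynamics with a forgetting
    rate rising toward delta0: delta(t) = delta0 * (1 - exp(-speed*t)).
    The functional form is illustrative, not the paper's exact choice."""
    i, s, r = 0.999, 0.001, 0.0
    for step in range(int(t_end / dt)):
        delta = delta0 * (1.0 - np.exp(-speed * step * dt))
        di = -lam * i * s                                  # contact spreading
        ds = lam * i * s - alpha * s * (s + r) - delta * s # stifling + forgetting
        i, s, r = i + dt * di, s + dt * ds, r + dt * (-di - ds)
    return r          # final size of the rumor

for d0 in (0.05, 0.2, 0.5):
    print(f"delta0={d0:.2f} -> final size {rumor_final_size(d0, 0.1):.3f}")
```

    Consistent with the abstract's qualitative finding, raising the initial forgetting rate (or the speed at which it sets in) shrinks the final size.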

  6. Spray Combustion Modeling with VOF and Finite-Rate Chemistry

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Liaw, Paul; Wang, Ten-See

    1996-01-01

    A spray atomization and combustion model is developed based on the volume-of-fluid (VOF) transport equation with a finite-rate chemistry model. The gas-liquid interface mass, momentum and energy conservation laws are modeled by continuum surface force mechanisms. A new solution method is developed such that the present VOF model can be applied across the full speed range of flows. The objectives of the present study are: (1) to develop and verify the fractional volume-of-fluid cell partitioning approach within a predictor-corrector algorithm to deal with multiphase (gas-liquid) free surface flow problems; (2) to implement the developed unified algorithm in a general purpose computational fluid dynamics (CFD) code, Finite Difference Navier-Stokes (FDNS), with droplet dynamics and finite-rate chemistry models; and (3) to demonstrate the effectiveness of the present approach by simulating benchmark problems of jet breakup/spray atomization and combustion. Modeling multiphase fluid flows poses a significant challenge because a required boundary condition must be applied to a transient, irregular surface that is discontinuous, and the flow regimes considered can range from incompressible to high-speed compressible flows. The flow-process modeling is further complicated by surface tension, interfacial heat and mass transfer, spray formation and turbulence, and their interactions. The major contribution of the present method is to combine the novel features of the volume-of-fluid method and the Eulerian/Lagrangian method into a unified algorithm for efficient, noniterative, time-accurate calculations of multiphase free surface flows valid at all speeds. The proposed method reformulates the VOF equation to strongly couple two distinct phases (liquid and gas), and tracks droplets in a Lagrangian frame when a spray model is required, using a unified predictor-corrector technique to account for the non-linear linkages through the convective contributions of VOF. The discontinuities within the

  7. Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Nitta, Naotaka; Takeda, Naoto

    2008-05-01

    The relationship between intravascular wall shear stress, which is controlled by flow dynamics, and the progression of arteriosclerotic plaque has been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and the shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation is investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles, based on simulation and experimental data, and the shear rate calculation was then optimized. The optimized results revealed that the proposed adaptive method of shear rate estimation was effective in improving the accuracy of the shear rate calculation.

  8. Molecule-based approach for computing chemical-reaction rates in upper atmosphere hypersonic flows.

    SciTech Connect

    Gallis, Michail A.; Bond, Ryan Bomar; Torczynski, John Robert

    2009-08-01

    This report summarizes the work completed during FY2009 for the LDRD project 09-1332 'Molecule-Based Approach for Computing Chemical-Reaction Rates in Upper-Atmosphere Hypersonic Flows'. The goal of this project was to apply a recently proposed approach for the Direct Simulation Monte Carlo (DSMC) method to calculate chemical-reaction rates for high-temperature atmospheric species. The new DSMC model reproduces measured equilibrium reaction rates without using any macroscopic reaction-rate information. Since it uses only molecular properties, the new model is inherently able to predict reaction rates for arbitrary nonequilibrium conditions. DSMC non-equilibrium reaction rates are compared to Park's phenomenological non-equilibrium reaction-rate model, the predominant model for hypersonic-flow-field calculations. For near-equilibrium conditions, Park's model is in good agreement with the DSMC-calculated reaction rates. For far-from-equilibrium conditions, corresponding to a typical shock layer, the difference between the two models can exceed 10 orders of magnitude. The DSMC predictions are also found to be in very good agreement with measured and calculated non-equilibrium reaction rates. Extensions of the model to reactions typically found in combustion flows and ionizing reactions are also found to be in very good agreement with available measurements, offering strong evidence that this is a viable and reliable technique to predict chemical reaction rates.

  9. Continuum modeling of rate-dependent granular flows in SPH

    NASA Astrophysics Data System (ADS)

    Hurley, Ryan C.; Andrade, José E.

    2016-09-01

    We discuss a constitutive law for modeling rate-dependent granular flows that has been implemented in smoothed particle hydrodynamics (SPH). We model granular materials using a viscoplastic constitutive law that produces a Drucker-Prager-like yield condition in the limit of vanishing flow. A friction law for non-steady flows, incorporating rate-dependence and dilation, is derived and implemented within the constitutive law. We compare our SPH simulations with experimental data, demonstrating that they can capture both steady and non-steady dynamic flow behavior, notably including transient column collapse profiles. This technique may therefore be attractive for modeling the time-dependent evolution of natural and industrial flows.
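    As an example of the kind of rate-dependent friction law meant here, the widely used mu(I) rheology is sketched below; it reduces to a Drucker-Prager-like yield condition as the inertial number I vanishes. The constants are standard glass-bead values from the literature, not the authors' non-steady law with dilation.

```python
import numpy as np

def mu_of_I(shear_rate, pressure, d=1e-3, rho_s=2500.0,
            mu_s=0.38, mu_2=0.64, I0=0.279):
    """Rate-dependent friction coefficient mu(I):
    I = shear_rate * d / sqrt(p / rho_s); mu -> mu_s as I -> 0,
    recovering a Drucker-Prager-like yield condition in vanishing flow."""
    I = shear_rate * d / np.sqrt(pressure / rho_s)
    return mu_s + (mu_2 - mu_s) / (I0 / I + 1.0)

for gd in (1e-3, 1e-1, 10.0):                 # shear rates, 1/s
    tau = mu_of_I(gd, pressure=1e3) * 1e3     # shear stress at p = 1 kPa
    print(f"shear rate {gd:7.3f} 1/s -> shear stress {tau:6.1f} Pa")
```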

  10. A frictional population model of seismicity rate change

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.; Cocco, M.; Belardinelli, M.E.

    2005-01-01

    We study models of seismicity rate changes caused by the application of a static stress perturbation to a population of faults and discuss our results with respect to the model proposed by Dieterich (1994). These models assume a distribution of nucleation sites (e.g., faults) obeying rate-state frictional relations that fail at a constant rate under tectonic loading alone, and predict that a positive static stress step at time t0 will cause an immediate increase in seismicity rate that decays according to Omori's law. We show one way in which the Dieterich model may be constructed from simple general ideas, illustrated using numerically computed synthetic seismicity and a mathematical formulation. We show that the seismicity rate changes predicted by these models (1) depend on the particular relationship between the clock-advanced failure and fault maturity, (2) are largest for the faults closest to failure at t0, (3) depend strongly on which state evolution law the faults obey, and (4) are insensitive to some types of population heterogeneity. We also find that if individual faults fail repeatedly and populations are finite, at timescales much longer than typical aftershock durations quiescence follows a seismicity rate increase, regardless of the specific frictional relations. For the examined models the quiescence duration is comparable to the ratio of stress change to stressing rate, Δτ/τ̇, which occurs after a time comparable to the average recurrence interval of the individual faults in the population and repeats in the absence of any new load perturbations; this simple model may partly explain observations of repeated clustering of earthquakes. Copyright 2005 by the American Geophysical Union.

  11. On Estimation of GPS-based Indonesian Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Susilo, Susilo; Abidin, Hasanuddin Z.; Meilano, Irwan; Sapiie, Benyamin; Wijanarto, Antonius B.

    2016-04-01

    Using the GPS-derived rates at survey-mode (sGPS) stations and continuous GPS stations across the Indonesian region, covering the 22-year period from 1993 to 2014, linear deformation velocities with an accuracy of about 2 to 3 mm/year are derived. These velocities are corrected for the coseismic and postseismic deformation caused by significant earthquakes in that period. In this study, we use this GPS velocity field to construct a crustal strain rate map, without yet including a physical model. An interpolation method was used to compute a continuous velocity model, and by differentiating it we derive the strain rate map of Indonesia. At present, our result is only the magnitude of the strain rate. The Indonesian strain rate map is very important for studying the deformation characteristics of the region and for establishing a deformation (velocity) model to support the implementation of the Indonesian Geospatial Reference System 2013 (IGRS 2013). This is a new semi-dynamic geocentric datum of Indonesia, which uses the global ITRF2008 reference frame with a reference epoch of 1 January 2012. A deformation (velocity) model is required to transform coordinates from an observation epoch to or from this reference epoch.
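    The differentiation step is conceptually simple once the velocity field has been interpolated to a grid; a sketch with an invented velocity field follows. Gradients of the east/north velocities give the strain-rate tensor components, combined here into a common scalar magnitude.

```python
import numpy as np

def strain_rate_magnitude(ve, vn, dx, dy):
    """Horizontal strain-rate field from gridded east/north velocities
    (mm/yr) by finite differences; returns the scalar magnitude
    sqrt(exx^2 + eyy^2 + 2*exy^2) in (mm/yr)/m."""
    exx = np.gradient(ve, dx, axis=1)
    eyy = np.gradient(vn, dy, axis=0)
    exy = 0.5 * (np.gradient(ve, dy, axis=0) + np.gradient(vn, dx, axis=1))
    return np.sqrt(exx**2 + eyy**2 + 2.0 * exy**2)

# Toy interpolated velocity field on a 100 km grid (values in mm/yr).
x = y = np.arange(0, 500e3, 100e3)
ve = 10.0 * np.sin(np.subtract.outer(y, 0 * x) / 500e3)   # shear-like field
vn = np.full((y.size, x.size), 5.0)
print(strain_rate_magnitude(ve, vn, 100e3, 100e3))
```

    Multiplying values in (mm/yr)/m by 10^-3 gives yr^-1; the toy shear field above yields magnitudes around 10^-8 yr^-1, the order typical of tectonic strain rates.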

  12. SENSITIVITY ANALYSIS OF A TPB DEGRADATION RATE MODEL

    SciTech Connect

    Crawford, C; Tommy Edwards, T; Bill Wilmarth, B

    2006-08-01

    A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for the model were identified, and a sensitivity study of the model's predictions over intervals of values for these factors was conducted. The intervals bound the levels of the factors expected during Tank 50 aggregations. The results of the sensitivity analysis were used to identify settings of the influential factors that yield the largest predicted TPB degradation rate. These factor settings are thus considered to yield the "worst-case" scenario for the TPB degradation rate during Tank 50 aggregation, and as such they define the test conditions that should be studied in a waste qualification program whose dual purpose would be to investigate the introduction of Tank 48 material for aggregation in Tank 50 and to bound TPB degradation rates for such aggregations.

  13. Modeling rate sensitivity of exercise transient responses to limb motion.

    PubMed

    Yamashiro, Stanley M; Kato, Takahide

    2014-10-01

    Transient responses of ventilation (V̇e) to limb motion can exhibit predictive characteristics. In response to a change in limb motion, a rapid change in V̇e is commonly observed, with characteristics different from those seen during a change in workload. This rapid change has been attributed to a feed-forward or adaptive response. Rate sensitivity was explored as a specific hypothesis to explain predictive V̇e responses to limb motion. A simple model assuming additive feed-forward summation of a V̇e component proportional to the rate of change of limb motion was studied. This model was able to account for the adaptive phase correction observed during sinusoidal changes in limb motion in humans. Adaptation of rate sensitivity might also explain the reduction of the fast component of V̇e responses previously reported following sudden exercise termination; adaptation of the fast component of the V̇e response could occur by reduction of rate sensitivity. Rate sensitivity to limb motion was predicted by the model to reduce the phase delay between limb motion and the V̇e response without changing the steady-state response to exercise load. In this way, V̇e can respond more quickly to a change in exercise without interfering with overall feedback control. The asymmetry between responses to incremental and decremental ramp changes in exercise can also be accounted for by the proposed model. Rate sensitivity leads to predicted behavior that resembles responses observed in exercise tied to expiratory reserve volume.
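    A minimal version of such a model is a first-order lag (the slow feedback component) plus an additive term proportional to du/dt; all parameter values below are invented for illustration.

```python
import numpy as np

def ventilation_response(t, u, gain=8.0, tau=45.0, k_rate=120.0):
    """Ventilation response to limb-motion input u(t): a first-order lag
    (steady-state gain, time constant tau in s) plus an additive
    feed-forward term proportional to du/dt (rate sensitivity)."""
    dt = t[1] - t[0]
    dudt = np.gradient(u, dt)
    ve = np.zeros_like(u)
    for k in range(1, t.size):
        # slow feedback component relaxes toward gain * u
        ve[k] = ve[k - 1] + dt / tau * (gain * u[k] - ve[k - 1])
    return ve + k_rate * dudt          # fast rate-sensitive component

t = np.arange(0, 600, 0.5)                      # s
u = 0.5 + 0.25 * np.sin(2 * np.pi * t / 120)    # sinusoidal limb motion
ve = ventilation_response(t, u)
i_u = np.argmax(u[400:640]) + 400               # one full cycle, transient gone
i_v = np.argmax(ve[400:640]) + 400
print(f"peak of u at t={t[i_u]:.0f}s, peak of V'E at t={t[i_v]:.0f}s")
```

    With the rate term present the simulated ventilation peaks before the limb motion does, mimicking the predictive behavior described above; setting k_rate to zero restores a pure phase lag while leaving the steady-state gain untouched.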

  14. Converting copepod vital rates into units appropriate for biogeochemical models

    NASA Astrophysics Data System (ADS)

    Frangoulis, C.; Carlotti, F.; Eisenhauer, L.; Zervoudaki, S.

    2010-01-01

    The conversion of units is one of the difficulties of model parameterisation. Conversion errors may result not only from incorrect choices of conversion factors, but also from incorrect choices of the value itself. In biogeochemical models, mesozooplankton is the highest trophic level of the food web, and it is very often reduced to a single variable, generally considered a representation of the copepod community, the dominant taxon in mesozooplankton. While this simplifies the information to be obtained for the stock, correct parameterisation of the processes related to the copepod community is already a tricky task, owing to the wide range of copepod species, sizes, stages and behaviours. The goal of this paper is to improve communication between experimentalists and modellers by giving guidance for the conversion of copepod vital rates from experimental units to biogeochemical-model units. This includes the choice of values, conversion factors, terminology distinctions and the scale transfer. To begin with, we briefly address the common problem of converting a rate per individual to a rate per mass (see the sketch below). Then we focus on unit conversion problems for each specific rate and give recommendations. Finally, we discuss the problem of scale transfer between the level of organisation at which a rate value is measured, at characteristic time and space scales, and the level of representation of the corresponding process in the model, with its different characteristic time and space scales.
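    As referenced above, the per-individual to per-mass conversion is a one-line calculation once the carbon content of the animal is known; the numbers below are invented examples.

```python
# Converting a per-individual copepod ingestion rate into the
# carbon-specific rate (1/day) used by biogeochemical models.
ingestion_per_ind = 2.4      # ug C individual^-1 day^-1 (example value)
body_mass = 8.0              # ug C individual^-1 (example value)

specific_rate = ingestion_per_ind / body_mass       # day^-1
print(f"carbon-specific ingestion rate: {specific_rate:.2f} d^-1")

# Scaling to a model grid cell: rate per individual times abundance.
abundance = 1500.0           # individuals m^-3 (example value)
flux = ingestion_per_ind * abundance * 1e-3         # mg C m^-3 day^-1
print(f"community ingestion flux: {flux:.1f} mg C m^-3 day^-1")
```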

  15. Cosmogenic Ne-21 Production Rates in H-Chondrites Based on Cl-36 - Ar-36 Ages

    NASA Technical Reports Server (NTRS)

    Leya, I.; Graf, Th.; Nishiizumi, K.; Guenther, D.; Wieler, R.

    2000-01-01

    We measured Ne-21 production rates in 14 H-chondrites in good agreement with model calculations. The production rates are based on Ne-21 concentrations measured on bulk samples or the non-magnetic fraction and Cl-36 - Ar-36 ages determined from the metal phase.

  16. A model for reaction rates in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Chinitz, W.; Evans, J. S.

    1984-01-01

    To account for turbulent temperature and species-concentration fluctuations, a model of their effects on chemical reaction rates is presented for computer analyses of turbulent reacting flows. The model results in two parameters which multiply the terms in the reaction-rate equations. For these two parameters, graphs are presented as functions of the mean values and intensity of the turbulent fluctuations of the temperature and species concentrations. These graphs will facilitate incorporation of the model into existing computer programs which describe turbulent reacting flows. When the model was used in a two-dimensional parabolic-flow computer code to predict the behavior of an experimental, supersonic hydrogen jet burning in air, some improvement in agreement with the experimental data was obtained in the far field in the region near the jet centerline. Recommendations are included for further improvement of the model and for additional comparisons with experimental data.

  17. Modeling the intracellular pathogen-immune interaction with cure rate

    NASA Astrophysics Data System (ADS)

    Dubey, Balram; Dubey, Preeti; Dubey, Uma S.

    2016-09-01

    Many common and emergent infectious diseases like influenza, SARS, hepatitis and Ebola are caused by viral pathogens. These infections can be controlled or prevented by understanding the dynamics of pathogen-immune interaction in vivo. In this paper, the interaction of pathogens with uninfected and infected cells, in the presence or absence of an immune response, is considered in four different cases. In the first case, the model considers a saturated nonlinear infection rate and a linear cure rate, without absorption of pathogens into uninfected cells and without immune response. The next model considers the effect of absorption of pathogens into uninfected cells, while all other terms are the same as in the first case. The third model incorporates an innate immune response, a humoral immune response and a cytotoxic T lymphocyte (CTL) mediated immune response, with a cure rate and without absorption of pathogens into uninfected cells. The last model extends the third by considering the absorption of pathogens into uninfected cells. Positivity and boundedness of solutions are established to ensure the well-posedness of the problem. All four models have two equilibria: a pathogen-free equilibrium point and a pathogen-present equilibrium point. In each case, the stability of each equilibrium point is investigated. The pathogen-free equilibrium is globally asymptotically stable when the basic reproduction number is less than or equal to unity. This implies that control or prevention of the infection is independent of the initial concentrations of uninfected cells, infected cells, pathogens and immune responses in the body. The proposed models show that the introduction of an immune response and a cure rate strongly affects the stability behavior of the system. Further, on computing the basic reproduction number, it has been found to be lowest for the fourth model vis-à-vis the other models. The analytical findings of each model have been exemplified by

  18. The Impact of Statistical Choices on Neonatal Intensive Care Unit Quality Ratings Based on Nosocomial Infection Rates

    PubMed Central

    Lee, Henry Chong; Chien, Alyna T.; Bardach, Naomi S.; Clay, Ted; Gould, Jeffrey B.; Dudley, R. Adams

    2014-01-01

    Objective To examine the extent to which performance assessment methodologies affect the percent of neonatal intensive care units (NICUs) and very low birth weight (VLBW) infants included in performance assessments, the distribution of NICU performance ratings, and the level of agreement in those ratings. Design Cross-sectional study based on risk-adjusted nosocomial infection rates. Setting NICUs belonging to the California Perinatal Quality Care Collaborative, 2007–2008. Participants 126 California NICUs and 10,487 VLBW infants. Main Exposure Three performance assessment choices: (1) excluding "low-volume" NICUs (those caring for < 30 VLBW infants in a year) vs. a criterion based on confidence intervals; (2) using Bayesian vs. frequentist hierarchical models; and (3) pooling data across one vs. two years. Main Outcome Measures Proportion of NICUs and patients included in quality assessment, distribution of ratings for NICUs, and agreement between methods using the kappa statistic. Results Depending on the methods applied, between 51% and 85% of NICUs were included in performance assessment, the percent of VLBW infants included ranged from 72% to 96%, between 76% and 87% of NICUs were considered "average," and the level of agreement between NICU ratings ranged from 0.26 to 0.89. Conclusions The percent of NICUs included in performance assessment and their ratings can shift dramatically depending on performance measurement methodology. Physicians, payers, and policymakers should continue to closely examine which existing performance assessment methodologies are most appropriate for evaluating pediatric care quality. PMID:21536958

  19. Lifetime models of female labor supply, wage rates, and fertility.

    PubMed

    Carliner, G; Robinson, C; Tomes, N

    1984-01-01

    A simple one-period lifetime model is specified in which schooling is part of the lifetime period. This implies that an adding-up constraint is imposed on the uses of time in the lifetime, including schooling, which may induce a negative correlation between years of schooling and years in the market, while producing a positive correlation between years of schooling and the fraction of the post-school lifetime spent in the market. The model is used to interpret empirical analyses based on alternative measures of lifetime labor supply and on alternative specifications of which variables may be treated as exogenous. In the empirical analysis, the retrospective and longitudinal aspects of the newly available National Longitudinal Survey of Women are used to construct a measure of the fraction of the lifetime supplied to the market and measures of the lifetime wage rates of both the husband and the wife. The empirical results take the lifetime model of labor supply seriously, in that the empirical measures of labor supply and wage rates bear a much closer resemblance to the theoretical concepts than the measures typically employed in the literature. The estimates indicate that the "plausible assumptions" required for the true coefficient on fertility in a labor supply equation to be zero are fulfilled. These estimates are compared with those obtained using current measures as proxies for lifetime variables. Based on these estimates, an explanation is offered for the apparent contradiction between the findings of studies using a simultaneous-equations approach, which report no effect of fertility on female labor supply, and the strong depressing effect of children on (current) labor supply obtained from research that treats children as exogenous. Current female hours appear more responsive to husbands' current earnings and female education than is the case with the lifetime variables. There are marked differences in the effects of race. The lifetime hours of white women are only some

  20. Finite driving rate and anisotropy effects in landslide modeling

    SciTech Connect

    Piegari, E.; Cataudella, V.; Di Maio, R.; Milano, L.; Nicodemi, M.

    2006-02-15

    In order to characterize landslide frequency-size distributions and identify hazard scenarios and their possible precursors, we investigate a cellular automaton in which the effects of a finite driving rate and of anisotropy are taken into account. The model is able to reproduce observed features of landslide events, such as power-law distributions, as reported observationally. We analyze the key role of the driving rate and show that, as it is increased, a crossover from power-law to non-power-law behavior occurs. Finally, a systematic investigation of the model under varying anisotropy factors is performed, and the full diagram of its dynamical behaviors is presented.

  1. Modeling heart rate variability including the effect of sleep stages

    NASA Astrophysics Data System (ADS)

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling the human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that, in comparison with real data, the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow modeling of heart rate variability in sleep disorders. This possibility is briefly discussed.
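    The overall construction can be sketched as follows: generate (or take) a hypnogram, then concatenate RR intervals whose statistics switch with the sleep stage. The stage parameters and the simple AR(1) fluctuation below are placeholders; the published model uses considerably richer dynamics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stage-dependent RR-interval statistics (mean s, SD s) -- illustrative
# values, not those fitted by the authors from polysomnography.
STAGE_PARAMS = {"wake": (0.85, 0.05), "REM": (0.90, 0.06),
                "light": (1.00, 0.04), "deep": (1.05, 0.02)}

def synthetic_rr(hypnogram, epoch_s=30.0):
    """Concatenate AR(1)-correlated RR intervals, switching the mean and
    variance with each sleep stage of a (synthetic) hypnogram."""
    rr = []
    for stage in hypnogram:
        mean, sd = STAGE_PARAMS[stage]
        t, x = 0.0, 0.0
        while t < epoch_s:
            x = 0.9 * x + np.sqrt(1 - 0.9**2) * rng.standard_normal()
            beat = max(mean + sd * x, 0.3)   # correlated, physiologic floor
            rr.append(beat)
            t += beat
    return np.array(rr)

# Short synthetic hypnogram: 62 epochs of 30 s each.
hypnogram = ["wake"] * 2 + ["light"] * 20 + ["deep"] * 20 + \
            ["REM"] * 10 + ["light"] * 10
rr = synthetic_rr(hypnogram)
print(f"{rr.size} beats, mean RR {rr.mean():.3f} s, SDNN {rr.std()*1e3:.0f} ms")
```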

  3. Evidence for rule-based processes in the inverse base-rate effect.

    PubMed

    Winman, Anders; Wennerholm, Pia; Juslin, Peter; Shanks, David R

    2005-07-01

    Three studies provide convergent evidence that the inverse base-rate effect (Medin & Edelson, 1988) is mediated by rule-based cognitive processes. Experiment 1 shows that, in contrast to adults, most children who have not yet reached the formal operational stage do not exhibit the inverse base-rate effect. Experiments 2 and 3 demonstrate that an adult sample is a mix of participants relying on associative processes, who categorize according to the base rate, and participants relying on rule-based processes, who exhibit a strong inverse base-rate effect. The distribution of the effect is bimodal, and removing participants independently classified as prone to rule-based processing effectively eliminates the inverse base-rate effect. The implications for current explanations of the inverse base-rate effect are discussed. PMID:16194936

  4. Conservation laws and unidentifiability of rate expressions in biochemical models.

    PubMed

    Anguelova, M; Cedersund, G; Johansson, M; Franzén, C J; Wennberg, B

    2007-07-01

    New experimental techniques in bioscience provide us with high-quality data allowing quantitative mathematical modelling. Parameter estimation is often necessary and, in connection with this, it is important to know whether all parameters can be uniquely estimated from the available data (i.e., whether the model is identifiable). Dealing essentially with models for metabolism, we show how the assumption of an algebraic relation between concentrations may cause parameters to be unidentifiable. If a sufficient data set is available, the problem with unidentifiability arises locally in individual rate expressions. A general method for reparameterisation to identifiable rate expressions is provided, together with a Mathematica code to help with the calculations. The general results are exemplified by four well-cited models for glycolysis.
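
    A minimal numeric illustration of the mechanism described (a conservation law rendering rate parameters unidentifiable), using a generic Michaelis-Menten-style rate law rather than any of the four glycolysis models analysed in the paper:

        import numpy as np

        # Rate law v = Vmax*S/(Km + S + P). If a conservation law fixes S + P = T,
        # the denominator is the constant Km + T, so only the lumped ratio
        # Vmax/(Km + T) is identifiable -- not Vmax and Km separately.
        T = 10.0
        S = np.linspace(0.1, T - 0.1, 5)

        def v(S, Vmax, Km):
            P = T - S
            return Vmax * S / (Km + S + P)

        print(v(S, Vmax=2.0, Km=1.0))   # Vmax/(Km+T) = 2/11
        print(v(S, Vmax=4.0, Km=12.0))  # Vmax/(Km+T) = 4/22 -> identical rates

    Both parameter sets produce identical rate data, so no experiment observing only v and S can distinguish them; reparameterising to the lumped constant restores identifiability.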

  5. FMO-based H.264 frame layer rate control for low bit rate video transmission

    NASA Astrophysics Data System (ADS)

    Cajote, Rhandley D.; Aramvith, Supavadee; Miyanaga, Yoshikazu

    2011-12-01

    The use of flexible macroblock ordering (FMO) in H.264/AVC improves error resiliency at the expense of reduced coding efficiency with added overhead bits for slice headers and signalling. The trade-off is most severe at low bit rates, where header bits occupy a significant portion of the total bit budget. To better manage the rate and improve coding efficiency, we propose enhancements to the H.264/AVC frame layer rate control, which take into consideration the effects of using FMO for video transmission. In this article, we propose a new header bits model, an enhanced frame complexity measure, a bit allocation and a quantization parameter adjustment scheme. Simulation results show that the proposed improvements achieve better visual quality compared with the JM 9.2 frame layer rate control with FMO enabled using a different number of slice groups. Using FMO as an error resilient tool with better rate management is suitable in applications that have limited bandwidth and in error prone environments such as video transmission for mobile terminals.
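
    For context, a sketch of the classic quadratic frame-layer rate model (JVT-G012 style) that controllers of this kind build on; the article's header-bits model and enhanced complexity measure are not shown, and the constants c1, c2 and all inputs below are illustrative assumptions.

        import math

        # Quadratic rate model: R_texture = c1*MAD/Q + c2*MAD/Q^2, solved for the
        # quantization step Q given the bit budget left after header bits.
        def qstep_from_budget(target_bits, header_bits, mad, c1=1.0, c2=0.5):
            r = max(target_bits - header_bits, 1.0)  # bits left for texture
            # r*Q^2 - c1*MAD*Q - c2*MAD = 0, take the positive root
            a, b, c = r, -c1 * mad, -c2 * mad
            return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

        print(qstep_from_budget(target_bits=4000, header_bits=600, mad=8.0))

    The article's contribution is, in effect, to make the header_bits term FMO-aware, since with many slice groups the headers consume a large share of a low bit-rate budget.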

  6. A constitutive model for ballistic gelatin at surgical strain rates.

    PubMed

    Ravikumar, Nishant; Noble, Christopher; Cramphorn, Edward; Taylor, Zeike A

    2015-07-01

    This paper describes a constitutive model for ballistic gelatin at the low strain rates experienced, for example, by soft tissues during surgery. While this material is most commonly associated with high speed projectile penetration and impact investigations, it has also been used extensively as a soft tissue simulant in validation studies for surgical technologies (e.g. surgical simulation and guidance systems), for which loading speeds and the corresponding mechanical response of the material are quite different. We conducted mechanical compression experiments on gelatin specimens at strain rates spanning two orders of magnitude (~0.001-0.1s(-1)) and observed a nonlinear load-displacement history and strong strain rate-dependence. A compact and efficient visco-hyperelastic constitutive model was then formulated and found to fit the experimental data well. An Ogden type strain energy density function was employed for the elastic component. A single Prony exponential term was found to be adequate to capture the observed rate-dependence of the response over multiple strain rates. The model lends itself to immediate use within many commercial finite element packages. PMID:25863009
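
    A minimal sketch of the model form named in the abstract (one-term Ogden elasticity with a single Prony relaxation term), shown for an idealized step-and-hold compression; mu, alpha, g1 and tau are illustrative placeholders, not the fitted values reported in the paper.

        import numpy as np

        mu, alpha = 5.0e3, 10.0          # Pa, Ogden exponent (assumed values)
        g1, tau = 0.3, 5.0               # Prony weight and relaxation time (s)

        def ogden_nominal_stress(lam):
            """Instantaneous uniaxial nominal stress, incompressible one-term Ogden."""
            return mu * (lam ** (alpha - 1.0) - lam ** (-alpha / 2.0 - 1.0))

        def relaxation_modulus(t):
            """g(t) = g_inf + g1*exp(-t/tau), with g_inf = 1 - g1."""
            return (1.0 - g1) + g1 * np.exp(-t / tau)

        lam = 0.9                        # 10% compressive stretch, held constant
        t = np.array([0.0, 1.0, 10.0, 100.0])
        print(ogden_nominal_stress(lam) * relaxation_modulus(t))

    For a general loading history the viscous part becomes a hereditary integral over the strain history, but the single exponential term is what gives the observed rate dependence across the ~0.001-0.1 1/s range.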

  7. Evaluating the Controls on Magma Ascent Rates Through Numerical Modelling

    NASA Astrophysics Data System (ADS)

    Thomas, M. E.; Neuberg, J. W.

    2015-12-01

    The estimation of the magma ascent rate is a key factor in predicting styles of volcanic activity and relies on understanding how strongly the ascent rate is controlled by different magmatic parameters. The ability to link potential changes in such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters, such as the magmatic water content, temperature or bulk magma composition, on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. The results indicate that potential changes to conduit geometry and excess pressure in the magma chamber are amongst the dominant controlling variables that affect the ascent rate, but the single most important parameter is the volatile content (assumed in this case to be water only). Modelling this parameter across the range of reported values causes changes in the calculated ascent velocities of up to 800%, triggering fluctuations in ascent rates that span the potential threshold between effusive and explosive eruptions.

  8. A microphysical model explains rate-and-state friction

    NASA Astrophysics Data System (ADS)

    Chen, Jianye; Spiers, Christopher J.

    2015-04-01

    The rate-and-state friction (RSF) laws were originally developed as a phenomenological description of the frictional behavior observed in lab experiments. In previous studies, the empirical RSF laws have been extensively and quite successfully applied to fault mechanics. However, these laws cannot readily be envisioned in terms of the underlying physics. There are several critical discrepancies between seismological constraints on RSF behavior associated with earthquakes and lab-derived RSF parameters, in particular regarding the static stress drop and characteristic slip distance associated with seismic events. Moreover, lab friction studies can address only limited fault topographies, displacements, experimental durations and P-T conditions, which means that scale issues, and especially processes like dilatation and fluid-rock interaction, cannot be fully taken into account. Without a physical basis accounting for such effects, extrapolation of lab-derived RSF data to nature involves significant, often unknown uncertainties. In order to more reliably apply experimental results to natural fault zones, and notably to extrapolate lab data beyond laboratory pressure, temperature and velocity conditions, an understanding of the microphysical mechanisms governing fault frictional behavior is required. Here, following some pioneering efforts (e.g. Niemeijer and Spiers, 2007; Den Hartog and Spiers, 2014), a mechanism-based microphysical model is developed for describing the frictional behavior of carbonate fault gouge, assuming that the frictional behavior seen in lab experiments is controlled by competing processes of intergranular slip versus contact creep by pressure solution. The model basically consists of two governing equations derived from energy/entropy balance considerations and the kinematic relations that apply to a granular fault gouge undergoing shear and dilation/compaction. These two equations can be written as $\dot{\tau}/K = V_{imp} - L_t[\lambda\dot{\gamma}^{ps}_{sb} + (1-$ …
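
    For readers unfamiliar with the empirical framework the model seeks to underpin, here is the classical RSF formulation with the Dieterich (aging) state law; the parameter values are generic lab-scale numbers, not fits to carbonate gouge.

        import numpy as np

        # mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),  d(theta)/dt = 1 - V*theta/Dc
        mu0, a, b = 0.6, 0.010, 0.015
        V0, Dc = 1.0e-6, 1.0e-5      # reference slip rate (m/s), slip distance (m)

        def velocity_step(V, dt=1.0e-3, n=20000):
            theta = Dc / V0          # start at steady state for V0
            mu = np.empty(n)
            for j in range(n):
                theta += dt * (1.0 - V * theta / Dc)   # aging (Dieterich) law
                mu[j] = mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)
            return mu

        mu = velocity_step(10.0 * V0)    # ten-fold velocity step
        # the direct effect a*ln(10) decays to the steady-state change (a-b)*ln(10)
        print(mu[0], mu[-1], mu0 + (a - b) * np.log(10.0))

    With b > a the simulated gouge is velocity weakening, the regime relevant to earthquake nucleation; a microphysical model such as the one described must reproduce exactly this transient-plus-steady-state response.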

  9. High rate constitutive modeling of aluminium alloy tube

    NASA Astrophysics Data System (ADS)

    Salisbury, C. P.; Worswick, M. J.; Mayer, R.

    2006-08-01

    As the need for fuel efficient automobiles increases, car designers are investigating light-weight materials for automotive bodies that will reduce the overall automobile weight. Aluminium alloy tube is a desirable material to use in automotive bodies due to its light weight. However, aluminium suffers from lower formability than steel and its energy absorption ability in a crash event after a forming operation is largely unknown. As part of a larger study on the relationship between crashworthiness and forming processes, constitutive models for 3 mm AA5754 aluminium tube were developed. A nominal strain rate of 100/s is often used to characterize overall automobile crash events, whereas strain rates on the order of 1000/s can occur locally. Therefore, tests were performed at quasi-static rates using an Instron test fixture and at strain rates of 500/s to 1500/s using a tensile split Hopkinson bar. High rate testing was then conducted at rates of 500/s, 1000/s and 1500/s at 21 °C, 150 °C and 300 °C. The generated data was then used to determine the constitutive parameters for the Johnson-Cook and Zerilli-Armstrong material models.
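
    The Johnson-Cook flow stress named here has a standard closed form; a sketch follows, with placeholder constants, since the fitted AA5754 parameters are not given in the abstract.

        import numpy as np

        # Johnson-Cook: sigma = (A + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m)
        A, B, n = 100.0e6, 300.0e6, 0.3        # Pa, Pa, hardening exponent (assumed)
        C, eps_dot0 = 0.015, 1.0               # rate sensitivity, ref. rate (1/s)
        m, T_room, T_melt = 1.0, 294.0, 880.0  # thermal softening, K

        def johnson_cook(eps, eps_dot, T):
            T_star = (T - T_room) / (T_melt - T_room)
            return ((A + B * eps ** n)
                    * (1.0 + C * np.log(eps_dot / eps_dot0))
                    * (1.0 - T_star ** m))

        # flow stress at 10% strain, quasi-static vs. split-Hopkinson-bar rates
        for rate in (0.001, 500.0, 1500.0):
            print(rate, johnson_cook(0.10, rate, T=294.0))

    Fitting the model to the test matrix described amounts to regressing (A, B, n, C, m) against stress-strain curves across the rate and temperature grid.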

  10. Increasing Response Rates to Web-Based Surveys

    ERIC Educational Resources Information Center

    Monroe, Martha C.; Adams, Damian C.

    2012-01-01

    We review a popular method for collecting data--Web-based surveys. Although Web surveys are popular, one major concern is their typically low response rates. Using the Dillman et al. (2009) approach, we designed, pre-tested, and implemented a survey on climate change with Extension professionals in the Southeast. The Dillman approach worked well,…

  11. Stochastic heart-rate model can reveal pathologic cardiac dynamics

    NASA Astrophysics Data System (ADS)

    Kuusela, Tom

    2004-03-01

    A simple one-dimensional Langevin-type stochastic difference equation can simulate the heart-rate fluctuations in a time scale from minutes to hours. The model consists of a deterministic nonlinear part and a stochastic part typical of Gaussian noise, and both parts can be directly determined from measured heart-rate data. Data from healthy subjects typically exhibit the deterministic part with two or more stable fixed points. Studies of 15 congestive heart-failure subjects reveal that the deterministic part of pathologic heart dynamics has no clear stable fixed points. Direct simulations of the stochastic model for normal and pathologic cases can produce statistical parameters similar to those of real subjects. Results directly indicate that pathologic situations simplify the heart-rate control system.
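
    A toy version of the kind of model described, with a cubic drift giving two stable fixed points plus Gaussian noise; the drift shape and noise level are illustrative assumptions, not values estimated from patient data.

        import numpy as np

        rng = np.random.default_rng(1)

        def drift(x):
            # stable fixed points at 60 and 90 bpm, unstable point at 75 bpm
            return -1e-3 * (x - 60.0) * (x - 75.0) * (x - 90.0)

        # Langevin-type difference equation: x_{n+1} = x_n + f(x_n) + noise
        hr = np.empty(5000)
        hr[0] = 65.0
        for i in range(1, len(hr)):
            hr[i] = hr[i - 1] + drift(hr[i - 1]) + rng.normal(0.0, 1.0)

        print(hr.mean(), hr.min(), hr.max())

    The paper's diagnostic idea corresponds to estimating drift(x) directly from measured data and checking whether it retains distinct stable fixed points; in the pathologic recordings it reportedly does not.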

  12. Mixed Rasch Modeling of the Self-Rating Depression Scale

    ERIC Educational Resources Information Center

    Hong, Sehee; Min, Sae-Young

    2007-01-01

    In this study, mixed Rasch modeling was used on the Self-Rating Depression Scale (SDS), a widely used measure of depression, among a non-Western sample of 618 Korean college students. The results revealed three latent classes and confirmed the unidimensionality of the SDS. In addition, there was a significant effect for gender in terms of class…

  13. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2014-05-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. The hazardous consequences extend to national and continental scales. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents - both for research purposes and, more importantly, to determine the immediate threat to the population. However, the assessments of the regional radionuclide activity concentrations and the individual exposure to radiation dose are subject to several uncertainties, for example, the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The quantification of the source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to realise a feasible estimation of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are therefore combined with atmospheric transport models. The release rates of radionuclides at the accident site are then obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method and hence of the resulting source term depends amongst others on the availability, reliability and the resolution in time and space of the observations. Radionuclide activity concentrations are observed on a
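
    A minimal sketch of the inverse step in the spirit of Stohl et al. (2012): observations y are linear in the unknown time-resolved release rates q through a source-receptor matrix M from a transport model, and q is recovered by regularized least squares. The matrix, the true source and the regularization weight below are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        M = rng.random((50, 10))          # 50 observations, 10 release intervals
        q_true = np.array([0, 0, 5, 8, 3, 1, 0, 0, 0, 0], dtype=float)
        y = M @ q_true + rng.normal(0.0, 0.1, 50)

        # Tikhonov-regularized least squares: min ||M q - y||^2 + lam ||q||^2
        lam = 1e-2                        # regularization weight (assumed)
        A = np.vstack([M, np.sqrt(lam) * np.eye(10)])
        b = np.concatenate([y, np.zeros(10)])
        q_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        q_hat = np.clip(q_hat, 0.0, None) # crude non-negativity constraint
        print(np.round(q_hat, 2))

    Using gamma dose rates instead of activity concentrations, as this work proposes, changes the forward operator M but not the structure of the inversion.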

  14. Characterization of infiltration rates from landfills: supporting groundwater modeling efforts.

    PubMed

    Moo-Young, Horace; Johnson, Barnes; Johnson, Ann; Carson, David; Lew, Christine; Liu, Salley; Hancocks, Katherine

    2004-01-01

    The purpose of this paper is to review the literature to characterize infiltration rates from landfill liners to support groundwater modeling efforts. The focus of this investigation was on collecting studies that describe the performance of liners 'as installed' or 'as operated'. This document reviews the state of the science and practice on the infiltration rate through compacted clay liners (CCL) for 149 sites and geosynthetic clay liners (GCL) for 1 site. In addition, it reviews the leakage rate through geomembrane (GM) liners and composite liners for 259 sites. For compacted clay liners (CCL), there was limited information on infiltration rates (i.e., only 9 sites reported infiltration rates); thus, it was difficult to develop a national distribution. The field hydraulic conductivities for natural clay liners range from 1 x 10(-9) cm s(-1) to 1 x 10(-4) cm s(-1), with an average of 6.5 x 10(-8) cm s(-1). There was limited information on geosynthetic clay liners. For composite lined and geomembrane systems, the leak detection system flow rates were utilized. The average monthly flow rate for composite liners ranged from 0 to 32 lphd for geomembrane and GCL systems and from 0 to 1410 lphd for geomembrane and CCL systems. The increased infiltration for the geomembrane and CCL system may be attributed to consolidation water from the clay.

  15. Finite hedging in field theory models of interest rates.

    PubMed

    Baaquie, Belal E; Srikant, Marakani

    2004-03-01

    We use path integrals to calculate hedge parameters and efficacy of hedging in a quantum field theory generalization of the Heath, Jarrow, and Morton [Robert Jarrow, David Heath, and Andrew Morton, Econometrica 60, 77 (1992)] term structure model, which parsimoniously describes the evolution of imperfectly correlated forward rates. We calculate, within the model specification, the effectiveness of hedging over finite periods of time, and obtain the limiting case of instantaneous hedging. We use empirical estimates for the parameters of the model to show that a low-dimensional hedge portfolio is quite effective.

  16. Exploring Latent Class Based on Growth Rates in Number Sense Ability

    ERIC Educational Resources Information Center

    Kim, Dongil; Shin, Jaehyun; Lee, Kijyung

    2013-01-01

    The purpose of this study was to explore latent class based on growth rates in number sense ability by using latent growth class modeling (LGCM). LGCM is one of the noteworthy methods for identifying growth patterns of the progress monitoring within the response to intervention framework in that it enables us to analyze latent sub-groups based not…

  17. Estimation of Eddy Dissipation Rates from Mesoscale Model Simulations

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2012-01-01

    The Eddy Dissipation Rate is an important metric for representing the intensity of atmospheric turbulence and is used as an input parameter for predicting the decay of aircraft wake vortices. In this study, the forecasts of eddy dissipation rates obtained from a current state-of-the-art mesoscale model are evaluated for terminal area applications. The Weather Research and Forecast mesoscale model is used to simulate the planetary boundary layer at high horizontal and vertical mesh resolutions. The Bougeault-Lacarrère and Mellor-Yamada-Janjić schemes implemented in the Weather Research and Forecast model are evaluated against data collected during the National Aeronautics and Space Administration's Memphis Wake Vortex Field Experiment. Comparisons with other observations are included as well.

  18. ESTIMATION OF THE RATE OF VOC EMISSIONS FROM SOLVENT-BASED INDOOR COATING MATERIALS BASED ON PRODUCT FORMULATION

    EPA Science Inventory

    Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...

  19. Micromechanics-Based Permeability Evolution in Brittle Materials at High Strain Rates

    NASA Astrophysics Data System (ADS)

    Perol, Thibaut; Bhat, Harsha S.

    2016-08-01

    We develop a micromechanics-based permeability evolution model for brittle materials at high strain rates (≥ 100 s^{-1}). Extending the mechanical constitutive description of brittle solids, whose constitutive response is governed by micro-cracks, to undrained deformation, we relate the damage-induced strains to micro-crack aperture. We then use an existing permeability model to evaluate the permeability evolution. This model predicts both the percolative and connected regimes of permeability evolution of Westerly Granite during triaxial loading at high strain rate. It can also simulate the pore pressure history during earthquake coseismic dynamic ruptures under undrained conditions.

  20. Performance Invalidity Base Rates Among Healthy Undergraduate Research Participants.

    PubMed

    Ross, Thomas P; Poston, Ashley M; Rein, Patricia A; Salvatore, Andrew N; Wills, Nathan L; York, Taylor M

    2016-02-01

    Few studies have examined base rates of suboptimal effort among healthy, undergraduate students recruited for neuropsychological research. An and colleagues (2012, Conducting research with non-clinical healthy undergraduates: Does effort play a role in neuropsychological test performance? Archives of Clinical Neuropsychology, 27, 849-857) reported high rates of performance invalidity (30.8%-55.6%), calling into question the validity of findings generated from samples of college students. In contrast, subsequent studies have reported much lower base rates, ranging from 2.6% to 12%. The present study replicated and extended previous work by examining the performance of 108 healthy undergraduates on the Dot Counting Test, Victoria Symptom Validity Test, Word Memory Test, and a brief battery of neuropsychological measures. During initial testing, 8.3% of the sample scored below cutoffs on at least one Performance Validity Test, while 3.7% were classified as invalid at Time 2 (M interval = 34.4 days). The present findings add to a growing number of studies that suggest performance invalidity base rates in samples of non-clinical, healthy college students are much lower than An and colleagues' initial findings. Although suboptimal effort is much less problematic than suggested by An and colleagues, recent reports as high as 12% indicate that including measures of effort may be of value when using college students as participants. Methodological issues and recommendations for future research are presented.

  1. A model for predicting wear rates in tooth enamel.

    PubMed

    Borrero-Lopez, Oscar; Pajares, Antonia; Constantino, Paul J; Lawn, Brian R

    2014-09-01

    It is hypothesized that wear of enamel is sensitive to the presence of sharp particulates in oral fluids and masticated foods. To this end, a generic model for predicting wear rates in brittle materials is developed, with specific application to tooth enamel. Wear is assumed to result from an accumulation of elastic-plastic micro-asperity events. Integration over all such events leads to a wear rate relation analogous to Archard's law, but with allowance for variation in asperity angle and compliance. The coefficient K in this relation quantifies the wear severity, with an arbitrary distinction between 'mild' wear (low K) and 'severe' wear (high K). Data from the literature and in-house wear-test experiments on enamel specimens in lubricant media (water, oil) with and without sharp third-body particulates (silica, diamond) are used to validate the model. Measured wear rates can vary over several orders of magnitude, depending on contact asperity conditions, accounting for the occurrence of severe enamel removal in some human patients (bruxing). Expressions for the depth removal rate and number of cycles to wear down occlusal enamel in the low-crowned tooth forms of some mammals are derived, with tooth size and enamel thickness as key variables. The role of 'hard' versus 'soft' food diets in determining evolutionary paths in different hominin species is briefly considered. A feature of the model is that it does not require recourse to specific material removal mechanisms, although processes involving microplastic extrusion and microcrack coalescence are indicated.
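
    The Archard-type relation underlying the model can be stated compactly; the sketch below uses the classical form V = K*W*s/H with round illustrative numbers (the enamel hardness is a typical literature-scale value, not the paper's data).

        # Archard-style wear: removed volume V = K * load * sliding distance / hardness.
        # The severity coefficient K spans orders of magnitude from 'mild' to 'severe'.
        H_ENAMEL = 3.5e9                  # indentation hardness of enamel (Pa), approx.

        def wear_volume(K, load_N, distance_m, hardness_Pa=H_ENAMEL):
            return K * load_N * distance_m / hardness_Pa

        for K in (1e-6, 1e-4, 1e-2):      # mild -> severe
            print(K, wear_volume(K, load_N=50.0, distance_m=1.0), "m^3")

    The paper's refinement is to derive K itself from the statistics of asperity angle and compliance rather than treat it as a purely empirical constant.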

  2. 77 FR 24198 - Notice of Revocation of Market-Based Rate Authority and Termination of Market-Based Rate Tariffs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-23

    From the Federal Register Online via the Government Publishing Office. DEPARTMENT OF ENERGY, Federal Energy Regulatory Commission: Notice of Revocation of Market-Based Rate Authority and Termination of Market-Based Rate Tariffs. Docket excerpt: ... ER05-419-000; Verde Renewable Energy, Inc, ER07-48-000; Vesta Capital Partners LP, ER05-1434-000; Vesta...

  3. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
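
    The ETAS conditional intensity referenced (Ogata, 1988) has a standard temporal form; the sketch below uses generic textbook-scale parameter values, not the optimized multi-zone fit described in the abstract.

        import numpy as np

        # lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(M_i - Mc)) / (t - t_i + c)^p
        mu, K, alpha, c, p, Mc = 0.2, 0.02, 1.0, 0.01, 1.1, 3.0

        def etas_rate(t, event_times, event_mags):
            past = event_times < t
            dt = t - event_times[past]
            return mu + np.sum(K * np.exp(alpha * (event_mags[past] - Mc))
                               / (dt + c) ** p)

        times = np.array([1.0, 2.5, 2.9])    # days
        mags = np.array([4.0, 3.5, 5.0])
        print(etas_rate(3.0, times, mags))   # rate just after an M5 event

    In the approach described, the background term mu is what varies between induced-seismicity zones, while a single set of triggering parameters (K, alpha, c, p) is shared across zones.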

  4. Predation rates by North Sea cod (Gadus morhua) - Predictions from models on gastric evacuation and bioenergetics

    USGS Publications Warehouse

    Hansson, S.; Rudstam, L. G.; Kitchell, J.F.; Hilden, M.; Johnson, B.L.; Peppard, P.E.

    1996-01-01

    We compared four different methods for estimating predation rates by North Sea cod (Gadus morhua). Three estimates, based on gastric evacuation rates, came from an ICES multispecies working group and the fourth from a bioenergetics model. The bioenergetics model was developed from a review of literature on cod physiology. The three gastric evacuation rate models produced very different prey consumption estimates for small (2 kg) fish. For most size and age classes, the bioenergetics model predicted food consumption rates intermediate to those predicted by the gastric evacuation models. Using the standard ICES model and the average population abundance and age structure for 1974-1989, annual prey consumption by the North Sea cod population (age greater than or equal to 1) was 840 kilotons. The other two evacuation rate models produced estimates of 1020 and 1640 kilotons, respectively. The bioenergetics model estimate was 1420 kilotons. The major differences between models were due to consumption rate estimates for younger age groups of cod. (C) 1996 International Council for the Exploration of the Sea

  5. The Modellers' Halting Foray into Ecological Theory: Or, What is This Thing Called 'Growth Rate'?

    PubMed

    Deveau, Michael; Karsten, Richard; Teismann, Holger

    2015-06-01

    This discussion paper describes the attempt of an imagined group of non-ecologists ("Modellers") to determine the population growth rate from field data. The Modellers wrestle with the multiple definitions of the growth rate available in the literature and the fact that, in their modelling, it appears to be drastically model-dependent, which seems to throw into question the very concept itself. Specifically, they observe that six representative models used to capture the data produce growth-rate values, which differ significantly. Almost ready to concede that the problem they set for themselves is ill-posed, they arrive at an alternative point of view that not only preserves the identity of the concept of the growth rate, but also helps discriminate between competing models for capturing the data. This is accomplished by assessing how robustly a given model is able to generate growth-rate values from randomized time-series data. This leads to the proposal of an iterative approach to ecological modelling in which the definition of theoretical concepts (such as the growth rate) and model selection complement each other. The paper is based on high-quality field data of mites on apple trees and may be called a "data-driven opinion piece".
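
    The Modellers' central observation, that the fitted 'growth rate' is model-dependent, is easy to reproduce; below, two standard growth models are fitted to the same synthetic counts (the data and starting values are invented for illustration).

        import numpy as np
        from scipy.optimize import curve_fit

        # synthetic counts generated from a logistic curve (N0=5, r=0.45, K=400)
        t = np.arange(0.0, 10.0)
        N = 5.0 * np.exp(0.45 * t) / (1.0 + 5.0 * (np.exp(0.45 * t) - 1.0) / 400.0)

        def exponential(t, N0, r):
            return N0 * np.exp(r * t)

        def logistic(t, N0, r, K):
            return K / (1.0 + (K / N0 - 1.0) * np.exp(-r * t))

        (_, r_exp), _ = curve_fit(exponential, t, N, p0=(5.0, 0.3))
        (_, r_log, _), _ = curve_fit(logistic, t, N, p0=(5.0, 0.3, 300.0))
        print(r_exp, r_log)   # two models, two 'growth rates' for the same data

    The paper's proposed remedy is to prefer the model whose growth-rate estimate is most robust under randomization of the time series, rather than to pick a definition of r a priori.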

  6. A master curve-mechanism based approach to modeling the effects of constraint, loading rate and irradiation on the toughness-temperature behavior of a V-4Cr-4Ti alloy

    SciTech Connect

    Odette, G.R.; Donahue, E.; Lucas, G.E.; Sheckherd, J.W.

    1996-10-01

    The influence of loading rate and constraint on the effective fracture toughness as a function of temperature, K_e(T), of the fusion program heat of V-4Cr-4Ti was measured using subsized, three point bend specimens. The constitutive behavior was characterized as a function of temperature and strain rate using small tensile specimens. Data in the literature on this alloy was also analysed to determine the effect of irradiation on K_e(T) and the energy-temperature (E-T) curves measured in subsized Charpy V-notch tests. It was found that V-4Cr-4Ti undergoes "normal" stress-controlled cleavage fracture below a temperature marking a sharp ductile-to-brittle transition. The transition temperature is increased by higher loading rates, irradiation hardening and triaxial constraint. Shifts in a reference transition temperature due to higher loading rates and irradiation can be reasonably predicted by a simple equivalent yield stress model. These results also suggest that size and geometry effects, which mediate constraint, can be modeled by combining local critical stressed area (σ*/A*) fracture criteria with finite element method simulations of crack tip stress fields. The fundamental understanding reflected in these models will be needed to develop K_e(T) curves for a range of loading rates, irradiation conditions, structural size scales and geometries relying (in large part) on small specimen tests. Indeed, it may be possible to develop a master K_e(T) curve-shift method to account for these variables. Such reliable and flexible failure assessment methods are critical to the design and safe operation of defect tolerant vanadium structures.

  7. Generalized Treanor-Marrone model for state-specific dissociation rate coefficients

    NASA Astrophysics Data System (ADS)

    Kunova, O.; Kustova, E.; Savelev, A.

    2016-08-01

    We propose a simple and accurate model for state-specific dissociation rate coefficients based on the widely used Treanor-Marrone model. It takes into account the dependence of its parameter on temperature and vibrational level and can be used with an arbitrary vibrational ladder. The model is validated by comparisons with state-specific dissociation rate coefficients of O2 and N2 obtained using molecular dynamics, and its good accuracy is demonstrated. Non-equilibrium kinetics of O2/O and N2/N mixtures under heat bath conditions is studied; applying the optimized Treanor-Marrone model leads to more efficient dissociation and vibrational relaxation.
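
    For orientation, a sketch of the baseline Treanor-Marrone nonequilibrium factor that the paper generalizes, written from the commonly cited form; the harmonic N2 ladder and the choice U = D/(6 k_B) are assumptions for illustration, and the paper's point is precisely that U should depend on temperature and vibrational level.

        import numpy as np

        # k_i(T) = Z_i(T,U) * k_eq(T), with
        #   Z_i(T,U) = [Q(T) / Q(-U)] * exp(eps_i * (1/T + 1/U) / k_B),
        # where Q is the vibrational partition function over the ladder eps_i.
        k_B = 1.380649e-23                       # J/K
        eV = 1.602176634e-19                     # J
        eps = 0.2889 * eV * np.arange(34)        # harmonic N2 levels (J), assumed
        D = 9.76 * eV                            # N2 dissociation energy (J)

        def Z(i, T, U):
            QT = np.sum(np.exp(-eps / (k_B * T)))
            QU = np.sum(np.exp(eps / (k_B * U)))
            return QT / QU * np.exp(eps[i] * (1.0 / T + 1.0 / U) / k_B)

        U = D / (6.0 * k_B)                      # a common fixed choice for U
        print(Z(0, 5000.0, U), Z(20, 5000.0, U)) # low level suppressed, high enhanced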

  8. Modeling NAPL dissolution fingering with upscaled mass transfer rate coefficients

    NASA Astrophysics Data System (ADS)

    Imhoff, Paul T.; Farthing, Matthew W.; Miller, Cass T.

    2003-10-01

    The dissolution of nonaqueous phase liquids (NAPLs) at residual saturation in porous media has sometimes resulted in the development of preferential dissolution pathways or NAPL dissolution fingers. While NAPL dissolution fingering may be modeled using numerical simulators with fine discretization, this approach is computationally intensive. We derived an expression for an upscaled mass transfer rate coefficient that accounts for the growth of dissolution fingers within porous media contaminated uniformly with residual NAPL. This expression was closely related to the lengthening of the dissolution front. Data from physical experiments and numerical simulations in two dimensions were used to examine the growth of the dissolution front and the corresponding upscaled mass transfer rate coefficient. Using this upscaled mass transfer rate coefficient, the time when dissolution fingering results in a reduction in the overall mass transfer rate and thus controls the rate of NAPL dissolution was determined. This crossover time is a convenient parameter for assessing the influence of dissolution fingering on NAPL removal. For the physical experiments and numerical simulations analyzed in this study, the crossover time to dissolution fingering control always occurred before the dissolution front had moved 14 cm within NAPL-contaminated porous media, which is small compared to the scale of typical systems of concern. To verify the utility of this approach, data from a three-dimensional physical experiment were predicted reasonably well using an upscaled mass transfer rate coefficient that was determined independently from this experiment.

  9. Rapid calculation of functional maps of glucose metabolic rate and individual model rate parameters from serial 2-FDG images

    SciTech Connect

    Koeppe, R.A.; Holden, J.E.; Hutchins, G.D.

    1985-05-01

    The authors have developed a method for the rapid pixel-by-pixel estimation of glucose metabolic rate from a dynamic sequence of PCT images acquired over 40 minutes following venous bolus injection of 2-deoxy-2-fluoro-D-glucose (2-FDG). The calculations are based on the conventional four-parameter model. The dephosphorylation rate (k4) cannot be reliably estimated from only 40 minutes of data; however, neglecting dephosphorylation can nonetheless introduce significant biases into the parameter estimation process. In the authors' method, the rate is constrained to fall within a small range about a presumed value. Computer simulation studies show that this constraint greatly reduces the systematic biases in the other three fitted parameters and in the metabolic rate that arise from the assumption of no dephosphorylation. The parameter estimation scheme used is formally identical to one originally developed for dynamic methods of cerebral blood flow estimation. Estimation of metabolic rate and the individual model rate parameters k1, k2, and k3 can be carried out for each pixel sequence of a 100 x 100 pixel image in less than two minutes on our PDP 11/60 minicomputer with floating point processor. While the maps of k2 and k3 are quite noisy, accurate estimates of average values can be attained for regions of a few cm^2. The maps of metabolic rate offer many advantages in addition to that of direct visualization. These include improved statistical precision and the avoidance of averaging failure in the fitting of heterogeneous regions.
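
    The rate constants named here combine into the metabolic rate through a standard steady-state result for the FDG compartment model; the numeric values below (typical-magnitude rate constants, plasma glucose and lumped constant) are illustrative assumptions, not the paper's estimates.

        # Influx constant and metabolic rate from the three uptake rate constants:
        #   Ki = K1*k3 / (k2 + k3),   MRglc = (Cp / LC) * Ki
        K1, k2, k3 = 0.10, 0.13, 0.06   # 1/min-scale illustrative values
        Cp, LC = 5.0, 0.52              # plasma glucose (mM), lumped constant
        Ki = K1 * k3 / (k2 + k3)
        print(Ki, Cp / LC * Ki)

    The pixel-by-pixel method described amounts to evaluating this combination from locally fitted K1, k2, k3 at every pixel, with k4 held near its presumed value.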

  10. Proto-object based rate control for JPEG2000: an approach to content-based scalability.

    PubMed

    Xue, Jianru; Li, Ce; Zheng, Nanning

    2011-04-01

    The JPEG2000 system provides scalability with respect to quality, resolution and color component in the transfer of images. However, scalability with respect to semantic content is still lacking. We propose a biologically plausible salient region based bit allocation mechanism within the JPEG2000 codec for the purpose of augmenting scalability with respect to semantic content. First, an input image is segmented into several salient proto-objects (a region that possibly contains a semantically meaningful physical object) and background regions (a region that contains no object of interest) by modeling visual focus of attention on salient proto-objects. Then, a novel rate control scheme distributes a target bit rate to each individual region according to its saliency, and constructs quality layers of proto-objects for the purpose of more precise truncation, comparable to the original quality layers in the standard. Empirical results show that the suggested approach adds content scalability to the JPEG2000 system, as well as the functionality of selectively encoding, decoding, and manipulating each individual proto-object in the image, with only slight modifications to the JPEG2000 standard. Furthermore, the proposed rate control approach efficiently reduces the computational complexity and memory usage, and maintains image quality at a level comparable to the conventional post-compression rate distortion (PCRD) optimum truncation algorithm for JPEG2000.

  11. Examining rating scales using Rasch and Mokken models for rater-mediated assessments.

    PubMed

    Wind, Stephanie A

    2014-01-01

    A variety of methods for evaluating the psychometric quality of rater-mediated assessments have been proposed, including rater effects based on latent trait models (e.g., Engelhard, 2013; Wolfe, 2009). Although information about rater effects contributes to the interpretation and use of rater-assigned scores, it is also important to consider ratings in terms of the structure of the rating scale on which scores are assigned. Further, concern with the validity of rater-assigned scores necessitates investigation of these quality control indices within student subgroups, such as gender, language, and race/ethnicity groups. Using a set of guidelines for evaluating the interpretation and use of rating scales adapted from Linacre (1999, 2004), this study demonstrates methods that can be used to examine rating scale functioning within and across student subgroups with indicators from Rasch measurement theory (Rasch, 1960) and Mokken scale analysis (Mokken, 1971). Specifically, this study illustrates indices of rating scale effectiveness based on Rasch models and models adapted from Mokken scaling, and considers whether the two approaches to evaluating the interpretation and use of rating scales lead to comparable conclusions within the context of a large-scale rater-mediated writing assessment. Major findings suggest that indices of rating scale effectiveness based on a parametric and nonparametric approach provide related, but slightly different, information about the structure of rating scales. Implications for research, theory, and practice are discussed. PMID:24950531

  12. Queuing model of a traffic bottleneck with bimodal arrival rate

    NASA Astrophysics Data System (ADS)

    Woelki, Marko

    2016-06-01

    This paper revisits the problem of tuning the density in a traffic bottleneck by reduction of the arrival rate when the queue length exceeds a certain threshold, studied recently for variants of totally asymmetric simple exclusion process (TASEP) and Burgers equation. In the present approach, a simple finite queuing system is considered and its contrasting “phase diagram” is derived. One can observe one jammed region, one low-density region and one region where the queue length is equilibrated around the threshold. Despite the simplicity of the model the physics is in accordance with the previous approach: The density is tuned at the threshold if the exit rate lies in between the two arrival rates.
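
    A minimal discrete-time simulation of the threshold mechanism described; the probabilities below are invented, with the exit probability chosen between the two arrival probabilities so that the queue equilibrates around the threshold, matching the regime reported in the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        p_high, p_low, beta, threshold = 0.7, 0.2, 0.5, 50

        q, hist = 0, []
        for _ in range(100000):
            p = p_low if q > threshold else p_high   # bimodal arrival rate
            q += rng.random() < p                    # arrival
            if q > 0 and rng.random() < beta:
                q -= 1                               # service / exit
            hist.append(q)

        print(np.mean(hist[1000:]))                  # hovers near the threshold

    Setting beta above p_high reproduces the low-density region, and setting it below p_low the jammed region, the other two parts of the "phase diagram".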

  13. Predicting online ratings based on the opinion spreading process

    NASA Astrophysics Data System (ADS)

    He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo

    2015-10-01

    Predicting users' online ratings is a long-standing challenge and has drawn much attention. In this paper, we present a rating prediction method that combines the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method produces a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both opinion sender and receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm has better accuracy than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method can further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, on the Movielens and Netflix data sets, the corresponding algorithmic accuracy (MAE and RMSE) is improved by 11.26% and 8.84%, and by 13.49% and 10.52%, compared to the item average method, respectively.
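
    For reference, the standard user-based collaborative filtering baseline that the proposed method is compared against, on a tiny invented rating matrix; the opinion-spreading similarity itself is not reproduced here, cosine similarity stands in for it.

        import numpy as np

        R = np.array([[5, 3, 0, 1],       # 0 = unrated
                      [4, 0, 0, 1],
                      [1, 1, 0, 5],
                      [1, 0, 4, 4.]])

        def predict(R, u, i):
            """Mean-centered user-based CF prediction for user u, item i."""
            mask = R > 0
            means = R.sum(1) / np.maximum(mask.sum(1), 1)
            num, den = 0.0, 0.0
            for v in range(R.shape[0]):
                if v == u or not mask[v, i]:
                    continue
                common = mask[u] & mask[v]
                if not common.any():
                    continue
                sim = np.dot(R[u, common], R[v, common]) / (
                    np.linalg.norm(R[u, common]) * np.linalg.norm(R[v, common]))
                num += sim * (R[v, i] - means[v])
                den += abs(sim)
            return means[u] + (num / den if den else 0.0)

        print(predict(R, u=0, i=2))       # predicted rating for user 0, item 2

    The paper's method replaces sim with the amount of opinion diffused between the two users, weighted by their degrees through the parameter λ.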

  14. Computational modeling of dynamic mechanical properties of pure polycrystalline magnesium under high loading strain rates

    NASA Astrophysics Data System (ADS)

    Li, Qizhen

    2015-09-01

    Computational simulations were performed to investigate the dynamic mechanical behavior of pure polycrystalline magnesium under different high loading strain rates with the values of 800, 1000, 2000, and 3600 s-1. The Johnson-Cook model was utilized in the simulations based on finite element modeling. The results showed that the simulations provided well-matched predictions of the material behavior such as the strain rate-time history, the stress-strain curve, and the temperature increase. Under high loading strain rates, the tested material experienced linear strain hardening at the early stage of plastic deformation, increased strain hardening at the intermediate plastic deformation region, and decreased strain hardening at the region before fracture. The strain hardening rates for the studied high loading strain rate cases do not vary much with the change of strain rates.

  15. Calibration of hydraulic models: effects of rating-curve uncertainty

    NASA Astrophysics Data System (ADS)

    Domeneghetti, Alessio; Castellarin, Attilio; Brath, Armando

    2010-05-01

    This research focuses on the uncertainty of rating curves and how this uncertainty propagates to Manning's roughness coefficient during the calibration of numerical hydraulic models. Rating curves, relating stage and flow discharge, are traditionally used for describing boundary conditions. The uncertainty associated with rating curves is often neglected, and generally considered to be less important than other factors (see e.g., Di Baldassarre and Montanari, HESS, 2009). We performed a series of simulation experiments aimed at: (1) quantitatively assessing the uncertainty of the curves; (2) investigating its effects on the calibration of Manning's roughness coefficient. We used a quasi-two-dimensional (quasi-2D) model of the middle-lower reach of the River Po (Northern Italy) to simulate 10 different historical flood events for the hydrometric river cross-section located at Cremona. Using the simulated data, we mimicked 15 measurement campaigns for each flood event and corrupted the discharge values according to the indications on measurement campaigns and errors reported in the literature (i.e., EU. ISO EN 748, 1997). We then constructed the 90% confidence interval for the synthetic rating curves and performed an additional set of model runs downstream of the Cremona cross-section to assess how the uncertainty of rating curves affects the Manning coefficients estimated during the calibration phase. The results of the study show that the variation of Manning's roughness coefficient resulting from the rating-curve uncertainty is significant. This variation is analysed and discussed relative to the variability of Manning's coefficient reported in the literature for different channel conditions characterising the lower reaches of large natural streams.
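
    The propagation mechanism is easy to see from Manning's equation for steady uniform flow: calibrating n against a discharge read off an uncertain rating curve scales n by the same relative error. The channel geometry values below are arbitrary illustrations, not River Po data.

        import numpy as np

        # Q = (1/n) * A * R^(2/3) * sqrt(S)  =>  n = A * R^(2/3) * sqrt(S) / Q
        A, R, S = 1200.0, 8.0, 1e-4       # area (m^2), hydraulic radius (m), slope

        def manning_n(Q):
            return A * R ** (2.0 / 3.0) * np.sqrt(S) / Q

        Q_true = 3000.0                    # m^3/s
        for err in (-0.15, 0.0, 0.15):     # +/- 15% rating-curve error
            print(err, manning_n(Q_true * (1.0 + err)))

    A 15% discharge error thus maps directly into a roughly 15% error in the calibrated roughness, which is the order of variability the study quantifies.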

  16. Heart rate measurement based on face video sequence

    NASA Astrophysics Data System (ADS)

    Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian

    2015-03-01

    This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects, and detected remote PPG signals through video sequences. Remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, and CSPT is used for the first time in the study of remote PPG signals in this paper. Both of the methods can acquire heart rate, but compared with BSST, CSPT has clearer physical meaning, and the computational complexity of CSPT is lower than that of BSST. Our work shows that heart rates detected by CSPT method have good consistency with the heart rates measured by a finger clip oximeter. With good accuracy and low computational complexity, the CSPT method has a good prospect for the application in the field of home medical devices and mobile health devices.
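
    A generic remote-PPG pipeline sketch for orientation: average a face-ROI color channel per frame, detrend, and take the dominant spectral peak in the physiological band as the heart rate. This illustrates the PPG principle only; the paper's BSST and CSPT analyses are more elaborate, and the signal below is synthetic.

        import numpy as np

        fs = 30.0                                  # camera frame rate (Hz)
        t = np.arange(0, 30, 1 / fs)               # 30 s of frames
        green = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t)  # synthetic 72-bpm pulse
        green += 0.005 * np.random.default_rng(0).normal(size=t.size)

        x = green - green.mean()                   # crude detrending
        freqs = np.fft.rfftfreq(x.size, 1 / fs)
        power = np.abs(np.fft.rfft(x)) ** 2
        band = (freqs > 0.7) & (freqs < 4.0)       # 42-240 bpm
        hr_bpm = 60.0 * freqs[band][np.argmax(power[band])]
        print(hr_bpm)                              # ~72 bpm

    Cross-spectral methods such as the paper's CSPT replace the single-channel spectrum with cross-spectra between channels, which suppresses noise that is uncorrelated across channels.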

  17. The Relationship Between Hospital Value-Based Purchasing Program Scores and Hospital Bond Ratings.

    PubMed

    Rangnekar, Anooja; Johnson, Tricia; Garman, Andrew; O'Neil, Patricia

    2015-01-01

    Tax-exempt hospitals and health systems often borrow long-term debt to fund capital investments. Lenders use bond ratings as a standard metric to assess whether to lend funds to a hospital. Credit rating agencies have historically relied on financial performance measures and a hospital's ability to service debt obligations to determine bond ratings. With the growth in pay-for-performance-based reimbursement models, rating agencies are expanding their hospital bond rating criteria to include hospital utilization and value-based purchasing (VBP) measures. In this study, we evaluated the relationship between the Hospital VBP domains--Clinical Process of Care, Patient Experience of Care, Outcome, and Medicare Spending per Beneficiary (MSPB)--and hospital bond ratings. Given the historical focus on financial performance, we hypothesized that hospital bond ratings are not associated with any of the Hospital VBP domains. This was a retrospective, cross-sectional study of all hospitals that were rated by Moody's for fiscal year 2012 and participated in the Centers for Medicare & Medicaid Services' VBP program as of January 2014 (N = 285). Of the 285 hospitals in the study, 15% had been assigned a bond rating of Aa, and 46% had been assigned an A rating. Using a binary logistic regression model, we found an association between MSPB only and bond ratings, after controlling for other VBP and financial performance scores; however, MSPB did not improve the overall predictive accuracy of the model. Inclusion of VBP scores in the methodology used to determine hospital bond ratings is likely to affect hospital bond ratings in the near term. PMID:26554267

  18. Modelling and control for heart rate regulation during treadmill exercise.

    PubMed

    Su, Steven W; Wang, Lu; Celler, Branko G; Savkin, Andrey V; Guo, Ying

    2006-01-01

    This paper proposes a novel integrated approach for the identification and control of Hammerstein systems to achieve the desired heart rate tracking performance for an automated treadmill system. A pseudo-random binary sequence input is employed to decouple the identification of the dynamic linear part from the static nonlinearity. The powerful ε-insensitive support vector regression is adopted to obtain sparse representations of the inverse of the static nonlinearity in order to obtain an approximated linear model of the Hammerstein system. An H∞ controller is designed for the approximated linear model to achieve robust tracking performance. This new approach is applied to the design of a computer-controlled treadmill system for the regulation of heart rate during treadmill exercise. Minimizing deviations of heart rate from a preset profile is achieved by controlling the speed of the treadmill. Both conventional proportional-integral-derivative (PID) control and the proposed approach have been employed for the controller design. The proposed algorithm achieves much better heart rate tracking performance.

  19. Effects of distribution of infection rate on epidemic models.

    PubMed

    Lachiany, Menachem; Louzoun, Yoram

    2016-08-01

    A goal of many epidemic models is to compute the outcome of the epidemics from the observed infected early dynamics. However, often, the total number of infected individuals at the end of the epidemics is much lower than predicted from the early dynamics. This discrepancy is argued to result from human intervention or nonlinear dynamics not incorporated in standard models. We show that when variability in infection rates is included in standard susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models, the total number of infected individuals in the late dynamics can be orders of magnitude lower than predicted from the early dynamics. This discrepancy holds for SIS and SIR models where the assumption that all individuals have the same sensitivity is eliminated. In contrast with network models, fixed partnerships are not assumed. We derive a moment closure scheme capturing the distribution of sensitivities. We find that the shape of the sensitivity distribution does not affect R_{0} or the number of infected individuals in the early phases of the epidemics. However, a wide distribution of sensitivities reduces the total number of removed individuals in the SIR model and the steady-state infected fraction in the SIS model. The difference between the early and late dynamics implies that in order to extrapolate the expected effect of the epidemics from the initial phase of the epidemics, the rate of change in the average infectivity should be computed. These results are supported by a comparison of the theoretical model to the Ebola epidemics and by numerical simulation. PMID:27627337
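
    A toy simulation of the reported effect: two SIR populations with the same mean infection rate (so similar early growth) but different spreads, where the heterogeneous one ends with a much smaller final size because the most susceptible individuals are removed first. This is an illustrative sketch, not the paper's moment-closure scheme.

        import numpy as np

        rng = np.random.default_rng(3)
        N, gamma, dt, steps = 20000, 0.2, 0.1, 2000

        for dist in ("constant", "wide"):
            beta = (np.full(N, 0.5) if dist == "constant"
                    else rng.exponential(0.5, N))       # same mean, wide spread
            susceptible = np.ones(N, dtype=bool)
            infected = 20.0                             # initial infected count
            for _ in range(steps):
                # susceptible j escapes infection at rate beta_j * I/N
                p_inf = 1.0 - np.exp(-beta * (infected / N) * dt)
                new = susceptible & (rng.random(N) < p_inf)
                infected += new.sum() - gamma * infected * dt
                susceptible &= ~new
            print(dist, "final attack rate:", round(1.0 - susceptible.mean(), 3))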

  20. Effects of distribution of infection rate on epidemic models

    NASA Astrophysics Data System (ADS)

    Lachiany, Menachem; Louzoun, Yoram

    2016-08-01

    A goal of many epidemic models is to compute the outcome of the epidemics from the observed infected early dynamics. However, often, the total number of infected individuals at the end of the epidemics is much lower than predicted from the early dynamics. This discrepancy is argued to result from human intervention or nonlinear dynamics not incorporated in standard models. We show that when variability in infection rates is included in standard susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models the total number of infected individuals in the late dynamics can be orders of magnitude lower than predicted from the early dynamics. This discrepancy holds for SIS and SIR models, where the assumption that all individuals have the same sensitivity is eliminated. In contrast with network models, fixed partnerships are not assumed. We derive a moment closure scheme capturing the distribution of sensitivities. We find that the shape of the sensitivity distribution does not affect R0 or the number of infected individuals in the early phases of the epidemics. However, a wide distribution of sensitivities reduces the total number of removed individuals in the SIR model and the steady-state infected fraction in the SIS model. The difference between the early and late dynamics implies that in order to extrapolate the expected effect of the epidemics from the initial phase of the epidemics, the rate of change in the average infectivity should be computed. These results are supported by a comparison of the theoretical model to the Ebola epidemics and by numerical simulation.

  1. A high-rate PCI-based telemetry processor system

    NASA Astrophysics Data System (ADS)

    Turri, R.

    2002-07-01

    The high performance reached by satellite on-board telemetry generation and transmission will consequently impose the design of ground facilities with higher processing capabilities at low cost, to allow a wide deployment of such ground stations. The equipment normally used is based on complex, proprietary bus and computing architectures that prevent the systems from exploiting the continuous and rapid increase in computing power available on the market. PCI bus systems now allow the processing of high-rate data streams in a standard PC system. At the same time, the Windows NT operating system supports multitasking and symmetric multiprocessing, giving the capability to process high-data-rate signals. In addition, high-speed networking, 64-bit PCI-bus technologies and the increase in processor power and software allow creating a system based on COTS products (which in the future may be easily and inexpensively upgraded). In the frame of the EUCLID RTP 9.8 project, a specific work element was dedicated to developing the architecture of a system able to acquire telemetry data at up to 600 Mbps. Laben S.p.A. - a Finmeccanica Company - entrusted with this work, has designed a PCI-based telemetry system making possible the communication between a satellite down-link and a wide area network at the required rate.

  2. Nonlinear modeling on rate dependent ferroelectric and ferroelastic response of 1-3 piezocomposites

    NASA Astrophysics Data System (ADS)

    Jayendiran, R.; Arockiarajan, A.

    2016-06-01

    The effect of loading rate on the ferroelectric and ferroelastic behavior of 1-3 piezocomposites is presented in this work. Experiments are conducted at various loading rates under different loading conditions, such as electrical and electromechanical, to measure the rate-dependent response of 1-3 piezocomposites compared with bulk piezoceramics. A thermodynamics-based rate-dependent domain switching criterion is proposed to predict the ferroelectric and ferroelastic behavior of homogenized 1-3 piezocomposites. In this model, the volume fractions of six distinct uni-axial variants are used as internal variables to describe the microscopic state of the material. A plasticity-based kinematic hardening parameter is introduced as a function of the internal variables to describe grain boundary effects. Homogenized 1-3 piezocomposite material properties are obtained from finite element (FE) resonator models using the commercially available FE tool Abaqus. To evaluate the possible vibration modes of the 1-3 piezocomposite, four different configurations of FE resonators are modeled. The FE resonator model is validated with the impedance spectra obtained experimentally for the length extensional and thickness extensional resonator models. The predicted effective properties using the resonance-based technique are incorporated in the proposed rate-dependent macromechanical model to study the behavior of 1-3 piezocomposites. The simulated results are compared with the experimental observations.

  3. Can we properly model the neutron monitor count rate?

    NASA Astrophysics Data System (ADS)

    Gil, Agnieszka; Usoskin, Ilya G.; Kovaltsov, Gennady A.; Mishev, Alexander L.; Corti, Claudio; Bindi, Veronica

    2015-09-01

    Neutron monitors provide continuous measurements of secondary nucleonic particles produced in the atmosphere by primary cosmic rays and form the main tool to study the heliospheric modulation of cosmic rays. In order to study cosmic rays using the world network of neutron monitors, one needs to be able to model the neutron monitor count rate. Earlier this was difficult because of the poorly known yield function, which has been essentially revisited recently. We present a verification of the new yield function of the standard neutron monitor (NM) using recently released data on direct in situ measurements of the galactic cosmic ray energy spectrum during 2006-2009 (the period of record-high cosmic ray flux) by the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) spaceborne spectrometer, and on NM latitude surveys performed during the period 1994-2007, including periods of high solar activity. We found very good agreement between the measured count rates of sea-level NMs and the modeled ones in very different conditions: from low to high solar activity and from polar to tropical regions. This implies that the count rate of a sea-level neutron monitor can be properly modeled in all conditions, using the new yield function.

  4. Model-based software design

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael

    1992-01-01

    Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.

  5. Model-Based Reasoning

    ERIC Educational Resources Information Center

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  6. A cloud model-radiative model combination for determining microwave TB-rain rate relations

    NASA Technical Reports Server (NTRS)

    Szejwach, Gerard; Adler, Robert F.; Jobard, Esabelle; Mack, Robert A.

    1986-01-01

    The development of a cloud model-radiative transfer model combination for computing average brightness temperature, T(B), is discussed. The cloud model and radiative transfer model used in this study are described. The relations between rain rate, cloud and rain water, cloud and precipitation ice, and upwelling radiance are investigated. The effects of the rain rate relations on T(B) under different climatological conditions are examined. The model-derived T(B) results are compared to the 92 and 183 GHz aircraft observations of Hakkarinen and Adler (1984, 1986) and the radar-estimated rain rate of Hakkarinen and Adler (1986); good correlation between the data is detected.

  7. A model based on absorbance data on the growth rate of Listeria monocytogenes and including the effects of pH, NaCl, Na-lactate and Na-acetate.

    PubMed

    Nerbrink, E; Borch, E; Blom, H; Nesbakken, T

    1999-03-01

    A mathematical model was developed for predicting the growth of L. monocytogenes at 9 degrees C in the presence of 70 ppm sodium nitrite, and at different levels of pH (5.5-6.5), sodium chloride (1.0-4.0%), sodium lactate (0-0.5%) and sodium acetate (0-0.6%). Growth data were collected using absorbance measurements in broth cultures, and the absorbance method was evaluated. The model was compared to the Food MicroModel, and validated against the growth of L. monocytogenes in a vacuum-packed meat product stored at 9 degrees C. A linear relationship was obtained for the absorbance data on different dilutions of the inoculum in the absorbance interval studied. There was also a linear relationship between the maximum specific growth rates derived from absorbance and those derived from viable count measurements, and corrections were made accordingly. The statistical evaluation showed that all the main factors, i.e. pH, sodium chloride, sodium lactate and sodium acetate, were statistically significant for the growth rate of L. monocytogenes. Comparison to the Food MicroModel (FMM) showed a slight underprediction by the developed model (bias = 0.84). The predictions were, on average, within 20% of the FMM predictions (n = 10). Validation against the observed growth of L. monocytogenes inoculated into an emulsion-type sausage (n = 4) also showed a slight underprediction by the model. The predictions were, on average, 16% below the observed values in the sausage (bias 0.84, accuracy 1.26).
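
    The bias and accuracy factors quoted above are, under the standard definitions commonly used in predictive microbiology (an assumption here, since the abstract does not spell them out),

        B_f = 10^{\frac{1}{n}\sum \log_{10}(\mu_{pred}/\mu_{obs})}, \qquad A_f = 10^{\frac{1}{n}\sum \left|\log_{10}(\mu_{pred}/\mu_{obs})\right|}

    so a bias factor of 0.84 means the predicted growth rates fall, on geometric average, about 16% below the observed ones, consistent with the validation result reported.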

  8. Rate Control Management of Atrial Fibrillation: May a Mathematical Model Suggest an Ideal Heart Rate?

    PubMed Central

    Anselmino, Matteo; Scarsoglio, Stefania; Camporeale, Carlo; Saglietto, Andrea; Gaita, Fiorenzo; Ridolfi, Luca

    2015-01-01

    Background Despite the routine prescription of rate control therapy for atrial fibrillation (AF), clinical evidence demonstrating a heart rate target is lacking. The aim of the present study was to run a mathematical model simulating AF episodes at different heart rates (HR) to predict the hemodynamic parameters for each situation. Methods The lumped model, representing the pumping heart together with the systemic and pulmonary circuits, was run to simulate AF at HRs of 50, 70, 90, 110 and 130 bpm. Results Left ventricular pressure increased by 57%, from 33.92±37.56 mmHg to 53.15±47.56 mmHg, and mean systemic arterial pressure increased by 27%, from 82.66±14.04 mmHg to 105.3±7.6 mmHg, between the 50 and 130 bpm simulations. Stroke volume (from 77.45±8.50 to 39.09±8.08 mL), ejection fraction (from 61.10±4.40 to 39.32±5.42%) and stroke work (SW, from 0.88±0.04 to 0.58±0.09 J) decreased by 50, 36 and 34%, respectively, between the 50 and 130 bpm simulations. In addition, oxygen consumption indexes (rate pressure product, RPP; tension time index per minute, TTI/min; and pressure volume area per minute, PVA/min) increased from the 50 to the 130 bpm simulation by 186% (from 5598±1939 to 15995±3219 mmHg/min), 56% (from 2094±265 to 3257±301 mmHg s/min) and 102% (from 57.99±17.90 to 117.4±26.0 J/min), respectively. Accordingly, left ventricular efficiency (SW/PVA) decreased from 80.91±2.91% at 50 bpm to 66.43±3.72% at 130 bpm. Conclusion Pending direct clinical evidence, the present mathematical model suggests that lower HRs during permanent AF relate to improved hemodynamic parameters, higher cardiac efficiency, and lower oxygen consumption. PMID:25764321
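
    As a quick arithmetic check of the reported relative changes between the 50 and 130 bpm simulations:

        \Delta \mathrm{LVP} = \frac{53.15 - 33.92}{33.92} \approx 0.567 \; (57\%), \qquad \Delta \mathrm{SV} = \frac{39.09 - 77.45}{77.45} \approx -0.495 \; (-50\%)

    which matches the percentages quoted in the abstract.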

  9. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2015-04-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase of regional radionuclide activity concentrations resulting from nuclear accidents is subject to a multiplicity of uncertainties. One of the most significant is the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant. Precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can instead be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends, amongst other things, on the availability, reliability and spatio-temporal resolution of the observations used. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of the available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and with higher temporal resolution, and therefore provide a wider basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach, which combines an atmospheric dispersion model with observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of the radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides.
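
    The core inversion step can be illustrated with a minimal sketch: given a source-receptor sensitivity matrix M (e.g., precomputed from forward FLEXPART runs) and a vector y of gamma dose rate and activity concentration observations, the release rates x follow from a regularised least-squares fit. Everything below (function names, regularisation choice, toy numbers) is an assumption for illustration, not the authors' code.

        import numpy as np

        def invert_source_term(M, y, lam=1e-2):
            """Solve min ||M x - y||^2 + lam^2 ||x||^2 for release rates x >= 0."""
            n = M.shape[1]
            A = np.vstack([M, lam * np.eye(n)])   # augment system with the regulariser
            b = np.concatenate([y, np.zeros(n)])
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return np.clip(x, 0.0, None)          # crude non-negativity constraint

        # toy usage: 3 observations constraining 2 release intervals
        M = np.array([[0.80, 0.10], [0.30, 0.60], [0.05, 0.90]])
        y = np.array([4.1, 3.9, 4.6])
        print(invert_source_term(M, y))

    Real systems add observation-error weighting and a prior source term; the sketch keeps only the algebraic core.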

  10. Principles of models based engineering

    SciTech Connect

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  11. Caesarean Delivery Rate Review: An Evidence-Based Analysis

    PubMed Central

    Degani, N; Sikich, N

    2015-01-01

    Background In 2007, caesarean deliveries comprised 28% of all hospital deliveries in Ontario. Provincial caesarean delivery rates increased with maternal age and varied by Local Health Integration Network. However, the accepted rate of caesarean delivery in a low-risk maternal population remains unclear. Objectives To review the literature to assess factors that affect the likelihood of experiencing a caesarean delivery, and to examine Ontario caesarean delivery rates to determine whether there is rate variation across the province. Data Sources Data sources included publications from OVID MEDLINE, OVID MEDLINE In-Process and Other Non-Indexed Citations, OVID Embase, EBSCO Cumulative Index to Nursing & Allied Health Literature (CINAHL), and EBM Reviews, as well as data from the Canadian Institute for Health Information Discharge Abstracts Database and the Better Outcomes and Registry Network. Review Methods A mixed-methods approach was used, which included a systematic review of the literature to delineate factors associated with the likelihood of caesarean delivery and an analysis of administrative and clinical data on hospital deliveries in Ontario to determine provincial caesarean delivery rates, variation in rates, and reasons for variation. Results Fourteen systematic reviews assessed 14 factors affecting the likelihood of caesarean delivery; 7 factors were associated with an increased likelihood of caesarean delivery, and 2 factors were associated with a decreased likelihood. Five factors had no influence. One factor provided moderate-quality evidence supporting elective induction policies in low-risk women. The overall Ontario caesarean delivery rate in a very-low-risk population was 17%, but varied significantly across Ontario hospitals. Limitations The literature review included a 5-year period and used only systematic reviews. The determination of Robson class for women is based on care received in hospital only, and the low-risk population may have

  12. Ocean Color Based Estimates of Global Photochemical Rate Processes

    NASA Astrophysics Data System (ADS)

    Nelson, N. B.; Siegel, D. A.; Toole, D. A.

    2005-12-01

    The development and validation of new ocean color data products beyond chlorophyll allows for the assessment of biogeochemically relevant rate processes other than primary production, such as CO production and DMS photolysis. We present here a proof-of-concept study in which we integrate multiple global remote sensing data streams to estimate the solar irradiance absorbed by chromophoric dissolved organic matter (CDOM) in the euphotic zone. This quantity can be convolved with apparent quantum yield spectra to estimate photochemical reaction rates. In this study we use ocean color reflectance spectra from SeaWiFS and/or MODIS to estimate in-water light absorption and backscattering spectra using the Garver-Siegel-Maritorena ocean color model. These quantities were used to empirically estimate the diffuse attenuation coefficient spectrum (Kd) for surface waters, and thus depth profiles of light penetration. UV Irradiance spectra at the surface were estimated using TOMS data. We also estimated the scalar to vector irradiance ratio using information from radiative transfer modeling in conjunction with absorption and backscattering coefficient spectra. These quantities were combined to estimate the spectrum of light absorption by CDOM, or photochemically active radiation. Finally, we combined the photochemically active radiation spectra with open ocean estimates of apparent quantum yield to produce maps of photochemical production of CO. Global maps of time integrated production rates closely resemble similar maps of CDOM distribution, indicating a proximal control of photochemistry by CDOM.
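
    In generic form (assumed notation, consistent with standard marine photochemistry rather than quoted from the abstract), the depth-integrated photochemical production rate of CO combines the quantities listed above as

        P_{CO} = \int_0^{z_{eu}} \int_{\lambda_1}^{\lambda_2} \Phi_{CO}(\lambda) \, a_{CDOM}(\lambda) \, \bar{E}_0(\lambda, z) \, d\lambda \, dz

    where \bar{E}_0 is the scalar irradiance at depth z (derived from the surface UV irradiance and the diffuse attenuation coefficient K_d), a_{CDOM} is the CDOM absorption coefficient from the ocean color model, and \Phi_{CO} is the apparent quantum yield.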

  13. Modeling the Growth Rates of Tetragonal Lysozyme Crystal Faces

    NASA Technical Reports Server (NTRS)

    Li, Meirong; Nadarajah, Arunan; Pusey, Marc L.

    1998-01-01

    with respect to its concentration at saturation in order to apply growth rate models to this process. The measured growth rates were then compared with the predicted ones from several dislocation and 2D nucleation growth models, employing tetramer and octamer growth units in polydisperse solutions and monomer units in monodisperse solutions. For the (110) face, the calculations consistently showed that the measured growth rates followed the expected model relations with octamer growth units. For the (101) face, it was not possible to obtain clear agreement between the predicted and measured growth rates for a single growth unit as was done for the (110) face. However, the calculations do indicate that the average size of the growth unit is between a tetramer and an octamer. This suggests that tetramers, octamers and other intermediate-size growth units all participate in the growth process for this face. These calculations show that it is possible to model macroscopic protein crystal growth rates if the molecular-level processes can be accounted for, particularly protein aggregation processes in the bulk solution. Our recent investigations of tetragonal lysozyme crystals employing high-resolution atomic force microscopy scans have further confirmed the growth of these crystals by aggregate growth units corresponding to 4(sub 3) helices.

  14. Dose-rate and irradiation temperature dependence of BJT SPICE model rad-parameters

    SciTech Connect

    Montagner, X.; Briand, R.; Fouillat, P.; Touboul, A.; Schrimpf, R.D.; Galloway, K.F.; Calvet, M.C.; Calvel, P.

    1998-06-01

    A method to predict low dose rate degradation of bipolar transistors using high dose-rate, high temperature irradiation is evaluated, based on an analysis of four new rad-parameters that are introduced in the BJT SPICE model. This improved BJT model describes the radiation-induced excess base current with great accuracy. The low-level values of the rad-parameters are good tools for evaluating the proposed high-temperature test method because of their high sensitivity to radiation-induced degradation.

  15. Modelling airborne concentration and deposition rate of maize pollen

    NASA Astrophysics Data System (ADS)

    Jarosz, Nathalie; Loubet, Benjamin; Huber, Laurent

    2004-10-01

    The introduction of genetically modified (GM) crops has reinforced the need to quantify gene flow from crop to crop. This requires predictive tools which take into account meteorological conditions, canopy structure as well as pollen aerodynamic characteristics. A Lagrangian Stochastic (LS) model, called SMOP-2D (Stochastic Mechanistic model for Pollen dispersion and deposition in 2 Dimensions), is presented. It simulates wind dispersion of pollen by calculating individual pollen trajectories from their emission to their deposition. SMOP-2D was validated using two field experiments where airborne concentration and deposition rate of pollen were measured within and downwind from different sized maize (Zea mays) plots together with micrometeorological measurements. SMOP-2D correctly simulated the shapes of the concentration profiles but generally underestimated the deposition rates in the first 10 m downwind from the source. Potential explanations of this discrepancy are discussed. Incorrect parameterisation of turbulence in the transition from the crop to the surroundings is probably the most likely reason. This demonstrates that LS models for particle transfer need to be coupled with air-flow models under complex terrain conditions.

  16. Assessment on the rates and potentials of soil organic carbon sequestration in agricultural lands in Japan using a process-based model and spatially explicit land-use change inventories - Part 2: Future potentials

    NASA Astrophysics Data System (ADS)

    Yagasaki, Y.; Shirato, Y.

    2014-08-01

    Future potentials of the sequestration of soil organic carbon (SOC) in agricultural lands in Japan were estimated using a simulation system we recently developed to simulate SOC stock change at country scale under varying land-use change, climate, soil, and agricultural practices, in a spatially explicit manner. The simulation was run from 1970 to 2006 with historical inventories, and subsequently to 2020 with future scenarios of agricultural activity comprising various agricultural policy targets advocated by the Japanese government. Furthermore, the simulation was continued until 2100 with no temporal changes in land use and agricultural activity, to investigate the duration and course of SOC stock change at country scale. A scenario with an increased rate of organic carbon input to agricultural fields through intensified crop rotation, combined with suppression of the conversion of agricultural lands to other land-use types, was found to yield a greater reduction of CO2 emissions through enhanced soil carbon sequestration, but only under circumstances in which the converted agricultural lands become settlements, which were considered to have a relatively low rate of organic carbon input. The relative reduction of CO2 emissions in this scenario was comparable to that in a contrasting business-as-usual scenario of agricultural activity, in which a relatively low rate of organic matter input to agricultural fields was assumed in combination with an increased rate of conversion of agricultural fields to unmanaged grasslands through abandonment. Our simulation experiment clearly demonstrated that net-net-based accounting of SOC stock change, defined as the difference between the emissions and removals during the commitment period and those during a previous period (the base year or base period of the Kyoto Protocol), can be largely influenced by variations in future climate. Whereas baseline-based accounting, defined

  17. Video-rate volumetric optical coherence tomography-based microangiography

    NASA Astrophysics Data System (ADS)

    Baran, Utku; Wei, Wei; Xu, Jingjiang; Qi, Xiaoli; Davis, Wyatt O.; Wang, Ruikang K.

    2016-04-01

    Video-rate volumetric optical coherence tomography (vOCT) is relatively young in the field of OCT imaging but has great potential in biomedical applications. Due to the recent development of the MHz range swept laser sources, vOCT has started to gain attention in the community. Here, we report the first in vivo video-rate volumetric OCT-based microangiography (vOMAG) system by integrating an 18-kHz resonant microelectromechanical system (MEMS) mirror with a 1.6-MHz FDML swept source operating at ˜1.3 μm wavelength. Because the MEMS scanner can offer an effective B-frame rate of 36 kHz, we are able to engineer vOMAG with a video rate up to 25 Hz. This system was utilized for real-time volumetric in vivo visualization of cerebral microvasculature in mice. Moreover, we monitored the blood perfusion dynamics during stimulation within mouse ear in vivo. We also discussed this system's limitations. Prospective MEMS-enabled OCT probes with a real-time volumetric functional imaging capability can have a significant impact on endoscopic imaging and image-guided surgery applications.

  19. Cardiac rate detection method based on the beam splitter prism

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Liu, Xiaohua; Liu, Ming; Zhao, Yuejin; Dong, Liquan; Zhao, Ruirui; Jin, Xiaoli; Zhao, Jingsheng

    2013-09-01

    A new cardiac rate measurement method is proposed. Through a beam splitter prism, a common-path optical system for transmitting and receiving signals is achieved. The focusing effect of the lens suppresses small-amplitude motion artifacts and improves the signal-to-noise ratio. The cardiac rate is obtained by PhotoPlethysmoGraphy (PPG). We use an LED as the light source and a photoelectric diode as the receiving tube. The LED and the photoelectric diode sit on different sides of the beam splitter prism and form the optical system. The signal processing and display unit is composed of the signal processing circuit, a data acquisition device and a computer. The light emitted by the modulated LED is collimated by a lens and irradiates the measurement target through the beam splitter prism. The light reflected by the target is focused onto the receiving tube through the beam splitter prism and another lens. The signal received by the photoelectric diode is processed by the analog circuit and captured by the data acquisition device. Through filtering and a Fast Fourier Transform, the cardiac rate is obtained, and the real-time cardiac rate is derived by the moving average method. We tested 30 volunteers of different genders and ages, and compared the signals captured by this method to a conventional PPG signal captured concurrently from a finger. The results agree well, with a maximum deviation of about 2 bpm.
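
    A minimal sketch of the filter-plus-FFT step described above, assuming a digitised PPG trace (illustrative Python, not the authors' processing chain; the paper additionally applies a moving average to obtain the real-time rate):

        import numpy as np
        from scipy.signal import butter, filtfilt

        def heart_rate_bpm(ppg, fs):
            """Estimate heart rate from a PPG trace sampled at fs Hz."""
            # band-pass 0.7-3.5 Hz, i.e. roughly 42-210 bpm
            b, a = butter(3, [0.7 / (fs / 2), 3.5 / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, ppg)
            spectrum = np.abs(np.fft.rfft(filtered))
            freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
            return 60.0 * freqs[np.argmax(spectrum)]  # dominant peak -> bpm

        # toy usage: synthetic 75 bpm pulse sampled at 100 Hz for 30 s
        fs = 100
        t = np.arange(0, 30, 1.0 / fs)
        ppg = np.sin(2 * np.pi * 1.25 * t) + 0.1 * np.random.randn(t.size)
        print(heart_rate_bpm(ppg, fs))  # ~75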

  20. LS-DYNA Implementation of Polymer Matrix Composite Model Under High Strain Rate Impact

    NASA Technical Reports Server (NTRS)

    Zheng, Xia-Hua; Goldberg, Robert K.; Binienda, Wieslaw K.; Roberts, Gary D.

    2003-01-01

    A recently developed constitutive model is implemented into LS-DYNA as a user defined material model (UMAT) to characterize the nonlinear strain rate dependent behavior of polymers. By utilizing this model within a micromechanics technique based on a laminate analogy, an algorithm to analyze the strain rate dependent, nonlinear deformation of a fiber reinforced polymer matrix composite is then developed as a UMAT to simulate the response of these composites under high strain rate impact. The models are designed for shell elements in order to ensure computational efficiency. Experimental and numerical stress-strain curves are compared for two representative polymers and a representative polymer matrix composite, with the analytical model predicting the experimental response reasonably well.

  1. A count rate based contamination control standard for electron accelerators

    SciTech Connect

    May, R.T.; Schwahn, S.O.

    1996-12-31

    Accelerators of sufficient energy and particle fluence can produce radioactivity as an unwanted byproduct. The radioactivity is typically embedded in structural materials but may also be removable from surfaces. Many of these radionuclides decay by positron emission or electron capture; they often have long half-lives and produce photons of low energy and yield, making detection by standard devices difficult. The contamination control limit used throughout the US nuclear industry and the Department of Energy is 1,000 disintegrations per minute. This limit is based on the detection threshold of pancake-type Geiger-Mueller probes for radionuclides of relatively high radiotoxicity, such as cobalt-60. Several radionuclides of concern at a high energy electron accelerator are compared in terms of radiotoxicity with radionuclides commonly found in the nuclear industry. Based on this comparison, a count-rate-based contamination control limit and associated measurement strategy are proposed which provide adequate detection of contamination at accelerators without an increase in risk.

  2. A model of northern pintail productivity and population growth rate

    USGS Publications Warehouse

    Flint, P.L.; Grand, J.B.; Rockwell, R.F.

    1998-01-01

    Our objective was to synthesize individual components of reproductive ecology into a single estimate of productivity and to assess the relative effects of survival and productivity on population dynamics. We used information on nesting ecology, renesting potential, and duckling survival of northern pintails (Anas acuta) collected on the Yukon-Kuskokwim Delta (Y-K Delta), Alaska, 1991-95, to model the number of ducklings produced under a range of nest success and duckling survival probabilities. Using average values of 25% nest success, 11% duckling survival, and 56% renesting probability from our study population, we calculated that all young in our population were produced by 13% of the breeding females, and that early-nesting females produced more young than later-nesting females. Further, we calculated that, on average, each female produced only 0.16 young females/nesting season. We combined these results with estimates of first-year and adult survival to examine the growth rate (λ) of the population and the relative contributions of these demographic parameters to that growth rate. Contrary to aerial survey data, the population projection model suggests our study population is declining rapidly (λ = 0.6969). The relative effects on population growth rate were 0.1175 for reproductive success, 0.1175 for first-year survival, and 0.8825 for adult survival. Adult survival had the greatest influence on λ for our population, and this conclusion was robust over a range of survival and productivity estimates. Given published estimates of annual survival for adult females (61%), our model suggested nest success and duckling survival would need to increase to approximately 40% to achieve population stability. We discuss reasons for the apparent discrepancy in population trends between our model and aerial surveys in terms of bias in productivity and survival estimates.
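
    As a rough sanity check on the reported numbers, consider a female-only model in which females first breed as yearlings (as pintails do), so that λ = S_a + m·S_j, adult survival plus productivity times first-year survival. This simplification is an assumption, not necessarily the authors' projection model, but it is consistent with the reported values:

        # Hypothetical simplification (not necessarily the authors' model):
        # female-only, breeding at age 1, so lambda = Sa + m * Sj
        Sa, m = 0.61, 0.16            # adult survival, young females per female (abstract)
        lam_reported = 0.6969
        Sj = (lam_reported - Sa) / m  # first-year survival implied by this reading
        lam = Sa + m * Sj
        print(f"lambda = {lam:.4f}, implied first-year survival = {Sj:.3f}")  # ~0.543
        # elasticities: e(Sa) = Sa/lam; e(m) = e(Sj) = m*Sj/lam
        print(f"e(Sa) = {Sa/lam:.3f}, e(m) = e(Sj) = {m*Sj/lam:.3f}")  # ~0.875, ~0.125

    Under this reading the elasticities come out near 0.875 for adult survival and 0.125 each for productivity and first-year survival, close to the relative effects of 0.8825 and 0.1175 reported in the abstract.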

  3. High strain rate modeling of ceramics and ceramic composites

    NASA Astrophysics Data System (ADS)

    Espinosa, H. D.

    1994-07-01

    The high strain rate response of an AlN/AlN/Al composite manufactured by Lanxide Armor Products, Inc., has been studied through normal and pressure-shear plate impact experiments. Differences in the measured shear resistance, as a function of the impact configuration, motivated the examination of the material response by means of a microcracking multiple-plane model and a continuum elasto-viscoplastic constitutive model. Numerical simulations of the normal impact experiments do not support microcracking as the dominant inelastic mechanism. By contrast, an elasto-viscoplastic description of the material behavior predicts the main features of the normal stress history and the thickness dependence of the Hugoniot elastic limit. Nonetheless, the elasto-viscoplastic model cannot reproduce both the normal and pressure-shear experiments with a single set of model parameters. The inadequacy of the continuum elasto-viscoplastic model seems to result from the isotropic assumption embodied in its formulation. The shear resistance measured in the pressure-shear experiments is adequately predicted by a microcracking multiple-plane model. The agreement seems to hinge on the continuous shearing of the material in a micro-localized fashion, i.e., only one orientation becomes dominant and controls the inelastic shear deformation rate. This event does not occur in the normal impact configuration, in which the amount of inelasticity is primarily controlled by the elastic compressibility of the material. These findings explain the higher sensitivity to damage and microplasticity observed in the pressure-shear configuration, as well as the softer material response recorded in this configuration.

  4. [Design of Oxygen Saturation, Heart Rate, Respiration Rate Detection System Based on Smartphone of Android Operating System].

    PubMed

    Zhu, Mingshan; Zeng, Bixin

    2015-03-01

    In this paper, we designed an oxygen saturation, heart rate and respiration rate monitoring system based on a smartphone running the Android operating system, with physiological signals acquired by an MSP430 microcontroller and transmitted via a Bluetooth module. PMID:26524782

  6. Doppler-Based Flow Rate Sensing in Microfluidic Channels

    PubMed Central

    Stern, Liron; Bakal, Avraham; Tzur, Mor; Veinguer, Maya; Mazurski, Noa; Cohen, Nadav; Levy, Uriel

    2014-01-01

    We design, fabricate and experimentally demonstrate a novel generic method to detect flow rates and precise changes of flow velocity in microfluidic devices. Using our method we can measure flow rates of ∼2 mm/s with a resolution of 0.08 mm/s. The operating principle is based on the Doppler shifting of light diffracted from a self-generated periodic array of bubbles within the channel, with self-heterodyne detection used to analyze the diffracted light. As such, the device is appealing for a variety of "lab on chip" bio-applications where a simple and accurate speed measurement is needed, e.g., for flow cytometry and cell sorting. PMID:25211195

  7. Selecting and applying cesium-137 conversion models to estimate soil erosion rates in cultivated fields.

    PubMed

    Li, Sheng; Lobb, David A; Tiessen, Kevin H D; McConkey, Brian G

    2010-01-01

    The fallout radionuclide cesium-137 ((137)Cs) has been successfully used in soil erosion studies worldwide. However, discrepancies often exist between the erosion rates estimated using various conversion models. As a result, there is often confusion in the use of the various models and in the interpretation of the data. Therefore, the objective of this study was to test the structural and parametrical uncertainties associated with four conversion models typically used in cultivated agricultural landscapes. For the structural uncertainties, the Soil Constituent Redistribution by Erosion Model (SCREM) was developed and used to simulate the redistribution of fallout (137)Cs due to tillage and water erosion along a simple two-dimensional (horizontal and vertical) transect. The SCREM-predicted (137)Cs inventories were then imported into the conversion models to estimate the erosion rates. The structural uncertainties of the conversion models were assessed based on the comparisons between the conversion-model-estimated erosion rates and the erosion rates determined or used in the SCREM. For the parametrical uncertainties, test runs were conducted by varying the values of the parameters used in the model, and the parametrical uncertainties were assessed based on the responsive changes of the estimated erosion rates. Our results suggest that: (i) the performance/accuracy of the conversion models was largely dependent on the relative contributions of water vs. tillage erosion; and (ii) the estimated erosion rates were highly sensitive to the input values of the reference (137)Cs level, particle size correction factors and tillage depth. Guidelines were proposed to aid researchers in selecting and applying the conversion models under various situations common to agricultural landscapes.
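
    For orientation, the simplest conversion model in common use for cultivated soils is the proportional model, which assumes the fractional loss of (137)Cs from the plough layer equals the fractional soil loss. A standard form is sketched below (exact conventions vary between authors, and the parameter values are illustrative only, not taken from this study):

        # Proportional conversion model (standard form; example values only)
        def proportional_erosion_rate(B, d, X, T, P=1.0):
            """Mean annual soil loss Y (t ha^-1 yr^-1).
            B: bulk density (kg m^-3), d: plough depth (m),
            X: % reduction in 137Cs inventory vs. the reference site,
            T: years since onset of fallout, P: particle size correction factor."""
            return 10.0 * B * d * X / (100.0 * T * P)

        print(proportional_erosion_rate(B=1300, d=0.2, X=25, T=45))  # ~14.4 t/ha/yr

    The study's point is precisely that such models embed structural assumptions (e.g., ignoring tillage mixing), so their estimates must be interpreted against the relative contributions of water and tillage erosion.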

  8. SFT based cosmological models

    NASA Astrophysics Data System (ADS)

    Koshelev, Alexey S.

    2010-11-01

    We consider the appearance of multiple scalar fields in SFT-inspired non-local models with a single scalar field at late times. In this regime all the scalar fields are free. This system, minimally coupled to gravity, is the main subject of this note. We construct one exact solution to the equations of motion, and we consider an exactly solvable model which admits a simple exact solution of the Friedmann equations in the cosmological context and reproduces the behavior expected from SFT in the asymptotic regime.

  9. Vertical distribution, migration rates, and model comparison of actinium in a semi-arid environment.

    PubMed

    McClellan, Y; August, R A; Gosz, J R; Gann, S; Parmenter, R R; Windsor, M

    2006-01-01

    Vertical soil characterization and migration of radionuclides were investigated at four radioactively contaminated sites on Kirtland Air Force Base (KAFB), New Mexico to determine the vertical downward migration of radionuclides in a semi-arid environment. The surface soils (0-15 cm) were intentionally contaminated with Brazilian sludge (containing (232)Thorium and other radionuclides) approximately 40 years ago, in order to simulate the conditions resulting from a nuclear weapons accident. Site grading consisted of manually raking or machine disking the sludge. The majority of the radioactivity was found in the top 15 cm of soil, with retention ranging from 69 to 88%. Two models, a compartment diffusion model and leach rate model, were evaluated to determine their capabilities and limitations in predicting radionuclide behavior. The migration rates of actinium were calculated with the diffusion compartment and the leach rate models for all sites, and ranged from 0.009 to 0.1 cm/yr increasing with depth. The migration rates calculated with the leach rate models were similar to those using the diffusion compartment model and did not increase with depth (0.045-0.076, 0.0 cm/yr). The research found that the physical and chemical properties governing transport processes of water and solutes in soil provide a valid radionuclide transport model. The evaluation also showed that the physical model has fewer limitations and may be more applicable to this environment.

  11. A regional approach for modeling cliff retreat rate: The Makhteshim Country, Israel

    NASA Astrophysics Data System (ADS)

    Finzi, Yaron; Harlev, Noam

    2016-10-01

    Cliff retreat rate significantly affects the evolution of landforms and cliff stability. Cliff retreat studies also provide intriguing clues regarding past geomorphic conditions and environmental changes. We present a model to calculate cliff retreat rate based on spatial data of cliff structure and morphology. The model is applied to numerous cliffs in the arid Makhteshim Country, Israel, and the results are calibrated using published rates for two local cliffs. The calculated retreat rates confirm previous assertions that the crater cliffs are receding very slowly, but reveal that the rates vary significantly along the cliffs (1-18 cm ky^-1). The results also provide the first estimates of retreat rates for other major cliffs in the region, including fast retreat at the Sede Zin cliff (300-600 cm ky^-1). The proposed model provides a robust analysis that accounts for local cliff-talus morphology and yields rate estimates representative of current conditions rather than a long-term (geologic) average. The results constitute important new insights into regional geomorphic processes and into the stability of specific cliff sections within the study area.

  12. Infrared imaging based hyperventilation monitoring through respiration rate estimation

    NASA Astrophysics Data System (ADS)

    Basu, Anushree; Routray, Aurobinda; Mukherjee, Rashmi; Shit, Suprosanna

    2016-07-01

    A change in skin temperature can be an indicator of physical illness and can be detected through infrared thermography. Thermograms, or thermal images, can be used as an effective diagnostic tool for monitoring and diagnosis of various diseases. This paper describes an infrared thermography based approach for detecting hyperventilation caused by stress and anxiety in human beings by computing their respiration rates. The work employs computer vision techniques for tracking the region of interest in thermal video to compute the breath rate. Experiments have been performed on 30 subjects. Corner features are extracted using the Minimum Eigenvalue (Shi-Tomasi) algorithm and registered using the Kanade-Lucas-Tomasi algorithm. The thermal signature around the extracted region is detected and subsequently passed through a band-pass filter to compute the respiration profile of an individual. If the respiration profile shows an unusual pattern and exceeds the threshold, we conclude that the person is stressed and tending to hyperventilate. The results obtained are compared with standard contact-based methods and show significant correlations. It is envisaged that the thermal image based approach will not only help in detecting hyperventilation but can also assist in regular stress monitoring, as it is a non-invasive method.

  13. Modeling Void Nucleation in Metals at High Strain-rates

    NASA Astrophysics Data System (ADS)

    Belak, J.; Bales, G. S.; Glosli, J.

    1997-08-01

    Isotropic tension is simulated in nanoscale polycrystalline metals using large-scale molecular dynamics. The nanocrystalline metal is fabricated on the computer by growing randomly oriented grains from random positions or lattice positions in the simulation cell. Constant-volume strain rates of 10^7 - 10^9 s^-1 are considered for systems ranging from 10^5 - 10^7 atoms using an EAM interatomic potential. The spacing between voids for room-temperature simulations is found to scale approximately as l ~ 0.005 C_s / \dot{\varepsilon}, where C_s is the sound speed and \dot{\varepsilon} is the strain rate. The growth of small voids is simulated by cutting a void out of the simulation cell and repeating the isotropic expansion. Results are presented for several microstructures and void sizes and compared to macroscopic models.
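
    Plugging illustrative numbers into the reported scaling: taking a nominal sound speed C_s ≈ 4 × 10^3 m/s (an assumed, typical value for many metals, not taken from the abstract) and a strain rate \dot{\varepsilon} = 10^8 s^{-1},

        l \approx 0.005 \, C_s / \dot{\varepsilon} = 0.005 \times 4 \times 10^3 / 10^8 \ \mathrm{m} = 2 \times 10^{-7} \ \mathrm{m} = 200 \ \mathrm{nm},

    i.e., the predicted void spacing is on the hundred-nanometre scale at these extreme rates, consistent with the nanoscale systems simulated.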

  14. The stochastic string model as a unifying theory of the term structure of interest rates

    NASA Astrophysics Data System (ADS)

    Bueno-Guerrero, Alberto; Moreno, Manuel; Navas, Javier F.

    2016-11-01

    We present the stochastic string model of Santa-Clara and Sornette (2001), as reformulated by Bueno-Guerrero et al. (2015), as a unifying theory of the continuous-time modeling of the term structure of interest rates. We provide several new results, such as: (a) an orthogonality condition for the volatilities in the Heath, Jarrow, and Morton (1992) (HJM) model, (b) the interpretation of multi-factor HJM models as approximations to a full infinite-dimensional model, (c) a result of consistency based on Hilbert spaces, and (d) a theorem for option valuation.
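
    For readers outside the field: in the (one-factor, for brevity) HJM framework that the stochastic string model generalises, the instantaneous forward rate f(t, T) evolves as

        df(t, T) = \alpha(t, T)\,dt + \sigma(t, T)\,dW_t,

    and absence of arbitrage fixes the drift under the risk-neutral measure to

        \alpha(t, T) = \sigma(t, T) \int_t^T \sigma(t, s)\, ds.

    The orthogonality condition in result (a) is an additional constraint on the volatilities σ beyond this standard drift restriction; its precise statement is given in the paper.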

  15. Total dose and dose rate models for bipolar transistors in circuit simulation.

    SciTech Connect

    Campbell, Phillip Montgomery; Wix, Steven D.

    2013-05-01

    The objective of this work is to develop a model for total dose effects in bipolar junction transistors for use in circuit simulation. The components of the model are an electrical model of device performance that includes the effects of trapped charge on device behavior, and a model that calculates the trapped charge densities in a specific device structure as a function of radiation dose and dose rate. Simulations based on this model are found to agree well with measurements on a number of devices for which data are available.

  16. A Minimalistic Resource Allocation Model to Explain Ubiquitous Increase in Protein Expression with Growth Rate

    PubMed Central

    Keren, Leeat; Segal, Eran; Milo, Ron

    2016-01-01

    Most proteins show changes in level across growth conditions. Many of these changes seem to be coordinated with the specific growth rate rather than with the growth environment or the protein function. Although cellular growth rates, gene expression levels and gene regulation have been at the center of biological research for decades, only a few models give a baseline prediction of how the proteome fraction occupied by a gene depends on the specific growth rate. We present a simple model that predicts a widely coordinated increase in the fraction of many proteins out of the proteome, proportionally with the growth rate. The model reveals how passive redistribution of resources, due to active regulation of only a few proteins, can have proteome-wide effects that are quantitatively predictable. Our model provides a potential explanation for why and how such a coordinated response of a large fraction of the proteome to the specific growth rate arises under different environmental conditions. The simplicity of our model can also be useful by serving as a baseline null hypothesis in the search for active regulation. We exemplify the usage of the model by analyzing the relationship between growth rate and proteome composition for the model microorganism E. coli as reflected in recent proteomics data sets spanning various growth conditions. We find that the fraction out of the proteome of a large number of proteins, from different cellular processes, increases proportionally with the growth rate. Notably, ribosomal proteins, which have previously been reported to increase in fraction with growth rate, are only a small part of this group. We suggest that, although the fractions of many proteins change with the growth rate, such changes may be partially driven by a global effect, not necessarily requiring specific cellular control mechanisms. PMID:27073913

  17. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language and must therefore be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.

  18. Beliefs and stochastic modelling of interest rate scenario risk

    NASA Astrophysics Data System (ADS)

    Galic, E.; Molgedey, L.

    2001-04-01

    We present a framework that allows for a systematic assessment of risk given a specific model and belief about the market. Within this framework the time evolution of risk is modeled in a twofold way. On the one hand, risk is modeled by the time-discrete and nonlinear GARCH(1,1) process, which allows for a (time-)local understanding of its level, together with a short-term forecast. On the other hand, via a diffusion approximation, the time evolution of the probability density of risk is modeled by a Fokker-Planck equation. As a final step, using Bayes' theorem, beliefs are conditioned on the stationary probability density function obtained from the Fokker-Planck equation. We believe this to be a highly rigorous framework for integrating subjective judgments of future market behavior with the underlying models. In order to demonstrate the approach, we apply it to risk assessment of empirical interest rate scenario methodologies, i.e. the application of Principal Component Analysis to the dynamics of bonds.
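
    A minimal sketch of the discrete-time layer of this framework, the GARCH(1,1) recursion whose long-run behaviour the Fokker-Planck layer approximates in continuous time (parameter values are illustrative, not taken from the paper):

        import numpy as np

        def simulate_garch(omega=1e-6, alpha=0.08, beta=0.90, n=100_000, seed=0):
            """Simulate GARCH(1,1): var_t = omega + alpha*eps_{t-1}^2 + beta*var_{t-1}."""
            rng = np.random.default_rng(seed)
            var = omega / (1.0 - alpha - beta)   # start at the stationary variance
            out = np.empty(n)
            for t in range(n):
                out[t] = var
                eps = np.sqrt(var) * rng.standard_normal()
                var = omega + alpha * eps**2 + beta * var
            return out

        v = simulate_garch()
        # the histogram of v approximates the stationary risk density on which
        # beliefs would be conditioned; its mean is ~ omega/(1-alpha-beta) = 5e-5
        print(v.mean())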

  19. Ultra High-Rate Germanium (UHRGe) Modeling Status Report

    SciTech Connect

    Warren, Glen A.; Rodriguez, Douglas C.

    2012-06-07

    The Ultra-High Rate Germanium (UHRGe) project at Pacific Northwest National Laboratory (PNNL) is conducting research to develop a high-purity germanium (HPGe) detector that can provide both the high resolution typical of germanium and high signal throughput. Such detectors may be beneficial for a variety of potential applications ranging from safeguards measurements of used fuel to material detection and verification using active interrogation techniques. This report describes some of the initial radiation transport modeling efforts that have been conducted to help guide the design of the detector as well as a description of the process used to generate the source spectrum for the used fuel application evaluation.

  20. Networks in financial markets based on the mutual information rate.

    PubMed

    Fiedor, Paweł

    2014-05-01

    In the last few years there have been many efforts in econophysics studying how network theory can facilitate understanding of complex financial markets. These efforts consist mainly of the study of correlation-based hierarchical networks. This is somewhat surprising, as research on financial markets assumes they are complex systems that behave in a nonlinear manner, which is confirmed by numerous studies, making the reliance on correlations, which capture only linear dependencies, baffling. In this paper we introduce a way to incorporate nonlinear dynamics and dependencies into hierarchical networks to study financial markets using mutual information and its dynamical extension: the mutual information rate. We show that this approach leads to different results than the correlation-based approach used in most studies, on the basis of 91 companies listed on the New York Stock Exchange 100 between 2003 and 2013, using minimal spanning trees and planar maximally filtered graphs.
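
    A simplified sketch of the pipeline follows. Plain mutual information on binned returns stands in for the mutual information rate, and the distance transform is ad hoc; both are assumptions for illustration, not the paper's estimator.

        import numpy as np
        from sklearn.metrics import mutual_info_score
        from scipy.sparse.csgraph import minimum_spanning_tree

        def mi_distance_matrix(returns, bins=8):
            """returns: (n_days, n_assets) array of log-returns."""
            n = returns.shape[1]
            binned = [np.digitize(r, np.histogram_bin_edges(r, bins))
                      for r in returns.T]
            d = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    mi = mutual_info_score(binned[i], binned[j])
                    d[i, j] = d[j, i] = 1.0 / (1.0 + mi)  # ad hoc: high MI -> short edge
            return d

        rng = np.random.default_rng(1)
        R = rng.standard_normal((500, 5))            # toy data: 500 days, 5 assets
        mst = minimum_spanning_tree(mi_distance_matrix(R))
        print(mst.toarray())                          # edges of the hierarchical network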

  2. Category Rating Is Based on Prototypes and Not Instances: Evidence from Feedback-Dependent Context Effects

    ERIC Educational Resources Information Center

    Petrov, Alexander A.

    2011-01-01

    Context effects in category rating on a 7-point scale are shown to reverse direction depending on feedback. Context (skewed stimulus frequencies) was manipulated between and feedback within subjects in two experiments. The diverging predictions of prototype- and exemplar-based scaling theories were tested using two representative models: ANCHOR…

  3. Mass Customization Production Planning System by Advance Demand Information Based on Unfulfilled-order-rate II

    NASA Astrophysics Data System (ADS)

    Ueno, Nobuyuki; Kadomoto, Kiyotaka; Okuhara, Koji

    In a previous paper, we proposed a Mass Customization Production Planning & Management System (MCPS) based on the unfulfilled-order-rate, using Advance Demand Information (the so-called 'Naiji System', a unique form of cooperation between a maker and its suppliers in Japan), together with three indicators for estimating the unfulfilled-order-rate. Applying these indicators to the model, we investigated the behavior of the unfulfilled-order-rate at the final period of the planning horizon. In this paper, we propose a new model for purchasing, and investigate the unfulfilled-order-rate at each period and its impact on total inventory. We find that total inventories decrease by 5.9%-20.0% when using SOn rather than SOn(0). We also extend the base-stock policy to a multi-period version, and we prove that the MCPS purchasing model using SOn(0) is equivalent to the multi-period base-stock policy under the specified condition.

  4. Dependence of X-Ray Burst Models on Nuclear Reaction Rates

    NASA Astrophysics Data System (ADS)

    Cyburt, R. H.; Amthor, A. M.; Heger, A.; Johnson, E.; Keek, L.; Meisel, Z.; Schatz, H.; Smith, K.

    2016-10-01

    X-ray bursts are thermonuclear flashes on the surface of accreting neutron stars, and reliable burst models are needed to interpret observations in terms of properties of the neutron star and the binary system. We investigate the dependence of X-ray burst models on uncertainties in (p, γ), (α, γ), and (α, p) nuclear reaction rates using fully self-consistent burst models that account for the feedbacks between changes in nuclear energy generation and changes in astrophysical conditions. A two-step approach first identified sensitive nuclear reaction rates in a single-zone model with ignition conditions chosen to match calculations with a state-of-the-art 1D multi-zone model based on the Kepler stellar evolution code. All relevant reaction rates on neutron-deficient isotopes up to mass 106 were individually varied by a factor of 100 up and down. Calculations of the 84 changes in reaction rate with the highest impact were then repeated in the 1D multi-zone model. We find a number of uncertain reaction rates that affect predictions of light curves and burst ashes significantly. The results provide insights into the nuclear processes that shape observables from X-ray bursts, and guidance for future nuclear physics work to reduce nuclear uncertainties in X-ray burst models.

  5. Developing models for the prediction of hospital healthcare waste generation rate.

    PubMed

    Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe

    2016-01-01

    An increase in the number of health institutions, along with the frequent use of disposable medical products, has contributed to an increasing healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generated beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, no mathematical model has been developed for Ethiopian hospitals to predict the healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of the healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition and generation rate and to develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R(2) = 0.965), and a weak one with the number of outpatients (R(2) = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching and private). In these models, the numbers of inpatients and outpatients were significant factors in the quantity of waste generated. The influence of the numbers of inpatients and outpatients treated varies between hospitals; therefore, different models were developed based on the type of hospital. PMID:26437681
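
    The per-hospital models are linear in the patient counts; a minimal sketch of fitting such a model is given below (all numbers are invented for illustration, not the study's data):

        import numpy as np

        # toy daily records: inpatient count, outpatient count, waste (kg)
        inpatients  = np.array([120.0, 150.0, 180.0, 210.0, 260.0])
        outpatients = np.array([300.0, 320.0, 280.0, 350.0, 400.0])
        waste_kg    = np.array([210.0, 265.0, 300.0, 360.0, 445.0])

        # design matrix [1, inpatients, outpatients] for ordinary least squares
        X = np.column_stack([np.ones_like(inpatients), inpatients, outpatients])
        coef, *_ = np.linalg.lstsq(X, waste_kg, rcond=None)
        print(coef)      # intercept, kg per inpatient, kg per outpatient
        print(X @ coef)  # fitted values, to compare against waste_kg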

  6. Modelling of Dynamic Rock Fracture Process with a Rate-Dependent Combined Continuum Damage-Embedded Discontinuity Model Incorporating Microstructure

    NASA Astrophysics Data System (ADS)

    Saksala, Timo

    2016-10-01

    This paper deals with numerical modelling of rock fracture under dynamic loading. To this end, a combined continuum damage-embedded discontinuity model is applied in finite element modelling of crack propagation in rock. In this model, the strong loading-rate sensitivity of rock is captured by the rate-dependent continuum scalar damage model that controls the pre-peak nonlinear hardening part of the rock behaviour. The post-peak exponential softening part of the rock behaviour is governed by the embedded displacement discontinuity model describing the mode I, mode II and mixed-mode fracture of rock. Rock heterogeneity is incorporated in the present approach by a random description of the rock mineral texture based on Voronoi tessellation. The model performance is demonstrated in numerical examples where uniaxial tension and compression tests on rock are simulated. Finally, the dynamic three-point bending test of a semicircular disc is simulated in order to show that the model correctly predicts the strain-rate-dependent tensile strengths as well as the failure modes of rock in this test. Special emphasis is laid on modelling the loading-rate sensitivity of the tensile strength of Laurentian granite.

  7. On conductance-based neural field models

    PubMed Central

    Pinotsis, Dimitris A.; Leite, Marco; Friston, Karl J.

    2013-01-01

    This technical note introduces a conductance-based neural field model that combines biologically realistic synaptic dynamics, based on transmembrane currents, with neural field equations describing the propagation of spikes over the cortical surface. This model allows for fairly realistic inter- and intra-laminar intrinsic connections that underlie spatiotemporal neuronal dynamics. We focus on the response functions of expected neuronal states (such as depolarization) that generate observed electrophysiological signals (like LFP recordings and EEG). These response functions characterize the model's transfer functions and implicit spectral responses to (uncorrelated) input. Our main finding is that both the evoked responses (impulse response functions) and induced responses (transfer functions) show qualitative differences depending upon whether one uses a neural mass or field model. Furthermore, there are differences between the equivalent convolution and conductance models. Overall, all models reproduce a characteristic increase in frequency when inhibition is increased by increasing the rate constants of inhibitory populations. However, convolution and conductance-based models show qualitatively different changes in power, with convolution models showing decreases with increasing inhibition, while conductance models show the opposite effect. These differences suggest that conductance-based field models may be important in empirical studies of cortical gain control or pharmacological manipulations. PMID:24273508

  8. Rain cell size statistics as a function of rain rate for attenuation modeling

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1983-01-01

    Rain cell size statistics as a function of rain rate have been deduced from a database of radar rain reflectivity data acquired over a three-year period at Wallops Island, VA. These cell statistics have important applications in slant-path rain attenuation modeling and in remote sensing of the earth's surface from space at frequencies above 10 GHz.

  9. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    EPA Science Inventory

    A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  10. Relationship between soil erodibility and modeled infiltration rate in different soils

    NASA Astrophysics Data System (ADS)

    Wang, Guoqiang; Fang, Qingqing; Wu, Binbin; Yang, Huicai; Xu, Zongxue

    2015-09-01

    The relationship between soil erodibility, which is hard to measure, and modeled infiltration rate has rarely been researched. Here, the soil erodibility factors (K and Ke in the USLE, Ki and K1 in the WEPP) were calculated and the infiltration rates were modeled based on designed laboratory simulation experiments and a proposed infiltration model, in order to establish their relationship. The impacts of compost amendment on the soil erosion characteristics and on this relationship were also studied. Two contrasting agricultural soils (bare and cultivated fluvo-aquic soils) were used, and different poultry compost contents (control, low and high) were applied to both soils. The results indicated that the runoff rate, sediment yield rate and soil erodibility of the bare soil treatments were generally higher than those of the corresponding cultivated soil treatments. The application of composts generally decreased sediment yield and soil erodibility but did not always decrease runoff. The comparison of measured and modeled infiltration rates indicated that the model represented the infiltration processes well, with a Nash-Sutcliffe coefficient of 0.84 across all treatments. Significant negative logarithmic correlations were found between the final infiltration rate (FIR) and the four soil erodibility factors, with the relationship between USLE-K and FIR showing the best correlation. The application of poultry composts did not influence the logarithmic relationship between FIR and soil erodibility. Our study provides a useful tool for estimating soil erodibility.

  11. Measurement and Modeling of Respiration Rate of Tomato (Cultivar Roma) for Modified Atmosphere Storage.

    PubMed

    Kandasamy, Palani; Moitra, Ranabir; Mukherjee, Souti

    2015-01-01

    Experiments were conducted to determine the respiration rate of tomato at 10, 20 and 30 °C using a closed respiration system. Oxygen depletion and carbon dioxide accumulation in the system containing tomato were monitored. The respiration rate was found to decrease with increasing CO2 and decreasing O2 concentration. A Michaelis-Menten type model based on enzyme kinetics was evaluated for predicting the respiration rate, using the experimental data generated. The model parameters obtained from the respiration rate at different O2 and CO2 concentration levels were used to fit the model against the storage temperatures. The fit was fair (R2 = 0.923 to 0.970) when the respiration rate was expressed as a function of O2 concentration. Since the inhibition constant for CO2 tended towards negative values, the model was modified as a function of O2 concentration only. The modified model was fitted to the experimental data and showed good agreement (R2 = 0.998) with the experimentally estimated respiration rate.
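
    A short sketch of fitting the modified (O2-only) Michaelis-Menten respiration model by least squares; the data points and fitted constants below are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Michaelis-Menten type respiration model without CO2 inhibition,
# mirroring the modified model above: R depends on O2 concentration only.
# Vm (max respiration rate) and Km (half-saturation constant) are fitted.
def respiration_rate(o2, vm, km):
    return vm * o2 / (km + o2)

o2 = np.array([2.0, 5.0, 8.0, 12.0, 16.0, 21.0])      # % O2 (illustrative)
r_obs = np.array([4.1, 7.9, 10.2, 12.0, 13.1, 13.9])  # mL CO2/kg/h (illustrative)

(vm, km), _ = curve_fit(respiration_rate, o2, r_obs, p0=(15.0, 5.0))
r_pred = respiration_rate(o2, vm, km)
ss_res = np.sum((r_obs - r_pred) ** 2)
ss_tot = np.sum((r_obs - r_obs.mean()) ** 2)
print(f"Vm={vm:.2f}, Km={km:.2f}, R^2={1 - ss_res/ss_tot:.3f}")
```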

  12. Low-dose-rate extrapolation using the multistage model

    SciTech Connect

    Portier, C.; Hoel, D.

    1983-12-01

    The distribution of the maximum likelihood estimates of virtually safe levels of exposure to environmental chemicals is derived by using large-sample theory and Monte Carlo simulation according to the Armitage-Doll multistage model. Using historical dose-response data, we develop a set of 33 two-stage models upon which we base our conclusions. The large-sample distributions of the virtually safe dose are normal for cases in which the multistage-model parameters have nonzero expectation, and are skewed in other cases. The large-sample theory does not provide a good approximation of the distribution observed for small bioassays when Monte Carlo simulation is used. The constrained nature of the multistage-model parameters leads to bimodal distributions for small bioassays. The two modes are the direct result of estimating the linear parameter in the multistage model; the lower mode results from estimating this parameter to be nonzero, and the upper mode from estimating it to be zero. The results of this research emphasize the need for incorporating biological theory in the model-selection process.
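
    As a worked illustration of the setting, a two-stage Armitage-Doll model and the corresponding virtually safe dose solved from a target extra risk; the parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import brentq

# Two-stage multistage model: P(d) = 1 - exp(-(q0 + q1*d + q2*d^2)),
# with all q_i >= 0. The virtually safe dose (VSD) is the dose whose
# extra risk over background equals a small target such as 1e-6.
q0, q1, q2 = 0.01, 0.05, 0.002  # illustrative constrained parameters

def prob_response(d):
    return 1.0 - np.exp(-(q0 + q1 * d + q2 * d ** 2))

def extra_risk(d):
    p0 = prob_response(0.0)
    return (prob_response(d) - p0) / (1.0 - p0)

# Solve extra_risk(d) = 1e-6 for the VSD by root finding
vsd = brentq(lambda d: extra_risk(d) - 1e-6, 0.0, 1.0)
print(f"Virtually safe dose: {vsd:.3e}")
```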

  13. A method for calculating strain energy release rate based on beam theory

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Pandey, R. K.

    1993-01-01

    The Timoshenko beam theory was used to model cracked beams and to calculate the total strain energy release rate. The root rotations of the beam segments at the crack tip were estimated based on an approximate 2D elasticity solution. By including the strain energy released due to the root rotations of the beams during crack extension, the strain energy release rate obtained using beam theory agrees very well with the 2D finite element solution. Numerical examples were given for various beam geometries and loading conditions. Comparisons with existing beam models were also given.

  14. Improved method for calculating strain energy release rate based on beam theory

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Pandey, R. K.

    1994-01-01

    The Timoshenko beam theory was used to model cracked beams and to calculate the total strain-energy release rate. The root rotations of the beam segments at the crack tip were estimated based on an approximate two-dimensional elasticity solution. By including the strain energy released due to the root rotations of the beams during crack extension, the strain-energy release rate obtained using beam theory agrees very well with the two-dimensional finite element solution. Numerical examples were given for various beam geometries and loading conditions. Comparisons with existing beam models were also given.
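
    Both abstracts refine beam-theory estimates of the strain energy release rate; as background, here is a numerical sketch of the classical compliance method, G = P^2/(2b) dC/da, for a double cantilever beam, which the root-rotation corrections described above improve upon. Geometry and load values are illustrative.

```python
# Classical compliance method for the strain energy release rate:
# G = (P^2 / (2*b)) * dC/da, where C(a) is the specimen compliance as a
# function of crack length a, P the load, and b the specimen width.
# For a double cantilever beam, Euler-Bernoulli theory gives
# C(a) = 2*a^3 / (3*E*I); root-rotation corrections effectively
# increase the crack length. All values are illustrative.
E = 70e9             # Young's modulus, Pa
b, h = 0.025, 0.003  # width and arm thickness, m
I = b * h ** 3 / 12.0
P = 100.0            # applied load, N

def compliance(a):
    return 2.0 * a ** 3 / (3.0 * E * I)

a = 0.05   # crack length, m
da = 1e-6  # step for central-difference derivative
dC_da = (compliance(a + da) - compliance(a - da)) / (2 * da)
G = P ** 2 / (2.0 * b) * dC_da
print(f"G = {G:.1f} J/m^2")
```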

  15. Metabolically Derived human ventilation rates: A revised approach based upon oxygen consumption rates (Final Report) 2009

    EPA Science Inventory

    The purpose of this report is to provide a revised approach for calculating an individual's ventilation rate directly from their oxygen consumption rate. This revised approach will be used to update the ventilation rate information in the Exposure Factors Handbook, which serve as...

  16. Model based vibration monitoring

    SciTech Connect

    Esat, I.; Paya, B.; Badi, M.N.M.

    1996-11-01

    The principal source of vibratory excitation in a gear system is the unsteady component of the relative angular motion of a pair of meshing spur gears. This vibratory excitation is described by the transmission error. The transmission error presents itself as a varying force at the contact point of the meshing gear teeth. The varying force is also influenced by the varying tooth stiffness due to the changing orientation of the teeth relative to each other during the contact phase of each pair. Such a varying force produces both lateral and torsional excitation of the gear system. This paper presents an analytical formulation of a simple system of two meshing spur gears as a three-mass system (18 DOF). The mathematical model also incorporates an analytical formulation of the tooth stiffness. The analytical results are compared with experimental results. At this stage of the analysis, the procedure developed for handling the nonlinear influences of the tooth geometry is not fully implemented, and the tooth stiffness is taken as a constant value representing the average tooth stiffness. The comparison between the analytical and experimental results is encouraging, as the three main frequencies obtained from the FFT of the experimental results correlate closely with the analytical results.

  17. Improved model for the angular dependence of excimer laser ablation rates in polymer materials

    SciTech Connect

    Pedder, J. E. A.; Holmes, A. S.; Dyer, P. E.

    2009-10-26

    Measurements of the angle-dependent ablation rates of polymers that have applications in microdevice fabrication are reported. A simple model based on Beer's law, including plume absorption, is shown to give good agreement with the experimental findings for polycarbonate and SU8, ablated using the 193 and 248 nm excimer lasers, respectively. The modeling forms a useful tool for designing masks needed to fabricate complex surface relief by ablation.
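
    A hedged sketch of a Beer's-law etch-per-pulse model with a simple cosine angular correction; the effective absorption coefficient stands in for plume absorption, and all constants are illustrative rather than the authors' fitted values.

```python
import numpy as np

# Beer's-law ablation model with an angular correction: at incidence angle
# theta the surface receives F*cos(theta), and the etch depth per pulse
# follows d = (1/alpha_eff) * ln(F_surf / F_th) above threshold. The
# effective absorption coefficient alpha_eff can fold in plume absorption.
def etch_depth_per_pulse(fluence, theta_deg, alpha_eff=5e4, f_th=0.05):
    """fluence in J/cm^2, alpha_eff in 1/cm, f_th in J/cm^2; depth in cm."""
    f_surf = fluence * np.cos(np.radians(theta_deg))
    if f_surf <= f_th:
        return 0.0
    return np.log(f_surf / f_th) / alpha_eff

for theta in (0, 30, 60):
    d = etch_depth_per_pulse(0.5, theta)
    print(f"theta={theta:2d} deg: {d*1e4:.3f} um/pulse")
```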

  18. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  19. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    PubMed

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-01

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model, consisting of a hot-spot ignition term, a low-pressure slow-burning term and a high-pressure fast-reaction term, for shock initiation of multi-component Plastic Bonded Explosives (PBX). Based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term and its reaction rate are obtained through a "mixing rule" over the explosive components; new expressions for both the low-pressure slow-burning term and the high-pressure fast-reaction term are also obtained by establishing relationships between the reaction rate of the multi-component PBX explosive and those of its components, based on the corresponding terms of a mesoscopic reaction rate model. For verification, the new reaction rate model is incorporated into the DYNA2D code to numerically simulate the shock initiation process of the PBXC03 and PBXC10 multi-component PBX explosives, and the simulated pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data.
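
    A structural sketch of a three-term rate law of the ignition-and-growth family, mirroring the ignition / slow-burning / fast-reaction split described above; the exponents and constants are placeholders, and the paper's mixing rule for multi-component PBX is not reproduced.

```python
# Three-term reaction rate sketch (ignition-and-growth family):
#   dlam/dt = I*(1-lam)^b * max(rho/rho0 - 1 - a, 0)^x   (hot-spot ignition)
#           + G1*(1-lam)^c * lam^d * P^y                 (slow burning)
#           + G2*(1-lam)^e * lam^g * P^z                 (fast reaction)
# All exponents and constants below are illustrative placeholders.
def reaction_rate(lam, rho, rho0, P,
                  I=1e6, a=0.02, b=0.667, x=4.0,
                  G1=100.0, c=0.667, d=0.111, y=1.0,
                  G2=400.0, e=0.333, g=1.0, z=2.0):
    """lam: reacted mass fraction, rho/rho0: compression, P: pressure (GPa)."""
    ignition = I * (1 - lam) ** b * max(rho / rho0 - 1 - a, 0.0) ** x
    slow = G1 * (1 - lam) ** c * lam ** d * P ** y
    fast = G2 * (1 - lam) ** e * lam ** g * P ** z
    return ignition + slow + fast

print(reaction_rate(lam=0.1, rho=2.1, rho0=1.84, P=5.0))
```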

  20. Model-Based Safety Analysis

    NASA Technical Reports Server (NTRS)

    Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.

    2006-01-01

    System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

  1. Base rates of hate crime victimization among college students.

    PubMed

    Rayburn, Nadine Recker; Earleywine, Mitchell; Davison, Gerald C

    2003-10-01

    This study uses the unmatched count technique (UCT) to estimate base rates for hate crime victimization in college students and compares the results with estimates found using conventional methods. Hate crimes, criminal acts perpetrated against individuals or members of specific stigmatized groups, are intended to express condemnation, hate, disapproval, dislike, or distrust for a group. The UCT is a promising tool in the investigation of hate crime because it does not require participants to directly answer sensitive questions. This may yield more accurate responses than other methods. The UCT revealed higher estimates for a variety of serious hate crimes, including physical and sexual assault. These higher estimates provide a better sense of the level of hate crime victimization and point to the increased need for hate crime victims' assistance programs on college campuses.
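
    The UCT estimate is simply the difference in mean item counts between the treatment list (which includes the sensitive item) and the control list; a minimal sketch with hypothetical counts:

```python
import numpy as np

# Unmatched count technique (UCT): the control group sees k innocuous
# items; the treatment group sees the same k items plus the sensitive item.
# Respondents report only HOW MANY items apply, never which ones, so the
# estimated prevalence of the sensitive item is the difference in means.
def uct_prevalence(control_counts, treatment_counts):
    control = np.asarray(control_counts, dtype=float)
    treatment = np.asarray(treatment_counts, dtype=float)
    estimate = treatment.mean() - control.mean()
    # Standard error of a difference of independent means
    se = np.sqrt(control.var(ddof=1) / len(control)
                 + treatment.var(ddof=1) / len(treatment))
    return estimate, se

# Hypothetical counts of "items that apply" from each group
control = [2, 1, 3, 2, 2, 1, 0, 3]
treatment = [3, 2, 3, 2, 3, 1, 1, 4]
est, se = uct_prevalence(control, treatment)
print(f"Estimated victimization rate: {est:.3f} +/- {1.96*se:.3f}")
```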

  2. Optimization of Surface Acoustic Wave-Based Rate Sensors

    PubMed Central

    Xu, Fangqian; Wang, Wen; Shao, Xiuting; Liu, Xinlu; Liang, Yong

    2015-01-01

    The optimization of a surface acoustic wave (SAW)-based rate sensor incorporating metallic dot arrays was performed by using the approach of partial-wave analysis in layered media. The optimal sensor chip designs, including the choice of piezoelectric crystal and metallic dot material, the dot thickness, and the sensor operation frequency, were determined theoretically. The theoretical predictions were confirmed experimentally by using the developed SAW sensor composed of differential delay-line oscillators and a metallic dot array deposited along the acoustic wave propagation path of the SAW delay lines. A significant improvement in sensor sensitivity was achieved when 128° YX LiNbO3, a thicker Au dot array, and a low operation frequency were used to structure the sensor. PMID:26473865

  3. A multiscale strength model for tantalum over an extended range of strain rates

    NASA Astrophysics Data System (ADS)

    Barton, N. R.; Rhee, M.

    2013-09-01

    A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].

  4. Pronounced increase in breathing rate in the "hair dryer model" of experimental febrile seizures.

    PubMed

    Schuchmann, Sebastian; Tolner, Else A; Marshall, Pepin; Vanhatalo, Sampsa; Kaila, Kai

    2008-05-01

    In a study using a heated chamber for induction of experimental febrile seizures (eFS) in rat pups, ictal activity was shown to be precipitated by a respiratory alkalosis (Schuchmann et al., 2006). In sharp contrast to this, in a recent review Dubé et al. (2007) suggest that the respiratory alkalosis is model specific, and that no increase in respiratory rate is observed in the widely used "hair dryer model" of eFS. The data in the present work, based on well-established techniques for measuring respiratory rates in rat pups, show a pronounced increase in breathing rate in the "hair dryer model", with values slightly higher than those recorded in the heated chamber model. Hence, a temperature-evoked increase in respiration is a common feature of these two models of eFS.

  5. Regression rate behaviors of HTPB-based propellant combinations for hybrid rocket motor

    NASA Astrophysics Data System (ADS)

    Sun, Xingliang; Tian, Hui; Li, Yuelong; Yu, Nanjia; Cai, Guobiao

    2016-02-01

    The purpose of this paper is to characterize the regression rate behavior of hybrid rocket motor propellant combinations, using hydrogen peroxide (HP), gaseous oxygen (GOX) and nitrous oxide (N2O) as the oxidizer and hydroxyl-terminated polybutadiene (HTPB) as the base fuel. To carry out this research by experiment and simulation, a hybrid rocket motor test system and a numerical simulation model are established. A series of hybrid rocket motor firing tests is conducted with different propellant combinations, and several of those are used as references for numerical simulations. The numerical simulation model is developed by combining the Navier-Stokes equations with a turbulence model, a one-step global reaction model, and a solid-gas coupling model. The distribution of the regression rate along the axis is determined by applying the simulation model to predict the combustion process and heat transfer inside the hybrid rocket motor. The time-space-averaged regression rate shows good agreement between the numerical and experimental values. The results indicate that the N2O/HTPB and GOX/HTPB propellant combinations have higher regression rates, the enhancement effect for the latter being significant due to its higher flame temperature. Furthermore, including aluminum (Al) and/or ammonium perchlorate (AP) in the grain does enhance the regression rate, mainly due to the additional energy released inside the chamber and the heat fed back to the grain surface by aluminum combustion.
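
    Regression-rate data of this kind are conventionally summarized by the power law r = a·Gox^n, fitted in log space; a minimal sketch with illustrative numbers, not the paper's measurements:

```python
import numpy as np

# Classical hybrid-fuel regression-rate power law, r = a * Gox^n, fitted in
# log-log space. Coefficients below come from invented data points.
gox = np.array([50.0, 80.0, 120.0, 180.0, 250.0])  # oxidizer mass flux, kg/(m^2 s)
r = np.array([0.55, 0.78, 1.02, 1.35, 1.68])       # regression rate, mm/s

n, log_a = np.polyfit(np.log(gox), np.log(r), 1)   # slope = n, intercept = ln(a)
a = np.exp(log_a)
print(f"r = {a:.4f} * Gox^{n:.3f}")
print("predicted r at Gox=150:", a * 150.0 ** n, "mm/s")
```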

  6. A simulation model for the determination of tabarru' rate in a family takaful

    NASA Astrophysics Data System (ADS)

    Ismail, Hamizun bin

    2014-06-01

    The concept of tabarru' that is incorporated in family takaful serves to eliminate the element of uncertainty in the contract, as a participant agrees to relinquish a certain portion of his contribution as a donation. The most important feature of family takaful is that it does not guarantee a definite return on a participant's contribution, unlike its conventional counterpart, where a premium is paid in return for a guaranteed amount of insurance benefit. In other words, investment returns on the funds contributed by participants are based on actual investment experience. The objective of this study is to set up a framework for determining the tabarru' rate by simulation. The model is based on a binomial death process. Specifically, a linear tabarru' rate and a flat tabarru' rate are introduced. The results of the simulation trials show that the linear assumption on the tabarru' rate has an advantage over its flat counterpart with respect to the risk of the investment accumulation at maturity.
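
    A heavily simplified Monte Carlo sketch of the framework: a binomial death process drives the benefit outgo, and a flat tabarru' schedule is compared with a linearly increasing one. All demographic and financial assumptions here are hypothetical, not the paper's calibrated model.

```python
import numpy as np

# Binomial death process for a family takaful pool: each year, deaths are
# drawn from Binomial(alive, q); tabarru' donations flow in, benefits flow
# out, and the terminal fund is compared across rate schedules.
rng = np.random.default_rng(42)

n0 = 10_000        # initial participants (assumed)
years = 20
q = 0.005          # annual death probability (assumed constant)
contribution = 100.0

def simulate(tabarru_rates):
    fund, alive = 0.0, n0
    for t in range(years):
        deaths = rng.binomial(alive, q)          # binomial death process
        fund += alive * contribution * tabarru_rates[t]
        fund -= deaths * 1_000.0                 # benefit paid per death
        alive -= deaths
    return fund

flat = np.full(years, 0.06)
linear = np.linspace(0.02, 0.10, years)          # rises with policy duration

flat_results = [simulate(flat) for _ in range(500)]
lin_results = [simulate(linear) for _ in range(500)]
print("flat:   mean %.0f, std %.0f" % (np.mean(flat_results), np.std(flat_results)))
print("linear: mean %.0f, std %.0f" % (np.mean(lin_results), np.std(lin_results)))
```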

  7. Constraint Based Modeling Going Multicellular

    PubMed Central

    Martins Conde, Patricia do Rosario; Sauter, Thomas; Pfau, Thomas

    2016-01-01

    Constraint based modeling has seen applications in many microorganisms. For example, there are now established methods to determine potential genetic modifications and external interventions to increase the efficiency of microbial strains in chemical production pipelines. In addition, multiple models of multicellular organisms have been created including plants and humans. While initially the focus here was on modeling individual cell types of the multicellular organism, this focus recently started to switch. Models of microbial communities, as well as multi-tissue models of higher organisms have been constructed. These models thereby can include different parts of a plant, like root, stem, or different tissue types in the same organ. Such models can elucidate details of the interplay between symbiotic organisms, as well as the concerted efforts of multiple tissues and can be applied to analyse the effects of drugs or mutations on a more systemic level. In this review we give an overview of the recent development of multi-tissue models using constraint based techniques and the methods employed when investigating these models. We further highlight advances in combining constraint based models with dynamic and regulatory information and give an overview of these types of hybrid or multi-level approaches. PMID:26904548

  10. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

  12. Thermodynamic Development of Corrosion Rate Modeling in Iron Phosphate Glasses

    SciTech Connect

    Schlesinger, Mark; Brow, Richard

    2011-10-31

    A two-year research program investigated links between the thermodynamic properties of phosphate glasses and their corrosion rates in different solutions. Glasses in the Na2O-CaO-P2O5 and Na2O-Fe2O3-P2O5 systems were prepared and characterized. These glasses were then exposed in bulk and powder form to acidic (0.1M HCl), basic (0.1M KOH) and neutral (deionized water) solutions at varying exposure times and temperatures. Analysis of the solution and the glass after exposure determined the rate and type of corrosion that occurred. Simultaneously, efforts were made to determine the thermodynamic properties of solid iron phosphate compounds. This included measurement of low-temperature (5-300 K) heat capacities at Brigham Young University, the attempted use of a Parr calorimeter to measure ambient-temperature enthalpies of formation, and attempted measurement of high-temperature heat capacities. Only the first of the three tasks was successfully accomplished. In lieu of experimental measurement, first-principles calculations of enthalpies of formation were performed at Missouri S&T; these results will be used in subsequent modeling efforts.

  13. Model of radiation-induced gain degradation of NPN bipolar junction transistor at different dose rates

    NASA Astrophysics Data System (ADS)

    Qifeng, Zhao; Yiqi, Zhuang; Junlin, Bao; Wei, Hu

    2015-06-01

    Ionizing-radiation-induced current gain degradation in NPN bipolar junction transistors is due to an increase in base current as a result of recombination at the surface of the device. A model is presented which identifies the physical mechanism responsible for the current gain degradation. The increase in surface recombination velocity due to interface states results in an increase in base current. In addition, changes in the surface potential along the base surface induced by oxide-trapped charges can also lead to an increased base current. By combining the production mechanisms of oxide-trapped charges and interface states, this model can explain the fact that the current gain degradation is more severe at a low dose rate than at a high dose rate. Irradiations were performed with a Co-60 source up to a total dose of 70 krad(Si); the low dose rate was 0.1 rad(Si)/s and the high dose rate was 10 rad(Si)/s. The model accords well with the experimental results. Project supported by the National Natural Science Foundation of China (Nos. 61076101, 61204092).

  14. Heat release rate properties of wood-based materials

    SciTech Connect

    Chamberlain, D.L.

    1983-07-01

    A background to the present heat release rate calorimetry is presented. Heat release rates and cumulative heat release were measured for 16 different lumber and wood products, using three different heat release rate instruments. The effects of moisture content, exposure heat flux, density of product, and fire retardant on rate of heat release were measured. The three small-scale heat release rate calorimeters were compared, and equations relating the data from each were developed.

  15. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt market changes and noise. Specifically, we show how combining both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and expected shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.
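
    A sketch of the wavelet-denoising step that precedes risk estimation in such approaches, followed by a plain historical-simulation VaR; the GARCH, copula and extreme-value layers of the paper are omitted, and the data are synthetic.

```python
import numpy as np
import pywt

# Discrete-wavelet denoising of a return series before computing VaR.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1024) * 0.01  # synthetic heavy-tailed returns

coeffs = pywt.wavedec(returns, "db4", level=4)
# Universal threshold estimated from the finest detail coefficients
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thresh = sigma * np.sqrt(2 * np.log(len(returns)))
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")[: len(returns)]

var_95 = -np.quantile(denoised, 0.05)  # 95% historical-simulation VaR
print(f"95% VaR (denoised series): {var_95:.4f}")
```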

  16. Influence of satellite-derived photolysis rates and NOx emissions on Texas ozone modeling

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D. S.; Pour-Biazar, A.; Lamsal, L. N.; White, A. T.; Xiao, X.; Zhou, W.; Henderson, B. H.; Lash, B. F.

    2015-02-01

    Uncertain photolysis rates and emission inventories impair the accuracy of state-level ozone (O3) regulatory modeling. Past studies have separately used satellite-observed clouds to correct the model-predicted photolysis rates, or satellite-constrained top-down NOx emissions to identify and reduce uncertainties in bottom-up NOx emissions. However, the joint application of multiple satellite-derived model inputs to improve O3 state implementation plan (SIP) modeling has rarely been explored. In this study, Geostationary Operational Environmental Satellite (GOES) observations of clouds are applied to derive the photolysis rates, replacing those used in Texas SIP modeling. This changes modeled O3 concentrations by up to 80 ppb and improves O3 simulations by reducing modeled normalized mean bias (NMB) and normalized mean error (NME) by up to 0.1. A sector-based discrete Kalman filter (DKF) inversion approach is incorporated with the Comprehensive Air Quality Model with extensions (CAMx)-decoupled direct method (DDM) model to adjust Texas NOx emissions using a high-resolution Ozone Monitoring Instrument (OMI) NO2 product. The discrepancy between OMI and CAMx NO2 vertical column densities (VCDs) is further reduced by increasing modeled NOx lifetime and adding an artificial amount of NO2 in the upper troposphere. The region-based DKF inversion suggests increasing NOx emissions by 10-50% in most regions, deteriorating the model performance in predicting ground NO2 and O3, while the sector-based DKF inversion tends to scale down area and nonroad NOx emissions by 50%, leading to a 2-5 ppb decrease in ground 8 h O3 predictions. Model performance in simulating ground NO2 and O3 is improved using sector-based inversion-constrained NOx emissions, with 0.25 and 0.04 reductions in NMBs and 0.13 and 0.04 reductions in NMEs, respectively. Using both GOES-derived photolysis rates and OMI-constrained NOx emissions together reduces modeled NMB and NME by 0.05, increases the model

  17. A comparison between computer-controlled and set work rate exercise based on target heart rate

    NASA Technical Reports Server (NTRS)

    Pratt, Wanda M.; Siconolfi, Steven F.; Webster, Laurie; Hayes, Judith C.; Mazzocca, Augustus D.; Harris, Bernard A., Jr.

    1991-01-01

    Two methods are compared for observing the heart rate (HR), metabolic equivalents, and time in target HR zone (defined as the target HR ± 5 bpm) during 20 min of exercise at a prescribed intensity of the maximum working capacity. In one method, called set-work rate exercise, the information from a graded exercise test is used to select a target HR and to calculate a corresponding constant work rate that should induce the desired HR. In the other method, the work rate is controlled by a computer algorithm to achieve and maintain a prescribed target HR. It is shown that computer-controlled exercise is an effective alternative to the traditional set work rate exercise, particularly when tight control of cardiovascular responses is necessary.
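
    A minimal sketch of the computer-controlled alternative, assuming a first-order work-rate-to-HR response and simple integral feedback; the gains and physiological constants are illustrative assumptions, not the study's algorithm.

```python
import numpy as np

# Closed-loop exercise control sketch: a first-order model maps work rate
# to heart rate, and an integral-action controller adjusts the work rate
# every 15 s to hold the target HR within the +/- 5 bpm zone.
hr_rest = 70.0      # resting HR, bpm (assumed)
gain = 0.4          # steady-state bpm per watt (assumed)
tau = 45.0          # HR response time constant, s (assumed)
target = 140.0      # target HR, bpm
dt = 1.0
ki = 0.05           # integral gain, W per bpm-error per second (assumed)

hr, work, in_zone = hr_rest, 100.0, 0
for t in range(1200):                                  # 20 min of exercise
    hr += dt / tau * (hr_rest + gain * work - hr)      # first-order HR dynamics
    if t % 15 == 0:                                    # controller update every 15 s
        work += ki * (target - hr) * 15
        work = np.clip(work, 0.0, 400.0)
    if abs(hr - target) <= 5.0:                        # time in target zone
        in_zone += 1
print(f"time in zone: {in_zone/1200:.0%}, final work rate: {work:.0f} W")
```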

  18. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region, including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied both to past earthquake locations and to fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-values) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity, based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and of SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significantly better performance for
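
    For context, the truncated Gutenberg-Richter activity-rate component reduces to a simple closed form; the a- and b-values below are illustrative, not those of the SHARE model.

```python
# Truncated Gutenberg-Richter annual rate of events with magnitude >= m:
#   N(m) = 10^(a - b*m) - 10^(a - b*Mmax)   for Mmin <= m <= Mmax.
a_val, b_val = 4.5, 1.0   # illustrative activity parameters
m_min, m_max = 4.5, 8.0

def annual_rate(m):
    if not (m_min <= m <= m_max):
        return 0.0
    return 10.0 ** (a_val - b_val * m) - 10.0 ** (a_val - b_val * m_max)

for m in (4.5, 5.5, 6.5, 7.5):
    print(f"M>={m}: {annual_rate(m):.4f} events/yr")
```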

  19. Non-contact Laser-based Human Respiration Rate Measurement

    NASA Astrophysics Data System (ADS)

    Scalise, L.; Marchionni, P.; Ercoli, I.

    2011-08-01

    At present, the majority of the instrumentation used in clinical environments to measure human respiration rate is based on invasive, contact devices. The spirometer is considered the gold-standard instrument and is widely used; it requires direct contact and the patient's collaboration. Laser Doppler vibrometry (LDVi) is an optical, non-contact measurement technique for assessing the velocity and displacement of a surface. LDVi has already been used for the measurement of cardiac activity and of chest-wall displacements. The aims of this work are to select the best measurement point on the thoracic surface for LDVi monitoring of the respiration rate (RR) and to compare the measured data with the RR values provided by a spirometer. The measurement system is composed of an LDV system and a data acquisition board installed on a PC. Tests were made on 10 different points of the thorax for each patient. The patient population comprised 33 subjects (17 male, 16 female). The optimal measurement point was chosen considering the maximum peak-to-peak value of the displacement measured by LDV. Before extracting the RR, a wavelet decomposition was used to better select the expiration peaks. A standard spirometer was used for validation of the data. The tests show that the optimal measurement point is located on the inferior part of the thoracic region (left, front side). A close correlation was obtained between the RR values measured by the spirometer and those measured by the proposed method: a difference of 14±211 ms on the RR value is reported for the entire population of 33 subjects. The proposed method allows non-contact measurement of lung activity (respiration period), reducing electrical and biological risks. Moreover, it allows measurement in critical environments, such as during MRI or on burned skin, where it is difficult or impossible to apply electrodes.

  20. The Impact of Modelling Rate Heterogeneity among Sites on Phylogenetic Estimates of Intraspecific Evolutionary Rates and Timescales

    PubMed Central

    Jia, Fangzhi; Lo, Nathan; Ho, Simon Y. W.

    2014-01-01

    Phylogenetic analyses of DNA sequence data can provide estimates of evolutionary rates and timescales. Nearly all phylogenetic methods rely on accurate models of nucleotide substitution. A key feature of molecular evolution is the heterogeneity of substitution rates among sites, which is often modelled using a discrete gamma distribution. A widely used derivative of this is the gamma-invariable mixture model, which assumes that a proportion of sites in the sequence are completely resistant to change, while substitution rates at the remaining sites are gamma-distributed. For data sampled at the intraspecific level, however, biological assumptions involved in the invariable-sites model are commonly violated. We examined the use of these models in analyses of five intraspecific data sets. We show that using 6–10 rate categories for the discrete gamma distribution of rates among sites is sufficient to provide a good approximation of the marginal likelihood. Increasing the number of gamma rate categories did not have a substantial effect on estimates of the substitution rate or coalescence time, unless rates varied strongly among sites in a non-gamma-distributed manner. The assumption of a proportion of invariable sites provided a better approximation of the asymptotic marginal likelihood when the number of gamma categories was small, but had minimal impact on estimates of rates and coalescence times. However, the estimated proportion of invariable sites was highly susceptible to changes in the number of gamma rate categories. The concurrent use of gamma and invariable-site models for intraspecific data is not biologically meaningful and has been challenged on statistical grounds; here we have found that the assumption of a proportion of invariable sites has no obvious impact on Bayesian estimates of rates and timescales from intraspecific data. PMID:24798481
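
    The discrete gamma approximation replaces the continuous Gamma(α, mean 1) rate distribution with K equal-probability categories, each represented by its bin mean (Yang's approach); a compact sketch of computing the category rates:

```python
import numpy as np
from scipy.stats import gamma

# Discrete-gamma rates: split Gamma(alpha, scale=1/alpha) (mean fixed at 1)
# into k equal-probability bins and represent each bin by its mean rate.
# Uses the identity E[X; X in (lo,hi)] = mean * (F_{alpha+1}(hi) - F_{alpha+1}(lo))
# for the gamma distribution, so no numerical integration is needed.
def discrete_gamma_rates(alpha, k):
    edges = gamma.ppf(np.linspace(0.0, 1.0, k + 1), a=alpha, scale=1.0 / alpha)
    f_up = gamma.cdf(edges, a=alpha + 1.0, scale=1.0 / alpha)
    return np.diff(f_up) * k   # each bin has mass 1/k; rates average to 1

for k in (4, 8):
    rates = discrete_gamma_rates(alpha=0.5, k=k)
    print(k, np.round(rates, 3), "mean:", round(float(rates.mean()), 3))
```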

  1. Implications of two Holocene time-dependent geomagnetic models for cosmogenic nuclide production rate scaling

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel

    2016-01-01

    The geomagnetic field is a major influence on in situ cosmogenic nuclide production rates at a given location (in addition to atmospheric pressure and, to a lesser extent, solar modulation effects). A better understanding of how past fluctuations in these influences affected production rates should allow more accurate application of cosmogenic nuclides. As such, this work explores the cosmogenic nuclide production rate scaling implications of two recent time-dependent spherical harmonic geomagnetic models spanning the Holocene. Korte and Constable (2011, Phys. Earth Planet. Inter. 188, 247-259) and Korte et al. (2011, Earth Planet. Sci. Lett. 312, 497-505) recently updated earlier spherical harmonic paleomagnetic models with new paleomagnetic data from sediment cores in addition to new archeomagnetic and volcanic data. These updated models offer improved resolution and accuracy over the previous versions, in part due to increased temporal and spatial data coverage. In addition, Pavón-Carrasco et al. (2014, Earth Planet. Sci. Lett. 388, 98-109) developed another time-dependent spherical harmonic model of the Holocene geomagnetic field, based solely on archeomagnetic and volcanic paleomagnetic data from the same underlying paleomagnetic database as the Korte et al. models, but extending to 14 ka. With the new models as input, trajectory-traced estimates of effective vertical cutoff rigidity (RC - the standard method for ordering cosmic ray data) yield significantly different time-integrated scaling predictions when compared to each other and to results using the earlier models. In addition, predictions of each new model using RC are tested empirically using recently published production rate calibration data for both 10Be and 3He, and compared to predictions using corresponding time-varying geocentric dipolar RC formulations and a static geocentric axial dipole (GAD) model. Results for the few calibration sites from geomagnetically sensitive regions suggest that the

  2. Predicting enhanced mass flow rates in gas microchannels using nonkinetic models.

    PubMed

    Dadzie, S Kokou; Brenner, Howard

    2012-09-01

    Different nonkinetic approaches are adopted in this paper towards theoretically predicting the experimentally observed phenomenon of enhanced mass flow rates accompanying pressure-driven rarefied gas flows through microchannels. Our analysis utilizes a full set of mechanically consistent volume-diffusion hydrodynamic equations, allowing complete, closed-form, analytical solutions to this class of problems. As an integral part of the analysis, existing experimental data pertaining to the subatmospheric pressure dependence of viscosity were analyzed. The several nonkinetic approaches investigated were (1) pressure-dependent viscosity exponent model, (2) slip-velocity models, and (3) volume diffusion model. We explored the ability to predict the gas's mass flow rate over the full range of Knudsen numbers, including furnishing a physically sound interpretation of the well-known Knudsen minimum observed in the mass flow rate. Matching of a pressure-dependent viscosity model, one that follows the standard temperature-viscosity power law and its supporting single momentum diffusion mechanism, did not allow an accurate interpretation of the data. Rather, matching of this model with the flow rate was found to mismatch the experimental pressure dependence of the viscosity. An additional transport mechanism model, one based on volume diffusion, offered a comprehensive understanding of the Knudsen minimum, while also resulting in excellent agreement with experimental data well into the transition regime (up to a Knudsen number of 5).

  3. The influence of model structure on groundwater recharge rates in climate-change impact studies

    NASA Astrophysics Data System (ADS)

    Moeck, Christian; Brunner, Philip; Hunkeler, Daniel

    2016-08-01

    Numerous modeling approaches are available to provide insight into the relationship between climate change and groundwater recharge. However, several aspects of how hydrological model choice and structure affect recharge predictions have not been fully explored, unlike the well-established variability of climate model chains—combination of global climate models (GCM) and regional climate models (RCM). Furthermore, the influence on predictions related to subsoil parameterization and the variability of observation data employed during calibration remain unclear. This paper compares and quantifies these different sources of uncertainty in a systematic way. The described numerical experiment is based on a heterogeneous two-dimensional reference model. Four simpler models were calibrated against the output of the reference model, and recharge predictions of both reference and simpler models were compared to evaluate the effect of model structure on climate-change impact studies. The results highlight that model simplification leads to different recharge rates under climate change, especially under extreme conditions, although the different models performed similarly under historical climate conditions. Extreme weather conditions lead to model bias in the predictions and therefore must be considered. Consequently, the chosen calibration strategy is important and, if possible, the calibration data set should include climatic extremes in order to minimise model bias introduced by the calibration. The results strongly suggest that ensembles of climate projections should be coupled with ensembles of hydrogeological models to produce credible predictions of future recharge and with the associated uncertainties.

  4. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  6. Estimating glomerular filtration rate in a population-based study

    PubMed Central

    Shankar, Anoop; Lee, Kristine E; Klein, Barbara EK; Muntner, Paul; Brazy, Peter C; Cruickshanks, Karen J; Nieto, F Javier; Danforth, Lorraine G; Schubert, Carla R; Tsai, Michael Y; Klein, Ronald

    2010-01-01

    Background: Glomerular filtration rate (GFR)-estimating equations are used to determine the prevalence of chronic kidney disease (CKD) in population-based studies. However, it has been suggested that since the commonly used GFR equations were originally developed from samples of patients with CKD, they underestimate GFR in healthy populations. Few studies have made side-by-side comparisons of the effect of various estimating equations on the prevalence estimates of CKD in a general population sample. Patients and methods: We examined a population-based sample comprising adults from Wisconsin (age, 43–86 years; 56% women). We compared the prevalence of CKD, defined as a GFR of <60 mL/min per 1.73 m2 estimated from serum creatinine, by applying various commonly used equations including the modification of diet in renal disease (MDRD) equation, Cockcroft–Gault (CG) equation, and the Mayo equation. We compared the performance of these equations against the CKD definition of cystatin C >1.23 mg/L. Results: We found that the prevalence of CKD varied widely among different GFR equations. Although the prevalence of CKD was 17.2% with the MDRD equation and 16.5% with the CG equation, it was only 4.8% with the Mayo equation. Only 24% of those identified to have GFR in the range of 50–59 mL/min per 1.73 m2 by the MDRD equation had cystatin C levels >1.23 mg/L; their mean cystatin C level was only 1 mg/L (interquartile range, 0.9–1.2 mg/L). This finding was similar for the CG equation. For the Mayo equation, 62.8% of those patients with GFR in the range of 50–59 mL/min per 1.73 m2 had cystatin C levels >1.23 mg/L; their mean cystatin C level was 1.3 mg/L (interquartile range, 1.2–1.5 mg/L). The MDRD and CG equations showed a false-positive rate of >10%. Discussion: We found that the MDRD and CG equations, the current standard to estimate GFR, appeared to overestimate the prevalence of CKD in a general population sample. PMID:20730018
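
    For reference, compact implementations of two of the compared equations, using their widely published coefficients (verify against the original papers before any clinical use):

```python
# Two common creatinine-based GFR estimators, as sketches of the equations
# compared above. Coefficients are the widely published ones.

def gfr_mdrd(scr_mg_dl, age, female, black):
    """4-variable MDRD eGFR in mL/min per 1.73 m^2."""
    gfr = 186.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.210
    return gfr

def crcl_cockcroft_gault(scr_mg_dl, age, weight_kg, female):
    """Cockcroft-Gault creatinine clearance in mL/min (not BSA-adjusted)."""
    crcl = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

print(gfr_mdrd(1.1, 65, female=True, black=False))       # ~53 mL/min/1.73 m^2
print(crcl_cockcroft_gault(1.1, 65, 70.0, female=True))  # ~56 mL/min
```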

  7. Numerical solution of the Penna model of biological aging with age-modified mutation rate

    NASA Astrophysics Data System (ADS)

    Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.

    2009-06-01

    In this paper we present results of numerical calculation of the Penna bit-string model of biological aging, modified for the case of an age-dependent mutation rate m(a), where a is the parent's age. The mutation rate m(a) is the probability per bit of an extra bad mutation introduced in the offspring's inherited genome. We assume that m(a) increases with age a. Compared with the reference case of the standard Penna model based on a constant mutation rate m, the dynamics of the population growth shows distinct changes in the age distribution of the population. Here we concentrate on mortality q(a), the fraction of items eliminated from the population in going from age a to a+1 during the simulated transition from time t to time t+1. The experimentally observed q(a) dependence essentially follows the Gompertz exponential law for a above the minimum reproduction age. Deviation from the Gompertz law is, however, observed for the very old items, close to the maximal age. This effect may also result from an increase in mutation rate m with age a, as discussed in this paper. The numerical calculations are based on an analytical solution of the Penna model, presented in a series of papers by Coe [J. B. Coe, Y. Mao, and M. E. Cates, Phys. Rev. Lett. 89, 288103 (2002)]. Results of the numerical calculations are supported by data obtained from computer simulation based on the solution by Coe.
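
    A minimal Penna-style simulation with an age-dependent offspring mutation rate m(a); the threshold, reproduction age, Verhulst capacity and the form of m(a) are illustrative choices, not the paper's.

```python
import numpy as np

# Penna bit-string aging sketch: an individual dies when the number of
# active bad bits up to its age reaches T, or by Verhulst crowding; past
# the minimum reproduction age R it produces offspring whose genomes get
# a Poisson(m(a)) number of extra bad mutations, with m(a) rising in the
# parent's age a.
rng = np.random.default_rng(1)
BITS, T, R, N_MAX = 32, 3, 8, 5000

def m_of_a(a):                        # expected bad mutations per offspring
    return 1.0 + 0.1 * a              # illustrative increasing form

genomes = [np.zeros(BITS, bool) for _ in range(1000)]
ages = [0] * 1000
for step in range(150):
    next_g, next_a = [], []
    verhulst = len(genomes) / N_MAX
    for g, a in zip(genomes, ages):
        a += 1
        if a >= BITS or g[:a].sum() >= T or rng.random() < verhulst:
            continue                  # genetic or crowding (Verhulst) death
        if a >= R:                    # reproduction past minimum age
            child = g.copy()
            for pos in rng.integers(0, BITS, rng.poisson(m_of_a(a))):
                child[pos] = True     # deleterious mutations only
            next_g.append(child); next_a.append(0)
        next_g.append(g); next_a.append(a)
    genomes, ages = next_g, next_a

ages = np.array(ages)
print("population:", len(ages), " mean age: %.1f" % ages.mean())
```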

  8. Mathematical Modeling of Radiocesium Migration and Air Dose Rate Changes in Eastern Fukushima Prefecture

    NASA Astrophysics Data System (ADS)

    Kitamura, A.; Sakuma, K.; Kurikami, H.; Malins, A.; Okumura, M.; Itakura, M.; Yamada, S.; Machida, M.

    2015-12-01

    Radioactive cesium deposited over Fukushima Prefecture after the accident at the Fukushima Daiichi nuclear power plant is one of the major health physics concerns today. It migrates primarily through soil erosion and sediment transport in surface water during heavy rainfall and flooding. In order to predict the future distribution of radioactive cesium and the resulting air dose rate at any location in Fukushima, we have integrated a number of mathematical models covering different temporal and spatial scales. In this presentation we report our overall prediction scheme, from sediment and radioactive cesium movement to the resulting long-term air dose rate changes. Specifically, we present simulation results of sediment movement and radioactive cesium migration using semi-empirical and physics-based watershed models, and of sediment and radioactive cesium behavior in a dam reservoir using one- and two-dimensional river simulation models. The models' results are compared with ongoing field monitoring.

  9. Estimating taxonomic diversity, extinction rates, and speciation rates from fossil data using capture-recapture models

    USGS Publications Warehouse

    Nichols, J.D.; Pollock, K.H.

    1983-01-01

    Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models which should be useful in paleobiological analyses and discuss the assumptions which underlie them. They illustrate these models with examples and discuss aspects of study design.
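
    As the simplest member of this model family, the two-sample Lincoln-Petersen estimator (in Chapman's bias-corrected form) illustrates the idea; the authors' models generalize it to unknown, time-varying encounter probabilities. The counts below are hypothetical.

```python
# Two-sample capture-recapture estimate of total richness: taxa "marked"
# by appearing in the first sample are "recaptured" if seen again in the
# second, which identifies the encounter probability.
def lincoln_petersen(n1, n2, m2):
    """n1, n2: taxa found in samples 1 and 2; m2: taxa found in both."""
    # Chapman's bias-corrected form
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical example: 40 taxa in one horizon, 35 in another, 20 shared
print(f"estimated true richness: {lincoln_petersen(40, 35, 20):.1f}")
```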

  10. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    PubMed

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
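
    A structural sketch of a Hammerstein-Wiener predictor: a static input nonlinearity, a linear filter (capturing the hysteresis and memory in viewer responses), and a static output nonlinearity. The functions and coefficients below are placeholders, not the fitted model from the paper.

```python
import numpy as np

# Hammerstein-Wiener structure: u = f_in(input); x = LTI filter of u;
# output = f_out(x). The first-order IIR filter supplies the memory.
def hammerstein_wiener(signal, b, a, f_in, f_out):
    u = f_in(signal)                       # static input nonlinearity
    x = np.zeros_like(u)
    for t in range(len(u)):                # IIR filter: memory/hysteresis
        x[t] = b * u[t] + a * (x[t - 1] if t > 0 else 0.0)
    return f_out(x)                        # static output nonlinearity

t = np.arange(300)
quality = 60 + 20 * np.sign(np.sin(2 * np.pi * t / 100))  # rate switches
tvsq = hammerstein_wiener(
    quality, b=0.05, a=0.95,
    f_in=lambda q: q,                                     # placeholder identity
    f_out=lambda x: 100 / (1 + np.exp(-(x - 60) / 10)),   # saturating map
)
print("predicted TVSQ range: %.1f to %.1f" % (tvsq.min(), tvsq.max()))
```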

  11. Modeling the Time-Varying Subjective Quality of HTTP Video Streams With Rate Adaptations

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W.; Bovik, Alan C.

    2014-05-01

    Newly developed HTTP-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' Quality of Experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer-duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for on-line TVSQ prediction in HTTP based streaming.

  12. A stochastic model for estimation of mutation rates in multiple-replication proliferation processes.

    PubMed

    Xiong, Xiaoping; Boyett, James M; Webster, Robert G; Stech, Juergen

    2009-08-01

    In this paper we propose a stochastic model based on the branching process for estimation and comparison of the mutation rates in proliferation processes of cells or microbes. We assume in this model that cells or microbes (the elements of a population) are reproduced by generations; the model is thus most applicable to situations in which new elements in a population are produced by older elements from the previous generation rather than by newly created elements of the same current generation. Cells and bacteria proliferate by binary replication, whereas RNA viruses proliferate by multiple replication. The model is formulated in terms of multiple replication, which includes binary replication as a special case. We propose statistical procedures for estimation and comparison of the mutation rates from data of multiple cultures with divergent culture sizes. The mutation rate is defined as the probability of mutation per replication per genome and can thus be assumed constant over the entire proliferation process. We derive the number of cultures needed when planning experiments to achieve a desired accuracy of estimation or a desired statistical power for comparing the mutation rates of two strains of microbes. We establish the efficiency of the proposed method by demonstrating how the estimation of mutation rates is affected when culture sizes are assumed similar but actually diverge. PMID:18846374
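    As a simpler relative of the branching-process estimator described above, the classical Luria-Delbrück p0 method infers a per-division mutation rate from the fraction of mutant-free parallel cultures; the counts below are hypothetical.

```python
import math

def p0_mutation_rate(mutant_counts, final_population):
    """Classical Luria-Delbrueck p0 estimator (a simpler relative of the
    branching-process method described in the abstract).

    mutant_counts: mutants observed in each parallel culture
    final_population: final cell count per culture (assumed equal)
    """
    p0 = sum(1 for c in mutant_counts if c == 0) / len(mutant_counts)
    if p0 == 0:
        raise ValueError("p0 method needs at least one mutant-free culture")
    m = -math.log(p0)                 # expected mutations per culture
    return m / final_population      # mutation probability per cell division

counts = [0, 2, 0, 1, 0, 0, 5, 0, 0, 3, 0, 1]   # hypothetical cultures
print(f"estimated rate: {p0_mutation_rate(counts, 2e8):.2e} per division")
```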

  13. The Use of Leaf Functional Traits for Modeling the Timing and Rate of Canopy Development

    NASA Astrophysics Data System (ADS)

    Savoy, P.; Mackay, D. S.

    2015-12-01

    Leaves vary in their habit, with some being short-lived and possessing high intrinsic photosynthetic rates and others being long-lived with lower photosynthetic capacity. Longer-lived leaves thus tend to cost more to produce and are able to assimilate carbon over a longer period of time. The timing and seasonality of forest canopies reflect a cost-benefit strategy for exploiting favorable environmental conditions and avoiding unfavorable ones. Because of the selective pressure on plants to gather a return on leaf investment in relation to their leaf habit, we propose that there is a relationship between plant functional traits and the timing and rate of canopy development. A recent study showed that errors in predicted canopy dynamics could be reduced via a single parameter (τ) which modified the timing and rate of canopy development (Savoy & Mackay 2015). If τ is related to underlying mechanisms of plant physiology, then it should vary predictably. To test this we will first examine the relationship between τ and observable biophysical variables which vary in ecologically meaningful ways. We will then develop a model based on leaf traits which regulates the timing and rate at which vegetation reaches peak rates of assimilation. The model will then be tested at eddy covariance sites that span a range of environmental conditions. Preliminary results demonstrate a strong relationship (R2 = 0.58) between estimated values of τ and leaf carbon-to-nitrogen ratio, which is important for representing the costs of leaf construction and nitrogen investment in the photosynthetic machinery of leaves. By developing a canopy seasonality model based on plant functional traits and rooted in the framework of leaf economics, it is possible to build a more flexible and generalized model. Such a model will be more adept at making predictions under novel environmental conditions than purely correlative empirical models.

  14. Multi-Target Tracking Based on Multi-Bernoulli Filter with Amplitude for Unknown Clutter Rate.

    PubMed

    Yuan, Changshun; Wang, Jun; Lei, Peng; Bi, Yanxian; Sun, Zhongsheng

    2015-01-01

    Knowledge of the clutter rate is of critical importance in multi-target Bayesian tracking. However, estimating the clutter rate is a difficult problem in practice. In this paper, an improved multi-Bernoulli filter based on random finite sets for multi-target Bayesian tracking accommodating non-linear dynamic and measurement models, as well as an unknown clutter rate, is proposed for radar sensors. The proposed filter incorporates the amplitude information into the state and measurement spaces to improve discrimination between actual targets and clutter, while adaptively generating the new-born object random finite sets using the measurements to eliminate reliance on prior random finite sets. A sequential Monte-Carlo implementation of the proposed filter is presented, and simulations are used to demonstrate the proposed filter's improvements in estimation accuracy of the target number and corresponding multi-target states, as well as the clutter rate. PMID:26690148
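    One way amplitude information sharpens the target/clutter distinction is through a likelihood ratio of the measured amplitude under the two hypotheses. The sketch below assumes Rayleigh-distributed clutter and a Swerling-1-like fluctuating target; it is a generic illustration, not the exact amplitude models of the paper.

```python
import math

def amplitude_likelihood_ratio(a, snr_db):
    """Likelihood ratio of a measurement amplitude under target vs. clutter.

    Generic sketch: clutter amplitude ~ Rayleigh with unit power; target
    amplitude ~ Rayleigh with power 1 + SNR (Swerling-1-like). Illustrates
    how amplitude reweights measurement-to-track association decisions.
    """
    snr = 10 ** (snr_db / 10)
    p_clutter = a * math.exp(-a * a / 2)               # Rayleigh, unit power
    s2 = 1 + snr
    p_target = (a / s2) * math.exp(-a * a / (2 * s2))  # Rayleigh, power 1+SNR
    return p_target / p_clutter

for a in (0.5, 1.5, 3.0):
    print(f"amplitude {a}: LR = {amplitude_likelihood_ratio(a, 10):.2f}")
```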

  16. Multi-Target Tracking Based on Multi-Bernoulli Filter with Amplitude for Unknown Clutter Rate

    PubMed Central

    Yuan, Changshun; Wang, Jun; Lei, Peng; Bi, Yanxian; Sun, Zhongsheng

    2015-01-01

    Knowledge of the clutter rate is of critical importance in multi-target Bayesian tracking. However, estimating the clutter rate is a difficult problem in practice. In this paper, an improved multi-Bernoulli filter based on random finite sets for multi-target Bayesian tracking accommodating non-linear dynamic and measurement models, as well as an unknown clutter rate, is proposed for radar sensors. The proposed filter incorporates the amplitude information into the state and measurement spaces to improve discrimination between actual targets and clutter, while adaptively generating the new-born object random finite sets using the measurements to eliminate reliance on prior random finite sets. A sequential Monte-Carlo implementation of the proposed filter is presented, and simulations are used to demonstrate the proposed filter's improvements in estimation accuracy of the target number and corresponding multi-target states, as well as the clutter rate. PMID:26690148

  17. Sketch-based geologic modeling

    NASA Astrophysics Data System (ADS)

    Rood, M. P.; Jackson, M.; Hampson, G.; Brazil, E. V.; de Carvalho, F.; Coda, C.; Sousa, M. C.; Zhang, Z.; Geiger, S.

    2015-12-01

    Two-dimensional (2D) maps and cross-sections, and 3D conceptual models, are fundamental tools for understanding, communicating and modeling geology. Yet geologists lack dedicated and intuitive tools that allow rapid creation of such figures and models. Standard drawing packages produce only 2D figures that are not suitable for quantitative analysis. Geologic modeling packages can produce 3D models and are widely used in the groundwater and petroleum communities, but are often slow and non-intuitive to use, requiring the creation of a grid early in the modeling workflow and the use of geostatistical methods to populate the grid blocks with geologic information. We present an alternative approach to rapidly create figures and models using sketch-based interface and modelling (SBIM). We leverage methods widely adopted in other industries to prototype complex geometries and designs. The SBIM tool contains built-in geologic rules that constrain how sketched lines and surfaces interact. These rules are based on the logic of superposition and cross-cutting relationships that follow from rock-forming processes, including deposition, deformation, intrusion and modification by diagenesis or metamorphism. The approach allows rapid creation of multiple, geologically realistic, figures and models in 2D and 3D using a simple, intuitive interface. The user can sketch in plan- or cross-section view. Geologic rules are used to extrapolate sketched lines in real time to create 3D surfaces. Quantitative analysis can be carried out directly on the models. Alternatively, they can be output as simple figures or imported directly into other modeling tools. The software runs on a tablet PC and can be used in a variety of settings including the office, classroom and field. The speed and ease of use of SBIM enables multiple interpretations to be developed from limited data, uncertainty to be readily appraised, and figures and models to be rapidly updated to incorporate new data or concepts.

  18. A Rate-Dependent Viscoelastic Damage Model for Simulation of Solid Propellant Impacts

    NASA Astrophysics Data System (ADS)

    Matheson, Erik

    2005-07-01

    A viscoelastic deformation and damage model (VED) for solid rocket propellants has been developed based on an extensive set of mechanical properties experiments. Monotonic tensile tests performed at several strain rates showed rate and dilatation effects. During cyclic tensile tests, hysteresis and a rate-dependent shear modulus were observed. A tensile relaxation experiment showed significant stress decay in the sample. Taylor impact tests exhibited large dilatations without significant crack growth. Extensive modifications to a viscoelastic-viscoplastic model (VEP) necessary to capture these experimental results have led to development of the VED model. In particular, plasticity has been eliminated in the model, and the multiple Maxwell viscoelastic formulation has been replaced with a time-dependent shear modulus. Furthermore, the loading and unloading behaviors of the material are modeled independently. To characterize the damage and dilatation behavior, the Tensile Damage and Distention (TDD) model is run in conjunction with VED. The VED model is connected to a single-cell driver as well as to the CTH shock physics code. Simulations show good agreement with the tensile tests and with some aspects of the Taylor impact tests.
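    The flavor of a time-dependent shear modulus can be illustrated with a generic Prony-series relaxation function and a recursive hereditary-integral stress update; the parameters below are assumed for illustration, and the scheme is a textbook one, not the VED model itself.

```python
import numpy as np

# Illustrative relaxation data (assumed, not propellant-specific):
# G(t) = G_INF + sum_i G_I[i] * exp(-t / TAU_I[i])
G_INF = 0.5e6
G_I = np.array([2.0e6, 1.0e6])
TAU_I = np.array([0.01, 1.0])

def shear_stress(strain, dt):
    """Hereditary-integral stress for a time-dependent shear modulus,
    updated recursively per Prony term (a generic scheme, not VED)."""
    h = np.zeros_like(TAU_I)             # internal state per Prony term
    stress = np.zeros_like(strain)
    decay = np.exp(-dt / TAU_I)
    for n in range(1, len(strain)):
        d_eps = strain[n] - strain[n - 1]
        # exact update assuming linearly varying strain over the step
        h = decay * h + G_I * (1 - decay) * TAU_I / dt * d_eps
        stress[n] = G_INF * strain[n] + h.sum()
    return stress

t = np.linspace(0.0, 2.0, 2001)
eps = np.minimum(t / 0.5, 1.0) * 0.05    # ramp to 5% strain, then hold
sigma = shear_stress(eps, t[1] - t[0])
print(f"peak stress {sigma.max():.3e} Pa, relaxed {sigma[-1]:.3e} Pa")
```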

  19. A Rate-Dependent Viscoelastic Damage Model for Simulation of Solid Propellant Impacts

    NASA Astrophysics Data System (ADS)

    Matheson, E. R.; Nguyen, D. Q.

    2006-07-01

    A viscoelastic deformation and damage model (VED) for solid rocket propellants has been developed based on an extensive set of mechanical properties experiments. Monotonic tensile tests performed at several strain rates showed rate and dilatation effects. During cyclic tensile tests, hysteresis and a rate-dependent shear modulus were observed. A tensile relaxation experiment showed significant stress decay in the sample. Taylor impact tests exhibited large dilatations without significant crack growth. Extensive modifications to a viscoelastic-viscoplastic model (VEP) necessary to capture these experimental results have led to development of the VED model. In particular, plasticity has been eliminated in the model, and the multiple Maxwell viscoelastic formulation has been replaced with a time-dependent shear modulus. Furthermore, the loading and unloading behaviors of the material are modeled independently. To characterize the damage and dilatation behavior, the Tensile Damage and Distention (TDD) model is run in conjunction with VED. The VED model is connected to a single-cell driver as well as to the CTH shock physics code. Simulations show good agreement with the tensile tests and with some aspects of the Taylor impact tests.

  20. Metabolically-Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates (External Review Draft)

    EPA Science Inventory

    EPA has released a draft report entitled, Metabolically-Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates, for independent external peer review and public comment. NCEA published the Exposure Factors Handbook in 1997. This comprehens...

  1. Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems.

    PubMed

    Huang, Shih-Chia; Chen, Bo-Hao

    2013-12-01

    Automated motion detection, which segments moving objects from video streams, is the key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffer from network congestion or unstable bandwidth. Evidence of these problems abounds in publications on wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process by which the bit-rate is allocated to match the available network bandwidth is required; this process is accomplished by the rate control scheme. This paper presents a new motion detection approach based on the cerebellar model articulation controller (CMAC) artificial neural network to completely and accurately detect moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To ensure that the properties of variable bit-rate video streams are accommodated, the proposed PBG module effectively produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, completely and accurately detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach is capable of performing with higher efficacy when compared with the results produced by other state-of-the-art approaches in variable bit-rate video streams over real-world limited bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76
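    As a generic baseline for the probabilistic background generation idea (not the paper's CMAC-based method), a per-pixel running-Gaussian background model can be sketched as follows.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running-Gaussian background model (a generic baseline,
    not the paper's CMAC-based approach)."""

    def __init__(self, first_frame, alpha=0.02, k=2.5):
        self.mu = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 25.0)  # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mu) ** 2
        fg = d2 > (self.k ** 2) * self.var           # foreground mask
        bg = ~fg
        # update statistics only where the pixel looks like background
        self.mu[bg] += self.alpha * (frame - self.mu)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return fg

# Usage sketch with synthetic frames
rng = np.random.default_rng(0)
frames = rng.normal(100, 3, size=(50, 48, 64))
frames[30:, 10:20, 10:20] += 40                      # a "moving object"
model = RunningGaussianBackground(frames[0])
masks = [model.apply(f) for f in frames[1:]]
print("foreground pixels in last frame:", int(masks[-1].sum()))
```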

  3. A method to estimate emission rates from industrial stacks based on neural networks.

    PubMed

    Olcese, Luis E; Toselli, Beatriz M

    2004-11-01

    This paper presents a technique based on artificial neural networks (ANN) to estimate pollutant emission rates from industrial stacks on the basis of pollutant concentrations measured at ground level. The ANN is trained on data generated by the ISCST3 model, widely accepted for evaluating the dispersion of primary pollutants as part of an environmental impact study. Simulations using theoretical values and comparisons with field data were performed, and in both cases the technique predicted emission rates well. The application of this technique would allow the local environmental authority to control emissions from industrial plants without the need to perform direct measurements inside the plant.
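    A hypothetical sketch of the inverse mapping is shown below using scikit-learn's MLPRegressor; the toy forward model standing in for ISCST3 and all numeric values are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Toy forward model standing in for ISCST3: ground concentrations at 5
# receptors scale linearly with emission rate Q, with wind-dependent
# dilution (a hypothetical surrogate, not the real dispersion model).
def toy_dispersion(Q, wind):
    geometry = np.array([1.0, 0.6, 0.4, 0.25, 0.15])
    return Q[:, None] * geometry[None, :] / wind[:, None]

Q = rng.uniform(1, 100, 2000)                  # emission rates (g/s)
wind = rng.uniform(1, 10, 2000)                # wind speed (m/s)
C = toy_dispersion(Q, wind) * rng.normal(1, 0.05, (2000, 5))

# Inverse problem: ground concentrations + wind -> emission rate
X = np.column_stack([C, wind])
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20, 10),
                                 max_iter=3000, random_state=0))
net.fit(X[:1500], Q[:1500])
print("test R^2:", round(net.score(X[1500:], Q[1500:]), 3))
```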

  4. Moving base simulation evaluation of translational rate command systems for STOVL aircraft in hover

    NASA Technical Reports Server (NTRS)

    Franklin, James A.; Stortz, Michael W.

    1996-01-01

    Using a generalized simulation model, a moving-base simulation of a lift-fan short takeoff/vertical landing fighter aircraft has been conducted on the Vertical Motion Simulator at Ames Research Center. Objectives of the experiment were to determine the influence of system bandwidth and phase delay on flying qualities for translational rate command and vertical velocity command systems. Assessments were made for precision hover control and for landings aboard an LPH type amphibious assault ship in the presence of winds and rough seas. Results obtained define the boundaries between satisfactory and adequate flying qualities for these design features for longitudinal and lateral translational rate command and for vertical velocity command.

  5. Fast entropy-based CABAC rate estimation for mode decision in HEVC.

    PubMed

    Chen, Wei-Gang; Wang, Xun

    2016-01-01

    High efficiency video coding (HEVC) seeks the best code tree configuration, the best prediction unit division, and the best prediction mode by evaluating the rate-distortion functional recursively, using a "try all and select the best" strategy. Furthermore, HEVC supports only context adaptive binary arithmetic coding (CABAC) as its entropy coder, which has the disadvantage of being highly sequential and having strong data dependencies. The development of a fast rate estimation algorithm for CABAC-based coding therefore has great practical significance for mode decision in HEVC. There are three elementary steps in the CABAC encoding process: binarization, context modeling, and binary arithmetic coding. Typical approaches to fast CABAC rate estimation simplify or eliminate the last two steps but leave the binarization step unchanged. To maximize the reduction in computational complexity, we propose a fast entropy-based CABAC rate estimator in this paper. It eliminates not only the modeling and coding steps but also the binarization step. Experimental results demonstrate that the proposed estimator is able to reduce the computational complexity of mode decision in HEVC by 9-23% with negligible PSNR loss and BD-rate increment, and therefore exhibits applicability to practical HEVC encoder implementations.
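    The core idea of entropy-based rate estimation is that a bin coded in a context with probability p costs about -log2(p) bits, so the rate can be accumulated from a probability lookup instead of running the arithmetic coder. The probability table and context states below are hypothetical.

```python
import math

def estimate_bits(bins, p_lps_table, states):
    """Entropy-based CABAC rate estimate: each bin coded in a context with
    LPS probability p costs about -log2(p) or -log2(1 - p) bits, so the
    rate is accumulated from a lookup table rather than by arithmetic
    coding. Table values and context states here are hypothetical.
    """
    bits = 0.0
    for bin_val, ctx in bins:
        p_lps = p_lps_table[states[ctx]]
        p = p_lps if bin_val == 1 else 1.0 - p_lps  # assume LPS coded as 1
        bits += -math.log2(p)
    return bits

# Hypothetical 8-state probability table and two contexts
p_lps_table = [0.5, 0.42, 0.35, 0.29, 0.24, 0.20, 0.17, 0.14]
states = {0: 2, 1: 5}
bins = [(0, 0), (0, 0), (1, 0), (0, 1), (1, 1), (0, 1)]
print(f"estimated rate: {estimate_bits(bins, p_lps_table, states):.2f} bits")
```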

  7. MODELING THE RATE-CONTROLLED SORPTION OF HEXAVALENT CHROMIUM.

    USGS Publications Warehouse

    Grove, D.B.; Stollenwerk, K.G.

    1985-01-01

    Sorption of chromium(VI) on the iron-oxide- and hydroxide-coated surfaces of alluvial material was numerically simulated with rate-controlled reactions. Reaction kinetics and diffusional processes, in the form of film, pore, and particle diffusion, were simulated and compared with experimental results. The use of empirically calculated rate coefficients for diffusion through the reacting surface was found to reproduce the experimental data; pore or particle diffusion is believed to be a possible rate-controlling mechanism. The use of rate equations to predict conservative transport and rate- and local-equilibrium-controlled reactions was shown to be feasible.
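    A minimal rate-controlled sorption calculation couples a first-order kinetic driving term to a closed batch mass balance; the coefficients below are illustrative, not the paper's fitted values.

```python
# Illustrative parameters (not the paper's fitted values)
K_D = 0.8     # linear equilibrium distribution coefficient (L/kg)
ALPHA = 0.05  # first-order rate coefficient (1/h), lumping film/pore diffusion
RHO_B = 1.5   # solid-to-solution ratio (kg/L) for the batch mass balance

def simulate_batch(c0, t_end, dt=0.01):
    """Rate-limited sorption in a closed batch:
       dS/dt = ALPHA * (K_D * C - S),  C = c0 - RHO_B * S (mass balance)."""
    c, s, out = c0, 0.0, []
    for i in range(int(t_end / dt)):
        s += ALPHA * (K_D * c - s) * dt   # explicit Euler step
        c = c0 - RHO_B * s
        out.append((i * dt, c, s))
    return out

for t, c, s in simulate_batch(c0=1.0, t_end=100)[::2000]:
    print(f"t={t:6.1f} h  C={c:.4f} mg/L  S={s:.4f} mg/kg")
```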

  8. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    NASA Technical Reports Server (NTRS)

    McKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.

  9. Sexually violent predators: toward reasonable estimates of recidivism base rates.

    PubMed

    Neller, Daniel J; Petris, Giovanni

    2013-01-01

    The sexual recidivism rate of sex offenders is a controversial issue. Perhaps as controversial is the sexual recidivism rate of the select group of sex offenders who are examined pursuant to sexually violent predator (SVP) statutes. At present, reliable estimates of SVP recidivism are unavailable. We propose that reasonable estimates of SVP recidivism can be reached by considering three available pieces of data: (i) a likely recidivism rate of the general population of sex offenders; (ii) procedures typically followed by jurisdictions that civilly commit sex offenders; and (iii) the classification accuracy of those procedures. Although sexual recidivism rates vary across jurisdictions, the results of our analyses suggest that sex offenders referred for examination pursuant to SVP statutes recidivate at substantially higher rates than typical sex offenders. Our results further suggest that sex offenders recommended for commitment as SVPs recidivate at even greater rates than SVP respondents who are not recommended for commitment. We discuss practice and policy implications of these findings.
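    The logic of combining a base rate with classification accuracy is Bayes' theorem; the sketch below computes the expected recidivism rate among offenders recommended for commitment (a positive predictive value) from hypothetical inputs.

```python
def committed_recidivism_rate(base_rate, sensitivity, specificity):
    """Bayes' theorem applied to SVP screening: given a base rate of
    recidivism among referred offenders and the classification accuracy
    of the commitment procedure, return the expected recidivism rate
    among those recommended for commitment (the positive predictive
    value). The inputs below are hypothetical, not the paper's estimates."""
    tp = sensitivity * base_rate              # true positives
    fp = (1 - specificity) * (1 - base_rate)  # false positives
    return tp / (tp + fp)

# Hypothetical: 40% base rate, sensitivity 0.75, specificity 0.70
ppv = committed_recidivism_rate(0.40, 0.75, 0.70)
print(f"recidivism among those recommended for commitment: {ppv:.0%}")
```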

  10. A growth inhibitory model with SOx influenced effective growth rate for estimation of algal biomass concentration under flue gas atmosphere

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A theoretical model for the prediction of biomass concentration under real flue gas emissions has been developed. The model considers the CO2 mass transfer rate, the critical SOx concentration, and its role in the pH-based interconversion of bicarbonate in model building. The calibration and subsequent v...

  11. Motion-compensated coding and frame rate up-conversion: models and analysis.

    PubMed

    Dar, Yehuda; Bruckstein, Alfred M

    2015-07-01

    Block-based motion estimation (ME) and motion compensation (MC) techniques are widely used in modern video processing algorithms and compression systems. The great variety of video applications and devices results in diverse compression specifications, such as frame rates and bit rates. In this paper, we study the effect of frame rate and compression bit rate on block-based ME and MC as commonly utilized in inter-frame coding and frame rate up-conversion (FRUC). This joint examination yields a theoretical foundation for comparing MC procedures in coding and FRUC. First, the video signal is locally modeled as a noisy translational motion of an image. Then, we theoretically model the motion-compensated prediction of available and absent frames, as in coding and FRUC applications, respectively. The theoretical MC-prediction error is studied further and its autocorrelation function is calculated, yielding useful separable simplifications for the coding application. We argue that a linear relation exists between the variance of the MC-prediction error and the temporal distance. While the relevant distance in MC coding is between the predicted and reference frames, MC-FRUC is affected by the distance between the frames available for interpolation. We compare our estimates with experimental results and show that the theory explains the empirical behavior qualitatively. We then use the proposed models to analyze a system for improving video coding at low bit rates using spatio-temporal scaling. Although this concept is employed in practice in various forms, it has so far lacked a theoretical justification. Here we harness the proposed MC models and present a comprehensive analysis of the system to qualitatively predict the experimental results.

  12. On a sparse pressure-flow rate condensation of rigid circulation models.

    PubMed

    Schiavazzi, D E; Hsia, T Y; Marsden, A L

    2016-07-26

    Cardiovascular simulation has shown potential value in clinical decision-making, providing a framework to assess changes in hemodynamics produced by physiological and surgical alterations. State-of-the-art predictions are provided by deterministic multiscale numerical approaches coupling 3D finite element Navier-Stokes simulations to lumped parameter circulation models governed by ODEs. Development of next-generation stochastic multiscale models whose parameters can be learned from available clinical data under uncertainty constitutes a research challenge made more difficult by the high computational cost typically associated with the solution of these models. We present a methodology for constructing reduced representations that condense the behavior of 3D anatomical models using outlet pressure-flow polynomial surrogates, based on multiscale model solutions spanning several heart cycles. Relevance vector machine regression is compared with maximum likelihood estimation, showing that sparse pressure/flow rate approximations offer superior performance in producing working surrogate models to be included in lumped circulation networks. Sensitivities of outlet flow rates are also quantified through a Sobol' decomposition of their total variance encoded in the orthogonal polynomial expansion. Finally, we show that augmented lumped parameter models including the proposed surrogates accurately reproduce the response of the multiscale models they were derived from. In particular, results are presented for models of the coronary circulation with closed loop boundary conditions and the abdominal aorta with open loop boundary conditions. PMID:26671219

  13. The preprocessing method of K band range rate measurements and impact on recovering gravity field model

    NASA Astrophysics Data System (ADS)

    Guo, Nannan; Zhou, Xuhua; Wu, Bin

    2016-04-01

    In this paper, based on a simultaneous solution approach, we use Gravity Recovery and Climate Experiment (GRACE) onboard GPS observations and K-band range rate measurements to determine precise orbits of the GRACE satellites and monthly temporal gravity field solutions to degree and order 60. The most significant part of the GRACE mission is the precise K-band ranging (KBR) system, which measures the range between the twin satellites. We analyze the characteristics and the observation errors of the K-band range rate (KBRR) measurements, which are critical to recovering the Earth's gravity field. A nine-point sliding-window least-squares fitting method is put forward to preprocess the KBRR measurements. This method is applied in recovering the gravity field model to study its influence on the recovered Earth gravity field. The results show that: (1) the RMS of the fitting residuals for 2007 improved from 0.235 μm/s to 0.182 μm/s, demonstrating that the method can effectively eliminate outliers in the KBRR observation data; (2) comparisons of the degree variance and the spatial distribution of the time-varying signal demonstrate that our model is comparable with other existing models, i.e., the Centre for Space Research RL05, Jet Propulsion Laboratory RL05, and GeoForschungsZentrum RL05a models, with similar spatial patterns of signals. The results indicate that the preprocessing method in this paper can be effectively applied to the recovery of the Earth gravity field model.
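    A sketch of nine-point sliding-window least-squares screening is given below: a low-order polynomial is fitted in each window and the central residual is tested against a robust scale estimate. The polynomial degree and threshold are assumptions, not the paper's settings.

```python
import numpy as np

def screen_series(series, window=9, degree=2, k=3.0):
    """Flag outliers by fitting a low-order polynomial in a sliding
    nine-point window and testing the central residual against a robust
    (MAD-based) scale estimate. Degree and threshold k are assumptions."""
    y_all = np.asarray(series, dtype=float)
    half = window // 2
    flags = np.zeros(y_all.size, dtype=bool)
    x = np.arange(window)
    for i in range(half, y_all.size - half):
        y = y_all[i - half:i + half + 1]
        resid = y - np.polyval(np.polyfit(x, y, degree), x)
        mad = np.median(np.abs(resid - np.median(resid)))
        if abs(resid[half]) > k * (1.4826 * mad + 1e-15):
            flags[i] = True
    return flags

rng = np.random.default_rng(2)
kbrr = np.sin(np.linspace(0, 6, 400)) + rng.normal(0, 0.05, 400)
kbrr[123] += 1.0                          # injected spike
print("flagged indices:", np.flatnonzero(screen_series(kbrr)))
```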

  14. Estimating mental fatigue based on electroencephalogram and heart rate variability

    NASA Astrophysics Data System (ADS)

    Zhang, Chong; Yu, Xiaolin

    2010-01-01

    The effects of a long-term mental arithmetic task on psychology are investigated by subjective self-reporting measures and an action performance test. Based on electroencephalogram (EEG) and heart rate variability (HRV), the impacts of prolonged cognitive activity on the central nervous system and autonomic nervous system are observed and analyzed. Wavelet packet parameters of the EEG and power spectral indices of HRV are combined to estimate the change of mental fatigue. Wavelet packet parameters of the EEG which change significantly are then extracted as features of brain activity in different mental fatigue states, and a support vector machine (SVM) algorithm is applied to differentiate the two mental fatigue states. The experimental results show that the long-term mental arithmetic task induces mental fatigue. The wavelet packet parameters of the EEG and the power spectral indices of HRV are strongly correlated with mental fatigue. The predominant activity of the subjects' autonomic nervous system turns from parasympathetic to sympathetic activity after the task. Moreover, the slow waves of the EEG increase, while the fast waves of the EEG and the degree of disorder of the brain decrease compared with the pre-task state. The SVM algorithm can effectively differentiate the two mental fatigue states, achieving a maximum classification accuracy of 91%, and could be a promising tool for the evaluation of mental fatigue. Fatigue, especially mental fatigue, is a common phenomenon in modern life and a persistent occupational hazard for professionals. Mental fatigue is usually accompanied by a sense of weariness, reduced alertness, and reduced mental performance, which can lead to accidents, decreased workplace productivity, and harm to health. The evaluation of mental fatigue is therefore important for occupational risk protection, productivity, and occupational health.
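    A compact sketch of the classification step, combining EEG-style and HRV-style features in an SVM with scikit-learn, is shown below on synthetic stand-in features (the real study uses wavelet packet energies and HRV spectral indices).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Synthetic stand-ins for the paper's features: EEG wavelet-packet
# energies and an HRV spectral index (e.g., an LF/HF-like ratio).
n = 80
rested = np.column_stack([rng.normal(1.0, 0.2, n),    # fast-wave energy
                          rng.normal(0.8, 0.2, n),    # slow-wave energy
                          rng.normal(1.2, 0.3, n)])   # LF/HF-like ratio
fatigued = np.column_stack([rng.normal(0.7, 0.2, n),
                            rng.normal(1.1, 0.2, n),
                            rng.normal(1.8, 0.3, n)])
X = np.vstack([rested, fatigued])
y = np.r_[np.zeros(n), np.ones(n)]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```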

  15. Investigation of Atmospheric Recycling Rate from Observation and Model

    NASA Astrophysics Data System (ADS)

    Trammell, J. H.; Jiang, X.; Li, L.; Liang, M.; Zhou, J.; Yung, Y. L.

    2012-12-01

    Precipitation plays an important role in the hydrological cycle on Earth. Based on long-term (1988-2009) meteorological data sets, our observational study (Li et al., 2011) revealed that precipitation increased over the wet areas (i.e., monthly precipitation > 200 mm) and decreased over the dry areas (i.e., monthly precipitation < 50 mm) during the past two decades. The precipitation trend reported in our study is consistent with a "rich-get-richer" mechanism suggested by theoretical studies (Chou and Neelin, 2004; Neelin et al., 2006; Chou et al., 2009). Here, we investigate whether current atmospheric models can quantitatively capture the characteristics of precipitation found in the observational study (Li et al., 2011). Quantitatively simulating the global precipitation trend of the past two decades not only helps predict the variation of precipitation in the future but also provides a numerical basis for better understanding the physics behind the temporal variation of precipitation. The NASA Goddard Institute for Space Studies (GISS) model is used to examine a historic run of global precipitation, temperature, and water vapor, in which the historic greenhouse gases are included. We compare the historic simulation from the GISS model with the actual observations provided by the Special Sensor Microwave Imager (SSM/I) and the Global Precipitation Climatology Project (GPCP) (Li et al., 2011). We also compare the historic run with a control run in which the concentrations of the greenhouse gases are fixed. With the global warming due to the historic greenhouse gases, the historic run shows an increasing trend of precipitation over the wet areas, which is consistent with the observational analysis (Li et al., 2011). In contrast, the control run does not show a significant temporal variation in global temperature and precipitation. The comparison between the historic run and the control run suggests

  16. Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling

    SciTech Connect

    Flach, G. P.

    2015-05-12

    Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define the Saltstone Special Analysis base cases.

  17. The scaling of contact rates with population density for the infectious disease models.

    PubMed

    Hu, Hao; Nigmatulina, Karima; Eckhoff, Philip

    2013-08-01

    Contact rates and patterns among individuals in a geographic area drive the transmission of directly transmitted pathogens, making it essential to understand and estimate contacts for the simulation of disease dynamics. Under the uniform mixing assumption, one of two mechanisms is typically used to describe the relation between contact rate and population density: density-dependent or frequency-dependent. Based on existing evidence of population thresholds and human mobility patterns, we formulated a spatial contact model to describe the appropriate form of transmission, with initial growth at low density and saturation at higher density. We show that the two mechanisms are extreme cases that do not capture real population movement across all scales. Empirical data on human and wildlife diseases indicate that a nonlinear function may work better when looking at the full spectrum of densities. This estimation can be applied to large areas with population mixing in general activities. For crowds with unusually large densities (e.g., transportation terminals, stadiums, or mass gatherings), the lack of an organized social contact structure shifts the physical contacts towards a special case of the spatial contact model: the dynamics of kinetic gas molecule collisions. In this case, an ideal gas model with a van der Waals correction fits existing movement observation data well, and the contact rate between individuals is estimated using kinetic theory. A complete picture of contact rate scaling with population density may help clarify the definition of transmission rates in heterogeneous, large-scale spatial systems.
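    A saturating functional form, approximately density-dependent at low density and frequency-dependent-like at high density, can be sketched as follows; the Michaelis-Menten shape and its constants are assumptions consistent with the qualitative description above, not the paper's fitted model.

```python
import numpy as np

def contact_rate(density, c_max=20.0, rho_half=300.0):
    """Saturating contact rate: approximately linear (density-dependent)
    at low density, approaching c_max (frequency-dependent-like) at high
    density. A Michaelis-Menten form is assumed for illustration:
        c(rho) = c_max * rho / (rho_half + rho)
    """
    rho = np.asarray(density, dtype=float)
    return c_max * rho / (rho_half + rho)

for rho in (10, 100, 1000, 10000):   # persons per km^2
    print(f"density {rho:6d}: {contact_rate(rho):5.1f} contacts/day")
```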

  18. Modeling of atmospheric OH reaction rates using newly developed variable distance weighted zero order connectivity index

    NASA Astrophysics Data System (ADS)

    Markelj, Jernej; Pompe, Matevž

    2016-04-01

    A new variable distance weighted zero order connectivity index was used to develop a structure-activity relationship for modeling the reactivity of the OH radical with alkanes and non-conjugated alkenes in the atmosphere. The proposed model is based on the assumptions that the total reaction rate can be obtained by summing all partial reaction rates and that all reaction sites influence one another. The results suggest that these assumptions are justified. The model was compared with the EPA-implemented model in the studied application domain and showed superior prediction capabilities. Further, the optimized values of the weights used in our model permit some insight into the mechanisms that govern the OH + alkane/alkene reaction. The most important conclusion is that the branching degree of the forming radical seems to play a major role in site-specific reaction rates. Relative qualitative structural interpretation is possible; e.g., an allylic site is suggested to be much more reactive than even a tertiary sp3 carbon. The calculations were performed with the novel modeling software MACI, developed in our lab and now available for research purposes, which implements various variable topological indices that are again gaining recognition for their simplicity, fast calculation, very good correlations, and structural information content.
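    For reference, the plain zero-order connectivity index sums inverse square roots of heavy-atom degrees; the optional per-atom weights in the sketch below only loosely mimic the variable-index idea, and the paper's distance weighting is not reproduced.

```python
import math

def zero_order_chi(degrees, weights=None):
    """Zero-order connectivity index: chi0 = sum_i (delta_i)^(-1/2), where
    delta_i is the heavy-atom degree of atom i. The optional weights mimic
    (very loosely) the variable-index idea of tuning atomic contributions;
    the paper's distance weighting is not reproduced here."""
    weights = weights or [1.0] * len(degrees)
    return sum(w / math.sqrt(d) for w, d in zip(weights, degrees))

# n-butane: C-C-C-C, heavy-atom degrees 1, 2, 2, 1
print("chi0(n-butane)  =", round(zero_order_chi([1, 2, 2, 1]), 3))
# isobutane: one central carbon of degree 3, three terminal of degree 1
print("chi0(isobutane) =", round(zero_order_chi([3, 1, 1, 1]), 3))
```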

  19. A Dynamic Feedback Model for High Repetition Rate LINAC-Driven FELS

    SciTech Connect

    Mellado Munoz, M.; Doolittle, L.; Emma, P.; Huang, G.; Ratti, A.; Serrano, C.; Byrd, J. M.

    2012-05-20

    One of the concepts for the next generation of linac-driven FELs is a CW superconducting linac driving an electron beam with MHz repetition rates. One of the challenges for next-generation FELs is to improve the stability of the x-ray pulses by improving the shot-to-shot stability of the energy, charge, peak current, and timing jitter of the electron beam. A high repetition rate FEL with a CW linac presents an opportunity to use a variety of broadband feedbacks to stabilize the beam parameters. To understand the performance of such a feedback system, we are developing a dynamic model of the machine with a focus on the longitudinal beam properties. The model is being developed as an extension of the LITrack code and includes the dynamics of the beam-cavity interaction, RF feedback, beam-based feedback, and multibunch effects. In this paper, we present a detailed description of this model.

  20. Modeling relative frost weathering rates at geomorphic scales

    NASA Astrophysics Data System (ADS)

    Rempel, Alan W.; Marshall, Jill A.; Roering, Joshua J.

    2016-11-01

    amplitudes, with a broad maximum centered on a mean annual temperature near the threshold required for crack growth. Warmer mean annual temperatures lead to less damage because of the reduction in time during which it is cold enough for cracking, whereas colder mean annual temperatures are accompanied by reduced water supply due to the temperature dependence of permeability. All of the controlling parameters in our model are tied explicitly to physical properties that can in principle be measured independently, which suggests promise for informing geomorphic interpretations of the role of frost weathering in evolving landforms and determining erosion rates.

  1. Genome Informed Trait-Based Models

    NASA Astrophysics Data System (ADS)

    Karaoz, U.; Cheng, Y.; Bouskill, N.; Tang, J.; Beller, H. R.; Brodie, E.; Riley, W. J.

    2013-12-01

    Trait-based approaches are powerful tools for representing microbial communities across both spatial and temporal scales within ecosystem models. Trait-based models (TBMs) represent the diversity of microbial taxa as stochastic assemblages with a distribution of traits constrained by trade-offs between these traits. Such a representation, with its built-in stochasticity, allows the elucidation of the interactions between the microbes and their environment by reducing the complexity of microbial community diversity into a limited number of functional 'guilds' and letting them emerge across spatio-temporal scales. From the biogeochemical/ecosystem modeling perspective, the emergent properties of the microbial community can be directly translated into predictions of biogeochemical reaction rates and microbial biomass. The accuracy of TBMs depends on the identification of key traits of the microbial community members and on the parameterization of these traits. Current approaches to inform TBM parameterization are empirical (i.e., based on literature surveys). Advances in omic technologies (such as genomics, metagenomics, metatranscriptomics, and metaproteomics) pave the way to better initialize models that can be constrained in a generic or site-specific fashion. Here we describe the coupling of metagenomic data to the development of a TBM representing the dynamics of metabolic guilds from an organic carbon stimulated groundwater microbial community. Illumina paired-end metagenomic data were collected from the community as it transitioned successively through electron-accepting conditions (nitrate-, sulfate-, and Fe(III)-reducing), and used to inform estimates of growth rates and the distribution of metabolic pathways (i.e., aerobic and anaerobic oxidation, fermentation) across a spatially resolved TBM. We use this model to evaluate the emergence of different metabolisms and predict rates of biogeochemical processes over time. We compare our results to observational

  2. Flexible regression models for rate differences, risk differences and relative risks.

    PubMed

    Donoghoe, Mark W; Marschner, Ian C

    2015-05-01

    Generalized additive models (GAMs) based on the binomial and Poisson distributions can be used to provide flexible semi-parametric modelling of binary and count outcomes. When used with the canonical link function, these GAMs provide semi-parametrically adjusted odds ratios and rate ratios. For adjustment of other effect measures, including rate differences, risk differences and relative risks, non-canonical link functions must be used together with a constrained parameter space. However, the algorithms used to fit these models typically rely on a form of the iteratively reweighted least squares algorithm, which can be numerically unstable when a constrained non-canonical model is used. We describe an application of a combinatorial EM algorithm to fit identity link Poisson, identity link binomial and log link binomial GAMs in order to estimate semi-parametrically adjusted rate differences, risk differences and relative risks. Using smooth regression functions based on B-splines, the method provides stable convergence to the maximum likelihood estimates, and it ensures that the estimates always remain within the parameter space. It is also straightforward to apply a monotonicity constraint to the smooth regression functions. We illustrate the method using data from a clinical trial in heart attack patients. PMID:25781711

  3. Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison

    SciTech Connect

    Dahlen, Oda; van Erp, Titus S.

    2015-06-21

    Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences compared to previous studies that only consider DNA homopolymers and DNA sequences containing an equal amount of weak AT- and strong GC-base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC-base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that present parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next generation models that have higher predictive power than present ones.
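    The PBD ingredients are an on-site Morse potential per base pair and an anharmonic stacking term. The sketch below uses typical literature-style parameter values, assumed for illustration rather than taken from either of the paper's two parameter sets.

```python
import numpy as np

# Standard PBD functional forms; parameter values below are typical
# literature-style numbers, assumed here for illustration.
D_AT, A_AT = 0.05, 4.2     # Morse depth (eV) and width (1/Angstrom), AT pair
D_GC, A_GC = 0.075, 6.9    # same for the stronger GC pair
K, RHO, ALPHA = 0.025, 2.0, 0.35   # stacking parameters

def morse(y, d, a):
    """On-site base-pair opening potential V(y) = D (exp(-a y) - 1)^2."""
    return d * (np.exp(-a * y) - 1.0) ** 2

def stacking(y_n, y_prev):
    """Anharmonic stacking W = (K/2)(1 + RHO e^{-ALPHA(y_n + y_prev)})(y_n - y_prev)^2."""
    return 0.5 * K * (1 + RHO * np.exp(-ALPHA * (y_n + y_prev))) * (y_n - y_prev) ** 2

def pbd_energy(y, sequence):
    """Total PBD lattice energy for base-pair openings y and an AT/GC string."""
    onsite = 0.0
    for yi, s in zip(y, sequence):
        d, a = (D_AT, A_AT) if s in "AT" else (D_GC, A_GC)
        onsite += morse(yi, d, a)
    stack = sum(stacking(y[i], y[i - 1]) for i in range(1, len(y)))
    return onsite + stack

y = np.zeros(8)
y[3:5] = 1.0                              # a locally opened "bubble"
print(f"E = {pbd_energy(y, 'ATGCGCAT'):.4f} eV")
```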

  4. High Strain Rate Deformation Modeling of a Polymer Matrix Composite. Part 2; Composite Micromechanical Model

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Stouffer, Donald C.

    1998-01-01

    Recently applications have exposed polymer matrix composite materials to very high strain rate loading conditions, requiring an ability to understand and predict the material behavior under these extreme conditions. In this second paper of a two part report, a three-dimensional composite micromechanical model is described which allows for the analysis of the rate dependent, nonlinear deformation response of a polymer matrix composite. Strain rate dependent inelastic constitutive equations utilized to model the deformation response of a polymer are implemented within the micromechanics method. The deformation response of two representative laminated carbon fiber reinforced composite materials with varying fiber orientation has been predicted using the described technique. The predicted results compare favorably to both experimental values and the response predicted by the Generalized Method of Cells, a well-established micromechanics analysis method.

  5. Digital heart rate measurement based on Atmega16L

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqing; Feng, Lishuang; Wang, Jiqiang

    2008-02-01

    The photoelectric heart rate meter reported in this paper picks up heart rate signals with a photoelectric cell, converts them into a standard TTL pulse signal, and sends them to the input capture interface of an Atmega16L single-chip computer. The input capture register latches the Timer/Counter value on a given external (edge-triggered) event at the input capture pin (ICP1) of T/C1. The counter value is transferred into T/C1's input capture register ICR1 when the voltage at ICP1 jumps according to the program setting. The single-chip computer thus captures the input pulse signal as Timer/Counter (T/C1) values, works out a single heart rate cycle, and displays the result on three seven-segment displays driven as peripherals of the single-chip computer. The ICCAVR integrated compiler is used to assemble and compile the software of the heart rate meter. After successful compilation, a HEX file is produced and downloaded into the single-chip computer with the SLISP software. This photoelectric heart rate meter can measure the heart rate efficiently, with a measurement range of 10-200 beats per minute, a precision of ±1%, low cost, and reliable performance.
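    The arithmetic behind the input-capture measurement is simple: the beat period is the difference of successive capture register values times the timer tick. The clock and prescaler values in the sketch below are assumptions for illustration.

```python
# Heart rate from input-capture timing, as on the Atmega16L's T/C1: the
# capture register latches the timer count at each pulse edge, and the
# beat period is the count difference times the timer tick. F_CPU and
# PRESCALER are assumed values, not taken from the paper.
F_CPU = 8_000_000       # CPU clock (Hz), assumed
PRESCALER = 1024        # Timer1 prescaler, assumed
TICK = PRESCALER / F_CPU   # seconds per timer count

def bpm_from_captures(capture_counts):
    """Convert successive ICR1 capture values into beats per minute."""
    periods = [((b - a) % 65536) * TICK      # handle 16-bit timer wrap
               for a, b in zip(capture_counts, capture_counts[1:])]
    return 60.0 / (sum(periods) / len(periods))

# Example: captures roughly 0.8 s apart -> 75 bpm
print(round(bpm_from_captures([1000, 7250, 13500, 19750]), 1))
```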

  6. A model study with light-dependent mortality rates of copepod stages

    NASA Astrophysics Data System (ADS)

    Neumann, Thomas; Kremp, Christine

    2005-06-01

    This paper is based on an advanced ecosystem model of the Baltic Sea (ERGOM [J. Mar. Sys. 25 (3-4) (2005) 405]), but with an increased resolution of the zooplankton stage variable [J. Plankton Res. 23 (2001) 1217; ICES Marine Science 219 (2003) 208]. The model copepods are represented by five stages: eggs, an aggregated variable of nauplii, two aggregated groups of copepodites, and adults. The transfer among the stages, i.e., hatching, molting, and reproduction, is controlled by food availability and temperature. As usual, the model food web is truncated at the level of zooplankton. The study explores the effects of different parametrizations of zooplankton mortality and looks in particular at light-dependent rates. The light climate may serve as a proxy for the effects of visual feeding by fish larvae and fish. Different choices of the mortality parameters can result in remarkable differences in the abundance and biomass of the model zooplankton and in the timing of its development. It is found that the different choices of mortality affect the development of populations in several ways: relatively small initial differences in abundance at the beginning of the spring bloom are important for the development of the model populations; higher mortality rates are less important under food-rich conditions than when resources are scarce; and at low phytoplankton levels, the individual development of the copepods through the stages can be faster for elevated mortality rates, because then fewer animals have to share the available food.

  7. Analytical Modeling of the High Strain Rate Deformation of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Roberts, Gary D.; Gilat, Amos

    2003-01-01

    The results presented here are part of an ongoing research program to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. State variable constitutive equations originally developed for metals have been modified in order to model the nonlinear, strain rate dependent deformation of polymeric matrix materials. To account for the effects of hydrostatic stresses, which are significant in polymers, the classical J2 plasticity theory definitions of effective stress and effective plastic strain are modified by applying variations of the Drucker-Prager yield criterion. To verify the revised formulation, the shear and tensile deformation of a representative toughened epoxy is analyzed across a wide range of strain rates (from quasi-static to high strain rates) and the results are compared to experimentally obtained values. For the analyzed polymers, both the tensile and shear stress-strain curves computed using the analytical model correlate well with values obtained through experimental tests. The polymer constitutive equations are implemented within a strength-of-materials-based micromechanics method to predict the nonlinear, strain rate dependent deformation of polymer matrix composites. In the micromechanics, the unit cell is divided into a number of independently analyzed slices, and laminate theory is then applied to obtain the effective deformation of the unit cell. The composite mechanics are verified by analyzing the deformation of a representative polymer matrix composite (composed using the representative polymer analyzed for the correlation of the polymer constitutive equations) for several fiber orientation angles across a variety of strain rates. The computed values compare favorably to experimentally obtained results.

  8. Modeling changes in glucose and glycerol rates of appearance when true basal rates of appearance cannot be readily determined.

    PubMed

    Pyle, Laura; Bergman, Bryan C; Nadeau, Kristen J; Cree-Green, Melanie

    2016-03-01

    Advancing diabetes care requires accurate physiological assessments. Hyperinsulinemic clamps with stable isotope tracers can simultaneously measure insulin's ability to suppress lipolysis and hepatic glucose release. Traditionally, these methods require an assessment of basal glucose and glycerol rate of appearance (Ra). Basal Ra is challenging to measure in insulin-dependent diabetes, where exogenous insulin required to maintain normoglycemia can raise peripheral insulin concentrations sufficiently to suppress basal Ra. Thus we identified two alternative statistical approaches to describe changes in glucose and glycerol Ra that are less reliant on basal assessments. Sixteen youths (4 type 1 diabetic, 4 type 2 diabetic, 4 lean controls, and 4 obese nondiabetic) underwent a four-phase ("basal" and 10, 16, and 80 mU·m(2)·min(-1)) hyperinsulinemic euglycemic clamp with glucose and glycerol tracers. Glucose and glycerol Ra were calculated per phase. A statistical method, the standard two-stage (STS) algorithm, was applied to the individual log insulin vs. Ra curves to calculate a single predicted Ra value. A population-based mixed-effects model (MEM) compared the group average Ra with log insulin curves and described individual deviations from group means and was used to calculate individual predicted Ra. Both models were applied to the participant data, and predicted Ras at the mean insulin concentration per phase (10 for glycerol, 16 for glucose) were calculated, with good agreement between observed and predicted values. In our data set, the MEM was better able to detect group differences. Both STS and MEM can model lipolysis and endogenous glucose release in insulin-dependent states when basal Ra cannot be accurately measured. PMID:26714848
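    A minimal sketch of the mixed-effects approach (MEM) using statsmodels is shown below on synthetic clamp-like data; the formula, group structure, and all numeric values are illustrative assumptions, not the study's data or fitted model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Synthetic clamp-like data (hypothetical values): each subject
# contributes glucose Ra at several insulin phases; Ra falls with
# log(insulin), with subject-specific levels.
rows = []
for subj in range(16):
    base = 10 + rng.normal(0, 1.5)        # subject-specific level
    for ins in (5, 10, 16, 80):           # insulin phases, mU/m2/min
        ra = base - 2.0 * np.log(ins) + rng.normal(0, 0.4)
        rows.append({"subject": subj, "log_ins": np.log(ins), "Ra": ra})
data = pd.DataFrame(rows)

# Population-average curve with per-subject random intercepts; predicted
# Ra at a given insulin level does not require a measured basal phase.
fit = smf.mixedlm("Ra ~ log_ins", data, groups=data["subject"]).fit()
print(fit.params[["Intercept", "log_ins"]])

pred = np.asarray(fit.predict(pd.DataFrame({"log_ins": [np.log(16)]})))
print("predicted Ra at 16 mU/m2/min:", round(pred[0], 2))
```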

  9. [In vitro dissolution rate of Liuwei Wuling tablet based on biological potency and integrated dissolution].

    PubMed

    Zheng, Juan; Cheng, Ling; Shen, Cheng-ying; Li, Juan-juan; Qiu, Ling; Shen, Gang; Han, Jin; Yuan, Hai-long

    2015-11-01

    To explore the feasibility of chemical and biological methods for evaluating the in vitro dissolution rate of Liuwei Wuling tablet (LWT), this experiment investigated the inhibitory effect of LWT dissolving solutions on LX-2 hepatic stellate cells in 0.1% SDS dissolution medium over different dissolving periods. From these results, the cumulative dissolution rate of LWT was obtained based on the cell inhibitory rate. The dissolution rates of deoxyschizandrin, phillyrin, and specnuezhenide were determined by HPLC. A novel self-defined weighting coefficient approach was created to establish the integrated dissolution rate model. The f2 similarity factor method was then used to evaluate the consistency between the two methods. The results showed that the f2 values for deoxyschizandrin, phillyrin, specnuezhenide, and the integrated dissolution were 61, 43, 61 and 75, respectively, indicating that the integrated multi-component dissolution could fully reflect the biological potency of the whole recipe. The dissolution evaluation method for multi-component integration based on biological activity is expected to become one of the effective means for in vitro dissolution testing of LWT. PMID:27097413
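
    The f2 similarity factor used above has a standard definition in dissolution testing; a minimal sketch with hypothetical dissolution profiles:

    ```python
    # f2 similarity factor between reference (R) and test (T) cumulative
    # dissolution profiles at the same n time points; f2 >= 50 is the usual
    # similarity threshold. Profiles here are hypothetical.
    import numpy as np

    def f2_factor(R, T):
        msd = np.mean((np.asarray(R, float) - np.asarray(T, float)) ** 2)
        return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

    reference = [18, 35, 57, 74, 86, 93]        # % dissolved (hypothetical)
    test      = [15, 31, 52, 70, 84, 92]
    print(round(f2_factor(reference, test), 1)) # > 50 -> similar profiles
    ```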

  10. A supply-based concentration rating curve to predict total phosphorus concentrations in the Rhine River

    NASA Astrophysics Data System (ADS)

    Van der Perk, M.; Vogels, M. F. A.

    2012-04-01

    Concentration rating curves are useful for the analysis of the response of sediment or solute concentrations to changes in stream discharge, or for the interpolation of infrequent concentration measurements in time with discharge as an auxiliary variable, for example to estimate annual sediment or solute loads. A known limitation of rating curves is that their performance is generally poor, which can be partly attributed to the fact that rating curve methods neglect the hysteresis effects in the concentration response to changes in discharge. To enhance the performance of rating curve models, they should account for these hysteresis effects. Here, we present a supply-based concentration rating curve for total phosphorus concentrations in the Rhine River, the Netherlands, which does account for the above hysteresis effects. The supply-based concentration rating curve has four components: 1) the traditional power-law rating curve of the form C = aQ^b, where C is the phosphorus concentration [M L-3], Q is the river discharge [L3 T-1], and a and b are constants; 2) a long-term linear trend; 3) a seasonal trend of the form C(t) = A cos[2π(t - Tk)/T], where A is the concentration amplitude [M L-3], t is the time [T], Tk is the phase shift [T], and T is the period [T] (365.25 d); and 4) a discharge-dependent supply or loss term of the form C = -ΔS/(QΔt), where S is the phosphorus stock [M]. The phosphorus stock was assumed to increase linearly during periods of deposition, i.e. when the discharge is below a critical discharge. If the discharge is greater than the critical discharge during a sufficiently long period (> 16 days), the decrease in phosphorus stock was assumed to be proportional to the excess discharge above the critical discharge. For model parameterization and calibration, we used the daily Aqualarm data of total phosphorus concentrations and the Waterbase data of water discharge measured daily by Rijkswaterstaat (Dutch Ministry of Infrastructure and the Environment).
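
    A numerical sketch of a rating curve with these four components; all parameter values are hypothetical, and the >16-day persistence condition on the flushing rule is omitted for brevity:

    ```python
    # Supply-based concentration rating curve: power law + linear trend +
    # seasonal cosine + discharge-dependent supply/loss term C = -dS/(Q dt).
    # Parameters are illustrative, not the calibrated Rhine values.
    import numpy as np

    a, b = 2.0, -0.3                # power-law coefficients (hypothetical)
    trend = -1e-5                   # long-term linear trend [mg/L per day]
    A, Tk, T = 0.05, 30.0, 365.25   # seasonal amplitude [mg/L], phase [d], period [d]
    q_crit = 3000.0                 # critical discharge [m3/s] (hypothetical)
    dep_rate = 2.0e4                # stock build-up below q_crit [kg/d]
    k_flush = 5.0                   # stock loss per unit excess discharge [kg/d per m3/s]

    def concentration(Q, dt=1.0):
        S, out = 0.0, []
        for t, q in enumerate(Q):
            dS = dep_rate * dt if q <= q_crit else -k_flush * (q - q_crit) * dt
            dS = max(dS, -S)                          # stock cannot go negative
            S += dS
            supply = -dS / (q * 86400.0 * dt) * 1e3   # kg/m3 -> mg/L
            out.append(a * q**b + trend * t
                       + A * np.cos(2 * np.pi * (t - Tk) / T) + supply)
        return np.array(out)

    Q = 2000.0 + 1500.0 * np.sin(np.arange(730) / 40.0) ** 2  # synthetic discharge
    print(concentration(Q)[:5].round(3))
    ```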

  11. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    NASA Astrophysics Data System (ADS)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed based on the results.
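
    A minimal sketch of a possibilistic mean-standard deviation selection with triangular fuzzy return rates; the Carlsson-Fullér moment formulas are standard, but the securities, risk-aversion weight, and the interval-style relaxation of the budget constraint are illustrative assumptions:

    ```python
    # Possibilistic mean-std portfolio with triangular fuzzy returns (l, m, u).
    # Carlsson-Fuller possibilistic mean: (l + 4m + u)/6; possibilistic
    # variance of a triangular number: (u - l)^2 / 24. Correlations between
    # securities are ignored for simplicity.
    import numpy as np
    from scipy.optimize import minimize

    returns = np.array([        # (low, mode, high) return rates, hypothetical
        [0.01, 0.05, 0.12],
        [0.02, 0.08, 0.20],
        [-0.03, 0.03, 0.10],
    ])
    mean = (returns[:, 0] + 4 * returns[:, 1] + returns[:, 2]) / 6.0
    var = (returns[:, 2] - returns[:, 0]) ** 2 / 24.0

    lam = 3.0                   # risk-aversion weight (hypothetical)

    def objective(w):
        return -(w @ mean) + lam * np.sqrt(w**2 @ var)

    # Fuzzy proportions approximated by bounds plus a relaxed budget
    # (sum of weights allowed in [0.95, 1.05], mimicking excess/shortage).
    cons = [{"type": "ineq", "fun": lambda w: np.sum(w) - 0.95},
            {"type": "ineq", "fun": lambda w: 1.05 - np.sum(w)}]
    res = minimize(objective, x0=np.full(3, 1/3),
                   bounds=[(0.0, 0.6)] * 3, constraints=cons)
    print(res.x.round(3))
    ```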

  12. Predicting key malaria transmission factors, biting and entomological inoculation rates, using modelled soil moisture in Kenya.

    PubMed

    Patz, J A; Strzepek, K; Lele, S; Hedden, M; Greene, S; Noden, B; Hay, S I; Kalkstein, L; Beier, J C

    1998-10-01

    While malaria transmission varies seasonally, large inter-annual heterogeneity of malaria incidence occurs. Variability in entomological parameters, biting rates and entomological inoculation rates (EIR) have been strongly associated with attack rates in children. The goal of this study was to assess the weather's impact on weekly biting and EIR in the endemic area of Kisian, Kenya. Entomological data collected by the U.S. Army from March 1986 through June 1988 at Kisian, Kenya were analysed with concurrent weather data from nearby Kisumu airport. A soil moisture model of surface-water availability was used to combine multiple weather parameters with landcover and soil features to improve disease prediction. Modelling soil moisture substantially improved prediction of biting rates compared to rainfall; soil moisture lagged two weeks explained up to 45% of An. gambiae biting variability, compared to 8% for raw precipitation. For An. funestus, soil moisture explained 32% of variability, peaking after a 4-week lag. The interspecies difference in response to soil moisture was significant (P < 0.00001). A satellite normalized difference vegetation index (NDVI) of the study site yielded a similar correlation (r = 0.42, An. gambiae). Modelled soil moisture accounted for up to 56% of the variability of An. gambiae EIR, peaking at a lag of six weeks. The relationship between temperature and An. gambiae biting rates was less robust (maximum temperature: r = -0.20; minimum temperature: r = 0.12, after lagging one week). Benefits of hydrological modelling are compared to raw weather parameters and to satellite NDVI. These findings can improve both current malaria risk assessments and those based on El Niño forecasts or global climate change model projections.
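
    The lag analysis above can be mimicked in a few lines: compute r² between biting rates and soil moisture shifted by k weeks and read off the best lag (series are synthetic, names hypothetical):

    ```python
    # Choose the soil-moisture lag that best explains weekly biting rates.
    import numpy as np

    rng = np.random.default_rng(1)
    weeks = 120
    soil = np.clip(np.cumsum(rng.normal(0, 1, weeks)), 0, None)  # proxy series
    biting = 0.8 * np.roll(soil, 2) + rng.normal(0, 2, weeks)    # ~2-week lag

    def r2_at_lag(x, y, lag):
        x_l, y_l = x[:len(x) - lag], y[lag:]    # x leads y by `lag` weeks
        r = np.corrcoef(x_l, y_l)[0, 1]
        return r * r

    for lag in range(0, 9):
        print(lag, round(r2_at_lag(soil, biting, lag), 2))  # peaks near lag 2
    ```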

  13. A kinetic model for estimating net photosynthetic rates of cos lettuce leaves under pulsed light.

    PubMed

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2015-04-01

    Time-averaged net photosynthetic rate (Pn) under pulsed light (PL) is known to be affected by the PL frequency and duty ratio, even though the time-averaged photosynthetic photon flux density (PPFD) is unchanged. This phenomenon can be explained by considering that photosynthetic intermediates (PIs) are pooled during light periods and then consumed by partial photosynthetic reactions during dark periods. In this study, we developed a kinetic model to estimate Pn of cos lettuce (Lactuca sativa L. var. longifolia) leaves under PL based on the dynamics of the amount of pooled PIs. The model inputs are average PPFD, duty ratio, and frequency; the output is Pn. The rates of both PI accumulation and consumption at a given moment are assumed to be dependent on the amount of pooled PIs at that point. Required model parameters and three explanatory variables (average PPFD, frequency, and duty ratio) were determined for the simulation using Pn values under PL based on several combinations of the three variables. The model simulation for various PL levels with a wide range of time-averaged PPFDs, frequencies, and duty ratios further demonstrated that Pn under PL with high frequencies and duty ratios was comparable to, but did not exceed, Pn under continuous light, and also showed that Pn under PL decreased as either frequency or duty ratio was decreased. The developed model can be used to estimate Pn under various light environments where PPFD changes cyclically.
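
    A toy version of such a pooled-intermediate model; the saturating accumulation, linear consumption, and all rate constants are illustrative assumptions, not the fitted parameters of the study:

    ```python
    # Intermediates accumulate during light periods (saturating at pool_max)
    # and are consumed in proportion to the pooled amount, yielding net
    # photosynthesis.
    import numpy as np

    def mean_pn(freq_hz, duty, ppfd_avg, t_end=60.0, dt=1e-3):
        k_in, k_out, pool_max = 0.05, 50.0, 1.0    # hypothetical parameters
        period = 1.0 / freq_hz
        ppfd_on = ppfd_avg / duty                  # light-period PPFD
        pool, consumed = 0.0, 0.0
        for i in range(int(t_end / dt)):
            if (i * dt) % period < duty * period:  # light phase: accumulate
                pool += k_in * ppfd_on * (1.0 - pool / pool_max) * dt
            out = k_out * pool * dt                # consumption -> net Pn
            pool -= out
            consumed += out
        return consumed / t_end

    for f in (0.1, 1.0, 10.0, 100.0):   # same average PPFD, duty ratio 0.5
        print(f, round(mean_pn(f, duty=0.5, ppfd_avg=200.0), 2))
    # Pn rises toward the continuous-light value as frequency increases.
    ```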

  14. Radiocarbon Based Ages and Growth Rates: Hawaiian Deep Sea Corals

    SciTech Connect

    Roark, E B; Guilderson, T P; Dunbar, R B; Ingram, B L

    2006-01-13

    The radial growth rates and ages of three different groups of Hawaiian deep-sea 'corals' were determined using radiocarbon measurements. Specimens of Corallium secundum, Gerardia sp., and Leiopathes glaberrima were collected from 450 ± 40 m at the Makapuu deep-sea coral bed using a submersible (PISCES V). Specimens of Antipathes dichotoma were collected at 50 m off Lahaina, Maui. The primary source of carbon to the calcitic C. secundum skeleton is in situ dissolved inorganic carbon (DIC). Using bomb 14C time markers we calculate radial growth rates of ~170 µm y-1 and ages of 68-75 years on specimens as tall as 28 cm of C. secundum. Gerardia sp., A. dichotoma, and L. glaberrima have proteinaceous skeletons, and labile particulate organic carbon (POC) is their primary source of architectural carbon. Using 14C we calculate a radial growth rate of 15 µm y-1 and an age of 807 ± 30 years for a live-collected Gerardia sp., showing that these organisms are extremely long lived. Inner and outer 14C measurements on four sub-fossil Gerardia spp. samples produce similar growth rate estimates (range 14-45 µm y-1) and ages (range 450-2742 years) as observed for the live-collected sample. Similarly, with a growth rate of < 10 µm y-1 and an age of ~2377 years, L. glaberrima at the Makapuu coral bed is also extremely long lived. In contrast, the shallow-collected A. dichotoma samples yield growth rates ranging from 130 to 1,140 µm y-1. These results show that Hawaiian deep-sea corals grow more slowly and are older than previously thought.
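
    The age arithmetic behind these estimates is direct (age ≈ radius / radial growth rate); a small check with a hypothetical radius at the Gerardia-like growth rate reported above:

    ```python
    # age ~ radius / radial growth rate (radius value hypothetical)
    radius_um = 12_000            # 12 mm specimen radius
    growth_um_per_yr = 15         # Gerardia-like radial growth rate
    print(radius_um / growth_um_per_yr, "years")   # -> 800.0 years
    ```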

  15. Automated Prediction of Catalytic Mechanism and Rate Law Using Graph-Based Reaction Path Sampling.

    PubMed

    Habershon, Scott

    2016-04-12

    In a recent article [J. Chem. Phys. 2015, 143, 094106], we introduced a novel graph-based sampling scheme which can be used to generate chemical reaction paths in many-atom systems in an efficient and highly automated manner. The main goal of this work is to demonstrate how this approach, when combined with direct kinetic modeling, can be used to determine the mechanism and phenomenological rate law of a complex catalytic cycle, namely cobalt-catalyzed hydroformylation of ethene. Our graph-based sampling scheme generates 31 unique chemical products and 32 unique chemical reaction pathways; these sampled structures and reaction paths enable automated construction of a kinetic network model of the catalytic system when combined with density functional theory (DFT) calculations of free energies and resultant transition-state theory rate constants. Direct simulations of this kinetic network across a range of initial reactant concentrations enables determination of both the reaction mechanism and the associated rate law in an automated fashion, without the need for either presupposing a mechanism or making steady-state approximations in kinetic analysis. Most importantly, we find that the reaction mechanism which emerges from these simulations is exactly that originally proposed by Heck and Breslow; furthermore, the simulated rate law is also consistent with previous experimental and computational studies, exhibiting a complex dependence on carbon monoxide pressure. While the inherent errors of using DFT simulations to model chemical reactivity limit the quantitative accuracy of our calculated rates, this work confirms that our automated simulation strategy enables direct analysis of catalytic mechanisms from first principles. PMID:26938837
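
    The "direct simulation of a kinetic network" step can be illustrated on a toy mass-action cycle; the species, rate constants, and CO dependence below are hypothetical stand-ins for the DFT-derived network:

    ```python
    # Toy catalytic cycle (cat + CO <=> intermediate -> product + cat)
    # simulated across CO levels, mimicking how a rate law is read off a
    # kinetic network.
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k1r, k2 = 1.0, 0.5, 0.2          # rate constants (hypothetical)

    def rhs(t, y, co):
        cat, inter, prod = y
        r1 = k1 * cat * co - k1r * inter  # cat + CO <=> intermediate
        r2 = k2 * inter                   # intermediate -> product + cat
        return [-r1 + r2, r1 - r2, r2]

    for co in (0.5, 1.0, 2.0, 4.0):       # scan CO "pressure"
        sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.0], args=(co,), rtol=1e-8)
        rate = k2 * sol.y[1, -1]          # steady product-formation rate
        print(co, round(rate, 4))         # saturating dependence on CO
    ```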

  16. High frame-rate multichannel beam-scanning microscopy based on Lissajous trajectories.

    PubMed

    Sullivan, Shane Z; Muir, Ryan D; Newman, Justin A; Carlsen, Mark S; Sreehari, Suhas; Doerge, Chris; Begue, Nathan J; Everly, R Michael; Bouman, Charles A; Simpson, Garth J

    2014-10-01

    A simple beam-scanning optical design based on Lissajous trajectory imaging is described for achieving up to kHz frame-rate optical imaging on multiple simultaneous data acquisition channels. In brief, two fast-scan resonant mirrors direct the optical beam on a circuitous trajectory through the field of view, with the trajectory repeat time given by the least common multiple of the mirror periods. Dicing the raw time-domain data into sub-trajectories, combined with model-based image reconstruction (MBIR) 3D in-painting algorithms, allows for effective frame rates much higher than the inverse repeat time of the full Lissajous trajectory. Since sub-trajectory and full-trajectory imaging are simply different methods of analyzing the same data, both high frame-rate images with relatively low resolution and low frame-rate images with high resolution are simultaneously acquired. The optical hardware required to perform Lissajous imaging represents only a minor modification to established beam-scanning hardware, combined with additional control and data acquisition electronics. Preliminary studies based on laser transmittance imaging and polarization-dependent second harmonic generation microscopy support the viability of the approach both for detection of subtle changes in large signals and for trace-light detection of transient fluctuations.
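
    A minimal sketch of the trajectory geometry: with integer mirror frequencies, the full Lissajous pattern repeats after the least common multiple of the mirror periods, i.e. 1/gcd(fx, fy) (frequencies below are hypothetical):

    ```python
    # Lissajous trajectory from two resonant mirrors.
    from math import gcd
    import numpy as np

    fx, fy = 7680, 7560                    # mirror scan frequencies [Hz]
    repeat_s = 1.0 / gcd(fx, fy)           # full-trajectory repeat time
    print(f"repeat time = {repeat_s * 1e3:.2f} ms")

    t = np.arange(0.0, repeat_s, 1e-7)     # sample clock (hypothetical)
    x = np.cos(2 * np.pi * fx * t)         # beam position in the field of view
    y = np.cos(2 * np.pi * fy * t)
    # Dicing t into windows shorter than repeat_s gives sparse sub-trajectory
    # frames (high frame rate, low resolution); the full window gives the
    # dense full-trajectory image (low frame rate, high resolution).
    ```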

  17. High removal rate laser-based coating removal system

    DOEpatents

    Matthews, Dennis L.; Celliers, Peter M.; Hackel, Lloyd; Da Silva, Luiz B.; Dane, C. Brent; Mrowka, Stanley

    1999-11-16

    A compact laser system that removes surface coatings (such as paint, dirt, etc.) at a removal rate as high as 1000 ft.sup.2 /hr or more without damaging the surface. A high repetition rate laser with multiple amplification passes propagating through at least one optical amplifier is used, along with a delivery system consisting of a telescoping and articulating tube which also contains an evacuation system for simultaneously sweeping up the debris produced in the process. The amplified beam can be converted to an output beam by passively switching the polarization of at least one amplified beam. The system also has a personal safety system which protects against accidental exposures.

  18. Mass Customization Production Planning System by Advance Demand Information Based on Unfulfilled-order-rate

    NASA Astrophysics Data System (ADS)

    Ueno, Nobuyuki; Kawasaki, Masaya; Okuhara, Koji

    In this paper, we model the ‘Naiji system’, a unique form of cooperation between a maker and its suppliers in Japan. We propose a Mass Customization Production Planning & Management System (MCPS) based on the unfulfilled-order rate, using Advance Demand Information known as ‘Naiji’. The model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a set of probabilistic constraints and some linear production constraints. We propose a new upper bound SOn (ρmin) to estimate the unfulfilled-order rate more strictly. A procedure to find a good solution is developed by solving a linear programming problem repeatedly, based on a ‘relaxation’ solution strategy. The computational load of obtaining a solution with the proposed indicator is shown to be very small. Finally, the availability of the procedure is demonstrated.

  19. Coupling airborne laser scanning and acoustic Doppler current profiler data to model stream rating curves

    NASA Astrophysics Data System (ADS)

    Lam, N.; Lyon, S. W.; Kean, J. W.

    2015-12-01

    The rating curve enables the translation of water depth into discharge through a reference cross section. Errors in estimating stream channel geometry can therefore result in increased discharge uncertainty. This study investigates coupling national-scale airborne laser scanning (ALS) and acoustic Doppler current profiler (ADCP) bathymetric survey data for generating stream rating curves. Specifically, stream channel geometries were generated from coupled ALS and ADCP scanning data collected for a well-monitored site located in northern Sweden. These data were used to define the hydraulic geometry required by a physically based 1-D hydraulic model. The results of our study demonstrate that the effects of potential scanning data errors on the model-generated rating curve were less than the uncertainties due to stream gauging measurements and empirical rating curve fitting. Further analysis of the ALS data showed that an overestimation of the streambank elevation (the main scanning data error) was primarily due to vegetation and could be adjusted for through a root-mean-square-error bias correction. We consider these findings encouraging, as hydrometric agencies can potentially leverage national-scale ALS and ADCP instrumentation to reduce the cost and effort required for establishing and maintaining rating curves at gauging stations.
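
    For reference, the empirical rating curve that the model-generated curve is compared against is typically a power law fitted to gaugings; a minimal sketch with synthetic stage-discharge pairs:

    ```python
    # Fit a standard power-law rating curve Q = a*(h - h0)^b to gaugings;
    # data below are synthetic with ~5% gauging noise.
    import numpy as np
    from scipy.optimize import curve_fit

    def rating(h, a, b, h0):
        return a * np.clip(h - h0, 1e-6, None) ** b

    rng = np.random.default_rng(2)
    h = np.linspace(0.4, 2.5, 25)                        # stage [m]
    q = 12.0 * (h - 0.2) ** 1.6                          # synthetic truth
    q_obs = q * (1.0 + rng.normal(0, 0.05, h.size))      # noisy gaugings

    (a, b, h0), _ = curve_fit(rating, h, q_obs, p0=(10.0, 1.5, 0.0))
    print(round(a, 2), round(b, 2), round(h0, 2))        # near 12, 1.6, 0.2
    ```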

  1. Philosophy of the Spike: Rate-Based vs. Spike-Based Theories of the Brain

    PubMed Central

    Brette, Romain

    2015-01-01

    Does the brain use a firing rate code or a spike timing code? Considering this controversial question from an epistemological perspective, I argue that progress has been hampered by its problematic phrasing. It takes the perspective of an external observer looking at whether those two observables vary with stimuli, and thereby misses the relevant question: which one has a causal role in neural activity? When rephrased in a more meaningful way, the rate-based view appears as an ad hoc methodological postulate, one that is practical but with virtually no empirical or theoretical support. PMID:26617496

  2. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H.; Lehman, Sean K.; Goodman, Dennis M.

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  3. Energy based hybrid turbulence modeling

    NASA Astrophysics Data System (ADS)

    Haering, Sigfried; Moser, Robert

    2015-11-01

    Traditional hybrid approaches exhibit deficiencies when used for fluctuating smooth-wall separation and reattachment, necessitating ad hoc delaying functions and model tuning that make them no longer useful as predictive tools. Additionally, complex geometries and flows often require high cell aspect ratios and large grid gradients as a compromise between resolution and cost. Such transitions and inconsistencies in resolution detrimentally affect the fidelity of the simulation. We present the continued development of a new hybrid RANS/LES modeling approach specifically developed to address these challenges. In general, modeled turbulence is returned to resolved scales by reduced or negative model viscosity until a balance between the theoretical and actual modeled turbulent kinetic energy is attained, given the available resolution. Anisotropy in the grid and resolved field are directly integrated into this balance. A viscosity-based correction is proposed to account for resolution inhomogeneities. Both the hybrid framework and the resolution gradient corrections are energy conserving through an exchange of resolved and modeled turbulence.

  4. The inverse base-rate effect is not explained by eliminative inference.

    PubMed

    Kruschke, J K

    2001-11-01

    The inverse base-rate effect is a phenomenon in which people learn about some common and some rare outcomes and in subsequent testing people predict the rare outcome for particular sets of conflicting cues, contrary to normative predictions. P. Juslin, P. Wennerholm, and A. Winman suggested that the effect could be explained by eliminative inference, contrary to the attention-shifting explanation of J. K. Kruschke. The present article shows that the eliminative inference model exhibits ordinal discrepancies from previously published data and from data of 2 new experiments. A connectionist implementation of attentional theory fits the data well. The author concludes that people can use eliminative inference but that it cannot account for the inverse base-rate effect. PMID:11713874

  5. Determination of the polymerisation rate of a low-toxicity diacetone acrylamide-based holographic photopolymer using Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Cody, D.; Mihaylova, E.; O'Neill, L.; Naydenova, I.

    2015-10-01

    The polymerisation rate of a low-toxicity Diacetone Acrylamide (DA)-based photopolymer has been measured for the first time using Raman spectroscopy. A value for the polymerisation rate of 0.020 s-1 has been obtained for the DA photopolymer by modelling the polymerisation reaction dynamics as a stretched exponential or Kohlrausch decay function. This is significantly lower than the polymerisation rate of 0.100 s-1 measured for the well-known Acrylamide (AA)-based photopolymer composition. The effect of the additive glycerol on the polymerisation rate of the DA-based photopolymer has also been investigated. The inclusion of glycerol is observed to increase the rate of polymerisation of the DA photopolymer by up to 60%. It is also observed that the polymerisation rate of the DA photopolymer is less dependent on the recording intensity when glycerol is present.
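
    A minimal sketch of the Kohlrausch (stretched-exponential) fit named above, on synthetic decay data; I0, k, and beta here are hypothetical:

    ```python
    # Fit I(t) = I0 * exp(-(k t)^beta) to a decaying Raman band area; k plays
    # the role of the polymerisation rate.
    import numpy as np
    from scipy.optimize import curve_fit

    def kohlrausch(t, i0, k, beta):
        return i0 * np.exp(-((k * t) ** beta))

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 200.0, 60)                    # time [s]
    data = kohlrausch(t, 1.0, 0.020, 0.9) + rng.normal(0, 0.01, t.size)

    (i0, k, beta), _ = curve_fit(kohlrausch, t, data, p0=(1.0, 0.05, 1.0))
    print(f"k = {k:.3f} s^-1, beta = {beta:.2f}")      # expect k near 0.020
    ```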

  6. Dimensions of Escherichia coli at various growth rates: model for envelope growth.

    PubMed Central

    Pierucci, O

    1978-01-01

    The duplication of Escherichia coli B/r is described based on two independent sequences, the replication of the genome and the growth of the envelope. It is proposed that (i) new envelope growth zones are activated coincident with the initiation of new rounds of chromosome replication; (ii) each zone is active in envelope synthesis from the time of its inauguration to the division which follows the completion of the round of chromosome replication (that is, for C + D min); and (iii) the rate of envelope synthesis at each site is constant, independent of the growth rate. Measurements of the surface areas of two E. coli B/r substrains growing at a variety of rates and during nutritional transitions are consistent with the predictions of the model. PMID:355233

  7. Phase plane based identification of fetal heart rate patterns

    PubMed Central

    Vairavan, Srinivasan; Sriram, Bhargavi; Wilson, James D.; Preissl, Hubert; Eswaran, Hari

    2012-01-01

    Using a phase plane analysis (PPA) of the spatial spread of trajectories of the fetal heart rate and its time-derivative, we characterize the fetal heart rate patterns (fHRP) as defined by Nijhuis. For this purpose, we collect 22 fetal magnetocardiograms using a 151-channel SQUID system from 22 low-risk fetuses at gestational ages ranging from 30 to 37 weeks. Each study lasted 30 minutes. After the attenuation of the maternal cardiac signals, we identify the R waves using an adaptive Hilbert transform approach and calculate the fetal heart rate. On these datasets, we apply the proposed approach and the traditionally used approaches such as the standard deviation of the normal-to-normal intervals (SDNN) and the root mean square of the successive differences (RMSSD). Heart rate patterns are scored by an expert using the Nijhuis criteria and revealed A, B, and D patterns. A receiver operating characteristic (ROC) curve is used to assess the performance of the metrics in differentiating the different patterns. Results showed that only PPA was able to differentiate all pairs of fHRP with high performance. PMID:22254593
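
    The baseline metrics named above have standard definitions; a minimal sketch on synthetic NN intervals, including the (signal, derivative) pairs that the phase plane analysis spreads out:

    ```python
    # SDNN and RMSSD from normal-to-normal (NN) interbeat intervals, plus the
    # phase-plane trajectory points. NN data here are synthetic.
    import numpy as np

    rng = np.random.default_rng(4)
    nn_ms = 430 + rng.normal(0, 12, 1800)       # fetal NN intervals [ms]

    sdnn = np.std(nn_ms, ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(nn_ms) ** 2))      # beat-to-beat variability
    print(round(sdnn, 1), round(rmssd, 1))

    fhr = 60_000.0 / nn_ms                      # instantaneous heart rate [bpm]
    phase_plane = np.column_stack([fhr[:-1], np.diff(fhr)])
    # PPA characterizes the spatial spread of these trajectories per pattern.
    ```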

  8. A liquid cooled garment temperature controller based on sweat rate

    NASA Technical Reports Server (NTRS)

    Chambers, A. B.; Blackaby, J. R.

    1972-01-01

    An automatic controller for liquid cooled space suits is reported that utilizes human sweat rate as the primary input signal. The controller is so designed that the coolant inlet temperature is inversely proportional to the subject's latent heat loss as evidenced by evaporative water loss.

  9. An Empirical Approach to Determining Employee Deviance Base Rates.

    ERIC Educational Resources Information Center

    Slora, Karen B.

    Employee deviance may reflect either acts of employee theft or of production deviance. Employee theft refers to the unauthorized taking of cash, merchandise, or property. Production deviance refers to counterproductive activities which serve to slow the rate or quality of output, such as intentionally doing slow or sloppy work or using drugs on…

  10. High base pair opening rates in tracts of GC base pairs.

    PubMed

    Dornberger, U; Leijon, M; Fritzsche, H

    1999-03-12

    Sequence-dependent structural features of the DNA double helix have a strong influence on the base pair opening dynamics. Here we report a detailed study of the kinetics of base pair breathing in tracts of GC base pairs in DNA duplexes derived from 1H NMR measurements of the imino proton exchange rates upon titration with the exchange catalyst ammonia. In the limit of infinite exchange catalyst concentration, the exchange times of the guanine imino protons of the GC tracts extrapolate to much shorter base pair lifetimes than commonly observed for isolated GC base pairs. The base pair lifetimes in the GC tracts are below 5 ms for almost all of the base pairs. The unusually rapid base pair opening dynamics of GC tracts are in striking contrast to the behavior of AT tracts, where very long base pair lifetimes are observed. The implication of these findings for the structural principles governing spontaneous helix opening as well as the DNA-binding specificity of the cytosine-5-methyltransferases, where flipping of the cytosine base has been observed, are discussed.

  11. Analytical Model for Prediction of Reduced Strain Energy Release Rate of Single-Side-Patched Plates

    NASA Astrophysics Data System (ADS)

    Kwon, Y. W.; Lee, W. Y.; McGee, A. S.; Hart, D. C.; Loup, D. C.; Rasmussen, E. A.

    2013-12-01

    A study was undertaken to develop an analytical model that can predict how much reduction in Strain Energy Release Rate (SERR) can be achieved by repairing a cracked plate using a single-side bonded patch. The plate may be subjected to in-plane or out-of-plane bending loading. Furthermore, the plate may be flat or curved in a cylindrical shape. The model helps to select patch material (i.e., elastic modulus of the material) and the appropriate patch size in order to reduce the SERR at the crack tip of the patched base plate. In other words, the analytical model can be utilized to select the patch material and patch dimensions required to achieve the desired SERR for a cracked base plate with known modulus, thickness, and crack size. The model is based on axial and bending stresses of the single-side strap joint configuration, which are related to the SERR at the crack tip of a plate with a single-side patch repair. In order to verify the analytical model, finite element analyses were conducted to determine stresses as well as SERR in many different patched plates. The numerical study confirmed the validity of the analytical model in predicting the reduction ratio of SERR resulting from the single-side patch repair.

  12. Stochastic modelling of evaporation based on copulas

    NASA Astrophysics Data System (ADS)

    Pham, Minh Tu; Vernieuwe, Hilde; De Baets, Bernard; Verhoest, Niko

    2015-04-01

    Evapotranspiration is an important process in the water cycle that represents a considerable amount of moisture lost through evaporation from the soil surface and transpiration from plants in a watershed. Therefore, an accurate estimate of evapotranspiration rates is necessary, along with precipitation data, for running hydrological models. Often, daily reference evapotranspiration is modelled based on the Penman, Priestley-Taylor or Hargreaves equation. However, each of these models requires extensive input data, such as daily mean temperature, wind speed, relative humidity and solar radiation. Yet, in design studies, such data are unavailable when stochastically generated time series of precipitation are used to force a hydrologic model. In the latter case, an alternative model approach is needed that allows for generating evapotranspiration data that are consistent with the accompanying precipitation data. This contribution presents such an approach, in which the statistical dependence between evapotranspiration, temperature and precipitation is described by three- and four-dimensional vine copulas. Based on a case study of 72 years of evapotranspiration, temperature and precipitation data, observed in Uccle, Belgium, it was found that canonical vine copulas (C-Vines) in which bivariate Frank copulas are employed perform very well in preserving the dependencies between variables. While 4-dimensional C-Vine copulas performed best in simulating time series of evapotranspiration, a 3-dimensional C-Vine copula (relating evapotranspiration, daily precipitation depth and temperature) still allows for modelling evapotranspiration, though with larger error statistics.
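
    For reference, the bivariate Frank copula that serves as the building block of these C-Vines has a closed-form CDF; a minimal sketch (the theta value is hypothetical):

    ```python
    # Bivariate Frank copula CDF; theta > 0 encodes positive dependence.
    import numpy as np

    def frank_cdf(u, v, theta):
        num = np.expm1(-theta * u) * np.expm1(-theta * v)
        return -np.log1p(num / np.expm1(-theta)) / theta

    print(round(frank_cdf(0.6, 0.7, theta=4.0), 3))          # > 0.42 = 0.6 * 0.7
    print(round(frank_cdf(0.6, 1.0 - 1e-12, theta=4.0), 3))  # margin check: ~0.6
    ```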

  13. Modeling climate effects on hip fracture rate by the multivariate GARCH model in Montreal region, Canada.

    PubMed

    Modarres, Reza; Ouarda, Taha B M J; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre

    2014-07-01

    Changes in extreme meteorological variables and the demographic shift towards an older population have made it important to investigate the association of climate variables and hip fracture by advanced methods in order to determine the climate variables that most affect hip fracture incidence. The nonlinear autoregressive moving average with exogenous variable-generalized autoregressive conditional heteroscedasticity (ARMAX-GARCH) and multivariate GARCH (MGARCH) time series approaches were applied to investigate the nonlinear association between hip fracture rate in female and male patients aged 40-74 and 75+ years and climate variables in the period of 1993-2004, in Montreal, Canada. The models describe 50-56% of daily variation in hip fracture rate and identify snow depth, air temperature, day length and air pressure as the influencing variables on the time-varying mean and variance of the hip fracture rate. The conditional covariance between climate variables and hip fracture rate is increasing exponentially, showing that the effect of climate variables on hip fracture rate is most acute when rates are high and climate conditions are at their worst. In Montreal, climate variables, particularly snow depth and air temperature, appear to be important predictors of hip fracture incidence. The association of climate variables and hip fracture does not seem to change linearly with time, but increases exponentially under harsh climate conditions. The results of this study can be used to provide an adaptive climate-related public health program and to guide allocation of services for avoiding hip fracture risk.

  14. Modeling climate effects on hip fracture rate by the multivariate GARCH model in Montreal region, Canada

    NASA Astrophysics Data System (ADS)

    Modarres, Reza; Ouarda, Taha B. M. J.; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre

    2014-07-01

    Changes in extreme meteorological variables and the demographic shift towards an older population have made it important to investigate the association of climate variables and hip fracture by advanced methods in order to determine the climate variables that most affect hip fracture incidence. The nonlinear autoregressive moving average with exogenous variable-generalized autoregressive conditional heteroscedasticity (ARMAX-GARCH) and multivariate GARCH (MGARCH) time series approaches were applied to investigate the nonlinear association between hip fracture rate in female and male patients aged 40-74 and 75+ years and climate variables in the period of 1993-2004, in Montreal, Canada. The models describe 50-56% of daily variation in hip fracture rate and identify snow depth, air temperature, day length and air pressure as the influencing variables on the time-varying mean and variance of the hip fracture rate. The conditional covariance between climate variables and hip fracture rate is increasing exponentially, showing that the effect of climate variables on hip fracture rate is most acute when rates are high and climate conditions are at their worst. In Montreal, climate variables, particularly snow depth and air temperature, appear to be important predictors of hip fracture incidence. The association of climate variables and hip fracture does not seem to change linearly with time, but increases exponentially under harsh climate conditions. The results of this study can be used to provide an adaptive climate-related public health program and to guide allocation of services for avoiding hip fracture risk.

  15. What Explains Usage of Mobile Physician-Rating Apps? Results From a Web-Based Questionnaire

    PubMed Central

    Terlutter, Ralf; Röttl, Johanna

    2014-01-01

    Background Consumers are increasingly accessing health-related information via mobile devices. Recently, several apps to rate and locate physicians have been released in the United States and Germany. However, knowledge about what kinds of variables explain usage of mobile physician-rating apps is still lacking. Objective This study analyzes factors influencing the adoption of and willingness to pay for mobile physician-rating apps. A structural equation model was developed based on the Technology Acceptance Model and the literature on health-related information searches and usage of mobile apps. Relationships in the model were analyzed for moderating effects of physician-rating website (PRW) usage. Methods A total of 1006 German patients who had visited a general practitioner at least once in the 3 months before the beginning of the survey were randomly selected and surveyed. A total of 958 usable questionnaires were analyzed by partial least squares path modeling and moderator analyses. Results The suggested model yielded a high model fit. We found that perceived ease of use (PEOU) of the Internet to gain health-related information, the sociodemographic variables age and gender, and the psychographic variables digital literacy, feelings about the Internet and other Web-based applications in general, patients’ value of health-related knowledgeability, as well as the information-seeking behavior variables regarding the amount of daily private Internet use for health-related information, frequency of using apps for health-related information in the past, and attitude toward PRWs significantly affected the adoption of mobile physician-rating apps. The sociodemographic variable age, but not gender, and the psychographic variables feelings about the Internet and other Web-based applications in general and patients’ value of health-related knowledgeability, but not digital literacy, were significant predictors of willingness to pay. Frequency of

  16. A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.

    PubMed

    Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu

    2015-12-01

    Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.

  17. Chlorination and dechlorination rates in a forest soil - A combined modelling and experimental approach.

    PubMed

    Montelius, Malin; Svensson, Teresia; Lourino-Cabana, Beatriz; Thiry, Yves; Bastviken, David

    2016-06-01

    Much of the total pool of chlorine (Cl) in soil consists of naturally produced organic chlorine (Clorg). The chlorination of bulk organic matter at substantial rates has been experimentally confirmed in various soil types. The subsequent fates of Clorg are important for ecosystem Cl cycling and residence times. As most previous research into dechlorination in soils has examined either single substances or specific groups of compounds, we lack information about overall bulk dechlorination rates. Here we assessed bulk organic matter chlorination and dechlorination rates in coniferous forest soil based on a radiotracer experiment conducted under various environmental conditions (additional water, labile organic matter, and ammonium nitrate). Experiment results were used to develop a model to estimate specific chlorination (i.e., fraction of Cl(-) transformed to Clorg per time unit) and specific dechlorination (i.e., fraction of Clorg transformed to Cl(-) per time unit) rates. The results indicate that chlorination and dechlorination occurred simultaneously under all tested environmental conditions. Specific chlorination rates ranged from 0.0005 to 0.01 d(-1) and were hampered by nitrogen fertilization but were otherwise similar among the treatments. Specific dechlorination rates were 0.01-0.03 d(-1) and were similar among all treatments. This study finds that soil Clorg levels result from a dynamic equilibrium between the chlorination and rapid dechlorination of some Clorg compounds, while another Clorg pool is dechlorinated more slowly. Altogether, this study demonstrates a highly active Cl cycling in soils. PMID:26950634
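
    A two-pool sketch of the dynamic equilibrium described above, using specific rates from within the reported ranges (the pool sizes are hypothetical):

    ```python
    # Simultaneous chlorination (Cl- -> Clorg) and dechlorination
    # (Clorg -> Cl-) with first-order specific rates.
    k_cl, k_de = 0.005, 0.02      # specific rates [d^-1]
    cl, clorg = 100.0, 50.0       # inorganic Cl- and organic Cl pools

    dt, days = 0.1, 365
    for _ in range(int(days / dt)):
        chl = k_cl * cl * dt      # Cl-   -> Clorg
        dec = k_de * clorg * dt   # Clorg -> Cl-
        cl += dec - chl
        clorg += chl - dec

    # Dynamic-equilibrium organic fraction approaches k_cl / (k_cl + k_de).
    print(round(clorg / (cl + clorg), 3), round(k_cl / (k_cl + k_de), 3))
    ```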

  18. Simultaneous quantification of depolymerization and mineralization rates by a novel 15N tracing model

    NASA Astrophysics Data System (ADS)

    Andresen, Louise C.; Björsne, Anna-Karin; Bodé, Samuel; Klemedtsson, Leif; Boeckx, Pascal; Rütting, Tobias

    2016-09-01

    The depolymerization of soil organic matter, such as proteins and (oligo-)peptides, into monomers (e.g. amino acids) is currently considered to be the rate-limiting step for nitrogen (N) availability in terrestrial ecosystems. The mineralization of free amino acids (FAAs), liberated by the depolymerization of peptides, is an important fraction of the total mineralization of organic N. Hence, the accurate assessment of peptide depolymerization and FAA mineralization rates is important in order to gain a better process-based understanding of the soil N cycle. In this paper, we present an extended numerical 15N tracing model Ntrace, which incorporates the FAA pool and related N processes in order to provide a more robust and simultaneous quantification of depolymerization and gross mineralization rates of FAAs and soil organic N. We discuss analytical and numerical approaches for two forest soils, suggest improvements of the experimental work for future studies, and conclude that (i) when about half of all depolymerized peptide N is directly mineralized, FAA mineralization can be as important a rate-limiting step for total gross N mineralization as peptide depolymerization rate; (ii) gross FAA mineralization and FAA immobilization rates can be used to develop FAA use efficiency (NUEFAA), which can reveal microbial N or carbon (C) limitation.

  19. Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach

    NASA Astrophysics Data System (ADS)

    Tsai, Bi-Huei; Chang, Chih-Huei

    2009-08-01

    Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift according to economic prosperity, rather than remaining fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon the two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further focuses on macroeconomic factors and applies a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using the training sample from 1987 to 2004, and their levels of accuracy are compared with the test sample from 2005 to 2007. As for the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than that without macroeconomic factors. This suggests that accuracy is not improved for one-stage models that pool the firm-specific and macroeconomic factors together. In regard to the two-stage models, the negative credit cycle index implies a worse economic status during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. After the two-stage models employ the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error becomes lower than that of the one-stage models. The two-stage models presented in this paper have incremental usefulness in predicting financial distress.
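
    A minimal sketch of the first stage: a discrete-time hazard model is, operationally, a logistic regression on firm-period observations; the variables, coefficients, and illustrative cut-off rule below are hypothetical:

    ```python
    # Discrete-time hazard via logistic regression on firm-period rows.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 4000
    df = pd.DataFrame({
        "roa": rng.normal(0.05, 0.10, n),        # firm-specific ratio
        "leverage": rng.uniform(0.1, 0.9, n),
        "excess_ret": rng.normal(0.0, 0.2, n),   # market factor
    })
    logit_p = -4.0 - 8.0 * df.roa + 3.0 * df.leverage - 2.0 * df.excess_ret
    df["distress"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

    hazard = smf.logit("distress ~ roa + leverage + excess_ret", df).fit(disp=0)
    p_hat = hazard.predict(df)

    # Stage-two idea: shift the cut-off with the credit cycle instead of
    # holding it constant (a simple quantile stands in for that rule here).
    cut_off = np.quantile(p_hat, 0.90)
    print(hazard.params.round(2).to_dict(), round((p_hat > cut_off).mean(), 3))
    ```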

  20. Modeling the growth rate of distortion product otoacoustic emissions by active nonlinear oscillators.

    PubMed

    Sisto, Renata; Moleti, Arturo

    2004-09-01

    In this work, growth-rate curves of the 2f1-f2 distortion product otoacoustic emission (DPOAE) are analyzed in a population of 30 noise-exposed subjects, including both normal-hearing and hearing-impaired subjects. A particular embedded limit-cycle oscillator equation is used to model the cochlear resonant response at the cochlear places of the primary and secondary tone frequencies (f2 and 2f1-f2). The parameters of the oscillator equation can be directly interpreted in terms of the effectiveness of the cochlear feedback mechanisms associated with the active filter amplification. A two-source paradigm is included in the model, in agreement with experimental evidence and with the assumptions of more detailed full cochlear models based on the transmission line formalism. According to this paradigm, DPOAEs are nonlinearly generated at the cochlear place that is resonant at frequency f2, and coherently reflected at the 2f1-f2 place. The analysis shows that the model, which had been previously used to describe the relaxation dynamics of transient evoked otoacoustic emissions (TEOAEs), also correctly predicts the observed growth rate of the DPOAE response as a function of the primary-tone amplitude. A significant difference is observed between normal and impaired ears. The comparison between the growth-rate curves at different frequencies provides information about the dependence of cochlear tuning on frequency.

  1. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model-based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.

  2. MICROBIAL VOLATILE ORGANIC COMPOUND EMISSION RATES AND EXPOSURE MODEL

    EPA Science Inventory

    This paper presents the results from a study that examined microbial volatile organic compound (MVOC) emissions from six fungi and one bacterial species (Streptomyces spp.) commonly found in indoor environments. Data are presented on peak emission rates from inoculated agar plate...

  3. An Item Response Unfolding Model for Graphic Rating Scales

    ERIC Educational Resources Information Center

    Liu, Ying

    2009-01-01

    The graphic rating scale, a measurement tool used in many areas of psychology, usually takes a form of a fixed-length line segment, with both ends bounded and labeled as extreme responses. The raters mark somewhere on the line, and the length of the line segment from one endpoint to the mark is taken as the measure. An item response unfolding…

  4. A Latent-Variable Causal Model of Faculty Reputational Ratings.

    ERIC Educational Resources Information Center

    King, Suzanne; Wolfle, Lee M.

    A reanalysis was conducted of Saunier's research (1985) on sources of variation in the National Research Council (NRC) reputational ratings of university faculty. Saunier conducted a stepwise regression analysis using 12 predictor variables. Due to problems with multicollinearity and because of the atheoretical nature of stepwise regression,…

  5. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.

  6. Infant breathing rate counter based on variable resistor for pneumonia

    NASA Astrophysics Data System (ADS)

    Sakti, Novi Angga; Hardiyanto, Ardy Dwi; La Febry Andira R., C.; Camelya, Kesa; Widiyanti, Prihartini

    2016-03-01

    Pneumonia is one of the leading causes of death in newborn babies in Indonesia. According to WHO in 2002, breathing rate is a very important index serving as a symptom of pneumonia. In Community Health Centers, nurses count breaths with a stopwatch for exactly one minute. Miscounting occurs because of the long period of concentration required and the need to focus on two objects at once. Such counting errors can cause a baby who should be admitted to the hospital to be attended only at home. Therefore, an accurate breathing rate counter at the Community Health Center level is necessary. In this work, the resistance change of a variable resistor is used to count breathing rate. A resistance change in a voltage divider produces a voltage change; if the variable resistance moves periodically, the voltage changes periodically too. The voltage change is counted by software in the microcontroller. Every millimetre of shift in the variable resistor produces an average voltage change of 0.96 V.
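
    A minimal sketch of the sensing principle: the divider output voltage follows the wiper position, and firmware counts threshold crossings; the component values and samples are hypothetical:

    ```python
    # Voltage-divider output plus rising-edge breath counting.
    def vout(vin, r1, r2):
        return vin * r2 / (r1 + r2)          # voltage divider output

    def count_breaths(samples, threshold):
        return sum(1 for a, b in zip(samples, samples[1:])
                   if a < threshold <= b)    # one count per rising edge

    print(round(vout(5.0, 10_000, 4_700), 2))                 # ~1.6 V
    readings = [1.2, 1.5, 2.1, 1.6, 1.1, 1.4, 2.2, 1.5, 1.0]  # sampled Vout [V]
    print(count_breaths(readings, 2.0))                       # -> 2 breaths
    ```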

  7. Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models

    PubMed Central

    Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.

    2008-01-01

    Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
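
    The Jensen's-inequality bias described above is easy to reproduce: estimate vital rates from small samples, form the projection matrix, and compare the dominant eigenvalue with the full-information value (the 2x2 life cycle and rates below are hypothetical):

    ```python
    # Sampling-variance bias in lambda, the dominant eigenvalue of a
    # stage-structured projection matrix.
    import numpy as np

    rng = np.random.default_rng(6)
    surv, grow, fec = 0.5, 0.3, 1.2        # "true" vital rates (low survival)

    def lam(s, g, f):
        A = np.array([[s * (1 - g), f],
                      [s * g,       s]])
        return np.linalg.eigvals(A).real.max()

    true_lambda = lam(surv, grow, fec)
    for n in (10, 50, 250, 1000):          # individuals sampled per rate
        est = [lam(rng.binomial(n, surv) / n,
                   rng.binomial(n, grow) / n,
                   rng.poisson(fec * n) / n) for _ in range(2000)]
        print(n, round(np.mean(est) - true_lambda, 4))   # bias shrinks with n
    ```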

  8. An Evaluation of Lightning Flash Rate Parameterizations Based on Observations of Colorado Storms during DC3

    NASA Astrophysics Data System (ADS)

    Basarab, B.; Fuchs, B.; Rutledge, S. A.

    2013-12-01

    Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare

  9. Mathematical model of cycad cones' thermogenic temperature responses: inverse calorimetry to estimate metabolic heating rates.

    PubMed

    Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I

    2012-12-21

    A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two-week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter-estimation-based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the biophysics of thermogenic plants. PMID:22995822
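
    The inverse-calorimetry idea can be sketched with a lumped energy balance: if m·c·dT/dt = Q_met - hA·(T - T_amb), then Q_met can be recovered from temperature measurements alone. The sketch below uses hypothetical values for mass, heat capacity and the loss coefficient hA, not the paper's fitted cone properties.

```python
# Hedged sketch of inverse calorimetry via a lumped energy balance,
#   m*c*dT/dt = Q_met(t) - hA*(T - T_amb)  =>  Q_met = m*c*dT/dt + hA*(T - T_amb).
# Mass, heat capacity and the loss coefficient hA are illustrative placeholders.
import numpy as np

def metabolic_heating(t, T_cone, T_amb, m=0.5, c=3500.0, hA=0.05):
    """Estimated metabolic heating rate Q_met (W) from temperature series (t in s)."""
    dTdt = np.gradient(T_cone, t)            # finite-difference temperature rate
    return m * c * dTdt + hA * (T_cone - T_amb)

t = np.linspace(0.0, 6 * 3600.0, 200)                              # six-hour window
T_amb = np.full_like(t, 22.0)
T_cone = 22.0 + 10.0 * np.exp(-((t - 3 * 3600.0) / 5400.0) ** 2)   # synthetic midday peak
print(round(metabolic_heating(t, T_cone, T_amb).max(), 3))
```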

  11. Modelling Oxygen Dynamics in an Intermittently Stratified Estuary: Estimation of Process Rates Using Field Data

    NASA Astrophysics Data System (ADS)

    Borsuk, M. E.; Stow, C. A.; Luettich, R. A.; Paerl, H. W.; Pinckney, J. L.

    2001-01-01

    The relationship between bottom water dissolved oxygen concentration, vertical stratification, and temperature was investigated for the Neuse River estuary, North Carolina, a shallow, intermittently mixed estuary, using approximately 10 years of weekly/biweekly, mid-channel data. A generalized additive model (GAM) was used to initially explore the major relationships among observed variables. The results of this statistical model guided the specification of a process-based model of oxygen dynamics that is consistent with theory yet simple enough to be parameterized using available field data. The nonlinear optimization procedure employed allows for the direct estimation of microbial oxygen consumption and physical reoxygenation rates, including the effects of temperature and vertical stratification. These estimated rates may better represent aggregate system behaviour than closed chamber measurements made in the laboratory and in situ. The resulting model describes 79% of the variation in dissolved oxygen concentration and is robust when compared across separate locations and time periods. Model predictions suggest that the spatial extent and duration of hypoxia in the bottom waters of the Neuse are controlled by the balance between the net oxygen depletion rate and the frequency of vertical mixing events. During cool months, oxygen consumption rates remain low enough to keep oxygen concentration well above levels of concern even under extended periods of stratification. A concentration below 4 mg l⁻¹ is only expected under extended periods without vertical mixing when bottom water temperature exceeds 15 °C, while a concentration below 2 mg l⁻¹ is only expected when water temperature exceeds 20 °C. To incorporate the effects of parameter uncertainty, model error, and natural variability on model prediction, we used Monte Carlo simulation to generate distributions for the predicted number of days of hypoxia during the summer season. The expected number of days with
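
    The control described above, a temperature-dependent depletion rate reset by episodic mixing, can be sketched as follows. Rate constants and the mixing schedule are illustrative, not the fitted Neuse values.

```python
# Minimal sketch: bottom-water DO falls at a temperature-dependent consumption rate
# and is reset to saturation by vertical mixing events (all parameters illustrative).
import numpy as np

def simulate_do(days, T_bottom, mix_days, C0=8.0, C_sat=8.0, k20=0.3, theta=1.07):
    C = np.empty(days)
    C[0] = C0
    for d in range(1, days):
        rate = k20 * theta ** (T_bottom[d] - 20.0)   # mg/L consumed per day
        C[d] = C_sat if d in mix_days else max(C[d - 1] - rate, 0.0)
    return C

do = simulate_do(30, T_bottom=np.full(30, 25.0), mix_days={10, 20})
print(int((do < 4.0).sum()), "days below 4 mg/L")
```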

  12. Geodetic slip-rates from block-modeling of a dense GPS velocity field in Italy: comparison with geological slip-rates and seismic moment release

    NASA Astrophysics Data System (ADS)

    Serpelloni, E.; Anderlini, L.; Mastrolembo, B.; Cavaliere, A.; Baldi, P.; Belardinelli, M. E.

    2012-04-01

    uncertainties in the geological slip-rates. In general, geodetically determined slip-rates are faster than geologically determined ones. We use our best-fit block model to define a continuous horizontal velocity field, and estimate the velocity-gradient field. The strain-rate field estimated from the sparse GPS velocities shows areas, outside the major fault systems, with significant strain accumulation, which are interpreted as the effects of secondary localized deformation zones that would require additional investigations. We calculate the regional moment accumulation rate in Italy based on the geodetically determined fault slip rates, and compare it with the moment released from earthquake catalogues with the goal of estimating localized moment deficits.
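
    The moment-accumulation comparison mentioned at the end follows from the standard scalar relation M0_rate = mu * A * s. The sketch below uses illustrative numbers, not the paper's fault geometry or slip rates.

```python
# Hedged sketch of a geodetic moment accumulation rate, M0_rate = mu * A * s,
# with rigidity mu (Pa), locked area A (m^2) and slip rate s (m/yr) all illustrative.
def moment_rate(area_m2, slip_m_per_yr, mu=3.0e10):
    """Moment accumulation rate in N*m per year."""
    return mu * area_m2 * slip_m_per_yr

# e.g. a hypothetical 50 km x 15 km locked patch slipping at 2 mm/yr
print(moment_rate(50e3 * 15e3, 2e-3))
```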

  13. Does childhood cancer affect parental divorce rates? A population-based study.

    PubMed

    Syse, Astri; Loge, Jon H; Lyngstad, Torkild H

    2010-02-10

    PURPOSE: Cancer in children may profoundly affect parents' personal relationships in terms of psychological stress and an increased care burden. This could hypothetically elevate divorce rates. Few studies on divorce occurrence exist, so the effect of childhood cancers on parental divorce rates was explored. PATIENTS AND METHODS: Data on the entire Norwegian married population, age 17 to 69 years, with children age 0 to 20 years in 1974 to 2001 (N = 977,928 couples) were retrieved from the Cancer Registry, the Central Population Register, the Directorate of Taxes, and population censuses. Divorce rates for 4,590 couples who were parenting a child with cancer were compared with those of otherwise similar couples by discrete-time hazard regression models. RESULTS: Cancer in a child was not associated with an increased risk of parental divorce overall. An increased divorce rate was observed with Wilms tumor (odds ratio [OR], 1.52) but not with any of the other common childhood cancers. The child's age at diagnosis, time elapsed from diagnosis, and death from cancer did not influence divorce rates significantly. Increased divorce rates were observed for couples in whom the mothers had an education greater than high school level (OR, 1.16); the risk was particularly high shortly after diagnosis, for CNS cancers and Wilms tumors, for couples with children 0 to 9 years of age at diagnosis, and after a child's death. CONCLUSION: This large, registry-based study shows that cancer in children is not associated with an increased parental divorce rate, except with Wilms tumors. Couples in whom the wife is highly educated appear to face increased divorce rates after a child's cancer, and this may warrant additional study.

  14. Process-Based Modeling of Constructed Wetlands

    NASA Astrophysics Data System (ADS)

    Baechler, S.; Brovelli, A.; Rossi, L.; Barry, D. A.

    2007-12-01

    Constructed wetlands (CWs) are widespread facilities for wastewater treatment. In subsurface flow wetlands, contaminated wastewater flows through a porous matrix, where oxidation and detoxification phenomena occur. Despite the large number of working CWs, system design and optimization are still mainly based upon empirical equations or simplified first-order kinetics. This results from an incomplete understanding of the system functioning, and may in turn hinder the performance and effectiveness of the treatment process. As a result, CWs are often considered not suitable to meet high water-quality standards, or to treat water contaminated with recalcitrant anthropogenic contaminants. To date, only a limited number of detailed numerical models have been developed and successfully applied to simulate constructed wetland behavior. Among these, one of the most complete and powerful is CW2D, which is based on Hydrus2D. The aim of this work is to develop a comprehensive simulator tailored to model the functioning of horizontal flow constructed wetlands and in turn provide a reliable design and optimization tool. The model is based upon PHWAT, a general reactive transport code for saturated flow. PHWAT couples MODFLOW, MT3DMS and PHREEQC-2 using an operator-splitting approach. The use of PHREEQC to simulate reactions allows great flexibility in simulating biogeochemical processes. The biogeochemical reaction network is similar to that of CW2D, and is based on the Activated Sludge Model (ASM). Kinetic oxidation of carbon sources and nutrient transformations (nitrogen and phosphorus primarily) are modeled via Monod-type kinetic equations. Oxygen dissolution is accounted for via a first-order mass-transfer equation. While the ASM model only includes a limited number of kinetic equations, the new simulator permits incorporation of an unlimited number of both kinetic and equilibrium reactions. Changes in pH, redox potential and surface reactions can be easily incorporated
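
    The kinetic core named above can be illustrated compactly: double-Monod oxidation of a carbon substrate coupled to first-order oxygen dissolution. The sketch below is a minimal explicit-Euler integration with illustrative parameter values, not the CW2D/PHWAT reaction network itself.

```python
# Hedged sketch of the kinetic core named above: double-Monod oxidation of a carbon
# substrate plus first-order oxygen dissolution; all parameter values are illustrative.
import numpy as np

def step(S, O, X, dt=0.001, mu_max=6.0, Ks=10.0, Ko=0.2, kLa=5.0, O_sat=9.0, Y=0.6):
    """One explicit-Euler step for substrate S, oxygen O (mg/L) and biomass X."""
    mu = mu_max * S / (Ks + S) * O / (Ko + O)        # Monod limitation by S and O
    S_new = S - mu * X / Y * dt                      # substrate oxidation
    O_new = O + (kLa * (O_sat - O) - mu * X * (1 - Y) / Y) * dt
    X_new = X + mu * X * dt                          # biomass growth
    return S_new, O_new, X_new

S, O, X = 100.0, 9.0, 5.0
for _ in range(10000):                               # ten days at dt = 0.001 day
    S, O, X = step(S, O, X)
print(round(S, 2), round(O, 2), round(X, 2))
```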

  15. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software, simply by using images from the community, without visiting the site.

  16. Effect of Shear Rate and Temperature on Rheological Properties of Vegetable Based Oil

    NASA Astrophysics Data System (ADS)

    Nik, W. B. Wan; Giap, S. G. Eng; Senin, H. B.; Bulat, K. H. Ku

    2007-05-01

    Petroleum oil has been the raw material for over 90% of hydraulic fluids. The limitations of this base material, being non-renewable and not environmentally friendly, and concerns about its future sustainability have prompted a search for more stable and environmentally friendly alternatives. This article presents rheological aspects of hydraulic fluid derived from bio-based material when used as hydraulic fluid. Palm oil with F10 additive is found to be the most shear-stable. Various empirical models such as the modified Power Law, Herschel-Bulkley and Arrhenius-type relationship are used to evaluate the rheological data. The influence of shear rate and temperature on the variation of viscosity is clearly observed, but temperature has the more significant influence. Interpretations of the rheological models indicate that crop oils belong to the pseudo-plastic category. The effect of oil degradation on viscosity as a physical property is also evaluated.
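
    The two model families named above can be written down directly. The parameter values in the sketch below are illustrative, not the paper's fitted values for palm oil.

```python
# Hedged sketch of the models named above: Herschel-Bulkley shear stress and an
# Arrhenius-type temperature dependence of viscosity (illustrative parameters).
import numpy as np

def herschel_bulkley(gamma_dot, tau0=0.5, K=0.12, n=0.85):
    """Shear stress (Pa); n < 1 reproduces the pseudo-plastic behaviour of crop oils."""
    return tau0 + K * gamma_dot ** n

def arrhenius_viscosity(T_kelvin, A=1e-4, Ea=30e3, R=8.314):
    """Viscosity (Pa.s) as mu = A * exp(Ea / (R*T))."""
    return A * np.exp(Ea / (R * T_kelvin))

print(round(herschel_bulkley(100.0), 3))                 # stress at 100 1/s
print(arrhenius_viscosity(np.array([298.0, 348.0])))     # viscosity falls with T
```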

  17. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    NASA Astrophysics Data System (ADS)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

    The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect their system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of this data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with
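
    The two-stage approach, a rating curve plus an ARMA model of its residuals, can be sketched on synthetic data as follows. The data and the statsmodels-based fit below are illustrative, not DEP's operational implementation.

```python
# Sketch of the combined approach: a log-log flow-turbidity rating curve by linear
# regression, then an ARMA(1,2) model on its residuals (synthetic data throughout).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
flow = rng.lognormal(3.0, 0.8, 500)                          # synthetic daily streamflow
log_turb = 0.9 * np.log(flow) - 1.0 + rng.normal(0, 0.3, 500)

# Stage 1: rating curve, log(turbidity) = b0 + b1 * log(flow)
X = np.column_stack([np.ones_like(flow), np.log(flow)])
b, *_ = np.linalg.lstsq(X, log_turb, rcond=None)
resid = log_turb - X @ b

# Stage 2: ARMA(1,2) on the residuals sharpens short-range forecasts
arma = ARIMA(resid, order=(1, 0, 2)).fit()
print(b, arma.forecast(3))
```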

  18. Tables of Nuclear Cross Sections and Reaction Rates: An Addendum to the Paper "Astrophysical Reaction Rates from Statistical Model Calculations"

    NASA Astrophysics Data System (ADS)

    Rauscher, Thomas; Thielemann, Friedrich-Karl

    2001-09-01

    In a previous publication (Atomic Data and Nuclear Data Tables 75, 1 (2000)), we gave seven-parameter analytical fits to theoretical reaction rates derived from nuclear cross sections calculated in the statistical model (Hauser-Feshbach formalism) for targets with 10<=Z<=83 (Ne to Bi) and for a mass range reaching the neutron and proton driplines. Reactions considered were (n,γ), (n,p), (n,α), (p,γ), (p,α), (α,γ), and their inverse reactions. Here, we present the theoretical nuclear cross sections and astrophysical reaction rates from which those rate fits were derived, and we provide these data as on-line electronic files. Corresponding to the fitted rates, two complete data sets are provided, one of which includes a phenomenological treatment of shell quenching for neutron-rich nuclei.

  19. Distortion-rate models for entropy-coded lattice vector quantization.

    PubMed

    Raffy, P; Antonini, M; Barlaud, M

    2000-01-01

    The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization, while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of rate-distortion tradeoff. In this paper, we focus our attention on the modeling of the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) on fixed-rate cubic quantizers to lattices under a high rate assumption. Second, we derive new rate models for ECLVQ, efficient at low bit rates without any high rate assumptions. Simulation results demonstrate the precision of our models. PMID:18262939
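
    For orientation, high-rate distortion baselines of the kind the authors generalize have a familiar exponential form; the sketch below is a generic illustration, with a placeholder constant rather than a value from the paper.

```python
# Hedged illustration of a high-rate distortion model, D(R) ~ c * 2**(-2R)
# (per-sample MSE versus rate in bits); c is a placeholder constant.
def high_rate_mse(R_bits, c=1.0):
    return c * 2.0 ** (-2.0 * R_bits)

for R in (0.5, 1.0, 2.0, 4.0):
    print(R, high_rate_mse(R))   # each extra bit cuts MSE by a factor of 4
```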

  20. Variable-rate colour image quantization based on quadtree segmentation

    NASA Astrophysics Data System (ADS)

    Hu, Y. C.; Li, C. Y.; Chuang, J. C.; Lo, C. C.

    2011-09-01

    A novel variable-sized block encoding with threshold control for colour image quantization (CIQ) is presented in this paper. In CIQ, the colour palette used has a great influence on the reconstructed image quality. Typically, a higher image quality and a larger storage cost are obtained when a larger-sized palette is used in CIQ. To cut down the storage cost while preserving the quality of the reconstructed images, a threshold control policy for quadtree segmentation is used in this paper. Experimental results show that the proposed method adaptively provides the desired bit rates while achieving better image quality than CIQ with multiple palettes of different sizes.
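
    The threshold-controlled split at the heart of such schemes can be sketched in a few lines: a block is subdivided while its variance exceeds a threshold, so smooth regions end up in large blocks and detailed regions in small ones. The variance test and threshold below are illustrative stand-ins for the paper's criterion.

```python
# Minimal sketch of threshold-controlled quadtree segmentation: a block splits while
# its variance exceeds a threshold, so smooth regions get large blocks (lower bit rate).
import numpy as np

def quadtree_blocks(img, x, y, size, threshold, min_size=2):
    """Yield (x, y, size) leaf blocks of a square image region."""
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= threshold:
        yield (x, y, size)
        return
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            yield from quadtree_blocks(img, x + dx, y + dy, half, threshold, min_size)

img = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(float)
print(sum(1 for _ in quadtree_blocks(img, 0, 0, 64, threshold=400.0)))
```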

  1. Exposure Modeling of Residential Air Exchange Rates for NEXUS Participants

    EPA Science Inventory

    Due to cost and participant burden of personal measurements, air pollution health studies often estimate exposures using local ambient air monitors. Since outdoor levels do not necessarily reflect personal exposures, we developed the Exposure Model for Individuals (EMI) to improv...

  2. Exposure Modeling of Residential Air Exchange Rates for NEXUS Participants.

    EPA Science Inventory

    Due to cost and participant burden of personal measurements, air pollution health studies often estimate exposures using local ambient air monitors. Since outdoor levels do not necessarily reflect personal exposures, we developed the Exposure Model for Individuals (EMI) to improv...

  3. Moving from gamma passing rates to patient DVH-based QA metrics in pretreatment dose QA

    SciTech Connect

    Zhen, Heming; Nelms, Benjamin E.; Tome, Wolfgang A.

    2011-10-15

    Purpose: The purpose of this work is to explore the usefulness of the gamma passing rate metric for per-patient, pretreatment dose QA and to validate a novel patient-dose/DVH-based method and its accuracy and correlation. Specifically, correlations between: (1) gamma passing rates for three 3D dosimeter detector geometries vs clinically relevant patient DVH-based metrics; (2) gamma passing rates of whole patient dose grids vs DVH-based metrics; (3) gamma passing rates filtered by region of interest (ROI) vs DVH-based metrics; and (4) the capability of a novel software algorithm that estimates corrected patient dose-DVH based on conventional phantom QA data are analyzed. Methods: Ninety-six unique "imperfect" step-and-shoot IMRT plans were generated by applying four different types of errors on 24 clinical Head/Neck patients. The 3D patient doses as well as the dose to a cylindrical QA phantom were then recalculated using an error-free beam model to serve as a simulated measurement for comparison. Resulting deviations to the planned vs simulated measured DVH-based metrics were generated, as were gamma passing rates for a variety of difference/distance criteria covering: dose-in-phantom comparisons and dose-in-patient comparisons, with the in-patient results calculated both over the whole grid and per-ROI volume. Finally, patient dose and DVH were predicted using the conventional per-beam planar data as input into a commercial "planned dose perturbation" (PDP) algorithm, and the results of these predicted DVH-based metrics were compared to the known values. Results: A range of weak to moderate correlations were found between clinically relevant patient DVH metrics (CTV-D95, parotid Dmean, spinal cord D1cc, and larynx Dmean) and both 3D detector and 3D patient gamma passing rate (3%/3 mm, 2%/2 mm) for dose-in-phantom along with dose-in-patient for both whole patient volume and filtered per-ROI. There was considerable scatter in the gamma passing rate
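
    For readers unfamiliar with the metric, a brute-force 1D gamma computation looks like the sketch below. It is a simplified global-dose illustration of the 3%/3 mm criterion, not the commercial QA software used in the study.

```python
# Hedged, brute-force sketch of a 1D global-gamma computation: for each reference
# point, gamma is the minimum over evaluated points of
#   sqrt(dose_diff**2 / DD**2 + distance**2 / DTA**2),
# and the passing rate is the fraction of points with gamma <= 1 (3%/3 mm here).
import numpy as np

def gamma_passing_rate(x_mm, d_ref, d_eval, dd=0.03, dta_mm=3.0):
    norm = dd * d_ref.max()                              # global dose-difference norm
    dist2 = ((x_mm[:, None] - x_mm[None, :]) / dta_mm) ** 2
    dose2 = ((d_eval[None, :] - d_ref[:, None]) / norm) ** 2
    gamma = np.sqrt((dist2 + dose2).min(axis=1))
    return (gamma <= 1.0).mean()

x = np.linspace(0.0, 100.0, 201)                         # positions in mm
d_ref = np.exp(-((x - 50.0) / 20.0) ** 2)
d_eval = 1.02 * np.exp(-((x - 51.0) / 20.0) ** 2)        # slightly scaled and shifted
print(round(gamma_passing_rate(x, d_ref, d_eval), 3))
```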

  4. Particle-based model for skiing traffic

    NASA Astrophysics Data System (ADS)

    Holleczek, Thomas; Tröster, Gerhard

    2012-05-01

    We develop and investigate a particle-based model for ski slope traffic. Skiers are modeled as particles with mass that are exposed to social and physical forces, which define their riding behavior during descents on ski slopes. We also report position and speed data of 21 skiers recorded with GPS-equipped cell phones on two ski slopes. A comparison of these data with the trajectories resulting from computer simulations of our model shows a good correspondence. A study of the relationship among the density, speed, and flow of skiers reveals that congestion does not occur even with arrival rates of skiers exceeding the maximum ski lift capacity. In a sensitivity analysis, we identify the kinetic friction coefficient of skis on snow, the skier mass, the range of repelling social forces, and the arrival rate of skiers as the crucial parameters influencing the simulation results. Our model allows for the prediction of speed zones and skier densities on ski slopes, which is important in the prevention of skiing accidents.
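
    A single particle update in the spirit of such social-force models combines a driving force along the slope with friction, drag, and repulsion from nearby skiers. All constants in the sketch below are illustrative, not the paper's fitted parameters.

```python
# Hedged sketch of a social-force update for one skier: gravity along the slope,
# kinetic friction, quadratic drag, and exponentially decaying repelling social
# forces from nearby skiers; all constants are illustrative, not the paper's fits.
import numpy as np

def step(pos, vel, others, dt=0.1, m=80.0, g=9.81, slope=np.radians(15.0),
         mu=0.05, k_drag=0.6, A_soc=30.0, r_soc=5.0):
    f = np.array([0.0, m * g * np.sin(slope)])        # downhill driving force
    speed = np.linalg.norm(vel)
    if speed > 0.0:                                   # friction and drag oppose motion
        f -= (mu * m * g * np.cos(slope) + k_drag * speed ** 2) * vel / speed
    for q in others:                                  # repulsion from other skiers
        d = pos - q
        dist = np.linalg.norm(d)
        if 0.0 < dist < 3.0 * r_soc:
            f += A_soc * np.exp(-dist / r_soc) * d / dist
    vel = vel + f / m * dt
    return pos + vel * dt, vel

p, v = step(np.array([0.0, 0.0]), np.array([0.0, 5.0]), [np.array([1.0, 8.0])])
print(p, v)
```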

  5. Optimizing the Parameters of the Rate-and-State Constitutive Law in an Earthquake Clustering Model

    NASA Astrophysics Data System (ADS)

    Console, R.; Murru, M.; Catalli, F.

    2004-12-01

    The phenomenon of earthquake clustering, i.e. the increase of occurrence probability for seismic events close in space and time to other previous earthquakes, has been modeled both by statistical and physical processes. From a statistical viewpoint, the so-called epidemic model (ETAS) introduced by Ogata in 1988 and its variations have become fairly well known in the seismological community. Tests on real seismicity and comparison with a plain time-independent Poissonian model through likelihood-based methods have reliably proved their validity. On the other hand, in the last decade many papers have been published on the so-called Coulomb stress change principle, based on the theory of elasticity, showing qualitatively that an increase of the Coulomb stress in a given area is usually associated with an increase of seismic activity. More specifically, the rate-and-state theory developed by Dieterich in the 1990s has been able to give a physical justification to the phenomenon known as the Omori law. According to this law, a mainshock is followed by a series of aftershocks whose frequency decreases in time as an inverse power law. In this study we give an outline of the above mentioned stochastic and physical models, and build up an approach by which these models can be merged into a single algorithm and statistically tested. The application to the seismicity of Japan from 1970 to 2003 shows that the new model incorporating the physical concept of the rate-and-state theory performs even better than the purely stochastic model, with a smaller number of free parameters.
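
    The Omori law invoked above has a compact rate form; the parameter values in the sketch below are illustrative.

```python
# The modified Omori law as a rate function, n(t) = K / (t + c)**p,
# with illustrative parameter values.
def omori_rate(t_days, K=100.0, c=0.05, p=1.1):
    """Aftershock rate (events/day) at time t after the mainshock."""
    return K / (t_days + c) ** p

for t in (0.1, 1.0, 10.0, 100.0):
    print(t, round(omori_rate(t), 2))   # inverse power-law decay of aftershocks
```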

  6. Model-based Utility Functions

    NASA Astrophysics Data System (ADS)

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.

  7. Identifiability of altimetry-based rating curve parameters in function of river morphological parameters

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; André Garambois, Pierre; Calmant, Stéphane; Paiva, Rodrigo; Walter, Collischonn; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Bonnet, Marie-Paule; Seyler, Frédérique; Monnier, Jérôme

    2016-04-01

    Estimating river discharge for ungauged river reaches from satellite measurements is not straightforward given the nonlinearity of flow behavior with respect to measurable and non-measurable hydraulic parameters. In fact, current satellite datasets do not give access to key parameters such as river bed topography and roughness. A unique set of almost one thousand altimetry-based rating curves was built by fit of ENVISAT and Jason-2 water stages with discharges obtained from the MGB-IPH rainfall-runoff model in the Amazon basin. These rated discharges were successfully validated against simulated discharges (Ens = 0.70) and in-situ discharges (Ens = 0.71) and are not mission-dependent. The rating curve is written Q = a(Z-Z0)^b*sqrt(S), with Z the water surface elevation and S its slope, both obtained from satellite altimetry, a and b the power-law coefficient and exponent, and Z0 the river bed elevation such that Q(Z0) = 0. For several river reaches in the Amazon basin where ADCP measurements are available, the Z0 values are fairly well validated with a relative error lower than 10%. The present contribution aims at relating the identifiability and the physical meaning of a, b and Z0 given various hydraulic and geomorphologic conditions. Synthetic river bathymetries sampling a wide range of rivers and inflow discharges are used to perform twin experiments. A shallow water model is run for generating synthetic satellite observations, and then rating curve parameters are determined for each river section with an MCMC algorithm. The twin experiments show that a rating curve formulation including water surface slope, i.e. closer to the Manning equation form, improves parameter identifiability. The compensation between parameters is limited, especially for reaches with little water surface variability. Rating curve parameters are analyzed for riffles and pools, for small to large rivers, different river slopes and cross section shapes. It is shown that the river bed
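
    The quoted rating curve translates directly into code; the parameter values below are placeholders for one hypothetical reach, not fitted Amazon values.

```python
# The altimetry-based rating curve quoted above, Q = a*(Z - Z0)**b * sqrt(S), as a
# function; the parameter values are placeholders for one hypothetical reach.
import math

def discharge(Z, S, a=50.0, b=1.7, Z0=12.0):
    """Discharge (m^3/s) from water surface elevation Z (m) and slope S (m/m)."""
    if Z <= Z0:
        return 0.0
    return a * (Z - Z0) ** b * math.sqrt(S)

print(round(discharge(Z=15.2, S=5e-5), 1))
```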

  8. Development and performance of a fluence rate distribution model for a cylindrical excimer lamp.

    PubMed

    Naunovic, Zorana; Pennell, Kelly G; Blatchley, Ernest R

    2008-03-01

    Ultraviolet disinfection systems employing excimer lamp technology represent a suitable choice in situations where lamp mercury content is restricted, or otherwise undesirable. The XeBr* excimer lamp emits nearly monochromatic radiation at 282 nm, and dose-response experiments with Bacillus subtilis spores have shown that it is germicidally effective. A numerical model was developed to describe the fluence rate (E') distribution emanating from a cylindrical XeBr* excimer lamp, based on liquid water or air as the surrounding medium. The E' distribution model is based on physical phenomena that are known to govern excimer lamps; the model also accounts for refraction, reflection, and absorbance effects of the quartz lamp envelope and the media surrounding the lamp. Measurements of the E' distribution by local actinometry supported the validity of the numerical model. This model can be used as a component (submodel) of a more general model to simulate the behavior of photochemical reactors that employ excimer lamps as their source of electromagnetic radiation.
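
    A minimal sketch in the spirit of such fluence rate models discretizes the lamp into point sources and sums inverse-square contributions attenuated by the medium. This simplification omits the quartz-envelope refraction and reflection effects the paper accounts for, and all parameter values are illustrative.

```python
# Hedged sketch of a fluence rate model via point-source summation along the lamp;
# contributions follow the inverse-square law with exponential absorption.
import numpy as np

def fluence_rate(r, z, power=25.0, lamp_len=0.3, n_src=200, alpha=10.0):
    """E' (W/m^2) at radial distance r (m) and height z (m); alpha in 1/m."""
    zs = np.linspace(0.0, lamp_len, n_src)      # point sources along the lamp axis
    d = np.sqrt(r ** 2 + (z - zs) ** 2)
    return float(np.sum(power / n_src / (4 * np.pi * d ** 2) * np.exp(-alpha * d)))

print(round(fluence_rate(0.05, 0.15), 2))
```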

  9. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference

    PubMed Central

    Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J.

    2015-01-01

    The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an ‘intelligent coincidence detector’, which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888

  10. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference.

    PubMed

    Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J

    2015-10-01

    The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888
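
    The sequential, winnerless-competition-like dynamics described in these two records can be sketched with a small generalized Lotka-Volterra system. The three-unit coupling below is a textbook construction chosen so that dominance rotates among units, not the paper's fitted network.

```python
# Hedged sketch of sequential firing-rate dynamics from generalized Lotka-Volterra
# equations, dx_i/dt = x_i * (1 - (A @ x)_i). With 0 < alpha < 1 < beta and
# alpha + beta > 2, dominance rotates among the three units (May-Leonard system).
import numpy as np

alpha, beta = 0.8, 1.3
A = np.array([[1.0, alpha, beta],
              [beta, 1.0, alpha],
              [alpha, beta, 1.0]])
x = np.array([0.6, 0.3, 0.1])
dt = 0.01
for step in range(6000):
    x = np.clip(x + dt * x * (1.0 - A @ x), 1e-12, None)
    if step % 1000 == 0:
        print(step, int(np.argmax(x)))   # the momentarily dominant unit changes
```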

  11. Evaluating range-expansion models for calculating nonnative species' expansion rate.

    PubMed

    Preuss, Sonja; Low, Matthew; Cassel-Lundhagen, Anna; Berggren, Asa

    2014-07-01

    Species range shifts associated with environmental change or biological invasions are increasingly important study areas. However, quantifying range expansion rates may be heavily influenced by methodology and/or sampling bias. We compared expansion rate estimates of Roesel's bush-cricket (Metrioptera roeselii, Hagenbach 1822), a nonnative species currently expanding its range in south-central Sweden, from range statistic models based on distance measures (mean, median, 95th gamma quantile, marginal mean, maximum, and conditional maximum) and an area-based method (grid occupancy). We used sampling simulations to determine the sensitivity of the different methods to incomplete sampling across the species' range. For periods when we had comprehensive survey data, range expansion estimates clustered into two groups: (1) those calculated from range margin statistics (gamma, marginal mean, maximum, and conditional maximum: ~3 km/year), and (2) those calculated from the central tendency (mean and median) and the area-based method of grid occupancy (~1.5 km/year). Range statistic measures differed greatly in their sensitivity to sampling effort; the proportion of sampling required to achieve an estimate within 10% of the true value ranged from 0.17 to 0.9. Grid occupancy and median were most sensitive to sampling effort, and the maximum and gamma quantile the least. If periods with incomplete sampling were included in the range expansion calculations, this generally lowered the estimates (range 16-72%), with the exception of the gamma quantile, which was slightly higher (6%). Care should be taken when interpreting expansion rate estimates from data sampled from only a fraction of the full distribution. Methods based on the central tendency will give rates approximately half that of methods based on the range margin. The gamma quantile method appears to be the most robust to incomplete sampling bias and should be considered as the method of choice when sampling the entire

  12. Evaluating range-expansion models for calculating nonnative species' expansion rate

    PubMed Central

    Preuss, Sonja; Low, Matthew; Cassel-Lundhagen, Anna; Berggren, Åsa

    2014-01-01

    Species range shifts associated with environmental change or biological invasions are increasingly important study areas. However, quantifying range expansion rates may be heavily influenced by methodology and/or sampling bias. We compared expansion rate estimates of Roesel's bush-cricket (Metrioptera roeselii, Hagenbach 1822), a nonnative species currently expanding its range in south-central Sweden, from range statistic models based on distance measures (mean, median, 95th gamma quantile, marginal mean, maximum, and conditional maximum) and an area-based method (grid occupancy). We used sampling simulations to determine the sensitivity of the different methods to incomplete sampling across the species' range. For periods when we had comprehensive survey data, range expansion estimates clustered into two groups: (1) those calculated from range margin statistics (gamma, marginal mean, maximum, and conditional maximum: ~3 km/year), and (2) those calculated from the central tendency (mean and median) and the area-based method of grid occupancy (~1.5 km/year). Range statistic measures differed greatly in their sensitivity to sampling effort; the proportion of sampling required to achieve an estimate within 10% of the true value ranged from 0.17 to 0.9. Grid occupancy and median were most sensitive to sampling effort, and the maximum and gamma quantile the least. If periods with incomplete sampling were included in the range expansion calculations, this generally lowered the estimates (range 16–72%), with the exception of the gamma quantile, which was slightly higher (6%). Care should be taken when interpreting expansion rate estimates from data sampled from only a fraction of the full distribution. Methods based on the central tendency will give rates approximately half that of methods based on the range margin. The gamma quantile method appears to be the most robust to incomplete sampling bias and should be considered as the method of choice when sampling the entire

  13. A Medical Cloud-Based Platform for Respiration Rate Measurement and Hierarchical Classification of Breath Disorders

    PubMed Central

    Fekr, Atena Roshan; Janidarmian, Majid; Radecka, Katarzyna; Zilic, Zeljko

    2014-01-01

    The measurement of human respiratory signals is crucial in cyberbiological systems. A disordered breathing pattern can be the first symptom of different physiological, mechanical, or psychological dysfunctions. Therefore, real-time monitoring of respiration patterns, as well as respiration rate, is a critical need in medical applications. There are several methods for respiration rate measurement. However, despite their accuracy, these methods are expensive and cannot easily be integrated into a body sensor network. In this work, we present a real-time cloud-based platform for remotely monitoring the respiration rate and classifying breath patterns. The proposed system is designed particularly for patients with breathing problems (e.g., respiratory complications after surgery) or sleep disorders. Our system includes a calibrated accelerometer sensor, Bluetooth Low Energy (BLE) and a cloud-computing model. We also suggest a procedure to improve the accuracy of the respiration rate for patients in resting positions. The overall error in the respiration rate calculation is 0.53%, with the SPR-BTA spirometer as the reference. Five types of respiration disorders, Bradypnea, Tachypnea, Cheyne-Stokes, Kussmaul, and Biot's breathing, are classified based on a hierarchical Support Vector Machine (SVM) with seven different features. We have evaluated the performance of the proposed classification when it is individualized to every subject (case 1) as well as when considering all subjects (case 2). Since the selection of the kernel function is a key factor in SVM performance, three different kernel functions are evaluated in this paper. The experiments are conducted with 11 subjects, and an average accuracy of 94.52% for case 1 and 81.29% for case 2 are achieved based on the Radial Basis Function (RBF). Finally, a performance evaluation has been done for normal and impaired subjects considering sensitivity, specificity and G-mean parameters of different kernel

  14. A medical cloud-based platform for respiration rate measurement and hierarchical classification of breath disorders.

    PubMed

    Fekr, Atena Roshan; Janidarmian, Majid; Radecka, Katarzyna; Zilic, Zeljko

    2014-06-24

    The measurement of human respiratory signals is crucial in cyberbiological systems. A disordered breathing pattern can be the first symptom of different physiological, mechanical, or psychological dysfunctions. Therefore, real-time monitoring of respiration patterns, as well as respiration rate, is a critical need in medical applications. There are several methods for respiration rate measurement. However, despite their accuracy, these methods are expensive and cannot easily be integrated into a body sensor network. In this work, we present a real-time cloud-based platform for remotely monitoring the respiration rate and classifying breath patterns. The proposed system is designed particularly for patients with breathing problems (e.g., respiratory complications after surgery) or sleep disorders. Our system includes a calibrated accelerometer sensor, Bluetooth Low Energy (BLE) and a cloud-computing model. We also suggest a procedure to improve the accuracy of the respiration rate for patients in resting positions. The overall error in the respiration rate calculation is 0.53%, with the SPR-BTA spirometer as the reference. Five types of respiration disorders, Bradypnea, Tachypnea, Cheyne-Stokes, Kussmaul, and Biot's breathing, are classified based on a hierarchical Support Vector Machine (SVM) with seven different features. We have evaluated the performance of the proposed classification when it is individualized to every subject (case 1) as well as when considering all subjects (case 2). Since the selection of the kernel function is a key factor in SVM performance, three different kernel functions are evaluated in this paper. The experiments are conducted with 11 subjects, and an average accuracy of 94.52% for case 1 and 81.29% for case 2 are achieved based on the Radial Basis Function (RBF). Finally, a performance evaluation has been done for normal and impaired subjects considering sensitivity, specificity and G-mean parameters of different kernel
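
    The classification stage can be sketched with an RBF-kernel SVM over simple per-window features. The sketch below is a flat multiclass stand-in for the paper's hierarchical scheme, and the three features, data and labels are synthetic.

```python
# Hedged sketch of the classification stage: an RBF-kernel SVM over simple per-window
# features; the features, data and labels are synthetic stand-ins for the paper's setup.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
# synthetic features per breathing window: [rate, depth, variability]
X = np.vstack([rng.normal(loc, 1.0, (50, 3))
               for loc in ([12, 5, 1], [30, 2, 1], [12, 5, 4])])
y = np.repeat([0, 1, 2], 50)   # e.g. normal / tachypnea / irregular (illustrative)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```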

  15. [Testing of germination rate of hybrid rice seeds based on near-infrared reflectance spectroscopy].

    PubMed

    Li, Yi-nian; Jiang, Dan; Liu, Ying-ying; Ding, Wei-min; Ding, Qi-shuo; Zha, Liang-yu

    2014-06-01

    The germination rate of rice seeds is currently measured according to the technical stipulation of germination testing for agricultural crop seeds. This procedure has several drawbacks, such as a long experimental period, high cost, and demanding professional requirements. A rapid and non-invasive method is therefore put forward to measure the germination rate of hybrid rice seeds based on near-infrared reflectance spectroscopy. Two varieties of hybrid rice seeds were aged artificially at 45 degrees C and 100% humidity for 0, 24, 48, 72, 96, 120 and 144 h. Spectral data of 280 samples of the 2 varieties of hybrid rice seeds with different aging times were acquired with a near-infrared spectral analyzer. The spectral data of the 280 samples were randomly divided into a calibration set (168 samples) and a prediction set (112 samples). The germination rate of rice seeds with different aging times was tested. A regression model was established using partial least squares (PLS). The effect of different spectral bands on the accuracy of the models was analyzed, and the effect of different spectral preprocessing methods on the accuracy of the models was also compared. The optimal model was achieved over the whole band range using standardization and orthogonal signal correction (OSC) preprocessing algorithms with the CM2000 software; the coefficients of determination of the calibration set (Rc) and the prediction set (Rp) were 0.965 and 0.931 respectively, and the standard errors of the calibration set (SEC) and prediction set (SEP) were 1.929 and 2.899 respectively. The relative error between tested and predicted values for the prediction set is below 4.2%. The experimental results show that it is feasible to detect rice germination rate rapidly and nondestructively using near-infrared spectroscopy analysis. PMID:25358159

  16. [Testing of germination rate of hybrid rice seeds based on near-infrared reflectance spectroscopy].

    PubMed

    Li, Yi-nian; Jiang, Dan; Liu, Ying-ying; Ding, Wei-min; Ding, Qi-shuo; Zha, Liang-yu

    2014-06-01

    The germination rate of rice seeds is currently measured according to the technical stipulation of germination testing for agricultural crop seeds. This procedure has several drawbacks, such as a long experimental period, high cost, and demanding professional requirements. A rapid and non-invasive method is therefore put forward to measure the germination rate of hybrid rice seeds based on near-infrared reflectance spectroscopy. Two varieties of hybrid rice seeds were aged artificially at 45 degrees C and 100% humidity for 0, 24, 48, 72, 96, 120 and 144 h. Spectral data of 280 samples of the 2 varieties of hybrid rice seeds with different aging times were acquired with a near-infrared spectral analyzer. The spectral data of the 280 samples were randomly divided into a calibration set (168 samples) and a prediction set (112 samples). The germination rate of rice seeds with different aging times was tested. A regression model was established using partial least squares (PLS). The effect of different spectral bands on the accuracy of the models was analyzed, and the effect of different spectral preprocessing methods on the accuracy of the models was also compared. The optimal model was achieved over the whole band range using standardization and orthogonal signal correction (OSC) preprocessing algorithms with the CM2000 software; the coefficients of determination of the calibration set (Rc) and the prediction set (Rp) were 0.965 and 0.931 respectively, and the standard errors of the calibration set (SEC) and prediction set (SEP) were 1.929 and 2.899 respectively. The relative error between tested and predicted values for the prediction set is below 4.2%. The experimental results show that it is feasible to detect rice germination rate rapidly and nondestructively using near-infrared spectroscopy analysis.
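
    The modeling step, PLS regression of germination rate on spectra with a 168/112 calibration/prediction split, can be sketched on synthetic data as follows. The spectra and response below are synthetic, not the paper's measurements.

```python
# Hedged sketch of the modeling step: partial least squares regression of germination
# rate on (synthetic) NIR spectra, mirroring the 168/112 split described above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
spectra = rng.normal(size=(280, 700))               # 280 samples x 700 wavelengths
germ = spectra[:, 100] * 3 + spectra[:, 400] + rng.normal(0, 0.1, 280)

X_cal, X_pred, y_cal, y_pred = train_test_split(
    spectra, germ, train_size=168, random_state=0)
pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
print(round(pls.score(X_pred, y_pred), 3))          # R^2 on the prediction set
```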

  17. A self-tuning model for inflation rate dynamics

    NASA Astrophysics Data System (ADS)

    Mamon, Rogemar; Duan, Zheng

    2010-09-01

    A regime-switching model is proposed to capture the structural changes in inflation dynamics. We apply a special version of the EM algorithm to find optimal parameter estimates of the model within the framework of a discrete-time finite-state Markov chain that governs the switching of regimes from one state to another. The model is applied to Canada's consumer price index (CPI) series, and its performance is assessed by comparing its one-step-ahead predictions with the actual data. We found that, within the dataset studied, a two-state Markov-switching model is sufficient to capture the dynamics of the Canadian CPI series. The proposed model is adaptive, as parameters are updated upon the arrival of new information. A description of how to calculate the standard errors for parameter estimates using the Fisher information matrix is provided. We also determine the optimal number of states for the Markov chain within the dataset considered via AIC analysis.
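
    The model class is easy to illustrate generatively: a hidden two-state Markov chain switches the mean and volatility of the observed series. Transition probabilities and state parameters in the sketch below are illustrative, not estimates from Canadian CPI data.

```python
# Minimal generative sketch of a two-state Markov-switching series: the hidden state
# selects the mean and noise level of each observation (illustrative parameters).
import numpy as np

rng = np.random.default_rng(6)
P = np.array([[0.95, 0.05],      # state 0: low-inflation regime
              [0.10, 0.90]])     # state 1: high-inflation regime
mu, sigma = [2.0, 6.0], [0.3, 1.0]

state, series = 0, []
for _ in range(240):             # e.g. 20 years of monthly observations
    state = rng.choice(2, p=P[state])
    series.append(rng.normal(mu[state], sigma[state]))
print(round(float(np.mean(series)), 2))
```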

  18. A Comparison of Model-Data Fit for Parametric and Nonparametric Item Response Theory Models Using Ordinal-Level Ratings

    ERIC Educational Resources Information Center

    Dyehouse, Melissa A.

    2009-01-01

    This study compared the model-data fit of a parametric item response theory (PIRT) model to a nonparametric item response theory (NIRT) model to determine the best-fitting model for use with ordinal-level alternate assessment ratings. The PIRT Generalized Graded Unfolding Model (GGUM) was compared to the NIRT Mokken model. Chi-square statistics…

  19. 44 CFR 61.12 - Rates based on a flood protection system involving Federal funds.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 44 Emergency Management and Assistance 1 2011-10-01 2011-10-01 false Rates based on a flood... EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.12 Rates based on a flood protection system...

  20. 44 CFR 61.12 - Rates based on a flood protection system involving Federal funds.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 44 Emergency Management and Assistance 1 2012-10-01 2011-10-01 true Rates based on a flood... EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.12 Rates based on a flood protection system...

  1. 44 CFR 61.12 - Rates based on a flood protection system involving Federal funds.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 44 Emergency Management and Assistance 1 2014-10-01 2014-10-01 false Rates based on a flood... EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.12 Rates based on a flood protection system...

  2. 44 CFR 61.12 - Rates based on a flood protection system involving Federal funds.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 44 Emergency Management and Assistance 1 2013-10-01 2013-10-01 false Rates based on a flood... EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.12 Rates based on a flood protection system...

  3. 44 CFR 61.12 - Rates based on a flood protection system involving Federal funds.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Rates based on a flood... EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.12 Rates based on a flood protection system...

  4. 48 CFR 1616.7002 - Clause-contracts based on cost analysis (experience rated).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... cost analysis (experience rated). 1616.7002 Section 1616.7002 Federal Acquisition Regulations System... based on cost analysis (experience rated). The clause at section 1652.216-71 shall be inserted in all FEHBP contracts based on cost analysis (experience rated)....

  5. Strengthened PAN-based carbon fibers obtained by slow heating rate carbonization

    PubMed Central

    Kim, Min-A; Jang, Dawon; Tejima, Syogo; Cruz-Silva, Rodolfo; Joh, Han-Ik; Kim, Hwan Chul; Lee, Sungho; Endo, Morinobu

    2016-01-01

    Large efforts have been made over the last 40 years to increase the mechanical strength of polyacrylonitrile (PAN)-based carbon fibers (CFs) using a variety of chemical or physical protocols. In this paper, we report a new method to increase CFs' mechanical strength using a slow heating rate during the carbonization process. This new approach increases both the carbon sp3 bonding and the number of nitrogen atoms with quaternary bonding in the hexagonal carbon network. Theoretical calculations support a crosslinking model promoted by the interstitial carbon atoms located in the graphitic interlayer spaces. The improvement in mechanical performance by controlled crosslinking between the carbon hexagonal layers of PAN-based CFs is a new concept that can contribute further to the tailoring of CF performance based on an understanding of their microstructure down to the atomic scale. PMID:27004752

  6. Using skew-logistic probability density function as a model for age-specific fertility rate pattern.

    PubMed

    Asili, Sahar; Rezaei, Sadegh; Najjar, Lotfollah

    2014-01-01

    Fertility rate is one of the most important global indexes. Previous researchers have proposed models that fit age-specific fertility rates. For example, mixture probability density functions have been proposed for situations with bi-modal fertility patterns. Such models are less useful for unimodal age-specific fertility rate patterns, so a model based on a skew-symmetric (skew-normal) pdf, flexible for both unimodal and bimodal fertility patterns, was proposed by Mazzuco and Scarpa (2011). In this paper, we introduce the skew-logistic probability density function as a better model: its residuals are smaller than those of the skew-normal model, and it estimates the model parameters more precisely. PMID:24967404
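
    A skew-logistic density in the Azzalini skew-symmetric form can be written as f(x) = 2 g(x) G(αx), with g and G the logistic pdf and cdf. The location, scale and shape values in the sketch below are illustrative, not fitted fertility parameters.

```python
# Hedged sketch of a skew-logistic density, f(x) = (2/scale) * g(z) * G(alpha*z)
# with z = (x - loc)/scale; g and G are the standard logistic pdf and cdf.
import numpy as np
from scipy.stats import logistic

def skew_logistic_pdf(x, loc=27.0, scale=6.0, alpha=1.5):
    z = (x - loc) / scale
    return 2.0 / scale * logistic.pdf(z) * logistic.cdf(alpha * z)

ages = np.linspace(15, 49, 5)
print(skew_logistic_pdf(ages))   # unimodal, right-skewed fertility-like shape
```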

  7. High-level reasoning and base-rate use: do we need cue-competition to explain the inverse base-rate effect?

    PubMed

    Juslin, P; Wennerholm, P; Winman, A

    2001-05-01

    Previous accounts of the inverse base-rate effect (D. L. Medin & S. M. Edelson, 1988) have revolved around the concept of cue-competition. In this article, the authors propose that high-level reasoning in the form of an eliminative inference mechanism may contribute to the effect. A quantitative implementation of this idea demonstrates that it has the power by itself to produce the pattern of base-rate effects in the Medin and Edelson (1988) design. Four predictions are derived that contradict the predictions of attention to distinctive input (ADIT; J. K. Kruschke, 1996), to date the most successful account of the inverse base-rate effect. Results from 3 experiments disconfirm the predictions of ADIT and demonstrate the importance of high-level reasoning in designs of the Medin and Edelson kind. Implications for the interpretation of the inverse base-rate effect and the attention-shifting mechanisms presumed by ADIT are discussed. PMID:11394684

  8. Rates of heat exchange in largemouth bass: experiment and model