Science.gov

Sample records for rate model based

  1. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  2. Rate-based degradation modeling of lithium-ion cells

    SciTech Connect

    E.V. Thomas; I. Bloom; J.P. Christophersen; V.S. Battaglia

    2012-05-01

    Accelerated degradation testing is commonly used as the basis to characterize battery cell performance over a range of stress conditions (e.g., temperatures). Performance is measured by some response that is assumed to be related to the state of health of the cell (e.g., discharge resistance). Often, the ultimate goal of such testing is to predict cell life at some reference stress condition, where cell life is defined to be the point in time where performance has degraded to some critical level. These predictions are based on a degradation model that expresses the expected performance level versus the time and conditions under which a cell has been aged. Usually, the degradation model relates the accumulated degradation to the time at a constant stress level. The purpose of this article is to present an alternative framework for constructing a degradation model that focuses on the degradation rate rather than the accumulated degradation. One benefit of this alternative approach is that prediction of cell life is greatly facilitated in situations where the temperature exposure is not isothermal. This alternative modeling framework is illustrated via a family of rate-based models and experimental data acquired during calendar-life testing of high-power lithium-ion cells.
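
    The rate-based framing can be sketched in a few lines: instead of fitting accumulated degradation at a constant stress, integrate an assumed rate law over an arbitrary temperature history. The Arrhenius-type rate and its constants below are illustrative placeholders, not the article's fitted family of models.

```python
import math

def degradation_rate(temp_k, pre_exp=1.0e5, ea_over_r=6000.0):
    """Hypothetical Arrhenius-type degradation rate (units arbitrary)."""
    return pre_exp * math.exp(-ea_over_r / temp_k)

def accumulated_degradation(temp_history_k, dt_hours=1.0):
    """Integrate the rate over an arbitrary (non-isothermal) temperature history."""
    return sum(degradation_rate(t) * dt_hours for t in temp_history_k)

# A constant 40 C history vs. a history split between 30 C and 50 C:
iso = accumulated_degradation([313.15] * 100)
varying = accumulated_degradation([303.15] * 50 + [323.15] * 50)
```

    Because the rate is convex in temperature, the time-varying history accumulates more degradation than an isothermal history with the same mean temperature, which is exactly the effect an accumulated-degradation model tied to one reference temperature would miss.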

  3. Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

    Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S
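
    The three hybridization methods are simple to state in code. The toy forecasts below are invented for illustration; only the combination rules (a)-(c) and the common-global-rate normalization follow the abstract.

```python
import math

def hybrid_max(s, t):
    """Method (a): cell-wise greater of S or T."""
    return [max(a, b) for a, b in zip(s, t)]

def hybrid_linear(s, t, w):
    """Method (b): weighted average; w is the fractional contribution from S."""
    return [w * a + (1.0 - w) * b for a, b in zip(s, t)]

def hybrid_log(s, t, w):
    """Method (c): weighted average of the logs (a weighted geometric mean)."""
    return [math.exp(w * math.log(a) + (1.0 - w) * math.log(b))
            for a, b in zip(s, t)]

def normalize(rates, global_rate):
    """Rescale a forecast so its cells sum to a common global rate."""
    scale = global_rate / sum(rates)
    return [r * scale for r in rates]

# Toy 4-cell forecasts (earthquakes per cell per year):
s = [0.2, 0.1, 0.05, 0.01]
t = [0.1, 0.3, 0.02, 0.02]
h = normalize(hybrid_log(s, t, 0.6), sum(s))
```

    Method (c) is multiplicative, so a cell forecast near zero in either parent stays near zero in the hybrid, whereas methods (a) and (b) let the more optimistic parent dominate.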

  4. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

    A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, notably including the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. The EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.
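
    For reference, the log-normal distribution named in the abstract has a closed-form Shannon (differential) entropy in which the width parameter appears directly. This is a generic sketch of those two ingredients, not the EBT model's entropy-production maximization itself.

```python
import math

def lognormal_pdf(x, mu, sigma):
    """PDF of a log-normal distribution, here modeling the epidemic spread rate."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2.0 * sigma ** 2)) / (
        x * sigma * math.sqrt(2.0 * math.pi))

def lognormal_entropy(mu, sigma):
    """Closed-form differential (Shannon) entropy of the log-normal distribution."""
    return mu + 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

# Wider distributions carry more entropy -- the handle available when the
# width parameter is selected by an entropy-based criterion:
h_narrow = lognormal_entropy(0.0, 0.5)
h_wide = lognormal_entropy(0.0, 1.5)
```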

  5. The Estimation and Control of the Electroslag Remelting Melt Rate by Mechanism-Based Modeling

    NASA Astrophysics Data System (ADS)

    Li, Wanzhou; Wang, Weiyu; Hu, Yuechen; Chen, Yixing

    2012-04-01

    The process control of industrial electroslag remelting production is addressed in this work. This article proposes a mechanism-based model using electrode displacement to estimate the melt rate, designs the remelting process control system, and uses practical application data to verify the validity of the model. The soft measurement of the melt rate based on mechanism modeling is proved to be an economical and reliable solution to the online melt rate estimation and control for large industrial electroslag remelting furnaces.

  6. Acid–base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer

    PubMed Central

    Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L.; Eisele, Fred L.; Siepmann, J. Ilja; Hanson, David R.; Zhao, Jun; McMurry, Peter H.

    2012-01-01

    Climate models show that particles formed by nucleation can affect cloud cover and, therefore, the earth's radiation budget. Measurements worldwide show that nucleation rates in the atmospheric boundary layer are positively correlated with concentrations of sulfuric acid vapor. However, current nucleation theories do not correctly predict either the observed nucleation rates or their functional dependence on sulfuric acid concentrations. This paper develops an alternative approach for modeling nucleation rates, based on a sequence of acid–base reactions. The model uses empirical estimates of sulfuric acid evaporation rates obtained from new measurements of neutral molecular clusters. The model predicts that nucleation rates equal the sulfuric acid vapor collision rate times a prefactor that is less than unity and that depends on the concentrations of basic gaseous compounds and preexisting particles. Predicted nucleation rates and their dependence on sulfuric acid vapor concentrations are in reasonable agreement with measurements from Mexico City and Atlanta. PMID:23091030

  7. A model-based technique for predicting pilot opinion ratings for large commercial transports

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1982-01-01

    A model-based technique for predicting pilot opinion ratings is described. Features of this procedure, which is based on the optimal-control model for pilot/vehicle systems, include (1) capability to treat "unconventional" aircraft dynamics, (2) a relatively free-form pilot model, (3) a simple scalar metric for attentional workload, and (4) a straightforward manner of proceeding from descriptions of the flight task environment and requirements to a prediction of pilot opinion rating. The method was able to provide a good match to a set of pilot opinion ratings obtained in a manned simulation study of large commercial aircraft in landing approach.

  8. A model-based technique for predicting pilot opinion ratings for large commercial transports

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1980-01-01

    A model-based technique for predicting pilot opinion ratings is described. Features of this procedure, which is based on the optimal-control model for pilot/vehicle systems, include (1) capability to treat 'unconventional' aircraft dynamics, (2) a relatively free-form pilot model, (3) a simple scalar metric for attentional workload, and (4) a straightforward manner of proceeding from descriptions of the flight task environment and requirements to a prediction of pilot opinion rating. The method is able to provide a good match to a set of pilot opinion ratings obtained in a manned simulation study of large commercial aircraft in landing approach.

  9. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

    We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, which is ∼1 mm/yr higher than the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10¹⁹ N·m/yr, with fast straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10¹⁹

  10. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter

    PubMed Central

    Zhang, Zhen; Ma, Yaopeng

    2016-01-01

    A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively. PMID:26861349
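
    A minimal sketch of the filter structure: play operators with different thresholds feed a linear output layer whose weights are adapted by LMS. The classical play operator (identity envelope functions) stands in for the generalized one here, and a single-tap input simplifies the tapped delay line.

```python
import math

class PlayOperator:
    """Play operator with threshold r (identity envelopes assumed, i.e. the
    classical special case of the generalized play operator)."""
    def __init__(self, r):
        self.r = r
        self.y = 0.0
    def step(self, x):
        self.y = max(x - self.r, min(x + self.r, self.y))
        return self.y

def lms_train(inputs, targets, thresholds, mu=0.1):
    """Adapt the output weights of a play-operator network with LMS."""
    ops = [PlayOperator(r) for r in thresholds]
    w = [0.0] * len(ops)
    for x, d in zip(inputs, targets):
        z = [op.step(x) for op in ops]                   # nonlinear network outputs
        y = sum(wi * zi for wi, zi in zip(w, z))          # filter output
        e = d - y                                         # instantaneous error
        w = [wi + mu * e * zi for wi, zi in zip(w, z)]    # LMS weight update
    return w

# Sanity check: with a zero threshold the operator reduces to the identity,
# so training the filter to reproduce its input should drive the weight to 1.
xs = [math.sin(0.05 * k) for k in range(1000)]
w = lms_train(xs, xs, thresholds=[0.0])
```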

  11. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter.

    PubMed

    Zhang, Zhen; Ma, Yaopeng

    2016-01-01

    A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively. PMID:26861349

  12. Estimating base rates of impairment in neuropsychological test batteries: a comparison of quantitative models.

    PubMed

    Decker, Scott L; Schneider, W Joel; Hale, James B

    2012-01-01

    Neuropsychologists frequently rely on a battery of neuropsychological tests with normally distributed scores to determine impaired functioning. The statistical likelihood of Type I error in clinical decision-making is in part determined by the base rate of normative individuals obtaining atypical performance on neuropsychological tests. Base rates are most accurately obtained from co-normed measures, but this is rarely accomplished in neuropsychological testing. Several statistical methods have been proposed to estimate base rates for tests that are not co-normed. This study compared two statistical approaches (binomial and Monte Carlo models) used to estimate the base rates for flexible test batteries. The two approaches were compared against empirically derived base rates for a multitest co-normed battery of cognitive measures. Estimates were compared across a variety of conditions, including age and different α levels (N = 3,356). Monte Carlo R² estimates ranged from .980 to .997 across five different age groups, indicating a good fit. In contrast, the binomial model fit estimates ranged from 0.387 to 0.646. Results confirm that the binomial model is insufficient for estimating base rates because it does not take into account correlations among measures in a multitest battery. Although the Monte Carlo model produced more accurate results, minor biases occurred that are likely due to skewness and kurtosis of test variables. Implications for future research and applied practice are discussed. PMID:22172567
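
    The gap between the two models comes down to independence. A minimal sketch, assuming an equicorrelated multivariate-normal battery (the study itself used empirical norms and a full correlation structure):

```python
import math
import random

def binomial_base_rate(k, alpha):
    """Binomial model: assumes the k tests are independent."""
    return 1.0 - (1.0 - alpha) ** k

def monte_carlo_base_rate(k, cutoff_z, rho, n_sims=20000, seed=1):
    """Monte Carlo model for an equicorrelated-normal battery: each score
    x_i = sqrt(rho)*g + sqrt(1 - rho)*e_i shares a common factor g."""
    rng = random.Random(seed)
    a, b = math.sqrt(rho), math.sqrt(1.0 - rho)
    hits = 0
    for _ in range(n_sims):
        g = rng.gauss(0.0, 1.0)
        if any(a * g + b * rng.gauss(0.0, 1.0) < cutoff_z for _ in range(k)):
            hits += 1
    return hits / n_sims

# 10 tests, 5th-percentile cutoff (z = -1.645), intercorrelation 0.5:
indep = binomial_base_rate(10, 0.05)
corr = monte_carlo_base_rate(10, -1.645, 0.5)
```

    Positive intercorrelation makes low scores cluster in the same individuals, so the true base rate of "at least one impaired score" falls below the binomial prediction, which is the overestimation the study attributes to the binomial model.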

  13. Modeling Low-Dose-Rate Effects in Irradiated Bipolar-Base Oxides

    SciTech Connect

    Cirba, C.R.; Fleetwood, D.M.; Graves, R.J.; Michez, A.; Milanowski, R.J.; Saigne, F.; Schrimpf, R.D.; Witczak, S.C.

    1998-10-26

    A physical model is developed to quantify the contribution of oxide-trapped charge to enhanced low-dose-rate gain degradation in bipolar junction transistors. Multiple-trapping simulations show that space charge limited transport is partially responsible for low-dose-rate enhancement. At low dose rates, more holes are trapped near the silicon-oxide interface than at high dose rates, resulting in larger midgap voltage shifts at lower dose rates. The additional trapped charge near the interface may cause an exponential increase in excess base current, and a resultant decrease in current gain for some NPN bipolar technologies.

  14. A New Statistically based Autoconversion rate Parameterization for use in Large-Scale Models

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Zhang, Junhua; Lohmann, Ulrike

    2002-01-01

    The autoconversion rate is a key process for the formation of precipitation in warm clouds. In climate models, physical processes such as autoconversion rate, which are calculated from grid mean values, are biased, because they do not take subgrid variability into account. Recently, statistical cloud schemes have been introduced in large-scale models to account for partially cloud-covered grid boxes. However, these schemes do not include the in-cloud variability in their parameterizations. In this paper, a new statistically based autoconversion rate considering the in-cloud variability is introduced and tested in three cases using the Canadian Single Column Model (SCM) of the global climate model. The results show that the new autoconversion rate improves the model simulation, especially in terms of liquid water path in all three case studies.
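
    The bias the new parameterization targets follows from Jensen's inequality: for a convex nonlinear rate, the mean of the rate over the subgrid distribution exceeds the rate evaluated at the grid mean. The power-law form and constants below are illustrative, in the style of common warm-rain autoconversion schemes, not the paper's parameterization.

```python
def autoconversion_rate(q, c=1350.0, p=2.47):
    """Power-law autoconversion rate (coefficient and exponent assumed);
    q is the cloud liquid water mixing ratio."""
    return c * q ** p

def grid_mean_rate(q_samples, c=1350.0, p=2.47):
    """Statistically based rate: average the nonlinear rate over the subgrid
    distribution of q instead of evaluating it at the grid mean."""
    return sum(autoconversion_rate(q, c, p) for q in q_samples) / len(q_samples)

# Subgrid variability about the same grid mean (q in kg/kg):
q_mean = 5.0e-4
q_sub = [0.5 * q_mean, 1.5 * q_mean]     # mean is still q_mean
biased = autoconversion_rate(q_mean)     # rate of the mean (conventional)
unbiased = grid_mean_rate(q_sub)         # mean of the rate (statistical)
```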

  15. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; Weinberger, Christopher R.

    2016-06-01

    In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.

  16. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    DOE PAGES Beta

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; Weinberger, Christopher R.

    2016-06-14

    In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.

  17. Dynamics of a network-based SIS epidemic model with nonmonotone incidence rate

    NASA Astrophysics Data System (ADS)

    Li, Chun-Hsien

    2015-06-01

    This paper studies the dynamics of a network-based SIS epidemic model with nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological effect of certain diseases spread in a contact network at high infective levels. We first find a threshold value for the transmission rate. This value completely determines the dynamics of the model and interestingly, the threshold is not dependent on the functional form of the nonlinear incidence rate. Furthermore, if the transmission rate is less than or equal to the threshold value, the disease will die out. Otherwise, it will be permanent. Numerical experiments are given to illustrate the theoretical results. We also consider the effect of the nonlinear incidence on the epidemic dynamics.
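
    A scalar mean-field caricature of the result can be checked numerically. The incidence g(I) = λI/(1 + αI²) is a standard nonmonotone form assumed here for illustration; since g'(0) = λ, the die-out/persistence threshold depends on λ but not on α, mirroring the abstract's observation that the threshold does not depend on the functional form of the incidence.

```python
def simulate_sis(lam, alpha, i0=0.1, dt=0.01, steps=50000):
    """Euler integration of dI/dt = g(I)*(1 - I) - I
    with nonmonotone incidence g(I) = lam*I / (1 + alpha*I^2)."""
    i = i0
    for _ in range(steps):
        g = lam * i / (1.0 + alpha * i * i)
        i += dt * (g * (1.0 - i) - i)
    return i

below = simulate_sis(0.8, 1.0)      # transmission below threshold: dies out
below2 = simulate_sis(0.8, 10.0)    # ...regardless of the incidence shape alpha
above = simulate_sis(2.0, 1.0)      # above threshold: settles at an endemic level
```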

  18. Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium

    SciTech Connect

    Crist, K.C.

    1984-10-01

    An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables.

  19. Modelling Heart Rate Kinetics

    PubMed Central

    Zakynthinaki, Maria S.

    2015-01-01

    The objective of the present study was to formulate a simple and at the same time effective mathematical model of heart rate kinetics in response to movement (exercise). Based on an existing model, a system of two coupled differential equations which give the rate of change of heart rate and the rate of change of exercise intensity is used. The modifications introduced to the existing model are justified and discussed in detail, while models of blood lactate accumulation in respect to time and exercise intensity are also presented. The main modification is that the proposed model has now only one parameter which reflects the overall cardiovascular condition of the individual. The time elapsed after the beginning of the exercise, the intensity of the exercise, as well as blood lactate are also taken into account. Application of the model provides information regarding the individual’s cardiovascular condition and is able to detect possible changes in it, across the data recording periods. To demonstrate examples of successful numerical fit of the model, constant intensity experimental heart rate data sets of two individuals have been selected and numerical optimization was implemented. In addition, numerical simulations provided predictions for various exercise intensities and various cardiovascular condition levels. The proposed model can serve as a powerful tool for a complete means of heart rate analysis, not only in exercise physiology (for efficiently designing training sessions for healthy subjects) but also in the areas of cardiovascular health and rehabilitation (including application in population groups for which direct heart rate recordings at intense exercises are not possible or not allowed, such as elderly or pregnant women). PMID:25876164
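
    The structure of the model, two coupled first-order equations, can be sketched as below. This is an illustrative caricature, not the paper's exact formulation: intensity ramps toward a target, heart rate relaxes toward an intensity-dependent demand, and the parameter c plays the role of the single cardiovascular-condition parameter. All numeric constants are assumed.

```python
def simulate_heart_rate(v_target, hr_rest=60.0, hr_max=190.0,
                        c=0.05, k=0.02, dt=0.1, steps=20000):
    """Euler-integrate two coupled ODEs: dv/dt = k*(v_target - v) for exercise
    intensity, dHR/dt = c*(demand(v) - HR) for heart rate."""
    hr, v = hr_rest, 0.0
    for _ in range(steps):
        demand = hr_rest + (hr_max - hr_rest) * v   # steady-state HR at intensity v
        v += dt * k * (v_target - v)                # rate of change of intensity
        hr += dt * c * (demand - hr)                # rate of change of heart rate
    return hr

steady = simulate_heart_rate(v_target=0.6)   # constant-intensity session at 60%
```

    A fitted version would estimate c by numerical optimization against recorded heart rate data, which is the step the study performs on the two constant-intensity data sets.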

  20. Modelling heart rate kinetics.

    PubMed

    Zakynthinaki, Maria S

    2015-01-01

    The objective of the present study was to formulate a simple and at the same time effective mathematical model of heart rate kinetics in response to movement (exercise). Based on an existing model, a system of two coupled differential equations which give the rate of change of heart rate and the rate of change of exercise intensity is used. The modifications introduced to the existing model are justified and discussed in detail, while models of blood lactate accumulation in respect to time and exercise intensity are also presented. The main modification is that the proposed model has now only one parameter which reflects the overall cardiovascular condition of the individual. The time elapsed after the beginning of the exercise, the intensity of the exercise, as well as blood lactate are also taken into account. Application of the model provides information regarding the individual's cardiovascular condition and is able to detect possible changes in it, across the data recording periods. To demonstrate examples of successful numerical fit of the model, constant intensity experimental heart rate data sets of two individuals have been selected and numerical optimization was implemented. In addition, numerical simulations provided predictions for various exercise intensities and various cardiovascular condition levels. The proposed model can serve as a powerful tool for a complete means of heart rate analysis, not only in exercise physiology (for efficiently designing training sessions for healthy subjects) but also in the areas of cardiovascular health and rehabilitation (including application in population groups for which direct heart rate recordings at intense exercises are not possible or not allowed, such as elderly or pregnant women). PMID:25876164

  1. Effect of control sampling rates on model-based manipulator control schemes

    NASA Technical Reports Server (NTRS)

    Khosla, P. K.

    1987-01-01

    The effect of changing the control sampling period on the performance of the computed-torque and independent joint control schemes is discussed. While the former utilizes the complete dynamics model of the manipulator, the latter assumes a decoupled and linear model of the manipulator dynamics. Researchers discuss the design of controller gains for both the computed-torque and the independent joint control schemes and establish a framework for comparing their trajectory tracking performance. Experiments show that within each scheme the trajectory tracking accuracy varies slightly with the change of the sampling rate. However, at low sampling rates the computed-torque scheme outperforms the independent joint control scheme. Based on experimental results, researchers also conclusively establish the importance of high sampling rates as they result in an increased stiffness of the system.

  2. Mathematical modeling of high-rate Anammox UASB reactor based on granular packing patterns.

    PubMed

    Tang, Chong-Jian; He, Rui; Zheng, Ping; Chai, Li-Yuan; Min, Xiao-Bo

    2013-04-15

    A novel mathematical model was developed to estimate the volumetric nitrogen conversion rates of a high-rate Anammox UASB reactor based on the packing patterns of granular sludge. A series of relationships among granular packing density, sludge concentration, hydraulic retention time and volumetric conversion rate were constructed to correlate Anammox reactor performance with granular packing patterns. It was suggested that the Anammox granules packed in an equivalent simple cubic pattern in the high-rate UASB reactor, with a packing density of 50-55%, which not only accommodated a high concentration of sludge inside the reactor, but also provided large pore volume, thus prolonging the actual substrate conversion time. Results also indicated that it was necessary to improve Anammox reactor performance by enhancing substrate loading when the sludge concentration was higher than 37.8 gVSS/L. The established model was carefully calibrated and verified, and it simulated the performance of the granule-based high-rate Anammox UASB reactor well. PMID:23434474

  3. Research and realization of ultrasonic gas flow rate measurement based on ultrasonic exponential model.

    PubMed

    Zheng, Dandan; Hou, Huirang; Zhang, Tao

    2016-04-01

    For ultrasonic gas flow rate measurement based on an ultrasonic exponential model, when the noise frequency is close to that of the desired signals (called similar-frequency noise), or the received signal amplitude is small and unstable at high flow rates, local convergence of the genetic-ant colony optimization-3cycles algorithm may appear, and measurement accuracy may be affected. Therefore, an improved method, energy genetic-ant colony optimization-3cycles (EGACO-3cycles), is proposed to solve this problem. By judging the maximum-energy position of the signal, the initial parameter range of the exponential model can be narrowed and local convergence avoided. Moreover, a DN100 flow rate measurement system with the EGACO-3cycles method is established based on an NI PCI-6110 board and a personal computer. A series of experiments is carried out to test the new method and the measurement system. It is shown that local convergence does not appear with the EGACO-3cycles method when similar-frequency noise is present and the flow rate is high, so the correct time of flight can be obtained. Furthermore, through flow calibration on this system, a measurement range ratio of 500:1 is achieved, and the measurement accuracy is 0.5% with a low transition velocity of 0.3 m/s. PMID:26821309

  4. Model-based melt rate control during vacuum arc remelting of alloy 718

    NASA Astrophysics Data System (ADS)

    Williamson, Rodney L.; Melgaard, David K.; Shelmidine, Gregory J.; Beaman, Joseph J.; Morrison, Robert

    2004-02-01

    Vacuum arc remelting (VAR) is used widely throughout the specialty metals industry to produce superalloy and titanium alloy cast ingots. Optimum VAR casting requires that the electrode melting rate be controlled at all times during the process. This is especially difficult when process conditions are such that the temperature distribution in the electrode has not achieved, or has been driven away from, steady state. This condition is encountered during the beginning and closing stages of the VAR process, and also during some process disturbances such as when the melt zone passes through a transverse crack. To address these transient melting situations, a new method of VAR melt rate control has been developed that incorporates an accurate, low-order melting model to continually estimate the temperature distribution in the electrode. This method of model-based control was tested at Carpenter Technology Corporation. In the first test, two 0.43-m-diameter alloy 718 electrodes were melted into 0.51-m-diameter ingots. Aggressive start-up and hot-top procedures were used to test the dynamic capabilities of the control technique. Additionally, a transverse cut was placed in each electrode with an abrasive saw to mimic an electrode crack. Accurate melt rate control was demonstrated throughout each melt. The second test used an electrode size and grade proprietary to the host company. Because it was not stress relieved after the primary casting process, the electrode was known to possess multiple cracks that make accurate melt rate control impossible using standard VAR controller technology. This electrode was also successfully melted with good melt rate control using the model-based controller.

  5. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    Evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and the mean time to failure (MTTF) of space instruments, and a model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field-programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 error/(particle/cm^2), while the MTTF is approximately 110.7 h. PMID:27583533
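
The cross-section-style bookkeeping implied by the SFER units can be sketched numerically. The figures below are illustrative, not the paper's: an SFER in errors per unit particle fluence, combined with an assumed on-orbit flux, converts directly into an MTTF.

```python
# Hedged sketch of SFER/MTTF arithmetic: the SFER is errors per unit
# particle fluence (particles/cm^2), so an environment flux estimate
# (particles/cm^2/s) converts it to an error rate and hence an MTTF.
# All numbers are invented for illustration.

def sfer(n_errors, fluence):
    """Errors per particle/cm^2 delivered to the device."""
    return n_errors / fluence

def mttf_hours(sfer_value, flux):
    """Mean time to failure given an environment flux in particles/cm^2/s."""
    errors_per_second = sfer_value * flux
    return 1.0 / errors_per_second / 3600.0

s = sfer(5, 5.0e3)            # 5 errors over a 5e3 particles/cm^2 exposure
print(s)                      # -> 0.001
mttf = mttf_hours(s, 2.5e-3)  # hypothetical 2.5e-3 particles/cm^2/s flux
print(round(mttf, 1))         # -> 111.1
```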

  6. Littoral transport rates in the Santa Barbara Littoral Cell: a process-based model analysis

    USGS Publications Warehouse

    Elias, E. P. L.; Barnard, Patrick L.; Brocatus, John

    2009-01-01

    Identification of sediment transport patterns and pathways is essential for sustainable coastal zone management of the heavily modified coastline of Santa Barbara and Ventura County (California, USA). A process-based model application, based on Delft3D Online Morphology, is used to investigate the littoral transport potential along the Santa Barbara Littoral Cell (between Point Conception and Mugu Canyon). An advanced optimization procedure enables annual sediment transport computations by reducing the ocean wave climate to 10 wave height-direction classes. Modeled littoral transport rates compare well with observed dredging volumes, and erosion or sedimentation hotspots coincide with the modeled divergence and convergence of the transport gradients. Sediment transport rates depend strongly on the alongshore variation in wave height due to wave sheltering, diffraction and focusing by the Northern Channel Islands, and on the local orientation of the geologically controlled coastline. Local transport gradients exceed the net eastward littoral transport and are considered a primary driver of hot-spot erosion.
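
The wave-climate reduction step can be sketched with simple binning. The bin edges below are invented for the example; the idea is only that each wave record maps to one of a small number of height-direction classes whose frequencies then drive the annual transport computation.

```python
# Illustrative sketch of reducing a wave climate to a few height-direction
# classes, as in the abstract's optimization step; the bin edges here are
# hypothetical, not the study's.
from collections import Counter

def classify(height, direction, h_edges, d_edges):
    """Map a (height m, direction deg) record to a (height bin, direction bin) class."""
    hb = sum(height >= e for e in h_edges)
    db = sum(direction >= e for e in d_edges)
    return hb, db

records = [(0.8, 275.0), (1.6, 275.0), (2.4, 190.0), (0.9, 280.0)]
h_edges = (1.0, 2.0)          # <1 m, 1-2 m, >=2 m
d_edges = (225.0,)            # two approach sectors (hypothetical split)
climate = Counter(classify(h, d, h_edges, d_edges) for h, d in records)
print(climate[(0, 1)])        # -> 2  (two small-wave records in sector 1)
```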

  7. Thermomechanics-based nonlinear rate-dependent coupled damage-plasticity granular micromechanics model

    NASA Astrophysics Data System (ADS)

    Misra, Anil; Singh, Viraj

    2015-09-01

    Thermomechanics and granular micromechanics approaches are combined to derive constitutive equations for modeling rate-dependent granular materials with damage and plasticity. The derivation is motivated by the recognition that the effect of micro-scale mechanisms upon the macro-scale behavior is known to be significant for granular materials. A general thermomechanical framework applicable to rate-dependent granular materials with damage and plasticity is developed. Based upon this framework, an expression for the macro-scale Cauchy stress tensor is obtained in terms of the micro-scale grain interaction forces and the relationship between micro- and macro-scale kinematics. In addition, a Clausius-Duhem type inequality applicable to inter-granular interaction is derived, which is used to establish micro-scale constitutive relations for particular types of inter-granular interactions. The expression for the Cauchy stress tensor and the micro-scale constitutive relations are then combined under a mean field kinematic assumption to obtain evolution-type macro-scale constitutive equations. The advantage of the granular micromechanics approach is that damage and plasticity are defined using simple one-dimensional functions at the micro-scale, and complicated plastic potentials, damage functions and rules for their evolution are not required. The resultant model is applied to investigate primary, secondary and tertiary creep, creep-recovery, as well as the rate-dependent response under uniaxial compressive loading. Model applicability is also demonstrated for asymmetric tensile-compressive response under creep-recovery loading. The model is used to evaluate the evolution of elastic energy, and viscous, plastic and damage dissipation at the macro- and micro-scale with respect to creep time and loading level. The results show the development of loading-induced anisotropy due to damage and plasticity in these materials.

  8. Mechanical properties of methacrylate-based model dentin adhesives: effect of loading rate and moisture exposure.

    PubMed

    Singh, Viraj; Misra, Anil; Parthasarathy, Ranganathan; Ye, Qiang; Park, Jonggu; Spencer, Paulette

    2013-11-01

    The aim of this study is to investigate the mechanical behavior of model methacrylate-based dentin adhesives under conditions that simulate the wet oral environment. A series of monotonic and creep experiments were performed on rectangular beam samples of dentin adhesive in three-point bending configuration under different moisture conditions. The monotonic test results show a significant effect of loading rate on the failure strength and the linear limit (yield point) of the stress-strain response. In addition, these tests show that the failure strength is low, and the failure occurs at a smaller deformation when the test is performed under continuously changing moisture conditions. The creep test results show that under constant moisture conditions, the model dentin adhesives can have a viscoelastic response under certain low loading levels. However, when the moisture conditions vary under the same low loading levels, the dentin adhesives have an anomalous creep response accompanied by large secondary creep and high strain accumulation. PMID:23744598

  9. A novel epidemic spreading model with decreasing infection rate based on infection times

    NASA Astrophysics Data System (ADS)

    Huang, Yunhan; Ding, Li; Feng, Yun

    2016-02-01

    A new epidemic spreading model in which individuals can be infected repeatedly is proposed in this paper. The infection rate decreases with the number of times an individual has been infected before, a phenomenon that may be caused by immunity or by heightened alertness. We introduce a new parameter, the decay factor, to quantify the decrease of the infection rate; through this parameter the model bridges the Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Recovered (SIR) models. The proposed model is studied by Monte Carlo simulation. It is found that the initial infection rate has a greater impact on the peak value than the decay factor. The effect of the decay factor on the final density and the outbreak threshold is dominant but weakens significantly when birth and death rates are considered. Simulation results also show that the influence of birth and death rates on the final density is non-monotonic in some circumstances.
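
The repeated-infection mechanism can be sketched with a minimal well-mixed Monte Carlo simulation. All parameters below are invented: each individual's infection probability per sweep scales as beta0 * delta**k after k previous infections, so delta = 1 recovers SIS-like behaviour and delta = 0 gives SIR-like immunity after the first infection.

```python
# Minimal well-mixed sketch of an epidemic with a per-individual decaying
# infection rate (delta is the decay factor of the abstract). Parameters
# and population size are illustrative only.
import random

def step(times_infected, infected, beta0, delta, gamma, rng):
    """One sweep: history-scaled infections, then recoveries."""
    n = len(times_infected)
    force = len(infected) / n                     # fraction infectious
    new_inf = set()
    for i in range(n):
        if i in infected:
            continue
        p = beta0 * (delta ** times_infected[i]) * force
        if rng.random() < p:
            new_inf.add(i)
            times_infected[i] += 1
    survivors = {i for i in infected if rng.random() >= gamma}
    return survivors | new_inf

rng = random.Random(1)
history = [0] * 200
current = set(range(10))                          # 10 seed cases
for _ in range(50):
    current = step(history, current, beta0=0.8, delta=0.5, gamma=0.2, rng=rng)
print(0 <= len(current) <= 200)                   # -> True
```

Sweeping delta between 0 and 1 with such a simulation is one way to reproduce the SIS-to-SIR bridge the abstract describes.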

  10. Estimation of the occurrence rate of strong earthquakes based on hidden semi-Markov models

    NASA Astrophysics Data System (ADS)

    Votsi, I.; Limnios, N.; Tsaklidis, G.; Papadimitriou, E.

    2012-04-01

    The present paper aims at the application of hidden semi-Markov models (HSMMs) in an attempt to reveal key features of earthquake generation associated with the actual stress field, which is not accessible to direct observation. The models generalize hidden Markov models by considering the hidden process to form a semi-Markov chain. Considering that the states of the models correspond to levels of the actual stress field, the stress field level at the occurrence time of each strong event is revealed. The dataset concerns a well-catalogued, seismically active region incorporating a variety of tectonic styles. More specifically, the models are applied to Greece and its surrounding lands, using a complete data sample of strong (M ≥ 6.5) earthquakes that occurred in the study area from 1845 to the present. The earthquakes are grouped according to their magnitudes, and the cases of two and three magnitude ranges, with a corresponding number of states, are examined. The parameters of the HSMMs are estimated and their confidence intervals are calculated based on their asymptotic behavior. The rate of earthquake occurrence is introduced through the proposed HSMMs and its maximum likelihood estimator is calculated. The asymptotic properties of the estimator are studied, including uniform strong consistency and asymptotic normality, and a confidence interval for the proposed estimator is given. We assume the state spaces of both the observable and the hidden process to be finite, the hidden Markov chain to be homogeneous and stationary, and the observations to be conditionally independent. The hidden states at the occurrence time of each strong event are revealed, and the rate of occurrence of an anticipated earthquake is estimated on the basis of the proposed HSMMs. Moreover, the mean time to the first occurrence of a strong anticipated earthquake is estimated and its confidence interval is calculated.
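
The generative structure of such a model can be sketched as follows. This is a toy simulation, not the paper's estimator: hidden "stress level" states persist for non-geometric sojourn times, and each strong event is observed as a magnitude class drawn from a state-dependent distribution. All parameters are invented.

```python
# Toy generative sketch of a hidden semi-Markov chain: hidden stress states
# with non-geometric sojourn times and state-dependent magnitude classes.
# Parameters are hypothetical.
import random

def simulate_hsmm(n_events, trans, sojourn_mean, emit, seed=0):
    rng = random.Random(seed)
    state, remaining = 0, 0
    hidden, observed = [], []
    for _ in range(n_events):
        if remaining == 0:
            state = rng.choices(range(len(trans)), weights=trans[state])[0]
            # sojourn drawn from an exponential, floored at one event
            remaining = max(1, int(rng.expovariate(1.0 / sojourn_mean[state])))
        hidden.append(state)
        observed.append(rng.choices((0, 1), weights=emit[state])[0])
        remaining -= 1
    return hidden, observed

trans = [[0.1, 0.9], [0.8, 0.2]]       # low <-> high stress (hypothetical)
sojourn_mean = [4.0, 2.0]              # mean events spent per state visit
emit = [[0.9, 0.1], [0.4, 0.6]]        # P(magnitude class | hidden state)
hidden, obs = simulate_hsmm(50, trans, sojourn_mean, emit)
print(len(obs) == 50 and set(obs) <= {0, 1})   # -> True
```

Estimation then runs in the opposite direction: given only `obs`, recover the transition, sojourn and emission parameters and decode `hidden`.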

  11. Learning to maximize reward rate: a model based on semi-Markov decision processes

    PubMed Central

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R.

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to a better outcome, but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold so that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible “conditions.” A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of the decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each “condition” being a “state” and the values of the decision thresholds being the “actions” taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which leads to sub-optimal performance. With experience, however, the model learns to lower the decision thresholds until it finds the optimal values. PMID:24904252
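
The state/action framing above can be sketched with a compact average-reward (R-learning-style) update; this is a generic sketch of the idea, not the paper's biologically plausible algorithm, and the payoff function below is invented.

```python
# Average-reward learning sketch: conditions are states, candidate decision
# thresholds are actions, and learning maximizes reward per unit time.
# The trial payoff used here is hypothetical.
import random

def r_learning(trial, conditions, thresholds, episodes=5000,
               alpha=0.1, beta=0.01, eps=0.1, seed=0):
    """Learn a per-condition decision threshold maximizing the reward rate."""
    rng = random.Random(seed)
    q = {(c, t): 0.0 for c in conditions for t in thresholds}
    rho = 0.0                                   # running reward-rate estimate
    for _ in range(episodes):
        c = rng.choice(conditions)              # condition of this trial
        if rng.random() < eps:                  # epsilon-greedy exploration
            t = rng.choice(thresholds)
        else:
            t = max(thresholds, key=lambda x: q[(c, x)])
        reward, duration = trial(c, t, rng)
        q[(c, t)] += alpha * (reward - rho * duration - q[(c, t)])
        rho += beta * (reward / duration - rho)
    return q, rho

def toy_trial(cond, thr, rng):
    # hypothetical payoff: longer deliberation -> higher accuracy, more time
    accuracy = 1.0 - 0.5 ** (thr * cond)
    return (1.0 if rng.random() < accuracy else 0.0), float(thr)

q, rho = r_learning(toy_trial, conditions=(1, 2), thresholds=(1, 2, 4))
print(rho > 0.0)   # the learned reward-rate estimate is positive -> True
```

The term `reward - rho * duration` is what makes this an SMDP update: long deliberations are charged for the time they consume at the current reward rate.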

  12. Logarithmic rate based elasto-viscoplastic cyclic constitutive model for soft biological tissues.

    PubMed

    Zhu, Yilin; Kang, Guozheng; Yu, Chao; Poh, Leong Hien

    2016-08-01

    Based on the logarithmic rate and piecewise linearization theory, a thermodynamically consistent elasto-viscoplastic constitutive model is developed in the framework of finite deformations to describe the nonlinear time-dependent biomechanical performance of soft biological tissues, such as nonlinear anisotropic monotonic stress-strain responses, stress relaxation, creep and ratchetting. In the proposed model, the soft biological tissue is assumed to be a typical composite consisting of an isotropic matrix and an anisotropic fiber aggregation. Accordingly, the free energy function and stress tensor are divided into two parts related to the matrix and the fiber aggregation, respectively. The nonlinear biomechanical responses of the tissues are described by the piecewise linearization theory with hypo-elastic relations for the fiber aggregation. The evolution equations of viscoplasticity are formulated from the dissipation inequalities by the co-directionality hypotheses. Anisotropy is considered in the hypo-elastic relations and viscoplastic flow rules by introducing material parameters dependent on the loading direction. The capability of the proposed model to describe the nonlinear time-dependent deformation of soft biological tissues is then verified by comparing the predictions with the corresponding experimental results for three tissues. The predicted monotonic stress-strain responses, stress relaxation, creep and ratchetting of soft biological tissues are in good agreement with the corresponding experimental ones. PMID:27108349

  13. Evidence-Based Adequacy Model for School Funding: Success Rates in Illinois Schools that Meet Targets

    ERIC Educational Resources Information Center

    Murphy, Gregory J.

    2012-01-01

    This quantitative study explores the 2010 recommendation of the Educational Funding Advisory Board to consider the Evidence-Based Adequacy model of school funding in Illinois. This school funding model identifies and costs research based practices necessary in a prototypical school and sets funding levels based upon those practices. This study…

  14. Stage-discharge rating curves based on satellite altimetry and modeled discharge in the Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; Dias de Paiva, Rodrigo; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stephane; Garambois, Pierre-André; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frederique

    2016-05-01

    In this study, rating curves (RCs) were determined by applying satellite altimetry to a poorly gauged basin. This study demonstrates the synergistic application of remote sensing and watershed modeling to capture the dynamics and quantity of flow in the Amazon River Basin, respectively. Three major advancements for estimating basin-scale patterns in river discharge are described. The first advancement is the preservation of the hydrological meanings of the parameters expressed by Manning's equation to obtain a data set containing the elevations of the river beds throughout the basin. The second advancement is the provision of parameter uncertainties and, therefore, the uncertainties in the rated discharge. The third advancement concerns estimating the discharge while considering backwater effects. We analyzed the Amazon Basin using nearly one thousand series that were obtained from ENVISAT and Jason-2 altimetry for more than 100 tributaries. Discharge values and related uncertainties were obtained from the rain-discharge MGB-IPH model. We used a global optimization algorithm based on the Monte Carlo Markov Chain and Bayesian framework to determine the rating curves. The data were randomly allocated into 80% calibration and 20% validation subsets. A comparison with the validation samples produced a Nash-Sutcliffe efficiency (Ens) of 0.68. When the MGB discharge uncertainties were less than 5%, the Ens value increased to 0.81 (mean). A comparison with the in situ discharge resulted in an Ens value of 0.71 for the validation samples (and 0.77 for calibration). The Ens values at the mouths of the rivers that experienced backwater effects significantly improved when the mean monthly slope was included in the RC. Our RCs were not mission-dependent, and the Ens value was preserved when applying ENVISAT rating curves to Jason-2 altimetry at crossovers. The cease-to-flow parameter of our RCs provided a good proxy for determining river bed elevation. This proxy was validated
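
The Bayesian rating-curve fit can be sketched with a toy Metropolis sampler on the Manning-style form Q = a*(h - z0)^b, where z0 plays the role of the cease-to-flow / river-bed elevation. The data, priors and step sizes below are synthetic, not the study's.

```python
# Toy Metropolis sketch of fitting Q = a*(h - z0)^b to synthetic
# stage-discharge pairs; not the study's MCMC setup.
import math
import random

def log_post(params, data, sigma=5.0):
    a, b, z0 = params
    if a <= 0 or b <= 0:
        return -math.inf
    ll = 0.0
    for h, q in data:
        if h <= z0:
            return -math.inf                      # stage below cease-to-flow
        ll -= (q - a * (h - z0) ** b) ** 2 / (2.0 * sigma ** 2)
    return ll

rng = random.Random(0)
a_true, b_true, z0_true = 50.0, 1.6, 2.0
data = [(h, a_true * (h - z0_true) ** b_true) for h in (3.0, 4.5, 6.0, 8.0)]

start = (40.0, 1.4, 1.0)
cur, cur_lp = start, log_post(start, data)
for _ in range(20000):
    prop = tuple(x + rng.gauss(0.0, s) for x, s in zip(cur, (1.0, 0.05, 0.1)))
    lp = log_post(prop, data)
    if rng.random() < math.exp(min(0.0, lp - cur_lp)):
        cur, cur_lp = prop, lp
print(cur_lp > log_post(start, data))   # the chain climbed toward the optimum
```

In a full treatment, the retained posterior samples (not just the final state) give the parameter uncertainties, and hence the rated-discharge uncertainties, that the abstract highlights.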

  15. Modeling Passing Rates on a Computer-Based Medical Licensing Examination: An Application of Survival Data Analysis

    ERIC Educational Resources Information Center

    de Champlain, Andre F.; Winward, Marcia L.; Dillon, Gerard F.; de Champlain, Judy E.

    2004-01-01

    The purpose of this article was to model United States Medical Licensing Examination (USMLE) Step 2 passing rates using the Cox Proportional Hazards Model, best known for its application in analyzing clinical trial data. The number of months it took to pass the computer-based Step 2 examination was treated as the dependent variable in the model.…
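
The survival-analysis framing of months-to-pass can be illustrated with a Kaplan-Meier estimate of the probability of not yet having passed by a given month. Note this is a simpler, nonparametric stand-in: the article itself fits a Cox proportional hazards model, which additionally relates the hazard of passing to covariates. The data below are invented.

```python
# Kaplan-Meier sketch for time-to-pass data: "survival" here means not yet
# having passed the examination; censored examinees had not passed when
# observation ended. Data are hypothetical.

def kaplan_meier(times, passed):
    """times: months observed; passed: 1 if the examinee passed at that
    month, 0 if censored (still unpassed when observation ended)."""
    s, curve = 1.0, {}
    at_risk = len(times)
    for t in sorted(set(times)):
        events = sum(1 for ti, e in zip(times, passed) if ti == t and e)
        s *= 1.0 - events / at_risk
        curve[t] = s
        at_risk -= sum(1 for ti in times if ti == t)
    return curve

months = [1, 1, 2, 3, 3, 5]
passed = [1, 1, 1, 1, 0, 1]
km = kaplan_meier(months, passed)
print(round(km[1], 4))   # -> 0.6667  (two of six pass in month 1)
```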

  16. A Micromechanics Based Constitutive Model For Brittle Failure at High Strain Rates

    NASA Astrophysics Data System (ADS)

    Bhat, H. S.; Rosakis, A.; Sammis, C. G.

    2011-12-01

    The micromechanical damage mechanics formulated by Ashby and Sammis [1] and generalized by Deshpande and Evans [2] has been extended to allow for a more generalized stress state and to incorporate an experimentally motivated new crack growth (damage evolution) law that is valid over a wide range of loading rates. This law is sensitive to both the crack tip stress field and its time derivative. Incorporating this feature produces strain-rate sensitivity in the constitutive response. The model is experimentally verified by predicting the failure strength of Dionysus-Pentelicon marble over strain rates ranging from ~10^-6 to 10^3 s^-1. Model parameters determined from quasi-static experiments were used to predict the failure strength at higher loading rates. Agreement with experimental results was excellent.

  17. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. The models considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, to the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. The resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  18. An agent-based modeling approach for determining corn stover removal rate and transboundary effects.

    PubMed

    Gan, Jianbang; Langeveld, J W A; Smith, C T

    2014-02-01

    Bioenergy production involves different agents with potentially different objectives, and an agent's decision often has transboundary impacts on other agents along the bioenergy value chain. Understanding and estimating the transboundary impacts is essential to portraying the interactions among the different agents and in the search for the optimal configuration of the bioenergy value chain. We develop an agent-based model to mimic the decision making by feedstock producers and feedstock-to-biofuel conversion plant operators and propose multipliers (i.e., ratios of economic values accruing to different segments and associated agents in the value chain) for assessing the transboundary impacts. Our approach is generic and thus applicable to a variety of bioenergy production systems at different sites and geographic scales. We apply it to the case of producing ethanol using corn stover in Iowa, USA. The results from the case study indicate that stover removal rate is site specific and varies considerably with soil type, as well as other factors, such as stover price and harvesting cost. In addition, ethanol production using corn stover in the study region would have strong positive ripple effects, with the values of multipliers varying with greenhouse gas price and national energy security premium. The relatively high multiplier values suggest that a large portion of the value associated with corn stover ethanol production would accrue to the downstream end of the value chain instead of stover producers. PMID:24276896
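
The value-chain multipliers the abstract proposes are ratios of economic value accruing to each segment relative to a base segment; a minimal sketch with invented dollar figures:

```python
# Illustrative multipliers in the abstract's sense: value accruing to each
# downstream segment divided by the farm-gate (stover producer) value.
# All figures are hypothetical.

def multipliers(segment_values, base_segment):
    base = segment_values[base_segment]
    return {seg: v / base for seg, v in segment_values.items()}

chain = {"stover producer": 20.0, "logistics": 10.0, "ethanol plant": 55.0}
m = multipliers(chain, "stover producer")
print(m["ethanol plant"])   # -> 2.75
```

A high downstream multiplier, as in this toy example, is the pattern the study reports: most of the value accrues away from the stover producers.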

  20. Meta-Analytic Connectivity Modelling Revisited: Controlling for Activation Base Rates

    PubMed Central

    Langner, Robert; Rottschy, Claudia; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.

    2014-01-01

    Co-activation of distinct brain regions is a measure of functional interaction, or connectivity, between those regions. The co-activation pattern of a given region can be investigated using seed-based activation likelihood estimation meta-analysis of functional neuroimaging data stored in databases such as BrainMap. This method reveals inter-regional functional connectivity by determining brain regions that are consistently co-activated with a given region of interest (the “seed”) across a broad range of experiments. In current implementations of this meta-analytic connectivity modelling (MACM), significant spatial convergence (i.e. consistent co-activation) is distinguished from noise by comparing it against an unbiased null-distribution of random spatial associations between experiments according to which all grey-matter voxels have the same chance of convergence. As the a priori probability of finding activation in different voxels markedly differs across the brain, computing such a quasi-rectangular null-distribution renders the detection of significant convergence more likely in those voxels that are frequently activated. Here, we propose and test a modified MACM approach that takes this activation frequency bias into account. In this new specific co-activation likelihood estimation (SCALE) algorithm, a null-distribution is generated that reflects the base rate of reporting activation in any given voxel and thus equalizes the a priori chance of finding across-study convergence in each voxel of the brain. Using four exemplary seed regions (right visual area V4, left anterior insula, right intraparietal sulcus, and subgenual cingulum), our tests corroborated the enhanced specificity of the modified algorithm, indicating that SCALE may be especially useful for delineating distinct core networks of co-activation. PMID:24945668
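
The core of the SCALE idea can be sketched with a toy null model: instead of giving every voxel the same chance of convergence, the null redraws each experiment's activation with probability equal to that voxel's base rate across the database. Counts and base rates below are invented.

```python
# Toy sketch of a base-rate-aware (SCALE-style) null: the p-value for a
# voxel's co-activation count is computed against Monte Carlo draws whose
# per-experiment activation probability equals the voxel's database-wide
# base rate. Numbers are hypothetical.
import random

def scale_p_value(observed, n_experiments, base_rate, n_null=5000, seed=0):
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_null):
        null = sum(rng.random() < base_rate for _ in range(n_experiments))
        if null >= observed:
            exceed += 1
    return exceed / n_null

# Voxel A: co-activated in 30 of 100 seed experiments, but its base rate is
# already 0.25 -> unremarkable under the base-rate-aware null.
p_common = scale_p_value(30, 100, 0.25)
# Voxel B: same count, rare base rate 0.05 -> highly specific co-activation.
p_rare = scale_p_value(30, 100, 0.05)
print(p_common > 0.05, p_rare < 0.001)   # -> True True
```

The same count is thus significant or not depending on the voxel's a priori activation frequency, which is exactly the bias the SCALE algorithm corrects.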

  1. A Self-Organized Model for Cell-Differentiation Based on Variations of Molecular Decay Rates

    PubMed Central

    Hanel, Rudolf; Pöchacker, Manfred; Schölling, Manuel; Thurner, Stefan

    2012-01-01

    Systemic properties of living cells are the result of molecular dynamics governed by so-called genetic regulatory networks (GRN). These networks capture all possible features of cells and are responsible for the immense levels of adaptation characteristic of living systems. At any point in time only small subsets of these networks are active. Any active subset of the GRN leads to the expression of particular sets of molecules (expression modes). The subsets of active networks change over time, leading to the observed complex dynamics of expression patterns. Understanding of these dynamics becomes increasingly important in systems biology and medicine. While the importance of transcription rates and catalytic interactions has been widely recognized in modeling genetic regulatory systems, the understanding of the role of degradation of biochemical agents (mRNA, protein) in regulatory dynamics remains limited. Recent experimental data suggest that there exists a functional relation between mRNA and protein decay rates and expression modes. In this paper we propose a model for the dynamics of successions of sequences of active subnetworks of the GRN. The model is able to reproduce key characteristics of molecular dynamics, including homeostasis, multi-stability, periodic dynamics, alternating activity, differentiability, and self-organized critical dynamics. Moreover, the model provides a natural understanding of the mechanism behind the relation between decay rates and expression modes. The model explains recent experimental observations that decay rates (or turnovers) vary between differentiated tissue-classes at a general systemic level and highlights the role of intracellular decay rate control mechanisms in cell differentiation. PMID:22693554

  3. Variation of crystal dissolution rate based on a dissolution stepwave model.

    PubMed

    Lasaga, A C; Luttge, A

    2001-03-23

    A formulation of the variation of dissolution rate with the degree of undersaturation, based on defect-generated dissolution stepwaves, is validated by near-atomic-scale observations of surfaces, Monte Carlo simulations, and experimental bulk dissolution rates. The dissolution stepwaves emanating from etch pits provide a train of steps similar to those of a spiral but with different behavior. Their role in accounting for the bulk dissolution rate of crystals provides a conceptual framework for mineral dissolution far from equilibrium. Furthermore, the law extends to conditions closer to equilibrium and predicts a nonlinear decrease in the rate of dissolution as equilibrium is approached, which has implications for understanding artificial and natural processes involving solid-fluid reactions. PMID:11264534
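
The qualitative behaviour can be illustrated with the generic transition-state-theory baseline that such stepwave laws generalize, R = k * (1 - exp(dG/RT)) for dG < 0; this is not the paper's stepwave law itself, but it already shows the nonlinear fall-off of dissolution rate as the driving force vanishes at equilibrium.

```python
# Generic TST-style dissolution rate law (a baseline, not the stepwave law
# of the abstract): R = k * (1 - exp(dG/RT)), with dG <= 0 away from
# equilibrium and dG -> 0 at equilibrium.
import math

R_GAS = 8.314  # J/(mol K)

def tst_rate(dg_j_per_mol, temp_k, k=1.0):
    """Dissolution rate (arbitrary units) at Gibbs energy dG <= 0."""
    return k * (1.0 - math.exp(dg_j_per_mol / (R_GAS * temp_k)))

far = tst_rate(-50_000.0, 298.15)    # far from equilibrium: plateau near k
near = tst_rate(-500.0, 298.15)      # near equilibrium: strongly reduced
print(far > 0.99, near < 0.25)       # -> True True
```

The stepwave model predicts an even sharper suppression near equilibrium, where etch pits can no longer open and launch stepwaves.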

  4. Source mass eruption rate retrieved from satellite-based data using statistical modelling

    NASA Astrophysics Data System (ADS)

    Gouhier, Mathieu; Guillin, Arnaud; Azzaoui, Nourddine; Eychenne, Julia; Valade, Sébastien

    2015-04-01

    Ash clouds emitted during volcanic eruptions have long been recognized as a major hazard likely to have dramatic consequences for aircraft, the environment and people. Thus, the International Civil Aviation Organization (ICAO) established nine Volcanic Ash Advisory Centers (VAACs) around the world, whose mission is to forecast the location and concentration of ash clouds over hours to days using volcanic ash transport and dispersion models (VATDs). Those models use input parameters such as plume height (PH), particle size distribution (PSD), and mass eruption rate (MER), the latter being a key parameter as it directly controls the amount of ash injected into the atmosphere. The MER can be obtained rather accurately from detailed ground deposit studies, but this method does not match operational requirements during a volcanic crisis. Thus, VAACs use empirical laws to determine the MER from an estimate of the plume height. In some cases this method is difficult to apply, either because plume height data are unavailable or because the uncertainties related to the method are too large. We propose here an alternative method based on satellite data to assess the MER at the source during explosive eruptions. Satellite-based techniques allow the fine-ash cloud loading to be retrieved quantitatively far from the source vent. These measurements can be carried out systematically and in real time, using geostationary satellites in particular. We tested the relationship likely to exist between the amount of fine ash dispersed in the atmosphere and the coarser tephra deposited on the ground, the sum of both contributions yielding an estimate of the MER. For this purpose we examined in detail 19 eruptions of known duration for which both (i) the amount of fine ash dispersed in the atmosphere and (ii) the mass of tephra deposited on the ground have been estimated and published. We combined these data with contextual information that may
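
The plume-height route that the VAACs rely on, and that the satellite-based approach is meant to complement, can be sketched with a Mastin-type empirical fit H = k * V**m (H plume height in km, V dense-rock-equivalent volume flux in m^3/s); inverting it gives the MER. The constants below are the commonly cited k = 2.0, m = 0.241 and a 2500 kg/m^3 magma density, used here purely for illustration.

```python
# Sketch of the empirical plume-height -> MER conversion (Mastin-type fit):
# H = k * V**m, so V = (H/k)**(1/m) and MER = density * V.
# Constants are commonly cited values, not this study's.

def mer_from_plume_height(h_km, k=2.0, m=0.241, density=2500.0):
    v = (h_km / k) ** (1.0 / m)     # DRE volume flux, m^3/s
    return v * density              # mass eruption rate, kg/s

rate = mer_from_plume_height(10.0)
print(5.0e5 < rate < 3.0e6)        # a ~10 km plume -> order 10^6 kg/s
```

The large scatter around such power-law fits is precisely the uncertainty that motivates an independent, satellite-based MER estimate.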

  5. The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification

    ERIC Educational Resources Information Center

    Petrov, Alexander A.; Anderson, John R.

    2005-01-01

    A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…

  6. Dislocation mechanism based model for stage II fatigue crack propagation rate

    NASA Technical Reports Server (NTRS)

    Mazumdar, P. K.

    1986-01-01

    Repeated plastic deformation at or near the crack tip, which of course depends on the dislocation mechanism, leads to fatigue crack propagation. By invoking the theory of thermally activated flow and the cumulative plastic strain criterion, an effort is made here to model the stage II fatigue crack propagation rate in terms of the dislocation mechanism. The model therefore provides the capability to ascertain: (1) the dislocation mechanism (and hence the near-crack-tip microstructure) assisting the crack growth, (2) the relative resistance of dislocation mechanisms to the crack growth, and (3) the fracture surface characteristics and their interpretation in terms of the dislocation mechanism. The local microstructure predicted for room-temperature crack growth in copper by this model is in good agreement with experimental results taken from the literature. With regard to the relative stability of such dislocation mechanisms as cross-slip and dislocation intersection, the model suggests an enhancement of the crack growth rate with ease of cross-slip, which in general promotes dislocation cell formation and is common in materials with high stacking-fault energy (which produce wavy slip). Cross-slip apparently enhances the crack growth rate by promoting slip irreversibility and fracture surface brittleness to a greater degree.

  7. Modeling the evaporation rate of cesium off tungsten based controlled porosity dispenser photocathodes

    NASA Astrophysics Data System (ADS)

    Pan, Z.; Jensen, K. L.

    2013-04-01

    The evaporation of cesium from a tungsten surface is modeled using an effective one-dimensional potential well representation of the binding energy. The model accounts for both local and global interactions of cesium with the surface metal as well as with other cesium atoms. The theory is compared with the data of Taylor and Langmuir [Phys. Rev. 44, 423 (1933), 10.1103/PhysRev.44.423] relating evaporation rates to sub-monolayer surface coverage of cesium; it gives good agreement and reproduces the nonlinear behavior of evaporation with varying coverage and temperature.

  8. Modelling of Short Term Interest Rate Based on Fractional Relaxation Equation

    NASA Astrophysics Data System (ADS)

    Jaworska, K.

    2008-09-01

    In this paper, we model the dynamics of the short-term interest rate using a fractional nonhomogeneous differential equation with a stochastic free term. This type of equation is similar to the one that represents the viscoelastic behavior of certain materials from a rheological point of view. As a final result we obtain a closed formula for the prices of zero-coupon bonds. The prices are analogous to those in the Vasiček model, with the exponential functions replaced by Mittag-Leffler ones.
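The Mittag-Leffler function that replaces the exponential in the bond-price formula can be evaluated directly from its power series, E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1); for alpha = 1 it reduces to exp(z), recovering the Vasiček case. A minimal sketch (truncation length is an arbitrary choice, adequate for small |z|):

```python
import math

# Hedged sketch: one-parameter Mittag-Leffler function via its power series,
# E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1). For alpha = 1 this is exp(z),
# mirroring the abstract's remark that the bond-price formula replaces the
# exponentials of the Vasicek model with Mittag-Leffler functions.

def mittag_leffler(alpha, z, terms=80):
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

print(mittag_leffler(1.0, -0.5))   # ~ exp(-0.5)
print(mittag_leffler(0.8, -0.5))   # power-law-like relaxation for alpha < 1
```

For negative arguments and 0 < alpha < 1 the function interpolates between exponential and power-law decay, which is exactly the "fractional relaxation" behavior the abstract borrows from rheology.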

  9. Evaluation of Finite-Rate Gas-Surface Interaction Models for a Carbon Based Ablator

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Goekcen, Tahir

    2015-01-01

    Two sets of finite-rate gas-surface interaction models between air and the carbon surface are studied. The first set is an engineering model with one-way chemical reactions, and the second set is a more detailed model with two-way chemical reactions. These two proposed models are intended to cover carbon surface ablation conditions including low-temperature rate-controlled oxidation, mid-temperature diffusion-controlled oxidation, and high-temperature sublimation. The prediction of carbon surface recession is achieved by coupling a material thermal response code and a Navier-Stokes flow code. The material thermal response code used in this study is the Two-dimensional Implicit Thermal-response and Ablation Program, which predicts charring material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting full Navier-Stokes equations using the Data Parallel Line Relaxation method. Recession analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities with heat fluxes ranging from 45 to 1100 W/cm2 are performed and compared with data for model validation. The ablating material used in these arc-jet tests is Phenolic Impregnated Carbon Ablator. Additionally, computational predictions of surface recession and shape change are in good agreement with measurements for arc-jet conditions of the Small Probe Reentry Investigation for Thermal Protection System Engineering.

  10. The effects of composition on glass dissolution rates: The application of four models to a data base

    SciTech Connect

    Geldart, R.W.; Kindle, C.H.

    1988-01-01

    Four models have been applied to a data base to relate glass dissolution in distilled water to composition. The data base is used to compare the precisions obtained from the models in fitting actual data. The usefulness of the data base in formulating a model is also demonstrated. Two related models in which the composite or pH-adjusted free energy of hydration of the glass is the correlating parameter are compared with experimental data. In a structural model, the nonbridging oxygen content of the glasses is used to correlate glass dissolution rate to composition. In a model formulated for this report, the cation valence and the oxygen content of the glass are compared with observed dissolution rates. The models were applied to the 28-day normalized silica release at 90 °C for over 285 glass compositions with surface area to volume ratios of 10 m⁻¹ (Materials Characterization Center MCC-1 glass durability test using distilled water). These glasses included the nonradioactive analogs of WV205 and SRL-165, as well as SRL-131, PNL 76-68, and a European glass, UK209. Predicted glass dissolution rates show similar fits to the data for all four models. The predictions of the models were also plotted for two subsets of the glasses: waste glasses and Savannah River Laboratory glasses. The model predictions fit the data for these groups much better than they fit the data for the entire set of glasses. 14 refs., 12 figs., 7 tabs.

  11. Modeling High Rate Phosphorus and Nitrogen Removal in a Vertical Flow Alum Sludge based Constructed Wetlands

    NASA Astrophysics Data System (ADS)

    Jeyakumar, Lordwin; Zhao, Yaqian

    2014-05-01

    Increased awareness of the impacts of diffuse pollution and their intensification has pushed forward the need for the development of low-cost wastewater treatment techniques. One such effort is the use of novel DASC (Dewatered Alum Sludge Cakes) based constructed wetlands (CWs) for removing nutrients, organics, trace elements and other pollutants from wastewater. Understanding of the processes in CWs requires a numerical model that describes the biochemical transformation and degradation processes in subsurface vertical flow (VF) CWs. Therefore, this research focuses on the development of a process-based model of phosphorus (P) and nitrogen (N) removal to achieve a stable performance by using DASC as a substrate in a CW treatment system. An object-oriented modelling tool known as "STELLA", which works on the principle of system dynamics, is used for the development of the P and N model. The core objective of the modelling work is to understand the processes in DASC-based CWs and to optimize design criteria. The P and N dynamic model is developed for DASC-based CWs. The P model developed exclusively for the DASC-based CW was able to simulate the effluent P concentration leaving the system satisfactorily. Moreover, the developed P dynamic model identified the major P pathways as adsorption (72%), followed by plant uptake (20%) and microbial uptake (7%), in a single-stage laboratory-scale DASC-based CW. Similarly, a P dynamic simulation model was developed to simulate the four-stage laboratory-scale DASC-based CWs. It was found that simulated and observed values of P removal were in good agreement. The fate of P in all four stages clearly shows that adsorption played a pivotal role in each stage of the system due to the use of DASC as a substrate. P adsorption by the wetland substrate/DASC represents 59-75% of total P reduction.
Subsequently, plant uptake and microbial uptake play a lesser role in P removal (as compared to adsorption). With regard
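A STELLA-style system-dynamics formulation of the kind described above is a stock drained by competing flows. The toy sketch below uses one dissolved-P stock and three first-order sinks; the rate coefficients are hypothetical, chosen only so the partition echoes the ~72/20/7% split reported in the abstract, and are not the study's calibrated values.

```python
# Hedged stock-and-flow sketch of a P removal model: one stock (dissolved P)
# drained by three first-order flows (adsorption, plant uptake, microbial
# uptake), integrated by Euler stepping as system-dynamics tools like STELLA
# do internally. All rate coefficients here are illustrative assumptions.

def simulate_p(p0=10.0, k_ads=0.72, k_plant=0.20, k_mic=0.07,
               dt=0.001, steps=5000):
    p = p0
    sinks = [0.0, 0.0, 0.0]                  # cumulative P removed per pathway
    for _ in range(steps):
        flows = (k_ads * p, k_plant * p, k_mic * p)
        p -= sum(flows) * dt
        for n, f in enumerate(flows):
            sinks[n] += f * dt
    removed = sum(sinks)
    return p, [s / removed for s in sinks]   # remaining P, pathway shares

p_left, shares = simulate_p()
print(p_left, shares)
```

With purely first-order sinks the pathway shares are fixed at k_i / (k_ads + k_plant + k_mic) regardless of run length; reproducing the stage-dependent behaviour reported above would require the saturable adsorption terms of the actual model.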

  12. A growth kinetics model of rate decomposition for Si1-xGex alloy based on dimer theory

    NASA Astrophysics Data System (ADS)

    Dai, Xian-Ying; Ji, Yao; Hao, Yue

    2014-01-01

    According to the dimer theory of semiconductor surfaces and the chemical vapor deposition (CVD) growth characteristics of Si1-xGex, two mechanisms of rate decomposition and discrete flow density are proposed. Based on these two mechanisms, the Grove theory and Fick's first law, a CVD growth kinetics model of Si1-xGex alloy is established. In order to make the model more accurate, the two growth control mechanisms of vapor transport and surface reaction are taken into account. The paper also considers the influence of the dimer structure on the growth rate. The results show that values calculated with the model are consistent with experimental values at different temperatures.
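The Grove theory that the abstract builds on treats gas-phase transport and surface reaction as two resistances in series. A minimal sketch of that classic expression follows; the Arrhenius parameters and concentrations are illustrative assumptions, and the paper's dimer and rate-decomposition corrections are not included.

```python
import math

# Hedged sketch of the classic Grove deposition model: growth rate
# G = (k_s * h_g) / (k_s + h_g) * (C_g / N), where k_s is the thermally
# activated surface reaction coefficient, h_g the gas-phase mass-transfer
# coefficient, C_g the reactant concentration and N the film atomic density.
# All parameter values below are illustrative assumptions.

def grove_growth_rate(T_kelvin, h_g=1.0, k0=1e7, Ea_eV=1.6, Cg_over_N=1e-6):
    k_B = 8.617e-5                                      # Boltzmann const, eV/K
    k_s = k0 * math.exp(-Ea_eV / (k_B * T_kelvin))      # surface reaction coeff.
    return (k_s * h_g) / (k_s + h_g) * Cg_over_N

# Low T: surface-reaction limited (strongly T-dependent);
# high T: saturates toward the mass-transport limit h_g * Cg_over_N.
for T in (900.0, 1100.0, 1400.0):
    print(T, grove_growth_rate(T))
```

The series-resistance form makes the two growth-control regimes named in the abstract explicit: whichever of k_s and h_g is smaller dominates the rate.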

  13. Dynamic mechanical response and a constitutive model of Fe-based high temperature alloy at high temperatures and strain rates.

    PubMed

    Su, Xiang; Wang, Gang; Li, Jianfeng; Rong, Yiming

    2016-01-01

    The effects of strain rate and temperature on the dynamic behavior of Fe-based high temperature alloy were studied. The strain rates were 0.001-12,000 s⁻¹, at temperatures ranging from room temperature to 800 °C. A phenomenological constitutive model (Power-Law constitutive model) was proposed considering adiabatic temperature rise and accurate material thermal physical properties. In particular, the effect of the specific heat capacity on the adiabatic temperature rise was studied. The constitutive model was verified to be accurate by comparison between predicted and experimental results. PMID:27186468

  14. Boosting Degree Completion and Transfer Rates: An Examination of Counseling/Advising Using the Relationship-Based Model

    ERIC Educational Resources Information Center

    Jefferson, Trevina

    2010-01-01

    The purpose of this pilot study was to explore another way to boost degree completion and transfer rates through helping students develop informed decision-making power; specifically, counseling methods in Los Angeles Community Colleges using the Relationship-Based model was examined. As part of this study, students would develop their research…

  15. Aftershock triggering by postseismic stresses: A study based on Coulomb rate-and-state models

    NASA Astrophysics Data System (ADS)

    Cattania, Camilla; Hainzl, Sebastian; Wang, Lifeng; Enescu, Bogdan; Roth, Frank

    2015-04-01

    The spatiotemporal clustering of earthquakes is a feature of medium- and short-term seismicity, indicating that earthquakes interact. However, controversy exists about the physical mechanism behind aftershock triggering: static stress transfer and reloading by postseismic processes have been proposed as explanations. In this work, we use a Coulomb rate-and-state model to study the role of coseismic and postseismic stress changes on aftershocks and focus on two processes: creep on the main shock fault plane (afterslip) and secondary aftershock triggering by previous aftershocks. We model the seismic response to Coulomb stress changes using the Dieterich constitutive law and focus on two events: the Parkfield, Mw = 6.0, and the Tohoku, Mw = 9.0, earthquakes. We find that modeling secondary triggering systematically improves the maximum log likelihood fit of the sequences. The effect of afterslip is more subtle and difficult to assess for near-fault events, where model errors are largest. More robust conclusions can be drawn for off-fault aftershocks: following the Tohoku earthquake, afterslip promotes shallow crustal seismicity in the Fukushima region. Simple geometrical considerations indicate that afterslip-induced stress changes may have been significant on trench-parallel crustal fault systems following several of the largest recorded subduction earthquakes. Moreover, the time dependence of afterslip strongly enhances its triggering potential: seismicity triggered by an instantaneous stress change decays more quickly than seismicity triggered by gradual loading, and as a result we find afterslip to be particularly important between a few weeks and a few months after the main shock.
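The Dieterich constitutive law used above has a closed-form response to a sudden Coulomb stress step, which is the building block of such Coulomb rate-and-state models. A minimal sketch with illustrative parameter values (not those fitted for Parkfield or Tohoku):

```python
import math

# Hedged sketch of the Dieterich (1994) seismicity-rate response to a sudden
# Coulomb stress step dCFS: R(t) = r / (1 + (exp(-dCFS/(a*sigma)) - 1)
# * exp(-t/t_a)), with background rate r, rate-and-state parameter a*sigma
# and aftershock relaxation time t_a. All numbers below are illustrative.

def dieterich_rate(t_days, dcfs=0.1, a_sigma=0.04, r=1.0, t_a=3650.0):
    """Seismicity rate at time t (days) after a stress step dcfs (MPa)."""
    gamma = (math.exp(-dcfs / a_sigma) - 1.0) * math.exp(-t_days / t_a)
    return r / (1.0 + gamma)

# Positive steps: the rate jumps by exp(dCFS/(a*sigma)) at t = 0, then decays
# (Omori-like) back to r. Negative steps produce a rate deficit instead.
print(dieterich_rate(0.0), dieterich_rate(36500.0))
```

The contrast between this instantaneous-step response and the slower decay produced by gradual afterslip loading is exactly the effect the abstract highlights in its final sentence.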

  16. A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity.

    PubMed

    Rachmuth, Guy; Shouval, Harel Z; Bear, Mark F; Poon, Chi-Sang

    2011-12-01

    Current advances in neuromorphic engineering have made it possible to emulate complex neuronal ion channel and intracellular ionic dynamics in real time using highly compact and power-efficient complementary metal-oxide-semiconductor (CMOS) analog very-large-scale-integrated circuit technology. Recently, there has been growing interest in the neuromorphic emulation of the spike-timing-dependent plasticity (STDP) Hebbian learning rule by phenomenological modeling using CMOS, memristor or other analog devices. Here, we propose a CMOS circuit implementation of a biophysically grounded neuromorphic (iono-neuromorphic) model of synaptic plasticity that is capable of capturing both the spike rate-dependent plasticity (SRDP, of the Bienenstock-Cooper-Munro or BCM type) and STDP rules. The iono-neuromorphic model reproduces bidirectional synaptic changes with NMDA receptor-dependent and intracellular calcium-mediated long-term potentiation or long-term depression assuming retrograde endocannabinoid signaling as a second coincidence detector. Changes in excitatory or inhibitory synaptic weights are registered and stored in a nonvolatile and compact digital format analogous to the discrete insertion and removal of AMPA or GABA receptor channels. The versatile Hebbian synapse device is applicable to a variety of neuroprosthesis, brain-machine interface, neurorobotics, neuromimetic computation, machine learning, and neural-inspired adaptive control problems. PMID:22089232

  17. An enhanced rate-based emission trading program for NOX: the Dutch model.

    PubMed

    Sholtz, A M; Van Amburg, B; Wochnick, V K

    2001-12-01

    Since 1997 government and industry in The Netherlands have been engaged in intensive policy discussions on how to design an emission trading program that would satisfy the Government's policy objectives within the national and international regulatory framework and accommodate industry's need for a flexible and cost-effective approach. Early on in the discussion the most promising solution was a rate-based approach, which dynamically allocated saleable emission credits based on a performance standard rate and the actual energy used by facilities. All industrial facilities above a threshold of 20 MWth would be judged on their ability to meet this performance rate. Those "cleaner" than the standard can sell excess credits to others whose allocation is less than their actual NOX emissions. With some changes in law, such a design could be made to fit well into the national and EU legislative framework while at the same time uniquely meeting industry's requirement of flexibility toward economic growth and facility expansion. (An analysis of the legislative changes required will be given in a separate paper by Chris Dekkers.) However, the environmental outcome of such a system is not as certain as under an absolute emission cap. At the request of the Netherlands Ministry of Housing, Spatial Planning and the Environment (VROM), Automated Credit Exchange (ACE), in close cooperation with the working group of government and industry representatives, introduced a number of features into the Dutch NOX program allowing full exploitation of market mechanisms while permitting intermediate adjustments in the performance standard rates. The design is geared toward meeting environmental targets without jeopardizing the trading market the program intends to create. The paper discusses the genesis of the two-tier credit system ACE helped to design, explains the differences between primary (fixed) and secondary (variable) credits, and outlines how the Dutch system is expected to

  18. The contagious nature of imprisonment: an agent-based model to explain racial disparities in incarceration rates

    PubMed Central

    Lum, Kristian; Swarup, Samarth; Eubank, Stephen; Hawdon, James

    2014-01-01

    We build an agent-based model of incarceration based on the susceptible–infected–susceptible (SIS) model of infectious disease propagation. Our central hypothesis is that the observed racial disparities in incarceration rates between Black and White Americans can be explained as the result of differential sentencing between the two demographic groups. We demonstrate that if incarceration can be spread through a social influence network, then even relatively small differences in sentencing can result in large disparities in incarceration rates. Controlling for effects of transmissibility, susceptibility and influence network structure, our model reproduces the observed large disparities in incarceration rates given the differences in sentence lengths for White and Black drug offenders in the USA without extensive parameter tuning. We further establish the suitability of the SIS model as applied to incarceration by demonstrating that the observed structural patterns of recidivism are an emergent property of the model. In fact, our model shows a remarkably close correspondence with California incarceration data. This work advances efforts to combine the theories and methods of epidemiology and criminology. PMID:24966237
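The mechanism described above can be illustrated with a toy SIS process on a random contact network in which two groups differ only in how long an "infected" (incarcerated) agent remains in that state. This is a hedged sketch with arbitrary parameters, not the paper's calibrated model.

```python
import random

# Hedged toy SIS-on-a-network sketch of the mechanism above: two equal-size
# groups share one Erdos-Renyi contact network and one transmissibility beta;
# they differ only in recovery_steps ("sentence length"). All parameters are
# illustrative assumptions, not the paper's calibrated values.

def simulate_sis(n=400, p_edge=0.02, beta=0.03, steps=200,
                 recovery_steps=(5, 15), seed=1):
    rng = random.Random(seed)
    group = [i % 2 for i in range(n)]                 # two equal-size groups
    nbrs = [[] for _ in range(n)]
    for i in range(n):                                # random contact network
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                nbrs[i].append(j)
                nbrs[j].append(i)
    state = [1 if rng.random() < 0.05 else 0 for _ in range(n)]
    clock = [recovery_steps[group[i]] if state[i] else 0 for i in range(n)]
    for _ in range(steps):
        infected_now = [i for i in range(n) if state[i]]
        new_inf = [i for i in range(n) if not state[i]
                   and rng.random() < 1 - (1 - beta)
                   ** sum(state[j] for j in nbrs[i])]
        for i in infected_now:                        # serve one step
            clock[i] -= 1
            if clock[i] == 0:
                state[i] = 0
        for i in new_inf:
            state[i] = 1
            clock[i] = recovery_steps[group[i]]
    prev = [0.0, 0.0]
    for i in range(n):
        prev[group[i]] += state[i] / (n / 2)
    return prev                                       # prevalence per group

print(simulate_sis())
```

Because an agent's expected time in the infected state multiplies its chance of exposing neighbours, the longer-duration group typically settles at a disproportionately higher prevalence, which is the amplification effect the paper quantifies.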

  19. Shell Model Based Reaction Rates for rp-PROCESS Nuclei in the Mass Range A=44-63

    NASA Astrophysics Data System (ADS)

    Fisker, J. L.; Barnard, V.; Görres, J.; Langanke, K.; Martínez-Pinedo, G.; Wiescher, M. C.

    2001-11-01

    We have used large-scale shell-model diagonalization calculations to determine the level spectra, proton spectroscopic factors, and electromagnetic transition probabilities for proton rich nuclei in the mass range A=44-63. Based on these results and the available experimental data, we calculated the resonances for proton capture reactions on neutron deficient nuclei in this mass range. We also calculated the direct capture processes on these nuclei in the framework of a Woods-Saxon potential model. Taking into account both resonant and direct contributions, we determined the ground-state proton capture reaction rates for these nuclei under hot hydrogen burning conditions for temperatures between 10^8 and 10^10 K. The calculated compound-nucleus level properties and the reaction rates are presented here; the rates are also available in computer-readable format from the authors.
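Resonant contributions of the kind tabulated above are conventionally combined with the standard narrow-resonance sum over resonance energies and strengths. A minimal sketch follows; the resonance list is purely hypothetical, and the prefactor is the standard constant for strengths in MeV.

```python
import math

# Hedged sketch of the standard narrow-resonance reaction-rate sum used to
# build proton-capture rates like those in the paper. With resonance energies
# E_i and strengths (omega*gamma)_i in MeV, reduced mass mu in amu, and
# temperature T9 in GK:
#   N_A<sigma v> = 1.5399e11 * (mu*T9)**-1.5
#                  * sum_i wg_i * exp(-11.605 * E_i / T9)   [cm^3 s^-1 mol^-1]
# The resonance list below is an illustrative assumption, not data from the paper.

def narrow_resonance_rate(t9, resonances, mu=0.98):
    """resonances: list of (E_r [MeV], omega_gamma [MeV]) pairs."""
    s = sum(wg * math.exp(-11.605 * e_r / t9) for e_r, wg in resonances)
    return 1.5399e11 * (mu * t9) ** -1.5 * s

demo = [(0.2, 1e-7), (0.6, 5e-6)]          # hypothetical resonances
for t9 in (0.3, 1.0, 3.0):
    print(t9, narrow_resonance_rate(t9, demo))
```

The exponential Boltzmann factors explain why low-lying resonances dominate at nova temperatures while higher-lying ones take over under the hotter rp-process conditions quoted above.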

  20. A model-based evaluation of sedimentary reconstructions of 10Be production rates

    NASA Astrophysics Data System (ADS)

    Carney, Lewis; Plancherel, Yves; Khatiwala, Samar; Henderson, Gideon

    2016-04-01

    Atmospheric production of 10Be is small when solar activity and, therefore, solar magnetic field and total solar irradiance are strong. Variations in solar activity affect climate and the production of other climate-relevant isotopes, such as 14C. Solar activity is thus an important variable to constrain. Since 10Be production is clearly related to solar activity and the cycle of beryllium is simpler than that of carbon, 10Be records in ice cores have been used to reconstruct total solar irradiance variability. Unfortunately, 10Be records in ice cores are not only affected by variations in atmospheric production, but are also modulated by changes in wind patterns since spatiotemporal atmospheric 10Be gradients are quite large. In that context, sedimentary 10Be records from the abyssal ocean could be of great interest: since the residence time of 10Be in the ocean is thought to be comparable to the overturning time-scale of the ocean, spatial 10Be gradients may be relatively weaker than those in the atmosphere. Under these conditions, regional oceanic variability should only weakly affect the distribution of 10Be in the ocean and local sedimentary 10Be records are expected to represent the global average 10Be production better than 10Be measured in ice cores. We here show results from a global ocean model of 10Be that we use to investigate the spatial variability of simulated sedimentary 10Be records and test the sensitivity of the 10Be sedimentary flux to uncertainties in the circulation field and in the particle chemistry of beryllium. Our ocean model is based on the Transport Matrix method. The surface 10Be input fluxes are taken from atmospheric model simulations. Our model experiments, constrained by available dissolved 10Be data, show that there exist regions in the ocean where the sedimentary 10Be flux is relatively insensitive to changes in input patterns and magnitudes, assumed particle chemistry and flux patterns, and ocean circulation. We submit that

  1. Cosmogenic nuclide production rates as a function of latitude and altitude calculated via a physics based model and excitation functions

    NASA Astrophysics Data System (ADS)

    Argento, D.; Reedy, R. C.; Stone, J. O.

    2012-12-01

    Cosmogenic nuclides have been used to develop a set of tools critical to the quantification of a wide range of geomorphic and climatic processes and events (Dunai 2010). Having reliable absolute measurement methods has had great impact on research constraining ice age extents as well as providing important climatic data via well constrained erosion rates, etc. Continuing to improve CN methods is critical for these sciences. While significant progress has been made in the last two decades to reduce uncertainties (Dunai 2010; Gosse & Phillips 2001), numerous aspects still need to be refined in order to achieve the analytic resolution desired by glaciologists and geomorphologists. In order to investigate the finer details of the radiation responsible for cosmogenic nuclide production, we have developed a physics-based model that simulates the cascade of primary and secondary cosmic rays through the atmosphere. In this study, a Monte Carlo method radiation transport code, MCNPX, is used to model the galactic cosmic-ray (GCR) radiation impinging on the upper atmosphere. Beginning with a spectrum of high energy protons and alpha particles at the top of the atmosphere, the code tracks the primary and resulting secondary particles through a model of the Earth's atmosphere and into the lithosphere. Folding the neutron and proton fluxes with energy-dependent cross sections for nuclide production provides production rates for key cosmogenic nuclides (Argento et al. 2012, in press; Reedy 2012, in press). Our initial study for high latitudes shows that production rates scale differently for each nuclide (Argento 2012, in press). Furthermore, the attenuation length for each of these nuclide production rates increases with altitude, and again, they increase at different rates. This has the consequence of changing the production rate ratio as a function of altitude. The Earth's geomagnetic field differentially filters low-energy cosmic rays by deflecting them away

  2. Applying constraints on model-based methods: estimation of rate constants in a second order consecutive reaction.

    PubMed

    Kompany-Zareh, Mohsen; Khoshkam, Maryam

    2013-02-01

    This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA was not absorbing in the visible region of interest and thus a closure rank deficiency problem did not exist. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. On that basis, three types of model-based procedures were applied to estimate the rate constants of the kinetic system, according to the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters upon applying appropriate constraints and adjustable initial concentrations of reagents. PMID:23220674
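Model-based fitting of the kind described above wraps a nonlinear optimizer around a forward integration of the kinetic ODEs. A minimal sketch of that inner step, for the second-order consecutive scheme A + B → I → P, follows; the rate constants and initial concentrations are illustrative, not the paper's fitted values.

```python
# Hedged sketch: Euler forward integration of the second-order consecutive
# scheme A + B -> I (rate k1*[A][B]), I -> P (rate k2*[I]). Model-based
# fitting as in the abstract wraps a nonlinear least-squares optimizer
# around exactly this kind of integration. All numbers are illustrative.

def simulate(k1=0.5, k2=0.1, a0=1.0, b0=1.2, dt=0.01, steps=5000):
    a, b, i_, p = a0, b0, 0.0, 0.0
    for _ in range(steps):
        r1 = k1 * a * b          # second-order formation of the intermediate
        r2 = k2 * i_             # first-order decay of the intermediate
        a, b = a - r1 * dt, b - r1 * dt
        i_, p = i_ + (r1 - r2) * dt, p + r2 * dt
    return a, b, i_, p

a, b, i_, p = simulate()
# Mass balance: A consumed equals I + P formed; B tracks A consumption.
print(a, b, i_, p)
```

In a full fit, the optimizer would adjust (k1, k2), and optionally the initial concentrations as the paper recommends, to minimize the residual between measured spectra and spectra reconstructed from these simulated concentration profiles.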

  3. Strain-rate sensitivity of foam materials: A numerical study using 3D image-based finite element model

    NASA Astrophysics Data System (ADS)

    Sun, Yongle; Li, Q. M.; Withers, P. J.

    2015-09-01

    Realistic simulations are increasingly demanded to clarify the dynamic behaviour of foam materials because, on one hand, the significant variability (e.g. 20% scatter band) of foam properties and the lack of reliable dynamic test methods for foams make it particularly difficult to evaluate strain-rate sensitivity accurately in experiments, while on the other hand numerical models based on idealised cell structures (e.g. Kelvin and Voronoi) may not be sufficiently representative to capture the actual structural effect. To overcome these limitations, the strain-rate sensitivity of the compressive and tensile properties of closed-cell aluminium Alporas foam is investigated in this study by means of meso-scale realistic finite element (FE) simulations. The FE modelling method based on X-ray computed tomography (CT) images is introduced first, as well as its applications to foam materials. Then the compression and tension of Alporas foam at a wide variety of applied nominal strain-rates are simulated using an FE model constructed from the actual cell geometry obtained from the CT image. The strain-rate sensitivity of compressive strength (collapse stress) and tensile strength (0.2% offset yield point) is evaluated when considering different cell-wall material properties. The numerical results show that the rate dependence of the cell-wall material is the main cause of the strain-rate hardening of the compressive and tensile strengths at low and intermediate strain-rates. When the strain-rate is sufficiently high, shock compression is initiated, which significantly enhances the stress at the loading end and has a complicated effect on the stress at the supporting end. The plastic tensile wave effect is evident at high strain-rates, but shock tension cannot develop in Alporas foam due to the softening associated with a single fracture process zone occurring in the tensile response. In all cases the micro inertia of individual cell walls subjected to localised deformation is found to

  4. Modelling temperature-compensated physiological rates, based on the co-ordination of responses to temperature of developmental processes.

    PubMed

    Parent, B; Turc, O; Gibon, Y; Stitt, M; Tardieu, F

    2010-05-01

    Temperature fluctuates rapidly and affects all developmental and metabolic processes. This often obscures the effects of developmental trends or of other environmental conditions when temperature fluctuates naturally. A method is proposed for modelling temperature-compensated rates, based on the coordination of the temperature responses of developmental processes. In a data set comprising 41 experiments in the greenhouse, growth chamber, or the field, the temperature responses in the range of 6-36 degrees C for different processes were compared in three species: maize, rice, and Arabidopsis thaliana. Germination, cell division, expansive growth rate, leaf initiation, and phenology showed coordinated temperature responses and followed common laws within each species. The activities of 10 enzymes involved in carbon metabolism exhibited monotonic exponential responses across the whole range of 10-40 degrees C. Hence, the temperature dependence of developmental processes is not explained by a simple relationship to central metabolism. Temperature-compensated rates of development were calculated from the equations of the response curves, by expressing rates per unit equivalent time at 20 degrees C. This resulted in stable rates when temperatures fluctuated over a large range (for which classical thermal time was inefficient), and in time courses of leaf development which were common to several experiments with different temperature scenarios. PMID:20194927
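The "equivalent time at 20 °C" idea described above converts each interval at temperature T into equivalent time using a common response function rate(T)/rate(20). The sketch below uses an Arrhenius-type response purely for illustration; the paper fits species-specific curves rather than this assumed form.

```python
import math

# Hedged sketch of temperature-compensated time: each hour at temperature T
# is weighted by rate(T)/rate(20 C). An Arrhenius-type response with an
# assumed activation energy is used here purely for illustration; the study
# above derives species-specific response curves instead.

def relative_rate(temp_c, Ea=65000.0):
    R = 8.314                                  # gas constant, J mol^-1 K^-1
    T, Tref = temp_c + 273.15, 293.15
    return math.exp(-Ea / (R * T)) / math.exp(-Ea / (R * Tref))

def equivalent_time_at_20(temps_c, dt_hours=1.0):
    """Accumulate equivalent hours at 20 C over a series of hourly temperatures."""
    return sum(relative_rate(t) * dt_hours for t in temps_c)

print(equivalent_time_at_20([20.0] * 24))      # exactly 24.0 by construction
print(equivalent_time_at_20([12.0, 28.0] * 12))
```

Because the response is convex, a day fluctuating around 20 °C accumulates a different total than a constant 20 °C day, which is why this weighting outperforms classical linear thermal time under strongly fluctuating temperatures.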

  5. Relaxed Poisson cure rate models.

    PubMed

    Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N

    2016-03-01

    The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov, ) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin, ). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos, ) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al., ). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. PMID:26686485

  6. Toy Stories: Modeling Rates

    ERIC Educational Resources Information Center

    Swanson, Patricia E.

    2015-01-01

    Elementary school mathematics is increasingly recognized for its crucial role in developing the foundational skills and understandings for algebra. In this article, the author uses a lesson to introduce the concept of "rates"--comparing two different types and units of measure--and how to graph them. Described is the lesson and shared…

  7. The effect of an instructional program based on health belief model in decreasing cesarean rate among primiparous pregnant mothers

    PubMed Central

    Hassani, Laleh; Aghamolaei, Teamur; Ghanbarnejad, Amin; Dadipoor, Sakineh

    2016-01-01

    Introduction: Although cesarean section has saved many mothers’ and infants’ lives, the problem lies in its increasing prevalence. According to recent statistics, the current rate of cesarean in Iran is 3–4 times the standard rate defined by the WHO. Therefore, the present study aimed to estimate the effect of an instructional program based on the health belief model on reducing the cesarean rate among primiparous pregnant women. Materials and Methods: In this semi-experimental research, 60 primiparous women who had visited Bandar Abbas Healthcare Centers were selected as the subjects. They were in their 26–30th week of pregnancy. They were selected by a multi-stage cluster sampling method (a combination of clustering and simple randomization) and were divided into an experimental group and a control group. The data were gathered using a valid and reliable questionnaire. The instructional intervention was conducted after completion of the pretest questionnaire, based on the sub-constructs of the health belief model, in six instructional sessions. One month after the intervention, posttest questionnaires were completed by the subjects in both groups. The data were analyzed using descriptive statistics, standard deviation, independent t-test, and paired t-test. The significance level was set at <0.05. Results: The two groups had a significant difference in awareness score, perceived sensitivity, intensity, benefits, barriers, self-efficacy, and performance (P < 0.001). In the experimental group, nine subjects (30%) had a natural delivery. Conclusion: According to the findings of the current research, an instructional program designed on the basis of the health belief model can significantly influence pregnant women's awareness, intention, and choice of delivery type. PMID:27512693

  8. Agent-based mathematical modeling as a tool for estimating Trypanosoma cruzi vector-host contact rates.

    PubMed

    Yong, Kamuela E; Mubayi, Anuj; Kribs, Christopher M

    2015-11-01

    The parasite Trypanosoma cruzi, spread by triatomine vectors, affects over 100 mammalian species throughout the Americas, including humans, in whom it causes Chagas' disease. In the U.S., only a few autochthonous cases have been documented in humans, but prevalence is high in sylvatic hosts (primarily raccoons in the southeast and woodrats in Texas). Sylvatic transmission of T. cruzi is maintained by the vector species Triatoma sanguisuga and Triatoma gerstaeckeri biting their preferred hosts, creating multiple interacting vector-host cycles. The goal of this study is to quantify the rate of contacts between different host and vector species native to Texas using an agent-based model framework. The contact rates, which represent bites, are required to estimate transmission coefficients, which can be applied to models of infection dynamics. In addition to quantitative estimates, the results confirm host irritability (in conjunction with host density) and vector starvation thresholds and dispersal as determining factors for vector density as well as host-vector contact rates. PMID:26215127
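    The core of such an agent-based contact-rate estimate, reduced to its simplest form, can be sketched as follows. This is a toy illustration, not the authors' model: the host species, densities, and bite probability are invented, and the actual study additionally models starvation thresholds, host irritability, and vector dispersal.

```python
import random

def simulate_contacts(n_vectors, hosts, steps, bite_prob=0.3, seed=1):
    """Toy agent-based sketch: each vector attempts at most one bite per
    time step, choosing a host species with probability proportional to
    that species' density. Returns bites per vector per step for each
    host species (a crude per-species contact rate)."""
    rng = random.Random(seed)
    total_density = sum(hosts.values())
    bites = {h: 0 for h in hosts}
    for _ in range(steps):
        for _ in range(n_vectors):
            if rng.random() < bite_prob:  # vector is hungry and finds a host
                r = rng.random() * total_density
                acc = 0.0
                for host, density in hosts.items():
                    acc += density
                    if r < acc:
                        bites[host] += 1
                        break
    return {h: b / (n_vectors * steps) for h, b in bites.items()}
```

    Contact rates estimated this way (bites per vector per unit time, per host species) are exactly the quantities the abstract describes feeding into transmission coefficients.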

  9. Modeling the shear rate and pressure drop in a hydrodynamic cavitation reactor with experimental validation based on KI decomposition studies.

    PubMed

    Badve, Mandar P; Alpar, Tibor; Pandit, Aniruddha B; Gogate, Parag R; Csoka, Levente

    2015-01-01

    A mathematical model describing the shear rate and pressure variation in the complex flow field created in a hydrodynamic cavitation reactor (stator and rotor assembly) is presented in this study. The reactor is designed such that the rotor surface carries indentations, and cavitational events are expected to occur on the surface of the rotor as well as within the indentations. The flow characteristics of the fluid were investigated using high-accuracy compact difference schemes applied to the Navier-Stokes equations. The evolution of streamline structures during rotation, the pressure field and the shear rate of a Newtonian fluid flow were numerically established. The simulation results suggest that the characteristics of the shear rate and pressure field differ considerably depending on the rotational speed of the rotor. The area of the high-shear zone at the indentation leading edge was observed to shrink with increasing rotational speed, although the magnitude of the shear rate increases linearly. It is therefore concluded that higher rotor speeds tend to stabilize the flow, which in turn results in less cavitational activity than that observed around 2200-2500 RPM. Experiments were carried out with an initial KI concentration of 2000 ppm; a maximum of 50 ppm of liberated iodine was observed at 2200 RPM. Both the experimental and the simulation results indicate that cavitational activity peaks when the rotation speed is around 2200-2500 RPM. PMID:24924259

  10. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2014-04-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
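    The analytical tractability the authors exploit comes from the linear-Gaussian structure of the static-offset problem: with Gaussian observation noise and a Gaussian prior on slip, the posterior is Gaussian with a closed-form mean and covariance. A minimal two-parameter sketch of that closed form (generic linear-Gaussian algebra, not the authors' implementation; the Green's function rows, noise variance sigma2 and prior variance tau2 below are illustrative):

```python
def bayes_linear(G, d, sigma2, tau2):
    """Analytical Gaussian posterior for d = G m + noise, with m a
    2-vector, prior m ~ N(0, tau2 * I), noise variance sigma2.
    Posterior precision A = G'G/sigma2 + I/tau2;
    posterior mean = A^{-1} G'd / sigma2; covariance = A^{-1}.
    G is a list of 2-tuples (rows); d is the data vector."""
    a = sum(g[0] * g[0] for g in G) / sigma2 + 1.0 / tau2
    b = sum(g[0] * g[1] for g in G) / sigma2
    c = sum(g[1] * g[1] for g in G) / sigma2 + 1.0 / tau2
    r0 = sum(g[0] * y for g, y in zip(G, d)) / sigma2
    r1 = sum(g[1] * y for g, y in zip(G, d)) / sigma2
    det = a * c - b * b  # 2x2 inverse done explicitly
    mean = ((c * r0 - b * r1) / det, (a * r1 - b * r0) / det)
    cov = ((c / det, -b / det), (-b / det, a / det))
    return mean, cov
```

    Because the mean and covariance are explicit formulas rather than the output of an iterative solver, uncertainties come for free at each time step, which is what makes the real-time joint inversion feasible.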

  12. Towards optimal design of sport footwear based on muscle activity and minimum loading rate using simplified model.

    PubMed

    Khassetarash, Arash; Hassannejad, Reza

    2015-08-01

    Recently, researchers have been looking for a model to relate the ground reaction force to the vibration parameters of sport shoes. A 4-degrees-of-freedom mass-spring-damper model and a dimensionless objective function have been used in the literature to simulate the functionality of the central nervous system based on the muscle tuning paradigm. In this study, a two-phase optimization method was proposed to better simulate the functionality of the central nervous system. A comparison between the results of this method and those in the literature demonstrates improved accuracy. Additionally, it has been reported that the loading rate in the time history of the ground reaction force plays a role in overuse injuries of the tibia and tarsals; therefore, to arrive at an optimum sport-shoe design, a new objective function was introduced and the proposed method was used to find the best possible range of shoe parameters for pre-fatigue and post-fatigue conditions. Results indicated that there is no possible solution that brings the loading rate to the desired value; however, the region in which the lowest values of the objective function were achieved was designated the safe region. It was shown that the size of this safe region shrinks as muscles fatigue. The results also demonstrated a high dependency of the safe region on the mass distribution and touchdown velocities of the lower extremities. PMID:26238788

  13. Beyond The Blueprint: Development Of Genome-Informed Trait-Based Models For Prediction Of Microbial Dynamics And Biogeochemical Rates

    NASA Astrophysics Data System (ADS)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Johnson, J. N.; Bouskill, N.; Hug, L. A.; Thomas, B. C.; Castelle, C. J.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.

    2014-12-01

    In soils and sediments, microorganisms perform essential ecosystem services through their roles in regulating the stability of carbon, the flux of nutrients, and the purification of water. But these are complex systems in which the physical, chemical and biological components are all intimately connected. Components of this complexity are gradually being uncovered, and our understanding of the extent of microbial functional diversity in particular has been greatly enhanced by the development of cultivation-independent approaches. However, we have not moved far beyond a descriptive and correlative use of this powerful resource. As the ability to reconstruct thousands of genomes from microbial populations using metagenomic techniques gains momentum, the challenge will be to develop an understanding of how these metabolic blueprints serve to influence the fitness of organisms within these complex systems and how populations emerge and impact the physical and chemical properties of their environment. In this presentation we discuss the development of a trait-based model of microbial activity that simulates coupled guilds of microorganisms parameterized with traits extracted from large-scale metagenomic data. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor and acceptor reactions to predict the energy available for respiration, biomass development and exo-enzyme production. Each group within a functional guild is parameterized with a unique combination of traits governing organism fitness under dynamic environmental conditions. This presentation will address our latest developments in the estimation of trait values related to growth rate and the identification and linkage of key fitness traits associated with respiratory and fermentative pathways, macromolecule depolymerization enzymes and nitrogen fixation from metagenomic data. We are testing model sensitivity to initial microbial composition and intra

  14. Calcium-based plasticity model explains sensitivity of synaptic changes to spike pattern, rate, and dendritic location.

    PubMed

    Graupner, Michael; Brunel, Nicolas

    2012-03-01

    Multiple stimulation protocols have been found to be effective in changing synaptic efficacy by inducing long-term potentiation or depression. In many of those protocols, increases in postsynaptic calcium concentration have been shown to play a crucial role. However, it is still unclear whether and how the dynamics of the postsynaptic calcium alone determine the outcome of synaptic plasticity. Here, we propose a calcium-based model of a synapse in which potentiation and depression are activated above calcium thresholds. We show that this model gives rise to a large diversity of spike timing-dependent plasticity curves, most of which have been observed experimentally in different systems. It accounts quantitatively for plasticity outcomes evoked by protocols involving patterns with variable spike timing and firing rate in hippocampus and neocortex. Furthermore, it allows us to predict that differences in plasticity outcomes across studies are due to differences in the parameters defining the calcium dynamics. The model provides a mechanistic understanding of how various stimulation protocols provoke specific synaptic changes through the dynamics of calcium concentration and thresholds that implement, in simplified fashion, the protein signaling cascades leading to long-term potentiation and long-term depression. The combination of biophysical realism and analytical tractability makes it the ideal candidate to study plasticity at the synapse, neuron, and network levels. PMID:22357758
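    The two-threshold mechanism can be sketched in a few lines: the weight is depressed while calcium exceeds a depression threshold and potentiated while it also exceeds a higher potentiation threshold. This is a deliberately stripped-down sketch in the spirit of Graupner and Brunel's model; the threshold and rate values are illustrative, and the published model additionally includes activation noise and weight bistability.

```python
def run_synapse(ca_trace, theta_d=1.0, theta_p=1.3,
                gamma_d=0.1, gamma_p=0.3, w0=0.5, dt=0.001):
    """Integrate a synaptic weight w over a calcium trace.
    Above theta_p the weight is driven toward 1 (potentiation);
    above theta_d it is simultaneously driven toward 0 (depression).
    The net direction of change depends on which drive dominates."""
    w = w0
    for ca in ca_trace:
        dw = 0.0
        if ca >= theta_p:
            dw += gamma_p * (1.0 - w)   # potentiation drive
        if ca >= theta_d:
            dw -= gamma_d * w           # depression drive
        w += dw * dt
    return w
```

    A sustained high-calcium transient then yields net potentiation, while an intermediate transient (above theta_d but below theta_p) yields net depression, reproducing the qualitative calcium-control hypothesis the abstract describes.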

  15. Event-based minimum-time control of oscillatory neuron models: phase randomization, maximal spike rate increase, and desynchronization.

    PubMed

    Danzl, Per; Hespanha, João; Moehlis, Jeff

    2009-12-01

    We present an event-based feedback control method for randomizing the asymptotic phase of oscillatory neurons. Phase randomization is achieved by driving the neuron's state to its phaseless set, a point at which its phase is undefined and is extremely sensitive to background noise. We consider the biologically relevant case of a fixed magnitude constraint on the stimulus signal, and show how the control objective can be accomplished in minimum time. The control synthesis problem is addressed using the minimum-time-optimal Hamilton-Jacobi-Bellman framework, which is quite general and can be applied to any spiking neuron model in the conductance-based Hodgkin-Huxley formalism. We also use this methodology to compute a feedback control protocol for optimal spike rate increase. This framework provides a straightforward means of visualizing isochrons, without actually calculating them in the traditional way. Finally, we present an extension of the phase randomizing control scheme that is applied at the population level, to a network of globally coupled neurons that are firing in synchrony. The applied control signal desynchronizes the population in a demand-controlled way. PMID:19911192

  16. Investigation of the mechanical behavior of kangaroo humeral head cartilage tissue by a porohyperelastic model based on the strain-rate-dependent permeability.

    PubMed

    Thibbotuwawa, Namal; Oloyede, Adekunle; Senadeera, Wijitha; Li, Tong; Gu, YuanTong

    2015-11-01

    Solid-interstitial fluid interaction, which depends on tissue permeability, is significant to the strain-rate-dependent mechanical behavior of humeral head (shoulder) cartilage. Due to anatomical and biomechanical similarities to the human shoulder, kangaroos present a suitable animal model. Therefore, indentation experiments were conducted on kangaroo shoulder cartilage tissues from low (10^-4/s) to moderately high (10^-2/s) strain-rates. A porohyperelastic model was developed based on the experimental characterization, and a permeability function that takes into account the effect of strain-rate on permeability (strain-rate-dependent permeability) was introduced into the model to investigate the effect of rate-dependent fluid flow on the tissue response. The prediction of the model with strain-rate-dependent permeability was compared with those of models using constant permeability and strain-dependent permeability. Compared to the model with constant permeability, the models with strain-dependent and strain-rate-dependent permeability better captured the experimental variation at all strain-rates (p < 0.05). Significant differences were not identified between the models with strain-dependent and strain-rate-dependent permeability at a strain-rate of 5 × 10^-3/s (p = 0.179). However, at a strain-rate of 10^-2/s, the model with strain-rate-dependent permeability was significantly better at capturing the experimental results (p < 0.005). The findings thus revealed the significance of rate-dependent fluid flow on tissue behavior at large strain-rates, providing insight into the mechanical deformation mechanisms of cartilage tissues. PMID:26275487

  17. Gas ultrasonic flow rate measurement through genetic-ant colony optimization based on the ultrasonic pulse received signal model

    NASA Astrophysics Data System (ADS)

    Hou, Huirang; Zheng, Dandan; Nie, Laixiao

    2015-04-01

    For gas ultrasonic flowmeters, the signals received by the ultrasonic sensors are susceptible to noise interference. If the signals are mingled with noise, the traditional double-threshold method can trigger mistakenly, causing a large error in the flow measurement. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received-signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, processing only the first three cycles of the received signal rather than the whole signal is proposed. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, the double-threshold method and enveloped zero-crossing. Local convergence does not appear with the GACO algorithm until -10 dB. The converging accuracy, converging speed and amount of computation of the GACO algorithm are further improved when using only the first three cycles (called GACO-3cycles). Experimental results involving actual received signals show that the accuracy of single-gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.
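    A parametric received-pulse model of the kind such estimators fit is commonly written as a power-law rise times an exponential decay modulating a carrier. The form and parameter names below are a standard choice from the ultrasonic-echo literature, not necessarily the exact model used in this study:

```python
import math

def pulse_model(t, amp, n, tau, freq, phase):
    """Common parametric model of an ultrasonic received pulse:
    envelope amp * t^n * exp(-t/tau) (zero before arrival, peaking
    at t = n*tau) modulating a carrier cos(2*pi*freq*t + phase)."""
    if t < 0:
        return 0.0
    envelope = amp * (t ** n) * math.exp(-t / tau)
    return envelope * math.cos(2.0 * math.pi * freq * t + phase)
```

    An optimizer such as GACO would then search the (amp, n, tau, freq, phase, arrival-time) space to best match the measured waveform, making the transit-time estimate robust to noise that defeats simple threshold triggering.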

  18. The degree of heart rate asymmetry is crucial for the validity of the deceleration and acceleration capacity indices of heart rate: A model-based study.

    PubMed

    Pan, Qing; Zhou, Gongzhan; Wang, Ruofan; Yu, Yihua; Li, Feng; Fang, Luping; Yan, Jing; Ning, Gangmin

    2016-09-01

    The deceleration capacity (DC) and acceleration capacity (AC) of heart rate are a pair of indices used for evaluating the autonomic nervous system (ANS). We assessed the role of heart rate asymmetry (HRA) in defining the relative performance of DC and AC using a mathematical model, which is able to generate a realistic RR interval (RRI) time series with controlled ANS states. The simulation produced a set of RRI series with random sympathetic and vagal activities. The multi-scale DCs and ACs were computed from the RRI series, and the correlation of DC and AC with the ANS functions was analyzed to evaluate the performance of the indices. In the model, the HRA level was modified by changing the inspiration/expiration (I/E) ratio to examine the influence of HRA on the performances of DC and AC. The results show that on the conventional scales (T=1, s=2), an HRA level above 50% results in a stronger association of DC with the ANS, compared with AC. On higher scales (T=4, s=6), there was no HRA and DC showed a similar performance to AC for all I/E ratios. The data suggest that the HRA level determines which of DC or AC is the optimal index for expressing ANS functions. Future clinical applications of DC and AC should be accompanied by an HRA analysis to provide a better index for assessing ANS. PMID:27392228
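    On the conventional scales (T=1, s=2), DC is computed by phase-rectified signal averaging (PRSA): anchor on beats longer than their predecessor and average a short window around each anchor. A minimal sketch (simplified; the standard definition also excludes anchors with very large RR jumps, and AC is obtained the same way with the anchor condition reversed):

```python
def deceleration_capacity(rri):
    """PRSA-style DC on conventional scales: anchors are beats longer
    than the preceding beat; each anchor contributes
    (x0 + x1 - x-1 - x-2) / 4, averaged over all anchors.
    rri: list of RR intervals in ms. Returns 0.0 if no anchors."""
    segs = []
    for i in range(2, len(rri) - 1):
        if rri[i] > rri[i - 1]:  # deceleration anchor
            segs.append((rri[i] + rri[i + 1] - rri[i - 1] - rri[i - 2]) / 4.0)
    return sum(segs) / len(segs) if segs else 0.0
```

    Because DC averages only around deceleration anchors and AC only around acceleration anchors, any asymmetry between decelerations and accelerations (HRA) feeds directly into the difference between the two indices, which is the effect the study quantifies.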

  19. Reliability of delirium rating scale (DRS) and delirium rating scale-revised-98 (DRS-R98) using variance-based multivariate modelling.

    PubMed

    Adamis, Dimitrios; Slor, Chantal J; Leonard, Maeve; Witlox, Joost; de Jonghe, Jos F M; Macdonald, Alastair J D; Trzepacz, Paula; Meagher, David

    2013-07-01

    Delirium's characteristic fluctuation in symptom severity complicates the assessment of test-retest reliability of scales using classical analyses, but the application of modelling to longitudinal data offers a new approach. We evaluated the test-retest reliability of the delirium rating scale (DRS) and the delirium rating scale-revised-98 (DRS-R98), two widely used instruments with high validity and inter-rater reliability. Two existing longitudinal datasets for each scale included DSM-IV criteria for delirium diagnosis and repeated measurements using the DRS or DRS-R98. To estimate the reliability coefficients RT and RΛ for each scale we used macros provided by Dr. Laenen at http://www.ibiostat.be/software/measurement.asp. For each dataset a linear mixed-effects model was fitted to estimate the variance-covariance parameters. A total of 531 cases, each with between 4 and 9 measurement points, were included across the studies, comprising both delirious and non-delirious patients. Comorbid dementia in the datasets varied from 27% to 55%. Overall RT values for the DRS were 0.71 and 0.50, and for the DRS-R98, 0.75 and 0.84. RΛ values for the DRS were 0.99 and 0.98, and for the DRS-R98, 0.92 and 0.96. Individual RT measures for the DRS-R98 and DRS across visits within studies showed more range than the overall values. Our models found high overall reliability for both scales. Multiple factors impact a scale's reliability values, including sample size, repeated measurements and patient population, in addition to rater variability. PMID:23522935

  20. An Extension to the Constructivist Coding Hypothesis as a Learning Model for Selective Feedback when the Base Rate Is High

    ERIC Educational Resources Information Center

    Ghaffarzadegan, Navid; Stewart, Thomas R.

    2011-01-01

    Elwin, Juslin, Olsson, and Enkvist (2007) and Henriksson, Elwin, and Juslin (2010) offered the constructivist coding hypothesis to describe how people code the outcomes of their decisions when availability of feedback is conditional on the decision. They provided empirical evidence only for the 0.5 base rate condition. This commentary argues that…

  1. [Prediction model of net photosynthetic rate of ginseng under forest based on optimized parameters support vector machine].

    PubMed

    Wu, Hai-wei; Yu, Hai-ye; Zhang, Lei

    2011-05-01

    Using the K-fold cross-validation method, two support vector machine functions, four kernel functions, grid search, a genetic algorithm and particle swarm optimization, the authors constructed support vector machine models with the best penalty parameter c and the best correlation coefficient. Using information granulation technology, the authors constructed a P particle and an epsilon particle for the factors affecting net photosynthetic rate, reducing the dimensionality of the determinants. The P particle comprises the visible-spectrum components; the epsilon particle comprises leaf temperature, scattered radiation, air temperature, and so on. This technology makes it possible to obtain the best correlation among photosynthetically active radiation, the visible spectrum and individual net photosynthetic rate. The authors constructed a training set and a forecasting set including photosynthetically active radiation, the P particle and the epsilon particle. The results show that the epsilon-SVR-RBF-genetic algorithm model, the nu-SVR-linear-grid-search model and the nu-SVR-RBF-genetic algorithm model achieve a correlation coefficient of up to 97% on the forecasting set including photosynthetically active radiation and the P particle. The penalty parameter c of the nu-SVR-linear-grid-search model is the smallest, so that model's generalization ability is the best. Forecasting the set including photosynthetically active radiation, the P particle and the epsilon particle with this model gives a correlation coefficient of up to 96%. PMID:21800612
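    The parameter search common to all these model variants is grid search with K-fold cross-validation. A generic framework sketch follows (the study's SVR fitting, kernel functions and heuristic optimizers are not reproduced; the `fit`/`score` pair is supplied by the caller and the toy pair used in the usage note is purely illustrative):

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and split them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def grid_search(xs, ys, fit, score, grid, k=5):
    """K-fold CV grid search: fit(xs, ys, **params) returns a model,
    score(model, xs, ys) returns an error to minimize.
    Returns (best_params, best_mean_error)."""
    best = None
    for params in grid:
        errs = []
        for fold in kfold_indices(len(xs), k):
            test = set(fold)
            tr_x = [x for i, x in enumerate(xs) if i not in test]
            tr_y = [y for i, y in enumerate(ys) if i not in test]
            te_x = [x for i, x in enumerate(xs) if i in test]
            te_y = [y for i, y in enumerate(ys) if i in test]
            model = fit(tr_x, tr_y, **params)
            errs.append(score(model, te_x, te_y))
        avg = sum(errs) / len(errs)
        if best is None or avg < best[1]:
            best = (params, avg)
    return best
```

    With an SVR-style `fit`, `grid` would enumerate candidate (c, gamma) pairs; a genetic algorithm or particle swarm simply replaces the exhaustive loop over `grid` with a guided search of the same objective.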

  2. Further tests of a model-based scheme for predicting pilot opinion ratings for large commercial transports

    NASA Technical Reports Server (NTRS)

    Rickard, W. W.; Levison, W. H.

    1981-01-01

    A methodology was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft on the basis of closed-loop criteria. Six longitudinal-axis approach configurations were studied covering a range of handling quality problems that included the presence of flexible aircraft modes. Using closed-loop performance requirements derived from task analyses and pilot interviews, predictions of performance/workload tradeoffs were obtained using an analytical pilot/vehicle model. A subsequent manned simulation study yielded objective performance measures and Cooper-Harper pilot ratings that were largely consistent with each other and with analytic predictions.

  3. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base...

  4. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base...

  5. A fuzzy-logic-based model to predict biogas and methane production rates in a pilot-scale mesophilic UASB reactor treating molasses wastewater.

    PubMed

    Turkdogan-Aydinol, F Ilter; Yetilmezsoy, Kaan

    2010-10-15

    A MIMO (multiple-input, multiple-output) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables, namely volumetric organic loading rate (OLR), volumetric total chemical oxygen demand (TCOD) removal rate (R_V), influent alkalinity, influent pH and effluent pH, were fuzzified using an artificial-intelligence-based approach. Trapezoidal membership functions with eight levels were constructed for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in IF-THEN format. The product (prod) method was employed as the inference operator, and the centre of gravity (COG, centroid) method for defuzzification. The fuzzy-logic predictions were compared with the outputs of two exponential non-linear regression models derived in this study. The UASB reactor showed remarkable performance in the treatment of molasses wastewater, with an average TCOD removal efficiency of 93 (±3)% and an average volumetric TCOD removal rate of 6.87 (±3.93) kg TCOD removed/m^3-day. The findings of this study clearly indicated that, compared to the non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited superior predictive performance in forecasting both biogas and methane production rates, with satisfactory determination coefficients above 0.98. PMID:20609515
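    The Mamdani pipeline described (trapezoidal membership functions, rule evaluation, centroid defuzzification) can be sketched in miniature with one input and two rules. The membership ranges and rules below are invented for illustration and use min-clipping rather than the study's product inference; the real model has five inputs and 134 rules:

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership function rising on [a, b], flat on
    [b, c], falling on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def mamdani(olr):
    """One-input Mamdani sketch: IF OLR is low THEN biogas is low;
    IF OLR is high THEN biogas is high. Rule strengths clip the output
    sets; the centre of gravity (COG) of the union defuzzifies."""
    w_low = trapmf(olr, 0, 0, 2, 5)    # firing strength of rule 1
    w_high = trapmf(olr, 2, 5, 10, 10) # firing strength of rule 2
    num = den = 0.0
    y = 0.0
    while y <= 10.0:  # discretized centroid over the output universe
        mu = max(min(w_low, trapmf(y, 0, 0, 2, 5)),
                 min(w_high, trapmf(y, 2, 5, 10, 10)))
        num += mu * y
        den += mu
        y += 0.05
    return num / den if den else 0.0
```

    A low loading rate then defuzzifies to a low predicted biogas rate and a high loading rate to a high one, with intermediate inputs blending the two clipped output sets.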

  6. Modeling kinetic rate variation in third generation DNA sequencing data to detect putative modifications to DNA bases.

    PubMed

    Schadt, Eric E; Banerjee, Onureena; Fang, Gang; Feng, Zhixing; Wong, Wing H; Zhang, Xuegong; Kislyuk, Andrey; Clark, Tyson A; Luong, Khai; Keren-Paz, Alona; Chess, Andrew; Kumar, Vipin; Chen-Plotkin, Alice; Sondheimer, Neal; Korlach, Jonas; Kasarskis, Andrew

    2013-01-01

    Current generation DNA sequencing instruments are moving closer to seamlessly sequencing genomes of entire populations as a routine part of scientific investigation. However, while significant inroads have been made identifying small nucleotide variation and structural variations in DNA that impact phenotypes of interest, progress has not been as dramatic regarding epigenetic changes and base-level damage to DNA, largely due to technological limitations in assaying all known and unknown types of modifications at genome scale. Recently, single-molecule real time (SMRT) sequencing has been reported to identify kinetic variation (KV) events that have been demonstrated to reflect epigenetic changes of every known type, providing a path forward for detecting base modifications as a routine part of sequencing. However, to date no statistical framework has been proposed to enhance the power to detect these events while also controlling for false-positive events. By modeling enzyme kinetics in the neighborhood of an arbitrary location in a genomic region of interest as a conditional random field, we provide a statistical framework for incorporating kinetic information at a test position of interest as well as at neighboring sites that help enhance the power to detect KV events. The performance of this and related models is explored, with the best-performing model applied to plasmid DNA isolated from Escherichia coli and mitochondrial DNA isolated from human brain tissue. We highlight widespread kinetic variation events, some of which strongly associate with known modification events, while others represent putative chemically modified sites of unknown types. PMID:23093720

  7. Base Rates: Both Neglected and Intuitive

    ERIC Educational Resources Information Center

    Pennycook, Gordon; Trippas, Dries; Handley, Simon J.; Thompson, Valerie A.

    2014-01-01

    Base-rate neglect refers to the tendency for people to underweight base-rate probabilities in favor of diagnostic information. It is commonly held that base-rate neglect occurs because effortful (Type 2) reasoning is required to process base-rate information, whereas diagnostic information is accessible to fast, intuitive (Type 1) processing…

  8. A process-based model to estimate gas exchange and monoterpene emission rates in the mediterranean maquis - comparisons between modelled and measured fluxes at different scales

    NASA Astrophysics Data System (ADS)

    Vitale, M.; Matteucci, G.; Fares, S.; Davison, B.

    2009-02-01

    This paper concerns the application of a process-based model (MOCA, Modelling of Carbon Assessment) as a tool for estimating gas exchange, integrating empirical algorithms for the calculation of monoterpene fluxes, in a Mediterranean maquis of central Italy (Castelporziano, Rome). Simulations were carried out for a range of hypothetical but realistic canopies of the evergreen Quercus ilex (holm oak), Arbutus unedo (strawberry tree) and Phillyrea latifolia. Moreover, the dependence of canopy-scale monoterpene fluxes on total leaf area and leaf distribution has been incorporated in the algorithms. Simulation of the gas exchange rates showed higher values for P. latifolia and A. unedo (2.39±0.30 and 3.12±0.27 gC m^-2 d^-1, respectively) than for Q. ilex (1.67±0.08 gC m^-2 d^-1) during the measuring campaign (May-June). Average Gross Primary Production (GPP) values agreed well with those measured by eddy covariance (7.98±0.20 and 6.00±1.46 gC m^-2 d^-1, respectively, in May-June), although differences of about 30% were evident in a point-to-point comparison. These differences could be explained by the non-uniformity of the measuring site, where diurnal winds blowing from the S-SW affect the calculation of CO2 and water fluxes. The introduction of structural parameters into the monoterpene algorithms made it possible to simulate monoterpene emission rates and fluxes in accord with those measured (6.50±2.25 vs. 9.39±4.5 μg g^-1 DW h^-1 for Q. ilex, and 0.63±0.207 vs. 0.98±0.30 μg g^-1 DW h^-1 for P. latifolia). Some constraints of the MOCA model are discussed, but it proves a useful tool for simulating physiological processes and BVOC fluxes under very complicated plant distributions and environmental conditions, while requiring relatively few input data.
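    Empirical monoterpene algorithms of the kind integrated here are typically Guenther-type temperature corrections, in which emission scales exponentially with leaf temperature around a 30 degC standard. The form below is the widely used literature formulation (with the common beta = 0.09 K^-1), not necessarily the exact algorithm in MOCA:

```python
import math

def monoterpene_emission(es, temp_c, beta=0.09, ts_c=30.0):
    """Guenther-type monoterpene emission sketch.
    es:     standard emission factor at ts_c (e.g. ug g^-1 DW h^-1)
    temp_c: leaf temperature in degC
    beta:   empirical temperature coefficient (K^-1)
    Returns the temperature-corrected emission rate."""
    return es * math.exp(beta * (temp_c - ts_c))
```

    Canopy-scale fluxes are then obtained by multiplying such leaf-level rates by foliar biomass and integrating over the leaf-area and leaf-temperature distribution of the canopy, which is where the structural parameters mentioned in the abstract enter.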

  9. [A model-based meta-analysis to compare urate-lowering response rate of febuxostat and allopurinol in gout patient].

    PubMed

    Sun, Yi; Li, Liang; Zhou, Tian-Yan; Lu, Wei

    2014-12-01

    This study compared the urate-lowering response rates of febuxostat and allopurinol in gout patients using a model-based meta-analysis. The literature search identified 22 clinical trials of gout with a total of 43 unique treatment arms that met our inclusion criteria, covering a total of 6365 gout patients. The response rates of allopurinol and febuxostat were characterized by a Tmax model and an Emax model, respectively, and the effects of baseline serum uric acid (sUA) and patient type on the drug effect were tested. The results showed that allopurinol reaches an average maximum response rate of 50.8%, while febuxostat can reach a 100% response rate within a very short time, with an ED50 of 34.3 mg. Covariate analysis revealed that baseline sUA has a negative effect on the response rate of allopurinol and a positive effect on the predicted ED50 of febuxostat. For patients who had shown an inadequate response to prior allopurinol treatment, the average response rate was about half that of allopurinol-responder patients. PMID:25920196
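    The Emax dose-response form used for febuxostat, together with a brute-force ED50 estimate, can be sketched as follows. This is illustrative only: E0 and Emax are fixed for simplicity, and the meta-analysis itself used proper nonlinear model fitting rather than a grid of candidates.

```python
def emax_response(dose, ed50, e_max=1.0):
    """Emax model: response rises hyperbolically with dose and reaches
    half of e_max at dose = ED50."""
    return e_max * dose / (ed50 + dose)

def fit_ed50(doses, responses, candidates):
    """Brute-force least-squares estimate of ED50 over candidate values."""
    return min(candidates,
               key=lambda c: sum((emax_response(d, c) - r) ** 2
                                 for d, r in zip(doses, responses)))
```

    With the reported ED50 of 34.3 mg, a 40 mg dose already yields a predicted response rate above 50% of the maximum, which is consistent with the rapid response the abstract attributes to febuxostat.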

  10. Rate my data: a hierarchical approach to quantifying the relative value of ecological data for the development of process-based models of the terrestrial carbon cycle

    NASA Astrophysics Data System (ADS)

    Keenan, T. F.; Richardson, A. D.; Davidson, E. A.; Munger, J. W.

    2011-12-01

    The proliferation of ecological observation networks over the past two decades has led to the accumulation of large amounts of data at different spatial and temporal scales. Process-based models of the terrestrial carbon cycle have been adopted as the most effective way of scaling this point-based information through space and time. Given the large amounts of data available, model developers have begun to update the statistical and analytical tools they use, relying more heavily on techniques such as data mining and model-data fusion. Such techniques are useful in that they can synchronously use all available measurements to give a more complete integration of models with data, shedding light on model weaknesses and highlighting model aspects in need of further development. Although modelers and organizers of measurement campaigns are focused on similar questions of terrestrial carbon cycling, cooperative efforts between the two are rare. Modelers generally use a limited set of measurements, with large assumptions as to which measurements are most effective in reducing uncertainty in model projections. On the other hand, those involved in field work are often motivated by hypothesis-driven science, and commonly do not have information as to which measurements would be most useful for modelers. The lack of information flow between the two communities is clearly sub-optimal. Here we address this problem by providing a hierarchical rating of the value of different data sources for reducing uncertainty in model estimates of terrestrial carbon cycling. We do so using a model-data fusion framework to iteratively integrate different data streams (both real data from Harvard Forest, MA, USA, and synthetic data) with a process-based model of terrestrial carbon cycling. At each stage, the data source that leads to the greatest reduction in uncertainty in model projections is retained, and the additional benefit of each remaining data stream is tested independently.
This process is

  11. A physics-based two-dimensional comprehensive mathematical model to predict non-uniform regression rate in solid fuels for hybrid rocket motors

    NASA Astrophysics Data System (ADS)

    Antoniou, Antonis

    A numerical study using a comprehensive physics-based mathematical model is conducted to predict the fuel regression rate in hybrid rocket fuels. The physical model adopted for the study is based on an unsteady, two-domain concept (solid fuel and gaseous oxidizer coupled through a moving interface) in which both domains are assumed to be two-dimensional. The oxidizer gas flow is assumed to be compressible and turbulent under the Navier-Stokes assumptions. Radiative heat transfer is incorporated into the energy equation for the gas domain using the Rosseland diffusion approximation. The fuel is assumed to be a nontransparent isotropic solid. The two domains are coupled through an energy balance at the interface that includes heat transfer due to radiation, conduction, and ablation. The regression rate of the fuel surface due to ablation is modeled using a first-order Arrhenius equation. The combustion of the ablated fuel is modeled by a single-step, three-species chemical reaction equation of second-order Arrhenius type. The governing differential equations of the present model are solved by first transforming the solution domain using a time- and space-dependent transformation. In the gas domain, the transformed set of differential equations is discretized by a fully implicit finite-difference technique and then linearized using Newton's linearization method. The resulting set of algebraic equations is solved by the Coupled Modified Strongly Implicit Procedure (CMSIP) for the primitive variables of the problem. The solution algorithm and the CMSIP developed for this study are validated through two benchmark cases: driven cavity flow and flow through a channel. Furthermore, the results of the comprehensive model are compared with those of the parabolic incompressible model. Finally, the proposed comprehensive mathematical model is used to predict the unsteady temperature and pressure distributions, and the velocity field in the gas
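    The first-order Arrhenius surface regression law named in the abstract can be sketched as follows. The pre-exponential factor and activation energy below are hypothetical placeholders; the paper's fuel-specific constants are not given here.

```python
import math

def arrhenius_regression_rate(T_surface, A=1.0e4, E_a=1.2e5, R=8.314):
    """First-order Arrhenius surface regression rate: r = A * exp(-E_a / (R * T)).

    A (pre-exponential factor) and E_a (activation energy, J/mol) are
    hypothetical illustrative values, not the paper's fitted constants.
    """
    return A * math.exp(-E_a / (R * T_surface))

# The rate rises steeply with surface temperature:
for T in (600.0, 800.0, 1000.0):
    print(f"T = {T:.0f} K -> r = {arrhenius_regression_rate(T):.3e}")
```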

  12. Rating Scale Analysis with Latent Class Models.

    ERIC Educational Resources Information Center

    Rost, Jurgen

    1988-01-01

    A general approach for analyzing rating data with latent class models is described, paralleling rating models in the framework of latent trait theory. A general rating model and a two-parameter model with location and dispersion parameters are derived and illustrated. (Author/SLD)

  13. Influence of the formation- and passivation rate of boron-oxygen defects for mitigating carrier-induced degradation in silicon within a hydrogen-based model

    NASA Astrophysics Data System (ADS)

    Hallam, Brett; Abbott, Malcolm; Nampalli, Nitin; Hamer, Phill; Wenham, Stuart

    2016-02-01

    A three-state model is used to explore the influence of defect formation and passivation rates on carrier-induced degradation related to boron-oxygen complexes in boron-doped p-type silicon solar cells within a hydrogen-based model. The model highlights that the inability to effectively mitigate carrier-induced degradation at elevated temperatures in previous studies is due to the limited availability of defects for hydrogen passivation, rather than being limited by the defect passivation rate. An acceleration of the defect formation rate is also observed to increase both the effectiveness and the speed of carrier-induced degradation mitigation, whereas increases in the passivation rate do not substantially accelerate the hydrogen passivation process. For high-throughput mitigation of such carrier-induced degradation on finished solar cell devices, two key factors were found to be required: high-injection conditions (such as high-intensity illumination) to accelerate defect formation whilst simultaneously enabling rapid passivation of the formed defects, and a high temperature to accelerate both defect formation and defect passivation whilst still ensuring effective mitigation of carrier-induced degradation.
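    The three-state kinetics described above (defect precursor, active defect, hydrogen-passivated defect) can be sketched with a simple forward-Euler integration. The rate constants below are hypothetical, not the paper's fitted values; the sketch merely illustrates the abstract's observation that faster defect formation speeds up net passivation.

```python
def three_state_kinetics(k_form, k_pass, t_end=1000.0, dt=0.01):
    """Forward-Euler integration of a minimal three-state defect model:
    precursor -> active defect (k_form) -> hydrogen-passivated (k_pass).

    State fractions satisfy N_pre + N_def + N_pas = 1. Rate constants
    (1/s) are hypothetical illustrative values.
    """
    n_pre, n_def, n_pas = 1.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        formed = k_form * n_pre * dt
        passivated = k_pass * n_def * dt
        n_pre -= formed
        n_def += formed - passivated
        n_pas += passivated
        t += dt
    return n_pre, n_def, n_pas

# Faster defect formation yields a larger passivated fraction in the same time:
slow = three_state_kinetics(k_form=0.001, k_pass=0.05)
fast = three_state_kinetics(k_form=0.01, k_pass=0.05)
print(f"passivated fraction: slow formation {slow[2]:.2f}, fast formation {fast[2]:.2f}")
```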

  14. Disturbance Distance: Using a process based ecosystem model to estimate and map potential thresholds in disturbance rates that would give rise to fundamentally altered ecosystems

    NASA Astrophysics Data System (ADS)

    Dolan, K. A.; Hurtt, G. C.; Fisk, J.; Flanagan, S.; LePage, Y.; Sahajpal, R.

    2014-12-01

    Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to: carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. As recent studies highlight novel disturbance regimes resulting from pollution, invasive pests and climate change, there is a need to include these alterations in predictions of future forest function and structure. The Ecosystem Demography (ED) model is a mechanistic model of forest ecosystem dynamics in which individual-based forest dynamics can be efficiently implemented over regional to global scales due to advanced scaling methods. We utilize ED to characterize the sensitivity of potential vegetation structure and function to changes in rates of density-independent mortality. Disturbance rate within ED can either be altered directly or through the development of sub-models. Disturbance sub-models in ED currently include fire, land use and hurricanes. We use a tiered approach to understand the sensitivity of North American ecosystems to changes in background density-independent mortality. Our first analyses were conducted at half-degree spatial resolution with a constant rate of disturbance in space and time, which was altered between runs. Annual climate was held constant at the site level and the land use and fire sub-models were turned off. Results showed an ~30% increase in non-forest area across the US when disturbance rates were changed from 0.6% a year to 1.2% a year, and a more than 3.5-fold increase in non-forest area when disturbance rates doubled again from 1.2% to 2.4%. Continued runs altered natural background disturbance rates with the existing fire and hurricane sub-models turned on, as well as historic and future land use. By quantifying differences between model outputs that characterize ecosystem structure and function related to the carbon cycle across the US, we

  15. A flexible cure rate model for spatially correlated survival data based on generalized extreme value distribution and Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Dey, Dipak K

    2016-09-01

    Our present work proposes a new survival model in a Bayesian context to analyze right-censored survival data for populations with a surviving fraction, assuming that the log failure time follows a generalized extreme value distribution. Many applications require more flexible modeling of covariate information than a simple linear or parametric form for all covariate effects. It is also necessary to include spatial variation in the model, since it is sometimes unexplained by the covariates considered in the analysis. Therefore, the nonlinear covariate effects and the spatial effects are incorporated into the systematic component of our model. Gaussian processes (GPs) provide a natural framework for modeling potentially nonlinear relationships and have recently become extremely powerful in nonlinear regression. Our proposed model adopts a semiparametric Bayesian approach by imposing a GP prior on the nonlinear structure of continuous covariates. In consideration of data availability and computational complexity, a conditionally autoregressive distribution is placed on the region-specific frailties to handle spatial correlation. The flexibility and gains of our proposed model are illustrated through analyses of simulated data examples as well as a dataset involving a colon cancer clinical trial from the state of Iowa. PMID:27225466

  16. Gaussian Mixture Model of Heart Rate Variability

    PubMed Central

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of the sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons were also made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters. PMID:22666386
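    The idea of modelling HRV as a linear combination of Gaussians can be sketched with a textbook EM fit of a one-dimensional Gaussian mixture. This is not the authors' implementation; the three-component synthetic data below are invented to stand in for RR-interval samples, and all parameter values are assumptions.

```python
import numpy as np

def em_gmm_1d(x, k=3, iters=200, seed=0):
    """Fit a k-component 1-D Gaussian mixture to samples x via the EM algorithm."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)   # initialize means from the data
    var = np.full(k, np.var(x))                 # broad initial variances
    w = np.full(k, 1.0 / k)                     # uniform initial weights
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic "RR-interval-like" data drawn from three Gaussians (seconds):
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.7, 0.02, 500),
                    rng.normal(0.9, 0.03, 300),
                    rng.normal(1.1, 0.04, 200)])
w, mu, var = em_gmm_1d(x)
print("weights:", np.round(np.sort(w), 2))
print("means:  ", np.round(np.sort(mu), 2))
```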

  17. QSAR models for oxidation of organic micropollutants in water based on ozone and hydroxyl radical rate constants and their chemical classification.

    PubMed

    Sudhakaran, Sairam; Amy, Gary L

    2013-03-01

    Ozonation is an oxidation process for the removal of organic micropollutants (OMPs) from water, and the chemical reaction is governed by second-order kinetics. An advanced oxidation process (AOP), wherein hydroxyl radicals (OH radicals) are generated, is more effective at removing a wider range of OMPs from water than direct ozonation. Second-order rate constants (k(OH) and k(O3)) are good indices for estimating oxidation efficiency, where higher rate constants indicate more rapid oxidation. In this study, quantitative structure-activity relationship (QSAR) models for O3 and AOP processes were developed, and the rate constants k(OH) and k(O3) were predicted based on target compound properties. The k(O3) and k(OH) values ranged from 5 x 10^-4 to 10^5 M^-1 s^-1 and 0.04 to 18 x 10^9 M^-1 s^-1, respectively. Several molecular descriptors which potentially influence O3 and OH radical oxidation were identified and studied. The QSAR-defining descriptors were double-bond equivalence (DBE), ionisation potential (IP), electron affinity (EA) and the weakly-polar component of solvent-accessible surface area (WPSA), and the chemical and statistical significance of these descriptors was discussed. Multiple linear regression was used to build the QSAR models, resulting in high goodness-of-fit, r^2 (>0.75). The models were validated by internal and external validation along with residual plots. PMID:23260175
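    The multiple-linear-regression step of such a QSAR model can be sketched as follows. The descriptor values (DBE, IP, EA, WPSA) and the log k_OH responses below are invented for illustration only; the paper's actual training set is not reproduced.

```python
import numpy as np

# Hypothetical descriptor matrix: columns are DBE, IP (eV), EA (eV), WPSA (A^2)
# for five training compounds, with log10 k_OH as the response. All values
# are invented for illustration.
X = np.array([
    [4.0, 8.5, 0.9, 118.0],
    [6.0, 9.1, 1.2, 152.0],
    [2.0, 7.8, 0.5,  88.0],
    [5.0, 8.9, 1.0, 133.0],
    [3.0, 8.2, 0.7, 109.0],
])
y = np.array([9.8, 9.2, 10.1, 9.5, 9.9])   # log10 k_OH (hypothetical)

# Multiple linear regression via least squares, with an intercept column:
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Goodness-of-fit (r^2) on the training set:
y_hat = A @ coef
ss_res = ((y - y_hat) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
print("coefficients:", np.round(coef, 3))
print(f"r^2 = {r2:.3f}")
```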

  18. Fetal heart rate classification using generative models.

    PubMed

    Dash, Shishir; Quirk, J Gerald; Djurić, Petar M

    2014-11-01

    This paper presents novel methods for classification of fetal heart rate (FHR) signals into categories that are meaningful for clinical implementation. They are based on generative models (GMs) and Bayesian theory. Instead of using scalar features that summarize information obtained from long-duration data, the models allow for explicit use of feature sequences derived from local patterns of FHR evolution. We compare our methods with a deterministic expert system for classification and with a support vector machine approach that relies on system-identification and heart rate variability features. We tested the classifiers on 83 retrospectively collected FHR records, with the gold-standard true diagnosis defined using umbilical cord pH values. We found that our methods consistently performed as well as or better than these, suggesting that the use of GMs and the Bayesian paradigm can bring significant improvement to automatic FHR classification approaches. PMID:24951678

  19. The Potential Geographical Distribution of Bactrocera dorsalis (Diptera: Tephritidae) in China Based on an Emergence Rate Model and ArcGIS

    NASA Astrophysics Data System (ADS)

    Wang, Ningbo; Li, Zhihong; Wu, Jiajiao; Rajotte, Edwin G.; Wan, Fanghao; Wang, Zhiling

    Precision agriculture is an important choice for the future of agriculture. Gradually changing the state of small-scale farmland production and the weak agricultural foundation in China is the basis for developing precision agriculture. Considering the poverty of villages in China, the variation of farmland and the dominance of the small-scale peasant economy, this paper analyzes the adaptability of farmland landscape patterns to precision agriculture based on the literature and a farmland landscape survey. Following the requirements of precision agricultural production, the paper puts forward standards on cultivated field scale and shape, farmland corridor structure, cultivated field matrix and farmland landscape protection, in order to make farmland landscapes suitable for precision agriculture and to provide references for the sustainable development of precision agriculture in China.

  20. An "Emergent Model" for Rate of Change

    ERIC Educational Resources Information Center

    Herbert, Sandra; Pierce, Robyn

    2008-01-01

    Does speed provide a "model for" rate of change in other contexts? Does JavaMathWorlds (JMW), animated simulation software, assist in the development of the "model for" rate of change? This project investigates the transference of understandings of rate gained in a motion context to a non-motion context. Students were 27 14-15 year old students at…

  1. Base Rates, Contingencies, and Prediction Behavior

    ERIC Educational Resources Information Center

    Kareev, Yaakov; Fiedler, Klaus; Avrahami, Judith

    2009-01-01

    A skew in the base rate of upcoming events can often provide a better cue for accurate predictions than a contingency between signals and events. The authors study prediction behavior and test people's sensitivity to both base rate and contingency; they also examine people's ability to compare the benefits of both for prediction. They formalize…

  2. Geodesy-based estimates of loading rates on faults beneath the Los Angeles basin with a new, computationally efficient method to model dislocations in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Rollins, C.; Argus, D. F.; Avouac, J. P.; Landry, W.; Barbot, S.

    2015-12-01

    North-south compression across the Los Angeles basin is accommodated by slip on thrust faults beneath the basin that may present significant seismic hazard to Los Angeles. Previous geodesy-based efforts to constrain the distributions and rates of elastic strain accumulation on these faults [Argus et al 2005, 2012] have found that the elastic model used has a first-order impact on the inferred distribution of locking and creep, underlining the need to accurately incorporate the laterally heterogeneous elastic structure and complex fault geometries of the Los Angeles basin into this analysis. We are using Gamra [Landry and Barbot, in prep.], a newly developed adaptive-meshing finite-difference solver, to compute elastostatic Green's functions that incorporate the full 3D regional elastic structure provided by the SCEC Community Velocity Model. Among preliminary results from benchmarks, forward models and inversions, we find that: 1) for a modeled creep source on the edge dislocation geometry from Argus et al [2005], the use of the SCEC CVM material model produces surface velocities in the hanging wall that are up to ~50% faster than those predicted in an elastic halfspace model; 2) in sensitivity-modulated inversions of the Argus et al [2005] GPS velocity field for slip on the same dislocation source, the use of the CVM deepens the inferred locking depth by ≥3 km compared to an elastic halfspace model; 3) when using finite-difference or finite-element models with Dirichlet boundary conditions (except for the free surface) for problems of this scale, it is necessary to set the boundaries at least ~100 km away from any slip source or data point to guarantee convergence within 5% of analytical solutions (a result which may be applicable to other static dislocation modeling problems and which may scale with the size of the area of interest). Here we will present finalized results from inversions of an updated GPS velocity field [Argus et al, AGU 2015] for the inferred

  3. Associations of Perfluoroalkyl Substances (PFAS) with Lower Birth Weight: An Evaluation of Potential Confounding by Glomerular Filtration Rate Using a Physiologically Based Pharmacokinetic Model (PBPK)

    PubMed Central

    Loccisano, Anne E.; Morken, Nils-Halvdan; Yoon, Miyoung; Wu, Huali; McDougall, Robin; Maisonet, Mildred; Marcus, Michele; Kishi, Reiko; Miyashita, Chihiro; Chen, Mei-Huei; Hsieh, Wu-Shiun; Andersen, Melvin E.; Clewell, Harvey J.; Longnecker, Matthew P.

    2015-01-01

    Background Prenatal exposure to perfluoroalkyl substances (PFAS) has been associated with lower birth weight in epidemiologic studies. This association could be attributable to glomerular filtration rate (GFR), which is related to PFAS concentration and birth weight. Objectives We used a physiologically based pharmacokinetic (PBPK) model of pregnancy to assess how much of the PFAS–birth weight association observed in epidemiologic studies might be attributable to GFR. Methods We modified a PBPK model to reflect the association of GFR with birth weight (estimated from three studies of GFR and birth weight) and used it to simulate PFAS concentrations in maternal and cord plasma. The model was run 250,000 times, with variation in parameters, to simulate a population. Simulated data were analyzed to evaluate the association between PFAS levels and birth weight due to GFR. We compared simulated estimates with those from a meta-analysis of epidemiologic data. Results The reduction in birth weight for each 1-ng/mL increase in simulated cord plasma for perfluorooctane sulfonate (PFOS) was 2.72 g (95% CI: –3.40, –2.04), and for perfluorooctanoic acid (PFOA) was 7.13 g (95% CI: –8.46, –5.80); results based on maternal plasma at term were similar. Results were sensitive to variations in PFAS level distributions and the strength of the GFR–birth weight association. In comparison, our meta-analysis of epidemiologic studies suggested that each 1-ng/mL increase in prenatal PFOS and PFOA levels was associated with 5.00 g (95% CI: –21.66, –7.78) and 14.72 g (95% CI: –8.92, –1.09) reductions in birth weight, respectively. Conclusion Results of our simulations suggest that a substantial proportion of the association between prenatal PFAS and birth weight may be attributable to confounding by GFR and that confounding by GFR may be more important in studies with sample collection later in pregnancy. Citation Verner MA, Loccisano AE, Morken NH, Yoon M, Wu H, Mc

  4. SEE Rate Estimation: Model Complexity and Data Requirements

    NASA Technical Reports Server (NTRS)

    Ladbury, Ray

    2008-01-01

    Statistical methods outlined in [Ladbury, TNS 2007] can be generalized for Monte Carlo rate calculation methods. Two Monte Carlo approaches: a) rate based on a vendor-supplied (or reverse-engineered) model, with SEE testing and statistical analysis performed to validate the model; b) rate calculated based on a model fit to SEE data, with statistical analysis very similar to the case for CREME96. Information theory allows simultaneous consideration of multiple models with different complexities: a) the model with the lowest AIC usually has the greatest predictive power; b) model averaging using AIC weights may give better performance if several models have similarly good performance; and c) rates can be bounded for a given confidence level over multiple models, as well as over the parameter space of a model.
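    The AIC comparison and AIC-weight averaging mentioned above reduce to a small calculation. The log-likelihoods, parameter counts and model names below are hypothetical; the helper functions are ours, not from the talk.

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: AIC = 2k - 2 ln L."""
    return 2 * k - 2 * log_likelihood

def akaike_weights(aics):
    """Relative model weights from a list of AIC values."""
    a_min = min(aics)
    rel = [math.exp(-0.5 * (a - a_min)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Three hypothetical SEE-rate models of different complexity
# (name: (maximized log-likelihood, number of parameters)):
models = {"2-param Weibull": (-52.1, 2),
          "4-param Weibull": (-50.8, 4),
          "lognormal":       (-53.0, 2)}
aics = {name: aic(ll, k) for name, (ll, k) in models.items()}
weights = akaike_weights(list(aics.values()))
for (name, a), w in zip(aics.items(), weights):
    print(f"{name}: AIC = {a:.1f}, weight = {w:.2f}")
```

    The weights sum to one and can be used directly for model-averaged rate estimates.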

  5. Modeling temporal sequences of cognitive state changes based on a combination of EEG-engagement, EEG-workload, and heart rate metrics.

    PubMed

    Stikic, Maja; Berka, Chris; Levendowski, Daniel J; Rubio, Roberto F; Tan, Veasna; Korszen, Stephanie; Barba, Douglas; Wurzer, David

    2014-01-01

    The objective of this study was to investigate the feasibility of physiological metrics such as ECG-derived heart rate and EEG-derived cognitive workload and engagement as potential predictors of performance on different training tasks. An unsupervised approach based on self-organizing neural network (NN) was utilized to model cognitive state changes over time. The feature vector comprised EEG-engagement, EEG-workload, and heart rate metrics, all self-normalized to account for individual differences. During the competitive training process, a linear topology was developed where the feature vectors similar to each other activated the same NN nodes. The NN model was trained and auto-validated on combat marksmanship training data from 51 participants that were required to make "deadly force decisions" in challenging combat scenarios. The trained NN model was cross validated using 10-fold cross-validation. It was also validated on a golf study in which additional 22 participants were asked to complete 10 sessions of 10 putts each. Temporal sequences of the activated nodes for both studies followed the same pattern of changes, demonstrating the generalization capabilities of the approach. Most node transition changes were local, but important events typically caused significant changes in the physiological metrics, as evidenced by larger state changes. This was investigated by calculating a transition score as the sum of subsequent state transitions between the activated NN nodes. Correlation analysis demonstrated statistically significant correlations between the transition scores and subjects' performances in both studies. This paper explored the hypothesis that temporal sequences of physiological changes comprise the discriminative patterns for performance prediction. These physiological markers could be utilized in future training improvement systems (e.g., through neurofeedback), and applied across a variety of training environments. PMID:25414629
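    The unsupervised approach described above can be sketched with a minimal self-organizing map on a linear node topology. This is an illustrative reimplementation, not the authors' code; the two synthetic "cognitive states" and all parameter values are assumptions.

```python
import math
import random

def train_som_1d(data, n_nodes=5, epochs=50, lr0=0.5, radius0=2.0, seed=0):
    """Minimal self-organizing map with a linear (1-D) node topology.

    Each data point is a feature vector (e.g. EEG-engagement, EEG-workload,
    heart rate, self-normalized); nodes compete for each sample and the
    winner's neighbors are pulled toward it.
    """
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                  # decaying learning rate
        radius = max(radius0 * (1 - epoch / epochs), 0.5)  # shrinking neighborhood
        for x in data:
            win = min(range(n_nodes),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i], x)))
            for i in range(n_nodes):
                h = math.exp(-((i - win) ** 2) / (2 * radius ** 2))
                nodes[i] = [a + lr * h * (b - a) for a, b in zip(nodes[i], x)]
    return nodes

def activate(nodes, x):
    """Index of the node activated by feature vector x."""
    return min(range(len(nodes)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i], x)))

# Two synthetic "cognitive states" in (engagement, workload, heart-rate) space:
low = [[0.2, 0.2, 0.3] for _ in range(20)]
high = [[0.8, 0.9, 0.7] for _ in range(20)]
som = train_som_1d(low + high)
print("low-state node: ", activate(som, [0.2, 0.2, 0.3]))
print("high-state node:", activate(som, [0.8, 0.9, 0.7]))
```

    State transitions over time can then be tracked as changes in the index of the activated node, mirroring the transition scores described in the abstract.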

  7. Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.

    PubMed

    Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth

    2015-01-01

    This paper aimed to investigate the volatility and conditional relationships among inflation rates, exchange rates and interest rates, and to construct a model using multivariate GARCH DCC and BEKK models with Ghanaian data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi against the US dollar from 1990 to 2013 was 7,010.2%, and the yearly weighted depreciation of the cedi against the US dollar for the period was 20.4%. There was evidence that a stable inflation rate does not imply that exchange rates and interest rates will be stable. Rather, when the cedi performs well on the forex market, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates. The DCC model is robust for modelling the conditional and unconditional correlations among inflation rates, exchange rates and interest rates. The BEKK model, which forecast high exchange rate volatility for 2014, is very robust for modelling exchange rates in Ghana. The mean equation of the DCC model is also robust for forecasting inflation rates in Ghana. PMID:25741459
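    The multivariate BEKK/DCC machinery is beyond a short sketch, but the univariate GARCH(1,1) volatility recursion it builds on can be illustrated directly. The parameters below are illustrative only, not fitted to the Ghanaian series.

```python
import math
import random

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85, seed=42):
    """Simulate returns from a GARCH(1,1) process:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.

    Parameters are hypothetical; alpha + beta < 1 ensures stationarity.
    """
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)   # start at the unconditional variance
    returns, variances = [], []
    for _ in range(n):
        r = math.sqrt(sigma2) * rng.gauss(0, 1)
        returns.append(r)
        variances.append(sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return returns, variances

returns, variances = simulate_garch11(5000)
mean_var = sum(variances) / len(variances)
print(f"unconditional variance (theory): {0.05 / (1 - 0.1 - 0.85):.2f}")
print(f"average simulated variance:      {mean_var:.2f}")
```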

  8. Rate-based modeling of reactive absorption of CO2 and H2S into aqueous methyldiethanolamine

    SciTech Connect

    Pacheco, M.A.; Rochelle, G.T.

    1998-10-01

    A general framework was developed to model the transport processes that take place during reactive absorption when both rate- and equilibrium-controlled reactions occur in the liquid phase. This framework was applied to the selective absorption of H2S from fuel gas containing CO2 using aqueous methyldiethanolamine. A rate-based distillation column module was used for the column integration. The Maxwell-Stefan and enhancement-factor theories were utilized. In packed columns, CO2 absorption is controlled by diffusion with fast chemical reactions; in trayed columns it is controlled primarily by physical absorption. Gas-film resistance is never significant for CO2 absorption. For H2S absorption, gas- and liquid-film resistances are important, and diffusion of bisulfide controls the liquid-film resistance. Heat effects produce temperature bulges that can cause equilibrium pinches at the maximum temperature. This phenomenon gives an optimum packing height for H2S removal. Trayed columns are more selective than packed columns for H2S removal, primarily because of the larger number of liquid-film mass transfer units.

  9. Rate control algorithm based on frame complexity estimation for MVC

    NASA Astrophysics Data System (ADS)

    Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang

    2010-07-01

    Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC that improves the quadratic rate-distortion (R-D) model and reasonably allocates bit rate among views based on correlation analysis. The proposed algorithm consists of four levels for more accurate bit-rate control, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm can efficiently implement bit allocation and rate control according to coding parameters.
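    The quadratic rate-distortion model at the core of such rate control schemes can be sketched as follows. The model coefficients a and b, the GOP budget and the complexity weights are hypothetical, and the paper's MVC-specific allocation rules are not reproduced.

```python
import math

def target_qstep(target_bits, a, b):
    """Solve the quadratic R-D model R = a/Q + b/Q^2 for the quantizer step Q.

    Substituting x = 1/Q gives b*x^2 + a*x - R = 0, whose positive root is
    x = (-a + sqrt(a^2 + 4*b*R)) / (2*b). Coefficients a, b would normally
    be updated from past frames; here they are illustrative constants.
    """
    inv_q = (-a + math.sqrt(a * a + 4.0 * b * target_bits)) / (2.0 * b)
    return 1.0 / inv_q

def allocate_bits(frame_budget, complexities):
    """Frame-layer allocation in proportion to estimated frame complexity."""
    total = sum(complexities)
    return [frame_budget * c / total for c in complexities]

# Allocate a GOP budget across frames of varying complexity, then map each
# frame target to a quantizer step (a, b hypothetical):
targets = allocate_bits(120_000, [1.0, 2.5, 1.5])
for t in targets:
    q = target_qstep(t, a=2.0e4, b=4.0e6)
    print(f"target {t:>9.0f} bits -> Q = {q:.2f}")
```

    As expected, frames with larger bit targets receive smaller quantizer steps.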

  10. Minority Utility Rate Design Assessment Model

    Energy Science and Technology Software Center (ESTSC)

    2003-01-20

    Econometric model simulates consumer demand response to various user-supplied, two-part tariff electricity rate designs and assesses their economic welfare impact on Black, Hispanic, poor and majority households.

  11. Single crystal plasticity by modeling dislocation density rate behavior

    SciTech Connect

    Hansen, Benjamin L; Bronkhorst, Curt; Beyerlein, Irene; Cerreta, E. K.; Dennis-Koller, Darcie

    2010-12-23

    The goal of this work is to formulate a constitutive model for the deformation of metals over a wide range of strain rates. Damage and failure of materials frequently occur at a variety of deformation rates within the same sample. The present state of the art in single-crystal constitutive models relies on thermally-activated models, which are believed to become less reliable for problems exceeding strain rates of 10^4 s^-1. This talk presents work in which we extend the applicability of the single-crystal model to the strain-rate region where dislocation drag is believed to dominate. The elastic model includes effects from volumetric change and pressure-sensitive moduli. The plastic model transitions from the low-rate thermally-activated regime to the high-rate drag-dominated regime. The direct use of dislocation density as a state parameter ties strain hardening to a measurable physical mechanism. Dislocation densities are separated according to type and given a systematic set of interaction rates adaptable by type. The form of the constitutive model is motivated by previously published dislocation dynamics work which articulated important behaviors unique to high-rate response in fcc systems. The proposed material model incorporates thermal coupling. The hardening model tracks the varying dislocation population with respect to each slip plane and computes the slip resistance based on those values. Comparisons can be made between the responses of single crystals and polycrystals at a variety of strain rates. The material model is fit to copper.

  12. Base-Rate Neglect as a Function of Base Rates in Probabilistic Contingency Learning

    ERIC Educational Resources Information Center

    Kutzner, Florian; Freytag, Peter; Vogel, Tobias; Fiedler, Klaus

    2008-01-01

    When humans predict criterion events based on probabilistic predictors, they often lend excessive weight to the predictor and insufficient weight to the base rate of the criterion event. In an operant analysis, using a matching-to-sample paradigm, Goodie and Fantino (1996) showed that humans exhibit base-rate neglect when predictors are associated…

  13. Development of experimental design approach and ANN-based models for determination of Cr(VI) ions uptake rate from aqueous solution onto the solid biodiesel waste residue.

    PubMed

    Shanmugaprakash, M; Sivakumar, V

    2013-11-01

    In the present work, two optimization methodologies, RSM and ANN, were employed and compared for prediction of the Cr(VI) uptake rate using defatted pongamia oil cake (DPOC) in both batch and column mode. The influence of operating parameters was investigated through a central composite design (CCD) of RSM using Design Expert 8.0.7.1 software. The same data were fed as input to an ANN to train multilayer feed-forward networks with a back-propagation algorithm in MATLAB. The performance of the developed ANN models was compared with the RSM mathematical models for Cr(VI) uptake rate in terms of the coefficient of determination (R(2)), root mean square error (RMSE) and absolute average deviation (AAD). The estimated values confirm that the ANN outperforms RSM, demonstrating the superiority of a trained ANN model over RSM models in capturing the non-linear behavior of the given system. PMID:24080294
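
    The three goodness-of-fit metrics named in the abstract are easy to compute directly. A minimal sketch in Python; the AAD form below (mean absolute percentage deviation) is one common definition and an assumption here, as the paper does not spell out its formula:

```python
import numpy as np

def fit_metrics(y_obs, y_pred):
    """Coefficient of determination R^2, root mean square error (RMSE),
    and absolute average deviation (AAD, %) between observations and
    model predictions."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_obs - y_pred
    r2 = 1.0 - np.sum(resid**2) / np.sum((y_obs - y_obs.mean())**2)
    rmse = np.sqrt(np.mean(resid**2))
    aad = 100.0 * np.mean(np.abs(resid / y_obs))  # assumes no zero observations
    return r2, rmse, aad
```

    Higher R² and lower RMSE/AAD indicate the better model, which is the basis of the RSM-versus-ANN comparison above.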

  14. Introducing AORN's new model for evidence rating.

    PubMed

    Spruce, Lisa; Van Wicklin, Sharon A; Hicks, Rodney W; Conner, Ramona; Dunn, Debra

    2014-02-01

    Nurses today are expected to implement evidence-based practices in the perioperative setting to assess and implement practice changes. All evidence-based practice begins with a question, a practice problem to address, or a needed change that is identified. To assess the question, a literature search is performed and relevant literature is identified and appraised. The types of evidence used to inform practice can be scientific research (eg, randomized controlled trials, systematic reviews) or nonresearch evidence (eg, regulatory and accrediting agency requirements, professional association practice standards and guidelines, quality improvement project reports). The AORN recommended practices are a synthesis of related knowledge on a given topic, and the authorship process begins with a systematic review of the literature conducted in collaboration with a medical librarian. At least two appraisers independently evaluate the applicable literature for quality and strength by using the AORN Research Appraisal Tool and AORN Non-Research Appraisal Tool. To collectively appraise the evidence supporting particular practice recommendations, the AORN recommended practices authors have implemented a new evidence rating model that is appropriate for research and nonresearch literature and that is relevant to the perioperative setting. PMID:24472588

  15. A generic high-dose rate ¹⁹²Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism

    SciTech Connect

    Ballester, Facundo; Carlsson Tedgren, Åsa; Granero, Domingo; Haworth, Annette; Mourtada, Firas; Fonseca, Gabriel Paiva; Rivard, Mark J.; Siebert, Frank-André; Sloboda, Ron S.; and others

    2015-06-15

    Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) ¹⁹²Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR ¹⁹²Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic ¹⁹²Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra® Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of 201³ voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR ¹⁹²Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by

  16. The relationship between specific absorption rate and temperature elevation in anatomically based human body models for plane wave exposure from 30 MHz to 6 GHz

    NASA Astrophysics Data System (ADS)

    Hirata, Akimasa; Laakso, Ilkka; Oizumi, Takuya; Hanatani, Ryuto; Chan, Kwok Hung; Wiart, Joe

    2013-02-01

    According to the international safety guidelines/standard, the whole-body-averaged specific absorption rate (SAR) (Poljak et al 2003 IEEE Trans. Electromagn. Compat. 45 141-5) and the peak spatial average SAR are used as metrics for human protection from whole-body and localized exposures, respectively. The IEEE standard (IEEE 2006 IEEE C95.1) indicates that the upper boundary frequency, over which the whole-body-averaged SAR is deemed to be the basic restriction, has been reduced from 6 to 3 GHz, because radio-wave energy is absorbed around the body surface when the frequency is increased. However, no quantitative discussion has been provided to support this description especially from the standpoint of temperature elevation. It is of interest to investigate the maximum temperature elevation in addition to the core temperature even for a whole-body exposure. In the present study, using anatomically based human models, we computed the SAR and the temperature elevation for a plane-wave exposure from 30 MHz to 6 GHz, taking into account the thermoregulatory response. As the primary result, we found that the ratio of the core temperature elevation to the whole-body-averaged SAR is almost frequency independent for frequencies below a few gigahertz; the ratio decreases above this frequency. At frequencies higher than a few gigahertz, core temperature elevation for the same whole-body averaged SAR becomes lower due to heat convection from the skin to air. This lower core temperature elevation is attributable to skin temperature elevation caused by the power absorption around the body surface. Then, core temperature elevation even for whole-body averaged SAR of 4 W kg⁻¹ with the duration of 1 h was at most 0.8 °C, which is smaller than a threshold considered in the safety guidelines/standard. Further, the peak 10 g averaged SAR is correlated with the maximum body temperature elevations without extremities and pinna over the frequencies considered. These findings were confirmed

  17. Modeling of Kashmir Aftershock Decay Based on Static Coulomb Stress Changes and Laboratory-Derived Rate-and-State Dependent Friction Law

    NASA Astrophysics Data System (ADS)

    Javed, F.; Hainzl, S.; Aoudia, A.; Qaisar, M.

    2016-05-01

    We model the spatial and temporal evolution of the October 8, 2005 Kashmir earthquake's aftershock activity using the rate-and-state dependent friction model, incorporating uncertainties in the computed coseismic stress perturbations. We estimated the best-fitting values of the frictional resistance Aσ_n, background seismicity rate r and coefficient of stress variation CV using the maximum log-likelihood method. For the whole Kashmir earthquake sequence, we measure a frictional resistance Aσ_n ≈ 0.0185 MPa, r ≈ 20 M3.7+ events/year and CV = 0.94 ± 0.01. The forecasted spatial and temporal seismicity rate of the modeled aftershocks fits well with the spatial and temporal distribution of observed aftershocks, which occurred in regions with positive static stress changes as well as in the apparent stress shadow region. To quantify the effect of secondary aftershock triggering, we re-ran the estimations for 100 stochastically declustered catalogs, showing that the effect of aftershock-induced secondary stress changes is minor compared with the overall uncertainties, and that the stress variability related to uncertain slip model inversions and receiver mechanisms remains the major factor in providing a reasonable data fit.

  18. Evolution of migration rate in a spatially realistic metapopulation model.

    PubMed

    Heino, M; Hanski, I

    2001-05-01

    We use an individual-based, spatially realistic metapopulation model to study the evolution of migration rate. We first explore the consequences of habitat change in hypothetical patch networks on a regular lattice. If the primary consequence of habitat change is an increase in local extinction risk as a result of decreased local population sizes, migration rate increases. A nonmonotonic response, with migration rate decreasing at high extinction rate, was obtained only by assuming very frequent catastrophes. If the quality of the matrix habitat deteriorates, leading to increased mortality during migration, the evolutionary response is more complex. As long as habitat patch occupancy does not decrease markedly with increased migration mortality, reduced migration rate evolves. However, once mortality becomes so high that empty patches remain uncolonized for a long time, evolution tends to increase migration rate, which may lead to an "evolutionary rescue" in a fragmented landscape. Kin competition has a quantitative effect on the evolution of migration rate in our model, but these patterns in the evolution of migration rate appear to be primarily caused by spatiotemporal variation in fitness and mortality during migration. We apply the model to real habitat patch networks occupied by two checkerspot butterfly (Melitaea) species, for which sufficient data are available to estimate rigorously most of the model parameters. The model-predicted migration rate is not significantly different from the empirically observed one. Regional variation in patch areas and connectivities leads to regional variation in the optimal migration rate, predictions that can be tested empirically. PMID:18707258

  19. Modeling heart rate variability by stochastic feedback

    NASA Technical Reports Server (NTRS)

    Amaral, L. A.; Goldberger, A. L.; Stanley, H. E.

    1999-01-01

    We consider the question of how the cardiac rhythm spontaneously self-regulates and propose a new mechanism as a possible answer. We model the neuroautonomic regulation of the heart rate as a stochastic feedback system and find that the model successfully accounts for key characteristics of cardiac variability, including the 1/f power spectrum, the functional form and scaling of the distribution of variations of the interbeat intervals, and the correlations in the Fourier phases which indicate nonlinear dynamics.
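
    The stochastic-feedback idea can be sketched in a few lines: the interbeat interval performs a noisy walk while being weakly pulled toward a set point that itself jumps at random times. This is an illustrative toy with made-up parameters, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ibi(n=5000, base=0.8, pull=0.05, noise=0.02, reset_prob=0.01):
    """Interbeat intervals (s) from a stochastic feedback loop: weak
    relaxation toward a set point, additive noise, and occasional random
    resets of the set point (standing in for neuroautonomic inputs)."""
    x = np.empty(n)
    x[0] = base
    target = base
    for i in range(1, n):
        if rng.random() < reset_prob:      # set point jumps at random times
            target = base + rng.normal(0.0, 0.1)
        x[i] = x[i - 1] + pull * (target - x[i - 1]) + rng.normal(0.0, noise)
    return x
```

    The competition between the restoring pull and the random resets is what generates the broad, correlated fluctuations that the full model uses to reproduce the 1/f spectrum.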

  20. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
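
    The long-memory part of an ARFIMA fit reduces to fractional differencing, (1 − B)^d, applied through its binomial expansion. A sketch of that single step (a full ARFIMA-GARCH estimation would use a dedicated statistics package):

```python
import numpy as np

def frac_diff(x, d, n_weights=100):
    """Fractional difference (1 - B)^d via its binomial expansion,
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k; for 0 < d < 0.5 this
    removes the long-memory component of the series."""
    w = np.empty(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.convolve(np.asarray(x, dtype=float), w, mode="valid")
```

    As a sanity check, d = 1 collapses the weights to (1, −1, 0, …), i.e. an ordinary first difference, and d = 0 leaves the series unchanged.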

  1. Acoustically based fetal heart rate monitor

    NASA Technical Reports Server (NTRS)

    Baker, Donald A.; Zuckerwar, Allan J.

    1991-01-01

    The acoustically based fetal heart rate monitor permits an expectant mother to perform the fetal Non-Stress Test in her home. The potential market would include the one million U.S. pregnancies per year requiring this type of prenatal surveillance. The monitor uses polyvinylidene fluoride (PVF2) piezoelectric polymer film for the acoustic sensors, which are mounted in a seven-element array on a cummerbund. Evaluation of the sensor output signals utilizes a digital signal processor, which performs a linear prediction routine in real time. Clinical tests reveal that the acoustically based monitor provides Non-Stress Test records which are comparable to those obtained with a commercial ultrasonic transducer.

  2. Actuarial models of life insurance with stochastic interest rate

    NASA Astrophysics Data System (ADS)

    Wei, Xiang; Hu, Ping

    2009-07-01

    On the basis of a general actuarial model of life insurance, this article studies continuous life insurance actuarial models under stochastic interest rates. It provides the net single premium for life insurance and for a life annuity due over a period, based on de Moivre's law of mortality and Makeham's law of mortality respectively.
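
    Under de Moivre's law the time to death is uniform on [0, ω − x], so with a constant force of interest the net single premium has a simple closed form that a numerical integration should reproduce. A sketch with illustrative parameters (the article itself works with stochastic interest rates; a constant δ is assumed here to keep the example checkable):

```python
import math

def nsp_de_moivre(x, omega, delta, steps=100000):
    """Net single premium for continuous whole life insurance under
    de Moivre's law and constant force of interest delta, by midpoint
    integration of e^(-delta t) f(t) with f uniform on [0, omega - x]."""
    n = omega - x
    h = n / steps
    total = sum(math.exp(-delta * (k + 0.5) * h) for k in range(steps))
    return total * h / n

def nsp_closed_form(x, omega, delta):
    """Closed form: A_x = (1 - e^(-delta n)) / (delta n), n = omega - x."""
    n = omega - x
    return (1.0 - math.exp(-delta * n)) / (delta * n)
```

    For example, with x = 40, ω = 100 and δ = 0.05 both routes give roughly 0.317 per unit of benefit.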

  3. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  4. Sphaleron rate in the minimal standard model.

    PubMed

    D'Onofrio, Michela; Rummukainen, Kari; Tranberg, Anders

    2014-10-01

    We use large-scale lattice simulations to compute the rate of baryon number violating processes (the sphaleron rate), the Higgs field expectation value, and the critical temperature in the standard model across the electroweak phase transition temperature. While there is no true phase transition between the high-temperature symmetric phase and the low-temperature broken phase, the crossover is sharp and located at temperature T_c = (159.5 ± 1.5) GeV. The sphaleron rate in the symmetric phase (T > T_c) is Γ/T⁴ = (18 ± 3)α_W⁵, and in the broken phase in the physically interesting temperature range 130 GeV < T < T_c it can be parametrized as log(Γ/T⁴) = (0.83 ± 0.01)T/GeV − (147.7 ± 1.9). The freeze-out temperature in the early Universe, where the Hubble rate wins over the baryon number violation rate, is T* = (131.7 ± 2.3) GeV. These values, beyond being intrinsic properties of the standard model, are relevant for, e.g., low-scale leptogenesis scenarios. PMID:25325629
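
    The quoted fits are easy to evaluate numerically. A sketch using the central values only, and assuming the logarithm in the broken-phase fit is natural and α_W ≈ 0.0334 (both assumptions on my part):

```python
import math

ALPHA_W = 0.0334  # weak coupling alpha_W, approximate value (assumption)

def rate_symmetric():
    """Gamma/T^4 in the symmetric phase: 18 * alpha_W^5 (central value)."""
    return 18.0 * ALPHA_W**5

def rate_broken(T_GeV):
    """Gamma/T^4 in the broken phase from the quoted fit,
    ln(Gamma/T^4) = 0.83*T/GeV - 147.7, valid roughly for 130 GeV < T < T_c."""
    return math.exp(0.83 * T_GeV - 147.7)
```

    At T ≈ T_c = 159.5 GeV the broken-phase fit gives a rate of the same order as the symmetric-phase value, consistent with a smooth crossover, while at 130 GeV it has dropped by many orders of magnitude.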

  5. Liver cancer mortality rate model in Thailand

    NASA Astrophysics Data System (ADS)

    Sriwattanapongse, Wattanavadee; Prasitwattanaseree, Sukon

    2013-09-01

    Liver cancer has been a leading cause of death in Thailand. The purpose of this study was to model and forecast the liver cancer mortality rate in Thailand using death certificate reports. A retrospective analysis of the liver cancer mortality rate was conducted. A total of 123,280 deaths attributed to liver cancer were obtained from the national vital registration database for the 10-year period from 2000 to 2009, provided by the Ministry of Interior and coded as cause of death using ICD-10 by the Ministry of Public Health. A multivariate regression model was used for modeling and forecasting age-specific liver cancer mortality rates in Thailand. Liver cancer mortality increased with increasing age for each sex and was also higher in the North East provinces. The trends in liver cancer mortality remained stable in most age groups over the ten-year period (2000 to 2009), with increases in the Northern and Southern regions. Liver cancer mortality was higher in males and increased with increasing age. Liver cancer control measures need to remain in place on a sustained, long-term basis given the high liver cancer burden in Thailand.

  6. The relationship between specific absorption rate and temperature elevation in anatomically based human body models for plane wave exposure from 30 MHz to 6 GHz.

    PubMed

    Hirata, Akimasa; Laakso, Ilkka; Oizumi, Takuya; Hanatani, Ryuto; Chan, Kwok Hung; Wiart, Joe

    2013-02-21

    According to the international safety guidelines/standard, the whole-body-averaged specific absorption rate (SAR) (Poljak et al 2003 IEEE Trans. Electromagn. Compat. 45 141-5) and the peak spatial average SAR are used as metrics for human protection from whole-body and localized exposures, respectively. The IEEE standard (IEEE 2006 IEEE C95.1) indicates that the upper boundary frequency, over which the whole-body-averaged SAR is deemed to be the basic restriction, has been reduced from 6 to 3 GHz, because radio-wave energy is absorbed around the body surface when the frequency is increased. However, no quantitative discussion has been provided to support this description especially from the standpoint of temperature elevation. It is of interest to investigate the maximum temperature elevation in addition to the core temperature even for a whole-body exposure. In the present study, using anatomically based human models, we computed the SAR and the temperature elevation for a plane-wave exposure from 30 MHz to 6 GHz, taking into account the thermoregulatory response. As the primary result, we found that the ratio of the core temperature elevation to the whole-body-averaged SAR is almost frequency independent for frequencies below a few gigahertz; the ratio decreases above this frequency. At frequencies higher than a few gigahertz, core temperature elevation for the same whole-body averaged SAR becomes lower due to heat convection from the skin to air. This lower core temperature elevation is attributable to skin temperature elevation caused by the power absorption around the body surface. Then, core temperature elevation even for whole-body averaged SAR of 4 W kg⁻¹ with the duration of 1 h was at most 0.8 °C, which is smaller than a threshold considered in the safety guidelines/standard. Further, the peak 10 g averaged SAR is correlated with the maximum body temperature elevations without extremities and pinna over the frequencies considered. These findings

  7. Diagnosis of Photochemical Ozone Production Rates and Limiting Factors based on Observation-based Modeling Approach over East Asia: Impact of Radical Chemistry Mechanism and Ozone-Control Implications

    NASA Astrophysics Data System (ADS)

    Kanaya, Y.

    2015-12-01

    Growth of tropospheric ozone, which causes health and climate impacts, is a concern over East Asia because emissions of its precursors have increased dramatically. Photochemical production rates of ozone and their limiting factors, primarily studied for urban locations, have been poorly assessed from a perspective of regional-scale air pollution over East Asia. We performed comprehensive observations of ozone precursors at several locations with regional representativeness and made such an assessment based on the observation-based modeling approach. Here, diagnosis at Fukue Island (32.75°N, 128.68°E), remotely located in western Japan (May 2009), is highlighted, where the highest 10% of hourly ozone concentrations reached 72‒118 ppb during May under the influence of Asian continental outflow. The average in-situ ozone production rate was estimated to be 6.8 ppb per day, suggesting that in-travel production was still active, while larger buildup must have occurred beforehand. Information on the chemical status of the air mass arriving in Japan is important, because it affects how further ozone production occurs after precursor addition from Japanese domestic emissions. The main limiting factor of ozone production was usually NOx, suggesting that domestic NOx emission control is important in reducing further ozone production and the incidence of warning issuance (>120 ppb). VOCs also increased the ozone production rate, and occasionally (14% of the time) became the dominant limiting factor. This analysis implies that the recently enacted VOC reduction legislation should be effective. The uncertainty in the radical chemistry mechanism governing ozone production had a non-negligible impact, but the main policy-relevant conclusion was not altered. When chain termination was augmented by HO2-H2O + NO/NO2 reactions and by heterogeneous loss of HO2 on aerosol particle surfaces, the daily ozone production rate decreased by <24%, and the fraction of hours when the VOC-limited condition occurred varied from 14% to 13

  8. Rate controlling model for bioremediation of oil contaminated soil

    SciTech Connect

    Li, K.Y.; Annamali, S.N.; Hopper, J.R.

    1993-11-01

    A mathematical model of bio-remediation of hydrocarbons in a soil matrix has been developed to predict the rate controlling step and the remediation rate during the bioremediation of a contaminated soil. The model is based on mass transfer of oxygen and oil into the aqueous solution in the soil matrix and the biodegradation of the hydrocarbons in the aqueous solution. Monod's equation was used to describe the biodegradation rate in aqueous solution while the mass transfer equations were used to describe the mass transfer rates of oxygen and oil in the soil matrix. Results from model calculations indicate that the bio-remediation rate increases and approaches a limiting value when one of the rates becomes controlling. When the parameters of the site soil samples are measured and the solubilities of oxygen and oil in aqueous solution are obtained, the bioremediation rate can be predicted by this model. The rate controlling step of the bioremediation site may be identified quickly and steps to improve the bioremediation rate can be recommended. 8 refs., 7 figs.
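
    The rate-controlling-step idea can be sketched as taking the minimum of the three series rates: oxygen transfer, oil dissolution, and Monod biodegradation. The functional forms and parameter names below are illustrative, not the paper's exact formulation:

```python
def monod_rate(mu_max, S, Ks, X):
    """Monod biodegradation rate in the aqueous phase (substrate S, biomass X)."""
    return mu_max * X * S / (Ks + S)

def transfer_rate(kLa, c_sat, c):
    """Mass-transfer rate into solution, driven by the saturation deficit."""
    return kLa * (c_sat - c)

def controlling_step(mu_max, S, Ks, X, kLa_o2, o2_sat, o2, kLa_oil, oil_sat, oil):
    """Return the slowest step and its rate; in a chain of series steps the
    slowest one limits the overall bioremediation rate."""
    rates = {
        "biodegradation": monod_rate(mu_max, S, Ks, X),
        "oxygen transfer": transfer_rate(kLa_o2, o2_sat, o2),
        "oil dissolution": transfer_rate(kLa_oil, oil_sat, oil),
    }
    step = min(rates, key=rates.get)
    return step, rates[step]
```

    Once soil parameters and the solubilities of oxygen and oil are measured, a comparison like this identifies the controlling step and hence where remediation effort is best spent.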

  9. Modeling of asteroidal dust production rates

    NASA Technical Reports Server (NTRS)

    Durda, Daniel D.; Dermott, Stanley F.; Gustafson, Bo A. S.

    1992-01-01

    The production rate of dust associated with the prominent Hirayama asteroid families and the background asteroidal population are modeled with the intent of using the families as a calibrator of mainbelt dust production. However, the dust production rates of asteroid families may be highly stochastic; there is probably more than an order of magnitude variation in the total area of dust associated with a family. Over 4.5 × 10⁹ years of collisional evolution, the volume (mass) of a family is ground down by an order of magnitude, suggesting a similar loss from the entire mainbelt population. Our collisional models show that the number of meteoroids deliverable to Earth also varies stochastically, but only by a factor of 2 to 3.

  10. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  11. Modelling rating curves using remotely sensed LiDAR data

    USGS Publications Warehouse

    Nathanson, Marcus; Kean, Jason W.; Grabs, Thomas J.; Seibert, Jan; Laudon, Hjalmar; Lyon, Steve W.

    2012-01-01

    Accurate stream discharge measurements are important for many hydrological studies. In remote locations, however, it is often difficult to obtain stream flow information because of the difficulty in making the discharge measurements necessary to define stage-discharge relationships (rating curves). This study investigates the feasibility of defining rating curves by using a fluid mechanics-based model constrained with topographic data from airborne LiDAR scanning. The study was carried out for an 8 m wide channel in the boreal landscape of northern Sweden. LiDAR data were used to define channel geometry above a low-flow water surface along the 90 m surveyed reach. The channel topography below the water surface was estimated using the simple assumption of a flat streambed. The roughness for the modelled reach was back-calculated from a single measurement of discharge. The topographic and roughness information was then used to model a rating curve. To isolate the potential influence of the flat-bed assumption, a 'hybrid model' rating curve was developed on the basis of data combined from the LiDAR scan and a detailed ground survey. Whereas this hybrid model rating curve was in agreement with the direct measurements of discharge, the LiDAR model rating curve was equally in agreement with the medium and high flow measurements, based on confidence intervals calculated from the direct measurements. The discrepancy between the LiDAR model rating curve and the low flow measurements was likely due to reduced roughness associated with unresolved submerged bed topography. Scanning during periods of low flow can help minimize this deficiency. These results suggest that combined ground surveys and LiDAR scans, or multifrequency LiDAR scans that see 'below' the water surface (bathymetric LiDAR), could be useful in generating the data needed to run such a fluid mechanics-based model. This opens a realm of possibility to remotely sense and monitor stream flows in channels in remote
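
    As a much-simplified stand-in for the fluid-mechanics model used in the study, a rating curve for a rectangular channel can be sketched with Manning's equation. The width matches the abstract's 8 m channel; the slope and roughness below are illustrative values, and the actual model resolves geometry and roughness in far more detail:

```python
import math

def manning_rating_curve(stages, width=8.0, slope=0.005, n_rough=0.05):
    """Discharge Q(h) for a rectangular channel from Manning's equation,
    Q = (1/n) * A * R^(2/3) * S^(1/2), with area A = w*h and hydraulic
    radius R = A / (w + 2h). Stages in metres, Q in m^3/s."""
    curve = []
    for h in stages:
        area = width * h
        radius = area / (width + 2.0 * h)
        curve.append(area * radius ** (2.0 / 3.0) * math.sqrt(slope) / n_rough)
    return curve
```

    Evaluating the function over a range of stages yields the stage-discharge relationship; the single measured discharge mentioned above is what pins down the roughness n in practice.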

  12. Empirical analysis of quantum finance interest rates models

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yang, Cao

    2009-07-01

    Empirical forward interest rates drive the debt markets. Libor and Euribor futures data are used to calibrate and test models of interest rates based on the formulation of quantum finance. In particular, all the model parameters, including interest rate volatilities, are obtained from market data. The random noise driving the forward interest rates is taken to be a Euclidean two-dimensional quantum field. We analyze two models, namely the bond forward interest rates, which is a linear theory, and the Libor Market Model, which is a nonlinear theory. Both models are analyzed using Libor and Euribor data, with various approximations to match the linear and nonlinear models. The results are quite good, with the linear model having an accuracy of about 99% and the nonlinear model being slightly less accurate. We extend our analysis by directly using the Zero Coupon Yield Curve (ZCYC) data for Libor and for bonds, but due to some technical difficulties we could not derive the model parameters directly from the ZCYC data.

  13. Trends in Incidence Rates during 1999-2008 and Prevalence in 2008 of Childhood Type 1 Diabetes Mellitus in GERMANY – Model-Based National Estimates

    PubMed Central

    Kiess, Wieland; Kapellen, Thomas Michael; Stange, Thoralf; Manuwald, Ulf; Salzsieder, Eckhard; Holl, Reinhard Walter; Schoffer, Olaf; Stahl-Pehe, Anna; Giani, Guido; Ehehalt, Stefan; Neu, Andreas; Rosenbauer, Joachim

    2015-01-01

    Aims To estimate the national incidence rate and trend of type 1 diabetes (T1DM) in Germany from 1999 to 2008 and the national prevalence in 2008 in the age group 0–14 years. Methods Data were taken from a nationwide registry for incident cases of T1DM in the ages 0–4 years and 3 regional registries (North-Rhine-Westphalia, Baden-Wuerttemberg and Saxony) for incident cases of T1DM in the ages 0–14 years, covering 41% of the child population in Germany. The degree of ascertainment was ≥ 97% in all registries. Incident and prevalent cases were grouped by region, sex, age (0–4, 5–9, 10–14 years), and, for incident data, additionally by two 5-year periods (1999–2003, 2004–2008). Poisson regression models were fitted to the data to derive national estimates of incidence rate trends and prevalence in the age groups 5–9, 10–14 and 0–14 years. We used direct age-standardization. Results The estimated national incidence rate in 0–14-year-olds increased significantly by 18.1% (95%CI: 11.6–25.0%, p<0.001) from 1999–2003 to 2004–2008, independent of sex, corresponding to an average annual increase of 3.4% (95%-CI: 2.2–4.6%). The overall incidence rate was estimated at 22.9 per 100,000 person-years, and we identified a previously unknown within-country west-east gradient. The national prevalence in the ages 0–14 years on 31/12/2008 was estimated to be 148.1 per 100,000 persons. Conclusions The national incidence rate of childhood T1DM in Germany is higher than in many other countries around the world. Importantly, the estimated trend of the incidence rate confirms the international data of a global increase of T1DM incidence. PMID:26181330
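
    The reported figures are internally consistent: the two period midpoints are five years apart, and annualizing the 18.1% period increase under constant exponential growth recovers the stated 3.4% per year. A quick check:

```python
def annualized_increase(period_increase_pct, years):
    """Average annual percentage increase equivalent to a total percentage
    increase over `years`, assuming constant exponential growth."""
    factor = 1.0 + period_increase_pct / 100.0
    return (factor ** (1.0 / years) - 1.0) * 100.0
```

    Here annualized_increase(18.1, 5) evaluates to about 3.4, matching the abstract.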

  14. Modeling the dissipation rate in rotating turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Raj, Rishi; Gatski, Thomas B.

    1990-01-01

    A variety of modifications to the modeled dissipation rate transport equation that have been proposed during the past two decades to account for rotational strains are examined. The models are subjected to two crucial test cases: the decay of isotropic turbulence in a rotating frame and homogeneous shear flow in a rotating frame. It is demonstrated that these modifications do not yield substantially improved predictions for these two test cases and in many instances give rise to unphysical behavior. An alternative proposal, based on the use of the tensor dissipation rate, is made for the development of improved models.

  15. A Symmetric Time-Varying Cluster Rate of Descent Model

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2015-01-01

    A model of the time-varying rate of descent of the Orion vehicle was developed based on the observed correlation between canopy projected area and drag coefficient. This initial version of the model assumes cluster symmetry and only varies the vertical component of velocity. The cluster fly-out angle is modeled as a series of sine waves based on flight test data. The projected area of each canopy is synchronized with the primary fly-out angle mode. The sudden loss of projected area during canopy collisions is modeled at minimum fly-out angles, leading to brief increases in rate of descent. The cluster geometry is converted to drag coefficient using empirically derived constants. A more complete model is under development, which computes the aerodynamic response of each canopy to its local incidence angle.
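
    The link between projected area and rate of descent is a drag balance: at equilibrium W = ½ ρ v² C_D S, so a sudden loss of projected area S during a canopy collision briefly raises the descent rate, as described above. A sketch with illustrative numbers (not Orion values):

```python
import math

def descent_rate(weight_n, rho, cd, area_m2):
    """Equilibrium vertical descent rate from the drag balance,
    v = sqrt(2 W / (rho * Cd * S))."""
    return math.sqrt(2.0 * weight_n / (rho * cd * area_m2))
```

    Halving the projected area raises the descent rate by a factor of √2, which is the mechanism behind the brief rate-of-descent spikes the model produces at minimum fly-out angles.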

  16. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems is different from the low data rate systems. Three simulations were built during the development phase of Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling. The first was a model using SIMCRIPT based upon the determination and processing of each event at each node. The second simulation was developed in C based upon isolating the distinct object that can be identified as the ring, the message, the node, and the set of critical events. The third model further identified the basic network functionality by creating a single object, the node which includes the set of critical events which occur at the node. The ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. It should be stated that the language used was mainly selected by the model developer because of his past familiarity. Further the models were not built with the intent to compare either structure or language but because the complexity of the problem and initial results contained obvious errors, so alternative models were built to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand modeling complexities. Each model is described along with its features and problems. The models are compared and concluding observations and remarks are presented.

  17. Towards a Model for Protein Production Rates

    NASA Astrophysics Data System (ADS)

    Dong, J. J.; Schmittmann, B.; Zia, R. K. P.

    2007-07-01

    In the process of translation, ribosomes read the genetic code on an mRNA and assemble the corresponding polypeptide chain. The ribosomes perform discrete directed motion which is well modeled by a totally asymmetric simple exclusion process (TASEP) with open boundaries. Using Monte Carlo simulations and a simple mean-field theory, we discuss the effect of one or two "bottlenecks" (i.e., slow codons) on the production rate of the final protein. Confirming and extending previous work by Chou and Lakatos, we find that the location and spacing of the slow codons can affect the production rate quite dramatically. In particular, we observe a novel "edge" effect, i.e., an interaction of a single slow codon with the system boundary. We focus in detail on ribosome density profiles and provide a simple explanation for the length scale which controls the range of these interactions.

  18. Template based low data rate speech encoder

    NASA Astrophysics Data System (ADS)

    Fransen, Lawrence

    1993-09-01

    The 2400-b/s linear predictive coder (LPC) is currently being widely deployed to support tactical voice communication over narrowband channels. However, there is a need for lower-data-rate voice encoders for special applications: improved performance in high bit-error conditions, low-probability-of-intercept (LPI) voice communication, and narrowband integrated voice/data systems. An 800-b/s voice encoding algorithm is presented which is an extension of the 2400-b/s LPC. To construct template tables, speech samples of 420 speakers uttering 8 sentences each were excerpted from the Texas Instrument - Massachusetts Institute of Technology (TIMIT) Acoustic-Phonetic Speech Data Base. Speech intelligibility of the 800-b/s voice encoding algorithm measured by the diagnostic rhyme test (DRT) is 91.5 for three male speakers. This score compares favorably with the 2400-b/s LPC of a few years ago.

  19. Modeled Estimates of Soil and Dust Ingestion Rates for Children

    EPA Science Inventory

    Daily soil/dust ingestion rates typically used in exposure and risk assessments are based on tracer element studies, which have a number of limitations and do not separate contributions from soil and dust. This article presents an alternate approach of modeling soil and dust inge...

  20. Prediction of interest rate using CKLS model with stochastic parameters

    SciTech Connect

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing the spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ{sup (j)} of four parameters at the (J+n)-th time point is estimated by the j-th window which is defined as the set consisting of the observed interest rates at the j′-th time point where j≤j′≤j+n. To model the variation of φ{sup (j)}, we assume that φ{sup (j)} depends on φ{sup (j−m)}, φ{sup (j−m+1)},…, φ{sup (j−1)} and the interest rate r{sub j+n} at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r{sub j+n+1} of the interest rate at the next time point when the value r{sub j+n} of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r{sub j+n+d} at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.

  1. Prediction of interest rate using CKLS model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-01

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing the spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ(j) of four parameters at the (J+n)-th time point is estimated by the j-th window which is defined as the set consisting of the observed interest rates at the j'-th time point where j≤j'≤j+n. To model the variation of φ(j), we assume that φ(j) depends on φ(j-m), φ(j-m+1),…, φ(j-1) and the interest rate rj+n at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value rj+n+1 of the interest rate at the next time point when the value rj+n of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate rj+n+d at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.

  2. A Model Explaining Genotypic and Ontogenetic Variation of Leaf Photosynthetic Rate in Rice (Oryza sativa) Based on Leaf Nitrogen Content and Stomatal Conductance

    PubMed Central

    Ohsumi, Akihiro; Hamasaki, Akihiro; Nakagawa, Hiroshi; Yoshida, Hiroe; Shiraiwa, Tatsuhiko; Horie, Takeshi

    2007-01-01

    Backgrounds and Aims Identification of physiological traits associated with leaf photosynthetic rate (Pn) is important for improving potential productivity of rice (Oryza sativa). The objectives of this study were to develop a model which can explain genotypic variation and ontogenetic change of Pn in rice under optimal conditions as a function of leaf nitrogen content per unit area (N) and stomatal conductance (gs), and to quantify the effects of interaction between N and gs on the variation of Pn. Methods Pn, N and gs were measured at different developmental stages for the topmost fully expanded leaves in ten rice genotypes with diverse backgrounds grown in pots (2002) and in the field (2001 and 2002). A model of Pn that accounts for carboxylation and CO2 diffusion processes, and assumes that the ratio of internal conductance to gs is constant, was constructed, and its goodness of fit was examined. Key Results Considerable genotypic differences in Pn were evident for rice throughout development in both the pot and field experiments. The genotypic variation of Pn was correlated with that of gs at a given stage, and the change of Pn with plant development was closely related to the change of N. The variation of gs among genotypes was independent of that of N. The model explained well the variation in Pn of the ten genotypes grown under different conditions at different developmental stages. Conclusions The response of Pn to increased N differs with gs, and the increase in Pn of genotypes with low gs is smaller than that of genotypes with high gs. Therefore, simultaneous improvements of these two traits are essential for an effective breeding of rice genotypes with increased Pn. PMID:17204541

  3. Ground-Based Remote Retrievals of Cumulus Entrainment Rates

    SciTech Connect

    Wagner, Timothy J.; Turner, David D.; Berg, Larry K.; Krueger, Steven K.

    2013-07-26

    While fractional entrainment rates for cumulus clouds have typically been derived from airborne observations, this limits the size and scope of available data sets. To increase the number of continental cumulus entrainment rate observations available for study, an algorithm for retrieving them from ground-based remote sensing observations has been developed. This algorithm, called the Entrainment Rate In Cumulus Algorithm (ERICA), uses the suite of instruments at the Southern Great Plains (SGP) site of the United States Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility as inputs into a Gauss-Newton optimal estimation scheme, in which an assumed guess of the entrainment rate is iteratively adjusted through intercomparison of modeled liquid water path and cloud droplet effective radius to their observed counterparts. The forward model in this algorithm is the Explicit Mixing Parcel Model (EMPM), a cloud parcel model that treats entrainment as a series of discrete entrainment events. A quantified value for measurement uncertainty is also returned as part of the retrieval. Sensitivity testing and information content analysis demonstrate the robust nature of this method for retrieving accurate observations of the entrainment rate without the drawbacks of airborne sampling. Results from a test of ERICA on three months of shallow cumulus cloud events show significant variability of the entrainment rate of clouds in a single day and from one day to the next. The mean value of 1.06 km-¹ for the entrainment rate in this dataset corresponds well with prior observations and simulations of the entrainment rate in cumulus clouds.

  4. Direct regression models for longitudinal rates of change

    PubMed Central

    Bryan, Matthew; Heagerty, Patrick J.

    2014-01-01

    Comparing rates of growth, or rates of change, across covariate-defined subgroups is a primary objective for many longitudinal studies. In the special case of a linear trend over time, the interaction between a covariate and time will characterize differences in longitudinal rates of change. However, in the presence of a non-linear longitudinal trajectory, the standard mean regression approach does not permit parsimonious description or inference regarding differences in rates of change. Therefore, we propose regression methodology for longitudinal data that allows a direct, structured comparison of rates across subgroups even in the presence of a non-linear trend over time. Our basic longitudinal rate regression method assumes a proportional difference across covariate groups in the rate of change across time, but this assumption can be relaxed. Rates are compared relative to a generally specified time trend for which we discuss both parametric and non-parametric estimating approaches. We develop mixed model longitudinal methodology that explicitly characterizes subject-to-subject variation in rates, as well as a marginal estimating equation-based method. In addition, we detail a score test to detect violations of the proportionality assumption, and we allow time-varying rate effects as a natural generalization. Simulation results demonstrate potential gains in power for the longitudinal rate regression model relative to a linear mixed effects model in the presence of a non-linear trend in time. We apply our method to a study of growth among infants born to HIV infected mothers, and conclude with a discussion of possible extensions for our methods. PMID:24497427

  5. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics due 1) to evolution of the official algorithms used to process the data, and 2) differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  6. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM. PMID:26737125

  7. A generalized Prandtl-Ishlinskii model for characterizing the rate-independent and rate-dependent hysteresis of piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    Gan, Jinqiang; Zhang, Xianmin; Wu, Heng

    2016-03-01

    In this paper, a generalized hysteresis model is developed to describe both rate-independent and rate-dependent hysteresis in piezoelectric actuators. Based on the classical Prandtl-Ishlinskii (P-I) model, the developed model adds a quadratic polynomial and makes other small changes. When it is used to describe rate-independent hysteresis, the parameters of the model are constants, which can be identified by self-adaptive particle swarm optimization. The effectiveness of this rate-independent modified P-I model is demonstrated by comparing simulation results of the developed model and the classic Prandtl-Ishlinskii model. Simulation results suggest that the rate-independent modified P-I model can describe hysteresis more precisely. Compared with the classical P-I model, the rate-independent modified P-I model reduces modeling error by more than 50%. When it is used to describe rate-independent hysteresis, a one-side operator is adopted and the parameters are functions with input frequency. The results of the experiments and simulations have shown that the proposed models can accurately describe both rate-independent and rate-dependent hysteresis in piezoelectric actuators.

  8. A generalized Prandtl-Ishlinskii model for characterizing the rate-independent and rate-dependent hysteresis of piezoelectric actuators.

    PubMed

    Gan, Jinqiang; Zhang, Xianmin; Wu, Heng

    2016-03-01

    In this paper, a generalized hysteresis model is developed to describe both rate-independent and rate-dependent hysteresis in piezoelectric actuators. Based on the classical Prandtl-Ishlinskii (P-I) model, the developed model adds a quadratic polynomial and makes other small changes. When it is used to describe rate-independent hysteresis, the parameters of the model are constants, which can be identified by self-adaptive particle swarm optimization. The effectiveness of this rate-independent modified P-I model is demonstrated by comparing simulation results of the developed model and the classic Prandtl-Ishlinskii model. Simulation results suggest that the rate-independent modified P-I model can describe hysteresis more precisely. Compared with the classical P-I model, the rate-independent modified P-I model reduces modeling error by more than 50%. When it is used to describe rate-independent hysteresis, a one-side operator is adopted and the parameters are functions with input frequency. The results of the experiments and simulations have shown that the proposed models can accurately describe both rate-independent and rate-dependent hysteresis in piezoelectric actuators. PMID:27036808

  9. Modeling Equity for Alternative Water Rate Structures

    NASA Astrophysics Data System (ADS)

    Griffin, R.; Mjelde, J.

    2011-12-01

    The rising popularity of increasing block rates for urban water runs counter to mainstream economic recommendations, yet decision makers in rate design forums are attracted to the notion of higher prices for larger users. Among economists, it is widely appreciated that uniform rates have stronger efficiency properties than increasing block rates, especially when volumetric prices incorporate intrinsic water value. Yet, except for regions where water market purchases have forced urban authorities to include water value in water rates, economic arguments have weakly penetrated policy. In this presentation, recent evidence will be reviewed regarding long term trends in urban rate structures while observing economic principles pertaining to these choices. The main objective is to investigate the equity of increasing block rates as contrasted to uniform rates for a representative city. Using data from four Texas cities, household water demand is established as a function of marginal price, income, weather, number of residents, and property characteristics. Two alternative rate proposals are designed on the basis of recent experiences for both water and wastewater rates. After specifying a reasonable number (~200) of diverse households populating the city and parameterizing each household's characteristics, every household's consumption selections are simulated for twelve months. This procedure is repeated for both rate systems. Monthly water and wastewater bills are also computed for each household. Most importantly, while balancing the budget of the city utility we compute the effect of switching rate structures on the welfares of households of differing types. Some of the empirical findings are as follows. Under conditions of absent water scarcity, households of opposing characters such as low versus high income do not have strong preferences regarding rate structure selection. This changes as water scarcity rises and as water's opportunity costs are allowed to

  10. Decoding Nitric Oxide Release Rates of Amine-Based Diazeniumdiolates

    PubMed Central

    Wang, Yan-Ni; Collins, Jack; Holland, Ryan J.; Keefer, Larry K.; Ivanic, Joseph

    2013-01-01

    Amine-based diazeniumdiolates (NONOates) have garnered widespread use as nitric oxide (NO) donors and their potential for nitroxyl (HNO) release has more recently been realized. While NO release rates can vary significantly with the type of amine, half-lives of seconds to days under physiological conditions, there is as yet no way to determine a priori the NO or HNO production rates of a given species and no discernible trends have manifested other than that secondary amines produce only NO (i.e., no HNO). As a step to understanding these complex systems, here we describe a procedure for modeling amine-based NONOates in water solvent that provides an excellent correlation (R2 = 0.94) between experimentally measured dissociation rates of seven secondary amine species and their computed NO release activation energies. The significant difference in behavior of NONOates in the gas and solvent phases is also rigorously demonstrated via explicit additions of quantum mechanical water molecules. The presented results suggest that the as-yet unsynthesized simplest amine-based NONOate, the diazeniumdiolated ammonia anion [H2N-N(O)=NO−], could serve as an unperturbed HNO donor. These results provide a step forward toward the accurate modeling of general NO and/or HNO donors as well as for the identification of tailored prodrug candidates. PMID:23834533

  11. Probabilistic precipitation rate estimates with ground-based radar networks

    NASA Astrophysics Data System (ADS)

    Kirstetter, Pierre-Emmanuel; Gourley, Jonathan J.; Hong, Yang; Zhang, Jian; Moazamigoodarzi, Saber; Langston, Carrie; Arthur, Ami

    2015-03-01

    The uncertainty structure of radar quantitative precipitation estimation (QPE) is largely unknown at fine spatiotemporal scales near the radar measurement scale. By using the WSR-88D radar network and gauge data sets across the conterminous US, an investigation of this subject has been carried out within the framework of the NOAA/NSSL ground radar-based Multi-Radar Multi-Sensor (MRMS) QPE system. A new method is proposed and called PRORATE for probabilistic QPE using radar observations of rate and typology estimates. Probability distributions of precipitation rates are computed instead of deterministic values using a model quantifying the relation between radar reflectivity and the corresponding "true" precipitation. The model acknowledges the uncertainty arising from many factors operative at the radar measurement scale and from the correction algorithm. Ensembles of reflectivity-to-precipitation rate relationships accounting explicitly for precipitation typology were derived at a 5 min/1 km scale. This approach conditions probabilistic quantitative precipitation estimates (PQPE) on the precipitation rate and type. The model components were estimated on the basis of a 1 year long data sample over the CONUS. This PQPE model provides the basis for precipitation probability maps and the generation of radar precipitation ensembles. Maps of the precipitation exceedance probability for specific thresholds (e.g., precipitation return periods) are computed. Precipitation probability maps are accumulated to the hourly time scale and compare favorably to the deterministic QPE. As an essential property of precipitation, the impact of the temporal correlation on the hourly accumulation is examined. This approach to PQPE can readily apply to other systems including space-based passive and active sensor algorithms.

  12. 77 FR 24198 - Notice of Revocation of Market-Based Rate Authority and Termination of Market-Based Rate Tariffs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-23

    ... Energy Regulatory Commission Notice of Revocation of Market-Based Rate Authority and Termination of Market-Based Rate Tariffs Docket Nos. Aleph One, Inc ER04-686-000 Alpha Domestic Power Trading, LLC... market-based rate authority of the public utilities listed in the caption of that order, which had...

  13. Discovery of novel mGluR1 antagonists: a multistep virtual screening approach based on an SVM model and a pharmacophore hypothesis significantly increases the hit rate and enrichment factor.

    PubMed

    Li, Guo-Bo; Yang, Ling-Ling; Feng, Shan; Zhou, Jian-Ping; Huang, Qi; Xie, Huan-Zhang; Li, Lin-Li; Yang, Sheng-Yong

    2011-03-15

    Development of glutamate non-competitive antagonists of mGluR1 (Metabotropic glutamate receptor subtype 1) has increasingly attracted much attention in recent years due to their potential therapeutic application for various nervous disorders. Since there is no crystal structure reported for mGluR1, ligand-based virtual screening (VS) methods, typically pharmacophore-based VS (PB-VS), are often used for the discovery of mGluR1 antagonists. Nevertheless, PB-VS usually suffers a lower hit rate and enrichment factor. In this investigation, we established a multistep ligand-based VS approach that is based on a support vector machine (SVM) classification model and a pharmacophore model. Performance evaluation of these methods in virtual screening against a large independent test set, M-MDDR, show that the multistep VS approach significantly increases the hit rate and enrichment factor compared with the individual SB-VS and PB-VS methods. The multistep VS approach was then used to screen several large chemical libraries including PubChem, Specs, and Enamine. Finally a total of 20 compounds were selected from the top ranking compounds, and shifted to the subsequent in vitro and in vivo studies, which results will be reported in the near future. PMID:21316965

  14. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.

  15. Review of composite propellant burn rate modeling

    NASA Technical Reports Server (NTRS)

    Cohen, N. S.

    1979-01-01

    Steady-state combustion modeling of composite solid propellants is discussed with emphasis on the Beckstead-Derr-Price (BDP) model. The BDP model and some revisions are considered with respect to the analysis of monomodal ammonium perchlorate/inert binder propellants: topics examined include continuity relations, surface area relations, characteristic surface dimension, flame heights, and energy balance. Application of the BDP model to more complicated propellants containing multiple active ingredients is described. These propellants include multimodal, mixed oxidizer, active binder, aluminized, catalyzed, and nitramine propellants. Example cases of modeling (with comparison to experimental data) are presented, and strengths and weaknesses of current modeling approaches are evaluated.

  16. Hierarchical Classes Modeling of Rating Data

    ERIC Educational Resources Information Center

    Van Mechelen, Iven; Lombardi, Luigi; Ceulemans, Eva

    2007-01-01

    Hierarchical classes (HICLAS) models constitute a distinct family of structural models for N-way N-mode data. All members of the family include N simultaneous and linked classifications of the elements of the N modes implied by the data; those classifications are organized in terms of hierarchical, if-then-type relations. Moreover, the models are…

  17. Recurrent personality factors based on trait ratings.

    PubMed

    Tupes, E C; Christal, R E

    1992-06-01

    Intercorrelations among ratings on 35 personality traits, selected as representative of the personality domain, were obtained for eight samples. These samples differed in length of acquaintanceship from 3 days to more than a year; in kind of acquaintanceship from assessment programs in a military training course to a fraternity house situation; in type of subject from airmen with only a high-school education to male and female undergraduate students to first-year graduate students; and in type of rater from very naive persons to clinical psychologists and psychiatrists with years of experience in the evaluation of personality. Centroid or multiple-group factors were extracted and rotated orthogonally to simple structure. For one study, an independent solution was obtained in which analytic rotations were accomplished on an IBM 650 computer using Kaiser's normal varimax criterion. Five fairly strong and recurrent factors emerged from each analysis, labeled as (a) Surgency, (b) Agreeableness, (c) Dependability, (d) Emotional Stability, and (e) Culture. PMID:1635043

  18. Etch rate Modeling and Real-time Control

    NASA Astrophysics Data System (ADS)

    Hershkowitz, N.; Sarfaty, M.; Baum, C.; Harper, M.; Shohet, J. L.

    1997-11-01

    The relatively high process rates in high density plasma tools as well as the shrinking thickness of the films, require fast estimate of the process state in order to implement real-time advanced process control. The fast etch rate estimate, within one second, in a spot size of 1-2 mm and the time averaged rates across the wafer are obtained by a combined use of an in-situ two-color laser interferometer and a full wafer image interferometer, respectively. The gas phase state is monitored by optical emission spectroscopy and a residual gas analyzer. The magnetically confined ICP tool state, including gas flow, pressure, and RF power to the antenna and the electrostatic chuck, is computer controlled and monitored. The absolute thickness of the film is determined during the process, thus providing an end-point prediction. Splitting the two-color laser beam to two spots on the wafer that are coated with different films provides real-time etch selectivity. The advantages of two-color laser interferometry for real-time process monitoring, development and control will be described. Langmuir kinetics modeling of the measured etch rates of polysilicon and SiO2 films in Cl2 and CF4 discharges using tool state parameters will be described. The etch rate model enabled us to develop a model-based real-time control algorithm. The achieved real-time control of plasma etch rates of un-patterned SiO2 and polysilicon films will be described. This work is funded by NSF grant No. EEC-8721545.

  19. Tantalum strength model incorporating temperature, strain rate and pressure

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt

    Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  20. Triple-α reaction rate constrained by stellar evolution models

    NASA Astrophysics Data System (ADS)

    Suda, Takuma; Hirschi, Raphael; Fujimoto, Masayuki Y.

    2012-11-01

    We investigate the quantitative constraint on the triple-α reaction rate based on stellar evolution theory, motivated by the recent significant revision of the rate proposed by nuclear physics calculations. Targeted stellar models were computed in order to investigate the impact of the revised rate in the mass range 0.8 ≤ M/Msolar ≤ 25 and in the metallicity range between Z = 0 and Z = 0.02. The revised rate has a significant impact on the evolution of low- and intermediate-mass stars, while its influence on the evolution of massive stars (M > 10 Msolar) is minimal. We find that employing the revised rate suppresses helium shell flashes on the AGB phase for stars in the initial mass range 0.8 ≤ M/Msolar ≤ 6, which contradicts observations. The absence of helium shell flashes is due to the weak temperature dependence of the revised triple-α reaction cross section at the temperatures involved. Our models suggest that the temperature dependence of the cross section, written as proportional to T^ν, should have at least ν > 10 at T = 1-1.2 × 10^8 K. We also derive the helium ignition curve to estimate the maximum cross section that retains the low-mass first red giants. The semi-analytically derived ignition curves suggest that the reaction rate should be less than ~10^-29 cm^6 s^-1 mol^-2 at ~10^7.8 K, which is about three orders of magnitude larger than that of the NACRE compilation.
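    The quoted constraint ν > 10 refers to the local logarithmic temperature sensitivity of the rate, r ∝ T^ν. A small sketch with invented rate values shows how ν can be estimated from rates at two temperatures:

```python
import math

# Sketch: the exponent nu in r ∝ T**nu, estimated as a finite-difference
# logarithmic derivative from rates at two temperatures (values invented).

def temperature_exponent(r1, r2, t1, t2):
    """nu = d ln r / d ln T, approximated between (t1, r1) and (t2, r2)."""
    return math.log(r2 / r1) / math.log(t2 / t1)

# A rate growing by a factor of 10^4 between 1.0e8 K and 1.2e8 K:
nu = temperature_exponent(1.0, 1.0e4, 1.0e8, 1.2e8)
print(nu > 10)  # True: comfortably above the abstract's threshold
```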

  2. Software for emission rate modeling of accidental toxic releases

    SciTech Connect

    Kumar, A.; Vashisth, S.

    1999-08-01

    This book addresses the requirements of Section 112(r) of the Clean Air Act Amendments of 1990. The software is based on guidelines released by the USEPA, and the package includes a manual and proprietary software on CD-ROM. Contents include release scenario descriptions (two-phase and single-phase choked/unchoked gas release, two-phase pressurized and refrigerated liquid release, single-phase high- and low-volatility liquid release); emission rate model development for each release class; and software design, evaluation, and application.

  3. Comparison of field theory models of interest rates with market data

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Srikant, Marakani

    2004-03-01

    We calibrate and test several variants of field theory models of the interest rate with data from Eurodollar futures. Models based on psychological factors are seen to provide the best fit to the market. We make a model-independent determination of the volatility function of the forward rates from market data.

  4. Empirical rate equation model and rate calculations of hydrogen generation for Hanford tank waste

    SciTech Connect

    HU, T.A.

    1999-07-13

    Empirical rate equations are derived to estimate hydrogen generation based on chemical reactions, radiolysis of water and organic compounds, and corrosion processes. A comparison of the generation rates observed in the field with the rates calculated for twenty-eight tanks shows agreement within a factor of two to three.

  5. Individual-Based Completion Rates for Apprentices. Technical Paper

    ERIC Educational Resources Information Center

    Karmel, Tom

    2011-01-01

    Low completion rates for apprentices and trainees have received considerable attention recently and it has been argued that NCVER seriously understates completion rates. In this paper Tom Karmel uses NCVER data on recommencements to estimate individual-based completion rates. It is estimated that around one-quarter of trade apprentices swap…

  6. Dynamic mathematical model of high rate algal ponds (HRAP).

    PubMed

    Jupsin, H; Praet, E; Vasel, J L

    2003-01-01

    This article presents a mathematical model describing high-rate algal ponds (HRAPs). The hydrodynamic behavior of the reactor is described as completely mixed tanks in series with recirculation. The hydrodynamic pattern is combined with a subset of River Water Quality Model 1 (RWQM1) covering the main processes in the liquid phase. Our aim is to develop models for WSPs and aerated lagoons as well, but we focused on HRAPs first for several reasons: sediments are usually less abundant in HRAPs and can be neglected; stratification is not observed, so state variables are constant over a reactor cross-section; and, due to the system's geometry, the reactor is quite similar to a plug-flow reactor with recirculation, with a simple advection term. The model is based on mass balances and includes the following processes: phytoplankton growth (on NO3- and NO2-) and death; aerobic growth of heterotrophs (on NO3- and NH4+) and respiration; anoxic growth of heterotrophs (on NO3- and NO2-) and anoxic respiration; and growth of nitrifiers (two stages) and respiration. The differences with regard to RWQM1 are that we included a limiting term associated with inorganic carbon in the growth rate of algae and nitrifiers, gas transfers are taken into account by the familiar Adeney equation, and a subroutine calculates light intensity at the water surface. This article presents our first simulations. PMID:14510211
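    The tanks-in-series-with-recirculation hydrodynamics can be sketched with a simple tracer balance. This is an illustrative toy (explicit Euler, arbitrary flows and volumes), not the RWQM1-based model itself:

```python
# Illustrative toy (not the RWQM1-based HRAP model): tracer washout in
# n completely mixed tanks in series, with recirculation q_r from the
# last tank back to the first, integrated with explicit Euler.

def simulate(n_tanks=8, q=1.0, q_r=2.0, v=1.0, c_in=0.0, dt=0.001, steps=5000):
    """Return tank tracer concentrations after `steps` Euler steps."""
    c = [1.0] + [0.0] * (n_tanks - 1)   # unit slug of tracer in tank 1
    for _ in range(steps):
        new = list(c)
        for i in range(n_tanks):
            # tank 1 receives fresh inflow q plus recirculation q_r
            inflow = q * c_in + q_r * c[-1] if i == 0 else (q + q_r) * c[i - 1]
            new[i] = c[i] + dt * (inflow - (q + q_r) * c[i]) / v
        c = new
    return c

c = simulate()
print(all(x >= 0.0 for x in c) and 0.0 < sum(c) < 1.0)  # True: tracer washes out
```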

  7. Model based manipulator control

    NASA Technical Reports Server (NTRS)

    Petrosky, Lyman J.; Oppenheim, Irving J.

    1989-01-01

    The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.

  8. Program documentation: Surface heating rate of thin skin models (THNSKN)

    NASA Technical Reports Server (NTRS)

    Mcbryde, J. D.

    1975-01-01

    Program THNSKN computes the mean heating rate at up to 100 locations on the surface of thin-skin transient heating rate models. Output is printed in tabular form and consists of time histories of temperatures, average temperatures, heat loss without conduction correction, mean heating rates, least-squares heating rates, and the percent standard error of the least-squares heating rates. The input tape used is produced by the program EHTS03.
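    The thin-skin reduction underlying such a program is q = ρ·c·δ·dT/dt, with the slope taken from a least-squares fit of the temperature history. A minimal sketch with illustrative material properties (not THNSKN's actual input format):

```python
# Sketch of thin-skin calorimetry data reduction: heating rate
# q = rho * c * delta * dT/dt, with dT/dt from a least-squares fit of the
# temperature history. Material values are illustrative, not EHTS03 data.

def least_squares_slope(ts, ys):
    """Ordinary least-squares slope of ys versus ts."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    num = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

rho, c_p, delta = 8030.0, 500.0, 0.001      # kg/m^3, J/(kg K), m (steel skin)
t = [0.0, 0.1, 0.2, 0.3, 0.4]               # s
temp = [300.0, 302.0, 304.0, 306.0, 308.0]  # K, a clean 20 K/s ramp
q = rho * c_p * delta * least_squares_slope(t, temp)
print(round(q))  # 80300 W/m^2
```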

  9. 9 CFR 592.510 - Base time rate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Base time rate. 592.510 Section 592.510 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Fees and Charges § 592.510 Base time rate....

  10. 9 CFR 592.510 - Base time rate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Base time rate. 592.510 Section 592.510 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Fees and Charges § 592.510 Base time rate....

  11. Reasoning with base rates is routine, relatively effortless, and context dependent.

    PubMed

    Pennycook, Gordon; Thompson, Valerie A

    2012-06-01

    We tested models of base rate "neglect" using a novel paradigm. Participants (N = 62) judged the probability that a hypothetical person belonged to one of two categories (e.g., nurse/doctor) on the basis of either a personality description alone (NoBR) or the personality description and a base rate probability (BR). When base rates and descriptions were congruent, judgments in the BR condition were higher and more uniform than those in the NoBR condition. In contrast, base rates had a polarizing effect on judgments when they were incongruent with the descriptions, such that estimates were either consistent with the base rates or discrepant with them. These data suggest that the form of base rate use (i.e., whether base rates will be integrated with diagnostic information) is context dependent. In addition, judgments made under instructions to respond intuitively were influenced by the base rates and took the same length of time in the two conditions. These data suggest that the use of base rates is routine and effortless and that base rate "neglect" is really a mixture of two strategies, one that is informed primarily by the base rate and the other by the personality description. PMID:22427266
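    Bayes' rule is the normative benchmark against which base-rate "neglect" is judged: the posterior integrates the base rate with the diagnosticity of the description. A small illustration with hypothetical nurse/doctor numbers:

```python
# Bayes' rule as the normative account of base-rate integration: the
# posterior combines the base rate with the description's diagnosticity.
# The nurse/doctor likelihoods below are hypothetical.

def posterior(base_rate, p_desc_given_a, p_desc_given_b):
    """P(category A | description) for a two-category judgment."""
    pa = base_rate * p_desc_given_a
    pb = (1.0 - base_rate) * p_desc_given_b
    return pa / (pa + pb)

# Congruent: the description favors nurses and nurses are the majority.
print(round(posterior(0.9, 0.8, 0.2), 3))  # 0.973
# Incongruent: the same description, but nurses are now the minority.
print(round(posterior(0.1, 0.8, 0.2), 3))  # 0.308
```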

  12. Satellite altimetry based rating curves throughout the entire Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.

    2013-05-01

    The Amazon basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the amount of data available is poor, over both time and space scales, due to factors like the basin's size and difficulty of access. One of the major obstacles is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation over the entire basin using the MGB-IPH model, with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2010. The stage dataset is made of ~800 altimetry series at ENVISAT and JASON-2 virtual stations, spanning 2002 to 2010. In the present work we present the benefits of using stochastic methods instead of probabilistic ones to determine a dataset of rating curve parameters which is consistent throughout the entire Amazon basin. The rating curve parameters have been computed using a parameter optimization technique based on a Markov chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, as well as their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. The rating curve determination also includes the error in the discharge estimates from the MGB-IPH model. These MGB-IPH errors come either from errors in the discharge derived from the gage readings or from errors in the satellite rainfall estimates. The present
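    A rating curve is typically parameterized as Q = a(h - h0)^b. The toy below fits that form with a minimal Metropolis sampler, a stand-in for the study's MCMC/Bayesian scheme; the data, error model (Gaussian, σ = 5 m³/s), and proposal scales are all synthetic:

```python
import math, random

# Toy stage-discharge fit: Q = a * (h - h0)**b with a minimal Metropolis
# sampler standing in for the study's MCMC/Bayesian scheme. Data, error
# model and proposal scales are all synthetic.

random.seed(1)
A_TRUE, B_TRUE, H0 = 50.0, 1.6, 2.0
stages = [3.0, 4.0, 5.0, 6.0, 7.0]                       # stage h, m
flows = [A_TRUE * (h - H0) ** B_TRUE * random.gauss(1.0, 0.02) for h in stages]

def log_like(a, b):
    if a <= 0.0:
        return -math.inf
    sse = sum((q - a * (h - H0) ** b) ** 2 for h, q in zip(stages, flows))
    return -sse / (2.0 * 5.0 ** 2)          # Gaussian errors, sigma = 5 m^3/s

a, b = 40.0, 1.0                            # deliberately poor starting point
samples = []
for i in range(20000):
    a_p, b_p = a + random.gauss(0.0, 1.0), b + random.gauss(0.0, 0.02)
    if math.log(random.random()) < log_like(a_p, b_p) - log_like(a, b):
        a, b = a_p, b_p                     # accept the proposal
    if i >= 10000:                          # keep post-burn-in draws
        samples.append((a, b))

a_hat = sum(s[0] for s in samples) / len(samples)
b_hat = sum(s[1] for s in samples) / len(samples)
print(abs(a_hat - A_TRUE) < 10.0, abs(b_hat - B_TRUE) < 0.3)
```

The posterior spread of the kept draws is what would give the credibility interval mentioned in the abstract.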

  13. Item Response Models for Local Dependence among Multiple Ratings

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Su, Chi-Ming; Qiu, Xue-Lan

    2014-01-01

    Ratings given to the same item response may have a stronger correlation than those given to different item responses, especially when raters interact with one another before giving ratings. The rater bundle model was developed to account for such local dependence by forming multiple ratings given to an item response as a bundle and assigning…

  14. ODP based UPT model

    NASA Astrophysics Data System (ADS)

    Berre, A. J.; Handegard, T.; Loevnes, K.; Skjellaug, B.; Aagedal, J. O.

    1994-01-01

    The report documents experiments with object-oriented modelling of Universal Personal Telecommunication (UPT) in a telecommunication environment based on the basic principles of open distributed processing (ODP). Using the object-oriented analysis and design method Object Modelling Technique (OMT), the service is modelled as a collection of software objects distributed across multiple network nodes. A software platform provides the mechanisms for application objects to interact. The platform builds on the basic facilities of the native computing and communication environments, but hides the heterogeneity of these environments and provides distribution transparency at the application programmer's interface. The report closes with some thoughts on applying the paradigm of ODP to intelligent networks (IN), and on the experience with OMT as a modelling technique for real-time distributed applications.

  15. Solutions of two-factor models with variable interest rates

    NASA Astrophysics Data System (ADS)

    Li, Jinglu; Clemons, C. B.; Young, G. W.; Zhu, J.

    2008-12-01

    The focus of this work is on numerical solutions to two-factor option pricing partial differential equations with variable interest rates. Two interest rate models, the Vasicek model and the Cox-Ingersoll-Ross model (CIR), are considered. Emphasis is placed on the definition and implementation of boundary conditions for different portfolio models, and on appropriate truncation of the computational domain. An exact solution to the Vasicek model and an exact solution for the price of bonds convertible to stock at expiration under a stochastic interest rate are derived. The exact solutions are used to evaluate the accuracy of the numerical simulation schemes. For the numerical simulations the pricing solution is analyzed as the market completeness decreases from the ideal complete level to one with higher volatility of the interest rate and a slower mean-reverting environment. Simulations indicate that the CIR model yields more reasonable results than the Vasicek model in a less complete market.
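    For context, the Vasicek model admits a closed-form zero-coupon bond price of the kind usable as an exact check on numerical schemes, while a CIR short-rate path is easily simulated. The sketch below uses textbook formulas with illustrative parameters, not the paper's test cases:

```python
import math, random

# Textbook formulas with illustrative parameters (not the paper's cases):
# Vasicek has a closed-form zero-coupon bond price, handy as an exact check
# for numerical schemes; CIR is simulated with a truncated Euler step.

def vasicek_bond(r0, a, theta, sigma, tau):
    """P(0, tau) under dr = a*(theta - r)*dt + sigma*dW."""
    B = (1.0 - math.exp(-a * tau)) / a
    A = math.exp((B - tau) * (a * a * theta - sigma * sigma / 2.0) / (a * a)
                 - sigma * sigma * B * B / (4.0 * a))
    return A * math.exp(-B * r0)

def cir_path(r0, a, theta, sigma, dt, steps, rng):
    """Terminal rate of dr = a*(theta - r)*dt + sigma*sqrt(r)*dW."""
    r = r0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        r = max(r + a * (theta - r) * dt + sigma * math.sqrt(max(r, 0.0)) * dw, 0.0)
    return r

p = vasicek_bond(r0=0.05, a=0.2, theta=0.05, sigma=0.01, tau=1.0)
print(0.0 < p < 1.0)  # True: a sane one-year discount factor
print(cir_path(0.05, 0.2, 0.05, 0.01, 0.01, 100, random.Random(7)) >= 0.0)  # True
```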

  16. Delineating the Average Rate of Change in Longitudinal Models

    ERIC Educational Resources Information Center

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…

  17. Bioremediation of oil-contaminated soil -- A rate model

    SciTech Connect

    Li, K.Y.; Zhang, Y.; Xu, T.

    1995-12-31

    Three rate equations, a modified Monod equation and two mass transfer rate equations, were used to calculate the biodegradation rate, oxygen transfer rate, and oil transfer rate during a bioremediation process for oil-contaminated soil. Based on experimental rate constants, these three rates were calculated and compared. It was found that the bioremediation rate of oil-contaminated soil could be controlled by the mass transfer of oil into aqueous solution (0.12 mg BOD/(L·h)). When the oil transfer rate is enhanced by at least a factor of 10, the oxygen transfer process (0.1-1.0 mg BOD/(L·h)) becomes the rate-controlling step. In most cases, the biodegradation of oil in aqueous solution is not the limiting step unless the microbial population in the aqueous solution is less than 100 mg VSS/L.
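    The rate-comparison logic of the abstract, in which the slowest of the three series steps controls the overall process, can be sketched as follows. The 0.12 and 0.1-1.0 mg BOD/(L·h) figures follow the abstract; the Monod parameters are invented for illustration:

```python
# Sketch of the abstract's rate comparison: the slowest of three series
# steps controls the overall bioremediation rate. The 0.12 and 0.1-1.0
# mg BOD/(L h) figures follow the abstract; Monod parameters are invented.

def monod_rate(mu_max, s, k_s, x):
    """Modified Monod biodegradation rate: mu_max * x * s / (k_s + s)."""
    return mu_max * x * s / (k_s + s)

rates = {
    "oil transfer":    0.12,   # mg BOD/(L h), from the abstract
    "oxygen transfer": 0.5,    # mg BOD/(L h), middle of the quoted range
    "biodegradation":  monod_rate(mu_max=0.1, s=50.0, k_s=10.0, x=5.0),
}
controlling = min(rates, key=rates.get)
print(controlling)  # oil transfer
```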

  18. The Rasch Rating Model and the Disordered Threshold Controversy

    ERIC Educational Resources Information Center

    Adams, Raymond J.; Wu, Margaret L.; Wilson, Mark

    2012-01-01

    The Rasch rating (or partial credit) model is a widely applied item response model that is used to model ordinal observed variables that are assumed to collectively reflect a common latent variable. In the application of the model there is considerable controversy surrounding the assessment of fit. This controversy is most notable when the set of…

  19. A model for the burning rates of composite propellants

    NASA Technical Reports Server (NTRS)

    Cohen, N. S.; Strand, L. D.

    1980-01-01

    An analytical model of the steady-state burning of composite solid propellants is presented. An improved burning rate model is achieved by incorporating an improved AP monopropellant model, a separate energy balance for the binder in which a portion of the diffusion flame is used to heat the binder, proper use of the binder regression rate in the model, and a model for the combustion of the energetic binder component of CMDB propellants. Also, an improved correlation and model of aluminum agglomeration is developed which properly describes compositional trends.

  20. Teachers' Characteristics and Ratings for Evidence-Based Behavioral Interventions

    ERIC Educational Resources Information Center

    Stormont, Melissa; Reinke, Wendy; Herman, Keith

    2011-01-01

    The vast majority of schools today are not prepared to support children's social behavior needs. One challenge is that teachers may not be knowledgeable of evidence-based practices that can be utilized with children. This study explored teachers' agreement ratings for evidence-based and nonevidence-based behavior management practices for children…

  1. Model-Based Systems

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    2007-01-01

    Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time, and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult still as several systems are combined into higher-level systems, maintained over decades, and evolved technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data, interrelating it to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database; because the consistency and integrity of the model are assured, the consistency and integrity of the various specification documents are ensured as well. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed, and how the needs are being addressed by international standards writing teams.

  2. Rate-based ABR flow control using two timescale SPSA

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Shalabh; Fu, Michael C.; Marcus, Steven I.

    1999-08-01

    In this paper, a two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm is developed and applied to closed-loop rate-based available bit rate (ABR) flow control. The relevant convergence results are stated and explained. Numerical experiments demonstrate fast convergence even in the presence of significant delays and a large number of parameterized policy levels.
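    For readers unfamiliar with SPSA: it estimates a gradient from just two function evaluations per iteration along a random simultaneous perturbation. A single-timescale sketch on a toy quadratic follows (the paper's algorithm couples two timescales and targets the flow-control problem):

```python
import random

# Single-timescale SPSA sketch on a toy quadratic (the paper couples two
# timescales for ABR flow control). Each iteration estimates the gradient
# from only two function evaluations along a random +/-1 perturbation.

def spsa(f, theta, iters=2000, a0=0.1, c0=0.1, seed=3):
    rng = random.Random(seed)
    for k in range(1, iters + 1):
        ak, ck = a0 / k ** 0.602, c0 / k ** 0.101   # standard gain sequences
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        f_plus = f([t + ck * d for t, d in zip(theta, delta)])
        f_minus = f([t - ck * d for t, d in zip(theta, delta)])
        g = (f_plus - f_minus) / (2.0 * ck)
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # minimum at (1, -2)
theta = spsa(f, [0.0, 0.0])
print(abs(theta[0] - 1.0) < 0.2 and abs(theta[1] + 2.0) < 0.2)  # True
```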

  3. Development of an Interview-Based Geriatric Depression Rating Scale.

    ERIC Educational Resources Information Center

    Jamison, Christine; Scogin, Forrest

    1992-01-01

    Developed interview-based Geriatric Depression Rating Scale (GDRS) and administered 35-item GDRS to 68 older adults with range of affective disturbance. Found scale to have internal consistency and split-half reliability comparable to those of Hamilton Rating Scale for Depression and Geriatric Depression Scale. Concurrent validity, construct…

  4. Additive-multiplicative rates model for recurrent events.

    PubMed

    Liu, Yanyan; Wu, Yuanshan; Cai, Jianwen; Zhou, Haibo

    2010-07-01

    Recurrent events are frequently encountered in biomedical studies, and evaluating covariate effects on the marginal recurrent event rate is of practical interest. There are mainly two types of rate models for recurrent event data: the multiplicative rates model and the additive rates model. We consider a more flexible additive-multiplicative rates model for the analysis of recurrent event data, wherein some covariate effects are additive while others are multiplicative. We formulate estimating equations for estimating the regression parameters. The estimators of these regression parameters are shown to be consistent and asymptotically normally distributed under appropriate regularity conditions. Moreover, an estimator of the baseline mean function is proposed and its large-sample properties are investigated. We also conduct simulation studies to evaluate the finite-sample behavior of the proposed estimators. A medical study of patients with cystic fibrosis who suffered from recurrent pulmonary exacerbations is provided as an illustration of the proposed method. PMID:20229314

  5. Markov models and the ensemble Kalman filter for estimation of sorption rates.

    SciTech Connect

    Vugrin, Eric D.; McKenna, Sean Andrew; Vugrin, Kay White

    2007-09-01

    Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for the prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed: a Markov model that utilizes conditional probabilities to determine the rates, and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, the addition of the ensemble Kalman filter is shown to improve the accuracy of the rate estimation by as much as an order of magnitude.
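    The EnKF update that corrects a biased ensemble of rate estimates can be sketched for a scalar parameter with a deliberately simple linear forward model. This is purely illustrative; the paper's forward model is the Markov mass-transfer simulation:

```python
import random

# Minimal ensemble Kalman filter sketch for one scalar sorption-rate
# parameter. The forward model H is a toy linear map, standing in for the
# paper's Markov mass-transfer simulation; all numbers are illustrative.

random.seed(5)
K_TRUE = 0.3                       # "true" rate to be recovered
H = lambda k: 10.0 * k             # toy observation operator
OBS_STD = 0.1

ens = [random.gauss(0.6, 0.15) for _ in range(200)]   # biased initial ensemble

for _ in range(10):                # assimilate ten noisy observations
    y_obs = H(K_TRUE) + random.gauss(0.0, OBS_STD)
    preds = [H(k) for k in ens]
    mk, mp = sum(ens) / len(ens), sum(preds) / len(preds)
    cov_kp = sum((k - mk) * (p - mp) for k, p in zip(ens, preds)) / (len(ens) - 1)
    var_p = sum((p - mp) ** 2 for p in preds) / (len(ens) - 1)
    gain = cov_kp / (var_p + OBS_STD ** 2)
    # perturbed-observation update pulls each member toward the data
    ens = [k + gain * (y_obs + random.gauss(0.0, OBS_STD) - p)
           for k, p in zip(ens, preds)]

k_hat = sum(ens) / len(ens)
print(abs(k_hat - K_TRUE) < 0.05)  # True: the initial bias is removed
```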

  6. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  7. New model describing the dynamical behaviour of penetration rates

    NASA Astrophysics Data System (ADS)

    Tashiro, Tohru; Minagawa, Hiroe; Chiba, Michiko

    2013-02-01

    We propose a hierarchical logistic equation as a model to describe the dynamical behaviour of the penetration rate of a popular product. The model incorporates a memory absent from the logistic model: how many people who already possess the item a person who does not yet possess it has met. As an application, we apply the model to iPod sales data and find that it approximates the data much better than the logistic equation.

  8. Modeling Large-Strain, High-Rate Deformation in Metals

    SciTech Connect

    Lesuer, D R; Kay, G J; LeBlanc, M M

    2001-07-20

    The large-strain deformation response of 6061-T6 and Ti-6Al-4V has been evaluated over a range of strain rates from 10^-4 s^-1 to over 10^4 s^-1. The results have been used to critically evaluate the strength and damage components of the Johnson-Cook (JC) material model. A new model that addresses the shortcomings of the JC model was then developed and evaluated. The model is derived from the rate equations that represent deformation mechanisms active during moderate- and high-rate loading. Another model that accounts for the influence of void formation on the yield and flow behavior of a ductile metal (the Gurson model) was also evaluated. The characteristics and predictive capabilities of these models are reviewed.
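    The Johnson-Cook model being evaluated has the standard multiplicative form σ = (A + Bε^n)(1 + C ln ε̇*)(1 - T*^m). A sketch with literature-style constants (illustrative, not the paper's fitted values for these alloys):

```python
import math

# Standard Johnson-Cook flow stress: (A + B*eps^n)(1 + C*ln(rate))(1 - T*^m).
# Constants are literature-style 6061-T6 values used purely for illustration,
# not the paper's fits.

def johnson_cook(eps, eps_rate, T, A=324.0, B=114.0, n=0.42, C=0.002,
                 m=1.34, eps0=1.0, T_room=293.0, T_melt=925.0):
    """Flow stress in MPa for plastic strain eps, strain rate in 1/s, T in K."""
    t_star = (T - T_room) / (T_melt - T_room)       # homologous temperature
    rate_term = 1.0 + C * math.log(max(eps_rate / eps0, 1e-12))
    return (A + B * eps ** n) * rate_term * (1.0 - t_star ** m)

# Strain hardening, rate sensitivity, and thermal softening, respectively:
print(johnson_cook(0.2, 1.0, 293.0) > johnson_cook(0.05, 1.0, 293.0))  # True
print(johnson_cook(0.1, 1e4, 293.0) > johnson_cook(0.1, 1.0, 293.0))   # True
print(johnson_cook(0.1, 1.0, 600.0) < johnson_cook(0.1, 1.0, 293.0))   # True
```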

  9. Model Based Definition

    NASA Technical Reports Server (NTRS)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  10. Cooling rate based on schreibersite growth for the Emery mesosiderite

    NASA Technical Reports Server (NTRS)

    Kulpecz, A. A., Jr.; Hewins, R. H.

    1978-01-01

    Computer simulation of diffusion-controlled growth of the large Ni-rich grains of schreibersite found in the Emery mesosiderite indicates that exsolution from kamacite occurred during cooling at the rate of 0.1 C/Myr. This finding agrees with the mesosiderite cooling rate determined by Powell (1969) from taenite-kamacite data. The cooling rate is the lowest found for any meteorite group, and implications for the cooling history, with a possibility of reheating, are considered. The procedure for computing a family of cooling rate curves is based on Randich's (1975) method.

  11. Model analysis of the link between interest rates and crashes

    NASA Astrophysics Data System (ADS)

    Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft

    2016-09-01

    We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that those bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas lack of liquidity tends to be the primary cause of failures under lower rates.

  12. A Modeling Perspective on Interpreting Rates of Change in Context

    ERIC Educational Resources Information Center

    Ärlebäck, Jonas B.; Doerr, Helen M.; O'Neil, AnnMarie H.

    2013-01-01

    Functions provide powerful tools for describing change, but research has shown that students find difficulty in using functions to create and interpret models of changing phenomena. In this study, we drew on a models and modeling perspective to design an instructional approach to develop students' abilities to describe and interpret rates of…

  13. Stable large-scale CO2 storage in defiance of an energy system based on renewable energy - Modelling the impact of varying CO2 injection rates on reservoir behavior

    NASA Astrophysics Data System (ADS)

    Bannach, Andreas; Hauer, Rene; Martin, Streibel; Stienstra, Gerard; Kühn, Michael

    2015-04-01

    The IPCC Report 2014 strengthens the need for CO2 storage as part of CCS or BECCS to reach ambitious climate goals despite growing future energy demand; the further expansion of renewable energy sources is a second major pillar. As is already the case in Germany today, the weather then becomes the controlling factor for electricity production by fossil-fuelled power plants, leading to significant fluctuations in CO2 emissions that would be traced in injection rates if the CO2 were captured and stored. To analyse the impact of such changing injection rates on a CO2 storage reservoir, two reservoir simulation models are applied: (a) a smaller reservoir model, proven by decades of gas storage operation, to investigate the dynamic effects in the early stage of storage filling (initial aquifer displacement); and (b) an anticline structure big enough to accommodate a total of ≥ 100 megatons of CO2, to investigate the dynamic effects over the entire operational lifetime of the storage, with particular consideration of very high filling levels (highest aquifer compression); for this case a reservoir model was generated. The defined yearly injection rate schedule is based on a study performed on behalf of IZ Klima (DNV GL, 2014). According to this study, exclusive consideration of a pool of coal-fired power plants causes the most intense dynamic changes in CO2 emissions and hence bounds the variations of a system that includes industry-driven CO2 production. Besides short-term changes (daily and weekly cycles), seasonal influences are also taken into account. Simulation runs cover a variation of injection points (well locations at the top vs. at the flank of the structure) and other largely unknown reservoir parameters such as aquifer size and aquifer mobility. Simulation of a 20-year storage operation is followed by a post-operational shut-in phase covering approximately 500 years to assess possible effects of changing injection rates on the long-term reservoir

  14. Molecule-based approach for computing chemical-reaction rates in upper atmosphere hypersonic flows.

    SciTech Connect

    Gallis, Michail A.; Bond, Ryan Bomar; Torczynski, John Robert

    2009-08-01

    This report summarizes the work completed during FY2009 for the LDRD project 09-1332 'Molecule-Based Approach for Computing Chemical-Reaction Rates in Upper-Atmosphere Hypersonic Flows'. The goal of this project was to apply a recently proposed approach for the Direct Simulation Monte Carlo (DSMC) method to calculate chemical-reaction rates for high-temperature atmospheric species. The new DSMC model reproduces measured equilibrium reaction rates without using any macroscopic reaction-rate information. Since it uses only molecular properties, the new model is inherently able to predict reaction rates for arbitrary nonequilibrium conditions. DSMC non-equilibrium reaction rates are compared to Park's phenomenological non-equilibrium reaction-rate model, the predominant model for hypersonic-flow-field calculations. For near-equilibrium conditions, Park's model is in good agreement with the DSMC-calculated reaction rates. For far-from-equilibrium conditions, corresponding to a typical shock layer, the difference between the two models can exceed 10 orders of magnitude. The DSMC predictions are also found to be in very good agreement with measured and calculated non-equilibrium reaction rates. Extensions of the model to reactions typically found in combustion flows and ionizing reactions are also found to be in very good agreement with available measurements, offering strong evidence that this is a viable and reliable technique to predict chemical reaction rates.

  15. Surface analyses and modelling of rate multiplicity and instabilities

    SciTech Connect

    Harold, M.P.; Conner, W.C.

    1990-11-01

    Catalytic partial and complete oxidations of chemical species are an important class of reactions in the production of many commercial chemicals and in the elimination of environmental pollutants. This research focuses on a sub-class of oxidation reactions in which CO is a key player -- be it a reactant, intermediate, or product -- in the catalytic sequence and chemistry. The first three years of our research have followed two parallel paths which have a common destination and which together provide a framework for the proposed new research. The first path has involved the development of a combined experimental/modelling and analysis methodology for constructing feasible mechanistic sequences and their corresponding kinetic models of catalytic reactions that exhibit multiple rate behavior. The rather well-studied Pt-catalyzed CO oxidation served as the test reaction. Our approach involves the measurement of basic kinetic features (apparent reaction orders, activation energy) and multiplicity features (location of ignition and extinction points) over a wide range of conditions (catalyst temperature, total pressure, feed composition), and a kinetic modelling part, in which potential reaction sequences are constructed and screened based on their ability to predict the experimentally observed kinetic and multiplicity features over a wide range of conditions. The second path has involved the development of an under-utilized (in catalysis) spectroscopic technique, Fourier transform infrared emission spectroscopy (FTIRES), to monitor the surface of a catalyst during reaction. Main accomplishments from both studies are summarized.

  16. On Estimation of GPS-based Indonesian Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Susilo, Susilo; Abidin, Hasanuddin Z.; Meilano, Irwan; Sapiie, Benyamin; Wijanarto, Antonius B.

    2016-04-01

    Using the GPS-derived rates at survey-mode (sGPS) stations and continuous GPS stations across the Indonesian region, covering the 22-year period from 1993 to 2014, linear deformation velocities with an accuracy of about 2 to 3 mm/year are derived. These velocities are corrected for the coseismic and postseismic deformation caused by significant earthquakes in that period. In this study, we use this GPS velocity field to construct a crustal strain rate map without yet including a physical model. An interpolation method was used to compute a continuous velocity model, and by differentiating this velocity model we derive the strain rate map of Indonesia. At present, our result gives only the magnitude of the strain rate. The Indonesian strain rate map is very important for studying the deformation characteristics of the region and for establishing a deformation (velocity) model to support the implementation of the Indonesian Geospatial Reference System 2013 (IGRS 2013). This is a new semi-dynamic geocentric datum of Indonesia, which uses the global ITRF2008 reference frame with a reference epoch of 1 January 2012. A deformation (velocity) model is required to transform coordinates from an observation epoch to or from this reference epoch.
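The strain-rate construction described in this abstract (interpolate a velocity field onto a grid, then differentiate it) can be sketched numerically. Everything below is an illustrative assumption: a synthetic linear velocity field on a regular grid, with `numpy.gradient` standing in for the paper's interpolation-plus-differentiation step.

```python
import numpy as np

# Synthetic east (vx) and north (vy) velocity components on a 1-km grid, in mm/yr.
x = np.linspace(0, 100, 101)              # km
y = np.linspace(0, 100, 101)              # km
X, Y = np.meshgrid(x, y, indexing="ij")
vx = 0.05 * X                             # simple linear (constant-gradient) field
vy = -0.02 * Y

# Velocity gradients: mm/yr per km, times 1e-6 to get a dimensionless rate (1/yr).
dvx_dx = np.gradient(vx, x, axis=0) * 1e-6
dvx_dy = np.gradient(vx, y, axis=1) * 1e-6
dvy_dx = np.gradient(vy, x, axis=0) * 1e-6
dvy_dy = np.gradient(vy, y, axis=1) * 1e-6

# Symmetric strain-rate tensor components and a scalar magnitude (as in the abstract,
# only the magnitude is reported).
exx, eyy = dvx_dx, dvy_dy
exy = 0.5 * (dvx_dy + dvy_dx)
magnitude = np.sqrt(exx**2 + eyy**2 + 2.0 * exy**2)
```

For this linear field the gradients are exact, so the magnitude is spatially uniform; real GPS-derived fields would of course vary.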

  17. The Random-Effect Generalized Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wu, Shiu-Lien

    2011-01-01

    Rating scale items have been widely used in educational and psychological tests. These items require people to make subjective judgments, and these subjective judgments usually involve randomness. To account for this randomness, Wang, Wilson, and Shih proposed the random-effect rating scale model in which the threshold parameters are treated as…

  18. STATISTICAL MODEL OF LABORATORY DEATH RATE MEASUREMENTS FOR AIRBORNE BACTERIA

    EPA Science Inventory

    From 270 published laboratory airborne death rate measurements, two regression models relating the death rate constant for 15 bacterial species to aerosol age in the dark, Gram reaction, temperature, and an evaporation factor which is a function of RH and temperature were obtaine...

  19. Rate dependent constitutive models for fiber reinforced polymer composites

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.

    1990-01-01

    A literature survey was conducted to assess the state-of-the-art in rate dependent constitutive models for continuous fiber reinforced polymer matrix composite (PMC) materials. Several recent models which include formulations for describing plasticity, viscoelasticity, viscoplasticity, and rate-dependent phenomenon such as creep and stress relaxation are outlined and compared. When appropriate, these comparisons include brief descriptions of the mathematical formulations, the test procedures required for generating material constants, and details of available data comparing test results to analytical predictions.

  20. Earthquake potential and magnitude limits inferred from a geodetic strain-rate model for southern Europe

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Bird, P.; Jackson, D. D.

    2016-04-01

    The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models that were based mainly on historical earthquakes and active fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, which indicate that the authors of SHARE were aware of the potentially higher seismic activities in the zones. For one group, the tectonic moment rate is lower than the seismic moment rates forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from tectonic moment rate, but lower than what the historical data show. For the other two

  1. The Wealth Distribution Model with the Kickback Rate

    NASA Astrophysics Data System (ADS)

    Zhang, Yujie; He, Mingfeng

    We define an asset exchange model by adding a kickback rate to each trade, and discuss the Gini index under different kickback rates. It is found that for every kickback rate the related Gini index tends to a steady value; thus a kickback-rate versus Gini-index curve may be obtained. Furthermore, the Gini index decreases as the kickback rate increases, so the fairness of the social wealth distribution improves. The Gini index reaches a minimum when the kickback rate is 0.58 and then increases, as further growth of the kickback rate degrades the fairness of the social wealth distribution. In all situations, however, the Gini index with a kickback rate is less than the one without, which means that the introduction of a kickback rate favours a fairer wealth distribution. We also define a moral index, similar to the Gini index, to weigh the differences in social moral level, and find that these differences increase with time in the model with a kickback rate.
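The abstract does not spell out the exchange rule, so the sketch below is a hypothetical stand-in: a pairwise exchange in which a fraction (`kickback`) of each transfer is returned to the losing agent, plus a standard Gini-index computation, to illustrate the kind of experiment described.

```python
import random

def gini(wealth):
    """Gini index of a list of non-negative wealths (0 = perfect equality)."""
    w = sorted(wealth)
    n = len(w)
    total = sum(w)
    cum = sum(i * wi for i, wi in enumerate(w, start=1))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

def simulate(n_agents=200, steps=50_000, kickback=0.3, seed=1):
    """Toy exchange: winner takes a fixed fraction of the poorer agent's
    wealth, and `kickback` of that transfer flows straight back."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        stake = 0.1 * min(wealth[i], wealth[j])
        winner, loser = (i, j) if rng.random() < 0.5 else (j, i)
        transfer = stake * (1.0 - kickback)   # net transfer after kickback
        wealth[winner] += transfer
        wealth[loser] -= transfer
    return wealth
```

With this toy rule, running `simulate` at a positive kickback rate yields a lower steady Gini index than at zero kickback, matching the qualitative finding of the paper.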

  2. First principles modeling of nonlinear incidence rates in seasonal epidemics.

    PubMed

    Ponciano, José M; Capistrán, Marcos A

    2011-02-01

    In this paper we used a general stochastic processes framework to derive from first principles the incidence rate function that characterizes epidemic models. We investigate a particular case, the Liu-Hethcote-van den Driessche's (LHD) incidence rate function, which results from modeling the number of successful transmission encounters as a pure birth process. This derivation also takes into account heterogeneity in the population with regard to the per individual transmission probability. We adjusted a deterministic SIRS model with both the classical and the LHD incidence rate functions to time series of the number of children infected with syncytial respiratory virus in Banjul, Gambia and Turku, Finland. We also adjusted a deterministic SEIR model with both incidence rate functions to the famous measles data sets from the UK cities of London and Birmingham. Two lines of evidence supported our conclusion that the model with the LHD incidence rate may very well be a better description of the seasonal epidemic processes studied here. First, our model was repeatedly selected as best according to two different information criteria and two different likelihood formulations. The second line of evidence is qualitative in nature: contrary to what the SIRS model with classical incidence rate predicts, the solution of the deterministic SIRS model with LHD incidence rate will reach either the disease free equilibrium or the endemic equilibrium depending on the initial conditions. These findings along with computer intensive simulations of the models' Poincaré map with environmental stochasticity contributed to attain a clear separation of the roles of the environmental forcing and the mechanics of the disease transmission in shaping seasonal epidemics dynamics. PMID:21379320
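As a hedged illustration of the model comparison described above, here is a minimal deterministic SIRS integrator with the classical mass-action incidence and a saturating nonlinear incidence of the Liu-Hethcote-van den Driessche type. The specific saturating form and all parameter values are assumptions for illustration, not the paper's fitted quantities.

```python
def sirs(incidence, beta=0.5, gamma=0.2, xi=0.05,
         S0=0.99, I0=0.01, dt=0.01, t_end=200.0):
    """Euler-integrate a normalized SIRS model with a pluggable incidence term."""
    S, I, R = S0, I0, 0.0
    t = 0.0
    while t < t_end:
        new_inf = incidence(beta, S, I)
        dS = -new_inf + xi * R          # loss of immunity returns R to S
        dI = new_inf - gamma * I
        dR = gamma * I - xi * R
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        t += dt
    return S, I, R

classical = lambda beta, S, I: beta * S * I                    # mass action
saturating = lambda beta, S, I: beta * S * I / (1.0 + 10.0 * I)  # LHD-like, assumed form
```

Swapping the incidence function is the only change needed to compare the two formulations, which mirrors how the authors fit both variants to the same time series.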

  3. On rate-state and Coulomb failure models

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time, and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occurs instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified

  4. Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2

    USGS Publications Warehouse

    Field, Edward H.; Weldon, Ray J., II; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.

    2008-01-01

    This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an un-segmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.

  5. Equivalence of interest rate models and lattice gases

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan

    2012-04-01

    We consider the class of short rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure r(t)=a(t)exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t1,t2)=-Cov[x(t1),x(t2)]. We consider in some detail the Black-Karasinski model with x(t) as an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x,y)=-α(e-γ|x-y|-e-γ(x+y)). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.
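The mapping stated in this abstract can be checked directly: for an Ornstein-Uhlenbeck process started at x(0) = 0, the covariance Cov[x(s), x(t)] = (σ²/2γ)(e^(-γ|t-s|) - e^(-γ(t+s))) reproduces the Kac-Helfand potential with α = σ²/2γ. A small sketch (σ, the OU volatility, is our added notation; γ and α are as in the abstract):

```python
import math

def ou_cov(s, t, gamma, sigma):
    """Covariance of an OU process dx = -gamma*x dt + sigma dW with x(0) = 0."""
    a = sigma**2 / (2.0 * gamma)
    return a * (math.exp(-gamma * abs(t - s)) - math.exp(-gamma * (t + s)))

def kac_helfand_potential(s, t, gamma, sigma):
    """Attractive two-body potential V(t1, t2) = -alpha*(e^(-g|t1-t2|) - e^(-g(t1+t2)))."""
    alpha = sigma**2 / (2.0 * gamma)
    return -alpha * (math.exp(-gamma * abs(t - s)) - math.exp(-gamma * (t + s)))
```

By construction `V(t1, t2) = -Cov[x(t1), x(t2)]`, which is exactly the lattice-gas equivalence the paper exploits.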

  6. Faculty Ratings as Part of a Competency-Based Evaluation Clinic Grading System.

    ERIC Educational Resources Information Center

    Chambers, David W.

    1999-01-01

    Describes a quarterly rating system developed to replace daily grading in a dental school with a competency-based educational model. Presents results from an early administration of the ratings. These results, for 126 students, show excellent face validity and rater consistency and satisfy the school's standard for grade defensibility. (SLD)

  7. Cosmogenic Ne-21 Production Rates in H-Chondrites Based on Cl-36 - Ar-36 Ages

    NASA Technical Reports Server (NTRS)

    Leya, I.; Graf, Th.; Nishiizumi, K.; Guenther, D.; Wieler, R.

    2000-01-01

    We measured Ne-21 production rates in 14 H-chondrites and found them in good agreement with model calculations. The production rates are based on Ne-21 concentrations measured on bulk samples or the non-magnetic fraction, and on Cl-36 - Ar-36 ages determined from the metal phase.

  8. A rumor spreading model with variable forgetting rate

    NASA Astrophysics Data System (ADS)

    Zhao, Laijun; Xie, Wanlin; Gao, H. Oliver; Qiu, Xiaoyan; Wang, Xiaoli; Zhang, Shuhai

    2013-12-01

    A rumor spreading model with the consideration of forgetting rate changing over time is examined in small-world networks. The mean-field equations are derived to describe the dynamics of rumor spreading in small-world networks. Further, numerical solutions are conducted on LiveJournal, an online social blogging platform, to better understand the performance of the model. Results show that the forgetting rate has a significant impact on the final size of rumor spreading: the larger the initial forgetting rate or the faster the forgetting speed, the smaller the final size of the rumor spreading. Numerical solutions also show that the final size of rumor spreading is much larger under a variable forgetting rate compared to that under a constant forgetting rate.
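The paper's mean-field equations for small-world networks are not reproduced in the abstract, so the sketch below is a simplified homogeneous-mixing stand-in: an ignorant/spreader/stifler model whose forgetting rate rises over time from `delta0` toward `delta_sat` at speed `lam` (this functional form and all parameter values are our assumptions). It reproduces the qualitative finding that a larger initial forgetting rate, or faster forgetting, yields a smaller final rumor size.

```python
import math

def rumor_final_size(spread=0.6, delta0=0.1, delta_sat=0.4, lam=0.5,
                     dt=0.01, t_end=300.0):
    """Euler-integrate ignorant (I), spreader (S), stifler (R) fractions
    with a time-varying forgetting rate delta(t)."""
    I, S, R = 0.99, 0.01, 0.0
    t = 0.0
    while t < t_end:
        delta = delta_sat - (delta_sat - delta0) * math.exp(-lam * t)
        dI = -spread * I * S
        dS = spread * I * S - delta * S      # spreaders forget and become stiflers
        dR = delta * S
        I, S, R = I + dt * dI, S + dt * dS, R + dt * dR
        t += dt
    return R   # final fraction that spread the rumor and then stifled
```

Raising `delta0` or `lam` shrinks the returned final size, in line with the abstract's conclusion.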

  9. Rates and potentials of soil organic carbon sequestration in agricultural lands in Japan: an assessment using a process-based model and spatially-explicit land-use change inventories

    NASA Astrophysics Data System (ADS)

    Yagasaki, Y.; Shirato, Y.

    2013-11-01

    to other land-use types by abandonment or urbanization, accompanied by substantial changes in the rate of organic carbon input to soils, could influence country-scale SCSC as much as or more than changes in the management of agricultural lands. A net-net accounting of SCSC showed the potential influence of variations in future climate on SCSC, highlighting the importance of applying a process-based model to estimate this quantity. A baseline-based accounting of SCSC, in contrast, was shown to be robust to variations in future climate and effective at factoring out the direct human-induced influence on SCSC. Validation of the system's ability to estimate SCSC in agricultural lands, by comparing simulation output with data from nationwide stationary monitoring conducted during 1979-1998, suggested that the system has an acceptable level of validity, though at the current stage only for a limited range of conditions. In addition to uncertainties in estimating the rate of organic carbon input to soils for different land-use types at large scale, the time course of SOC sequestration, assumptions about future land-use change patterns, and the feasibility of agricultural policy planning are important factors that need to be taken into account when estimating the potential of country-scale SCSC.

  10. Strain Rate Dependent Modeling of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Stouffer, Donald C.

    1999-01-01

    A research program is in progress to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. Strain rate dependent inelastic constitutive equations have been developed to model the polymer matrix, and have been incorporated into a micromechanics approach to analyze polymer matrix composites. The Hashin failure criterion has been implemented within the micromechanics results to predict ply failure strengths. The deformation model has been implemented within LS-DYNA, a commercially available transient dynamic finite element code. The deformation response and ply failure stresses for the representative polymer matrix composite AS4/PEEK have been predicted for a variety of fiber orientations and strain rates. The predicted results compare favorably to experimentally obtained values.

  11. Modeling the minimum creep rate of discontinuous lamellar- reinforced composites

    NASA Astrophysics Data System (ADS)

    Bartholomeusz, Michael F.; Wert, John A.

    1995-12-01

    An analytical model has been developed to predict the creep rate of discontinuous lamellar-reinforced composites in which both phases plastically deform. The model incorporates effects associated with lamellar orientation relative to the uniaxial stress axis. For modest to large differences between matrix and reinforcement creep rates, lamellar aspect ratio has a significant impact on composite creep rate. For a prescribed reinforcing phase volume fraction, microstructural inhomogeneity can have a pronounced effect on composite creep properties. In the case of uniaxially aligned rigid lamellar-reinforced composites, an inhomogeneous distribution of reinforcing lamellae in the microstructure substantially increases the composite creep rate. Model results demonstrate that there is no significant improvement in creep resistance for aligned fiber-reinforced composites compared to aligned lamellar-reinforced composites, unless the reinforcing phase is essentially nondeforming relative to the matrix phase.

  12. Spray Combustion Modeling with VOF and Finite-Rate Chemistry

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Liaw, Paul; Wang, Ten-See

    1996-01-01

    A spray atomization and combustion model is developed based on the volume-of-fluid (VOF) transport equation with a finite-rate chemistry model. The gas-liquid interface mass, momentum and energy conservation laws are modeled by continuum surface force mechanisms. A new solution method is developed such that the present VOF model can be applied over the full speed range of flows. The objectives of the present study are: (1) to develop and verify the fractional volume-of-fluid (VOF) cell partitioning approach within a predictor-corrector algorithm to deal with multiphase (gas-liquid) free surface flow problems; (2) to implement the developed unified algorithm in a general purpose computational fluid dynamics (CFD) code, Finite Difference Navier-Stokes (FDNS), with droplet dynamics and finite-rate chemistry models; and (3) to demonstrate the effectiveness of the present approach by simulating benchmark problems of jet breakup/spray atomization and combustion. Modeling multiphase fluid flows poses a significant challenge because a required boundary condition must be applied to a transient, irregular surface that is discontinuous, and the flow regimes considered can range from incompressible to high-speed compressible flows. The flow-process modeling is further complicated by surface tension, interfacial heat and mass transfer, spray formation and turbulence, and their interactions. The major contribution of the present method is to combine the novel features of the Volume of Fluid (VOF) method and the Eulerian/Lagrangian method into a unified algorithm for efficient, noniterative, time-accurate calculations of multiphase free surface flows valid at all speeds. The proposed method reformulates the VOF equation to strongly couple the two distinct phases (liquid and gas), and tracks droplets in a Lagrangian frame when the spray model is required, using a unified predictor-corrector technique to account for the non-linear linkages through the convective contributions of VOF.
The discontinuities within the

  13. An Effective Collision Rate Model for Atomtronic Devices

    NASA Astrophysics Data System (ADS)

    Straatsma, Cameron J. E.; Chow, Weng W.; Anderson, Dana Z.

    2015-05-01

    We demonstrate application of a model, previously developed for the detailed study of quantum electronic systems, to atomtronic devices utilizing finite temperature Bose-condensed gases. The numerical approach is based on the relaxation rate approximation where collisions effectively drive the system towards a dynamical (non-thermal) equilibrium distribution. This approach allows parametric studies involving time scales that cover both the rapid population dynamics relevant to non-equilibrium state evolution, as well as the much longer time durations typical of steady-state device operation. The model is demonstrated by studying the evolution of a Bose-condensed gas in the presence of atom injection and extraction in a double-well potential. In this configuration phase-locking between condensates in each well of the potential is readily observed, and its influence on the evolution of the system is studied. This work is supported by the National Science Foundation, the Air Force Office of Scientific Research, and the Sandia LDRD program.

  14. A frictional population model of seismicity rate change

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.; Cocco, M.; Belardinelli, M.E.

    2005-01-01

    We study models of seismicity rate changes caused by the application of a static stress perturbation to a population of faults, and discuss our results with respect to the model proposed by Dieterich (1994). These models assume a distribution of nucleation sites (e.g., faults) obeying rate-state frictional relations that fail at a constant rate under tectonic loading alone, and predict that a positive static stress step at time t0 will cause an immediately increased seismicity rate that decays according to Omori's law. We show one way in which the Dieterich model may be constructed from simple general ideas, illustrated using numerically computed synthetic seismicity and a mathematical formulation. We show that the seismicity rate changes predicted by these models (1) depend on the particular relationship between the clock-advanced failure and fault maturity, (2) are largest for the faults closest to failure at t0, (3) depend strongly on which state evolution law the faults obey, and (4) are insensitive to some types of population heterogeneity. We also find that if individual faults fail repeatedly and populations are finite, at timescales much longer than typical aftershock durations quiescence follows a seismicity rate increase regardless of the specific frictional relations. For the examined models the quiescence duration is comparable to the ratio of stress change to stressing rate, Δτ/τ̇, which occurs after a time comparable to the average recurrence interval of the individual faults in the population and repeats in the absence of any new load perturbations; this simple model may partly explain observations of repeated clustering of earthquakes. Copyright 2005 by the American Geophysical Union.
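The Dieterich (1994) model discussed here has a standard closed form for the seismicity rate R(t) after a positive static stress step Δτ on top of a background stressing rate τ̇: the rate jumps by a factor exp(Δτ/aσ) and relaxes back to the background rate r0 over the aftershock timescale t_a = aσ/τ̇. A minimal sketch:

```python
import math

def dieterich_rate(t, dtau, a_sigma, tau_dot, r0=1.0):
    """Seismicity rate R(t) after a stress step dtau (Dieterich, 1994).

    a_sigma : product of rate-state parameter a and normal stress sigma
    tau_dot : background (tectonic) stressing rate
    r0      : background seismicity rate
    """
    t_a = a_sigma / tau_dot   # aftershock relaxation timescale
    return r0 / (1.0 + (math.exp(-dtau / a_sigma) - 1.0) * math.exp(-t / t_a))
```

The immediate post-step jump is finite, and the decay over t_a follows an Omori-like form, which is the behaviour the abstract contrasts with the instantaneous, zero-duration response of a pure Coulomb population.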

  15. SENSITIVITY ANALYSIS OF A TPB DEGRADATION RATE MODEL

    SciTech Connect

    Crawford, C; Tommy Edwards, T; Bill Wilmarth, B

    2006-08-01

    A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for this model are listed as the headings in the table below. A sensitivity study of the model's predictions over intervals of values for these influential factors was conducted; the intervals bound the levels of the factors expected during Tank 50 aggregations. The results of the sensitivity analysis were used to identify the factor settings that yield the largest predicted TPB degradation rate. These settings are therefore considered the "worst-case" scenario for the TPB degradation rate during Tank 50 aggregation, and as such they define the test conditions that should be studied in a waste qualification program whose dual purpose would be to investigate the introduction of Tank 48 material for aggregation in Tank 50 and to bound TPB degradation rates for such aggregations.

  16. Modeling and predicting drug resistance rate and strength.

    PubMed

    Fullybright, R; Dwivedi, A; Mallawaarachchi, I; Sinsin, B

    2016-08-01

    Drug resistance has been worsening in human infectious diseases medicine over the past several decades. Our ability to successfully control resistance depends to a large extent on our understanding of the features characterizing the process. Part of that understanding includes the rate at which new resistance has been emerging in pathogens. Along that line, resistance data covering 90 infectious diseases, 118 pathogens, and 337 molecules, from 1921 through 2007, are modeled using various statistical tools to generate regression models for the rate of new resistance emergence and for cumulative resistance build-up in pathogens. Thereafter, the strength of the association between the number of molecules put on the market and the number of resulting cases of resistance is statistically tested. Predictive models are presented for the rate at which new resistance has been emerging in infectious diseases medicine, along with predictive models for the rate of cumulative resistance build-up in the aggregate of 118 pathogens as well as in ten individual pathogens. The models are expressed as a function of time and/or as a function of the number of molecules put on the market by the pharmaceutical industry. It is found that molecules significantly induce resistance in pathogens and that new or cumulative drug resistance across infectious diseases medicine has been arising at exponential rates. PMID:27209288
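The abstract's claim that resistance has been "arising at exponential rates" suggests the standard log-linear regression approach; the sketch below fits counts ~ exp(a + b·t) by ordinary least squares on the logged counts. The data are synthetic (an exactly exponential series doubling every ten years), not the paper's resistance records.

```python
import math

def fit_exponential(years, counts):
    """Fit counts ~ exp(a + b*year) by least squares on log(counts)."""
    n = len(years)
    ys = [math.log(c) for c in counts]
    mx = sum(years) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(years, ys))
         / sum((x - mx) ** 2 for x in years))
    a = my - b * mx
    return a, b

# Synthetic series doubling every ~10 years
years = list(range(0, 50, 5))
counts = [2 ** (y / 10.0) for y in years]
a, b = fit_exponential(years, counts)    # b should recover ln(2)/10
```

On real, noisy resistance counts the same fit would of course return b with uncertainty; here the data are exactly exponential, so the growth rate is recovered exactly.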

  17. Converting copepod vital rates into units appropriate for biogeochemical models

    NASA Astrophysics Data System (ADS)

    Frangoulis, C.; Carlotti, F.; Eisenhauer, L.; Zervoudaki, S.

    2010-01-01

    The conversion of units is one of the difficulties of model parameterisation. Conversion errors may result not only from incorrect choices of conversion factors, but also from incorrect choices of the value itself. In biogeochemical models, mesozooplankton is the highest trophic level of the food web, and it is very often reduced to a single variable, generally taken to represent the copepod community, the dominant taxon in mesozooplankton. While this simplifies the information needed for the stock, correctly parameterising the processes related to the copepod community is already a tricky task, owing to the wide range of copepod species, sizes, stages, and behaviours. The goal of this paper is to improve communication between experimentalists and modellers by giving guidance for converting copepod vital rates from experimental units to biogeochemical model units. This includes the choice of values, conversion factors, terminology distinctions, and the scale transfer. To begin, we briefly address the common problem of converting a rate per individual to a rate per mass. Then we focus on unit conversion problems for each specific rate and give recommendations. Finally, we discuss the problem of scale transfer between the level of organisation at which a rate value is measured, with its characteristic time and space scales, and the level at which the corresponding process is represented in the model, with its different characteristic time and space scales.
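The per-individual to per-mass conversion the abstract opens with is a one-line operation once the rate and the body mass are expressed in the same currency (here carbon); the units and example values below are illustrative assumptions.

```python
def per_individual_to_specific(rate_per_ind_ugC_day, body_mass_ugC):
    """Convert a rate in ug C ind^-1 d^-1 to a mass-specific rate in d^-1."""
    return rate_per_ind_ugC_day / body_mass_ugC

# e.g. a copepod ingesting 2 ug C per day at 10 ug C body mass
# ingests 20% of its body carbon per day:
specific = per_individual_to_specific(2.0, 10.0)
```

The pitfall the paper warns about is choosing a body mass in mismatched units (dry weight vs. carbon, individual vs. stage-averaged), which silently changes the resulting specific rate.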

  18. Evaluation and refinement of leak-rate estimation models

    SciTech Connect

    Paul, D.D.; Ahmad, J.; Scott, P.M.; Flanigan, L.F.; Wilkowski, G.M.

    1991-04-01

    Leak-rate estimation models are important elements in developing a leak-before-break methodology in piping integrity and safety analyses. Existing thermal-hydraulic and crack-opening-area models used in current leak-rate estimations have been incorporated into a single computer code for leak-rate estimation. The code is called SQUIRT, which stands for Seepage Quantification of Upsets In Reactor Tubes. The SQUIRT program has been validated by comparing its thermal-hydraulic predictions with the limited experimental data that have been published on two-phase flow through slits and cracks, and by comparing its crack-opening-area predictions with data from the Degraded Piping Program. In addition, leak-rate experiments were conducted to obtain validation data for a circumferential fatigue crack in a carbon steel pipe girth weld. 56 refs., 30 figs., 4 tabs.

  19. Increasing Response Rates to Web-Based Surveys

    ERIC Educational Resources Information Center

    Monroe, Martha C.; Adams, Damian C.

    2012-01-01

    We review a popular method for collecting data--Web-based surveys. Although Web surveys are popular, one major concern is their typically low response rates. Using the Dillman et al. (2009) approach, we designed, pre-tested, and implemented a survey on climate change with Extension professionals in the Southeast. The Dillman approach worked well,…

  20. 9 CFR 391.2 - Base time rate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Base time rate. 391.2 Section 391.2 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE FOOD SAFETY AND INSPECTION SERVICE ADMINISTRATIVE PROVISIONS FEES AND CHARGES FOR INSPECTION SERVICES AND...

  1. 9 CFR 391.2 - Base time rate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Base time rate. 391.2 Section 391.2 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE FOOD SAFETY AND INSPECTION SERVICE ADMINISTRATIVE PROVISIONS FEES AND CHARGES FOR INSPECTION SERVICES AND...

  2. 75 FR 20796 - Market-Based Rate Affiliate Restrictions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-21

    ...The Federal Energy Regulatory Commission (Commission) proposes to amend its regulations to revise its regulations governing market- based rates for public utilities pursuant to section 205 of the Federal Power Act (FPA). The Commission proposes to clarify that employees that determine the timing of scheduled outages, or that engage in economic dispatch, fuel procurement, or resource planning......

  3. 76 FR 4569 - Market-Based Rate Affiliate Restrictions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-26

    ...The Federal Energy Regulatory Commission (Commission) withdraws a notice of proposed rulemaking, which proposed to amend its regulations governing market-based rates for public utilities pursuant to section 205 of the Federal Power Act (FPA) to include in the regulatory text the clarification that employees that determine the timing of scheduled outages or that engage in economic dispatch,......

  4. Removal rate model for magnetorheological finishing of glass.

    PubMed

    Degroote, Jessica E; Marino, Anne E; Wilson, John P; Bishop, Amy L; Lambropoulos, John C; Jacobs, Stephen D

    2007-11-10

    Magnetorheological finishing (MRF) is a deterministic subaperture polishing process. The process uses a magnetorheological (MR) fluid that consists of micrometer-sized, spherical, magnetic carbonyl iron (CI) particles, nonmagnetic polishing abrasives, water, and stabilizers. Material removal occurs when the CI and nonmagnetic polishing abrasives shear material off the surface being polished. We introduce a new MRF material removal rate model for glass. This model contains terms for the near surface mechanical properties of glass, drag force, polishing abrasive size and concentration, chemical durability of the glass, MR fluid pH, and the glass composition. We introduce quantitative chemical predictors for the first time, to the best of our knowledge, into an MRF removal rate model. We validate individual terms in our model separately and then combine all of the terms to show the whole MRF material removal model compared with experimental data. All of our experimental data were obtained using nanodiamond MR fluids and a set of six optical glasses. PMID:17994145

  5. Removal Rate Model for Magnetorheological Finishing of Glass

    SciTech Connect

    DeGroote, J.E.; Marino, A.E.; WIlson, J.P.; Bishop, A.L.; Lambropoulos, J.C.; Jacobs, S.D.

    2007-11-14

    Magnetorheological finishing (MRF) is a deterministic subaperture polishing process. The process uses a magnetorheological (MR) fluid that consists of micrometer-sized, spherical, magnetic carbonyl iron (CI) particles, nonmagnetic polishing abrasives, water, and stabilizers. Material removal occurs when the CI and nonmagnetic polishing abrasives shear material off the surface being polished. We introduce a new MRF material removal rate model for glass. This model contains terms for the near surface mechanical properties of glass, drag force, polishing abrasive size and concentration, chemical durability of the glass, MR fluid pH, and the glass composition. We introduce quantitative chemical predictors for the first time, to the best of our knowledge, into an MRF removal rate model. We validate individual terms in our model separately and then combine all of the terms to show the whole MRF material removal model compared with experimental data. All of our experimental data were obtained using nanodiamond MR fluids and a set of six optical glasses.

  6. A model for reaction rates in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Chinitz, W.; Evans, J. S.

    1984-01-01

    A model is presented that accounts for the effects of turbulent temperature and species-concentration fluctuations on chemical reaction rates in computer analyses of turbulent reacting flows. The model results in two parameters which multiply the terms in the reaction-rate equations. Graphs of these two parameters are presented as functions of the mean values and the intensity of the turbulent fluctuations of temperature and species concentrations. These graphs will facilitate incorporation of the model into existing computer programs that describe turbulent reacting flows. When the model was used in a two-dimensional parabolic-flow computer code to predict the behavior of an experimental supersonic hydrogen jet burning in air, some improvement in agreement with the experimental data was obtained in the far field in the region near the jet centerline. Recommendations are included for further improvement of the model and for additional comparisons with experimental data.

  7. Modeling the intracellular pathogen-immune interaction with cure rate

    NASA Astrophysics Data System (ADS)

    Dubey, Balram; Dubey, Preeti; Dubey, Uma S.

    2016-09-01

    Many common and emergent infectious diseases like influenza, SARS, hepatitis and Ebola are caused by viral pathogens. These infections can be controlled or prevented by understanding the dynamics of the pathogen-immune interaction in vivo. In this paper, the interaction of pathogens with uninfected and infected cells, in the presence or absence of immune response, is considered in four different cases. The first model considers a saturated nonlinear infection rate and a linear cure rate, without absorption of pathogens into uninfected cells and without immune response. The next model adds the effect of absorption of pathogens into uninfected cells, while all other terms remain as in the first case. The third model incorporates innate, humoral and cytotoxic T lymphocyte (CTL) mediated immune responses with a cure rate and without absorption of pathogens into uninfected cells. The last model extends the third by including the absorption of pathogens into uninfected cells. Positivity and boundedness of solutions are established to ensure the well-posedness of the problem. All four models have two equilibria: a pathogen-free equilibrium and a pathogen-present equilibrium. In each case, the stability of each equilibrium point is investigated. The pathogen-free equilibrium is globally asymptotically stable when the basic reproduction number is less than or equal to unity, implying that control or prevention of infection is independent of the initial concentrations of uninfected cells, infected cells, pathogens and immune responses in the body. The proposed models show that the introduction of immune response and cure rate strongly affects the stability behaviour of the system. Further, the basic reproduction number is found to be smallest for the fourth model vis-à-vis the other models. The analytical findings of each model have been exemplified by

  8. Constraining kinetic rates of mineral reactions using reactive transport models

    NASA Astrophysics Data System (ADS)

    Bolton, E. W.; Wang, Z.; Ague, J.; Bercovici, D.; Cai, Z.; Karato, S.; Oristaglio, M. L.; Qiu, L.

    2012-12-01

    We use a reactive transport model to better understand the results of experiments designed to obtain kinetic rates of mineral reactions in closed systems. Closed-system experiments pose special challenges in that secondary minerals may form that modify the evolution of the fluid composition and may grow on the dissolving minerals, thus armoring the surface. Even so, such closed-system experiments provide critical data on which minerals would actually form in field applications and on how strongly coupled dissolution and precipitation reactions are linked. Comparison with experimental observations tests the reactive transport model, and the experimental observations are in turn better understood in light of the modeling. We apply a 0D end member of the model to understand the dissolution of single crystals of forsterite in a variety of settings (low pH, high pH, or NaHCO3 initial fluids, at 100 C and 1 bar, or 200 C and 150 bar). Depending on the initial conditions, we observe the precipitation of talc, brucite, amorphous silica, chrysotile, or magnesite, in various combinations. We compare simulation results to fluid compositions and to the secondary minerals sampled experimentally at various times. Insight from the simulations helped create an inverse model to extract the rates of forsterite dissolution, and a simple forward model useful for exploring the influence of system size, secondary mineral surface areas, etc. Our reactive transport model allows secondary minerals to armor the forsterite surface, which can strongly decrease the dissolution rate as the system evolves. Tuning the model with experimentally derived rates, and ensuring that the relevant processes are included so that experimental observations are reproduced, is necessary before upscaling to heterogeneous field conditions. The reactive transport model will be used for field-scale sequestration simulations and coupled with a geomechanical model that includes the influence of deformation.

  9. A geodetic plate motion and Global Strain Rate Model

    NASA Astrophysics Data System (ADS)

    Kreemer, Corné; Blewitt, Geoffrey; Klein, Elliot C.

    2014-10-01

    We present a new global model of plate motions and strain rates in plate boundary zones constrained by horizontal geodetic velocities. This Global Strain Rate Model (GSRM v.2.1) is a vast improvement over its predecessor, both in the amount of input data and in spatial resolution, which is higher by a factor of ~2.5 in areas with dense data coverage. We determined 6739 velocities from time series of (mostly) continuous GPS measurements, by far the largest global velocity solution to date. We transformed 15,772 velocities from 233 (mostly) published studies onto our core solution to obtain 22,511 velocities in the same reference frame. Care is taken not to use velocities from stations (or time periods) affected by transient phenomena, so that this data set consists of velocities best representing the interseismic plate velocity. About 14% of the Earth's surface is allowed to deform in 145,086 deforming grid cells (0.25° longitude by 0.2° latitude in dimension). The remainder of the Earth's surface is modeled as rigid spherical caps representing 50 tectonic plates. For 36 plates we present new GPS-derived angular velocities. For all the plates that can be compared with the most recent geologic plate motion model, we find that the difference in angular velocity is significant. The rigid-body rotations are used as boundary conditions in the strain rate calculations. The strain rate field is modeled using the Haines and Holt method, which uses splines to obtain a self-consistent interpolated velocity gradient tensor field, from which strain rates, vorticity rates, and expected velocities are derived. We also present expected faulting orientations in areas with significant vorticity, and update the no-net-rotation reference frame associated with our global velocity gradient field. Finally, we present a global map of recurrence times for Mw = 7.5 characteristic earthquakes.
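
The decomposition behind the strain rate and vorticity rate fields mentioned above can be illustrated with a toy 2-D velocity gradient tensor: the strain rate tensor is its symmetric part and the vorticity rate its antisymmetric part. The numerical values are invented for illustration.

```python
# Toy 2-D velocity gradient tensor L[i][j] = d(v_i)/d(x_j);
# values and units (1/yr) are made up for illustration only.
L = [[1e-8, 4e-8],
     [2e-8, -1e-8]]

# Symmetric part: strain rate tensor
strain = [[0.5 * (L[i][j] + L[j][i]) for j in range(2)] for i in range(2)]

# Antisymmetric part: vorticity (rotation) rate tensor
vorticity = [[0.5 * (L[i][j] - L[j][i]) for j in range(2)] for i in range(2)]

print(strain[0][1] == strain[1][0])        # True (symmetric)
print(vorticity[0][1] == -vorticity[1][0]) # True (antisymmetric)
```

In the model itself the interpolated velocity gradient field plays the role of `L`, evaluated over the deforming grid cells.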

  10. The Technical Adequacy of Curriculum-Based and Rating-Based Measures of Written Expression for Elementary School Students

    ERIC Educational Resources Information Center

    Gansle, Kristin A.; VanDerHeyden, Amanda M.; Noell, George H.; Resetar, Jennifer L.; Williams, Kashunda L.

    2006-01-01

    Five hundred thirty-eight elementary school students participated in a study designed to examine the technical characteristics of curriculum-based measures (CBMs) for the assessment of writing. In addition, the study investigated rating-based measures of writing using the Six Trait model, an assessment instrument and writing program in use in many…

  11. ESTIMATION OF THE RATE OF VOC EMISSIONS FROM SOLVENT-BASED INDOOR COATING MATERIALS BASED ON PRODUCT FORMULATION

    EPA Science Inventory

    Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...

  12. Exploring Latent Class Based on Growth Rates in Number Sense Ability

    ERIC Educational Resources Information Center

    Kim, Dongil; Shin, Jaehyun; Lee, Kijyung

    2013-01-01

    The purpose of this study was to explore latent class based on growth rates in number sense ability by using latent growth class modeling (LGCM). LGCM is one of the noteworthy methods for identifying growth patterns of the progress monitoring within the response to intervention framework in that it enables us to analyze latent sub-groups based not…

  13. A comparison of analytic models for estimating dose equivalent rates in shielding with beam spill measurements

    SciTech Connect

    Frankle, S.C.; Fitzgerald, D.H.; Hutson, R.L.; Macek, R.J.; Wilkinson, C.A.

    1992-12-31

    A comparison of 800-MeV proton beam spill measurements at the Los Alamos Meson Physics Facility (LAMPF) with analytical model calculations of neutron dose equivalent rates (DER) shows agreement within factors of 2-3 for simple shielding geometries. The DER estimates were based on a modified Moyer model for transverse angles and a Monte Carlo based forward-angle model described in the preceding paper.

  14. Finite driving rate and anisotropy effects in landslide modeling

    SciTech Connect

    Piegari, E.; Cataudella, V.; Di Maio, R.; Milano, L.; Nicodemi, M.

    2006-02-15

    In order to characterize landslide frequency-size distributions and identify hazard scenarios and their possible precursors, we investigate a cellular automaton in which the effects of a finite driving rate and of anisotropy are taken into account. The model is able to reproduce observed features of landslide events, such as the power-law distributions reported experimentally. We analyze the key role of the driving rate and show that, as it is increased, a crossover from power-law to non-power-law behavior occurs. Finally, a systematic investigation of the model under varying anisotropy factors is performed, and the full diagram of its dynamical behaviors is presented.

  15. Modeling heart rate variability including the effect of sleep stages

    NASA Astrophysics Data System (ADS)

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling the human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that—in comparison with real data—the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow to model heart rate variability in sleep disorders. This possibility is briefly discussed.

  16. Modeling heart rate variability including the effect of sleep stages.

    PubMed

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling the human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that-in comparison with real data-the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow to model heart rate variability in sleep disorders. This possibility is briefly discussed. PMID:26931582

  17. Micromechanics-Based Permeability Evolution in Brittle Materials at High Strain Rates

    NASA Astrophysics Data System (ADS)

    Perol, Thibaut; Bhat, Harsha S.

    2016-08-01

    We develop a micromechanics-based permeability evolution model for brittle materials at high strain rates (≥ 100 s^{-1}). Extending the mechanical constitutive description of brittle solids, whose constitutive response is governed by micro-cracks, to undrained deformation, we relate the damage-induced strains to micro-crack aperture. We then use an existing permeability model to evaluate the permeability evolution. This model predicts both the percolative and connected regimes of permeability evolution of Westerly Granite during triaxial loading at high strain rate. It can also simulate the pore pressure history during earthquake coseismic dynamic ruptures under undrained conditions.

  18. A constitutive model for ballistic gelatin at surgical strain rates.

    PubMed

    Ravikumar, Nishant; Noble, Christopher; Cramphorn, Edward; Taylor, Zeike A

    2015-07-01

    This paper describes a constitutive model for ballistic gelatin at the low strain rates experienced, for example, by soft tissues during surgery. While this material is most commonly associated with high speed projectile penetration and impact investigations, it has also been used extensively as a soft tissue simulant in validation studies for surgical technologies (e.g. surgical simulation and guidance systems), for which loading speeds and the corresponding mechanical response of the material are quite different. We conducted mechanical compression experiments on gelatin specimens at strain rates spanning two orders of magnitude (~0.001-0.1 s^-1) and observed a nonlinear load-displacement history and strong strain rate-dependence. A compact and efficient visco-hyperelastic constitutive model was then formulated and found to fit the experimental data well. An Ogden type strain energy density function was employed for the elastic component. A single Prony exponential term was found to be adequate to capture the observed rate-dependence of the response over multiple strain rates. The model lends itself to immediate use within many commercial finite element packages. PMID:25863009
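
The two ingredients named in the abstract, a one-term Ogden strain energy and a single Prony exponential, have standard forms that can be sketched as follows. The parameter values below are placeholders, not the fitted gelatin constants from the study.

```python
import math

def ogden_uniaxial_stress(stretch, mu=5.0e3, alpha=12.0):
    # One-term Ogden, incompressible uniaxial tension:
    # sigma = mu * (lam**alpha - lam**(-alpha/2)).
    # mu (Pa) and alpha are placeholder values, not fitted constants.
    return mu * (stretch ** alpha - stretch ** (-alpha / 2.0))

def prony_relaxation(t, g_inf=0.6, g1=0.4, tau=10.0):
    # Single-term Prony series, normalized relaxation function:
    # g(t) = g_inf + g1 * exp(-t / tau), with g_inf + g1 = 1.
    return g_inf + g1 * math.exp(-t / tau)

print(prony_relaxation(0.0))        # 1.0 (unrelaxed)
print(ogden_uniaxial_stress(1.0))   # 0.0 (stress-free at unit stretch)
```

A visco-hyperelastic model of this type combines the two, scaling the elastic stress history by the relaxation function via a convolution integral.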

  19. Evaluating the Controls on Magma Ascent Rates Through Numerical Modelling

    NASA Astrophysics Data System (ADS)

    Thomas, M. E.; Neuberg, J. W.

    2015-12-01

    The estimation of the magma ascent rate is a key factor in predicting styles of volcanic activity, and relies on understanding how strongly the ascent rate is controlled by different magmatic parameters. The ability to link potential changes in such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters, such as the magmatic water content, temperature or bulk magma composition, on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. The results indicate that potential changes to conduit geometry and excess pressure in the magma chamber are amongst the dominant variables that affect ascent rate, but the single most important parameter is the volatile content (assumed in this case to be water only). Varying this parameter across the range of reported values changes the calculated ascent velocities by up to 800%, triggering fluctuations in ascent rates that span the potential threshold between effusive and explosive eruptions.

  20. High rate constitutive modeling of aluminium alloy tube

    NASA Astrophysics Data System (ADS)

    Salisbury, C. P.; Worswick, M. J.; Mayer, R.

    2006-08-01

    As the need for fuel efficient automobiles increases, car designers are investigating light-weight materials for automotive bodies that will reduce the overall automobile weight. Aluminium alloy tube is a desirable material to use in automotive bodies due to its light weight. However, aluminium suffers from lower formability than steel, and its energy absorption ability in a crash event after a forming operation is largely unknown. As part of a larger study on the relationship between crashworthiness and forming processes, constitutive models for 3 mm AA5754 aluminium tube were developed. A nominal strain rate of 100/s is often used to characterize overall automobile crash events, whereas strain rates on the order of 1000/s can occur locally. Therefore, tests were performed at quasi-static rates using an Instron test fixture and at strain rates of 500/s to 1500/s using a tensile split Hopkinson bar. High rate testing was then conducted at rates of 500/s, 1000/s and 1500/s at 21 °C, 150 °C and 300 °C. The generated data were then used to determine the constitutive parameters for the Johnson-Cook and Zerilli-Armstrong material models.
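
The Johnson-Cook model named above has the standard multiplicative form sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m). A minimal sketch of that form follows; the parameter values are placeholders, not the fitted AA5754 constants from the study.

```python
import math

def johnson_cook(eps, rate, T, A=100.0, B=200.0, n=0.3, C=0.01,
                 rate0=1.0, T_room=294.0, T_melt=880.0, m=1.0):
    """Johnson-Cook flow stress (MPa) at plastic strain eps,
    strain rate (1/s) and temperature T (K). Parameter values
    here are placeholders for illustration only."""
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    strain_term = A + B * eps ** n              # strain hardening
    rate_term = 1.0 + C * math.log(rate / rate0)  # strain-rate hardening
    temp_term = 1.0 - T_star ** m               # thermal softening
    return strain_term * rate_term * temp_term

# Flow stress rises with strain rate and falls with temperature:
s_quasi = johnson_cook(0.1, 0.001, 294.0)   # quasi-static, room temp
s_high = johnson_cook(0.1, 1000.0, 294.0)   # split-Hopkinson-bar regime
print(s_high > s_quasi)  # True
```

Fitting A, B, n, C and m to the quasi-static, high-rate and elevated-temperature test data is what turns this form into a usable constitutive model.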

  1. A microphysical model explains rate-and-state friction

    NASA Astrophysics Data System (ADS)

    Chen, Jianye; Spiers, Christopher J.

    2015-04-01

    The rate-and-state friction (RSF) laws were originally developed as a phenomenological description of the frictional behavior observed in lab experiments. In previous studies, the empirical RSF laws have been extensively and quite successfully applied to fault mechanics. However, these laws cannot readily be understood in terms of the underlying physics. There are several critical discrepancies between seismological constraints on RSF behavior associated with earthquakes and lab-derived RSF parameters, in particular regarding the static stress drop and the characteristic slip distance associated with seismic events. Moreover, lab friction studies can address only limited fault topographies, displacements, experimental durations and P-T conditions, which means that scale issues, and especially processes like dilatation and fluid-rock interaction, cannot be fully taken into account. Without a physical basis accounting for such effects, extrapolation of lab-derived RSF data to nature involves significant, often unknown uncertainties. In order to apply experimental results more reliably to natural fault zones, and notably to extrapolate lab data beyond laboratory pressure, temperature and velocity conditions, an understanding of the microphysical mechanisms governing fault frictional behavior is required. Here, following some pioneering efforts (e.g. Niemeijer and Spiers, 2007; Den Hartog and Spiers, 2014), a mechanism-based microphysical model is developed for describing the frictional behavior of carbonate fault gouge, assuming that the frictional behavior seen in lab experiments is controlled by competing processes of intergranular slip versus contact creep by pressure solution. The model basically consists of two governing equations derived from energy/entropy balance considerations and the kinematic relations that apply to a granular fault gouge undergoing shear and dilation/compaction; these can be written as τ̇/K = V_imp − …

  2. Error Growth Rate in the MM5 Model

    NASA Astrophysics Data System (ADS)

    Ivanov, S.; Palamarchuk, J.

    2006-12-01

    The goal of this work is to estimate model error growth rates in simulations of the atmospheric circulation by the MM5 model, all the way from the short range to the medium range and beyond. The major topics addressed are: (i) searching for the optimal set of parameterization schemes; (ii) evaluating the spatial structure and scales of the model error for various atmospheric fields; (iii) determining the geographical regions where model errors are largest; (iv) defining the particular atmospheric patterns contributing to fast and significant model error growth. Results are presented for geopotential, temperature, relative humidity and horizontal wind component fields on standard surfaces over the Atlantic-European region during winter 2002. Various combinations of parameterization schemes for cumulus, PBL, moisture and radiation are used to identify which provides the smallest difference between the model state and the analysis. The model fields are compared against the ERA-40 reanalysis of the ECMWF. Results show that the rate at which the model error grows, as well as its magnitude, varies depending on the forecast range, atmospheric variable and level. The typical spatial scale and structure of the model error also depend on the particular atmospheric variable. The distribution of the model error over the domain can be separated into two parts: steady and transient. The steady part is associated with a few high mountain regions, including Greenland, where the model error is larger. The transient part mainly moves along with areas of high gradients in the atmospheric flow. Acknowledgement: This study has been supported by NATO Science for Peace grant #981044. The MM5 modelling system used in this study has been provided by UCAR. ERA-40 re-analysis data have been obtained from the ECMWF data server.
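
The error-growth idea above reduces to computing a forecast-minus-analysis error norm at increasing lead times and tracking how it grows. A toy sketch with invented numbers (the real comparison uses gridded MM5 fields against ERA-40):

```python
import math

def rmse(forecast, analysis):
    """Root-mean-square error between a forecast field and the
    verifying analysis (both flattened to 1-D lists here)."""
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, analysis))
                     / len(forecast))

# Invented 4-point "fields": the analysis and forecasts at two lead times.
analysis = [0.0, 1.0, 2.0, 3.0]
forecasts = {
    6:  [0.1, 1.1, 2.1, 3.1],   # 6 h lead: small error everywhere
    12: [0.2, 1.2, 2.2, 3.2],   # 12 h lead: error has grown
}

errors = {lead: rmse(f, analysis) for lead, f in forecasts.items()}

# Error growth factor between consecutive lead times
print(round(errors[12] / errors[6], 2))  # 2.0
```

Repeating this per variable, level and region gives the error-growth curves the study analyzes.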

  3. A global reference for caesarean section rates (C-Model): a multicountry cross-sectional study

    PubMed Central

    Souza, JP; Betran, AP; Dumont, A; de Mucio, B; Gibbs Pickens, CM; Deneux-Tharaux, C; Ortiz-Panozo, E; Sullivan, E; Ota, E; Togoobaatar, G; Carroli, G; Knight, H; Zhang, J; Cecatti, JG; Vogel, JP; Jayaratne, K; Leal, MC; Gissler, M; Morisaki, N; Lack, N; Oladapo, OT; Tunçalp, Ö; Lumbiganon, P; Mori, R; Quintana, S; Costa Passos, AD; Marcolin, AC; Zongo, A; Blondel, B; Hernández, B; Hogue, CJ; Prunet, C; Landman, C; Ochir, C; Cuesta, C; Pileggi-Castro, C; Walker, D; Alves, D; Abalos, E; Moises, ECD; Vieira, EM; Duarte, G; Perdona, G; Gurol-Urganci, I; Takahiko, K; Moscovici, L; Campodonico, L; Oliveira-Ciabati, L; Laopaiboon, M; Danansuriya, M; Nakamura-Pereira, M; Costa, ML; Torloni, MR; Kramer, MR; Borges, P; Olkhanud, PB; Pérez-Cuevas, R; Agampodi, SB; Mittal, S; Serruya, S; Bataglia, V; Li, Z; Temmerman, M; Gülmezoglu, AM

    2016-01-01

    Objective To generate a global reference for caesarean section (CS) rates at health facilities. Design Cross-sectional study. Setting Health facilities from 43 countries. Population/Sample Thirty eight thousand three hundred and twenty-four women giving birth from 22 countries for model building and 10 045 875 women giving birth from 43 countries for model testing. Methods We hypothesised that mathematical models could determine the relationship between clinical-obstetric characteristics and CS. These models generated probabilities of CS that could be compared with the observed CS rates. We devised a three-step approach to generate the global benchmark of CS rates at health facilities: creation of a multi-country reference population, building mathematical models, and testing these models. Main outcome measures Area under the ROC curves, diagnostic odds ratio, expected CS rate, observed CS rate. Results According to the different versions of the model, areas under the ROC curves suggested a good discriminatory capacity of C-Model, with summary estimates ranging from 0.832 to 0.844. The C-Model was able to generate expected CS rates adjusted for the case-mix of the obstetric population. We have also prepared an e-calculator to facilitate use of C-Model (www.who.int/reproductivehealth/publications/maternal_perinatal_health/c-model/en/). Conclusions This article describes the development of a global reference for CS rates. Based on maternal characteristics, this tool was able to generate an individualised expected CS rate for health facilities or groups of health facilities. With C-Model, obstetric teams, health system managers, health facilities, health insurance companies, and governments can produce a customised reference CS rate for assessing use (and overuse) of CS. PMID:26259689
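
The expected-versus-observed comparison that C-Model supports can be illustrated with a toy calculation: a model assigns each birth a probability of caesarean section based on case-mix, the facility's expected CS rate is the mean of those probabilities, and this is compared with the observed rate. The probabilities and outcomes below are invented, not C-Model output.

```python
# Hypothetical per-woman CS probabilities from a case-mix model,
# and the corresponding observed outcomes (1 = caesarean section).
probs = [0.05, 0.30, 0.80, 0.10, 0.25]
observed = [0, 0, 1, 0, 1]

# Expected CS rate: mean of the model probabilities (case-mix adjusted)
expected_rate = sum(probs) / len(probs)

# Observed CS rate at the facility
observed_rate = sum(observed) / len(observed)

print(round(expected_rate, 3))  # 0.3
print(round(observed_rate, 3))  # 0.4
```

An observed rate well above the case-mix-adjusted expected rate is the signal the tool flags as possible overuse of CS.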

  4. Stochastic heart-rate model can reveal pathologic cardiac dynamics

    NASA Astrophysics Data System (ADS)

    Kuusela, Tom

    2004-03-01

    A simple one-dimensional Langevin-type stochastic difference equation can simulate the heart-rate fluctuations in a time scale from minutes to hours. The model consists of a deterministic nonlinear part and a stochastic part in the form of Gaussian noise, and both parts can be directly determined from measured heart-rate data. Data from healthy subjects typically exhibit a deterministic part with two or more stable fixed points. Studies of 15 congestive heart-failure subjects reveal that the deterministic part of pathologic heart dynamics has no clear stable fixed points. Direct simulations of the stochastic model for normal and pathologic cases can produce statistical parameters similar to those of real subjects. Results directly indicate that pathologic situations simplify the heart-rate control system.
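    A model of this type can be sketched as a Langevin difference equation x_{n+1} = x_n + f(x_n)·Δt + σ·ξ_n with a drift f having two stable fixed points. The double-well drift and noise level below are illustrative stand-ins, since in the study both parts are estimated from measured heart-rate data:

    ```python
    import random

    def drift(x):
        """Deterministic part: double-well drift with stable fixed points at
        x = ±0.5 and an unstable one at x = 0 (illustrative; the paper
        determines this function directly from measured heart-rate data)."""
        return -4.0 * x * (x - 0.5) * (x + 0.5)

    def simulate(n_steps, dt=0.1, sigma=0.05, x0=0.4, seed=1):
        """Langevin-type difference equation x_{n+1} = x_n + f(x_n)*dt + sigma*xi_n."""
        rng = random.Random(seed)
        x, series = x0, []
        for _ in range(n_steps):
            x = x + drift(x) * dt + sigma * rng.gauss(0.0, 1.0)
            series.append(x)
        return series

    hr = simulate(5000)  # fluctuations settle around the stable fixed points
    ```

    A "pathologic" variant in this picture would flatten the drift so that no stable fixed point confines the fluctuations.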

  5. Stochastic Models of Tropical Rain-Rate Fields

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.

    2003-01-01

    Because of the extreme variability of rain rate in space and time and the difficulties with remote sensing methods of measuring rain rates, accurate determination of rainfall over large areas and time periods has long been a problem for hydrologists, meteorologists, and climatologists. A number of statistical models of rain have been developed in order to investigate the impact of rain variability on satellite remote sensing methods, validation of satellite rain products, and generation of rain maps with accompanying error estimates. These models may be useful in examining 'sub-grid scale' issues in representing precipitation in numerical models. A stochastic model will first be described which can generate time-dependent high-resolution spatial rain fields with space and time correlations similar to those seen in rain data, as well as representing the presence of areas with zero rain rate and log-normally distributed rain rates where there is rain. A simpler model derived from this, formulated in the spectral domain, seems to imply fractal-like rain statistics at small scales when fit to rain data.
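    The key ingredients of such a model, spatial correlation, zero-rain areas, and log-normal rain rates where it rains, can be sketched in one dimension. The correlation parameter, zero-rain fraction, and log-normal parameters below are arbitrary illustrative choices, not fitted values from the study:

    ```python
    import math
    import random
    import statistics

    def rain_field(n, corr=0.9, zero_frac=0.6, mu=0.0, sigma=1.0, seed=7):
        """1-D stochastic rain-rate field: an AR(1)-correlated Gaussian field g,
        zero rain where g falls below a threshold, log-normal rain rates above it."""
        rng = random.Random(seed)
        g = [rng.gauss(0.0, 1.0)]
        for _ in range(n - 1):
            g.append(corr * g[-1] + math.sqrt(1.0 - corr ** 2) * rng.gauss(0.0, 1.0))
        # threshold chosen so that, marginally, a fraction zero_frac of points is dry
        thresh = statistics.NormalDist().inv_cdf(zero_frac)
        return [math.exp(mu + sigma * x) if x > thresh else 0.0 for x in g]

    field = rain_field(2000)
    ```

    Thresholding the same correlated field that supplies the log-normal intensities is one simple way to make wet areas spatially coherent rather than scattered at random.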

  6. Mixed Rasch Modeling of the Self-Rating Depression Scale

    ERIC Educational Resources Information Center

    Hong, Sehee; Min, Sae-Young

    2007-01-01

    In this study, mixed Rasch modeling was used on the Self-Rating Depression Scale (SDS), a widely used measure of depression, among a non-Western sample of 618 Korean college students. The results revealed three latent classes and confirmed the unidimensionality of the SDS. In addition, there was a significant effect for gender in terms of class…

  7. Markov Models and the Ensemble Kalman Filter for Estimation of Sorption Rates

    NASA Astrophysics Data System (ADS)

    Vugrin, E. D.; McKenna, S. A.; White Vugrin, K.

    2007-12-01

    Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve accuracy of the rate estimation by as much as an order of magnitude. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. This work was supported under the Sandia Laboratory Directed Research and Development program.
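    A single EnKF analysis step for a scalar rate can be sketched as follows. The first-order decay forward model and all numbers are illustrative stand-ins, not the streamline transport model of the study; the point is how assimilating an observed concentration pulls a biased ensemble of rates toward the truth:

    ```python
    import math
    import random
    import statistics

    def forward(k, t=1.0, c0=1.0):
        """Toy forward model: first-order loss, concentration c0*exp(-k*t)."""
        return c0 * math.exp(-k * t)

    def enkf_update(ens, obs, obs_err, seed=0):
        """One EnKF analysis step on an ensemble of rate estimates."""
        rng = random.Random(seed)
        pred = [forward(k) for k in ens]
        k_bar, p_bar = statistics.mean(ens), statistics.mean(pred)
        cov_kp = sum((k - k_bar) * (p - p_bar)
                     for k, p in zip(ens, pred)) / (len(ens) - 1)
        gain = cov_kp / (statistics.variance(pred) + obs_err ** 2)
        # each ensemble member assimilates a perturbed observation
        return [k + gain * (obs + rng.gauss(0.0, obs_err) - p)
                for k, p in zip(ens, pred)]

    ens = [0.8 + 0.02 * i for i in range(21)]   # biased ensemble: mean 1.0, true rate 0.5
    updated = enkf_update(ens, obs=forward(0.5), obs_err=0.01)
    ```

    Because the gain is built from ensemble covariances, no adjoint or explicit Jacobian of the forward model is needed, which is what makes the EnKF attractive for nonlinear transport models.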

  8. Estimating digital information throughput rates for radiology networks. A model.

    PubMed

    Cox, G G; Templeton, A W; Anderson, W H; Cook, L T; Hensley, K S; Dwyer, S J

    1986-02-01

    The design and implementation of a digital radiology image management system requires the definition, evaluation, and comparison of appropriate measures of system performance. The mean throughput rate is an important measure of the actual performance of a finished system. The mean throughput rate identifies the transmission of digital information either in bits/second or tasks/second. It is dependent on software, database management, equipment interface designs, number of users and display stations, and communications media. The mean throughput rate can document resource allocation bottlenecks within a given system. A model for estimating the mean throughput rate and its application in helping us design our radiology digital image networks is described. PMID:3957590
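    A toy version of such a throughput model discounts the raw link rate by protocol and database overhead and shares it among active display stations. The parameter values (a 2048×2048, 12-bit image over a 10 Mbit/s link, 20% overhead, four stations) are illustrative assumptions, not measurements from the original network:

    ```python
    def mean_throughput(image_bits, raw_bps, protocol_overhead=0.2, n_stations=4):
        """Crude estimate of per-station throughput and image transfer time.

        Assumes overhead scales linearly and the medium is shared equally;
        a fuller model would also capture queueing and database delays."""
        effective_bps = raw_bps * (1.0 - protocol_overhead) / n_stations
        return effective_bps, image_bits / effective_bps

    image_bits = 2048 * 2048 * 12               # one 12-bit 2048x2048 image
    bps, seconds = mean_throughput(image_bits, raw_bps=10_000_000)
    ```

    Even this crude sketch exposes the bottleneck logic the paper describes: doubling the number of active stations halves each station's mean throughput.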

  9. A master curve-mechanism based approach to modeling the effects of constraint, loading rate and irradiation on the toughness-temperature behavior of a V-4Cr-4Ti alloy

    SciTech Connect

    Odette, G.R.; Donahue, E.; Lucas, G.E.; Sheckherd, J.W.

    1996-10-01

    The influence of loading rate and constraint on the effective fracture toughness as a function of temperature [K_e(T)] of the fusion program heat of V-4Cr-4Ti was measured using subsized, three-point bend specimens. The constitutive behavior was characterized as a function of temperature and strain rate using small tensile specimens. Data in the literature on this alloy were also analysed to determine the effect of irradiation on K_e(T) and the energy-temperature (E-T) curves measured in subsized Charpy V-notch tests. It was found that V-4Cr-4Ti undergoes "normal" stress-controlled cleavage fracture below a temperature marking a sharp ductile-to-brittle transition. The transition temperature is increased by higher loading rates, irradiation hardening and triaxial constraint. Shifts in a reference transition temperature due to higher loading rates and irradiation can be reasonably predicted by a simple equivalent yield stress model. These results also suggest that size and geometry effects, which mediate constraint, can be modeled by combining local critical stressed area σ*/A* fracture criteria with finite element method simulations of crack tip stress fields. The fundamental understanding reflected in these models will be needed to develop K_e(T) curves for a range of loading rates, irradiation conditions, structural size scales and geometries relying (in large part) on small specimen tests. Indeed, it may be possible to develop a master K_e(T) curve-shift method to account for these variables. Such reliable and flexible failure assessment methods are critical to the design and safe operation of defect-tolerant vanadium structures.

  10. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2014-05-01

    Severe accidents in nuclear power plants such as the historical accident in Chernobyl 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011 have drastic impacts on the population and environment. The hazardous consequences reach out on a national and continental scale. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents - both for research purposes and, more importantly, to determine the immediate threat to the population. However, the assessments of the regional radionuclide activity concentrations and the individual exposure to radiation dose are subject to several uncertainties, for example the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the nuclear accident. The quantification of the source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to realise a feasible estimation of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are therefore combined with atmospheric transport models. The release rates of radionuclides at the accident site are then obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method and hence of the resulting source term depends amongst others on the availability, reliability and the resolution in time and space of the observations. Radionuclide activity concentrations are observed on a
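    The inverse step described above amounts to fitting release rates to observations through the transport model. A minimal sketch, assuming the atmospheric transport has already been condensed into a small source-receptor matrix M (the matrix and observations here are hypothetical numbers, not data from any real accident), is a nonnegative least-squares fit by projected gradient descent:

    ```python
    def estimate_release_rates(M, y, n_iter=5000, lr=0.01):
        """Nonnegative release rates x minimising ||Mx - y||^2.

        M[i][j]: modelled sensitivity of observation i to release period j;
        y[i]: observed activity concentration. Projected gradient descent is
        used here for simplicity; operational inversions add regularisation
        and a priori source terms (cf. Stohl et al., 2012)."""
        m, n = len(M), len(M[0])
        x = [0.0] * n
        for _ in range(n_iter):
            r = [sum(M[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
            g = [2.0 * sum(M[i][j] * r[i] for i in range(m)) for j in range(n)]
            x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
        return x

    # Hypothetical two-period release observed at three receptors
    M = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    y = [2.0, 3.0, 5.0]                  # consistent with release rates (2, 3)
    rates = estimate_release_rates(M, y)
    ```

    The nonnegativity projection encodes the physical constraint that release rates cannot be negative, which unconstrained least squares would not guarantee.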