Science.gov

Sample records for rate model based

  1. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  2. Rate-based degradation modeling of lithium-ion cells

    SciTech Connect

    E.V. Thomas; I. Bloom; J.P. Christophersen; V.S. Battaglia

    2012-05-01

    Accelerated degradation testing is commonly used as the basis to characterize battery cell performance over a range of stress conditions (e.g., temperatures). Performance is measured by some response that is assumed to be related to the state of health of the cell (e.g., discharge resistance). Often, the ultimate goal of such testing is to predict cell life at some reference stress condition, where cell life is defined to be the point in time where performance has degraded to some critical level. These predictions are based on a degradation model that expresses the expected performance level versus the time and conditions under which a cell has been aged. Usually, the degradation model relates the accumulated degradation to the time at a constant stress level. The purpose of this article is to present an alternative framework for constructing a degradation model that focuses on the degradation rate rather than the accumulated degradation. One benefit of this alternative approach is that prediction of cell life is greatly facilitated in situations where the temperature exposure is not isothermal. This alternative modeling framework is illustrated via a family of rate-based models and experimental data acquired during calendar-life testing of high-power lithium-ion cells.
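
    A minimal sketch of the rate-based idea in code, assuming an Arrhenius-type rate law with hypothetical constants (not the paper's calibrated model family): accumulated degradation is the time integral of a temperature-dependent rate, so an arbitrary non-isothermal history is handled by simple quadrature.

    ```python
    import numpy as np

    # Hypothetical Arrhenius-type degradation rate; A and Ea are illustrative,
    # not values from the paper.
    A, Ea, R_GAS = 5.0e8, 6.0e4, 8.314   # pre-factor (1/day), J/mol, J/(mol K)

    def rate(temp_k):
        """Degradation rate (1/day) at absolute temperature temp_k."""
        return A * np.exp(-Ea / (R_GAS * temp_k))

    # Non-isothermal exposure: 100 days at 25 C, then 100 days at 45 C.
    t = np.linspace(0.0, 200.0, 2001)            # days
    temp = np.where(t < 100.0, 298.15, 318.15)   # K

    # Rate-based model: degradation accumulates as the integral of the rate,
    # so any temperature history is handled the same way.
    damage = np.concatenate(([0.0], np.cumsum(np.diff(t) * rate(temp[:-1]))))

    critical = 0.2   # e.g. a 20% rise in discharge resistance (illustrative)
    idx = np.searchsorted(damage, critical)
    print(f"Predicted life: {t[idx]:.1f} days" if idx < t.size
          else "Critical level not reached within profile")
    ```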

  3. Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

    Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S
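
    The three hybrid combination rules are simple enough to state in a few lines of code. Below is a sketch on synthetic S and T rate grids, with w standing in for the single free parameter (the fractional contribution from S); all forecasts are renormalized to a common global rate as described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    S = rng.lognormal(-2.0, 1.0, 10_000)   # smoothed-seismicity rates (synthetic)
    T = rng.lognormal(-2.0, 1.0, 10_000)   # tectonic rates (synthetic)

    def normalize(f, total):
        """Rescale a forecast so its global rate matches a common total."""
        return f * (total / f.sum())

    total = S.sum()   # common global earthquake rate
    w = 0.6           # fractional contribution from S (the one free parameter)

    hybrid_max = normalize(np.maximum(S, T), total)               # method (a)
    hybrid_lin = normalize(w * S + (1.0 - w) * T, total)          # method (b)
    hybrid_log = normalize(np.exp(w * np.log(S)                   # method (c)
                                  + (1.0 - w) * np.log(T)), total)
    ```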

  4. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

    A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts at multiple scales, so that an entropy can be associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. The EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.
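
    A sketch of the model's output form under stated assumptions: the spread rate follows a log-normal curve in time whose single width parameter sigma the paper fixes by maximizing the rate of entropy production (that criterion is not reproduced here; sigma and the time scale below are illustrative).

    ```python
    import numpy as np
    from scipy.stats import lognorm

    # Spread rate modeled as a log-normal curve in time; sigma is the single
    # width parameter. Values are illustrative.
    sigma, t_median = 0.5, 30.0              # width, median time (days)
    dist = lognorm(s=sigma, scale=t_median)

    t = np.linspace(0.1, 120.0, 1200)
    spread_rate = dist.pdf(t)                # normalized new-case rate vs. time

    print(f"Peak spread rate on day {t[np.argmax(spread_rate)]:.1f}")
    print(f"Shannon (differential) entropy: {dist.entropy():.3f} nats")
    print(f"Fraction of eventual cases by day 60: {dist.cdf(60.0):.3f}")
    ```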

  5. The Estimation and Control of the Electroslag Remelting Melt Rate by Mechanism-Based Modeling

    NASA Astrophysics Data System (ADS)

    Li, Wanzhou; Wang, Weiyu; Hu, Yuechen; Chen, Yixing

    2012-04-01

    The process control of industrial electroslag remelting production is addressed in this work. This article proposes a mechanism-based model using electrode displacement to estimate the melt rate, designs the remelting process control system, and uses practical application data to verify the validity of the model. The soft measurement of the melt rate based on mechanism modeling is proved to be an economical and reliable solution to the online melt rate estimation and control for large industrial electroslag remelting furnaces.

  6. Acid–base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer

    PubMed Central

    Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L.; Eisele, Fred L.; Siepmann, J. Ilja; Hanson, David R.; Zhao, Jun; McMurry, Peter H.

    2012-01-01

    Climate models show that particles formed by nucleation can affect cloud cover and, therefore, the earth's radiation budget. Measurements worldwide show that nucleation rates in the atmospheric boundary layer are positively correlated with concentrations of sulfuric acid vapor. However, current nucleation theories do not correctly predict either the observed nucleation rates or their functional dependence on sulfuric acid concentrations. This paper develops an alternative approach for modeling nucleation rates, based on a sequence of acid–base reactions. The model uses empirical estimates of sulfuric acid evaporation rates obtained from new measurements of neutral molecular clusters. The model predicts that nucleation rates equal the sulfuric acid vapor collision rate times a prefactor that is less than unity and that depends on the concentrations of basic gaseous compounds and preexisting particles. Predicted nucleation rates and their dependence on sulfuric acid vapor concentrations are in reasonable agreement with measurements from Mexico City and Atlanta. PMID:23091030

  7. A model-based technique for predicting pilot opinion ratings for large commercial transports

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1982-01-01

    A model-based technique for predicting pilot opinion ratings is described. Features of this procedure, which is based on the optimal-control model for pilot/vehicle systems, include (1) capability to treat "unconventional" aircraft dynamics, (2) a relatively free-form pilot model, (3) a simple scalar metric for attentional workload, and (4) a straightforward manner of proceeding from descriptions of the flight task environment and requirements to a prediction of pilot opinion rating. The method was able to provide a good match to a set of pilot opinion ratings obtained in a manned simulation study of large commercial aircraft in landing approach.

  8. A model-based technique for predicting pilot opinion ratings for large commercial transports

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1980-01-01

    A model-based technique for predicting pilot opinion ratings is described. Features of this procedure, which is based on the optimal-control model for pilot/vehicle systems, include (1) capability to treat 'unconventional' aircraft dynamics, (2) a relatively free-form pilot model, (3) a simple scalar metric for attentional workload, and (4) a straightforward manner of proceeding from descriptions of the flight task environment and requirements to a prediction of pilot opinion rating. The method is able to provide a good match to a set of pilot opinion ratings obtained in a manned simulation study of large commercial aircraft in landing approach.

  9. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

    We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, which is ∼1 mm/yr higher than the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10^19 N·m/yr, with fast-straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10^19 N·m/yr.

  10. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter.

    PubMed

    Zhang, Zhen; Ma, Yaopeng

    2016-01-01

    A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively. PMID:26861349
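
    A compact sketch of the filter structure, using classical play operators as a stand-in for the paper's generalized play operators and omitting the tapped delay line for brevity: a bank of operators with different thresholds feeds a linear combiner whose weights are adapted by LMS.

    ```python
    import numpy as np

    def play_operator(u, r, y0=0.0):
        """Classical play (backlash) operator with threshold r."""
        y = np.empty_like(u)
        prev = y0
        for k, uk in enumerate(u):
            prev = max(uk - r, min(uk + r, prev))
            y[k] = prev
        return y

    rng = np.random.default_rng(1)
    n, mu = 2000, 0.01                                   # samples, LMS step size
    u = np.sin(2 * np.pi * np.arange(n) / 500.0)         # excitation (illustrative)
    d = play_operator(u, 0.3) + 0.05 * rng.standard_normal(n)  # "measured" output

    thresholds = np.linspace(0.0, 0.8, 9)   # operator thresholds (r = 0 passes u through)
    X = np.stack([play_operator(u, r) for r in thresholds], axis=1)

    w = np.zeros(thresholds.size)           # combiner weights
    for k in range(n):                      # LMS: w <- w + mu * e[k] * x[k]
        e = d[k] - X[k] @ w
        w += mu * e * X[k]

    resid = d - X @ w
    print(f"post-adaptation RMS error: {np.sqrt(np.mean(resid[-500:] ** 2)):.4f}")
    ```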

  11. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter

    PubMed Central

    Zhang, Zhen; Ma, Yaopeng

    2016-01-01

    A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively. PMID:26861349

  12. Estimating base rates of impairment in neuropsychological test batteries: a comparison of quantitative models.

    PubMed

    Decker, Scott L; Schneider, W Joel; Hale, James B

    2012-01-01

    Neuropsychologists frequently rely on a battery of neuropsychological tests, which are normally distributed, to determine impaired functioning. The statistical likelihood of Type I error in clinical decision-making is in part determined by the base rate at which normative individuals obtain atypical performance on neuropsychological tests. Base rates are most accurately obtained from co-normed measures, but this is rarely accomplished in neuropsychological testing. Several statistical methods have been proposed to estimate base rates for tests that are not co-normed. This study compared two statistical approaches (binomial and Monte Carlo models) used to estimate the base rates for flexible test batteries. The two approaches were compared against empirically derived base rates for a multitest co-normed battery of cognitive measures. Estimates were compared across a variety of conditions including age and different α levels (N = 3,356). Monte Carlo R² estimates ranged from .980 to .997 across five different age groups, indicating a good fit. In contrast, the binomial model fit estimates ranged from 0.387 to 0.646. Results confirm that the binomial model is insufficient for estimating base rates because it does not take into account correlations among measures in a multitest battery. Although the Monte Carlo model produced more accurate results, minor biases occurred that are likely due to skewness and kurtosis of the test variables. Implications for future research and applied practice are discussed. PMID:22172567
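
    The contrast between the two models can be reproduced in a few lines (battery size, cutoff and inter-test correlation below are hypothetical): the binomial model assumes independent tests, while a Monte Carlo draw from a correlated multivariate normal respects the correlations among measures.

    ```python
    import numpy as np

    n_tests, alpha, rho = 10, 0.05, 0.4   # battery size, per-test tail, inter-test r
    z_cut = -1.645                        # ~5th-percentile cutoff on z scores

    # Binomial model: assumes the n tests are independent.
    p_binomial = 1.0 - (1.0 - alpha) ** n_tests

    # Monte Carlo model: respects the correlations among measures.
    rng = np.random.default_rng(42)
    cov = rho * np.ones((n_tests, n_tests)) + (1.0 - rho) * np.eye(n_tests)
    scores = rng.multivariate_normal(np.zeros(n_tests), cov, size=200_000)
    p_monte_carlo = np.mean((scores < z_cut).any(axis=1))

    print(f"P(at least one 'impaired' score): binomial {p_binomial:.3f}, "
          f"Monte Carlo {p_monte_carlo:.3f}")   # lower when tests correlate
    ```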

  13. Modeling Low-Dose-Rate Effects in Irradiated Bipolar-Base Oxides

    SciTech Connect

    Cirba, C.R.; Fleetwood, D.M.; Graves, R.J.; Michez, A.; Milanowski, R.J.; Saigne, F.; Schrimpf, R.D.; Witczak, S.C.

    1998-10-26

    A physical model is developed to quantify the contribution of oxide-trapped charge to enhanced low-dose-rate gain degradation in bipolar junction transistors. Multiple-trapping simulations show that space charge limited transport is partially responsible for low-dose-rate enhancement. At low dose rates, more holes are trapped near the silicon-oxide interface than at high dose rates, resulting in larger midgap voltage shifts at lower dose rates. The additional trapped charge near the interface may cause an exponential increase in excess base current, and a resultant decrease in current gain for some NPN bipolar technologies.

  14. A New Statistically based Autoconversion rate Parameterization for use in Large-Scale Models

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Zhang, Junhua; Lohmann, Ulrike

    2002-01-01

    The autoconversion rate is a key process for the formation of precipitation in warm clouds. In climate models, physical processes such as autoconversion rate, which are calculated from grid mean values, are biased, because they do not take subgrid variability into account. Recently, statistical cloud schemes have been introduced in large-scale models to account for partially cloud-covered grid boxes. However, these schemes do not include the in-cloud variability in their parameterizations. In this paper, a new statistically based autoconversion rate considering the in-cloud variability is introduced and tested in three cases using the Canadian Single Column Model (SCM) of the global climate model. The results show that the new autoconversion rate improves the model simulation, especially in terms of liquid water path in all three case studies.
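
    The bias described above follows from Jensen's inequality for a nonlinear rate. A worked sketch, assuming a Khairoutdinov-Kogan-like power law A ∝ q^2.47 and gamma-distributed subgrid cloud water (both assumptions for illustration, not the paper's exact scheme):

    ```python
    import numpy as np
    from scipy.special import gamma as G

    p = 2.47        # autoconversion exponent on cloud water (KK2000-like form)
    k_shape = 2.0   # gamma shape for subgrid cloud-water variability (assumed)
    q_mean = 0.3    # grid-mean in-cloud liquid water (g/kg)

    theta = q_mean / k_shape                         # gamma scale parameter
    E_qp = theta ** p * G(k_shape + p) / G(k_shape)  # E[q^p] for q ~ Gamma(k, theta)

    enhancement = E_qp / q_mean ** p
    print(f"Subgrid enhancement factor E[q^p] / qbar^p = {enhancement:.2f}")
    # A rate computed from the grid mean alone misses this factor (Jensen's
    # inequality), biasing autoconversion low when in-cloud water is variable.
    ```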

  15. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; Weinberger, Christopher R.

    2016-06-01

    In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
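
    A generic kink-pair-style yield expression illustrates how the three effects can enter one formula; the functional form and every constant below are illustrative, not the paper's calibrated tantalum parameterization.

    ```python
    import numpy as np

    KB = 8.617e-5              # Boltzmann constant (eV/K)
    SIGMA_P = 1.1e9            # Peierls stress scale (Pa), illustrative
    DH0 = 1.0                  # kink-pair activation enthalpy (eV), illustrative
    EPS0 = 1.0e8               # reference strain rate (1/s), illustrative
    P_EXP, Q_EXP = 0.5, 1.5    # activation-profile exponents, illustrative
    MU0, DMU_DP = 69.0e9, 1.4  # shear modulus (Pa) and its pressure derivative

    def yield_stress(temp, eps_rate, pressure=0.0):
        """Yield stress (Pa) vs temperature (K), strain rate (1/s), pressure (Pa)."""
        x = (KB * temp / DH0) * np.log(EPS0 / eps_rate)
        thermal = SIGMA_P * np.clip(1.0 - x ** (1.0 / Q_EXP), 0.0, None) ** (1.0 / P_EXP)
        return thermal * (MU0 + DMU_DP * pressure) / MU0   # pressure enters via mu(P)

    # Strain-rate hardening at room temperature, then pressure hardening:
    print(yield_stress(300.0, 1.0e-3), yield_stress(300.0, 1.0e3))
    print(yield_stress(300.0, 1.0e3, pressure=50.0e9))
    ```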

  16. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    DOE PAGESBeta

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; Weinberger, Christopher R.

    2016-06-14

    In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.

  17. Dynamics of a network-based SIS epidemic model with nonmonotone incidence rate

    NASA Astrophysics Data System (ADS)

    Li, Chun-Hsien

    2015-06-01

    This paper studies the dynamics of a network-based SIS epidemic model with nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological effect of certain diseases spread in a contact network at high infective levels. We first find a threshold value for the transmission rate. This value completely determines the dynamics of the model and interestingly, the threshold is not dependent on the functional form of the nonlinear incidence rate. Furthermore, if the transmission rate is less than or equal to the threshold value, the disease will die out. Otherwise, it will be permanent. Numerical experiments are given to illustrate the theoretical results. We also consider the effect of the nonlinear incidence on the epidemic dynamics.
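
    A degree-based mean-field sketch of such a model (illustrative parameters; the incidence g(Θ) = Θ/(1 + αΘ²) is one common nonmonotone choice): the threshold <k>/<k²> does not involve α, matching the result described above.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k_vals = np.arange(1, 51)                     # node degrees
    p_k = k_vals**-2.5 / np.sum(k_vals**-2.5)     # heterogeneous degree distribution
    k_mean = np.sum(k_vals * p_k)
    lam, alpha = 0.3, 10.0                        # transmission rate, psychological effect

    def rhs(t, I):
        theta = np.sum(k_vals * p_k * I) / k_mean  # prob. a link points to an infected node
        g = theta / (1.0 + alpha * theta**2)       # nonmonotone incidence
        return -I + lam * k_vals * (1.0 - I) * g

    sol = solve_ivp(rhs, (0.0, 400.0), 0.01 * np.ones(k_vals.size))
    prevalence = np.sum(p_k * sol.y[:, -1])        # endemic level (0 means die-out)

    lam_c = k_mean / np.sum(k_vals**2 * p_k)       # threshold, independent of alpha
    print(f"lambda={lam}, threshold={lam_c:.3f}, endemic prevalence={prevalence:.3f}")
    ```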

  18. Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium

    SciTech Connect

    Crist, K.C.

    1984-10-01

    An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables.

  19. Modelling Heart Rate Kinetics

    PubMed Central

    Zakynthinaki, Maria S.

    2015-01-01

    The objective of the present study was to formulate a simple and at the same time effective mathematical model of heart rate kinetics in response to movement (exercise). Based on an existing model, a system of two coupled differential equations which give the rate of change of heart rate and the rate of change of exercise intensity is used. The modifications introduced to the existing model are justified and discussed in detail, while models of blood lactate accumulation in respect to time and exercise intensity are also presented. The main modification is that the proposed model has now only one parameter which reflects the overall cardiovascular condition of the individual. The time elapsed after the beginning of the exercise, the intensity of the exercise, as well as blood lactate are also taken into account. Application of the model provides information regarding the individual’s cardiovascular condition and is able to detect possible changes in it, across the data recording periods. To demonstrate examples of successful numerical fit of the model, constant intensity experimental heart rate data sets of two individuals have been selected and numerical optimization was implemented. In addition, numerical simulations provided predictions for various exercise intensities and various cardiovascular condition levels. The proposed model can serve as a powerful tool for a complete means of heart rate analysis, not only in exercise physiology (for efficiently designing training sessions for healthy subjects) but also in the areas of cardiovascular health and rehabilitation (including application in population groups for which direct heart rate recordings at intense exercises are not possible or not allowed, such as elderly or pregnant women). PMID:25876164
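
    A minimal first-order sketch of the idea (not the paper's exact equations): heart rate relaxes toward a demand set by the current exercise intensity, with a single rate constant playing the role of the cardiovascular-condition parameter.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    HR_REST, HR_MAX = 60.0, 190.0   # bpm (illustrative)
    a = 0.05                        # 1/s; larger = fitter (faster adjustment)

    def intensity(t):
        """Constant-intensity bout: 10 min at 60% effort, then recovery."""
        return 0.6 if t < 600.0 else 0.0

    def rhs(t, y):
        demand = HR_REST + (HR_MAX - HR_REST) * intensity(t)
        return [a * (demand - y[0])]

    sol = solve_ivp(rhs, (0.0, 1200.0), [HR_REST], max_step=1.0)
    print(f"HR at end of bout: {sol.y[0][sol.t <= 600.0].max():.0f} bpm")
    print(f"HR after 10 min recovery: {sol.y[0][-1]:.0f} bpm")
    ```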

  20. Modelling heart rate kinetics.

    PubMed

    Zakynthinaki, Maria S

    2015-01-01

    The objective of the present study was to formulate a simple and at the same time effective mathematical model of heart rate kinetics in response to movement (exercise). Based on an existing model, a system of two coupled differential equations which give the rate of change of heart rate and the rate of change of exercise intensity is used. The modifications introduced to the existing model are justified and discussed in detail, while models of blood lactate accumulation in respect to time and exercise intensity are also presented. The main modification is that the proposed model has now only one parameter which reflects the overall cardiovascular condition of the individual. The time elapsed after the beginning of the exercise, the intensity of the exercise, as well as blood lactate are also taken into account. Application of the model provides information regarding the individual's cardiovascular condition and is able to detect possible changes in it, across the data recording periods. To demonstrate examples of successful numerical fit of the model, constant intensity experimental heart rate data sets of two individuals have been selected and numerical optimization was implemented. In addition, numerical simulations provided predictions for various exercise intensities and various cardiovascular condition levels. The proposed model can serve as a powerful tool for a complete means of heart rate analysis, not only in exercise physiology (for efficiently designing training sessions for healthy subjects) but also in the areas of cardiovascular health and rehabilitation (including application in population groups for which direct heart rate recordings at intense exercises are not possible or not allowed, such as elderly or pregnant women). PMID:25876164

  21. Effect of control sampling rates on model-based manipulator control schemes

    NASA Technical Reports Server (NTRS)

    Khosla, P. K.

    1987-01-01

    The effect of changing the control sampling period on the performance of the computed-torque and independent joint control schemes is discussed. While the former utilizes the complete dynamics model of the manipulator, the latter assumes a decoupled and linear model of the manipulator dynamics. Researchers discuss the design of controller gains for both the computed-torque and the independent joint control schemes and establish a framework for comparing their trajectory tracking performance. Experiments show that within each scheme the trajectory tracking accuracy varies slightly with the change of the sampling rate. However, at low sampling rates the computed-torque scheme outperforms the independent joint control scheme. Based on experimental results, researchers also conclusively establish the importance of high sampling rates as they result in an increased stiffness of the system.

  22. Mathematical modeling of high-rate Anammox UASB reactor based on granular packing patterns.

    PubMed

    Tang, Chong-Jian; He, Rui; Zheng, Ping; Chai, Li-Yuan; Min, Xiao-Bo

    2013-04-15

    A novel mathematical model was developed to estimate the volumetric nitrogen conversion rates of a high-rate Anammox UASB reactor based on the packing patterns of granular sludge. A series of relationships among granular packing density, sludge concentration, hydraulic retention time and volumetric conversion rate were constructed to correlate Anammox reactor performance with granular packing patterns. The results suggested that the Anammox granules packed in the equivalent of a simple cubic pattern in the high-rate UASB reactor, with a packing density of 50-55%, which not only accommodated a high concentration of sludge inside the reactor but also provided large pore volume, thus prolonging the actual substrate conversion time. Results also indicated that it was necessary to improve Anammox reactor performance by enhancing substrate loading when the sludge concentration was higher than 37.8 gVSS/L. The established model was carefully calibrated and verified, and it simulated the performance of the granule-based high-rate Anammox UASB reactor well. PMID:23434474
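
    The quoted packing density is consistent with elementary geometry: equal spheres on a simple cubic lattice fill π/6 ≈ 52.4% of space, inside the reported 50-55% range.

    ```python
    import math

    # One sphere of radius r per cube of side 2r (simple cubic packing):
    sphere = (4.0 / 3.0) * math.pi   # sphere volume for r = 1
    cube = 2.0 ** 3                  # enclosing cube volume for r = 1
    print(f"Simple cubic packing density: {sphere / cube:.3f}")   # pi/6 = 0.524
    ```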

  23. Research and realization of ultrasonic gas flow rate measurement based on ultrasonic exponential model.

    PubMed

    Zheng, Dandan; Hou, Huirang; Zhang, Tao

    2016-04-01

    In ultrasonic gas flow rate measurement based on an ultrasonic exponential model, the genetic-ant colony optimization-3cycles algorithm may converge to a local optimum when the noise frequency is close to that of the desired signals (similar-frequency noise) or when the received signal amplitude is small and unstable at high flow rates, degrading measurement accuracy. Therefore, an improved method, energy genetic-ant colony optimization-3cycles (EGACO-3cycles), is proposed to solve this problem. By locating the maximum-energy position of the signal, the initial parameter range of the exponential model can be narrowed and local convergence avoided. Moreover, a DN100 flow rate measurement system using the EGACO-3cycles method is established based on an NI PCI-6110 board and a personal computer. A series of experiments is carried out to test the new method and the measurement system. It is shown that local convergence does not appear with the EGACO-3cycles method when similar-frequency noise exists and the flow rate is high, so the correct time of flight can be obtained. Furthermore, through flow calibration on this system, a measurement range ratio of 500:1 is achieved, with a measurement accuracy of 0.5% and a low transition velocity of 0.3 m/s. PMID:26821309

  24. Model-based melt rate control during vacuum arc remelting of alloy 718

    NASA Astrophysics Data System (ADS)

    Williamson, Rodney L.; Melgaard, David K.; Shelmidine, Gregory J.; Beaman, Joseph J.; Morrison, Robert

    2004-02-01

    Vacuum arc remelting (VAR) is used widely throughout the specialty metals industry to produce superalloy and titanium alloy cast ingots. Optimum VAR casting requires that the electrode melting rate be controlled at all times during the process. This is especially difficult when process conditions are such that the temperature distribution in the electrode has not achieved, or has been driven away from, steady state. This condition is encountered during the beginning and closing stages of the VAR process, and also during some process disturbances such as when the melt zone passes through a transverse crack. To address these transient melting situations, a new method of VAR melt rate control has been developed that incorporates an accurate, low-order melting model to continually estimate the temperature distribution in the electrode. This method of model-based control was tested at Carpenter Technology Corporation. In the first test, two 0.43-m-diameter alloy 718 electrodes were melted into 0.51-m-diameter ingots. Aggressive start-up and hot-top procedures were used to test the dynamic capabilities of the control technique. Additionally, a transverse cut was placed in each electrode with an abrasive saw to mimic an electrode crack. Accurate melt rate control was demonstrated throughout each melt. The second test used an electrode size and grade proprietary to the host company. Because it was not stress relieved after the primary casting process, the electrode was known to possess multiple cracks that make accurate melt rate control impossible using standard VAR controller technology. This electrode was also successfully melted with good melt rate control using the model-based controller.

  25. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field-programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (error/particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533

  26. Littoral transport rates in the Santa Barbara Littoral Cell: a process-based model analysis

    USGS Publications Warehouse

    Elias, E. P. L.; Barnard, Patrick L.; Brocatus, John

    2009-01-01

    Identification of sediment transport patterns and pathways is essential for sustainable coastal zone management of the heavily modified coastline of Santa Barbara and Ventura County (California, USA). A process-based model application, based on Delft3D Online Morphology, is used to investigate the littoral transport potential along the Santa Barbara Littoral Cell (between Point Conception and Mugu Canyon). An advanced optimization procedure is applied to enable annual sediment transport computations by reducing the ocean wave climate to 10 wave height-direction classes. Modeled littoral transport rates compare well with observed dredging volumes, and erosion or sedimentation hotspots coincide with the modeled divergence and convergence of the transport gradients. Sediment transport rates are strongly dependent on the alongshore variation in wave height due to wave sheltering, diffraction and focusing by the Northern Channel Islands, and on the local orientation of the geologically controlled coastline. Local transport gradients exceed the net eastward littoral transport, and are considered a primary driver for hot-spot erosion.
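
    A toy version of the wave-climate reduction (synthetic hourly data; the paper's optimized 10-class binning differs): bin the year into height-direction classes, then represent each class by its mean condition and occurrence weight so one model run per class approximates the annual transport.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    hs = rng.weibull(1.6, 8760) * 1.2               # significant wave height (m)
    wdir = rng.normal(285.0, 20.0, 8760) % 360.0    # wave direction (deg)

    h_edges = np.quantile(hs, np.linspace(0.0, 1.0, 6))   # 5 height bins
    h_edges[-1] = np.inf                                   # keep the largest waves
    d_edges = np.array([240.0, 285.0, 330.0])              # 2 direction bins

    for i in range(len(h_edges) - 1):
        for j in range(len(d_edges) - 1):
            sel = ((hs >= h_edges[i]) & (hs < h_edges[i + 1]) &
                   (wdir >= d_edges[j]) & (wdir < d_edges[j + 1]))
            if sel.any():
                print(f"class: Hs={hs[sel].mean():.2f} m, "
                      f"dir={wdir[sel].mean():.0f} deg, weight={sel.mean():.3f}")
    ```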

  27. Thermomechanics-based nonlinear rate-dependent coupled damage-plasticity granular micromechanics model

    NASA Astrophysics Data System (ADS)

    Misra, Anil; Singh, Viraj

    2015-09-01

    Thermomechanics and granular micromechanics approaches are combined to derive constitutive equations for modeling rate-dependent granular materials with damage and plasticity. The derivation is motivated by the recognition that the effect of micro-scale mechanisms upon the macro-scale behavior is known to be significant for granular materials. A general thermomechanical framework applicable to rate-dependent granular materials with damage and plasticity is developed. Based upon this framework, an expression for macro-scale Cauchy stress tensor is obtained in terms of the micro-scale grain interaction forces and the relationship between micro- and macro-scale kinematics. In addition, a Clausius-Duhem type inequality applicable to inter-granular interaction is derived, which is used to establish micro-scale constitutive relations for particular type of inter-granular interactions. The expression for Cauchy stress tensor and the micro-scale constitutive relations is then combined under a mean field kinematic assumption to obtain evolution-type macro-scale constitutive equations. The advantage of the granular micromechanics approach is that the damage and plasticity are defined using simple 1d functions at micro-scale, and complicated plastic potentials, damage functions and rules for their evolution are not required. The resultant model is applied to investigate primary, secondary and tertiary creep, creep-recovery as well as rate-dependent response under uniaxial compressive loading. Model applicability is also demonstrated for asymmetric tensile-compressive response under creep-recovery loading. The model is used to evaluate the evolution of elastic energy, and viscous, plastic and damage dissipation at the macro- and micro-scale with respect to creep time and loading level. The results show the development of loading-induced anisotropy due to damage and plasticity in these materials.

  28. Mechanical properties of methacrylate-based model dentin adhesives: effect of loading rate and moisture exposure.

    PubMed

    Singh, Viraj; Misra, Anil; Parthasarathy, Ranganathan; Ye, Qiang; Park, Jonggu; Spencer, Paulette

    2013-11-01

    The aim of this study is to investigate the mechanical behavior of model methacrylate-based dentin adhesives under conditions that simulate the wet oral environment. A series of monotonic and creep experiments were performed on rectangular beam samples of dentin adhesive in three-point bending configuration under different moisture conditions. The monotonic test results show a significant effect of loading rate on the failure strength and the linear limit (yield point) of the stress-strain response. In addition, these tests show that the failure strength is low, and the failure occurs at a smaller deformation when the test is performed under continuously changing moisture conditions. The creep test results show that under constant moisture conditions, the model dentin adhesives can have a viscoelastic response under certain low loading levels. However, when the moisture conditions vary under the same low loading levels, the dentin adhesives have an anomalous creep response accompanied by large secondary creep and high strain accumulation. PMID:23744598

  29. A novel epidemic spreading model with decreasing infection rate based on infection times

    NASA Astrophysics Data System (ADS)

    Huang, Yunhan; Ding, Li; Feng, Yun

    2016-02-01

    A new epidemic spreading model in which individuals can be infected repeatedly is proposed in this paper. The infection rate decreases according to the number of times an individual has been infected before, a phenomenon that may be caused by immunity or by heightened alertness of individuals. We introduce a new parameter called the decay factor to quantify the decrease of the infection rate. Through this parameter, our model bridges the Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Recovered (SIR) models. The proposed model has been studied by Monte Carlo numerical simulation. It is found that the initial infection rate has a greater impact on the peak value than the decay factor. The effect of the decay factor on the final density and the outbreak threshold is dominant but weakens significantly when birth and death rates are considered. Besides, simulation results show that the influence of birth and death rates on the final density is non-monotonic in some circumstances.
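
    A minimal Monte Carlo sketch of the mechanism (illustrative parameters, well-mixed population): each individual's susceptibility is scaled by the decay factor δ raised to the number of past infections, so δ = 1 recovers SIS-like behavior and δ = 0 approaches SIR-like permanent immunity.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N, beta0, gamma, delta, steps = 5000, 0.3, 0.1, 0.5, 300

    times_infected = np.zeros(N, dtype=int)
    infected = np.zeros(N, dtype=bool)
    infected[rng.choice(N, 25, replace=False)] = True   # initial seeds

    history = []
    for _ in range(steps):
        prevalence = infected.mean()
        # Per-individual infection probability decays with past infections.
        p_inf = beta0 * delta**times_infected * prevalence
        new_inf = (~infected) & (rng.random(N) < p_inf)
        recover = infected & (rng.random(N) < gamma)
        times_infected[new_inf] += 1
        infected = (infected | new_inf) & ~recover
        history.append(infected.mean())

    print(f"Peak prevalence: {max(history):.3f}, final: {history[-1]:.3f}")
    ```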

  30. Estimation of the occurrence rate of strong earthquakes based on hidden semi-Markov models

    NASA Astrophysics Data System (ADS)

    Votsi, I.; Limnios, N.; Tsaklidis, G.; Papadimitriou, E.

    2012-04-01

    The present paper aims at the application of hidden semi-Markov models (HSMMs) in an attempt to reveal key features of earthquake generation associated with the actual stress field, which is not accessible to direct observation. The models generalize hidden Markov models by considering the hidden process to form a semi-Markov chain. Considering that the states of the models correspond to levels of the actual stress field, the stress field level at the occurrence time of each strong event is revealed. The dataset concerns a well-catalogued seismically active region incorporating a variety of tectonic styles. More specifically, the models are applied to Greece and its surrounding lands, using a complete data sample of strong (M ≥ 6.5) earthquakes that occurred in the study area from 1845 to the present. The earthquakes are grouped according to their magnitudes, and the cases of two and three magnitude ranges, with a corresponding number of states, are examined. The parameters of the HSMMs are estimated and their confidence intervals are calculated based on their asymptotic behavior. The rate of earthquake occurrence is introduced through the proposed HSMMs and its maximum likelihood estimator is calculated. The asymptotic properties of the estimator are studied, including uniform strong consistency and asymptotic normality, and the confidence interval for the proposed estimator is given. We assume the state space of both the observable and the hidden process to be finite, the hidden Markov chain to be homogeneous and stationary, and the observations to be conditionally independent. The hidden states at the occurrence time of each strong event are revealed and the rate of occurrence of an anticipated earthquake is estimated on the basis of the proposed HSMMs. Moreover, the mean time to the first occurrence of a strong anticipated earthquake is estimated and its confidence interval is calculated.

  31. Learning to maximize reward rate: a model based on semi-Markov decision processes

    PubMed Central

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R.

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to more outcome but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible “conditions.” A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each “condition” being a “state” and the value of decision thresholds being the “actions” taken in those states. The problem of finding the optimal decision thresholds then is cast as the stochastic optimal control problem of taking actions in each state in the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the value of decision thresholds until it finally finds the optimal values. PMID:24904252

  32. Logarithmic rate based elasto-viscoplastic cyclic constitutive model for soft biological tissues.

    PubMed

    Zhu, Yilin; Kang, Guozheng; Yu, Chao; Poh, Leong Hien

    2016-08-01

    Based on the logarithmic rate and piecewise linearization theory, a thermodynamically consistent elasto-viscoplastic constitutive model is developed in the framework of finite deformations to describe the nonlinear time-dependent biomechanical performances of soft biological tissues, such as nonlinear anisotropic monotonic stress-strain responses, stress relaxation, creep and ratchetting. In the proposed model, the soft biological tissue is assumed as a typical composites consisting of an isotropic matrix and anisotropic fiber aggregation. Accordingly, the free energy function and stress tensor are divided into two parts related to the matrix and fiber aggregation, respectively. The nonlinear biomechanical responses of the tissues are described by the piecewise linearization theory with hypo-elastic relations of fiber aggregation. The evolution equations of viscoplasticity are formulated from the dissipation inequalities by the co-directionality hypotheses. The anisotropy is considered in the hypo-elastic relations and viscoplastic flow rules by introducing some material parameters dependent on the loading direction. Then the capability of the proposed model to describe the nonlinear time-dependent deformation of soft biological tissues is verified by comparing the predictions with the corresponding experimental results of three tissues. It is seen that the predicted monotonic stress-strain responses, stress relaxation, creep and ratchetting of soft biological tissues are in good agreement with the corresponding experimental ones. PMID:27108349

  33. Evidence-Based Adequacy Model for School Funding: Success Rates in Illinois Schools that Meet Targets

    ERIC Educational Resources Information Center

    Murphy, Gregory J.

    2012-01-01

    This quantitative study explores the 2010 recommendation of the Educational Funding Advisory Board to consider the Evidence-Based Adequacy model of school funding in Illinois. This school funding model identifies and costs research based practices necessary in a prototypical school and sets funding levels based upon those practices. This study…

  34. Stage-discharge rating curves based on satellite altimetry and modeled discharge in the Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; Dias de Paiva, Rodrigo; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stephane; Garambois, Pierre-André; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frederique

    2016-05-01

    In this study, rating curves (RCs) were determined by applying satellite altimetry to a poorly gauged basin. This study demonstrates the synergistic application of remote sensing and watershed modeling to capture the dynamics and quantity of flow in the Amazon River Basin, respectively. Three major advancements for estimating basin-scale patterns in river discharge are described. The first advancement is the preservation of the hydrological meanings of the parameters expressed by Manning's equation to obtain a data set containing the elevations of the river beds throughout the basin. The second advancement is the provision of parameter uncertainties and, therefore, the uncertainties in the rated discharge. The third advancement concerns estimating the discharge while considering backwater effects. We analyzed the Amazon Basin using nearly one thousand series that were obtained from ENVISAT and Jason-2 altimetry for more than 100 tributaries. Discharge values and related uncertainties were obtained from the rain-discharge MGB-IPH model. We used a global optimization algorithm based on Markov chain Monte Carlo and a Bayesian framework to determine the rating curves. The data were randomly allocated into 80% calibration and 20% validation subsets. A comparison with the validation samples produced a Nash-Sutcliffe efficiency (Ens) of 0.68. When the MGB discharge uncertainties were less than 5%, the Ens value increased to 0.81 (mean). A comparison with the in situ discharge resulted in an Ens value of 0.71 for the validation samples (and 0.77 for calibration). The Ens values at the mouths of the rivers that experienced backwater effects significantly improved when the mean monthly slope was included in the RC. Our RCs were not mission-dependent, and the Ens value was preserved when applying ENVISAT rating curves to Jason-2 altimetry at crossovers. The cease-to-flow parameter of our RCs provided a good proxy for determining river bed elevation. This proxy was validated
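
    A reduced sketch of the rating-curve step on synthetic data (the paper fits a Manning-style curve via Markov chain Monte Carlo in a Bayesian framework; ordinary least squares is used here for brevity): the cease-to-flow parameter z0 is the quantity that serves as the river-bed-elevation proxy.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(3)
    z0_true, a_true, b_true = 12.0, 80.0, 1.7        # synthetic "truth"
    h = rng.uniform(14.0, 25.0, 120)                 # altimetric stage (m)
    Q = a_true * (h - z0_true)**b_true * rng.lognormal(0.0, 0.08, h.size)

    def rc(h, a, b, z0):
        """Manning-style rating curve Q = a * (h - z0)^b."""
        return a * np.clip(h - z0, 1e-6, None)**b

    (a, b, z0), _ = curve_fit(rc, h, Q, p0=[50.0, 1.5, h.min() - 1.0])
    print(f"a={a:.1f}, b={b:.2f}, cease-to-flow z0={z0:.2f} m (bed-elevation proxy)")
    ```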

  35. Modeling Passing Rates on a Computer-Based Medical Licensing Examination: An Application of Survival Data Analysis

    ERIC Educational Resources Information Center

    de Champlain, Andre F.; Winward, Marcia L.; Dillon, Gerard F.; de Champlain, Judy E.

    2004-01-01

    The purpose of this article was to model United States Medical Licensing Examination (USMLE) Step 2 passing rates using the Cox Proportional Hazards Model, best known for its application in analyzing clinical trial data. The number of months it took to pass the computer-based Step 2 examination was treated as the dependent variable in the model.…
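
    A sketch of the survival-analysis setup using the lifelines library, with hypothetical columns standing in for the USMLE variables: months-to-pass is the duration, passing is the event (not passing by the end of observation is censoring), and covariates enter as predictors.

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical data; column names are not the actual USMLE variables.
    df = pd.DataFrame({
        "months_to_pass": [3, 7, 12, 5, 24, 9, 2, 18],
        "passed":         [1, 1, 0,  1, 0,  1, 1, 0],   # 0 = censored
        "first_attempt_score": [210, 195, 205, 220, 175, 184, 230, 190],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months_to_pass", event_col="passed")
    cph.print_summary()   # hazard ratios: covariate effects on the pass rate
    ```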

  36. A Micromechanics Based Constitutive Model For Brittle Failure at High Strain Rates

    NASA Astrophysics Data System (ADS)

    Bhat, H. S.; Rosakis, A.; Sammis, C. G.

    2011-12-01

    The micromechanical damage mechanics formulated by Ashby and Sammis [1] and generalized by Deshpande and Evans [2] has been extended to allow for a more generalized stress state and to incorporate an experimentally motivated new crack growth (damage evolution) law that is valid over a wide range of loading rates. This law is sensitive to both the crack tip stress field and its time derivative. Incorporating this feature produces strain-rate sensitivity in the constitutive response. The model is also experimentally verified by predicting the failure strength of Dionysus-Pentelicon marble over strain rates ranging from ~10^-6 to 10^3 s^-1. Model parameters determined from quasi-static experiments were used to predict the failure strength at higher loading rates. Agreement with experimental results was excellent.

  37. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  38. An agent-based modeling approach for determining corn stover removal rate and transboundary effects.

    PubMed

    Gan, Jianbang; Langeveld, J W A; Smith, C T

    2014-02-01

    Bioenergy production involves different agents with potentially different objectives, and an agent's decision often has transboundary impacts on other agents along the bioenergy value chain. Understanding and estimating the transboundary impacts is essential to portraying the interactions among the different agents and in the search for the optimal configuration of the bioenergy value chain. We develop an agent-based model to mimic the decision making by feedstock producers and feedstock-to-biofuel conversion plant operators and propose multipliers (i.e., ratios of economic values accruing to different segments and associated agents in the value chain) for assessing the transboundary impacts. Our approach is generic and thus applicable to a variety of bioenergy production systems at different sites and geographic scales. We apply it to the case of producing ethanol using corn stover in Iowa, USA. The results from the case study indicate that stover removal rate is site specific and varies considerably with soil type, as well as other factors, such as stover price and harvesting cost. In addition, ethanol production using corn stover in the study region would have strong positive ripple effects, with the values of multipliers varying with greenhouse gas price and national energy security premium. The relatively high multiplier values suggest that a large portion of the value associated with corn stover ethanol production would accrue to the downstream end of the value chain instead of stover producers. PMID:24276896

  39. An Agent-Based Modeling Approach for Determining Corn Stover Removal Rate and Transboundary Effects

    NASA Astrophysics Data System (ADS)

    Gan, Jianbang; Langeveld, J. W. A.; Smith, C. T.

    2014-02-01

    Bioenergy production involves different agents with potentially different objectives, and an agent's decision often has transboundary impacts on other agents along the bioenergy value chain. Understanding and estimating the transboundary impacts is essential to portraying the interactions among the different agents and in the search for the optimal configuration of the bioenergy value chain. We develop an agent-based model to mimic the decision making by feedstock producers and feedstock-to-biofuel conversion plant operators and propose multipliers (i.e., ratios of economic values accruing to different segments and associated agents in the value chain) for assessing the transboundary impacts. Our approach is generic and thus applicable to a variety of bioenergy production systems at different sites and geographic scales. We apply it to the case of producing ethanol using corn stover in Iowa, USA. The results from the case study indicate that stover removal rate is site specific and varies considerably with soil type, as well as other factors, such as stover price and harvesting cost. In addition, ethanol production using corn stover in the study region would have strong positive ripple effects, with the values of multipliers varying with greenhouse gas price and national energy security premium. The relatively high multiplier values suggest that a large portion of the value associated with corn stover ethanol production would accrue to the downstream end of the value chain instead of stover producers.

  40. Meta-Analytic Connectivity Modelling Revisited: Controlling for Activation Base Rates

    PubMed Central

    Langner, Robert; Rottschy, Claudia; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.

    2014-01-01

    Co-activation of distinct brain regions is a measure of functional interaction, or connectivity, between those regions. The co-activation pattern of a given region can be investigated using seed-based activation likelihood estimation meta-analysis of functional neuroimaging data stored in databases such as BrainMap. This method reveals inter-regional functional connectivity by determining brain regions that are consistently co-activated with a given region of interest (the “seed”) across a broad range of experiments. In current implementations of this meta-analytic connectivity modelling (MACM), significant spatial convergence (i.e. consistent co-activation) is distinguished from noise by comparing it against an unbiased null-distribution of random spatial associations between experiments according to which all grey-matter voxels have the same chance of convergence. As the a priori probability of finding activation in different voxels markedly differs across the brain, computing such a quasi-rectangular null-distribution renders the detection of significant convergence more likely in those voxels that are frequently activated. Here, we propose and test a modified MACM approach that takes this activation frequency bias into account. In this new specific co-activation likelihood estimation (SCALE) algorithm, a null-distribution is generated that reflects the base rate of reporting activation in any given voxel and thus equalizes the a priori chance of finding across-study convergence in each voxel of the brain. Using four exemplary seed regions (right visual area V4, left anterior insula, right intraparietal sulcus, and subgenual cingulum), our tests corroborated the enhanced specificity of the modified algorithm, indicating that SCALE may be especially useful for delineating distinct core networks of co-activation. PMID:24945668
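
    A toy numerical contrast between the two null models (synthetic base rates, not BrainMap data): under a uniform null, voxels with high a priori activation frequency clear the significance threshold far too easily, which is the bias SCALE corrects by drawing the null from the empirical base rates.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n_vox, n_exp, n_draws = 1000, 200, 2000
    base_rate = rng.gamma(2.0, 1.0, n_vox)
    base_rate /= base_rate.sum()          # per-voxel activation probability

    observed = rng.multinomial(n_exp, base_rate)   # co-activation counts (pure noise)

    def null_max_counts(p):
        """Max per-voxel count under random assignment of experiments."""
        return rng.multinomial(n_exp, p, size=n_draws).max(axis=1)

    uniform_null = null_max_counts(np.full(n_vox, 1.0 / n_vox))  # MACM-style
    scale_null = null_max_counts(base_rate)                      # SCALE-style

    u_thr = np.percentile(uniform_null, 95)
    s_thr = np.percentile(scale_null, 95)
    print(f"voxels exceeding uniform 95% threshold: {(observed > u_thr).sum()}")
    print(f"voxels exceeding SCALE 95% threshold:   {(observed > s_thr).sum()}")
    ```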

  1. A Self-Organized Model for Cell-Differentiation Based on Variations of Molecular Decay Rates

    PubMed Central

    Hanel, Rudolf; Pöchacker, Manfred; Schölling, Manuel; Thurner, Stefan

    2012-01-01

    Systemic properties of living cells are the result of molecular dynamics governed by so-called genetic regulatory networks (GRN). These networks capture all possible features of cells and are responsible for the immense levels of adaptation characteristic of living systems. At any point in time only small subsets of these networks are active. Any active subset of the GRN leads to the expression of particular sets of molecules (expression modes). The subsets of active networks change over time, leading to the observed complex dynamics of expression patterns. Understanding these dynamics becomes increasingly important in systems biology and medicine. While the importance of transcription rates and catalytic interactions has been widely recognized in modeling genetic regulatory systems, the understanding of the role of degradation of biochemical agents (mRNA, protein) in regulatory dynamics remains limited. Recent experimental data suggest that there exists a functional relation between mRNA and protein decay rates and expression modes. In this paper we propose a model for the dynamics of successions of sequences of active subnetworks of the GRN. The model is able to reproduce key characteristics of molecular dynamics, including homeostasis, multi-stability, periodic dynamics, alternating activity, differentiability, and self-organized critical dynamics. Moreover, the model naturally explains the mechanism behind the relation between decay rates and expression modes. The model explains recent experimental observations that decay rates (or turnovers) vary between differentiated tissue classes at a general systemic level and highlights the role of intracellular decay rate control mechanisms in cell differentiation. PMID:22693554

  3. Variation of crystal dissolution rate based on a dissolution stepwave model.

    PubMed

    Lasaga, A C; Luttge, A

    2001-03-23

    A formulation of the variation of dissolution rate with the degree of undersaturation, based on defect-generated dissolution stepwaves, is validated by near-atomic-scale observations of surfaces, Monte Carlo simulations, and experimental bulk dissolution rates. The dissolution stepwaves emanating from etch pits provide a train of steps similar to those of a spiral but with different behavior. Their role in accounting for the bulk dissolution rate of crystals provides a conceptual framework for mineral dissolution far from equilibrium. Furthermore, the rate law extends to conditions closer to equilibrium and predicts a nonlinear decrease in the rate of dissolution as equilibrium is approached, which has implications for understanding artificial and natural processes involving solid-fluid reactions. PMID:11264534

  4. Source mass eruption rate retrieved from satellite-based data using statistical modelling

    NASA Astrophysics Data System (ADS)

    Gouhier, Mathieu; Guillin, Arnaud; Azzaoui, Nourddine; Eychenne, Julia; Valade, Sébastien

    2015-04-01

    Ash clouds emitted during volcanic eruptions have long been recognized as a major hazard, likely to have dramatic consequences for aircraft, the environment, and people. Thus, the International Civil Aviation Organization (ICAO) established nine Volcanic Ash Advisory Centers (VAACs) around the world, whose mission is to forecast the location and concentration of ash clouds over hours to days, using volcanic ash transport and dispersion models (VATDs). Those models use input parameters such as plume height (PH), particle size distribution (PSD), and mass eruption rate (MER), the latter being a key parameter as it directly controls the amount of ash injected into the atmosphere. The MER can be obtained rather accurately from detailed ground deposit studies, but this method does not match operational requirements during a volcanic crisis. Thus, VAACs use empirical laws to determine the MER from an estimate of the plume height. In some cases this method can be difficult to apply, either because plume height data are not available or because the uncertainties related to this method are too large. We propose here an alternative method based on the use of satellite data to assess the MER at the source during explosive eruptions. Satellite-based techniques allow fine ash cloud loading to be quantitatively retrieved far from the source vent. Those measurements can be carried out in a systematic and real-time fashion, using geostationary satellites in particular. We tested here the relationship likely to exist between the amount of fine ash dispersed in the atmosphere and the amount of coarser tephra deposited on the ground, the sum of both contributions yielding an estimate of the MER. For this purpose we examined in detail 19 eruptions (of known duration) for which both (i) the amount of fine ash dispersed in the atmosphere and (ii) the mass of tephra deposited on the ground have been estimated and published. We combined these data with contextual information that may
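
    For context, a hedged sketch of the plume-height route to which the paper offers an alternative: the widely used Mastin et al. (2009) fit H = 2.00 V^0.241 (H in km, V in m^3/s dense-rock equivalent), inverted for MER; the DRE density below is an assumption.

```python
# Empirical plume-height law (Mastin et al., 2009): H = 2.00 * V**0.241,
# inverted to estimate MER from an observed plume height. The DRE magma
# density (~2500 kg/m^3) is an assumed value.

def mer_from_plume_height(height_km: float, dre_density: float = 2500.0) -> float:
    """Mass eruption rate (kg/s) from plume height (km above the vent)."""
    volumetric_flux = (height_km / 2.00) ** (1.0 / 0.241)   # m^3/s, DRE
    return volumetric_flux * dre_density

print(f"MER for a 10 km plume: {mer_from_plume_height(10.0):.2e} kg/s")
```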

  5. The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification

    ERIC Educational Resources Information Center

    Petrov, Alexander A.; Anderson, John R.

    2005-01-01

    A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…
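
    A toy sketch of the selection mechanism described above (illustrative scoring weights, not the authors' parameterization):

```python
# Probabilistic anchor selection sensitive to similarity, base-level
# strength, and recency; weights and scoring forms are hypothetical.
import math, random

def select_anchor(target, anchors, now, w_sim=1.0, w_base=0.5, w_rec=0.5):
    """anchors: dicts with keys 'magnitude', 'uses', 'last_used'."""
    def score(a):
        similarity = -abs(a["magnitude"] - target)       # closer -> higher
        base_level = math.log(1 + a["uses"])             # more practiced -> higher
        recency = -math.log(1 + now - a["last_used"])    # more recent -> higher
        return w_sim * similarity + w_base * base_level + w_rec * recency
    weights = [math.exp(score(a)) for a in anchors]      # softmax-style choice
    return random.choices(anchors, weights=weights, k=1)[0]

anchors = [{"magnitude": m, "uses": u, "last_used": t}
           for m, u, t in [(1.0, 5, 90), (2.0, 12, 99), (3.0, 3, 95)]]
print(select_anchor(2.2, anchors, now=100))
```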

  6. Dislocation mechanism based model for stage II fatigue crack propagation rate

    NASA Technical Reports Server (NTRS)

    Mazumdar, P. K.

    1986-01-01

    Repeated plastic deformation at or near the crack tip, which depends on the operative dislocation mechanism, leads to fatigue crack propagation. By invoking the theory of thermally activated flow and the cumulative plastic strain criterion, an effort is made here to model the stage II fatigue crack propagation rate in terms of the dislocation mechanism. The model therefore provides the capability to ascertain: (1) the dislocation mechanism (and hence the near-crack-tip microstructure) assisting the crack growth, (2) the relative resistance of dislocation mechanisms to crack growth, and (3) the fracture surface characteristics and their interpretation in terms of the dislocation mechanism. The local microstructure predicted for room-temperature crack growth in copper by this model is in good agreement with experimental results taken from the literature. With regard to the relative stability of dislocation mechanisms such as cross-slip and dislocation intersection, the model suggests an enhancement of crack growth rate with ease of cross-slip, which in general promotes dislocation cell formation and is common in materials with high stacking-fault energy (which produce wavy slip). Cross-slip apparently enhances the crack growth rate by promoting slip irreversibility and fracture surface brittleness to a greater degree.

  7. Modeling the evaporation rate of cesium off tungsten based controlled porosity dispenser photocathodes

    NASA Astrophysics Data System (ADS)

    Pan, Z.; Jensen, K. L.

    2013-04-01

    The evaporation of cesium from a tungsten surface is modeled using an effective one-dimensional potential well representation of the binding energy. The model accounts for both local and global interactions of cesium with the surface metal as well as with other cesium atoms. The theory is compared with the data of Taylor and Langmuir [Phys. Rev. 44, 423 (1933), 10.1103/PhysRev.44.423] relating evaporation rates to sub-monolayer surface coverage of cesium; it gives good agreement and reproduces the nonlinear behavior of evaporation with varying coverage and temperature.

  8. Modelling of Short Term Interest Rate Based on Fractional Relaxation Equation

    NASA Astrophysics Data System (ADS)

    Jaworska, K.

    2008-09-01

    In this paper we try to model the dynamics of the short-term interest rate using a fractional nonhomogeneous differential equation with a stochastic free term. This type of equation is similar to the one that represents the viscoelastic behavior of certain materials from a rheological point of view. As a final result we obtain a closed-form formula for the prices of zero-coupon bonds. It is analogous to the formula in the Vasiček model, with the exponential functions replaced by Mittag-Leffler ones.
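
    A minimal numerical sketch of the Mittag-Leffler function that replaces the exponential in the resulting bond-price formula (truncated power series, adequate only for moderate arguments):

```python
# One-parameter Mittag-Leffler function:
# E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1).
from math import gamma

def mittag_leffler(alpha: float, z: float, n_terms: int = 80) -> float:
    return sum(z**k / gamma(alpha * k + 1) for k in range(n_terms))

print(mittag_leffler(1.0, -1.0))   # alpha = 1 recovers exp(-1) ~ 0.3679
print(mittag_leffler(0.8, -1.0))   # fractional relaxation analogue
```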

  9. Evaluation of Finite-Rate Gas-Surface Interaction Models for a Carbon Based Ablator

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Goekcen, Tahir

    2015-01-01

    Two sets of finite-rate gas-surface interaction models between air and a carbon surface are studied. The first set is an engineering model with one-way chemical reactions, and the second set is a more detailed model with two-way chemical reactions. These two proposed models are intended to cover carbon surface ablation conditions including low-temperature rate-controlled oxidation, mid-temperature diffusion-controlled oxidation, and high-temperature sublimation. The prediction of carbon surface recession is achieved by coupling a material thermal response code and a Navier-Stokes flow code. The material thermal response code used in this study is the Two-dimensional Implicit Thermal-response and Ablation Program, which predicts charring-material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting full Navier-Stokes equations using the Data Parallel Line Relaxation method. Recession analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities with heat fluxes ranging from 45 to 1100 W/cm2 are performed and compared with data for model validation. The ablating material used in these arc-jet tests is Phenolic Impregnated Carbon Ablator. Additionally, computational predictions of surface recession and shape change are in good agreement with measurements for arc-jet conditions of the Small Probe Reentry Investigation for Thermal Protection System Engineering.

  10. The effects of composition on glass dissolution rates: The application of four models to a data base

    SciTech Connect

    Geldart, R.W.; Kindle, C.H.

    1988-01-01

    Four models have been applied to a data base to relate glass dissolution in distilled water to composition. The data base is used to compare the precisions obtained from the models in fitting actual data. The usefulness of the data base in formulating a model is also demonstrated. Two related models in which the composite or pH-adjusted free energy of hydration of the glass is the correlating parameter are compared with experimental data. In a structural model, the nonbridging oxygen content of the glasses is used to correlate glass dissolution rate to composition. In a model formulated for this report, the cation valence and the oxygen content of the glass are compared with observed dissolution rates. The models were applied to the 28-day normalized silica release at 90 °C for over 285 glass compositions with surface-area-to-volume ratios of 10 m^-1 (Materials Characterization Center MCC-1 glass durability test using distilled water). These glasses included the nonradioactive analogs of WV205 and SRL-165, as well as SRL-131, PNL 76-68, and a European glass, UK209. Predicted glass dissolution rates show similar fits to the data for all four models. The predictions of the models were also plotted for two subsets of the glasses: waste glasses and Savannah River Laboratory glasses. The model predictions fit the data for these groups much better than they fit the data for the entire set of glasses.

  11. Modeling High Rate Phosphorus and Nitrogen Removal in a Vertical Flow Alum Sludge based Constructed Wetlands

    NASA Astrophysics Data System (ADS)

    Jeyakumar, Lordwin; Zhao, Yaqian

    2014-05-01

    Increased awareness of the impacts of diffuse pollution and their intensification has pushed forward the need for the development of low-cost wastewater treatment techniques. One such effort is the use of novel DASC (Dewatered Alum Sludge Cakes) based constructed wetlands (CWs) for removing nutrients, organics, trace elements and other pollutants from wastewater. Understanding the processes in CWs requires a numerical model that describes the biochemical transformation and degradation processes in subsurface vertical flow (VF) CWs. Therefore, this research focuses on the development of a process-based model of phosphorus (P) and nitrogen (N) removal to achieve a stable performance when using DASC as a substrate in a CW treatment system. An object-oriented modelling tool known as "STELLA", which is based on the principles of system dynamics, is used for the development of the P and N model. The core objective of the modelling work is to understand the processes in DASC-based CWs and to optimize design criteria. The P and N dynamic model is developed for DASC-based CWs. The P model developed exclusively for the DASC-based CW was able to simulate the effluent P concentration leaving the system satisfactorily. Moreover, the developed P dynamic model identified the major P pathways as adsorption (72%), followed by plant uptake (20%) and microbial uptake (7%), in a single-stage laboratory-scale DASC-based CW. Similarly, a P dynamic simulation model was developed to simulate the four-stage laboratory-scale DASC-based CWs. It was found that simulated and observed values of P removal were in good agreement. The fate of P in all four stages clearly shows that adsorption played a pivotal role in each stage of the system due to the use of DASC as a substrate. P adsorption by the wetland substrate/DASC represents 59-75% of total P reduction. Plant uptake and microbial uptake play a lesser role in P removal (as compared to adsorption). With regard

  12. Dynamic mechanical response and a constitutive model of Fe-based high temperature alloy at high temperatures and strain rates.

    PubMed

    Su, Xiang; Wang, Gang; Li, Jianfeng; Rong, Yiming

    2016-01-01

    The effects of strain rate and temperature on the dynamic behavior of an Fe-based high-temperature alloy were studied. The strain rates were 0.001-12,000 s^-1, at temperatures ranging from room temperature to 800 °C. A phenomenological constitutive model (power-law constitutive model) was proposed, taking into account the adiabatic temperature rise and accurate material thermophysical properties. In doing so, the effect of the specific heat capacity on the adiabatic temperature rise was studied. The constitutive model was verified to be accurate by comparison between predicted and experimental results. PMID:27186468
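
    As a rough illustration of this class of model (not the paper's fitted equation or constants), a generic power-law flow stress with strain-rate and thermal-softening terms can be integrated together with the adiabatic temperature rise:

```python
# Generic power-law constitutive sketch with adiabatic heating; all material
# constants below are hypothetical, not the fitted values from the paper.
import numpy as np

A, n, C, m = 900e6, 0.15, 0.02, 0.8           # hypothetical constants
eps0_dot, T_ref, T_melt = 1e-3, 293.0, 1700.0
rho, cp, beta = 7800.0, 500.0, 0.9            # density, specific heat, Taylor-Quinney

def flow_stress(eps, eps_dot, T):
    """sigma = A eps^n (1 + C ln(eps_dot/eps0)) (1 - ((T-Tref)/(Tmelt-Tref))^m)."""
    return (A * eps**n
            * (1.0 + C * np.log(eps_dot / eps0_dot))
            * (1.0 - ((T - T_ref) / (T_melt - T_ref)) ** m))

T, d_eps = 293.0, 1e-4                        # integrate heating at 3000 /s
for eps in np.arange(d_eps, 0.3, d_eps):
    T += beta * flow_stress(eps, 3000.0, T) * d_eps / (rho * cp)
print(f"adiabatic temperature after 30% strain: {T:.1f} K")
```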

  13. A growth kinetics model of rate decomposition for Si1-xGex alloy based on dimer theory

    NASA Astrophysics Data System (ADS)

    Dai, Xian-Ying; Ji, Yao; Hao, Yue

    2014-01-01

    According to the dimer theory of semiconductor surfaces and the chemical vapor deposition (CVD) growth characteristics of Si1-xGex, two mechanisms of rate decomposition and discrete flow density are proposed. Based on these two mechanisms, the Grove theory, and Fick's first law, a CVD growth kinetics model of Si1-xGex alloy is established. To make the model more accurate, the two growth-controlling mechanisms of vapor transport and surface reaction are taken into account, and the influence of the dimer structure on the growth rate is also considered. The results show that the values calculated by the model are consistent with experimental values at different temperatures.

  14. Boosting Degree Completion and Transfer Rates: An Examination of Counseling/Advising Using the Relationship-Based Model

    ERIC Educational Resources Information Center

    Jefferson, Trevina

    2010-01-01

    The purpose of this pilot study was to explore another way to boost degree completion and transfer rates by helping students develop informed decision-making power; specifically, counseling methods in the Los Angeles Community Colleges using the Relationship-Based model were examined. As part of this study, students would develop their research…

  15. Aftershock triggering by postseismic stresses: A study based on Coulomb rate-and-state models

    NASA Astrophysics Data System (ADS)

    Cattania, Camilla; Hainzl, Sebastian; Wang, Lifeng; Enescu, Bogdan; Roth, Frank

    2015-04-01

    The spatiotemporal clustering of earthquakes is a feature of medium- and short-term seismicity, indicating that earthquakes interact. However, controversy exists about the physical mechanism behind aftershock triggering: static stress transfer and reloading by postseismic processes have been proposed as explanations. In this work, we use a Coulomb rate-and-state model to study the role of coseismic and postseismic stress changes on aftershocks and focus on two processes: creep on the main shock fault plane (afterslip) and secondary aftershock triggering by previous aftershocks. We model the seismic response to Coulomb stress changes using the Dieterich constitutive law and focus on two events: the Parkfield, Mw = 6.0, and the Tohoku, Mw = 9.0, earthquakes. We find that modeling secondary triggering systematically improves the maximum log likelihood fit of the sequences. The effect of afterslip is more subtle and difficult to assess for near-fault events, where model errors are largest. More robust conclusions can be drawn for off-fault aftershocks: following the Tohoku earthquake, afterslip promotes shallow crustal seismicity in the Fukushima region. Simple geometrical considerations indicate that afterslip-induced stress changes may have been significant on trench parallel crustal fault systems following several of the largest recorded subduction earthquakes. Moreover, the time dependence of afterslip strongly enhances its triggering potential: seismicity triggered by an instantaneous stress change decays more quickly than seismicity triggered by gradual loading, and as a result we find afterslip to be particularly important between few weeks and few months after the main shock.
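
    The seismicity response to a single coseismic stress step under the Dieterich constitutive law has a closed form, sketched below with illustrative parameter values:

```python
# Dieterich (1994) seismicity rate after a Coulomb stress step d_cfs at t = 0:
# R(t) = r / (1 + (exp(-d_cfs / A_sigma) - 1) * exp(-t / t_a)),
# with t_a = A_sigma / tau_dot. Parameter values are illustrative only.
import numpy as np

A_sigma = 0.04e6            # constitutive parameter A*sigma, Pa
tau_dot = 1e3 / 3.15e7      # background stressing rate, Pa/s (~1 kPa/yr)
r_bg = 1.0                  # background seismicity rate
t_a = A_sigma / tau_dot     # aftershock decay time, s

def seismicity_rate(t, d_cfs):
    g = np.exp(-d_cfs / A_sigma)
    return r_bg / (1.0 + (g - 1.0) * np.exp(-t / t_a))

t = np.logspace(3, 9, 5)                 # 1e3 s to ~30 yr
print(seismicity_rate(t, d_cfs=0.1e6))   # response to a 0.1 MPa step
```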

  16. A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity.

    PubMed

    Rachmuth, Guy; Shouval, Harel Z; Bear, Mark F; Poon, Chi-Sang

    2011-12-01

    Current advances in neuromorphic engineering have made it possible to emulate complex neuronal ion channel and intracellular ionic dynamics in real time using highly compact and power-efficient complementary metal-oxide-semiconductor (CMOS) analog very-large-scale-integrated circuit technology. Recently, there has been growing interest in the neuromorphic emulation of the spike-timing-dependent plasticity (STDP) Hebbian learning rule by phenomenological modeling using CMOS, memristor or other analog devices. Here, we propose a CMOS circuit implementation of a biophysically grounded neuromorphic (iono-neuromorphic) model of synaptic plasticity that is capable of capturing both the spike rate-dependent plasticity (SRDP, of the Bienenstock-Cooper-Munro or BCM type) and STDP rules. The iono-neuromorphic model reproduces bidirectional synaptic changes with NMDA receptor-dependent and intracellular calcium-mediated long-term potentiation or long-term depression assuming retrograde endocannabinoid signaling as a second coincidence detector. Changes in excitatory or inhibitory synaptic weights are registered and stored in a nonvolatile and compact digital format analogous to the discrete insertion and removal of AMPA or GABA receptor channels. The versatile Hebbian synapse device is applicable to a variety of neuroprosthesis, brain-machine interface, neurorobotics, neuromimetic computation, machine learning, and neural-inspired adaptive control problems. PMID:22089232

  17. An enhanced rate-based emission trading program for NOX: the Dutch model.

    PubMed

    Sholtz, A M; Van Amburg, B; Wochnick, V K

    2001-12-01

    Since 1997, government and industry in The Netherlands have been engaged in intensive policy discussions on how to design an emission trading program that would satisfy the Government's policy objectives within the national and international regulatory framework and accommodate industry's need for a flexible and cost-effective approach. Early in the discussions, the most promising solution was a rate-based approach, which dynamically allocates saleable emission credits based on a performance standard rate and the actual energy used by facilities. All industrial facilities above a threshold of 20 MWth would be judged on their ability to meet this performance rate. Facilities "cleaner" than the standard can sell excess credits to those whose allocation is less than their actual NOX emissions. With some changes in law, such a design could be made to fit well into the national and EU legislative framework while at the same time uniquely meeting industry's requirement of flexibility toward economic growth and facility expansion. (An analysis of the legislative changes required will be given in a separate paper by Chris Dekkers.) However, the environmental outcome of such a system is not as certain as under an absolute emission cap. At the request of the Netherlands Ministry of Housing, Spatial Planning and the Environment (VROM), Automated Credit Exchange (ACE), in close cooperation with the working group of government and industry representatives, introduced a number of features into the Dutch NOX program allowing full exploitation of market mechanisms while permitting intermediate adjustments in the performance standard rates. The design is geared toward meeting environmental targets without jeopardizing the trading market the program intends to create. The paper discusses the genesis of the two-tier credit system ACE helped to design, explains the differences between primary (fixed) and secondary (variable) credits, and outlines how the Dutch system is expected to
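
    The allocation rule at the heart of such a rate-based scheme is simple; a sketch with hypothetical numbers:

```python
# Rate-based credit allocation: credits = (performance standard rate - actual
# emission rate) * energy used. All figures are hypothetical.

PSR = 40.0   # performance standard rate, g NOx per GJ (hypothetical)

facilities = {                  # actual rate (g/GJ), energy used (GJ)
    "plant_A": (30.0, 500_000), # cleaner than the standard -> earns credits
    "plant_B": (55.0, 200_000), # dirtier -> must buy credits
}

for name, (rate, energy) in facilities.items():
    credits_t = (PSR - rate) * energy / 1e6   # tonnes of NOx credits
    print(f"{name}: {credits_t:+.1f} t NOx credits")
```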

  18. The contagious nature of imprisonment: an agent-based model to explain racial disparities in incarceration rates

    PubMed Central

    Lum, Kristian; Swarup, Samarth; Eubank, Stephen; Hawdon, James

    2014-01-01

    We build an agent-based model of incarceration based on the susceptible–infected–susceptible (SIS) model of infectious disease propagation. Our central hypothesis is that the observed racial disparities in incarceration rates between Black and White Americans can be explained as the result of differential sentencing between the two demographic groups. We demonstrate that if incarceration can be spread through a social influence network, then even relatively small differences in sentencing can result in large disparities in incarceration rates. Controlling for effects of transmissibility, susceptibility and influence network structure, our model reproduces the observed large disparities in incarceration rates given the differences in sentence lengths for White and Black drug offenders in the USA without extensive parameter tuning. We further establish the suitability of the SIS model as applied to incarceration by demonstrating that the observed structural patterns of recidivism are an emergent property of the model. In fact, our model shows a remarkably close correspondence with California incarceration data. This work advances efforts to combine the theories and methods of epidemiology and criminology. PMID:24966237
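
    A toy version of the mechanism (illustrative parameters, not the paper's calibration) shows how differential sentence lengths alone can produce divergent steady-state rates:

```python
# SIS-style incarceration contagion on a random contact network, with two
# groups differing only in mean sentence length. Illustrative parameters.
import random

random.seed(1)
N, steps = 2000, 400
group = [i % 2 for i in range(N)]
sentence = {0: 6, 1: 12}                    # months; differential sentencing
friends = [[random.randrange(N) for _ in range(4)] for _ in range(N)]
remaining = [0] * N                         # months left to serve (0 = free)
for i in random.sample(range(N), 100):      # seed: 5% initially incarcerated
    remaining[i] = sentence[group[i]]

p_transmit = 0.002                          # per incarcerated contact, per month
for _ in range(steps):
    newly = []
    for i in range(N):
        exposed = sum(remaining[j] > 0 for j in friends[i])
        if remaining[i] == 0 and exposed and random.random() < p_transmit * exposed:
            newly.append(i)
        elif remaining[i] > 0:
            remaining[i] -= 1               # serve time, then return to "free"
    for i in newly:
        remaining[i] = sentence[group[i]]

for g in (0, 1):
    rate = sum(remaining[i] > 0 for i in range(N) if group[i] == g) / (N / 2)
    print(f"group {g}: incarceration rate {rate:.3f}")
```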

  19. Shell Model Based Reaction Rates for rp-PROCESS Nuclei in the Mass Range A=44-63

    NASA Astrophysics Data System (ADS)

    Fisker, J. L.; Barnard, V.; Görres, J.; Langanke, K.; Martínez-Pinedo, G.; Wiescher, M. C.

    2001-11-01

    We have used large-scale shell-model diagonalization calculations to determine the level spectra, proton spectroscopic factors, and electromagnetic transition probabilities for proton rich nuclei in the mass range A=44-63. Based on these results and the available experimental data, we calculated the resonances for proton capture reactions on neutron deficient nuclei in this mass range. We also calculated the direct capture processes on these nuclei in the framework of a Woods-Saxon potential model. Taking into account both resonant and direct contributions, we determined the ground-state proton capture reaction rates for these nuclei under hot hydrogen burning conditions for temperatures between 10^8 and 10^10 K. The calculated compound-nucleus level properties and the reaction rates are presented here; the rates are also available in computer-readable format from the authors.
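
    Such rates are typically assembled from the standard narrow-resonance formalism; the sketch below uses placeholder resonance energies and strengths, not the paper's shell-model values:

```python
# Narrow-resonance reaction rate (see, e.g., Iliadis, Nuclear Physics of Stars):
# N_A<sigma v> = 1.5399e11 (mu T9)^(-3/2) * sum_i (wg)_i exp(-11.605 E_i / T9),
# with E_i and (wg)_i in MeV and the result in cm^3 s^-1 mol^-1.
import math

def narrow_resonance_rate(T9, mu, resonances):
    s = sum(wg * math.exp(-11.605 * er / T9) for er, wg in resonances)
    return 1.5399e11 / (mu * T9) ** 1.5 * s

resonances = [(0.20, 1e-7), (0.45, 3e-6), (0.90, 5e-5)]  # hypothetical (E_r, wg), MeV
for T9 in (0.3, 1.0, 3.0):
    print(f"T9 = {T9}: {narrow_resonance_rate(T9, 0.98, resonances):.3e} cm^3/s/mol")
```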

  20. A model-based evaluation of sedimentary reconstructions of 10Be production rates

    NASA Astrophysics Data System (ADS)

    Carney, Lewis; Plancherel, Yves; Khatiwala, Samar; Henderson, Gideon

    2016-04-01

    Atmospheric production of 10Be is small when solar activity and, therefore, the solar magnetic field and total solar irradiance are strong. Variations in solar activity affect climate and the production of other climate-relevant isotopes, such as 14C. Solar activity is thus an important variable to constrain. Since 10Be production is clearly related to solar activity and the cycle of beryllium is simpler than that of carbon, 10Be records in ice cores have been used to reconstruct total solar irradiance variability. Unfortunately, 10Be records in ice cores are not only affected by variations in atmospheric production but are also modulated by changes in wind patterns, since spatiotemporal atmospheric 10Be gradients are quite large. In that context, sedimentary 10Be records from the abyssal ocean could be of great interest: since the residence time of 10Be in the ocean is thought to be comparable to the overturning time-scale of the ocean, spatial 10Be gradients may be relatively weaker than those in the atmosphere. Under these conditions, regional oceanic variability should only weakly affect the distribution of 10Be in the ocean, and local sedimentary 10Be records are expected to represent the global average 10Be production better than 10Be measured in ice cores. Here we show results from a global ocean model of 10Be that we use to investigate the spatial variability of simulated sedimentary 10Be records and to test the sensitivity of the 10Be sedimentary flux to uncertainties in the circulation field and in the particle chemistry of beryllium. Our ocean model is based on the Transport Matrix method. The surface 10Be input fluxes are taken from atmospheric model simulations. Our model experiments, constrained by available dissolved 10Be data, show that there exist regions in the ocean where the sedimentary 10Be flux is relatively insensitive to changes in input patterns and magnitudes, assumed particle chemistry and flux patterns, and ocean circulation. We submit that

  1. Cosmogenic nuclide production rates as a function of latitude and altitude calculated via a physics based model and excitation functions

    NASA Astrophysics Data System (ADS)

    Argento, D.; Reedy, R. C.; Stone, J. O.

    2012-12-01

    Cosmogenic nuclides have been used to develop a set of tools critical to the quantification of a wide range of geomorphic and climatic processes and events (Dunai 2010). Reliable absolute measurement methods have had a great impact on research constraining ice-age extents, as well as providing important climatic data via well-constrained erosion rates, etc. Continuing to improve CN methods is critical for these sciences. While significant progress has been made in the last two decades to reduce uncertainties (Dunai 2010; Gosse & Phillips 2001), numerous aspects still need to be refined in order to achieve the analytic resolution desired by glaciologists and geomorphologists. In order to investigate the finer details of the radiation responsible for cosmogenic nuclide production, we have developed a physics-based model of the radiation cascade of primary and secondary cosmic rays through the atmosphere. In this study, a Monte Carlo radiation transport code, MCNPX, is used to model the galactic cosmic-ray (GCR) radiation impinging on the upper atmosphere. Beginning with a spectrum of high-energy protons and alpha particles at the top of the atmosphere, the code tracks the primary and resulting secondary particles through a model of the Earth's atmosphere and into the lithosphere. Folding the neutron and proton fluxes with energy-dependent cross sections for nuclide production provides production rates for key cosmogenic nuclides (Argento et al. 2012, in press; Reedy 2012, in press). Our initial study for high latitude shows that production scales at a different rate for each nuclide (Argento 2012, in press). Furthermore, the attenuation length for each of these nuclide production rates increases with altitude, and again, they increase at different rates. This has the consequence of changing the production rate ratio as a function of altitude. The Earth's geomagnetic field differentially filters low-energy cosmic rays by deflecting them away
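
    The altitude effect described, production-rate ratios changing because attenuation lengths differ between nuclides, can be sketched with a simple exponential atmospheric-depth scaling (illustrative values, not the model's output):

```python
# Exponential atmospheric-depth scaling of nuclide production; sea-level
# rates and attenuation lengths below are illustrative placeholders.
import math

SEA_LEVEL_DEPTH = 1033.0   # g/cm^2

def production(depth, p_sea_level, attenuation_length):
    return p_sea_level * math.exp((SEA_LEVEL_DEPTH - depth) / attenuation_length)

for depth in (1033.0, 850.0, 700.0):            # ~0, ~1.6, ~3.2 km altitude
    p_a = production(depth, 4.0, 150.0)         # nuclide A (illustrative)
    p_b = production(depth, 12.0, 160.0)        # nuclide B (illustrative)
    print(f"depth {depth:6.1f} g/cm2: production ratio B/A = {p_b / p_a:.2f}")
```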

  2. Applying constraints on model-based methods: estimation of rate constants in a second order consecutive reaction.

    PubMed

    Kompany-Zareh, Mohsen; Khoshkam, Maryam

    2013-02-01

    This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA does not absorb in the visible region of interest, and thus the closure rank-deficiency problem did not exist. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system, using the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, the flexibility in applying different constraints and optimizing the initial-concentration estimates during the fitting procedure was investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters by applying appropriate constraints and adjustable initial concentrations of reagents. PMID:23220674
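
    A self-contained illustration of concentration-based fitting for a second-order consecutive scheme; the scheme, values, and noise level are assumptions for demonstration only:

```python
# Fit rate constants of A + B -k1-> I -k2-> P to (simulated) concentration data.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def kinetics(t, c, k1, k2):
    a, b, i, p = c
    r1, r2 = k1 * a * b, k2 * i
    return [-r1, -r1, r1 - r2, r2]

t_obs = np.linspace(0, 60, 40)
c0 = [1e-3, 1.2e-3, 0.0, 0.0]
k_true = (35.0, 0.08)                            # "unknown" rate constants
sol = solve_ivp(kinetics, (0, 60), c0, t_eval=t_obs, args=k_true, rtol=1e-8)
data = sol.y + np.random.default_rng(0).normal(0, 1e-6, sol.y.shape)

def residuals(k):
    fit = solve_ivp(kinetics, (0, 60), c0, t_eval=t_obs, args=tuple(k), rtol=1e-8)
    return (fit.y - data).ravel()

k_hat = least_squares(residuals, x0=[10.0, 0.5]).x
print("estimated k1, k2:", k_hat)
```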

  3. Strain-rate sensitivity of foam materials: A numerical study using 3D image-based finite element model

    NASA Astrophysics Data System (ADS)

    Sun, Yongle; Li, Q. M.; Withers, P. J.

    2015-09-01

    Realistic simulations are increasingly demanded to clarify the dynamic behaviour of foam materials because, on the one hand, the significant variability (e.g. a 20% scatter band) of foam properties and the lack of reliable dynamic test methods for foams make it particularly difficult to evaluate strain-rate sensitivity accurately in experiments, while on the other hand numerical models based on idealised cell structures (e.g. Kelvin and Voronoi) may not be sufficiently representative to capture the actual structural effect. To overcome these limitations, the strain-rate sensitivity of the compressive and tensile properties of closed-cell aluminium Alporas foam is investigated in this study by means of meso-scale realistic finite element (FE) simulations. The FE modelling method based on X-ray computed tomography (CT) images is introduced first, as well as its applications to foam materials. Then the compression and tension of Alporas foam at a wide variety of applied nominal strain-rates are simulated using an FE model constructed from the actual cell geometry obtained from the CT image. The strain-rate sensitivity of the compressive strength (collapse stress) and tensile strength (0.2% offset yield point) is evaluated considering different cell-wall material properties. The numerical results show that the rate dependence of the cell-wall material is the main cause of the strain-rate hardening of the compressive and tensile strengths at low and intermediate strain-rates. When the strain-rate is sufficiently high, shock compression is initiated, which significantly enhances the stress at the loading end and has a complicated effect on the stress at the supporting end. The plastic tensile wave effect is evident at high strain-rates, but shock tension cannot develop in Alporas foam due to the softening associated with the single fracture process zone occurring in the tensile response. In all cases the micro inertia of individual cell walls subjected to localised deformation is found to

  4. Modelling temperature-compensated physiological rates, based on the co-ordination of responses to temperature of developmental processes.

    PubMed

    Parent, B; Turc, O; Gibon, Y; Stitt, M; Tardieu, F

    2010-05-01

    Temperature fluctuates rapidly and affects all developmental and metabolic processes. This often obscures the effects of developmental trends or of other environmental conditions when temperature fluctuates naturally. A method is proposed for modelling temperature-compensated rates, based on the coordination of the temperature responses of developmental processes. In a data set comprising 41 experiments in the greenhouse, growth chamber, or field, the temperature responses in the range 6-36 °C of different processes were compared in three species: maize, rice, and Arabidopsis thaliana. Germination, cell division, expansive growth rate, leaf initiation, and phenology showed coordinated temperature responses and followed common laws within each species. The activities of 10 enzymes involved in carbon metabolism exhibited monotonic exponential responses across the whole 10-40 °C range. Hence, the temperature dependence of developmental processes is not explained by a simple relationship to central metabolism. Temperature-compensated rates of development were calculated from the response-curve equations, by expressing rates per unit equivalent time at 20 °C. This resulted in stable rates when temperatures fluctuated over a large range (for which classical thermal time was inefficient), and in time courses of leaf development that were common to several experiments with different temperature scenarios. PMID:20194927
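
    The equivalent-time idea reduces to weighting each real time step by f(T)/f(20 °C); an Arrhenius-type response f is assumed here purely for illustration:

```python
# Equivalent time at 20 C: each time step is weighted by f(T) / f(20).
import math

def f(T_celsius, E_a=65e3, R=8.314):
    """Assumed Arrhenius-type temperature response of development."""
    return math.exp(-E_a / (R * (T_celsius + 273.15)))

def equivalent_time_20C(hourly_temps):
    return sum(f(T) / f(20.0) for T in hourly_temps)   # hours at 20 C

day = [12, 10, 9, 11, 15, 19, 23, 26, 28, 27, 24, 20] * 2   # synthetic 24 h record
print(f"24 h of fluctuating T = {equivalent_time_20C(day):.1f} h at 20 C")
```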

  5. Relaxed Poisson cure rate models.

    PubMed

    Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N

    2016-03-01

    The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov, ) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin, ). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos, ) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al., ). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. PMID:26686485

  6. Toy Stories: Modeling Rates

    ERIC Educational Resources Information Center

    Swanson, Patricia E.

    2015-01-01

    Elementary school mathematics is increasingly recognized for its crucial role in developing the foundational skills and understandings for algebra. In this article, the author uses a lesson to introduce the concept of "rates"--comparing two different types and units of measure--and how to graph them. Described is the lesson and shared…

  7. The effect of an instructional program based on health belief model in decreasing cesarean rate among primiparous pregnant mothers

    PubMed Central

    Hassani, Laleh; Aghamolaei, Teamur; Ghanbarnejad, Amin; Dadipoor, Sakineh

    2016-01-01

    Introduction: Although cesarean section has saved many mothers' and infants' lives, the problem lies in its increasing prevalence. According to recent statistics, the current cesarean rate in Iran is in fact 3-4 times the standard rate defined by the WHO. Therefore, the present study aimed to estimate the effect of an instructional program based on the health belief model on reducing the cesarean rate among primiparous pregnant women. Materials and Methods: In this semi-experimental study, 60 primiparous women in their 26th-30th week of pregnancy who had visited Bandar Abbas Healthcare Centers were selected as subjects. They were selected by multi-stage cluster sampling (a combination of clustering and simple randomization) and divided into an experimental group and a control group. The data were gathered using a valid and reliable questionnaire. The instructional intervention, based on the constructs of the health belief model, was delivered in six sessions after completion of the pretest questionnaire. One month after the intervention, posttest questionnaires were completed by the subjects in both groups. The data were analyzed using descriptive statistics, standard deviations, independent t-tests, and paired t-tests. The significance level was set at <0.05. Results: The two groups showed significant differences in awareness score, perceived susceptibility, severity, benefits, barriers, self-efficacy, and performance (P < 0.001). In the experimental group, nine subjects (30%) had a natural delivery. Conclusion: According to the findings of the current research, an instructional program designed around the health belief model can significantly influence pregnant women's awareness, intention, and choice of delivery type. PMID:27512693

  8. Agent-based mathematical modeling as a tool for estimating Trypanosoma cruzi vector-host contact rates.

    PubMed

    Yong, Kamuela E; Mubayi, Anuj; Kribs, Christopher M

    2015-11-01

    The parasite Trypanosoma cruzi, spread by triatomine vectors, affects over 100 mammalian species throughout the Americas, including humans, in whom it causes Chagas' disease. In the U.S., only a few autochthonous cases have been documented in humans, but prevalence is high in sylvatic hosts (primarily raccoons in the southeast and woodrats in Texas). Sylvatic transmission of T. cruzi is driven by the vector species Triatoma sanguisuga and Triatoma gerstaeckeri biting their preferred hosts, thus creating multiple interacting vector-host cycles. The goal of this study is to quantify the rate of contacts between different host and vector species native to Texas using an agent-based model framework. The contact rates, which represent bites, are required to estimate transmission coefficients, which can be applied to models of infection dynamics. In addition to quantitative estimates, the results confirm host irritability (in conjunction with host density) and vector starvation thresholds and dispersal as determining factors for vector density as well as host-vector contact rates. PMID:26215127

  9. Modeling the shear rate and pressure drop in a hydrodynamic cavitation reactor with experimental validation based on KI decomposition studies.

    PubMed

    Badve, Mandar P; Alpar, Tibor; Pandit, Aniruddha B; Gogate, Parag R; Csoka, Levente

    2015-01-01

    A mathematical model describing the shear rate and pressure variation in the complex flow field created in a hydrodynamic cavitation reactor (a stator-rotor assembly) is presented in this study. The design of the reactor is such that the rotor is provided with surface indentations, and cavitational events are expected to occur on the surface of the rotor as well as within the indentations. The flow characteristics of the fluid have been investigated on the basis of high-accuracy compact difference schemes and the Navier-Stokes method. The evolution of streamlining structures during rotation, the pressure field, and the shear rate of a Newtonian fluid flow have been numerically established. The simulation results suggest that the characteristics of the shear rate and pressure area differ considerably depending on the rotational velocity of the rotor. It was observed that the area of the high-shear zone at the indentation leading edge shrinks with an increase in the rotational speed of the rotor, although the magnitude of the shear rate increases linearly. It is therefore concluded that higher rotational speeds of the rotor tend to stabilize the flow, which in turn results in less cavitational activity compared to that observed around 2200-2500 RPM. Experiments were carried out with an initial KI concentration of 2000 ppm. A maximum of 50 ppm of iodine liberation was observed at 2200 RPM. Experimental as well as simulation results indicate that the maximum cavitational activity occurs when the rotation speed is around 2200-2500 RPM. PMID:24924259

  10. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
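
    The analytical tractability comes from the Gaussian linear-inverse structure: for a fixed geometry the posterior over slip is Gaussian with closed-form mean and covariance, as in this toy sketch (random Green's functions and offsets, not real fault data):

```python
# Closed-form Gaussian posterior for linear slip inversion: with Green's
# functions G, data d, data covariance Cd, and prior slip covariance Cm,
# post_cov = (G' Cd^-1 G + Cm^-1)^-1 and post_mean = post_cov G' Cd^-1 d.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_patches = 60, 20
G = rng.normal(size=(n_data, n_patches))            # toy Green's functions
m_true = rng.exponential(0.5, size=n_patches)       # toy slip distribution
d = G @ m_true + rng.normal(0, 0.05, size=n_data)   # toy static offsets

Cd_inv = np.eye(n_data) / 0.05**2
Cm_inv = np.eye(n_patches) / 1.0**2                 # zero-mean slip prior

post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
post_mean = post_cov @ (G.T @ Cd_inv @ d)
print("max |posterior mean - true slip|:", np.abs(post_mean - m_true).max())
```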

  12. Towards optimal design of sport footwear based on muscle activity and minimum loading rate using simplified model.

    PubMed

    Khassetarash, Arash; Hassannejad, Reza

    2015-08-01

    Recently, researchers have been looking for a model to relate the ground reaction force to the vibration parameters of sport shoes. A 4-degrees-of-freedom mass-spring-damper model and a dimensionless objective function have been used in the literature to simulate the functionality of the central nervous system based on the muscle tuning paradigm. In this study, a two-phase optimization method was proposed in order to better simulate the functionality of the central nervous system. A comparison between the results of this method and those in the literature demonstrates greater accuracy. Additionally, it has been reported that the loading rate in the time history of the ground reaction force plays a role in overuse injuries of the tibia and tarsals; therefore, in order to propose an optimum design for sport shoes, a new objective function was introduced in this study and the proposed method was used to find the best possible range of shoe parameters for pre-fatigue and post-fatigue conditions. Results indicated that no possible solution brings the loading rate to the desired value; however, the region in which the lowest values of the objective function were achieved was designated the safe region. It was shown that the size of this safe region shrinks as muscles fatigue. The results also demonstrated a strong dependence of the safe region on the mass distribution and touchdown velocities of the lower extremities. PMID:26238788

  13. Beyond The Blueprint: Development Of Genome-Informed Trait-Based Models For Prediction Of Microbial Dynamics And Biogeochemical Rates

    NASA Astrophysics Data System (ADS)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Johnson, J. N.; Bouskill, N.; Hug, L. A.; Thomas, B. C.; Castelle, C. J.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.

    2014-12-01

    In soils and sediments, microorganisms perform essential ecosystem services through their roles in regulating the stability of carbon, the flux of nutrients, and the purification of water. But these are complex systems, with the physical, chemical and biological components all intimately connected. Components of this complexity are gradually being uncovered, and our understanding of the extent of microbial functional diversity in particular has been greatly enhanced by the development of cultivation-independent approaches. However, we have not moved far beyond a descriptive and correlative use of this powerful resource. As the ability to reconstruct thousands of genomes from microbial populations using metagenomic techniques gains momentum, the challenge will be to develop an understanding of how these metabolic blueprints serve to influence the fitness of organisms within these complex systems and how populations emerge and impact the physical and chemical properties of their environment. In this presentation we will discuss the development of a trait-based model of microbial activity that simulates coupled guilds of microorganisms parameterized with traits extracted from large-scale metagenomic data. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor and acceptor reactions to predict the energy available for respiration, biomass development and exo-enzyme production. Each group within a functional guild is parameterized with a unique combination of traits governing organism fitness under dynamic environmental conditions. This presentation will address our latest developments in the estimation of trait values related to growth rate and the identification and linkage of key fitness traits associated with respiratory and fermentative pathways, macromolecule depolymerization enzymes and nitrogen fixation from metagenomic data. We are testing model sensitivity to initial microbial composition and intra

  14. Calcium-based plasticity model explains sensitivity of synaptic changes to spike pattern, rate, and dendritic location.

    PubMed

    Graupner, Michael; Brunel, Nicolas

    2012-03-01

    Multiple stimulation protocols have been found to be effective in changing synaptic efficacy by inducing long-term potentiation or depression. In many of those protocols, increases in postsynaptic calcium concentration have been shown to play a crucial role. However, it is still unclear whether and how the dynamics of the postsynaptic calcium alone determine the outcome of synaptic plasticity. Here, we propose a calcium-based model of a synapse in which potentiation and depression are activated above calcium thresholds. We show that this model gives rise to a large diversity of spike timing-dependent plasticity curves, most of which have been observed experimentally in different systems. It accounts quantitatively for plasticity outcomes evoked by protocols involving patterns with variable spike timing and firing rate in hippocampus and neocortex. Furthermore, it allows us to predict that differences in plasticity outcomes in different studies are due to differences in parameters defining the calcium dynamics. The model provides a mechanistic understanding of how various stimulation protocols provoke specific synaptic changes through the dynamics of calcium concentration and thresholds implementing in simplified fashion protein signaling cascades, leading to long-term potentiation and long-term depression. The combination of biophysical realism and analytical tractability makes it the ideal candidate to study plasticity at the synapse, neuron, and network levels. PMID:22357758
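
    A stripped-down sketch of the threshold mechanism (calcium transients driving potentiation above theta_p and depression above theta_d); parameters are illustrative, and the model's bistability and noise terms are omitted:

```python
# Calcium-threshold plasticity sketch: pre/post spikes add calcium, and the
# synaptic efficacy rho drifts up above theta_p and down above theta_d.
import numpy as np

dt, T = 1e-4, 2.0
tau_ca, c_pre, c_post = 0.02, 0.6, 1.2           # calcium decay (s) and jump sizes
theta_d, theta_p = 1.0, 1.5                      # depression / potentiation thresholds
gamma_d, gamma_p, tau_rho = 200.0, 320.0, 10.0   # drift rates, efficacy time constant

pre_spikes = np.arange(0.1, T, 0.1)              # 10 Hz pairing protocol
post_spikes = pre_spikes + 0.01                  # post fires 10 ms after pre

c, rho = 0.0, 0.5
for step in range(int(T / dt)):
    t = step * dt
    c *= np.exp(-dt / tau_ca)                    # calcium decay
    c += c_pre * np.any(np.abs(pre_spikes - t) < dt / 2)
    c += c_post * np.any(np.abs(post_spikes - t) < dt / 2)
    drho = gamma_p * (1 - rho) * (c >= theta_p) - gamma_d * rho * (c >= theta_d)
    rho += dt * drho / tau_rho
print(f"final synaptic efficacy: {rho:.3f} (started at 0.5)")
```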

  15. Event-based minimum-time control of oscillatory neuron models: phase randomization, maximal spike rate increase, and desynchronization.

    PubMed

    Danzl, Per; Hespanha, João; Moehlis, Jeff

    2009-12-01

    We present an event-based feedback control method for randomizing the asymptotic phase of oscillatory neurons. Phase randomization is achieved by driving the neuron's state to its phaseless set, a point at which its phase is undefined and is extremely sensitive to background noise. We consider the biologically relevant case of a fixed magnitude constraint on the stimulus signal, and show how the control objective can be accomplished in minimum time. The control synthesis problem is addressed using the minimum-time-optimal Hamilton-Jacobi-Bellman framework, which is quite general and can be applied to any spiking neuron model in the conductance-based Hodgkin-Huxley formalism. We also use this methodology to compute a feedback control protocol for optimal spike rate increase. This framework provides a straightforward means of visualizing isochrons, without actually calculating them in the traditional way. Finally, we present an extension of the phase randomizing control scheme that is applied at the population level, to a network of globally coupled neurons that are firing in synchrony. The applied control signal desynchronizes the population in a demand-controlled way. PMID:19911192

  16. Investigation of the mechanical behavior of kangaroo humeral head cartilage tissue by a porohyperelastic model based on the strain-rate-dependent permeability.

    PubMed

    Thibbotuwawa, Namal; Oloyede, Adekunle; Senadeera, Wijitha; Li, Tong; Gu, YuanTong

    2015-11-01

    Solid-interstitial fluid interaction, which depends on tissue permeability, is significant to the strain-rate-dependent mechanical behavior of humeral head (shoulder) cartilage. Due to anatomical and biomechanical similarities to the human shoulder, kangaroos present a suitable animal model. Therefore, indentation experiments were conducted on kangaroo shoulder cartilage tissues from low (10^-4/s) to moderately high (10^-2/s) strain-rates. A porohyperelastic model was developed based on the experimental characterization, and a permeability function that takes into account the effect of strain-rate on permeability (strain-rate-dependent permeability) was introduced into the model to investigate the effect of rate-dependent fluid flow on tissue response. The prediction of the model with the strain-rate-dependent permeability was compared with those of the models using constant permeability and strain-dependent permeability. Compared to the model with constant permeability, the models with strain-dependent and strain-rate-dependent permeability were able to better capture the experimental variation at all strain-rates (p < 0.05). Significant differences were not identified between the models with strain-dependent and strain-rate-dependent permeability at a strain-rate of 5 × 10^-3/s (p = 0.179). However, at a strain-rate of 10^-2/s, the model with strain-rate-dependent permeability was significantly better at capturing the experimental results (p < 0.005). The findings thus revealed the significance of rate-dependent fluid flow on tissue behavior at large strain-rates, which provides insights into the mechanical deformation mechanisms of cartilage tissues. PMID:26275487

  17. Gas ultrasonic flow rate measurement through genetic-ant colony optimization based on the ultrasonic pulse received signal model

    NASA Astrophysics Data System (ADS)

    Hou, Huirang; Zheng, Dandan; Nie, Laixiao

    2015-04-01

    For gas ultrasonic flowmeters, the signals received by the ultrasonic sensors are susceptible to noise interference. If the signals are mingled with noise, a large error in flow measurement can be caused by mistaken triggering with the traditional double-threshold method. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received-signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, an improvement of processing only the first three cycles of the received signals rather than the whole signal is proposed. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, the double-threshold method and enveloped zero-crossing. Local convergence does not appear with the GACO algorithm until the SNR drops to -10 dB. For the GACO algorithm, the convergence accuracy, convergence speed and amount of computation are further improved when using only the first three cycles (GACO-3cycles). Experimental results involving actual received signals show that the accuracy of single-gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.

  18. The degree of heart rate asymmetry is crucial for the validity of the deceleration and acceleration capacity indices of heart rate: A model-based study.

    PubMed

    Pan, Qing; Zhou, Gongzhan; Wang, Ruofan; Yu, Yihua; Li, Feng; Fang, Luping; Yan, Jing; Ning, Gangmin

    2016-09-01

    The deceleration capacity (DC) and acceleration capacity (AC) of heart rate are a pair of indices used for evaluating the autonomic nervous system (ANS). We assessed the role of heart rate asymmetry (HRA) in defining the relative performance of DC and AC using a mathematical model, which is able to generate a realistic RR interval (RRI) time series with controlled ANS states. The simulation produced a set of RRI series with random sympathetic and vagal activities. The multi-scale DCs and ACs were computed from the RRI series, and the correlation of DC and AC with the ANS functions was analyzed to evaluate the performance of the indices. In the model, the HRA level was modified by changing the inspiration/expiration (I/E) ratio to examine the influence of HRA on the performances of DC and AC. The results show that on the conventional scales (T=1, s=2), an HRA level above 50% results in a stronger association of DC with the ANS, compared with AC. On higher scales (T=4, s=6), there was no HRA and DC showed a similar performance to AC for all I/E ratios. The data suggest that the HRA level determines which of DC or AC is the optimal index for expressing ANS functions. Future clinical applications of DC and AC should be accompanied by an HRA analysis to provide a better index for assessing ANS. PMID:27392228
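
    For readers unfamiliar with DC/AC, the following is a minimal sketch of the standard phase-rectified signal averaging (PRSA) computation at the conventional scales (T=1, s=2); clinical implementations add artifact filtering, and the window length here is an illustrative choice.

```python
import numpy as np

def dc_ac(rr, L=5):
    """Deceleration/acceleration capacity via phase-rectified signal averaging.

    rr is an RR-interval series in ms; conventional scales T=1, s=2 are used.
    A minimal sketch -- real implementations filter ectopic beats first.
    """
    rr = np.asarray(rr, dtype=float)

    def capacity(anchors):
        segs = [rr[i - L:i + L] for i in anchors if L <= i <= rr.size - L]
        x = np.mean(segs, axis=0)  # phase-rectified average around the anchors
        # Haar-wavelet coefficient at scale s=2 around the anchor point:
        return (x[L] + x[L + 1] - x[L - 1] - x[L - 2]) / 4.0

    dec = np.where(rr[1:] > rr[:-1])[0] + 1  # T=1 anchors: interval longer than previous
    acc = np.where(rr[1:] < rr[:-1])[0] + 1
    return capacity(dec), capacity(acc)

rng = np.random.default_rng(1)
rr = 800 + np.cumsum(rng.standard_normal(5000))  # toy RR series
print(dc_ac(rr))
```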

  19. Reliability of delirium rating scale (DRS) and delirium rating scale-revised-98 (DRS-R98) using variance-based multivariate modelling.

    PubMed

    Adamis, Dimitrios; Slor, Chantal J; Leonard, Maeve; Witlox, Joost; de Jonghe, Jos F M; Macdonald, Alastair J D; Trzepacz, Paula; Meagher, David

    2013-07-01

    Delirium's characteristic fluctuation in symptom severity complicates the assessment of the test-retest reliability of scales using classical analyses, but applying modelling to longitudinal data offers a new approach. We evaluated the test-retest reliability of the delirium rating scale (DRS) and the delirium rating scale-revised-98 (DRS-R98), two widely used instruments with high validity and inter-rater reliability. Two existing longitudinal datasets for each scale included DSM-IV criteria for delirium diagnosis and repeated measurements using the DRS or DRS-R98. To estimate the reliability coefficients RT and RΛ for each scale we used macros provided by Dr. Laenen at http://www.ibiostat.be/software/measurement.asp. For each dataset a linear mixed-effects model was fitted to estimate the variance-covariance parameters. A total of 531 cases, with between 4 and 9 measurement points across studies, were included, covering both delirious and non-delirious patients. Comorbid dementia in the datasets varied from 27% to 55%. Overall RT values for the DRS were 0.71 and 0.50, and for the DRS-R98, 0.75 and 0.84. RΛ values for the DRS were 0.99 and 0.98, and for the DRS-R98, 0.92 and 0.96. Individual RT measures for the DRS-R98 and DRS across visits within studies showed a wider range than the overall values. Our models found high overall reliability for both scales. Multiple factors, in addition to rater variability, impact a scale's reliability values, including sample size, number of repeated measurements and patient population. PMID:23522935

  20. An Extension to the Constructivist Coding Hypothesis as a Learning Model for Selective Feedback when the Base Rate Is High

    ERIC Educational Resources Information Center

    Ghaffarzadegan, Navid; Stewart, Thomas R.

    2011-01-01

    Elwin, Juslin, Olsson, and Enkvist (2007) and Henriksson, Elwin, and Juslin (2010) offered the constructivist coding hypothesis to describe how people code the outcomes of their decisions when availability of feedback is conditional on the decision. They provided empirical evidence only for the 0.5 base rate condition. This commentary argues that…

  1. [Prediction model of net photosynthetic rate of ginseng under forest based on optimized parameters support vector machine].

    PubMed

    Wu, Hai-wei; Yu, Hai-ye; Zhang, Lei

    2011-05-01

    Using the K-fold cross-validation method with two support vector machine formulations (epsilon-SVR and nu-SVR), four kernel functions, grid search, a genetic algorithm and particle swarm optimization, the authors constructed support vector machine models with the best penalty parameter c and the best correlation coefficient. Using information granulation technology, the authors constructed P particles and epsilon particles for the factors affecting net photosynthetic rate, thereby reducing the dimensionality of the problem. The P particle comprises the percentages of the visible spectrum components; the epsilon particle comprises leaf temperature, scattered radiation, air temperature, and so on. This technology makes it possible to obtain the best correlation coefficient among photosynthetically effective radiation, the visible spectrum and the individual net photosynthetic rate. The authors constructed a training set and a forecasting set including photosynthetically effective radiation, the P particle and the epsilon particle. The results show that the epsilon-SVR-RBF-genetic algorithm model, the nu-SVR-linear-grid-search model and the nu-SVR-RBF-genetic algorithm model achieve correlation coefficients of up to 97% on the forecasting set including photosynthetically effective radiation and the P particle. The penalty parameter c of the nu-SVR-linear-grid-search model is the smallest, so that model's generalization ability is the best. The authors used it to forecast the set including photosynthetically effective radiation, the P particle and the epsilon particle, and the correlation coefficient reached 96%. PMID:21800612
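
    A minimal sketch of the kernel/penalty-parameter search described above, using scikit-learn's SVR with K-fold cross-validation on synthetic stand-in data; the feature columns, target function and grid values are assumptions, not the study's measured variables.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

# Toy stand-in for the measured factors (columns are hypothetical): effective
# radiation, leaf temperature, scattered radiation, air temperature.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))
y = 5 * X[:, 0] + 2 * np.sin(3 * X[:, 1]) + rng.normal(0, 0.1, 200)  # net photosynthetic rate

# Grid search over penalty parameter C and RBF width gamma with K-fold CV,
# mirroring the epsilon-SVR-RBF / grid-search variant named in the abstract.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]},
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="r2",
)
grid.fit(X, y)
print("best C/gamma:", grid.best_params_, "CV r2:", round(grid.best_score_, 3))
```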

  2. Further tests of a model-based scheme for predicting pilot opinion ratings for large commercial transports

    NASA Technical Reports Server (NTRS)

    Rickard, W. W.; Levison, W. H.

    1981-01-01

    A methodology was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft on the basis of closed-loop criteria. Six longitudinal-axis approach configurations were studied covering a range of handling quality problems that included the presence of flexible aircraft modes. Using closed-loop performance requirements derived from task analyses and pilot interviews, predictions of performance/workload tradeoffs were obtained using an analytical pilot/vehicle model. A subsequent manned simulation study yielded objective performance measures and Cooper-Harper pilot ratings that were largely consistent with each other and with analytic predictions.

  3. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    47 Telecommunication 3 (2011-10-01): Rate base. Section 65.800, FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), COMMON CARRIER SERVICES (CONTINUED), INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES, Rate Base. § 65.800 Rate base. The rate base...

  4. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    47 Telecommunication 3 (2010-10-01): Rate base. Section 65.800, FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), COMMON CARRIER SERVICES (CONTINUED), INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES, Rate Base. § 65.800 Rate base. The rate base...

  5. A fuzzy-logic-based model to predict biogas and methane production rates in a pilot-scale mesophilic UASB reactor treating molasses wastewater.

    PubMed

    Turkdogan-Aydinol, F Ilter; Yetilmezsoy, Kaan

    2010-10-15

    A MIMO (multiple-input, multiple-output) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables (volumetric organic loading rate (OLR), volumetric total chemical oxygen demand (TCOD) removal rate (R_V), influent alkalinity, influent pH and effluent pH) were fuzzified using an artificial-intelligence-based approach. Trapezoidal membership functions with eight levels were constructed for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in the IF-THEN format. The product (prod) and the centre of gravity (COG, centroid) methods were employed as the inference operator and the defuzzification method, respectively. Fuzzy-logic predictions were compared with the outputs of two exponential non-linear regression models derived in this study. The UASB reactor showed remarkable performance in the treatment of molasses wastewater, with an average TCOD removal efficiency of 93 (+/-3)% and an average volumetric TCOD removal rate of 6.87 (+/-3.93) kg TCOD removed/m^3 per day. The findings of this study clearly indicated that, compared to the non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited superior predictive performance in forecasting both biogas and methane production rates, with satisfactory determination coefficients over 0.98. PMID:20609515
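
    To make the Mamdani pipeline concrete, here is a toy two-rule sketch with trapezoidal memberships, the product inference operator and centre-of-gravity defuzzification; the universes, membership breakpoints and rules are invented for illustration (the actual model uses five inputs and 134 rules).

```python
import numpy as np

def trap(x, a, b, c, d):
    """Trapezoidal membership function with feet a, d and shoulders b, c."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12), (d - x) / (d - c + 1e-12)), 0, 1)

# Output universe: biogas production rate (hypothetical units, m^3/day).
y = np.linspace(0, 60, 601)
low_out, high_out = trap(y, 0, 5, 15, 25), trap(y, 25, 35, 45, 60)

def infer(olr, ph):
    # Two illustrative IF-THEN rules:
    #   IF OLR is high AND pH is neutral THEN biogas is high
    #   IF OLR is low  OR  pH is acidic  THEN biogas is low
    olr_low, olr_high = trap(olr, 0, 0, 2, 4), trap(olr, 3, 5, 8, 10)
    ph_acid, ph_neut = trap(ph, 4, 4, 5.5, 6.5), trap(ph, 6.2, 6.8, 7.4, 8.0)
    w_high = olr_high * ph_neut                      # product inference operator
    w_low = max(olr_low, ph_acid)
    agg = np.maximum(w_high * high_out, w_low * low_out)   # aggregate scaled consequents
    return np.sum(y * agg) / (np.sum(agg) + 1e-12)         # centre-of-gravity defuzzification

print(infer(olr=6.0, ph=7.1))  # lands in the 'high' region of the output universe
```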

  6. Modeling kinetic rate variation in third generation DNA sequencing data to detect putative modifications to DNA bases.

    PubMed

    Schadt, Eric E; Banerjee, Onureena; Fang, Gang; Feng, Zhixing; Wong, Wing H; Zhang, Xuegong; Kislyuk, Andrey; Clark, Tyson A; Luong, Khai; Keren-Paz, Alona; Chess, Andrew; Kumar, Vipin; Chen-Plotkin, Alice; Sondheimer, Neal; Korlach, Jonas; Kasarskis, Andrew

    2013-01-01

    Current generation DNA sequencing instruments are moving closer to seamlessly sequencing genomes of entire populations as a routine part of scientific investigation. However, while significant inroads have been made identifying small nucleotide variation and structural variations in DNA that impact phenotypes of interest, progress has not been as dramatic regarding epigenetic changes and base-level damage to DNA, largely due to technological limitations in assaying all known and unknown types of modifications at genome scale. Recently, single-molecule real time (SMRT) sequencing has been reported to identify kinetic variation (KV) events that have been demonstrated to reflect epigenetic changes of every known type, providing a path forward for detecting base modifications as a routine part of sequencing. However, to date no statistical framework has been proposed to enhance the power to detect these events while also controlling for false-positive events. By modeling enzyme kinetics in the neighborhood of an arbitrary location in a genomic region of interest as a conditional random field, we provide a statistical framework for incorporating kinetic information at a test position of interest as well as at neighboring sites that help enhance the power to detect KV events. The performance of this and related models is explored, with the best-performing model applied to plasmid DNA isolated from Escherichia coli and mitochondrial DNA isolated from human brain tissue. We highlight widespread kinetic variation events, some of which strongly associate with known modification events, while others represent putative chemically modified sites of unknown types. PMID:23093720

  7. Base Rates: Both Neglected and Intuitive

    ERIC Educational Resources Information Center

    Pennycook, Gordon; Trippas, Dries; Handley, Simon J.; Thompson, Valerie A.

    2014-01-01

    Base-rate neglect refers to the tendency for people to underweight base-rate probabilities in favor of diagnostic information. It is commonly held that base-rate neglect occurs because effortful (Type 2) reasoning is required to process base-rate information, whereas diagnostic information is accessible to fast, intuitive (Type 1) processing…

  8. A process-based model to estimate gas exchange and monoterpene emission rates in the mediterranean maquis - comparisons between modelled and measured fluxes at different scales

    NASA Astrophysics Data System (ADS)

    Vitale, M.; Matteucci, G.; Fares, S.; Davison, B.

    2009-02-01

    This paper concerns the application of a process-based model (MOCA, Modelling of Carbon Assessment) as a useful tool for estimating gas exchange, and integrating empirical algorithms for the calculation of monoterpene fluxes, in a Mediterranean maquis of central Italy (Castelporziano, Rome). Simulations were carried out for a range of hypothetical but realistic canopies of the evergreen Quercus ilex (holm oak), Arbutus unedo (strawberry tree) and Phillyrea latifolia. Moreover, the dependence of canopy-scale monoterpene fluxes on total leaf area and leaf distribution has been considered in the algorithms. Simulation of the gas exchange rates showed higher values for P. latifolia and A. unedo (2.39±0.30 and 3.12±0.27 gC m^-2 d^-1, respectively) than for Q. ilex (1.67±0.08 gC m^-2 d^-1) during the measuring campaign (May-June). The average Gross Primary Production (GPP) values agreed well with those measured by eddy covariance (7.98±0.20 and 6.00±1.46 gC m^-2 d^-1, respectively, in May-June), although some differences (of about 30%) were evident in a point-to-point comparison. These differences could be explained by the non-uniformity of the measuring site, where diurnal winds blowing from the S-SW direction affect the calculation of CO2 and water fluxes. The introduction of some structural parameters into the algorithms for monoterpene calculation made it possible to simulate monoterpene emission rates and fluxes that accorded with those measured (6.50±2.25 vs. 9.39±4.5 μg g^-1 DW h^-1 for Q. ilex, and 0.63±0.207 vs. 0.98±0.30 μg g^-1 DW h^-1 for P. latifolia). Some constraints of the MOCA model are discussed, but it is demonstrated to be a useful tool for simulating physiological processes and BVOC fluxes under complicated plant distributions and environmental conditions, while requiring only a small number of input data.
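
    Monoterpene emission algorithms of this kind are typically built on the Guenther-type exponential temperature response; a minimal sketch follows, with E_s set near the Q. ilex emission value quoted above and beta taken as a typical literature value (both are assumptions here, not the paper's fitted parameters).

```python
import numpy as np

def monoterpene_emission(T, E_s=6.5, beta=0.09, T_s=303.15):
    """Guenther-type temperature algorithm: E = E_s * exp(beta * (T - T_s)).

    E_s  : emission at the standard temperature T_s (ug g^-1 DW h^-1), assumed
    beta : temperature sensitivity (K^-1), typical literature value
    """
    return E_s * np.exp(beta * (T - T_s))

# Canopy-scale fluxes then require weighting leaf-level emission by the leaf
# area distribution, which is what the modified algorithms in the paper add.
for T in (293.15, 303.15, 308.15):
    print(T, "K ->", monoterpene_emission(T).round(2), "ug g^-1 DW h^-1")
```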

  9. [A model-based meta-analysis to compare urate-lowering response rate of febuxostat and allopurinol in gout patient].

    PubMed

    Sun, Yi; Li, Liang; Zhou, Tian-Yan; Lu, Wei

    2014-12-01

    This study aims to compare the urate-lowering response rates of febuxostat and allopurinol in gout patients using a model-based meta-analysis. The literature search identified 22 clinical trials of gout with a total of 43 unique treatment arms that met our inclusion criteria, and a total of 6365 gout patients were included in the study. The response rates of allopurinol and febuxostat were characterized by a Tmax model and an Emax model, respectively, and the effects of baseline serum uric acid (sUA) and patient type on the drug effect were tested. The results showed that allopurinol can reach an average maximum response rate of 50.8%, while febuxostat can reach a 100% response rate within a very short time, with an ED50 of 34.3 mg. Covariate analysis revealed that baseline sUA has a negative effect on the response rate of allopurinol and a positive effect on the predicted ED50 of febuxostat. For patients who had shown an inadequate response to prior allopurinol treatment, the average response rate was about half that of allopurinol-responder patients. PMID:25920196
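
    A minimal sketch of fitting the Emax dose-response form used for febuxostat, via scipy's curve_fit on invented arm-level data; a real model-based meta-analysis would weight arms by sample size and model between-study variability.

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, emax_, ed50):
    # Emax dose-response model for the urate-lowering response rate (%).
    return emax_ * dose / (ed50 + dose)

# Hypothetical arm-level data: daily dose (mg) vs observed response rate (%).
dose = np.array([20, 40, 80, 120, 240], dtype=float)
resp = np.array([35, 55, 70, 78, 88], dtype=float)

popt, pcov = curve_fit(emax, dose, resp, p0=[100.0, 40.0])
print("Emax = %.1f%%, ED50 = %.1f mg" % tuple(popt))
```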

  10. Rate my data: a hierarchical approach to quantifying the relative value of ecological data for the development of process-based models of the terrestrial carbon cycle

    NASA Astrophysics Data System (ADS)

    Keenan, T. F.; Richardson, A. D.; Davidson, E. A.; Munger, J. W.

    2011-12-01

    The proliferation of ecological observation networks over the past two decades has led to the accumulation of large amounts of data at different spatial and temporal scales. Process-based models of the terrestrial carbon cycle have been adopted as the most effective way of scaling this point-based information through space and time. Given the large amounts of data available, model developers have begun to update the statistical and analytical tools they use, relying more heavily on techniques such as data mining and model-data fusion. Such techniques are useful in that they can use all available measurements simultaneously to give a more complete integration of models with data, shedding light on model weaknesses and highlighting model aspects in need of further development. Although modelers and organizers of measurement campaigns are focused on similar questions of terrestrial carbon cycling, cooperative efforts between the two are rare. Modelers generally use a limited set of measurements, with large assumptions as to which measurements are most effective in reducing uncertainty in model projections. On the other hand, those involved in field work are often motivated by hypothesis-driven science and commonly do not have information as to which measurements would be most useful for modelers. This lack of information flow between the two communities is clearly sub-optimal. Here we address this problem by providing a hierarchical rating of the value of different data sources for reducing uncertainty in model estimates of terrestrial carbon cycling. We do so using a model-data fusion framework to iteratively integrate different data streams (both real data from Harvard Forest, MA, USA, and synthetic data) with a process-based model of terrestrial carbon cycling. At each stage, the data source that leads to the greatest reduction in uncertainty in model projections is retained, and the additional benefit of each other data stream is tested independently. This process is

  11. A physics-based two-dimensional comprehensive mathematical model to predict non-uniform regression rate in solid fuels for hybrid rocket motors

    NASA Astrophysics Data System (ADS)

    Antoniou, Antonis

    A numerical study using a comprehensive physics-based mathematical model is conducted to predict the fuel regression rate in hybrid rocket fuels. The physical model adopted for the study is based on an unsteady, two-domain concept (solid fuel and gaseous oxidizer coupled through a moving interface) in which both domains are assumed to be two-dimensional. The oxidizer gas flow is assumed to be compressible and turbulent under the Navier-Stokes assumptions. Radiative heat transfer is incorporated into the energy equation for the gas domain using the Rosseland diffusion approximation. The fuel is assumed to be a non-transparent isotropic solid. The two domains are coupled through an energy balance at the interface that includes heat transfer due to radiation, conduction and ablation. The regression rate of the fuel surface due to ablation is modeled using a first-order Arrhenius equation. The combustion of the ablated fuel is modeled by a single-step, three-species chemical reaction equation of second-order Arrhenius type. The governing differential equations of the present model are solved by first transforming the solution domain using a time- and space-dependent transformation. In the gas domain, the transformed set of differential equations is discretized by a fully implicit finite-difference technique and then linearized using the Newton linearization method. The resulting set of algebraic equations is solved by the Coupled Modified Strongly Implicit Procedure (CMSIP) for the primitive variables of the problem. The solution algorithm and the CMSIP developed for this study are validated using two benchmark cases: driven-cavity flow and flow through a channel. Furthermore, the results of the comprehensive model are compared to those of a parabolic incompressible model. Finally, the proposed comprehensive mathematical model is used to predict the unsteady temperature and pressure distributions, and the velocity field in the gas
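
    The ablation boundary condition above reduces to a one-line Arrhenius law; the sketch below evaluates it with placeholder kinetic constants (real fuels are characterized experimentally, so both A and E_a here are assumptions).

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol K)

def regression_rate(T_s, A=1.2e4, E_a=1.0e5):
    """First-order Arrhenius pyrolysis law for the fuel surface: rdot = A exp(-Ea/(R Ts)).

    A   : pre-exponential factor (mm/s), placeholder value
    E_a : activation energy (J/mol), placeholder value
    T_s : fuel surface temperature (K)
    """
    return A * np.exp(-E_a / (R_GAS * T_s))

for T_s in (700.0, 800.0, 900.0):
    print(T_s, "K ->", regression_rate(T_s), "mm/s")
```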

  12. Rating Scale Analysis with Latent Class Models.

    ERIC Educational Resources Information Center

    Rost, Jurgen

    1988-01-01

    A general approach for analyzing rating data with latent class models is described, paralleling rating models in the framework of latent trait theory. A general rating model and a two-parameter model with location and dispersion parameters are derived and illustrated. (Author/SLD)

  13. Influence of the formation- and passivation rate of boron-oxygen defects for mitigating carrier-induced degradation in silicon within a hydrogen-based model

    NASA Astrophysics Data System (ADS)

    Hallam, Brett; Abbott, Malcolm; Nampalli, Nitin; Hamer, Phill; Wenham, Stuart

    2016-02-01

    A three-state model is used to explore the influence of the defect formation and passivation rates of carrier-induced degradation related to boron-oxygen complexes in boron-doped p-type silicon solar cells within a hydrogen-based model. The model highlights that the inability to effectively mitigate carrier-induced degradation at elevated temperatures in previous studies is due to the limited availability of defects for hydrogen passivation, rather than to the defect passivation rate. An acceleration of the defect formation rate is also observed to increase both the effectiveness and the speed of carrier-induced degradation mitigation, whereas increases in the passivation rate do not substantially accelerate the hydrogen passivation process. For high-throughput mitigation of such carrier-induced degradation on finished solar cell devices, two key factors were found to be required: high-injection conditions (such as high-intensity illumination) to accelerate defect formation whilst simultaneously enabling rapid passivation of the formed defects, and a high temperature to accelerate both defect formation and defect passivation whilst still ensuring effective mitigation of carrier-induced degradation.
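
    The three-state picture can be sketched as a pair of sequential first-order reactions. The rate constants below are placeholders (in the paper they depend on injection level and temperature), but the sketch reproduces the qualitative point that defect availability, not the passivation step, limits the overall rate when passivation is fast.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative three-state kinetics: precursor (A) -> formed B-O defect (B) -> passivated (C).
k_form, k_pass = 0.05, 0.5  # 1/s, placeholder values with passivation much faster than formation

def rhs(t, y):
    A, B, C = y
    return [-k_form * A, k_form * A - k_pass * B, k_pass * B]

sol = solve_ivp(rhs, (0, 200), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 200, 5)
print(sol.sol(t).round(3))  # B stays small: formation, not passivation, limits the process
```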

  14. Disturbance Distance: Using a process based ecosystem model to estimate and map potential thresholds in disturbance rates that would give rise to fundamentally altered ecosystems

    NASA Astrophysics Data System (ADS)

    Dolan, K. A.; Hurtt, G. C.; Fisk, J.; Flanagan, S.; LePage, Y.; Sahajpal, R.

    2014-12-01

    Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. As recent studies highlight novel disturbance regimes resulting from pollution, invasive pests and climate change, there is a need to include these alterations in predictions of future forest function and structure. The Ecosystem Demography (ED) model is a mechanistic model of forest ecosystem dynamics in which individual-based forest dynamics can be efficiently implemented over regional to global scales thanks to advanced scaling methods. We utilize ED to characterize the sensitivity of potential vegetation structure and function to changes in rates of density-independent mortality. The disturbance rate within ED can be altered either directly or through the development of sub-models; disturbance sub-models in ED currently include fire, land use and hurricanes. We use a tiered approach to understand the sensitivity of North American ecosystems to changes in background density-independent mortality. Our first analyses were conducted at half-degree spatial resolution with a constant rate of disturbance in space and time, which was altered between runs. Annual climate was held constant at the site level and the land use and fire sub-models were turned off. Results showed an ~30% increase in non-forest area across the US when disturbance rates were changed from 0.6% a year to 1.2% a year, and a more than 3.5-fold increase in non-forest area when disturbance rates doubled again from 1.2% to 2.4%. Continued runs altered natural background disturbance rates with the existing fire and hurricane sub-models turned on, as well as historic and future land use. By quantifying differences between model outputs that characterize ecosystem structure and function related to the carbon cycle across the US, we

  15. A flexible cure rate model for spatially correlated survival data based on generalized extreme value distribution and Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Dey, Dipak K

    2016-09-01

    Our present work proposes a new survival model in a Bayesian context to analyze right-censored survival data for populations with a surviving fraction, assuming that the log failure time follows a generalized extreme value distribution. Many applications require more flexible modeling of covariate information than a simple linear or parametric form for all covariate effects. It is also necessary to include spatial variation in the model, since it is sometimes unexplained by the covariates considered in the analysis. Therefore, nonlinear covariate effects and spatial effects are incorporated into the systematic component of our model. Gaussian processes (GPs) provide a natural framework for modeling potentially nonlinear relationships and have recently become extremely powerful in nonlinear regression. Our proposed model adopts a semiparametric Bayesian approach by imposing a GP prior on the nonlinear structure of the continuous covariates. In consideration of data availability and computational complexity, the conditionally autoregressive distribution is placed on the region-specific frailties to handle spatial correlation. The flexibility and gains of our proposed model are illustrated through analyses of simulated data examples as well as a dataset involving a colon cancer clinical trial from the state of Iowa. PMID:27225466

  16. Gaussian Mixture Model of Heart Rate Variability

    PubMed Central

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of the sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons were also made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters. PMID:22666386
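
    A minimal sketch of the approach using scikit-learn's GaussianMixture on a synthetic RR-interval series; the three-component structure follows the abstract, while the data are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy RR-interval series (ms); real analyses use long ambulatory recordings.
rng = np.random.default_rng(0)
rr = np.concatenate([rng.normal(780, 20, 3000),
                     rng.normal(850, 35, 1500),
                     rng.normal(920, 50, 500)])[:, None]

gmm = GaussianMixture(n_components=3, random_state=0).fit(rr)
order = np.argsort(gmm.means_.ravel())
print("means (ms): ", gmm.means_.ravel()[order].round(1))
print("sigmas (ms):", np.sqrt(gmm.covariances_.ravel()[order]).round(1))
print("weights:    ", gmm.weights_[order].round(2))
```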

  17. QSAR models for oxidation of organic micropollutants in water based on ozone and hydroxyl radical rate constants and their chemical classification.

    PubMed

    Sudhakaran, Sairam; Amy, Gary L

    2013-03-01

    Ozonation is an oxidation process for the removal of organic micropollutants (OMPs) from water, and the chemical reaction is governed by second-order kinetics. An advanced oxidation process (AOP), wherein hydroxyl radicals (OH radicals) are generated, is more effective in removing a wider range of OMPs from water than direct ozonation. The second-order rate constants k_OH and k_O3 are good indices of oxidation efficiency, where higher rate constants indicate more rapid oxidation. In this study, quantitative structure-activity relationship (QSAR) models for O3 and AOP processes were developed, and the rate constants k_OH and k_O3 were predicted based on target compound properties. The k_O3 and k_OH values ranged from 5 × 10^-4 to 10^5 M^-1 s^-1 and from 0.04 × 10^9 to 18 × 10^9 M^-1 s^-1, respectively. Several molecular descriptors which potentially influence O3 and OH radical oxidation were identified and studied. The QSAR-defining descriptors were double-bond equivalence (DBE), ionisation potential (IP), electron affinity (EA) and the weakly-polar component of solvent-accessible surface area (WPSA), and the chemical and statistical significance of these descriptors is discussed. Multiple linear regression was used to build the QSAR models, resulting in high goodness-of-fit, r^2 (>0.75). The models were validated by internal and external validation along with residual plots. PMID:23260175
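
    A minimal sketch of the QSAR workflow: multiple linear regression of log-transformed rate constants on the four named descriptors. The descriptor values and targets below are invented for illustration, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training table: descriptors DBE, IP (eV), EA (eV), WPSA (A^2)
# against log10 of the OH-radical rate constant.
X = np.array([[4, 8.9, 0.6, 45], [6, 9.4, 1.1, 30], [2, 8.1, 0.2, 60],
              [8, 9.9, 1.5, 22], [5, 8.6, 0.8, 38], [3, 8.3, 0.4, 52]], float)
y = np.array([9.8, 9.2, 10.1, 8.7, 9.6, 9.9])  # log10 k_OH (invented)

mlr = LinearRegression().fit(X, y)
print("r^2 =", round(mlr.score(X, y), 3))   # QSAR acceptance typically requires r^2 > 0.75
print("coefficients:", mlr.coef_.round(3))  # sign/magnitude indicate descriptor influence
```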

  18. Fetal heart rate classification using generative models.

    PubMed

    Dash, Shishir; Quirk, J Gerald; Djurić, Petar M

    2014-11-01

    This paper presents novel methods for classification of fetal heart rate (FHR) signals into categories that are meaningful for clinical implementation. They are based on generative models (GMs) and Bayesian theory. Instead of using scalar features that summarize information obtained from long-duration data, the models allow for explicit use of feature sequences derived from local patterns of FHR evolution. We compare our methods with a deterministic expert system for classification and with a support vector machine approach that relies on system-identification and heart rate variability features. We tested the classifiers on 83 retrospectively collected FHR records, with the gold-standard true diagnosis defined using umbilical cord pH values. We found that our methods consistently performed as well as or better than these, suggesting that the use of GMs and the Bayesian paradigm can bring significant improvement to automatic FHR classification approaches. PMID:24951678

  19. The Potential Geographical Distribution of Bactrocera dorsalis (Diptera: Tephritidae) in China Based on Emergence Rate Model and ArcGIS

    NASA Astrophysics Data System (ADS)

    Wang, Ningbo; Li, Zhihong; Wu, Jiajiao; Rajotte, Edwin G.; Wan, Fanghao; Wang, Zhiling

    Precision agriculture is an important choice for the future of agriculture. Gradually changing the state of small-scale farmland production and the weak agricultural foundation in China is the basis for the development of precision agriculture. Considering the poverty of Chinese villages, the variability of farmland and the dominance of the small-scale peasant economy, this paper analyzes the adaptability of farmland landscape patterns to precision agriculture on the basis of the literature and a farmland landscape survey. Starting from the requirements of precision agricultural production, it puts forward standards for cultivated field scale and shape, farmland corridor structure, cultivated field matrix and farmland landscape protection, in order to make farmland landscapes suitable for precision agriculture and to provide references for the sustainable development of precision agriculture in China.

  20. An "Emergent Model" for Rate of Change

    ERIC Educational Resources Information Center

    Herbert, Sandra; Pierce, Robyn

    2008-01-01

    Does speed provide a "model for" rate of change in other contexts? Does JavaMathWorlds (JMW), animated simulation software, assist in the development of the "model for" rate of change? This project investigates the transference of understandings of rate gained in a motion context to a non-motion context. Students were 27 14-15 year old students at…

  1. Base Rates, Contingencies, and Prediction Behavior

    ERIC Educational Resources Information Center

    Kareev, Yaakov; Fiedler, Klaus; Avrahami, Judith

    2009-01-01

    A skew in the base rate of upcoming events can often provide a better cue for accurate predictions than a contingency between signals and events. The authors study prediction behavior and test people's sensitivity to both base rate and contingency; they also examine people's ability to compare the benefits of both for prediction. They formalize…

  2. Geodesy-based estimates of loading rates on faults beneath the Los Angeles basin with a new, computationally efficient method to model dislocations in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Rollins, C.; Argus, D. F.; Avouac, J. P.; Landry, W.; Barbot, S.

    2015-12-01

    North-south compression across the Los Angeles basin is accommodated by slip on thrust faults beneath the basin that may present significant seismic hazard to Los Angeles. Previous geodesy-based efforts to constrain the distributions and rates of elastic strain accumulation on these faults [Argus et al 2005, 2012] have found that the elastic model used has a first-order impact on the inferred distribution of locking and creep, underlining the need to accurately incorporate the laterally heterogeneous elastic structure and complex fault geometries of the Los Angeles basin into this analysis. We are using Gamra [Landry and Barbot, in prep.], a newly developed adaptive-meshing finite-difference solver, to compute elastostatic Green's functions that incorporate the full 3D regional elastic structure provided by the SCEC Community Velocity Model. Among preliminary results from benchmarks, forward models and inversions, we find that: 1) for a modeled creep source on the edge dislocation geometry from Argus et al [2005], the use of the SCEC CVM material model produces surface velocities in the hanging wall that are up to ~50% faster than those predicted in an elastic halfspace model; 2) in sensitivity-modulated inversions of the Argus et al [2005] GPS velocity field for slip on the same dislocation source, the use of the CVM deepens the inferred locking depth by ≥3 km compared to an elastic halfspace model; 3) when using finite-difference or finite-element models with Dirichlet boundary conditions (except for the free surface) for problems of this scale, it is necessary to set the boundaries at least ~100 km away from any slip source or data point to guarantee convergence within 5% of analytical solutions (a result which may be applicable to other static dislocation modeling problems and which may scale with the size of the area of interest). Here we will present finalized results from inversions of an updated GPS velocity field [Argus et al, AGU 2015] for the inferred

  3. Associations of Perfluoroalkyl Substances (PFAS) with Lower Birth Weight: An Evaluation of Potential Confounding by Glomerular Filtration Rate Using a Physiologically Based Pharmacokinetic Model (PBPK)

    PubMed Central

    Loccisano, Anne E.; Morken, Nils-Halvdan; Yoon, Miyoung; Wu, Huali; McDougall, Robin; Maisonet, Mildred; Marcus, Michele; Kishi, Reiko; Miyashita, Chihiro; Chen, Mei-Huei; Hsieh, Wu-Shiun; Andersen, Melvin E.; Clewell, Harvey J.; Longnecker, Matthew P.

    2015-01-01

    Background Prenatal exposure to perfluoroalkyl substances (PFAS) has been associated with lower birth weight in epidemiologic studies. This association could be attributable to glomerular filtration rate (GFR), which is related to PFAS concentration and birth weight. Objectives We used a physiologically based pharmacokinetic (PBPK) model of pregnancy to assess how much of the PFAS–birth weight association observed in epidemiologic studies might be attributable to GFR. Methods We modified a PBPK model to reflect the association of GFR with birth weight (estimated from three studies of GFR and birth weight) and used it to simulate PFAS concentrations in maternal and cord plasma. The model was run 250,000 times, with variation in parameters, to simulate a population. Simulated data were analyzed to evaluate the association between PFAS levels and birth weight due to GFR. We compared simulated estimates with those from a meta-analysis of epidemiologic data. Results The reduction in birth weight for each 1-ng/mL increase in simulated cord plasma for perfluorooctane sulfonate (PFOS) was 2.72 g (95% CI: –3.40, –2.04), and for perfluorooctanoic acid (PFOA) was 7.13 g (95% CI: –8.46, –5.80); results based on maternal plasma at term were similar. Results were sensitive to variations in PFAS level distributions and the strength of the GFR–birth weight association. In comparison, our meta-analysis of epidemiologic studies suggested that each 1-ng/mL increase in prenatal PFOS and PFOA levels was associated with 5.00 g (95% CI: –21.66, –7.78) and 14.72 g (95% CI: –8.92, –1.09) reductions in birth weight, respectively. Conclusion Results of our simulations suggest that a substantial proportion of the association between prenatal PFAS and birth weight may be attributable to confounding by GFR and that confounding by GFR may be more important in studies with sample collection later in pregnancy. Citation Verner MA, Loccisano AE, Morken NH, Yoon M, Wu H, Mc

  4. SEE Rate Estimation: Model Complexity and Data Requirements

    NASA Technical Reports Server (NTRS)

    Ladbury, Ray

    2008-01-01

    Statistical methods outlined in [Ladbury, TNS2007] can be generalized for Monte Carlo rate calculation methods. Two Monte Carlo approaches: a) the rate is based on a vendor-supplied (or reverse-engineered) model, with SEE testing and statistical analysis performed to validate the model; b) the rate is calculated based on a model fit to SEE data, with statistical analysis very similar to the case for CREME96. Information theory allows simultaneous consideration of multiple models with different complexities: a) the model with the lowest AIC usually has the greatest predictive power; b) model averaging using AIC weights may give better performance if several models have similarly good performance; and c) rates can be bounded for a given confidence level over multiple models, as well as over the parameter space of a model.
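
    A small sketch of the AIC-weight computation behind points (b) and (c); the candidate models and AIC values are hypothetical.

```python
import numpy as np

def aic_weights(aic):
    """Akaike weights: relative support for each candidate SEE-rate model."""
    aic = np.asarray(aic, float)
    d = aic - aic.min()          # AIC differences relative to the best model
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Hypothetical AIC values for, e.g., Weibull, lognormal, and bilinear cross-section fits.
w = aic_weights([102.3, 103.1, 110.8])
print(w.round(3))                # model-averaged rate: sum_i w[i] * rate_i
```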

  5. Modeling temporal sequences of cognitive state changes based on a combination of EEG-engagement, EEG-workload, and heart rate metrics.

    PubMed

    Stikic, Maja; Berka, Chris; Levendowski, Daniel J; Rubio, Roberto F; Tan, Veasna; Korszen, Stephanie; Barba, Douglas; Wurzer, David

    2014-01-01

    The objective of this study was to investigate the feasibility of physiological metrics such as ECG-derived heart rate and EEG-derived cognitive workload and engagement as potential predictors of performance on different training tasks. An unsupervised approach based on self-organizing neural network (NN) was utilized to model cognitive state changes over time. The feature vector comprised EEG-engagement, EEG-workload, and heart rate metrics, all self-normalized to account for individual differences. During the competitive training process, a linear topology was developed where the feature vectors similar to each other activated the same NN nodes. The NN model was trained and auto-validated on combat marksmanship training data from 51 participants that were required to make "deadly force decisions" in challenging combat scenarios. The trained NN model was cross validated using 10-fold cross-validation. It was also validated on a golf study in which additional 22 participants were asked to complete 10 sessions of 10 putts each. Temporal sequences of the activated nodes for both studies followed the same pattern of changes, demonstrating the generalization capabilities of the approach. Most node transition changes were local, but important events typically caused significant changes in the physiological metrics, as evidenced by larger state changes. This was investigated by calculating a transition score as the sum of subsequent state transitions between the activated NN nodes. Correlation analysis demonstrated statistically significant correlations between the transition scores and subjects' performances in both studies. This paper explored the hypothesis that temporal sequences of physiological changes comprise the discriminative patterns for performance prediction. These physiological markers could be utilized in future training improvement systems (e.g., through neurofeedback), and applied across a variety of training environments. PMID:25414629
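
    A compact sketch of a self-organizing map with a linear topology and a transition score of the kind described; the training schedule, node count and data are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def train_linear_som(X, n_nodes=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map with a linear (1-D) topology.

    X: rows are feature vectors (e.g., self-normalized EEG-engagement,
    EEG-workload and heart rate). Competitive learning pulls each winning
    node and its chain neighbors toward the sample.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_nodes, X.shape[1])) * 0.1
    pos = np.arange(n_nodes)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                 # decaying learning rate
        sigma = max(sigma0 * (1 - e / epochs), 0.5) # shrinking neighborhood
        for x in rng.permutation(X):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))         # best-matching unit
            h = np.exp(-((pos - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood on the chain
            W += lr * h[:, None] * (x - W)
    return W

def transition_score(X, W):
    """Sum of node-index jumps between consecutive activations (state-change measure)."""
    states = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
    return np.abs(np.diff(states)).sum()

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))   # toy stand-in for the three physiological metrics
W = train_linear_som(X)
print(transition_score(X, W))
```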

  7. Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.

    PubMed

    Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth

    2015-01-01

    This paper investigates the volatility of and conditional relationships among inflation rates, exchange rates and interest rates, and constructs models using multivariate GARCH DCC and BEKK specifications with Ghanaian data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi against the US dollar from 1990 to 2013 was 7,010.2%, and that the yearly weighted depreciation of the cedi against the US dollar for the period was 20.4%. There was evidence that a stable inflation rate does not imply that exchange rates and interest rates will be stable. Rather, when the cedi performs well on the forex market, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates, and the DCC model is robust for modelling the conditional and unconditional correlations among them. The BEKK model, which forecasted high exchange rate volatility for the year 2014, is very robust for modelling exchange rates in Ghana, and the mean equation of the DCC model is robust for forecasting inflation rates in Ghana. PMID:25741459
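
    As a univariate building block, the sketch below fits a GARCH(1,1) model with Student-t errors using the Python arch package on synthetic returns; full DCC/BEKK estimation is multivariate and typically proceeds in two steps (univariate GARCH fits, then correlation dynamics), which this sketch does not implement.

```python
import numpy as np
from arch import arch_model  # pip install arch

# Toy monthly log-returns standing in for cedi/USD exchange-rate changes;
# a real analysis would use the 1990-2013 Ghana series from the paper.
rng = np.random.default_rng(0)
r = rng.standard_t(df=6, size=288) * 0.02 * 100  # arch prefers percent units

am = arch_model(r, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.params)                                   # omega, alpha[1], beta[1]
print(res.forecast(horizon=12).variance.iloc[-1])   # 12-step-ahead variance forecast
```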

  8. Rate-based modeling of reactive absorption of CO2 and H2S into aqueous methyldiethanolamine

    SciTech Connect

    Pacheco, M.A.; Rochelle, G.T.

    1998-10-01

    A general framework was developed to model the transport processes that take place during reactive absorption when both rate- and equilibrium-controlled reactions occur in the liquid phase. This framework was applied to the selective absorption of H2S from fuel gas containing CO2 using aqueous methyldiethanolamine. A rate-based distillation column module was used for the column integration, and the Maxwell-Stefan and enhancement-factor theories were utilized. In packed columns, CO2 absorption is controlled by diffusion with fast chemical reactions; in trayed columns it is controlled primarily by physical absorption. Gas-film resistance is never significant for CO2 absorption. For H2S absorption, both gas- and liquid-film resistances are important, and diffusion of bisulfide controls the liquid-film resistance. Heat effects produce temperature bulges that can cause equilibrium pinches at the maximum temperature. This phenomenon gives an optimum packing height for H2S removal. Trayed columns are more selective than packed columns for H2S removal, primarily because of the larger number of liquid-film mass transfer units.

  9. Rate control algorithm based on frame complexity estimation for MVC

    NASA Astrophysics Data System (ADS)

    Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang

    2010-07-01

    Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC that improves the quadratic rate-distortion (R-D) model and reasonably allocates bit rate among views based on correlation analysis. The proposed algorithm consists of four levels that control the bit rate more accurately, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm can efficiently implement bit allocation and rate control according to the coding parameters.
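
    The quadratic R-D model can be inverted in closed form to pick a quantizer for a frame's bit budget; the sketch below does this with hypothetical parameter values (the paper's actual allocation spans four layers and correlation analysis between views).

```python
import math

def q_from_quadratic_rd(target_bits, mad, x1, x2):
    """Invert the quadratic R-D model R = x1*MAD/Q + x2*MAD/Q^2 for the quantizer Q.

    target_bits : bit budget for the frame (from the higher-layer allocation)
    mad         : predicted frame complexity (mean absolute difference)
    x1, x2      : model parameters, updated by regression after each coded frame
    """
    a, b, c = target_bits, -x1 * mad, -x2 * mad  # a*Q^2 + b*Q + c = 0
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# Hypothetical numbers: a more complex frame (larger MAD) gets a coarser quantizer
# for the same bit budget.
print(q_from_quadratic_rd(target_bits=120_000, mad=6.0, x1=9000.0, x2=40000.0))
```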

  10. Minority Utility Rate Design Assessment Model

    Energy Science and Technology Software Center (ESTSC)

    2003-01-20

    Econometric model simulates consumer demand response to various user-supplied, two-part tariff electricity rate designs and assesses their economic welfare impact on black, hispanic, poor and majority households.

  11. Single crystal plasticity by modeling dislocation density rate behavior

    SciTech Connect

    Hansen, Benjamin L; Bronkhorst, Curt; Beyerlein, Irene; Cerreta, E. K.; Dennis-Koller, Darcie

    2010-12-23

    The goal of this work is to formulate a constitutive model for the deformation of metals over a wide range of strain rates. Damage and failure of materials frequently occur at a variety of deformation rates within the same sample. The present state of the art in single crystal constitutive models relies on thermally-activated models, which are believed to become less reliable for problems exceeding strain rates of 10^4 s^-1. This talk presents work in which we extend the applicability of the single crystal model to the strain-rate region where dislocation drag is believed to dominate. The elastic model includes effects from volumetric change and pressure-sensitive moduli. The plastic model transitions from the low-rate thermally-activated regime to the high-rate drag-dominated regime. The direct use of dislocation density as a state parameter gives a measurable physical mechanism to strain hardening. Dislocation densities are separated according to type and given a systematic set of interaction rates, adaptable by type. The form of the constitutive model is motivated by previously published dislocation dynamics work which articulated important behaviors unique to high-rate response in fcc systems. The proposed material model incorporates thermal coupling. The hardening model tracks the varying dislocation population with respect to each slip plane and computes the slip resistance based on those values. Comparisons can be made between the responses of single crystals and polycrystals at a variety of strain rates. The material model is fit to copper.

  12. Base-Rate Neglect as a Function of Base Rates in Probabilistic Contingency Learning

    ERIC Educational Resources Information Center

    Kutzner, Florian; Freytag, Peter; Vogel, Tobias; Fiedler, Klaus

    2008-01-01

    When humans predict criterion events based on probabilistic predictors, they often lend excessive weight to the predictor and insufficient weight to the base rate of the criterion event. In an operant analysis, using a matching-to-sample paradigm, Goodie and Fantino (1996) showed that humans exhibit base-rate neglect when predictors are associated…

  13. Development of experimental design approach and ANN-based models for determination of Cr(VI) ions uptake rate from aqueous solution onto the solid biodiesel waste residue.

    PubMed

    Shanmugaprakash, M; Sivakumar, V

    2013-11-01

    In the present work, the capacities of two optimization methodologies, RSM and ANN, were employed and compared for prediction of the Cr(VI) uptake rate using defatted pongamia oil cake (DPOC) in both batch and column modes. The influence of the operating parameters was investigated through a central composite design (CCD) of RSM using Design Expert 8.0.7.1 software. The same data were fed as input to an ANN to train multilayer feed-forward networks with the back-propagation algorithm in MATLAB. The performance of the developed ANN models was compared with that of the RSM mathematical models for the Cr(VI) uptake rate in terms of the coefficient of determination (R^2), root mean square error (RMSE) and absolute average deviation (AAD). The estimated values confirm that the ANN outperforms RSM, demonstrating the superiority of trained ANN models over RSM models in capturing the non-linear behavior of the given system. PMID:24080294

  14. Introducing AORN's new model for evidence rating.

    PubMed

    Spruce, Lisa; Van Wicklin, Sharon A; Hicks, Rodney W; Conner, Ramona; Dunn, Debra

    2014-02-01

    Nurses today are expected to implement evidence-based practices in the perioperative setting to assess and implement practice changes. All evidence-based practice begins with a question, a practice problem to address, or a needed change that is identified. To assess the question, a literature search is performed and relevant literature is identified and appraised. The types of evidence used to inform practice can be scientific research (eg, randomized controlled trials, systematic reviews) or nonresearch evidence (eg, regulatory and accrediting agency requirements, professional association practice standards and guidelines, quality improvement project reports). The AORN recommended practices are a synthesis of related knowledge on a given topic, and the authorship process begins with a systematic review of the literature conducted in collaboration with a medical librarian. At least two appraisers independently evaluate the applicable literature for quality and strength by using the AORN Research Appraisal Tool and AORN Non-Research Appraisal Tool. To collectively appraise the evidence supporting particular practice recommendations, the AORN recommended practices authors have implemented a new evidence rating model that is appropriate for research and nonresearch literature and that is relevant to the perioperative setting. PMID:24472588

  15. A generic high-dose rate {sup 192}Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism

    SciTech Connect

    Ballester, Facundo; Carlsson Tedgren, Åsa; Granero, Domingo; Haworth, Annette; Mourtada, Firas; Fonseca, Gabriel Paiva; Rivard, Mark J.; Siebert, Frank-André; Sloboda, Ron S.; and others

    2015-06-15

    Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) 192Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR 192Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic 192Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra® Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of 201^3 voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR 192Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by

  16. The relationship between specific absorption rate and temperature elevation in anatomically based human body models for plane wave exposure from 30 MHz to 6 GHz

    NASA Astrophysics Data System (ADS)

    Hirata, Akimasa; Laakso, Ilkka; Oizumi, Takuya; Hanatani, Ryuto; Chan, Kwok Hung; Wiart, Joe

    2013-02-01

    According to the international safety guidelines/standard, the whole-body-averaged specific absorption rate (Poljak et al 2003 IEEE Trans. Electromagn. Compat. 45 141-5) and the peak spatial average SAR are used as metrics for human protection from whole-body and localized exposures, respectively. The IEEE standard (IEEE 2006 IEEE C95.1) indicates that the upper boundary frequency, over which the whole-body-averaged SAR is deemed to be the basic restriction, has been reduced from 6 to 3 GHz, because radio-wave energy is absorbed around the body surface when the frequency is increased. However, no quantitative discussion has been provided to support this description especially from the standpoint of temperature elevation. It is of interest to investigate the maximum temperature elevation in addition to the core temperature even for a whole-body exposure. In the present study, using anatomically based human models, we computed the SAR and the temperature elevation for a plane-wave exposure from 30 MHz to 6 GHz, taking into account the thermoregulatory response. As the primary result, we found that the ratio of the core temperature elevation to the whole-body-averaged SAR is almost frequency independent for frequencies below a few gigahertz; the ratio decreases above this frequency. At frequencies higher than a few gigahertz, core temperature elevation for the same whole-body averaged SAR becomes lower due to heat convection from the skin to air. This lower core temperature elevation is attributable to skin temperature elevation caused by the power absorption around the body surface. Then, core temperature elevation even for whole-body averaged SAR of 4 W kg-1 with the duration of 1 h was at most 0.8 °C, which is smaller than a threshold considered in the safety guidelines/standard. Further, the peak 10 g averaged SAR is correlated with the maximum body temperature elevations without extremities and pinna over the frequencies considered. These findings were confirmed

  17. Modeling of Kashmir Aftershock Decay Based on Static Coulomb Stress Changes and Laboratory-Derived Rate-and-State Dependent Friction Law

    NASA Astrophysics Data System (ADS)

    Javed, F.; Hainzl, S.; Aoudia, A.; Qaisar, M.

    2016-05-01

    We model the spatial and temporal evolution of the October 8, 2005 Kashmir earthquake's aftershock activity using the rate-and-state dependent friction model, incorporating uncertainties in the computed coseismic stress perturbations. We estimated the best possible values for the frictional resistance Aσ_n, the background seismicity rate r and the coefficient of stress variation CV using the maximum log-likelihood method. For the whole Kashmir earthquake sequence, we measure a frictional resistance Aσ_n ~ 0.0185 MPa, r ~ 20 M3.7+ events/year and CV = 0.94 ± 0.01. The forecasted spatial and temporal seismicity rates of the modeled aftershocks fit well with the spatial and temporal distribution of the observed aftershocks, which occurred in regions of positive static stress change as well as in the apparent stress shadow region. To quantify the effect of secondary aftershock triggering, we re-ran the estimations for 100 stochastically declustered catalogs, showing that the effect of aftershock-induced secondary stress changes is minor compared to the overall uncertainties, and that the stress variability related to uncertain slip-model inversions and receiver mechanisms remains the major factor in providing a reasonable data fit.
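
    The rate-and-state seismicity response to a static stress step has the closed form introduced by Dieterich (1994); a sketch follows using the Aσ_n and r values quoted above, with a placeholder aftershock relaxation time t_a.

```python
import numpy as np

def dieterich_rate(t, dtau, A_sigma=0.0185, r=20.0, t_a=5.0):
    """Dieterich (1994) seismicity-rate response to a static stress step dtau (MPa).

    A_sigma and r follow the values estimated in the abstract; the aftershock
    relaxation time t_a (years) is a placeholder here.
    """
    gamma = (np.exp(-dtau / A_sigma) - 1.0) * np.exp(-t / t_a)
    return r / (1.0 + gamma)

t = np.linspace(0.01, 10, 5)           # years after the mainshock
print(dieterich_rate(t, dtau=0.1))     # positive stress step: elevated rate decaying back to r
print(dieterich_rate(t, dtau=-0.1))    # stress shadow: suppressed rate recovering toward r
```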

  18. Evolution of migration rate in a spatially realistic metapopulation model.

    PubMed

    Heino, M; Hanski, I

    2001-05-01

    We use an individual-based, spatially realistic metapopulation model to study the evolution of migration rate. We first explore the consequences of habitat change in hypothetical patch networks on a regular lattice. If the primary consequence of habitat change is an increase in local extinction risk as a result of decreased local population sizes, migration rate increases. A nonmonotonic response, with migration rate decreasing at high extinction rate, was obtained only by assuming very frequent catastrophes. If the quality of the matrix habitat deteriorates, leading to increased mortality during migration, the evolutionary response is more complex. As long as habitat patch occupancy does not decrease markedly with increased migration mortality, reduced migration rate evolves. However, once mortality becomes so high that empty patches remain uncolonized for a long time, evolution tends to increase migration rate, which may lead to an "evolutionary rescue" in a fragmented landscape. Kin competition has a quantitative effect on the evolution of migration rate in our model, but these patterns in the evolution of migration rate appear to be primarily caused by spatiotemporal variation in fitness and mortality during migration. We apply the model to real habitat patch networks occupied by two checkerspot butterfly (Melitaea) species, for which sufficient data are available to estimate rigorously most of the model parameters. The model-predicted migration rate is not significantly different from the empirically observed one. Regional variation in patch areas and connectivities leads to regional variation in the optimal migration rate, predictions that can be tested empirically. PMID:18707258

  19. Modeling heart rate variability by stochastic feedback

    NASA Technical Reports Server (NTRS)

    Amaral, L. A.; Goldberger, A. L.; Stanley, H. E.

    1999-01-01

    We consider the question of how the cardiac rhythm spontaneously self-regulates and propose a new mechanism as a possible answer. We model the neuroautonomic regulation of the heart rate as a stochastic feedback system and find that the model successfully accounts for key characteristics of cardiac variability, including the 1/f power spectrum, the functional form and scaling of the distribution of variations of the interbeat intervals, and the correlations in the Fourier phases which indicate nonlinear dynamics.
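
    As a toy illustration of the stochastic-feedback idea, deliberately much simpler than the model proposed in the paper, one can relax the interbeat interval toward an attracting level that itself jumps at random times; every parameter value below is an assumption made for the sketch.

      import numpy as np

      rng = np.random.default_rng(0)
      n_beats = 50_000
      tau = 50.0          # relaxation time toward the attracting level (beats)
      sigma = 0.01        # beat-to-beat noise (s)
      switch_p = 0.01     # per-beat probability that the set point jumps

      x = np.empty(n_beats)                        # interbeat intervals (s)
      x[0], level = 0.8, 0.8
      for i in range(1, n_beats):
          if rng.random() < switch_p:              # stochastic feedback: new set point
              level = rng.uniform(0.6, 1.0)
          x[i] = x[i-1] + (level - x[i-1]) / tau + sigma * rng.normal()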

  20. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and to estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long-term HRV series available at PhysioNet, leading to discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
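
    A minimal sketch of the two-stage idea follows: fractionally difference the series to remove long memory, then fit a GARCH(1,1) model to the residuals. The fractional-differencing filter is the binomial expansion of (1 - B)^d; the file name, the fixed value of d, and the use of the third-party arch package are assumptions (in practice d is estimated jointly with the other ARFIMA parameters).

      import numpy as np
      from arch import arch_model  # pip install arch

      def frac_diff(x, d, n_weights=1000):
          """Apply the fractional difference operator (1 - B)^d via its binomial expansion."""
          w = np.zeros(n_weights)
          w[0] = 1.0
          for k in range(1, n_weights):
              w[k] = w[k - 1] * (k - 1 - d) / k
          return np.convolve(x, w, mode="full")[:len(x)]

      rr = np.loadtxt("rr_intervals.txt")            # hypothetical RR-interval file
      resid = frac_diff(rr - rr.mean(), d=0.35)      # d = 0.35 is assumed, not estimated
      garch = arch_model(resid, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
      print(garch.summary())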

  1. Acoustically based fetal heart rate monitor

    NASA Technical Reports Server (NTRS)

    Baker, Donald A.; Zuckerwar, Allan J.

    1991-01-01

    The acoustically based fetal heart rate monitor permits an expectant mother to perform the fetal Non-Stress Test in her home. The potential market would include the one million U.S. pregnancies per year requiring this type of prenatal surveillance. The monitor uses polyvinylidene fluoride (PVF2) piezoelectric polymer film for the acoustic sensors, which are mounted in a seven-element array on a cummerbund. Evaluation of the sensor output signals utilizes a digital signal processor, which performs a linear prediction routine in real time. Clinical tests reveal that the acoustically based monitor provides Non-Stress Test records which are comparable to those obtained with a commercial ultrasonic transducer.

  2. Actuarial models of life insurance with stochastic interest rate

    NASA Astrophysics Data System (ADS)

    Wei, Xiang; Hu, Ping

    2009-07-01

    Building on the general actuarial model of life insurance, this article investigates continuous life insurance actuarial models under stochastic interest rates. It provides the net single premium for life insurance and for a life annuity due over a period, based on de Moivre's law of mortality and Makeham's law of mortality, respectively.
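
    In the deterministic-rate special case, the net single premium under de Moivre's law has a closed form: future lifetime is uniform on (0, ω - x), so with constant force of interest δ, Ā_x = (1 - e^(-δ(ω-x)))/(δ(ω-x)) and ā_x = (1 - Ā_x)/δ. A minimal sketch follows (parameter values assumed; the paper's stochastic-rate models replace the constant δ):

      import math

      def whole_life_nsp_de_moivre(x, omega=100, delta=0.05):
          """Net single premium of a unit whole-life insurance under de Moivre mortality.

          Future lifetime is uniform on (0, omega - x), hence
          A_bar = (1 - exp(-delta*(omega - x))) / (delta*(omega - x)).
          """
          n = omega - x
          return (1.0 - math.exp(-delta * n)) / (delta * n)

      def whole_life_annuity_de_moivre(x, omega=100, delta=0.05):
          """Continuous whole-life annuity: a_bar = (1 - A_bar) / delta."""
          return (1.0 - whole_life_nsp_de_moivre(x, omega, delta)) / delta

      print(whole_life_nsp_de_moivre(30))   # ~0.277 for omega = 100, delta = 0.05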

  3. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  4. Sphaleron rate in the minimal standard model.

    PubMed

    D'Onofrio, Michela; Rummukainen, Kari; Tranberg, Anders

    2014-10-01

    We use large-scale lattice simulations to compute the rate of baryon number violating processes (the sphaleron rate), the Higgs field expectation value, and the critical temperature in the standard model across the electroweak phase transition temperature. While there is no true phase transition between the high-temperature symmetric phase and the low-temperature broken phase, the crossover is sharp and located at temperature T_c = (159.5 ± 1.5) GeV. The sphaleron rate in the symmetric phase (T > T_c) is Γ/T^4 = (18 ± 3)α_W^5, and in the broken phase in the physically interesting temperature range 130 GeV < T < T_c it can be parametrized as log(Γ/T^4) = (0.83 ± 0.01) T/GeV - (147.7 ± 1.9). The freeze-out temperature in the early Universe, where the Hubble rate wins over the baryon number violation rate, is T* = (131.7 ± 2.3) GeV. These values, beyond being intrinsic properties of the standard model, are relevant for, e.g., low-scale leptogenesis scenarios. PMID:25325629

  5. Liver cancer mortality rate model in Thailand

    NASA Astrophysics Data System (ADS)

    Sriwattanapongse, Wattanavadee; Prasitwattanaseree, Sukon

    2013-09-01

    Liver cancer has been a leading cause of death in Thailand. The purpose of this study was to model and forecast the liver cancer mortality rate in Thailand using death certificate reports. A retrospective analysis of the liver cancer mortality rate was conducted. A total of 123,280 liver cancer deaths were obtained from the national vital registration database for the 10-year period from 2000 to 2009, provided by the Ministry of Interior and coded as cause of death using ICD-10 by the Ministry of Public Health. A multivariate regression model was used for modeling and forecasting age-specific liver cancer mortality rates in Thailand. Liver cancer mortality increased with increasing age for each sex and was also higher in the North East provinces. The trends in liver cancer mortality remained stable in most age groups, with increases during the ten-year period (2000 to 2009) in the Northern and Southern regions. Liver cancer mortality was higher in males and increased with increasing age. Liver cancer control measures need to be sustained on a long-term basis to address Thailand's high liver cancer burden.

  6. The relationship between specific absorption rate and temperature elevation in anatomically based human body models for plane wave exposure from 30 MHz to 6 GHz.

    PubMed

    Hirata, Akimasa; Laakso, Ilkka; Oizumi, Takuya; Hanatani, Ryuto; Chan, Kwok Hung; Wiart, Joe

    2013-02-21

    According to the international safety guidelines/standard, the whole-body-averaged specific absorption rate (Poljak et al 2003 IEEE Trans. Electromagn. Compat. 45 141-5) and the peak spatial average SAR are used as metrics for human protection from whole-body and localized exposures, respectively. The IEEE standard (IEEE 2006 IEEE C95.1) indicates that the upper boundary frequency, over which the whole-body-averaged SAR is deemed to be the basic restriction, has been reduced from 6 to 3 GHz, because radio-wave energy is absorbed around the body surface when the frequency is increased. However, no quantitative discussion has been provided to support this description especially from the standpoint of temperature elevation. It is of interest to investigate the maximum temperature elevation in addition to the core temperature even for a whole-body exposure. In the present study, using anatomically based human models, we computed the SAR and the temperature elevation for a plane-wave exposure from 30 MHz to 6 GHz, taking into account the thermoregulatory response. As the primary result, we found that the ratio of the core temperature elevation to the whole-body-averaged SAR is almost frequency independent for frequencies below a few gigahertz; the ratio decreases above this frequency. At frequencies higher than a few gigahertz, core temperature elevation for the same whole-body averaged SAR becomes lower due to heat convection from the skin to air. This lower core temperature elevation is attributable to skin temperature elevation caused by the power absorption around the body surface. Then, core temperature elevation even for whole-body averaged SAR of 4 W kg^-1 with the duration of 1 h was at most 0.8 °C, which is smaller than a threshold considered in the safety guidelines/standard. Further, the peak 10 g averaged SAR is correlated with the maximum body temperature elevations without extremities and pinna over the frequencies considered. These findings

  7. Diagnosis of Photochemical Ozone Production Rates and Limiting Factors based on Observation-based Modeling Approach over East Asia: Impact of Radical Chemistry Mechanism and Ozone-Control Implications

    NASA Astrophysics Data System (ADS)

    Kanaya, Y.

    2015-12-01

    Growth of tropospheric ozone, which causes health and climate impacts, is a concern over East Asia, because emissions of its precursors have increased dramatically. Photochemical production rates of ozone and their limiting factors, studied primarily for urban locations, have been poorly assessed from the perspective of regional-scale air pollution over East Asia. We performed comprehensive observations of ozone precursors at several locations with regional representativeness and made such an assessment based on the observation-based modeling approach. Here, the diagnosis at Fukue Island (32.75°N, 128.68°E), remotely located in western Japan (May 2009), is highlighted: the highest 10% of hourly ozone concentrations reached 72-118 ppb during May under the influence of Asian continental outflow. The average in-situ ozone production rate was estimated to be 6.8 ppb per day, suggesting that in-travel production was still active, while larger buildup must have occurred beforehand. Information on the chemical status of the air mass arriving in Japan is important, because it affects how further ozone production occurs after precursor addition from Japanese domestic emissions. The main limiting factor of ozone production was usually NOx, suggesting that domestic NOx emission control is important in reducing further ozone production and the incidence of warning issuance (>120 ppb). VOCs also increased the ozone production rate, and occasionally (14% of the time) became the dominant limiting factor. This analysis implies that the VOC reduction legislation recently enacted should be effective. The uncertainty in the radical chemistry mechanism governing ozone production had a non-negligible impact, but the main conclusion relevant to policy was not altered. When chain termination was augmented by HO2-H2O + NO/NO2 reactions and by heterogeneous loss of HO2 on aerosol particle surfaces, the daily ozone production rate decreased by <24%, and the fraction of hours when the VOC-limited condition occurred varied from 14% to 13

  8. Rate controlling model for bioremediation of oil contaminated soil

    SciTech Connect

    Li, K.Y.; Annamali, S.N.; Hopper, J.R. )

    1993-11-01

    A mathematical model of the bioremediation of hydrocarbons in a soil matrix has been developed to predict the rate-controlling step and the remediation rate during bioremediation of a contaminated soil. The model is based on the mass transfer of oxygen and oil into the aqueous solution in the soil matrix and the biodegradation of the hydrocarbons in the aqueous solution. Monod's equation was used to describe the biodegradation rate in aqueous solution, while mass transfer equations were used to describe the transfer rates of oxygen and oil in the soil matrix. Results from model calculations indicate that the bioremediation rate increases and approaches a limiting value when one of the rates becomes controlling. When the parameters of the site soil samples are measured and the solubilities of oxygen and oil in aqueous solution are obtained, the bioremediation rate can be predicted by this model. The rate-controlling step of a bioremediation site may be identified quickly, and steps to improve the bioremediation rate can be recommended. 8 refs., 7 figs.
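
    A minimal sketch of the rate-controlling comparison just described: compute the oxygen supply from film mass transfer and the Monod oxygen demand, and take the smaller as the controlling rate. All parameter values below are hypothetical.

      def oxygen_transfer_rate(kla, c_sat, c):
          """Oxygen supply by mass transfer into the aqueous phase (mg/L/h)."""
          return kla * (c_sat - c)

      def monod_oxygen_demand(mu_max, ks, c, biomass, yield_coeff):
          """Oxygen demand of Monod-type biodegradation (mg/L/h)."""
          return mu_max * c / (ks + c) * biomass / yield_coeff

      # hypothetical parameter values for a site soil sample
      supply = oxygen_transfer_rate(kla=5.0, c_sat=8.0, c=2.0)              # 30 mg/L/h
      demand = monod_oxygen_demand(mu_max=0.2, ks=1.0, c=2.0,
                                   biomass=500.0, yield_coeff=1.0)          # ~66.7 mg/L/h
      controlling = "mass transfer" if supply < demand else "biodegradation"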

  9. Modeling of asteroidal dust production rates

    NASA Technical Reports Server (NTRS)

    Durda, Daniel D.; Dermott, Stanley F.; Gustafson, Bo A. S.

    1992-01-01

    The production rate of dust associated with the prominent Hirayama asteroid families and the background asteroidal population are modeled with the intent of using the families as a calibrator of mainbelt dust production. However, the dust production rates of asteroid families may be highly stochastic; there is probably more than an order of magnitude variation in the total area of dust associated with a family. Over 4.5 × 10^9 years of collisional evolution, the volume (mass) of a family is ground down by an order of magnitude, suggesting a similar loss from the entire mainbelt population. Our collisional models show that the number of meteoroids deliverable to Earth also varies stochastically, but only by a factor of 2 to 3.

  10. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  11. Modelling rating curves using remotely sensed LiDAR data

    USGS Publications Warehouse

    Nathanson, Marcus; Kean, Jason W.; Grabs, Thomas J.; Seibert, Jan; Laudon, Hjalmar; Lyon, Steve W.

    2012-01-01

    Accurate stream discharge measurements are important for many hydrological studies. In remote locations, however, it is often difficult to obtain stream flow information because of the difficulty in making the discharge measurements necessary to define stage-discharge relationships (rating curves). This study investigates the feasibility of defining rating curves by using a fluid mechanics-based model constrained with topographic data from an airborne LiDAR scan. The study was carried out for an 8 m wide channel in the boreal landscape of northern Sweden. LiDAR data were used to define channel geometry above a low-flow water surface along the 90 m surveyed reach. The channel topography below the water surface was estimated using the simple assumption of a flat streambed. The roughness for the modelled reach was back-calculated from a single measurement of discharge. The topographic and roughness information was then used to model a rating curve. To isolate the potential influence of the flat-bed assumption, a 'hybrid model' rating curve was developed on the basis of data combined from the LiDAR scan and a detailed ground survey. Whereas the hybrid model rating curve was in agreement with the direct measurements of discharge, the LiDAR model rating curve was equally in agreement with the medium and high flow measurements, based on confidence intervals calculated from the direct measurements. The discrepancy between the LiDAR model rating curve and the low flow measurements was likely due to reduced roughness associated with unresolved submerged bed topography. Scanning during periods of low flow can help minimize this deficiency. These results suggest that combined ground surveys and LiDAR scans, or multifrequency LiDAR scans that see 'below' the water surface (bathymetric LiDAR), could be useful in generating the data needed to run such a fluid mechanics-based model. This opens a realm of possibility to remotely sense and monitor stream flows in channels in remote
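
    As an illustration of the kind of fluid-mechanics relation involved (the study's model is more detailed), Manning's equation converts a surveyed cross section, a back-calculated roughness, and an energy slope into stage-discharge pairs. The toy cross section, roughness, and slope below are assumptions.

      import numpy as np

      def manning_discharge(stage, xz, n=0.05, slope=0.005):
          """Discharge (m^3/s) at a given stage from a surveyed cross section.

          xz: (distance, bed elevation) pairs across the channel (m);
          n: Manning roughness; slope: energy slope (values assumed).
          """
          x, z = xz[:, 0], xz[:, 1]
          depth = np.clip(stage - z, 0.0, None)
          area = np.sum((depth[:-1] + depth[1:]) / 2.0 * np.diff(x))
          seg = np.sqrt(np.diff(x) ** 2 + np.diff(np.minimum(z, stage)) ** 2)
          wet = (depth[:-1] > 0) | (depth[1:] > 0)
          perimeter = np.sum(seg[wet])
          if area <= 0.0 or perimeter == 0.0:
              return 0.0
          r_h = area / perimeter                    # hydraulic radius
          return area * r_h ** (2.0 / 3.0) * np.sqrt(slope) / n

      xz = np.array([[0, 1.0], [2, 0.3], [4, 0.0], [6, 0.3], [8, 1.0]])  # toy section
      rating = [(s, manning_discharge(s, xz)) for s in np.arange(0.1, 1.01, 0.1)]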

  12. Empirical analysis of quantum finance interest rates models

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yang, Cao

    2009-07-01

    Empirical forward interest rates drive the debt markets. Libor and Euribor futures data are used to calibrate and test models of interest rates based on the formulation of quantum finance. In particular, all the model parameters, including interest rate volatilities, are obtained from market data. The random noise driving the forward interest rates is taken to be a two-dimensional Euclidean quantum field. We analyze two models, namely the bond forward interest rates, which is a linear theory, and the Libor Market Model, which is a nonlinear theory. Both models are analyzed using Libor and Euribor data, with various approximations to match the linear and nonlinear models. The results are quite good, with the linear model having an accuracy of about 99% and the nonlinear model being slightly less accurate. We extend our analysis by directly using the Zero Coupon Yield Curve (ZCYC) data for Libor and for bonds; but due to some technical difficulties we could not derive the model parameters directly from the ZCYC data.

  13. Trends in Incidence Rates during 1999-2008 and Prevalence in 2008 of Childhood Type 1 Diabetes Mellitus in GERMANY – Model-Based National Estimates

    PubMed Central

    Kiess, Wieland; Kapellen, Thomas Michael; Stange, Thoralf; Manuwald, Ulf; Salzsieder, Eckhard; Holl, Reinhard Walter; Schoffer, Olaf; Stahl-Pehe, Anna; Giani, Guido; Ehehalt, Stefan; Neu, Andreas; Rosenbauer, Joachim

    2015-01-01

    Aims To estimate the national incidence rate and trend of type 1 diabetes (T1DM) in Germany from 1999 to 2008 and the national prevalence in 2008 in the age group 0–14 years. Methods Data were taken from a nationwide registry for incident cases of T1DM in the ages 0–4 years and 3 regional registries (North-Rhine-Westphalia, Baden-Wuerttemberg and Saxony) for incident cases of T1DM in the ages 0–14 years, covering 41% of the child population in Germany. The degree of ascertainment was ≥ 97% in all registries. Incident and prevalent cases were grouped by region, sex, age (0–4, 5–9, 10–14 years), and, for incident data, additionally by two 5-year periods (1999–2003, 2004–2008). Poisson regression models were fitted to the data to derive national estimates of incidence rate trends and prevalence in the age groups 5–9, 10–14 and 0–14 years. We used direct age-standardization. Results The estimated national incidence rate in 0–14-year-olds increased significantly by 18.1% (95%CI: 11.6–25.0%, p<0.001) from 1999–2003 to 2004–2008, independent of sex, corresponding to an average annual increase of 3.4% (95%-CI: 2.2–4.6%). The overall incidence rate was estimated at 22.9 per 100,000 person-years, and we identified a previously unknown within-country west-east gradient. The national prevalence in the ages 0–14 years on 31/12/2008 was estimated to be 148.1 per 100,000 persons. Conclusions The national incidence rate of childhood T1DM in Germany is higher than in many other countries around the world. Importantly, the estimated trend of the incidence rate confirms international reports of a global increase in T1DM incidence. PMID:26181330
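
    A minimal sketch of the Poisson-regression step for registry data of this kind, using statsmodels with a log person-years offset; the file and column names are hypothetical.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # hypothetical columns: cases, person_years, age_group, sex, period
      df = pd.read_csv("t1dm_registry.csv")
      model = smf.glm("cases ~ C(age_group) + C(sex) + C(period)",
                      data=df,
                      family=sm.families.Poisson(),
                      offset=np.log(df["person_years"])).fit()
      print(model.summary())
      print(np.exp(model.params))  # exponentiated coefficients = incidence rate ratios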

  14. Modeling the dissipation rate in rotating turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Raj, Rishi; Gatski, Thomas B.

    1990-01-01

    A variety of modifications to the modeled dissipation rate transport equation that have been proposed during the past two decades to account for rotational strains are examined. The models are subjected to two crucial test cases: the decay of isotropic turbulence in a rotating frame and homogeneous shear flow in a rotating frame. It is demonstrated that these modifications do not yield substantially improved predictions for these two test cases and in many instances give rise to unphysical behavior. An alternative proposal, based on the use of the tensor dissipation rate, is made for the development of improved models.

  15. A Symmetric Time-Varying Cluster Rate of Descent Model

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2015-01-01

    A model of the time-varying rate of descent of the Orion vehicle was developed based on the observed correlation between canopy projected area and drag coefficient. This initial version of the model assumes cluster symmetry and only varies the vertical component of velocity. The cluster fly-out angle is modeled as a series of sine waves based on flight test data. The projected area of each canopy is synchronized with the primary fly-out angle mode. The sudden loss of projected area during canopy collisions is modeled at minimum fly-out angles, leading to brief increases in rate of descent. The cluster geometry is converted to drag coefficient using empirically derived constants. A more complete model is under development, which computes the aerodynamic response of each canopy to its local incidence angle.

  16. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems differs from that of low data rate systems. Three simulations were built during the development phase of the Carrier Sensed Multiple Access/Ring Network (CSMA/RN) model. The first was a SIMSCRIPT model based upon the determination and processing of each event at each node. The second simulation was developed in C, based upon isolating the distinct objects that can be identified as the ring, the message, the node, and the set of critical events. The third model further distilled the basic network functionality by creating a single object, the node, which includes the set of critical events that occur at the node; the ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. The language used was mainly selected by the model developer because of his past familiarity. Further, the models were not built with the intent to compare structure or language; rather, because the problem was complex and initial results contained obvious errors, alternative models were built to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand the modeling complexities. Each model is described along with its features and problems. The models are compared, and concluding observations and remarks are presented.

  17. Towards a Model for Protein Production Rates

    NASA Astrophysics Data System (ADS)

    Dong, J. J.; Schmittmann, B.; Zia, R. K. P.

    2007-07-01

    In the process of translation, ribosomes read the genetic code on an mRNA and assemble the corresponding polypeptide chain. The ribosomes perform discrete directed motion which is well modeled by a totally asymmetric simple exclusion process (TASEP) with open boundaries. Using Monte Carlo simulations and a simple mean-field theory, we discuss the effect of one or two "bottlenecks" (i.e., slow codons) on the production rate of the final protein. Confirming and extending previous work by Chou and Lakatos, we find that the location and spacing of the slow codons can affect the production rate quite dramatically. In particular, we observe a novel "edge" effect, i.e., an interaction of a single slow codon with the system boundary. We focus in detail on ribosome density profiles and provide a simple explanation for the length scale which controls the range of these interactions.
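
    A minimal kinetic Monte Carlo sketch of an open-boundary TASEP with a single slow site, in the spirit of the bottleneck experiments described above; the lattice size, rates, and slow-site location are assumptions.

      import numpy as np

      def tasep_current(L=200, alpha=0.9, beta=0.9, slow_site=100, slow_rate=0.2,
                        steps=500_000, seed=0):
          """Average exit current of an open TASEP with one slow site
          (random-sequential updates; current in exit events per attempted move)."""
          rng = np.random.default_rng(seed)
          lattice = np.zeros(L, dtype=int)
          rate = np.ones(L)
          rate[slow_site] = slow_rate                # the "slow codon"
          exits = 0
          for _ in range(steps):
              i = rng.integers(-1, L)                # -1 = injection attempt on the left
              if i == -1:
                  if lattice[0] == 0 and rng.random() < alpha:
                      lattice[0] = 1
              elif i == L - 1:
                  if lattice[-1] == 1 and rng.random() < beta:
                      lattice[-1] = 0
                      exits += 1
              elif lattice[i] == 1 and lattice[i + 1] == 0 and rng.random() < rate[i]:
                  lattice[i], lattice[i + 1] = 0, 1
          return exits / steps

      # moving the slow site toward a boundary changes the current ("edge" effect)
      print(tasep_current(slow_site=100), tasep_current(slow_site=2))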

  18. Template based low data rate speech encoder

    NASA Astrophysics Data System (ADS)

    Fransen, Lawrence

    1993-09-01

    The 2400-b/s linear predictive coder (LPC) is currently being widely deployed to support tactical voice communication over narrowband channels. However, there is a need for lower-data-rate voice encoders for special applications: improved performance in high bit-error conditions, low-probability-of-intercept (LPI) voice communication, and narrowband integrated voice/data systems. An 800-b/s voice encoding algorithm is presented which is an extension of the 2400-b/s LPC. To construct template tables, speech samples of 420 speakers uttering 8 sentences each were excerpted from the Texas Instruments/Massachusetts Institute of Technology (TIMIT) Acoustic-Phonetic Speech Data Base. Speech intelligibility of the 800-b/s voice encoding algorithm, measured by the diagnostic rhyme test (DRT), is 91.5 for three male speakers. This score compares favorably with that of the 2400-b/s LPC of a few years ago.

  19. Modeled Estimates of Soil and Dust Ingestion Rates for Children

    EPA Science Inventory

    Daily soil/dust ingestion rates typically used in exposure and risk assessments are based on tracer element studies, which have a number of limitations and do not separate contributions from soil and dust. This article presents an alternate approach of modeling soil and dust ingestion...

  20. Prediction of interest rate using CKLS model with stochastic parameters

    SciTech Connect

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated by the j-th window, which is defined as the set consisting of the observed interest rates at the j′-th time points where j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_{j+n} at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_{j+n+1} of the interest rate at the next time point when the value r_{j+n} of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_{j+n+d} at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have a better ability to cover the observed future interest rates when compared with those based on the model with fixed parameters.

  1. Prediction of interest rate using CKLS model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-01

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ(j) of four parameters at the (j+n)-th time point is estimated by the j-th window, which is defined as the set consisting of the observed interest rates at the j'-th time points where j≤j'≤j+n. To model the variation of φ(j), we assume that φ(j) depends on φ(j-m), φ(j-m+1),…, φ(j-1) and the interest rate rj+n at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value rj+n+1 of the interest rate at the next time point when the value rj+n of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate rj+n+d at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have a better ability to cover the observed future interest rates when compared with those based on the model with fixed parameters.
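
    For reference, the CKLS short-rate dynamics dr = (α + βr)dt + σ r^γ dW can be simulated with a simple Euler-Maruyama scheme; the parameter values below are illustrative and are not estimates from the paper.

      import numpy as np

      def simulate_ckls(r0, alpha, beta, sigma, gamma, dt=1/252, n=2520, seed=0):
          """Euler-Maruyama path of dr = (alpha + beta*r) dt + sigma * r**gamma dW."""
          rng = np.random.default_rng(seed)
          r = np.empty(n + 1)
          r[0] = r0
          for t in range(n):
              dw = rng.normal(scale=np.sqrt(dt))
              r[t + 1] = (r[t] + (alpha + beta * r[t]) * dt
                          + sigma * max(r[t], 0.0) ** gamma * dw)
          return r

      # illustrative parameters: mean reversion toward -alpha/beta = 5%
      path = simulate_ckls(r0=0.03, alpha=0.04, beta=-0.8, sigma=0.6, gamma=1.5)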

  2. A Model Explaining Genotypic and Ontogenetic Variation of Leaf Photosynthetic Rate in Rice (Oryza sativa) Based on Leaf Nitrogen Content and Stomatal Conductance

    PubMed Central

    Ohsumi, Akihiro; Hamasaki, Akihiro; Nakagawa, Hiroshi; Yoshida, Hiroe; Shiraiwa, Tatsuhiko; Horie, Takeshi

    2007-01-01

    Background and Aims Identification of physiological traits associated with leaf photosynthetic rate (Pn) is important for improving the potential productivity of rice (Oryza sativa). The objectives of this study were to develop a model which can explain genotypic variation and ontogenetic change of Pn in rice under optimal conditions as a function of leaf nitrogen content per unit area (N) and stomatal conductance (gs), and to quantify the effects of the interaction between N and gs on the variation of Pn. Methods Pn, N and gs were measured at different developmental stages for the topmost fully expanded leaves in ten rice genotypes with diverse backgrounds grown in pots (2002) and in the field (2001 and 2002). A model of Pn that accounts for carboxylation and CO2 diffusion processes, and assumes that the ratio of internal conductance to gs is constant, was constructed, and its goodness of fit was examined. Key Results Considerable genotypic differences in Pn were evident for rice throughout development in both the pot and field experiments. The genotypic variation of Pn was correlated with that of gs at a given stage, and the change of Pn with plant development was closely related to the change of N. The variation of gs among genotypes was independent of that of N. The model explained well the variation in Pn of the ten genotypes grown under different conditions at different developmental stages. Conclusions The response of Pn to increased N differs with gs, and the increase in Pn of genotypes with low gs is smaller than that of genotypes with high gs. Therefore, simultaneous improvements of these two traits are essential for effective breeding of rice genotypes with increased Pn. PMID:17204541

  3. Ground-Based Remote Retrievals of Cumulus Entrainment Rates

    SciTech Connect

    Wagner, Timothy J.; Turner, David D.; Berg, Larry K.; Krueger, Steven K.

    2013-07-26

    While fractional entrainment rates for cumulus clouds have typically been derived from airborne observations, this limits the size and scope of available data sets. To increase the number of continental cumulus entrainment rate observations available for study, an algorithm for retrieving them from ground-based remote sensing observations has been developed. This algorithm, called the Entrainment Rate In Cumulus Algorithm (ERICA), uses the suite of instruments at the Southern Great Plains (SGP) site of the United States Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility as inputs into a Gauss-Newton optimal estimation scheme, in which an assumed guess of the entrainment rate is iteratively adjusted through intercomparison of modeled liquid water path and cloud droplet effective radius to their observed counterparts. The forward model in this algorithm is the Explicit Mixing Parcel Model (EMPM), a cloud parcel model that treats entrainment as a series of discrete entrainment events. A quantified value for measurement uncertainty is also returned as part of the retrieval. Sensitivity testing and information content analysis demonstrate the robust nature of this method for retrieving accurate observations of the entrainment rate without the drawbacks of airborne sampling. Results from a test of ERICA on three months of shallow cumulus cloud events show significant variability of the entrainment rate of clouds in a single day and from one day to the next. The mean value of 1.06 km^-1 for the entrainment rate in this dataset corresponds well with prior observations and simulations of the entrainment rate in cumulus clouds.

  4. Direct regression models for longitudinal rates of change

    PubMed Central

    Bryan, Matthew; Heagerty, Patrick J.

    2014-01-01

    Comparing rates of growth, or rates of change, across covariate-defined subgroups is a primary objective for many longitudinal studies. In the special case of a linear trend over time, the interaction between a covariate and time will characterize differences in longitudinal rates of change. However, in the presence of a non-linear longitudinal trajectory, the standard mean regression approach does not permit parsimonious description or inference regarding differences in rates of change. Therefore, we propose regression methodology for longitudinal data that allows a direct, structured comparison of rates across subgroups even in the presence of a non-linear trend over time. Our basic longitudinal rate regression method assumes a proportional difference across covariate groups in the rate of change across time, but this assumption can be relaxed. Rates are compared relative to a generally specified time trend for which we discuss both parametric and non-parametric estimating approaches. We develop mixed model longitudinal methodology that explicitly characterizes subject-to-subject variation in rates, as well as a marginal estimating equation-based method. In addition, we detail a score test to detect violations of the proportionality assumption, and we allow time-varying rate effects as a natural generalization. Simulation results demonstrate potential gains in power for the longitudinal rate regression model relative to a linear mixed effects model in the presence of a non-linear trend in time. We apply our method to a study of growth among infants born to HIV infected mothers, and conclude with a discussion of possible extensions for our methods. PMID:24497427

  5. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total that a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates caused by changes in rain statistics due (1) to evolution of the official algorithms used to process the data and (2) to differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  6. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM. PMID:26737125
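
    A simplified sketch of the local-fit idea (the paper's estimator is more elaborate): fit an exponentially decaying cosine to a short photoplethysmogram window and accept the fitted frequency when it falls in the physiological band. The window handling, starting values, and bounds are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def damped_cosine(t, a, lam, f, phi):
          return a * np.exp(-lam * t) * np.cos(2 * np.pi * f * t + phi)

      def local_hr_estimate(window, fs):
          """Fit a decaying cosine to one PPG window; return heart rate in BPM (or None)."""
          t = np.arange(len(window)) / fs
          x = window - window.mean()
          try:
              p, _ = curve_fit(damped_cosine, t, x,
                               p0=[x.std(), 1.0, 1.2, 0.0],   # start near 72 BPM
                               bounds=([0.0, 0.0, 0.5, -np.pi],
                                       [np.inf, 20.0, 3.0, np.pi]))
          except RuntimeError:
              return None
          return 60.0 * p[2]   # 0.5-3 Hz corresponds to 30-180 BPM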

  7. A generalized Prandtl-Ishlinskii model for characterizing the rate-independent and rate-dependent hysteresis of piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    Gan, Jinqiang; Zhang, Xianmin; Wu, Heng

    2016-03-01

    In this paper, a generalized hysteresis model is developed to describe both rate-independent and rate-dependent hysteresis in piezoelectric actuators. Based on the classical Prandtl-Ishlinskii (P-I) model, the developed model adds a quadratic polynomial and makes other small changes. When it is used to describe rate-independent hysteresis, the parameters of the model are constants, which can be identified by self-adaptive particle swarm optimization. The effectiveness of this rate-independent modified P-I model is demonstrated by comparing simulation results of the developed model and the classical Prandtl-Ishlinskii model. Simulation results suggest that the rate-independent modified P-I model can describe hysteresis more precisely. Compared with the classical P-I model, the rate-independent modified P-I model reduces modeling error by more than 50%. When it is used to describe rate-dependent hysteresis, a one-sided operator is adopted and the parameters are functions of the input frequency. The results of the experiments and simulations show that the proposed models can accurately describe both rate-independent and rate-dependent hysteresis in piezoelectric actuators.
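
    For context, the classical (rate-independent) P-I model that the paper modifies is a weighted superposition of play operators. A minimal sketch follows, with the quadratic term and the rate-dependent extension omitted; the thresholds and weights are assumed.

      import numpy as np

      def play_operator(u, r, y0=0.0):
          """Discrete play (backlash) operator with threshold r applied to input u."""
          y = np.empty_like(u)
          y_prev = y0
          for k, uk in enumerate(u):
              y_prev = min(max(y_prev, uk - r), uk + r)   # clip state into [u-r, u+r]
              y[k] = y_prev
          return y

      def prandtl_ishlinskii(u, thresholds, weights):
          """Classical P-I output: weighted sum of play operators."""
          return sum(w * play_operator(u, r) for w, r in zip(weights, thresholds))

      u = np.sin(np.linspace(0.0, 4.0 * np.pi, 1000))       # slow, rate-independent input
      y = prandtl_ishlinskii(u, thresholds=[0.0, 0.1, 0.2, 0.4],
                             weights=[1.0, 0.6, 0.4, 0.2])  # assumed values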

  8. A generalized Prandtl-Ishlinskii model for characterizing the rate-independent and rate-dependent hysteresis of piezoelectric actuators.

    PubMed

    Gan, Jinqiang; Zhang, Xianmin; Wu, Heng

    2016-03-01

    In this paper, a generalized hysteresis model is developed to describe both rate-independent and rate-dependent hysteresis in piezoelectric actuators. Based on the classical Prandtl-Ishlinskii (P-I) model, the developed model adds a quadratic polynomial and makes other small changes. When it is used to describe rate-independent hysteresis, the parameters of the model are constants, which can be identified by self-adaptive particle swarm optimization. The effectiveness of this rate-independent modified P-I model is demonstrated by comparing simulation results of the developed model and the classical Prandtl-Ishlinskii model. Simulation results suggest that the rate-independent modified P-I model can describe hysteresis more precisely. Compared with the classical P-I model, the rate-independent modified P-I model reduces modeling error by more than 50%. When it is used to describe rate-dependent hysteresis, a one-sided operator is adopted and the parameters are functions of the input frequency. The results of the experiments and simulations show that the proposed models can accurately describe both rate-independent and rate-dependent hysteresis in piezoelectric actuators. PMID:27036808

  9. Modeling Equity for Alternative Water Rate Structures

    NASA Astrophysics Data System (ADS)

    Griffin, R.; Mjelde, J.

    2011-12-01

    The rising popularity of increasing block rates for urban water runs counter to mainstream economic recommendations, yet decision makers in rate design forums are attracted to the notion of higher prices for larger users. Among economists, it is widely appreciated that uniform rates have stronger efficiency properties than increasing block rates, especially when volumetric prices incorporate intrinsic water value. Yet, except for regions where water market purchases have forced urban authorities to include water value in water rates, economic arguments have weakly penetrated policy. In this presentation, recent evidence will be reviewed regarding long-term trends in urban rate structures while observing economic principles pertaining to these choices. The main objective is to investigate the equity of increasing block rates as contrasted to uniform rates for a representative city. Using data from four Texas cities, household water demand is established as a function of marginal price, income, weather, number of residents, and property characteristics. Two alternative rate proposals are designed on the basis of recent experiences for both water and wastewater rates. After specifying a reasonable number (~200) of diverse households populating the city and parameterizing each household's characteristics, every household's consumption selections are simulated for twelve months. This procedure is repeated for both rate systems. Monthly water and wastewater bills are also computed for each household. Most importantly, while balancing the budget of the city utility, we compute the effect of switching rate structures on the welfare of households of differing types. Some of the empirical findings are as follows. In the absence of water scarcity, households of opposing character, such as low versus high income, do not have strong preferences regarding rate structure selection. This changes as water scarcity rises and as water's opportunity costs are allowed to

  10. Decoding Nitric Oxide Release Rates of Amine-Based Diazeniumdiolates

    PubMed Central

    Wang, Yan-Ni; Collins, Jack; Holland, Ryan J.; Keefer, Larry K.; Ivanic, Joseph

    2013-01-01

    Amine-based diazeniumdiolates (NONOates) have garnered widespread use as nitric oxide (NO) donors, and their potential for nitroxyl (HNO) release has more recently been realized. While NO release rates can vary significantly with the type of amine, with half-lives of seconds to days under physiological conditions, there is as yet no way to determine a priori the NO or HNO production rates of a given species, and no discernible trends have manifested other than that secondary amines produce only NO (i.e., no HNO). As a step toward understanding these complex systems, here we describe a procedure for modeling amine-based NONOates in water solvent that provides an excellent correlation (R^2 = 0.94) between experimentally measured dissociation rates of seven secondary amine species and their computed NO release activation energies. The significant difference in behavior of NONOates in the gas and solvent phases is also rigorously demonstrated via explicit additions of quantum mechanical water molecules. The presented results suggest that the as-yet unsynthesized simplest amine-based NONOate, the diazeniumdiolated ammonia anion [H2N-N(O)=NO−], could serve as an unperturbed HNO donor. These results provide a step forward toward the accurate modeling of general NO and/or HNO donors as well as for the identification of tailored prodrug candidates. PMID:23834533

  11. Probabilistic precipitation rate estimates with ground-based radar networks

    NASA Astrophysics Data System (ADS)

    Kirstetter, Pierre-Emmanuel; Gourley, Jonathan J.; Hong, Yang; Zhang, Jian; Moazamigoodarzi, Saber; Langston, Carrie; Arthur, Ami

    2015-03-01

    The uncertainty structure of radar quantitative precipitation estimation (QPE) is largely unknown at fine spatiotemporal scales near the radar measurement scale. By using the WSR-88D radar network and gauge data sets across the conterminous US, an investigation of this subject has been carried out within the framework of the NOAA/NSSL ground radar-based Multi-Radar Multi-Sensor (MRMS) QPE system. A new method, called PRORATE, is proposed for probabilistic QPE using radar observations of rate and typology estimates. Probability distributions of precipitation rates are computed instead of deterministic values, using a model quantifying the relation between radar reflectivity and the corresponding "true" precipitation. The model acknowledges the uncertainty arising from the many factors operative at the radar measurement scale and from the correction algorithm. Ensembles of reflectivity-to-precipitation rate relationships accounting explicitly for precipitation typology were derived at a 5 min/1 km scale. This approach conditions probabilistic quantitative precipitation estimates (PQPE) on the precipitation rate and type. The model components were estimated on the basis of a year-long data sample over the CONUS. This PQPE model provides the basis for precipitation probability maps and the generation of radar precipitation ensembles. Maps of the precipitation exceedance probability for specific thresholds (e.g., precipitation return periods) are computed. Precipitation probability maps are accumulated to the hourly time scale and compare favorably to the deterministic QPE. As an essential property of precipitation, the impact of the temporal correlation on the hourly accumulation is examined. This approach to PQPE can readily apply to other systems, including space-based passive and active sensor algorithms.
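
    As a schematic of the reflectivity-to-rate step that such probabilistic systems generalize (not the MRMS relations themselves), a power law Z = aR^b can be inverted under an ensemble of coefficients to produce a rate distribution rather than a single value. The coefficient spread below is assumed, centered on the textbook Marshall-Palmer values.

      import numpy as np

      def rain_rate_from_dbz(dbz, a, b):
          """Invert the power law Z = a * R**b, with Z (mm^6/m^3) from dBZ."""
          z = 10.0 ** (dbz / 10.0)
          return (z / a) ** (1.0 / b)

      rng = np.random.default_rng(1)
      dbz = 40.0
      # ensemble of plausible coefficients around Marshall-Palmer (a=200, b=1.6)
      a = rng.normal(200.0, 40.0, size=5000).clip(50.0)
      b = rng.normal(1.6, 0.1, size=5000).clip(1.1)
      rates = rain_rate_from_dbz(dbz, a, b)
      print(np.percentile(rates, [5, 50, 95]))   # probabilistic rate estimate (mm/h)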

  12. 77 FR 24198 - Notice of Revocation of Market-Based Rate Authority and Termination of Market-Based Rate Tariffs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-23

    ... Energy Regulatory Commission Notice of Revocation of Market-Based Rate Authority and Termination of Market-Based Rate Tariffs Docket Nos. Aleph One, Inc ER04-686-000 Alpha Domestic Power Trading, LLC... market-based rate authority of the public utilities listed in the caption of that order, which had...

  13. Discovery of novel mGluR1 antagonists: a multistep virtual screening approach based on an SVM model and a pharmacophore hypothesis significantly increases the hit rate and enrichment factor.

    PubMed

    Li, Guo-Bo; Yang, Ling-Ling; Feng, Shan; Zhou, Jian-Ping; Huang, Qi; Xie, Huan-Zhang; Li, Lin-Li; Yang, Sheng-Yong

    2011-03-15

    Development of glutamate non-competitive antagonists of mGluR1 (metabotropic glutamate receptor subtype 1) has attracted much attention in recent years due to their potential therapeutic application for various nervous disorders. Since there is no crystal structure reported for mGluR1, ligand-based virtual screening (VS) methods, typically pharmacophore-based VS (PB-VS), are often used for the discovery of mGluR1 antagonists. Nevertheless, PB-VS usually suffers from a low hit rate and enrichment factor. In this investigation, we established a multistep ligand-based VS approach based on a support vector machine (SVM) classification model and a pharmacophore model. Performance evaluation of these methods in virtual screening against a large independent test set, M-MDDR, shows that the multistep VS approach significantly increases the hit rate and enrichment factor compared with the individual SVM-based VS (SB-VS) and PB-VS methods. The multistep VS approach was then used to screen several large chemical libraries, including PubChem, Specs, and Enamine. Finally, a total of 20 compounds were selected from the top-ranking compounds and advanced to subsequent in vitro and in vivo studies, the results of which will be reported in the near future. PMID:21316965
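
    A minimal sketch of the SVM-classification stage using scikit-learn, assuming precomputed molecular fingerprint arrays; the file names, descriptors, and hyperparameters are hypothetical rather than the authors' settings.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # X: fingerprints (n_mols x n_bits); y: 1 = active antagonist, 0 = inactive
      X = np.load("fingerprints.npy")          # hypothetical files
      y = np.load("labels.npy")

      clf = SVC(kernel="rbf", C=10.0, gamma="scale")
      print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
      clf.fit(X, y)

      # first filter of the multistep screen: keep molecules predicted active,
      # then pass the survivors to the pharmacophore matching step
      X_library = np.load("library_fps.npy")
      keep = clf.predict(X_library) == 1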

  14. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data were first used for validation, followed by more complex reacting base flow validation.

  15. Review of composite propellant burn rate modeling

    NASA Technical Reports Server (NTRS)

    Cohen, N. S.

    1979-01-01

    Steady-state combustion modeling of composite solid propellants is discussed with emphasis on the Beckstead-Derr-Price (BDP) model. The BDP model and some revisions are considered with respect to the analysis of monomodal ammonium perchlorate/inert binder propellants: topics examined include continuity relations, surface area relations, characteristic surface dimension, flame heights, and energy balance. Application of the BDP model to more complicated propellants containing multiple active ingredients is described. These propellants include multimodal, mixed oxidizer, active binder, aluminized, catalyzed, and nitramine propellants. Example cases of modeling (with comparison to experimental data) are presented, and strengths and weaknesses of current modeling approaches are evaluated.

  16. Hierarchical Classes Modeling of Rating Data

    ERIC Educational Resources Information Center

    Van Mechelen, Iven; Lombardi, Luigi; Ceulemans, Eva

    2007-01-01

    Hierarchical classes (HICLAS) models constitute a distinct family of structural models for N-way N-mode data. All members of the family include N simultaneous and linked classifications of the elements of the N modes implied by the data; those classifications are organized in terms of hierarchical, if-then-type relations. Moreover, the models are…

  17. Recurrent personality factors based on trait ratings.

    PubMed

    Tupes, E C; Christal, R E

    1992-06-01

    Intercorrelations among ratings on 35 personality traits, selected as representative of the personality domain, were obtained for eight samples. These samples differed in length of acquaintanceship from 3 days to more than a year; in kind of acquaintanceship from assessment programs in a military training course to a fraternity house situation; in type of subject from airmen with only a high-school education to male and female undergraduate students to first-year graduate students; and in type of rater from very naive persons to clinical psychologists and psychiatrists with years of experience in the evaluation of personality. Centroid or multiple-group factors were extracted and rotated orthogonally to simple structure. For one study, an independent solution was obtained in which analytic rotations were accomplished on an IBM 650 computer using Kaiser's normal varimax criterion. Five fairly strong and recurrent factors emerged from each analysis, labeled as (a) Surgency, (b) Agreeableness, (c) Dependability, (d) Emotional Stability, and (e) Culture. PMID:1635043

  18. Etch rate Modeling and Real-time Control

    NASA Astrophysics Data System (ADS)

    Hershkowitz, N.; Sarfaty, M.; Baum, C.; Harper, M.; Shohet, J. L.

    1997-11-01

    The relatively high process rates in high-density plasma tools, as well as the shrinking thickness of the films, require a fast estimate of the process state in order to implement real-time advanced process control. A fast etch rate estimate (within one second) in a spot size of 1-2 mm, and time-averaged rates across the wafer, are obtained by the combined use of an in-situ two-color laser interferometer and a full-wafer image interferometer, respectively. The gas-phase state is monitored by optical emission spectroscopy and a residual gas analyzer. The magnetically confined ICP tool state, including gas flow, pressure, and RF power to the antenna and the electrostatic chuck, is computer controlled and monitored. The absolute thickness of the film is determined during the process, thus providing an end-point prediction. Splitting the two-color laser beam into two spots on the wafer that are coated with different films provides real-time etch selectivity. The advantages of two-color laser interferometry for real-time process monitoring, development, and control will be described. Langmuir kinetics modeling of the measured etch rates of polysilicon and SiO2 films in Cl2 and CF4 discharges using tool state parameters will be described. The etch rate model enabled us to develop a model-based real-time control algorithm. The achieved real-time control of plasma etch rates of un-patterned SiO2 and polysilicon films will be described. This work is funded by NSF grant No. EEC-8721545.

  19. Tantalum strength model incorporating temperature, strain rate and pressure

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt

    Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  20. Triple-α reaction rate constrained by stellar evolution models

    NASA Astrophysics Data System (ADS)

    Suda, Takuma; Hirschi, Raphael; Fujimoto, Masayuki Y.

    2012-11-01

    We investigate the quantitative constraint on the triple-α reaction rate based on stellar evolution theory, motivated by the recent significant revision of the rate proposed by nuclear physics calculations. Targeted stellar models were computed in order to investigate the impact of that rate in the mass range of 0.8 ≤ M/M_sun ≤ 25 and in the metallicity range between Z = 0 and Z = 0.02. The revised rate has a significant impact on the evolution of low- and intermediate-mass stars, while its influence on the evolution of massive stars (M > 10 M_sun) is minimal. We find that employing the revised rate suppresses helium shell flashes on the AGB phase for stars in the initial mass range 0.8 ≤ M/M_sun ≤ 6, which is contradictory to what is observed. The absence of helium shell flashes is due to the weak temperature dependence of the revised triple-α reaction cross section at the temperatures involved. In our models, it is suggested that the temperature dependence of the cross section should have at least ν > 10 at T = 1-1.2 × 10^8 K, where the cross section is proportional to T^ν. We also derive the helium ignition curve to estimate the maximum cross section that retains the low-mass first red giants. The semi-analytically derived ignition curves suggest that the reaction rate should be less than ~10^-29 cm^6 s^-1 mole^-2 at ~10^7.8 K, which corresponds to about three orders of magnitude larger than that of the NACRE compilation.
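
    The quoted sensitivity ν is the local logarithmic slope of the rate with respect to temperature, ν = d ln λ / d ln T, so it can be estimated from two tabulated points; the rate values below are hypothetical and serve only to show the arithmetic.

      import numpy as np

      def temperature_exponent(rate1, rate2, t1, t2):
          """Local exponent nu in rate ~ T**nu from two tabulated (T, rate) points."""
          return np.log(rate2 / rate1) / np.log(t2 / t1)

      # hypothetical tabulated triple-alpha rates at 1.0e8 K and 1.2e8 K
      print(temperature_exponent(1.0e-12, 8.0e-12, 1.0e8, 1.2e8))  # ~11.4, i.e. nu > 10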

  2. Software for emission rate modeling of accidental toxic releases

    SciTech Connect

    Kumar, A.; Vashisth, S.

    1999-08-01

    This book addresses the requirements of Section 112(r) of the Clean Air Act Amendments of 1990. The software is based on the guidelines released by the USEPA and includes a manual and proprietary software on CD-ROM. Contents include release scenario description (two-phase and single-phase choked/unchoked gas release, two-phase pressurized and refrigerated liquid release, single-phase high- and low-volatility liquid release); emission rate model development for each release class; and software design, evaluation, and application.

  3. Empirical rate equation model and rate calculations of hydrogen generation for Hanford tank waste

    SciTech Connect

    HU, T.A.

    1999-07-13

    Empirical rate equations are derived to estimate hydrogen generation based on chemical reactions, radiolysis of water and organic compounds, and corrosion processes. A comparison of the generation rates observed in the field with the rates calculated for twenty-eight tanks shows agreement within a factor of two to three.
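
    The report's empirical correlations are not reproduced in the abstract; the overall structure, a total rate summing the contributing mechanisms plus a factor-of-two-to-three agreement check, can be sketched as:

```python
import numpy as np

def total_h2_rate(r_chemical, r_radiolysis_water, r_radiolysis_organic, r_corrosion):
    """Total hydrogen generation rate as the sum of the four mechanisms
    named in the abstract (hypothetical units, e.g. mol/day)."""
    return r_chemical + r_radiolysis_water + r_radiolysis_organic + r_corrosion

observed = np.array([1.2, 0.5, 3.0])                 # hypothetical field rates
calculated = np.array([total_h2_rate(0.3, 0.4, 0.1, 0.1),
                       total_h2_rate(0.1, 0.2, 0.05, 0.05),
                       total_h2_rate(0.8, 0.9, 0.2, 0.3)])
ratio = observed / calculated
print(ratio, (ratio < 3.0) & (ratio > 1.0 / 3.0))    # within a factor of 2-3?
```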

  4. Comparison of field theory models of interest rates with market data

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Srikant, Marakani

    2004-03-01

    We calibrate and test several variants of field theory models of the interest rate with data from Eurodollar futures. Models based on psychological factors are seen to provide the best fit to the market. We make a model-independent determination of the volatility function of the forward rates from market data.

  5. Individual-Based Completion Rates for Apprentices. Technical Paper

    ERIC Educational Resources Information Center

    Karmel, Tom

    2011-01-01

    Low completion rates for apprentices and trainees have received considerable attention recently and it has been argued that NCVER seriously understates completion rates. In this paper Tom Karmel uses NCVER data on recommencements to estimate individual-based completion rates. It is estimated that around one-quarter of trade apprentices swap…

  6. Dynamic mathematical model of high rate algal ponds (HRAP).

    PubMed

    Jupsin, H; Praet, E; Vasel, J L

    2003-01-01

    This article presents a mathematical model to describe high-rate algal ponds (HRAPs). The hydrodynamic behavior of the reactor is described as completely mixed tanks in series with recirculation. The hydrodynamic pattern is combined with a subset of River Water Quality Model 1 (RWQM1), including the main processes in the liquid phase. Our aim is to develop models for WSPs and aerated lagoons as well, but we focused on HRAPs first for several reasons: sediments are usually less abundant in HRAPs and can be neglected; stratification is not observed, so state variables are constant over a reactor cross section; and, due to the system's geometry, the reactor is quite similar to a plug-flow reactor with recirculation, with a simple advection term. The model is based on mass balances and includes the following processes: phytoplankton growth with NO3- and NO2-, and death; aerobic growth of heterotrophs with NO3- and NH4+, and respiration; anoxic growth of heterotrophs with NO3- and NO2-, and anoxic respiration; and growth of nitrifiers (two stages) and respiration. The differences with regard to RWQM1 are that we included a limiting term associated with inorganic carbon in the growth rate of algae and nitrifiers, gas transfers are taken into account by the familiar Adeney equation, and a subroutine calculates light intensity at the water surface. This article presents our first simulations. PMID:14510211
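
    The hydrodynamic backbone, completely mixed tanks in series with recirculation, can be sketched independently of the RWQM1 biokinetics; the parameter values below are hypothetical:

```python
import numpy as np

def loop_reactor_tracer(n_tanks=8, q=1.0, r=5.0, v=1.0, t_end=20.0, dt=1e-3):
    """Tracer response of n completely mixed tanks in a loop: fresh inflow q,
    recirculation flow r, tank volume v; effluent q leaves the last tank."""
    c = np.zeros(n_tanks)
    c[0] = 1.0                              # unit tracer impulse in tank 0
    times = np.arange(0.0, t_end, dt)
    outlet = np.empty_like(times)
    for i in range(times.size):
        feed = np.roll(c, 1) * (q + r)      # each tank is fed by the previous one
        feed[0] = c[-1] * r                 # tank 0 receives only the recycle
        c = c + (feed - c * (q + r)) / v * dt
        outlet[i] = c[-1]
    return times, outlet

t, cout = loop_reactor_tracer()
print(cout.max(), cout[-1])                 # damped recirculation peaks, then washout
```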

  7. Program documentation: Surface heating rate of thin skin models (THNSKN)

    NASA Technical Reports Server (NTRS)

    Mcbryde, J. D.

    1975-01-01

    Program THNSKN computes the mean heating rate at up to 100 locations on the surface of thin-skin transient heating rate models. Output is printed in tabular form and consists of time-history tabulations of temperatures, average temperatures, heat loss without conduction correction, mean heating rate, least-squares heating rate, and the percent standard error of the least-squares heating rates. The input tape used is produced by the program EHTS03.
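
    The least-squares heating rate in such thin-skin reductions follows from q = ρ·c·δ·dT/dt; a sketch with hypothetical, stainless-steel-like skin properties is:

```python
import numpy as np

def thin_skin_heating_rate(t, temp, rho=8030.0, c_p=500.0, thickness=7.6e-4):
    """Least-squares heating rate q = rho * c_p * thickness * dT/dt (W/m^2),
    neglecting conduction losses, analogous to the program's least-squares output."""
    dT_dt = np.polyfit(t, temp, 1)[0]       # straight-line fit to T(t)
    return rho * c_p * thickness * dT_dt

t = np.linspace(0.0, 2.0, 21)                         # s
temp = 300.0 + 12.0 * t + 0.2 * np.sin(40.0 * t)      # K, synthetic noisy trace
print(thin_skin_heating_rate(t, temp))                # ~3.7e4 W/m^2
```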

  8. Model based manipulator control

    NASA Technical Reports Server (NTRS)

    Petrosky, Lyman J.; Oppenheim, Irving J.

    1989-01-01

    The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.
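
    The control law itself is not given in the abstract; for a single link, the essence of model-based (computed-torque) control, with hypothetical plant parameters, can be sketched as:

```python
import numpy as np

m, l, g, b = 1.0, 0.5, 9.81, 0.05          # hypothetical link parameters

def computed_torque(q, qd, q_ref, kp=40.0, kd=12.0):
    """Use the plant model to cancel gravity and damping, leaving linear
    error dynamics: tau = M*v + b*qd + G(q), with v a PD law."""
    v = kp * (q_ref - q) - kd * qd
    return (m * l * l) * v + b * qd + m * g * l * np.sin(q)

q, qd, dt = 0.6, 0.0, 1e-3                  # start 0.6 rad from the target
for _ in range(5000):
    tau = computed_torque(q, qd, q_ref=0.0)
    qdd = (tau - b * qd - m * g * l * np.sin(q)) / (m * l * l)
    qd += qdd * dt
    q += qd * dt
print(round(q, 4), round(qd, 4))            # converges near (0, 0)
```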

  9. 9 CFR 592.510 - Base time rate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Base time rate. 592.510 Section 592.510 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Fees and Charges § 592.510 Base time rate....

  10. 9 CFR 592.510 - Base time rate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Base time rate. 592.510 Section 592.510 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Fees and Charges § 592.510 Base time rate....

  11. Reasoning with base rates is routine, relatively effortless, and context dependent.

    PubMed

    Pennycook, Gordon; Thompson, Valerie A

    2012-06-01

    We tested models of base rate "neglect" using a novel paradigm. Participants (N = 62) judged the probability that a hypothetical person belonged to one of two categories (e.g., nurse/doctor) on the basis of either a personality description alone (NoBR) or the personality description and a base rate probability (BR). When base rates and descriptions were congruent, judgments in the BR condition were higher and more uniform than those in the NoBR condition. In contrast, base rates had a polarizing effect on judgments when they were incongruent with the descriptions, such that estimates were either consistent with the base rates or discrepant with them. These data suggest that the form of base rate use (i.e., whether base rates will be integrated with diagnostic information) is context dependent. In addition, judgments made under instructions to respond intuitively were influenced by the base rates and took the same length of time in the two conditions. These data suggest that the use of base rates is routine and effortless and that base rate "neglect" is really a mixture of two strategies, one that is informed primarily by the base rate and the other by the personality description. PMID:22427266

  12. Satellite altimetry based rating curves throughout the entire Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.

    2013-05-01

    The Amazon basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the amount of data available is poor, over both time and space scales, due to factors like the basin's size and access difficulties. One of the major obstacles is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation throughout the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2010. The stage dataset is made of ~800 altimetry series at ENVISAT and JASON-2 virtual stations, spanning 2002 to 2010. In the present work we present the benefits of using stochastic methods instead of deterministic ones to determine a dataset of rating curve parameters that is consistent throughout the entire Amazon basin. The rating curve parameters were computed using a parameter optimization technique based on a Markov chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, but also their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. The rating curve determination also includes the error in the discharge estimates from the MGB-IPH model. These MGB-IPH errors come from either errors in the discharge derived from the gage readings or errors in the satellite rainfall estimates. The present
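
    The deterministic skeleton that the MCMC scheme replaces is the classical power-law rating curve Q = a(h - h0)^b; a least-squares sketch with hypothetical stage-discharge pairs (the paper instead samples the posterior of a, h0, b) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    """Classical stage-discharge rating curve Q = a * (h - h0)**b."""
    return a * np.clip(h - h0, 1e-6, None) ** b

h = np.array([10.2, 11.0, 12.1, 13.4, 14.8, 16.0])         # stage, m
q = np.array([1270., 2460., 4360., 6870., 9830., 12550.])  # discharge, m^3/s

popt, pcov = curve_fit(rating_curve, h, q, p0=(800.0, 8.5, 1.4))
print(popt)                      # point estimates of (a, h0, b)
print(np.sqrt(np.diag(pcov)))    # crude errors; MCMC gives full posteriors
```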

  13. Item Response Models for Local Dependence among Multiple Ratings

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Su, Chi-Ming; Qiu, Xue-Lan

    2014-01-01

    Ratings given to the same item response may have a stronger correlation than those given to different item responses, especially when raters interact with one another before giving ratings. The rater bundle model was developed to account for such local dependence by forming multiple ratings given to an item response as a bundle and assigning…

  14. ODP based UPT model

    NASA Astrophysics Data System (ADS)

    Berre, A. J.; Handegard, T.; Loevnes, K.; Skjellaug, B.; Aagedal, J. O.

    1994-01-01

    The report documents experiments with object-oriented modelling of Universal Personal Telecommunication (UPT) in a telecommunication environment based on the basic principles of open distributed processing (ODP). Through the object-oriented analysis and design technique Object Modelling Technique (OMT), the service is modelled as a collection of software objects distributed across multiple network nodes. A software platform provides the mechanisms for application objects to interact. The platform builds on the basic facilities in the native computing and communication environments, but hides the heterogeneity of these environments and provides distribution transparency at the application programmer's interface. The report closes with some thoughts about applying the paradigm of ODP to intelligent networks (IN), and with the experience of OMT as a modelling technique for real-time distributed applications.

  15. Solutions of two-factor models with variable interest rates

    NASA Astrophysics Data System (ADS)

    Li, Jinglu; Clemons, C. B.; Young, G. W.; Zhu, J.

    2008-12-01

    The focus of this work is on numerical solutions to two-factor option pricing partial differential equations with variable interest rates. Two interest rate models, the Vasicek model and the Cox-Ingersoll-Ross model (CIR), are considered. Emphasis is placed on the definition and implementation of boundary conditions for different portfolio models, and on appropriate truncation of the computational domain. An exact solution to the Vasicek model and an exact solution for the price of bonds convertible to stock at expiration under a stochastic interest rate are derived. The exact solutions are used to evaluate the accuracy of the numerical simulation schemes. For the numerical simulations the pricing solution is analyzed as the market completeness decreases from the ideal complete level to one with higher volatility of the interest rate and a slower mean-reverting environment. Simulations indicate that the CIR model yields more reasonable results than the Vasicek model in a less complete market.
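
    The two short-rate dynamics named in the abstract are easy to sketch by Euler-Maruyama simulation; the parameter values here are illustrative only:

```python
import numpy as np

def simulate_short_rate(model, r0=0.05, a=0.5, b=0.05, sigma=0.08,
                        t_end=5.0, n=5000, seed=0):
    """Vasicek: dr = a(b - r)dt + sigma dW;
    CIR:     dr = a(b - r)dt + sigma*sqrt(max(r, 0)) dW."""
    rng = np.random.default_rng(seed)
    dt = t_end / n
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        vol = sigma if model == "vasicek" else sigma * np.sqrt(max(r[i], 0.0))
        r[i + 1] = r[i] + a * (b - r[i]) * dt + vol * dw
    return r

print(simulate_short_rate("vasicek")[-1])  # Gaussian rate, can go negative
print(simulate_short_rate("cir")[-1])      # square-root diffusion damps vol near 0
```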

  16. Delineating the Average Rate of Change in Longitudinal Models

    ERIC Educational Resources Information Center

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…

  17. Bioremediation of oil-contaminated soil -- A rate model

    SciTech Connect

    Li, K.Y.; Zhang, Y.; Xu, T.

    1995-12-31

    Three rate equations, a modified Monod equation and two mass transfer rate equations, were used to calculate the biodegradation rate, oxygen transfer rate and oil transfer rate during a bioremediation process of oil-contaminated soil. Based on experimental rate constants, these three rates were calculated and compared. It was found that the bioremediation rate of oil-contaminated soil could be controlled by the mass transfer process of oil into the aqueous solution (0.12 mg BOD/(L·h)). When the oil transfer rate is enhanced by at least 10 times, the oxygen transfer process (0.1-1.0 mg BOD/(L·h)) becomes the rate-controlling step. For most cases, the biodegradation of oil in aqueous solution is not the limiting step unless the microbial population in the aqueous solution is less than 100 mg VSS/L.
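
    The rate-comparison logic can be sketched with a Monod term and the two cited transfer rates; the numbers below are either the abstract's order-of-magnitude values or invented:

```python
def monod_rate(s, mu_max=0.25, k_s=5.0, x_biomass=150.0):
    """Monod-type biodegradation rate (mg BOD/(L*h)) for substrate s (mg/L)
    and biomass x_biomass (mg VSS/L); constants are hypothetical."""
    return mu_max * x_biomass * s / (k_s + s)

oil_transfer = 0.12        # mg BOD/(L*h), value cited in the abstract
oxygen_transfer = 0.5      # mg BOD/(L*h), within the cited 0.1-1.0 range
biodegradation = monod_rate(s=2.0)

controlling = min(oil_transfer, oxygen_transfer, biodegradation)
print(biodegradation, controlling)   # oil dissolution is the slowest step
```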

  18. The Rasch Rating Model and the Disordered Threshold Controversy

    ERIC Educational Resources Information Center

    Adams, Raymond J.; Wu, Margaret L.; Wilson, Mark

    2012-01-01

    The Rasch rating (or partial credit) model is a widely applied item response model that is used to model ordinal observed variables that are assumed to collectively reflect a common latent variable. In the application of the model there is considerable controversy surrounding the assessment of fit. This controversy is most notable when the set of…

  19. A model for the burning rates of composite propellants

    NASA Technical Reports Server (NTRS)

    Cohen, N. S.; Strand, L. D.

    1980-01-01

    An analytical model of the steady-state burning of composite solid propellants is presented. An improved burning rate model is achieved by incorporating an improved AP monopropellant model, a separate energy balance for the binder in which a portion of the diffusion flame is used to heat the binder, proper use of the binder regression rate in the model, and a model for the combustion of the energetic binder component of CMDB propellants. Also, an improved correlation and model of aluminum agglomeration is developed which properly describes compositional trends.

  20. Teachers' Characteristics and Ratings for Evidence-Based Behavioral Interventions

    ERIC Educational Resources Information Center

    Stormont, Melissa; Reinke, Wendy; Herman, Keith

    2011-01-01

    The vast majority of schools today are not prepared to support children's social behavior needs. One challenge is that teachers may not be knowledgeable of evidence-based practices that can be utilized with children. This study explored teachers' agreement ratings for evidence-based and nonevidence-based behavior management practices for children…

  1. Model-Based Systems

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    2007-01-01

    Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult to maintain as several systems are combined into higher-level systems, are maintained over decades, and evolve technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data. They interrelate all of the data to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. The consistency and integrity of the model is assured; therefore, the consistency and integrity of the various specification documents is ensured. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed and how needs are being addressed by international standards writing teams.

  2. Development of an Interview-Based Geriatric Depression Rating Scale.

    ERIC Educational Resources Information Center

    Jamison, Christine; Scogin, Forrest

    1992-01-01

    Developed interview-based Geriatric Depression Rating Scale (GDRS) and administered 35-item GDRS to 68 older adults with range of affective disturbance. Found scale to have internal consistency and split-half reliability comparable to those of Hamilton Rating Scale for Depression and Geriatric Depression Scale. Concurrent validity, construct…

  3. Rate-based ABR flow control using two timescale SPSA

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Shalabh; Fu, Michael C.; Marcus, Steven I.

    1999-08-01

    In this paper, a two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm is developed and applied to closed-loop rate-based available bit rate (ABR) flow control. The relevant convergence results are stated and explained. Numerical experiments demonstrate fast convergence even in the presence of significant delays and a large number of parameterized policy levels.
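
    The two-timescale coupling of the paper's algorithm is not reproduced here, but the SPSA core, a gradient estimate from only two function evaluations per step, can be sketched as:

```python
import numpy as np

def spsa_minimize(f, theta, n_iter=500, a=0.2, c=0.1, seed=0):
    """Basic SPSA: perturb all coordinates simultaneously with a random
    +/-1 vector; two evaluations of f yield the full gradient estimate."""
    rng = np.random.default_rng(seed)
    for k in range(1, n_iter + 1):
        ak, ck = a / k**0.602, c / k**0.101      # Spall's standard gain decays
        delta = rng.choice([-1.0, 1.0], size=theta.size)
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2.0 * ck * delta)
        theta = theta - ak * g_hat
    return theta

loss = lambda th: np.sum((th - np.array([0.3, -0.2]))**2)   # toy objective
print(spsa_minimize(loss, np.zeros(2)))                      # -> [0.3, -0.2]
```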

  4. Additive-multiplicative rates model for recurrent events.

    PubMed

    Liu, Yanyan; Wu, Yuanshan; Cai, Jianwen; Zhou, Haibo

    2010-07-01

    Recurrent events are frequently encountered in biomedical studies. Evaluating the covariate effects on the marginal recurrent event rate is of practical interest. There are mainly two types of rate models for recurrent event data: the multiplicative rates model and the additive rates model. We consider a more flexible additive-multiplicative rates model for the analysis of recurrent event data, wherein some covariate effects are additive while others are multiplicative. We formulate estimating equations for estimating the regression parameters. The estimators for these regression parameters are shown to be consistent and asymptotically normally distributed under appropriate regularity conditions. Moreover, the estimator of the baseline mean function is proposed and its large sample properties are investigated. We also conduct simulation studies to evaluate the finite sample behavior of the proposed estimators. A medical study of patients with cystic fibrosis suffering from recurrent pulmonary exacerbations is provided as an illustration of the proposed method. PMID:20229314

  5. Markov models and the ensemble Kalman filter for estimation of sorption rates.

    SciTech Connect

    Vugrin, Eric D.; McKenna, Sean Andrew; Vugrin, Kay White

    2007-09-01

    Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve the accuracy of the rate estimation by as much as an order of magnitude.
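
    A single EnKF analysis step of the perturbed-observation type (the specific variant used in the report is not stated in the abstract) can be sketched as:

```python
import numpy as np

def enkf_update(ensemble, y_obs, h_mat, obs_var, rng):
    """Perturbed-observation EnKF analysis step.
    ensemble: (n_members, n_state) forecast; h_mat: linear observation operator."""
    dev = ensemble - ensemble.mean(axis=0)
    hx = ensemble @ h_mat.T
    hx_dev = hx - hx.mean(axis=0)
    n = len(ensemble) - 1
    cross_cov = dev.T @ hx_dev / n                        # P H^T
    innov_cov = hx_dev.T @ hx_dev / n + obs_var * np.eye(h_mat.shape[0])
    gain = cross_cov @ np.linalg.inv(innov_cov)           # Kalman gain
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=hx.shape)
    return ensemble + (y_pert - hx) @ gain.T

rng = np.random.default_rng(1)
ens = rng.normal([0.5, 0.1], 0.2, size=(50, 2))   # biased prior (sorption, desorption)
h = np.array([[1.0, 0.0]])                        # only the first rate is observed
post = enkf_update(ens, np.array([0.3]), h, 0.01**2, rng)
print(post.mean(axis=0))                          # first rate pulled toward 0.3
```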

  6. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  7. New model describing the dynamical behaviour of penetration rates

    NASA Astrophysics Data System (ADS)

    Tashiro, Tohru; Minagawa, Hiroe; Chiba, Michiko

    2013-02-01

    We propose a hierarchical logistic equation as a model to describe the dynamical behaviour of the penetration rate of a popular product. In this model, a memory that does not exist in the logistic model is considered: how many people who already possess the product a person who does not yet possess it has met. As an application, we apply this model to iPod sales data, and find that this model can approximate the data much better than the logistic equation.
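
    The hierarchical memory term of the paper is not reproduced here; the baseline it generalizes, the plain logistic equation for a penetration rate p, is simply:

```python
import numpy as np

def logistic_penetration(p0=0.01, growth=1.2, t_end=10.0, dt=0.01):
    """dp/dt = growth * p * (1 - p): adoption slows as saturation nears."""
    steps = int(t_end / dt)
    p = np.empty(steps)
    p[0] = p0
    for i in range(steps - 1):
        p[i + 1] = p[i] + growth * p[i] * (1.0 - p[i]) * dt
    return p

p = logistic_penetration()
print(p[0], p[p.size // 2], p[-1])   # S-curve: slow start, rapid middle, saturation
```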

  8. Modeling Large-Strain, High-Rate Deformation in Metals

    SciTech Connect

    Lesuer, D R; Kay, G J; LeBlanc, M M

    2001-07-20

    The large strain deformation response of 6061-T6 and Ti-6Al-4V has been evaluated over a range in strain rates from 10^-4 s^-1 to over 10^4 s^-1. The results have been used to critically evaluate the strength and damage components of the Johnson-Cook (JC) material model. A new model that addresses the shortcomings of the JC model was then developed and evaluated. The model is derived from the rate equations that represent deformation mechanisms active during moderate and high rate loading. Another model that accounts for the influence of void formation on yield and flow behavior of a ductile metal (the Gurson model) was also evaluated. The characteristics and predictive capabilities of these models are reviewed.
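
    The JC flow-stress form being evaluated is standard; a sketch with constants often quoted for 6061-T6 aluminum (treat them as illustrative, not the paper's calibration) is:

```python
import numpy as np

def johnson_cook_stress(eps, eps_rate, T, A=324e6, B=114e6, n=0.42,
                        C=0.002, m=1.34, eps_rate0=1.0, T_room=293.0, T_melt=925.0):
    """JC flow stress (Pa): strain hardening x rate sensitivity x thermal softening."""
    t_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * eps**n) * (1.0 + C * np.log(eps_rate / eps_rate0))
            * (1.0 - t_star**m))

for rate in (1e-4, 1.0, 1e4):           # the strain-rate span cited above
    print(rate, johnson_cook_stress(eps=0.1, eps_rate=rate, T=400.0))
```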

  9. Model Based Definition

    NASA Technical Reports Server (NTRS)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  10. Cooling rate based on schreibersite growth for the Emery mesosiderite

    NASA Technical Reports Server (NTRS)

    Kulpecz, A. A., Jr.; Hewins, R. H.

    1978-01-01

    Computer simulation of diffusion-controlled growth of the large Ni-rich grains of schreibersite found in the Emery mesosiderite indicates that exsolution from kamacite occurred during cooling at the rate of 0.1 °C/Myr. This finding agrees with the mesosiderite cooling rate determined by Powell (1969) from taenite-kamacite data. The cooling rate is the lowest found for any meteorite group, and implications for the cooling history, with a possibility of reheating, are considered. The procedure for computing a family of cooling rate curves is based on Randich's (1975) method.

  11. Model analysis of the link between interest rates and crashes

    NASA Astrophysics Data System (ADS)

    Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft

    2016-09-01

    We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that these bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas lack of liquidity tends to be the primary cause of failures under lower rates.

  12. A Modeling Perspective on Interpreting Rates of Change in Context

    ERIC Educational Resources Information Center

    Ärlebäck, Jonas B.; Doerr, Helen M.; O'Neil, AnnMarie H.

    2013-01-01

    Functions provide powerful tools for describing change, but research has shown that students find difficulty in using functions to create and interpret models of changing phenomena. In this study, we drew on a models and modeling perspective to design an instructional approach to develop students' abilities to describe and interpret rates of…

  13. Stable large-scale CO2 storage in defiance of an energy system based on renewable energy - Modelling the impact of varying CO2 injection rates on reservoir behavior

    NASA Astrophysics Data System (ADS)

    Bannach, Andreas; Hauer, Rene; Martin, Streibel; Stienstra, Gerard; Kühn, Michael

    2015-04-01

    The IPCC Report 2014 strengthens the need for CO2 storage as part of CCS or BECCS to reach ambitious climate goals despite growing energy demand in the future. The further expansion of renewable energy sources is a second major pillar. As is the case in Germany today, the weather becomes the controlling factor for electricity production by fossil-fuelled power plants, which leads to significant fluctuations of CO2 emissions; these fluctuations would be traced in injection rates if the CO2 were captured and stored. To analyse the impact of such changing injection rates on a CO2 storage reservoir, two reservoir simulation models are applied: (a) a smaller reservoir model, proven by decades of gas storage operations, to investigate the dynamic effects in the early stage of storage filling (initial aquifer displacement); and (b) an anticline structure big enough to accommodate a total amount of ≥100 megatons of CO2, to investigate the dynamic effects over the entire operational lifetime of the storage, with particular consideration of very high filling levels (highest aquifer compression); for this purpose a reservoir model was generated. The defined yearly injection rate schedule is based on a study performed on behalf of IZ Klima (DNV GL, 2014). According to this study, the exclusive consideration of a pool of coal-fired power plants causes the most intensely varying CO2 emissions and hence bounds the variations of a system that includes industry-driven CO2 production. Besides short-term changes (daily and weekly cycles), seasonal influences are also taken into account. Simulation runs cover a variation of injection points (well locations at the top vs. at the flank of the structure) and some other largely unknown reservoir parameters such as aquifer size and aquifer mobility. Simulation of a 20-year storage operation is followed by a post-operational shut-in phase covering approximately 500 years to assess possible effects of changing injection rates on the long-term reservoir

  14. Molecule-based approach for computing chemical-reaction rates in upper atmosphere hypersonic flows.

    SciTech Connect

    Gallis, Michail A.; Bond, Ryan Bomar; Torczynski, John Robert

    2009-08-01

    This report summarizes the work completed during FY2009 for the LDRD project 09-1332 'Molecule-Based Approach for Computing Chemical-Reaction Rates in Upper-Atmosphere Hypersonic Flows'. The goal of this project was to apply a recently proposed approach for the Direct Simulation Monte Carlo (DSMC) method to calculate chemical-reaction rates for high-temperature atmospheric species. The new DSMC model reproduces measured equilibrium reaction rates without using any macroscopic reaction-rate information. Since it uses only molecular properties, the new model is inherently able to predict reaction rates for arbitrary nonequilibrium conditions. DSMC non-equilibrium reaction rates are compared to Park's phenomenological non-equilibrium reaction-rate model, the predominant model for hypersonic-flow-field calculations. For near-equilibrium conditions, Park's model is in good agreement with the DSMC-calculated reaction rates. For far-from-equilibrium conditions, corresponding to a typical shock layer, the difference between the two models can exceed 10 orders of magnitude. The DSMC predictions are also found to be in very good agreement with measured and calculated non-equilibrium reaction rates. Extensions of the model to reactions typically found in combustion flows and ionizing reactions are also found to be in very good agreement with available measurements, offering strong evidence that this is a viable and reliable technique to predict chemical reaction rates.
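
    Park's phenomenological model evaluates an Arrhenius fit at an effective temperature blending the translational and vibrational temperatures; a sketch with dissociation-like constants (illustrative, not a vetted data set) is:

```python
import numpy as np

def arrhenius(T, A, n, theta_d):
    """Equilibrium rate coefficient k = A * T^n * exp(-theta_d / T)."""
    return A * T**n * np.exp(-theta_d / T)

def park_rate(T_trans, T_vib, A, n, theta_d, q=0.7):
    """Park-style two-temperature rate: Arrhenius form evaluated at the
    geometrically averaged temperature Ta = T^q * Tv^(1-q)."""
    return arrhenius(T_trans**q * T_vib**(1.0 - q), A, n, theta_d)

A, n, theta_d = 7.0e21, -1.6, 113200.0   # N2-dissociation-like constants
print(park_rate(20000.0, 20000.0, A, n, theta_d))  # equilibrium limit
print(park_rate(20000.0, 4000.0, A, n, theta_d))   # vibrationally cold: much slower
```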

  15. Surface analyses and modelling of rate multiplicity and instabilities

    SciTech Connect

    Harold, M.P.; Conner, W.C.

    1990-11-01

    Catalytic partial and complete oxidations of chemical species are a quite important class of reactions in the production of many commercial chemicals and in the elimination of environmental pollutants. This research focuses on a sub-class of oxidation reactions in which CO is a key player (be it a reactant, intermediate, or product) in the catalytic sequence and chemistry. The first three years of our research have followed two parallel paths which have a common destination and which together provide a framework for the proposed new research. The first path has involved the development of a combined experimental/modelling and analysis methodology for constructing feasible mechanistic sequences and their corresponding kinetic models of catalytic reactions that exhibit multiple rate behavior. The rather well-studied Pt-catalyzed CO oxidation served as the test reaction. Our approach involves a measurement part, covering basic kinetic features (apparent reaction orders, activation energy) and multiplicity features (location of ignition and extinction points) over a wide range of conditions (catalyst temperature, total pressure, feed composition), and a kinetic modelling part, in which potential reaction sequences are constructed and screened based on their ability to predict the experimentally observed kinetic and multiplicity features over a wide range of conditions. The second path has involved the development of an under-utilized (in catalysis) spectroscopic technique, Fourier transform infrared emission spectroscopy (FTIRES), to monitor the surface of a catalyst during reaction. Main accomplishments from both studies are summarized.

  16. On Estimation of GPS-based Indonesian Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Susilo, Susilo; Abidin, Hasanuddin Z.; Meilano, Irwan; Sapiie, Benyamin; Wijanarto, Antonius B.

    2016-04-01

    Using the GPS-derived rates at survey-mode (sGPS) stations and continuous GPS stations across the Indonesian region, covering the 22-year period from 1993 to 2014, linear deformation velocities with an accuracy of about 2 to 3 mm/year are derived. These velocities are corrected for the coseismic and postseismic deformation caused by significant earthquakes in that period. In this study, we use this GPS velocity field to construct a crustal strain rate map without yet including a physical model. An interpolation method was used to compute the velocity model. By differentiation of the continuous velocity model, we derive the strain rate map of Indonesia. At present, our result is only the magnitude of the strain rate. The Indonesian strain rate map is very important for studying the deformation characteristics in the region and for establishing a deformation (velocity) model to support the implementation of the Indonesian Geospatial Reference System 2013 (IGRS 2013). This is a new semi-dynamic geocentric datum of Indonesia, which uses the global ITRF2008 reference frame with a reference epoch of 1 January 2012. A deformation (velocity) model is required to transform coordinates from an observation epoch to or from this reference epoch.

  17. The Random-Effect Generalized Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wu, Shiu-Lien

    2011-01-01

    Rating scale items have been widely used in educational and psychological tests. These items require people to make subjective judgments, and these subjective judgments usually involve randomness. To account for this randomness, Wang, Wilson, and Shih proposed the random-effect rating scale model in which the threshold parameters are treated as…

  18. STATISTICAL MODEL OF LABORATORY DEATH RATE MEASUREMENTS FOR AIRBORNE BACTERIA

    EPA Science Inventory

    From 270 published laboratory airborne death rate measurements, two regression models relating the death rate constant for 15 bacterial species to aerosol age in the dark, Gram reaction, temperature, and an evaporation factor which is a function of RH and temperature were obtaine...

  19. Rate dependent constitutive models for fiber reinforced polymer composites

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.

    1990-01-01

    A literature survey was conducted to assess the state of the art in rate-dependent constitutive models for continuous fiber reinforced polymer matrix composite (PMC) materials. Several recent models which include formulations for describing plasticity, viscoelasticity, viscoplasticity, and rate-dependent phenomena such as creep and stress relaxation are outlined and compared. When appropriate, these comparisons include brief descriptions of the mathematical formulations, the test procedures required for generating material constants, and details of available data comparing test results to analytical predictions.

  20. Earthquake potential and magnitude limits inferred from a geodetic strain-rate model for southern Europe

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Bird, P.; Jackson, D. D.

    2016-04-01

    The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models that were based mainly on historical earthquakes and active fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, which indicate that the authors of SHARE were aware of the potentially higher seismic activities in the zones. For one group, the tectonic moment rate is lower than the seismic moment rates forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from tectonic moment rate, but lower than what the historical data show. For the other two

  1. The Wealth Distribution Model with the Kickback Rate

    NASA Astrophysics Data System (ADS)

    Zhang, Yujie; He, Mingfeng

    We define an asset exchange model by adding a kickback rate to the trade, and discuss the Gini index for different kickback rates. It is found that for every kickback rate the related Gini index tends to a steady value; thus, a curve of Gini index versus kickback rate may be obtained. Furthermore, it is shown that the Gini index decreases as the kickback rate increases, so that the fairness of the social wealth distribution improves. The Gini index reaches a minimum when the kickback rate is 0.58 and then increases, as further growth of the kickback rate degrades the fairness of the social wealth distribution. However, in all situations the Gini index with a kickback rate is less than the one without kickback. This means that the introduction of a kickback rate is favorable to raising the fairness of the wealth distribution. We also define a moral index, similar to the Gini index, to weigh differences in the social moral level, and find that these differences increase with time in the model with a kickback rate.

  2. First principles modeling of nonlinear incidence rates in seasonal epidemics.

    PubMed

    Ponciano, José M; Capistrán, Marcos A

    2011-02-01

    In this paper we used a general stochastic processes framework to derive from first principles the incidence rate function that characterizes epidemic models. We investigate a particular case, the Liu-Hethcote-van den Driessche (LHD) incidence rate function, which results from modeling the number of successful transmission encounters as a pure birth process. This derivation also takes into account heterogeneity in the population with regard to the per-individual transmission probability. We fitted a deterministic SIRS model with both the classical and the LHD incidence rate functions to time series of the number of children infected with respiratory syncytial virus in Banjul, Gambia, and Turku, Finland. We also fitted a deterministic SEIR model with both incidence rate functions to the famous measles data sets from the UK cities of London and Birmingham. Two lines of evidence supported our conclusion that the model with the LHD incidence rate may very well be a better description of the seasonal epidemic processes studied here. First, our model was repeatedly selected as best according to two different information criteria and two different likelihood formulations. The second line of evidence is qualitative in nature: contrary to what the SIRS model with classical incidence rate predicts, the solution of the deterministic SIRS model with LHD incidence rate will reach either the disease-free equilibrium or the endemic equilibrium depending on the initial conditions. These findings, along with computer-intensive simulations of the models' Poincaré map with environmental stochasticity, contributed to attaining a clear separation of the roles of the environmental forcing and the mechanics of the disease transmission in shaping seasonal epidemic dynamics. PMID:21379320
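
    A sketch of an SIRS model with an LHD-type incidence beta*I^p*S/(1 + alpha*I^q) is below; the parameter values are invented (the paper's fitted values are not reproduced), chosen so that the end state depends on the initial conditions:

```python
def sirs_lhd(S0=0.99, I0=0.01, beta=50.0, gamma=1.0, xi=1.0,
             p=2.0, q=1.0, alpha=10.0, t_end=200.0, dt=1e-3):
    """SIRS dynamics with Liu-Hethcote-van den Driessche incidence
    beta * I**p * S / (1 + alpha * I**q); plain Euler integration."""
    S, I = S0, I0
    for _ in range(int(t_end / dt)):
        incidence = beta * I**p * S / (1.0 + alpha * I**q)
        R = 1.0 - S - I
        S += (-incidence + xi * R) * dt   # immunity wanes at rate xi
        I += (incidence - gamma * I) * dt
    return round(S, 3), round(I, 3)

print(sirs_lhd(S0=0.99, I0=0.01))   # small seed: disease-free equilibrium
print(sirs_lhd(S0=0.70, I0=0.30))   # large seed: settles near an endemic level
```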

  3. On rate-state and Coulomb failure models

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori-law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time, and a return directly to the background rate. A Coulomb model requires that a fault remain locked during loading, that failure occur instantaneously, and that Δt be independent of t0. These characteristics imply an instantaneous, infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified
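
    The rate-state seismicity-rate response to a static stress step has the closed form given by Dieterich (1994); a sketch with illustrative parameter values is:

```python
import numpy as np

def dieterich_rate(t, dtau, a_sigma, tau_dot, r0=1.0):
    """R(t) = r0 / (1 + (exp(-dtau/(a*sigma)) - 1) * exp(-t/ta)),
    with aftershock duration ta = a*sigma / tau_dot (Dieterich, 1994)."""
    ta = a_sigma / tau_dot
    return r0 / (1.0 + (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / ta))

t = np.array([0.0, 0.1, 1.0, 10.0, 100.0])            # years
print(dieterich_rate(t, dtau=0.5, a_sigma=0.1, tau_dot=0.01))
# jump of exp(dtau/(a*sigma)) ~ 148x at t=0, Omori-like decay, back to r0 by ~ta
```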

  4. Equivalence of interest rate models and lattice gases

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan

    2012-04-01

    We consider the class of short rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure r(t)=a(t)exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t1,t2)=-Cov[x(t1),x(t2)]. We consider in some detail the Black-Karasinski model with x(t) as an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x,y)=-α(e-γ|x-y|-e-γ(x+y)). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.
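
    The covariance form quoted in the abstract can be verified numerically for an Ornstein-Uhlenbeck process started at x(0) = 0; a Monte Carlo sketch is:

```python
import numpy as np

def ou_cov_mc(s=0.5, t=1.5, gamma=1.0, sigma=1.0,
              n_paths=20000, dt=1e-3, seed=0):
    """Monte Carlo Cov[x(s), x(t)] for dx = -gamma*x dt + sigma dW, x(0) = 0."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    x_at_s = None
    for i in range(int(t / dt)):
        x += -gamma * x * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)
        if x_at_s is None and (i + 1) * dt >= s:
            x_at_s = x.copy()               # snapshot at time s
    return np.mean(x_at_s * x)

gamma, sigma, s, t = 1.0, 1.0, 0.5, 1.5
exact = sigma**2 / (2 * gamma) * (np.exp(-gamma * (t - s)) - np.exp(-gamma * (t + s)))
print(ou_cov_mc(), exact)   # matches -V(t1, t2) in the abstract up to MC error
```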

  5. Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2

    USGS Publications Warehouse

    Field, Edward H.; Weldon, Ray J., II; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.

    2008-01-01

    This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an un-segmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.

  6. Faculty Ratings as Part of a Competency-Based Evaluation Clinic Grading System.

    ERIC Educational Resources Information Center

    Chambers, David W.

    1999-01-01

    Describes a quarterly rating system developed to replace daily grading in a dental school with a competency-based educational model. Presents results from an early administration of the ratings. These results, for 126 students, show excellent face validity and rater consistency and satisfy the school's standard for grade defensibility. (SLD)

  7. Cosmogenic Ne-21 Production Rates in H-Chondrites Based on Cl-36 - Ar-36 Ages

    NASA Technical Reports Server (NTRS)

    Leya, I.; Graf, Th.; Nishiizumi, K.; Guenther, D.; Wieler, R.

    2000-01-01

    We measured Ne-21 production rates in 14 H-chondrites in good agreement with model calculations. The production rates are based on Ne-21 concentrations measured on bulk samples or the non-magnetic fraction and Cl-36 - Ar-36 ages determined from the metal phase.

  8. A rumor spreading model with variable forgetting rate

    NASA Astrophysics Data System (ADS)

    Zhao, Laijun; Xie, Wanlin; Gao, H. Oliver; Qiu, Xiaoyan; Wang, Xiaoli; Zhang, Shuhai

    2013-12-01

    A rumor spreading model that considers a forgetting rate changing over time is examined in small-world networks. The mean-field equations are derived to describe the dynamics of rumor spreading in small-world networks. Further, numerical solutions are conducted for LiveJournal, an online social blogging platform, to better understand the performance of the model. Results show that the forgetting rate has a significant impact on the final size of rumor spreading: the larger the initial forgetting rate or the faster the forgetting speed, the smaller the final size of the rumor spreading. Numerical solutions also show that the final size of rumor spreading is much larger under a variable forgetting rate than under a constant forgetting rate.
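
    The mean-field structure can be sketched with ignorant/spreader/stifler fractions and a forgetting rate that rises toward a ceiling over time; the functional form and the numbers here are hypothetical, not the paper's:

```python
import numpy as np

def rumor_final_size(delta0=0.05, delta_max=0.3, speed=0.2,
                     lam=0.8, t_end=200.0, dt=1e-2):
    """Ignorant (i), spreader (s), stifler (r) mean-field dynamics with a
    time-varying forgetting rate delta(t) climbing from delta0 to delta_max."""
    i, s, r = 0.99, 0.01, 0.0
    for step in range(int(t_end / dt)):
        delta = delta_max - (delta_max - delta0) * np.exp(-speed * step * dt)
        spread = lam * i * s                    # ignorant-spreader contacts
        stifle = lam * s * (s + r) + delta * s  # contact stifling + forgetting
        i += -spread * dt
        s += (spread - stifle) * dt
        r += stifle * dt
    return 1.0 - i                              # final size of the rumor

print(rumor_final_size(delta0=0.05))   # larger initial forgetting rate ->
print(rumor_final_size(delta0=0.20))   # smaller final size
```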

  9. Rates and potentials of soil organic carbon sequestration in agricultural lands in Japan: an assessment using a process-based model and spatially-explicit land-use change inventories

    NASA Astrophysics Data System (ADS)

    Yagasaki, Y.; Shirato, Y.

    2013-11-01

    to other land-use types by abandonment or urbanization, accompanied by substantial changes in the rate of organic carbon input to soils, could cause a greater or comparable influence on country-scale SCSC compared with changes in the management of agricultural lands. A net-net-based accounting of SCSC showed the potential influence of variations in future climate on SCSC, which highlighted the importance of applying a process-based model for the estimation of this quantity. A baseline-based accounting of SCSC, by contrast, was shown to be robust to variations in future climate and effective at factoring out the direct human-induced influence on SCSC. Validation of the system's function to estimate SCSC in agricultural lands, by comparing simulation output with data from nation-wide stationary monitoring conducted during 1979-1998, suggested that the system has an acceptable level of validity, though only for a limited range of conditions at the current stage. In addition to uncertainties in estimating the rate of organic carbon input to soils in different land-use types at large scale, the time course of SOC sequestration, suppositions on future land-use change patterns, and the feasibility of agricultural policy planning are considered important factors that need to be taken into account in estimating the potential of country-scale SCSC.

  10. Modeling the minimum creep rate of discontinuous lamellar- reinforced composites

    NASA Astrophysics Data System (ADS)

    Bartholomeusz, Michael F.; Wert, John A.

    1995-12-01

    An analytical model has been developed to predict the creep rate of discontinuous lamellar-reinforced composites in which both phases plastically deform. The model incorporates effects associated with lamellar orientation relative to the uniaxial stress axis. For modest to large differences between matrix and reinforcement creep rates, lamellar aspect ratio has a significant impact on composite creep rate. For a prescribed reinforcing phase volume fraction, microstructural inhomogeneity can have a pronounced effect on composite creep properties. In the case of uniaxially aligned rigid lamellar-reinforced composites, an inhomogeneous distribution of reinforcing lamellae in the microstructure substantially increases the composite creep rate. Model results demonstrate that there is no significant improvement in creep resistance for aligned fiber-reinforced composites compared to aligned lamellar-reinforced composites, unless the reinforcing phase is essentially nondeforming relative to the matrix phase.

  11. Strain Rate Dependent Modeling of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Stouffer, Donald C.

    1999-01-01

    A research program is in progress to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. Strain rate dependent inelastic constitutive equations have been developed to model the polymer matrix, and have been incorporated into a micromechanics approach to analyze polymer matrix composites. The Hashin failure criterion has been implemented within the micromechanics results to predict ply failure strengths. The deformation model has been implemented within LS-DYNA, a commercially available transient dynamic finite element code. The deformation response and ply failure stresses for the representative polymer matrix composite AS4/PEEK have been predicted for a variety of fiber orientations and strain rates. The predicted results compare favorably to experimentally obtained values.

  12. Spray Combustion Modeling with VOF and Finite-Rate Chemistry

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Liaw, Paul; Wang, Ten-See

    1996-01-01

    A spray atomization and combustion model is developed based on the volume-of-fluid (VOF) transport equation with a finite-rate chemistry model. The gas-liquid interface mass, momentum and energy conservation laws are modeled by continuum surface force mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed-range flows. The objectives of the present study are: (1) to develop and verify the fractional volume-of-fluid (VOF) cell partitioning approach into a predictor-corrector algorithm to deal with multiphase (gas-liquid) free surface flow problems; (2) to implement the developed unified algorithm in a general purpose computational fluid dynamics (CFD) code, Finite Difference Navier-Stokes (FDNS), with droplet dynamics and finite-rate chemistry models; and (3) to demonstrate the effectiveness of the present approach by simulating benchmark problems of jet breakup/spray atomization and combustion. Modeling multiphase fluid flows poses a significant challenge because a required boundary must be applied to a transient, irregular surface that is discontinuous, and the flow regimes considered can range from incompressible to high-speed compressible flows. The flow-process modeling is further complicated by surface tension, interfacial heat and mass transfer, spray formation and turbulence, and their interactions. The major contribution of the present method is to combine the novel feature of the Volume of Fluid (VOF) method and the Eulerian/Lagrangian method into a unified algorithm for efficient, noniterative, time-accurate calculations of multiphase free surface flows valid at all speeds. The proposed method reformulated the VOF equation to strongly couple two distinct phases (liquid and gas), and tracks droplets on a Lagrangian frame when a spray model is required, using a unified predictor-corrector technique to account for the non-linear linkages through the convective contributions of VOF. The discontinuities within the

  13. An Effective Collision Rate Model for Atomtronic Devices

    NASA Astrophysics Data System (ADS)

    Straatsma, Cameron J. E.; Chow, Weng W.; Anderson, Dana Z.

    2015-05-01

    We demonstrate application of a model, previously developed for the detailed study of quantum electronic systems, to atomtronic devices utilizing finite temperature Bose-condensed gases. The numerical approach is based on the relaxation rate approximation where collisions effectively drive the system towards a dynamical (non-thermal) equilibrium distribution. This approach allows parametric studies involving time scales that cover both the rapid population dynamics relevant to non-equilibrium state evolution, as well as the much longer time durations typical of steady-state device operation. The model is demonstrated by studying the evolution of a Bose-condensed gas in the presence of atom injection and extraction in a double-well potential. In this configuration phase-locking between condensates in each well of the potential is readily observed, and its influence on the evolution of the system is studied. This work is supported by the National Science Foundation, the Air Force Office of Scientific Research, and the Sandia LDRD program.

  14. A frictional population model of seismicity rate change

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.; Cocco, M.; Belardinelli, M.E.

    2005-01-01

    We study models of seismicity rate changes caused by the application of a static stress perturbation to a population of faults and discuss our results with respect to the model proposed by Dieterich (1994). These models assume a distribution of nucleation sites (e.g., faults) obeying rate-state frictional relations that fail at a constant rate under tectonic loading alone, and predict that a positive static stress step at time t0 will cause an immediately increased seismicity rate that decays according to Omori's law. We show one way in which the Dieterich model may be constructed from simple general ideas, illustrated using numerically computed synthetic seismicity and a mathematical formulation. We show that the seismicity rate changes predicted by these models (1) depend on the particular relationship between the clock-advanced failure and fault maturity, (2) are largest for the faults closest to failure at t0, (3) depend strongly on which state evolution law faults obey, and (4) are insensitive to some types of population heterogeneity. We also find that if individual faults fail repeatedly and populations are finite, at timescales much longer than typical aftershock durations, quiescence follows a seismicity rate increase regardless of the specific frictional relations. For the examined models the quiescence duration is comparable to the ratio of stress change to stressing rate, Δτ/τ̇, which occurs after a time comparable to the average recurrence interval of the individual faults in the population and repeats in the absence of any new load perturbations; this simple model may partly explain observations of repeated clustering of earthquakes. Copyright 2005 by the American Geophysical Union.

  15. SENSITIVITY ANALYSIS OF A TPB DEGRADATION RATE MODEL

    SciTech Connect

    Crawford, C; Edwards, T; Wilmarth, B

    2006-08-01

    A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for this model are listed as the headings in the table below. A sensitivity study of the predictions of the model over intervals of values for the influential factors affecting the model was conducted. These intervals bound the levels of these factors expected during Tank 50 aggregations. The results from the sensitivity analysis were used to identify settings for the influential factors that yielded the largest predicted TPB degradation rate. Thus, these factor settings are considered as those that yield the "worst-case" scenario for TPB degradation rate for Tank 50 aggregation, and, as such, they would define the test conditions that should be studied in a waste qualification program whose dual purpose would be the investigation of the introduction of Tank 48 material for aggregation in Tank 50 and the bounding of TPB degradation rates for such aggregations.
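
    A hypothetical sketch of the worst-case screening described above: sweep a grid spanning the interval of each influential factor and keep the settings that maximize the predicted degradation rate. The factor names, intervals and surrogate rate function are placeholders, not the report's model.

    ```python
    import itertools
    import numpy as np

    factors = {"temperature_C": (25.0, 45.0), "pH": (9.0, 14.0), "cu_ppm": (0.0, 5.0)}

    def tpb_rate(temperature_C, pH, cu_ppm):
        """Placeholder degradation-rate surrogate (illustrative only)."""
        return 0.01 * temperature_C + 0.2 * (14.0 - pH) + 0.5 * cu_ppm

    grid = [np.linspace(lo, hi, 5) for lo, hi in factors.values()]
    worst = max(itertools.product(*grid), key=lambda s: tpb_rate(*s))
    print("worst-case settings:", dict(zip(factors, worst)),
          "rate:", round(tpb_rate(*worst), 3))
    ```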

  16. Modeling and predicting drug resistance rate and strength.

    PubMed

    Fullybright, R; Dwivedi, A; Mallawaarachchi, I; Sinsin, B

    2016-08-01

    Drug resistance has been worsening in human infectious diseases medicine over the past several decades. Our ability to successfully control resistance depends to a large extent on our understanding of the features characterizing the process. Part of that understanding includes the rate at which new resistance has been emerging in pathogens. Along that line, resistance data covering 90 infectious diseases, 118 pathogens, and 337 molecules, from 1921 through 2007, are modeled using various statistical tools to generate regression models for the rate of new resistance emergence and for cumulative resistance build-up in pathogens. Thereafter, the strength of the association between the number of molecules put on the market and the number of resulting cases of resistance is statistically tested. Predictive models are presented for the rate at which new resistance has been emerging in infectious diseases medicine, along with predictive models for the rate of cumulative resistance build-up in the aggregate of 118 pathogens as well as in ten individual pathogens. The models are expressed as a function of time and/or as a function of the number of molecules put on the market by the pharmaceutical industry. It is found that molecules significantly induce resistance in pathogens and that new or cumulative drug resistance across infectious diseases medicine has been arising at exponential rates. PMID:27209288
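
    The exponential-rate finding can be illustrated with a small fitting sketch; the year/count pairs below are invented for demonstration and do not reproduce the paper's dataset.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit cumulative resistance cases ~ a * exp(b * (year - 1950)).
    years = np.array([1950, 1960, 1970, 1980, 1990, 2000, 2007], dtype=float)
    cumulative_cases = np.array([8, 20, 55, 140, 360, 900, 1800], dtype=float)

    def exponential(t, a, b, t0=1950.0):
        return a * np.exp(b * (t - t0))

    (a, b), _ = curve_fit(exponential, years, cumulative_cases, p0=(10.0, 0.05))
    print(f"growth rate: {b:.3f}/yr, doubling time ~{np.log(2) / b:.1f} yr")
    ```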

  17. Converting copepod vital rates into units appropriate for biogeochemical models

    NASA Astrophysics Data System (ADS)

    Frangoulis, C.; Carlotti, F.; Eisenhauer, L.; Zervoudaki, S.

    2010-01-01

    The conversion of units is one of the difficulties of model parameterisation. Conversion errors may result not only from incorrect choices of conversion factors, but also from incorrect choices of the value itself. In biogeochemical models, mesozooplankton is the highest trophic level of the food web, and it is very often reduced to a single variable, generally considered a representation of the copepod community, the dominant taxon in mesozooplankton. While this simplifies the information to be obtained for the stock, correct parameterisation of the processes related to the copepod community remains a tricky task due to the wide range of copepod species, sizes, stages and behaviours. The goal of this paper is to improve the communication between experimentalists and modellers by giving indications for the conversion of copepod vital rates from experimental to biogeochemical model units. This includes the choice of values, conversion factors, terminology distinction and the scale transfer. To begin with, we briefly address the common problem of the conversion of a rate per individual to a rate per mass. Then, we focus on unit conversion problems for each specific rate and give recommendations. Finally, we discuss the problem of scale transfer between the level of organisation at which the rate value is measured, at characteristic time and space scales, versus the level of representation of the corresponding process in the model, with its different characteristic time and space scales.
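
    The per-individual to per-mass conversion mentioned above reduces to a one-line rule, provided the body-mass measure matches the currency of the model (dry weight, carbon or nitrogen); the numbers below are assumed.

    ```python
    # Convert a rate per individual to a mass-specific rate.
    def per_individual_to_per_mass(rate_per_ind, body_mass):
        """E.g. ingestion in ugC/ind/day divided by body carbon in ugC/ind
        gives a specific rate in ugC/ugC/day (i.e. per day)."""
        return rate_per_ind / body_mass

    ingestion = 2.4       # ugC per individual per day (assumed)
    copepod_mass = 8.0    # ugC per individual (assumed)
    print(per_individual_to_per_mass(ingestion, copepod_mass), "per day")
    ```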

  18. Evaluation and refinement of leak-rate estimation models

    SciTech Connect

    Paul, D.D.; Ahmad, J.; Scott, P.M.; Flanigan, L.F.; Wilkowski, G.M. )

    1991-04-01

    Leak-rate estimation models are important elements in developing a leak-before-break methodology in piping integrity and safety analyses. Existing thermal-hydraulic and crack-opening-area models used in current leak-rate estimations have been incorporated into a single computer code for leak-rate estimation. The code is called SQUIRT, which stands for Seepage Quantification of Upsets In Reactor Tubes. The SQUIRT program has been validated by comparing its thermal-hydraulic predictions with the limited experimental data that have been published on two-phase flow through slits and cracks, and by comparing its crack-opening-area predictions with data from the Degraded Piping Program. In addition, leak-rate experiments were conducted to obtain validation data for a circumferential fatigue crack in a carbon steel pipe girth weld. 56 refs., 30 figs., 4 tabs.

  19. 75 FR 20796 - Market-Based Rate Affiliate Restrictions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-21

    ...The Federal Energy Regulatory Commission (Commission) proposes to amend its regulations to revise its regulations governing market- based rates for public utilities pursuant to section 205 of the Federal Power Act (FPA). The Commission proposes to clarify that employees that determine the timing of scheduled outages, or that engage in economic dispatch, fuel procurement, or resource planning......

  20. Increasing Response Rates to Web-Based Surveys

    ERIC Educational Resources Information Center

    Monroe, Martha C.; Adams, Damian C.

    2012-01-01

    We review a popular method for collecting data--Web-based surveys. Although Web surveys are popular, one major concern is their typically low response rates. Using the Dillman et al. (2009) approach, we designed, pre-tested, and implemented a survey on climate change with Extension professionals in the Southeast. The Dillman approach worked well,…

  1. 9 CFR 391.2 - Base time rate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Base time rate. 391.2 Section 391.2 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE FOOD SAFETY AND INSPECTION SERVICE ADMINISTRATIVE PROVISIONS FEES AND CHARGES FOR INSPECTION SERVICES AND...

  2. 9 CFR 391.2 - Base time rate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Base time rate. 391.2 Section 391.2 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE FOOD SAFETY AND INSPECTION SERVICE ADMINISTRATIVE PROVISIONS FEES AND CHARGES FOR INSPECTION SERVICES AND...

  3. 76 FR 4569 - Market-Based Rate Affiliate Restrictions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-26

    ...The Federal Energy Regulatory Commission (Commission) withdraws a notice of proposed rulemaking, which proposed to amend its regulations governing market-based rates for public utilities pursuant to section 205 of the Federal Power Act (FPA) to include in the regulatory text the clarification that employees that determine the timing of scheduled outages or that engage in economic dispatch,......

  4. A model for reaction rates in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Chinitz, W.; Evans, J. S.

    1984-01-01

    To account for the turbulent temperature and species-concentration fluctuations, a model of their effects on chemical reaction rates is presented for computer analyses of turbulent reacting flows. The model results in two parameters that multiply the terms in the reaction-rate equations. For these two parameters, graphs are presented as functions of the mean values and intensity of the turbulent fluctuations of the temperature and species concentrations. These graphs will facilitate incorporation of the model into existing computer programs that describe turbulent reacting flows. When the model was used in a two-dimensional parabolic-flow computer code to predict the behavior of an experimental, supersonic hydrogen jet burning in air, some improvement in agreement with the experimental data was obtained in the far field in the region near the jet centerline. Recommendations are included for further improvement of the model and for additional comparisons with experimental data.
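
    The origin of such multiplicative parameters can be illustrated by averaging an Arrhenius rate over Gaussian temperature fluctuations and comparing with the rate evaluated at the mean temperature; all values below are assumptions, not the paper's tabulated results.

    ```python
    import numpy as np

    # Amplification of the mean reaction rate by temperature fluctuations:
    # <k(T)> / k(T_mean) for k(T) = A * exp(-Ta / T).
    A, Ta = 1.0, 15000.0              # normalized prefactor, activation temperature [K]
    T_mean, T_rms = 1200.0, 120.0     # mean and rms fluctuation of temperature [K]

    rng = np.random.default_rng(0)
    T = rng.normal(T_mean, T_rms, 200_000)
    T = T[T > 0.0]                    # guard against unphysical samples
    amplification = np.mean(A * np.exp(-Ta / T)) / (A * np.exp(-Ta / T_mean))
    print("mean-rate amplification factor:", round(amplification, 2))
    ```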

  5. Removal rate model for magnetorheological finishing of glass.

    PubMed

    Degroote, Jessica E; Marino, Anne E; Wilson, John P; Bishop, Amy L; Lambropoulos, John C; Jacobs, Stephen D

    2007-11-10

    Magnetorheological finishing (MRF) is a deterministic subaperture polishing process. The process uses a magnetorheological (MR) fluid that consists of micrometer-sized, spherical, magnetic carbonyl iron (CI) particles, nonmagnetic polishing abrasives, water, and stabilizers. Material removal occurs when the CI and nonmagnetic polishing abrasives shear material off the surface being polished. We introduce a new MRF material removal rate model for glass. This model contains terms for the near surface mechanical properties of glass, drag force, polishing abrasive size and concentration, chemical durability of the glass, MR fluid pH, and the glass composition. We introduce quantitative chemical predictors for the first time, to the best of our knowledge, into an MRF removal rate model. We validate individual terms in our model separately and then combine all of the terms to show the whole MRF material removal model compared with experimental data. All of our experimental data were obtained using nanodiamond MR fluids and a set of six optical glasses. PMID:17994145

  6. Removal Rate Model for Magnetorheological Finishing of Glass

    SciTech Connect

    DeGroote, J.E.; Marino, A.E.; Wilson, J.P.; Bishop, A.L.; Lambropoulos, J.C.; Jacobs, S.D.

    2007-11-14

    Magnetorheological finishing (MRF) is a deterministic subaperture polishing process. The process uses a magnetorheological (MR) fluid that consists of micrometer-sized, spherical, magnetic carbonyl iron (CI) particles, nonmagnetic polishing abrasives, water, and stabilizers. Material removal occurs when the CI and nonmagnetic polishing abrasives shear material off the surface being polished. We introduce a new MRF material removal rate model for glass. This model contains terms for the near surface mechanical properties of glass, drag force, polishing abrasive size and concentration, chemical durability of the glass, MR fluid pH, and the glass composition. We introduce quantitative chemical predictors for the first time, to the best of our knowledge, into an MRF removal rate model. We validate individual terms in our model separately and then combine all of the terms to show the whole MRF material removal model compared with experimental data. All of our experimental data were obtained using nanodiamond MR fluids and a set of six optical glasses.

  7. Modeling the intracellular pathogen-immune interaction with cure rate

    NASA Astrophysics Data System (ADS)

    Dubey, Balram; Dubey, Preeti; Dubey, Uma S.

    2016-09-01

    Many common and emergent infectious diseases like Influenza, SARS, Hepatitis, Ebola etc. are caused by viral pathogens. These infections can be controlled or prevented by understanding the dynamics of pathogen-immune interaction in vivo. In this paper, the interaction of pathogens with uninfected and infected cells, in the presence or absence of an immune response, is considered in four different cases. In the first case, the model considers a saturated nonlinear infection rate and a linear cure rate, without absorption of pathogens into uninfected cells and without immune response. The next model considers the effect of absorption of pathogens into uninfected cells, while all other terms are the same as in the first case. The third model incorporates innate immune response, humoral immune response and Cytotoxic T lymphocytes (CTL) mediated immune response with cure rate and without absorption of pathogens into uninfected cells. The last model is an extension of the third model in which the effect of absorption of pathogens into uninfected cells has been considered. Positivity and boundedness of solutions are established to ensure the well-posedness of the problem. It has been found that all four models have two equilibria, namely, a pathogen-free equilibrium point and a pathogen-present equilibrium point. In each case, the stability of each equilibrium point is investigated. The pathogen-free equilibrium is globally asymptotically stable when the basic reproduction number is less than or equal to unity. This implies that control or prevention of infection is independent of the initial concentration of uninfected cells, infected cells, pathogens and immune responses in the body. The proposed models show that the introduction of immune response and cure rate strongly affects the stability behavior of the system. Further, on computing the basic reproduction number, it is found to be minimal for the fourth model vis-a-vis the other models. The analytical findings of each model have been exemplified by
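
    An illustrative sketch, not the paper's exact system: a three-compartment model with a saturated nonlinear infection rate and a linear cure rate, without absorption and without explicit immune compartments. All parameter values are assumptions.

    ```python
    from scipy.integrate import solve_ivp

    s, d = 10.0, 0.1           # supply and death rate of uninfected cells
    beta, alpha = 0.01, 0.05   # infection rate and saturation constant
    delta, gamma = 0.5, 0.2    # death and cure rates of infected cells
    k, c = 20.0, 3.0           # pathogen production and clearance rates

    def rhs(t, u):
        x, y, v = u            # uninfected cells, infected cells, pathogens
        infection = beta * x * v / (1.0 + alpha * v)   # saturated infection rate
        return [s - d * x - infection + gamma * y,     # cure returns cells to x
                infection - (delta + gamma) * y,
                k * y - c * v]

    sol = solve_ivp(rhs, (0.0, 200.0), [100.0, 0.0, 1.0])
    print("final state (x, y, v):", sol.y[:, -1])
    ```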

  8. Constraining kinetic rates of mineral reactions using reactive transport models

    NASA Astrophysics Data System (ADS)

    Bolton, E. W.; Wang, Z.; Ague, J.; Bercovici, D.; Cai, Z.; Karato, S.; Oristaglio, M. L.; Qiu, L.

    2012-12-01

    We use a reactive transport model to better understand results of experiments to obtain kinetic rates of mineral reactions in closed systems. Closed system experiments pose special challenges in that secondary minerals may form that modify the fluid composition evolution and may grow on the dissolving minerals, thus armoring the surface. Even so, such closed system experiments provide critical data on which minerals would actually form in field applications and on how coupled dissolution and precipitation mineral reactions are strongly linked. Comparison with experimental observations can test the reactive transport model, and the experimental observations can be better understood by comparing the results to the modeling. We apply a 0D end member of the model to understand the dissolution of single crystals of forsterite in a variety of settings (low pH, high pH, or NaHCO3 initial fluids, at 100 °C and 1 bar, or 200 °C and 150 bar). Depending on the initial conditions, we observe the precipitation of talc, brucite, amorphous silica, chrysotile, or magnesite, in various combinations. We compare simulation results to fluid compositions and the presence of secondary minerals experimentally sampled at various times. Insight from the simulations helped create an inverse model to extract the rates of forsterite dissolution and to create a simple forward model useful for exploring the influence of system size, secondary mineral surface areas, etc. Our reactive transport model allows secondary minerals to armor the forsterite surface, which can strongly decrease the dissolution rate as the system evolves. Tuning our model with experimentally derived rates, and ensuring that the relevant processes are included so as to reproduce experimental observations, is necessary before upscaling to heterogeneous field conditions. The reactive transport model will be used for field-scale sequestration simulations and coupled with a geomechanical model that includes the influence of deformation.

  9. A geodetic plate motion and Global Strain Rate Model

    NASA Astrophysics Data System (ADS)

    Kreemer, Corné; Blewitt, Geoffrey; Klein, Elliot C.

    2014-10-01

    We present a new global model of plate motions and strain rates in plate boundary zones constrained by horizontal geodetic velocities. This Global Strain Rate Model (GSRM v.2.1) is a vast improvement over its predecessor, both in terms of the amount of data input and in an increase in spatial model resolution by a factor of ~2.5 in areas with dense data coverage. We determined 6739 velocities from time series of (mostly) continuous GPS measurements; i.e., by far the largest global velocity solution to date. We transformed 15,772 velocities from 233 (mostly) published studies onto our core solution to obtain 22,511 velocities in the same reference frame. Care is taken not to use velocities from stations (or time periods) that are affected by transient phenomena; i.e., this data set consists of velocities best representing the interseismic plate velocity. About 14% of the Earth is allowed to deform in 145,086 deforming grid cells (0.25° longitude by 0.2° latitude in dimension). The remainder of the Earth's surface is modeled as rigid spherical caps representing 50 tectonic plates. For 36 plates we present new GPS-derived angular velocities. For all the plates that can be compared with the most recent geologic plate motion model, we find that the difference in angular velocity is significant. The rigid-body rotations are used as boundary conditions in the strain rate calculations. The strain rate field is modeled using the Haines and Holt method, which uses splines to obtain a self-consistent interpolated velocity gradient tensor field, from which strain rates, vorticity rates, and expected velocities are derived. We also present expected faulting orientations in areas with significant vorticity, and update the no-net-rotation reference frame associated with our global velocity gradient field. Finally, we present a global map of recurrence times for Mw=7.5 characteristic earthquakes.

  10. ESTIMATION OF THE RATE OF VOC EMISSIONS FROM SOLVENT-BASED INDOOR COATING MATERIALS BASED ON PRODUCT FORMULATION

    EPA Science Inventory

    Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...

  11. The Technical Adequacy of Curriculum-Based and Rating-Based Measures of Written Expression for Elementary School Students

    ERIC Educational Resources Information Center

    Gansle, Kristin A.; VanDerHeyden, Amanda M.; Noell, George H.; Resetar, Jennifer L.; Williams, Kashunda L.

    2006-01-01

    Five hundred thirty-eight elementary school students participated in a study designed to examine the technical characteristics of curriculum-based measures (CBMs) for the assessment of writing. In addition, the study investigated rating-based measures of writing using the Six Trait model, an assessment instrument and writing program in use in many…

  12. A comparison of analytic models for estimating dose equivalent rates in shielding with beam spill measurements

    SciTech Connect

    Frankle, S.C.; Fitzgerald, D.H.; Hutson, R.L.; Macek, R.J.; Wilkinson, C.A.

    1992-12-31

    A comparison of 800-MeV proton beam spill measurements at the Los Alamos Meson Physics Facility (LAMPF) with analytical model calculations of neutron dose equivalent rates (DER) shows agreement within factors of 2-3 for simple shielding geometries. The DER estimates were based on a modified Moyer model for transverse angles and a Monte Carlo based forward-angle model described in the preceding paper.

  13. Exploring Latent Class Based on Growth Rates in Number Sense Ability

    ERIC Educational Resources Information Center

    Kim, Dongil; Shin, Jaehyun; Lee, Kijyung

    2013-01-01

    The purpose of this study was to explore latent class based on growth rates in number sense ability by using latent growth class modeling (LGCM). LGCM is one of the noteworthy methods for identifying growth patterns of the progress monitoring within the response to intervention framework in that it enables us to analyze latent sub-groups based not…

  14. Finite driving rate and anisotropy effects in landslide modeling

    SciTech Connect

    Piegari, E.; Cataudella, V.; Di Maio, R.; Milano, L.; Nicodemi, M.

    2006-02-15

    In order to characterize landslide frequency-size distributions and identify hazard scenarios and their possible precursors, we investigate a cellular automaton in which the effects of a finite driving rate and of anisotropy are taken into account. The model is able to reproduce observed features of landslide events, such as power-law distributions, as experimentally reported. We analyze the key role of the driving rate and show that, as it is increased, a crossover from power-law to non-power-law behaviors occurs. Finally, a systematic investigation of the model on varying its anisotropy factors is performed, and the full diagram of its dynamical behaviors is presented.

  15. Modeling heart rate variability including the effect of sleep stages

    NASA Astrophysics Data System (ADS)

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling the human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that—in comparison with real data—the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow modeling of heart rate variability in sleep disorders. This possibility is briefly discussed.

  16. Modeling heart rate variability including the effect of sleep stages.

    PubMed

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling the human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that, in comparison with real data, the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow modeling of heart rate variability in sleep disorders. This possibility is briefly discussed. PMID:26931582

  17. Micromechanics-Based Permeability Evolution in Brittle Materials at High Strain Rates

    NASA Astrophysics Data System (ADS)

    Perol, Thibaut; Bhat, Harsha S.

    2016-08-01

    We develop a micromechanics-based permeability evolution model for brittle materials at high strain rates (≥ 100 s^{-1}). Extending for undrained deformation the mechanical constitutive description of brittle solids, whose constitutive response is governed by micro-cracks, we now relate the damage-induced strains to micro-crack aperture. We then use an existing permeability model to evaluate the permeability evolution. This model predicts both the percolative and connected regime of permeability evolution of Westerly Granite during triaxial loading at high strain rate. This model can simulate pore pressure history during earthquake coseismic dynamic ruptures under undrained conditions.

  18. A constitutive model for ballistic gelatin at surgical strain rates.

    PubMed

    Ravikumar, Nishant; Noble, Christopher; Cramphorn, Edward; Taylor, Zeike A

    2015-07-01

    This paper describes a constitutive model for ballistic gelatin at the low strain rates experienced, for example, by soft tissues during surgery. While this material is most commonly associated with high speed projectile penetration and impact investigations, it has also been used extensively as a soft tissue simulant in validation studies for surgical technologies (e.g. surgical simulation and guidance systems), for which loading speeds and the corresponding mechanical response of the material are quite different. We conducted mechanical compression experiments on gelatin specimens at strain rates spanning two orders of magnitude (~0.001-0.1 s^{-1}) and observed a nonlinear load-displacement history and strong strain rate-dependence. A compact and efficient visco-hyperelastic constitutive model was then formulated and found to fit the experimental data well. An Ogden type strain energy density function was employed for the elastic component. A single Prony exponential term was found to be adequate to capture the observed rate-dependence of the response over multiple strain rates. The model lends itself to immediate use within many commercial finite element packages. PMID:25863009

  19. Evaluating the Controls on Magma Ascent Rates Through Numerical Modelling

    NASA Astrophysics Data System (ADS)

    Thomas, M. E.; Neuberg, J. W.

    2015-12-01

    The estimation of the magma ascent rate is a key factor in predicting styles of volcanic activity and relies on understanding how strongly the ascent rate is controlled by different magmatic parameters. The ability to link potential changes in such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters, such as the magmatic water content, temperature or bulk magma composition, on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. The results indicate that potential changes to conduit geometry and excess pressure in the magma chamber are amongst the dominant controlling variables that affect the ascent rate, but the single most important parameter is the volatile content (assumed in this case to be only water). Modelling this parameter across a range of reported values causes changes in the calculated ascent velocities of up to 800%, triggering fluctuations in ascent rates that span the potential threshold between effusive and explosive eruptions.

  20. High rate constitutive modeling of aluminium alloy tube

    NASA Astrophysics Data System (ADS)

    Salisbury, C. P.; Worswick, M. J.; Mayer, R.

    2006-08-01

    As the need for fuel efficient automobiles increases, car designers are investigating light-weight materials for automotive bodies that will reduce the overall automobile weight. Aluminium alloy tube is a desirable material to use in automotive bodies due to its light weight. However, aluminium suffers from lower formability than steel and its energy absorption ability in a crash event after a forming operation is largely unknown. As part of a larger study on the relationship between crashworthiness and forming processes, constitutive models for 3 mm AA5754 aluminium tube were developed. A nominal strain rate of 100/s is often used to characterize overall automobile crash events, whereas strain rates on the order of 1000/s can occur locally. Therefore, tests were performed at quasi-static rates using an Instron test fixture and at strain rates of 500/s to 1500/s using a tensile split Hopkinson bar. High rate testing was then conducted at rates of 500/s, 1000/s and 1500/s at 21 °C, 150 °C and 300 °C. The generated data was then used to determine the constitutive parameters for the Johnson-Cook and Zerilli-Armstrong material models.
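
    For reference, the Johnson-Cook flow stress named above has the standard closed form below; the constants are placeholders, not the fitted AA5754 values.

    ```python
    import numpy as np

    # sigma = (A + B*eps**n) * (1 + C*ln(epsdot/epsdot0)) * (1 - Tstar**m),
    # with homologous temperature Tstar = (T - Tref) / (Tmelt - Tref).
    def johnson_cook(eps, epsdot, T, A=220e6, B=300e6, n=0.3, C=0.015, m=1.0,
                     epsdot0=1.0, Tref=294.0, Tmelt=900.0):
        Tstar = (T - Tref) / (Tmelt - Tref)
        return (A + B * eps ** n) * (1.0 + C * np.log(epsdot / epsdot0)) * (1.0 - Tstar ** m)

    # Flow stress at 10% strain, 1000/s, 150 C (423 K), in MPa:
    print(johnson_cook(0.10, 1.0e3, 423.0) / 1e6, "MPa")
    ```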

  1. A microphysical model explains rate-and-state friction

    NASA Astrophysics Data System (ADS)

    Chen, Jianye; Spiers, Christopher J.

    2015-04-01

    The rate-and-state friction (RSF) laws were originally developed as a phenomenological description of the frictional behavior observed in lab experiments. In previous studies, the empirical RSF laws have been extensively and quite successfully applied to fault mechanisms. However, these laws cannot readily be envisioned in terms of the underlying physics. There are several critical discrepancies between seismological constraints on RSF behavior associated with earthquakes and lab-derived RSF parameters, in particular regarding the static stress drop and characteristic slip distance associated with seismic events. Moreover, lab friction studies can address only limited fault topographies, displacements, experimental durations and P-T conditions, which means that scale issues, and especially processes like dilatation and fluid-rock interaction, cannot be fully taken into account. Without a physical basis accounting for such effects, extrapolation of lab-derived RSF data to nature involves significant, often unknown uncertainties. In order to more reliably apply experimental results to natural fault zones, and notably to extrapolate lab data beyond laboratory pressure, temperature and velocity conditions, an understanding of the microphysical mechanisms governing fault frictional behavior is required. Here, following some pioneering efforts (e.g. Niemeijer and Spiers, 2007; Den Hartog and Spiers, 2014), a mechanism-based microphysical model is developed for describing the frictional behavior of carbonate fault gouge, assuming that the frictional behavior seen in lab experiments is controlled by competing processes of intergranular slip versus contact creep by pressure solution. The model basically consists of two governing equations derived from energy/entropy balance considerations and the kinematic relations that apply to a granular fault gouge undergoing shear and dilation/compaction. These two equations can be written as τ̇/K = V_imp − L_t[λγ̇_ps^sb + (1 −
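
    For comparison with such microphysical models, the standard empirical RSF response to a velocity step can be integrated directly using the aging law; the parameter values below are typical laboratory numbers chosen for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc); aging law dtheta/dt = 1 - V*theta/Dc.
    a, b, Dc, mu0, V0 = 0.010, 0.015, 1.0e-5, 0.6, 1.0e-6   # Dc [m], V0 [m/s]

    def mu_ss(V):
        """Steady-state friction at slip rate V (theta_ss = Dc/V)."""
        return mu0 + (a - b) * np.log(V / V0)

    def aging_law(t, theta, V):
        return 1.0 - V * theta[0] / Dc

    V1 = 10.0 * V0                      # imposed velocity step V0 -> 10*V0
    sol = solve_ivp(aging_law, (0.0, 60.0), [Dc / V0], args=(V1,))
    mu = mu0 + a * np.log(V1 / V0) + b * np.log(V0 * sol.y[0] / Dc)
    print("direct effect:", mu[0], "-> evolves to:", mu[-1], "ss:", mu_ss(V1))
    ```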

  2. Error Growth Rate in the MM5 Model

    NASA Astrophysics Data System (ADS)

    Ivanov, S.; Palamarchuk, J.

    2006-12-01

    The goal of this work is to estimate model error growth rates in simulations of the atmospheric circulation by the MM5 model, all the way from the short range to the medium range and beyond. The major topics addressed are: (i) searching for the optimal set of parameterization schemes; (ii) evaluating the spatial structure and scales of the model error for various atmospheric fields; (iii) determining geographical regions where model errors are largest; (iv) defining particular atmospheric patterns contributing to fast and significant model error growth. Results are presented for geopotential, temperature, relative humidity and horizontal wind component fields on standard surfaces over the Atlantic-European region during winter 2002. Various combinations of parameterization schemes for cumulus, PBL, moisture and radiation are used to identify which one yields the smallest difference between the model state and the analysis. The comparison of the model fields is carried out against the ERA-40 reanalysis of the ECMWF. Results show that the rate at which the model error grows, as well as its magnitude, varies depending on the forecast range, atmospheric variable and level. The typical spatial scale and structure of the model error also depend on the particular atmospheric variable. The distribution of the model error over the domain can be separated into two parts: steady and transient. The steady part is associated with a few high mountain regions, including Greenland, where the model error is larger. The transient model error mainly moves along with areas of high gradients in the atmospheric flow. Acknowledgement: This study has been supported by NATO Science for Peace grant #981044. The MM5 modelling system used in this study has been provided by UCAR. ERA-40 re-analysis data have been obtained from the ECMWF data server.

  3. Stochastic heart-rate model can reveal pathologic cardiac dynamics

    NASA Astrophysics Data System (ADS)

    Kuusela, Tom

    2004-03-01

    A simple one-dimensional Langevin-type stochastic difference equation can simulate the heart-rate fluctuations in a time scale from minutes to hours. The model consists of a deterministic nonlinear part and a stochastic part typical of Gaussian noise, and both parts can be directly determined from measured heart-rate data. Data from healthy subjects typically exhibit the deterministic part with two or more stable fixed points. Studies of 15 congestive heart-failure subjects reveal that the deterministic part of pathologic heart dynamics has no clear stable fixed points. Direct simulations of the stochastic model for normal and pathologic cases can produce statistical parameters similar to those of real subjects. Results directly indicate that pathologic situations simplify the heart-rate control system.
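
    A minimal sketch of such a one-dimensional Langevin-type difference equation, assuming a cubic-like drift with two stable fixed points (consistent with the description of healthy subjects above) and Gaussian noise; the drift shape and noise level are assumptions.

    ```python
    import numpy as np

    # x_{n+1} = x_n + f(x_n)*dt + sigma*sqrt(dt)*xi_n, with xi_n ~ N(0, 1).
    rng = np.random.default_rng(0)
    dt, sigma, steps = 1.0, 0.8, 3600        # 1-s steps, one hour of "heart rate"

    def drift(x):
        # Stable fixed points near 60 and 80 bpm, unstable point at 70 bpm.
        return -1.0e-3 * (x - 60.0) * (x - 70.0) * (x - 80.0)

    x = np.empty(steps)
    x[0] = 65.0
    for n in range(steps - 1):
        x[n + 1] = x[n] + drift(x[n]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    print("mean HR:", x.mean(), "std:", x.std())
    ```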

  4. A global reference for caesarean section rates (C-Model): a multicountry cross-sectional study

    PubMed Central

    Souza, JP; Betran, AP; Dumont, A; de Mucio, B; Gibbs Pickens, CM; Deneux-Tharaux, C; Ortiz-Panozo, E; Sullivan, E; Ota, E; Togoobaatar, G; Carroli, G; Knight, H; Zhang, J; Cecatti, JG; Vogel, JP; Jayaratne, K; Leal, MC; Gissler, M; Morisaki, N; Lack, N; Oladapo, OT; Tunçalp, Ö; Lumbiganon, P; Mori, R; Quintana, S; Costa Passos, AD; Marcolin, AC; Zongo, A; Blondel, B; Hernández, B; Hogue, CJ; Prunet, C; Landman, C; Ochir, C; Cuesta, C; Pileggi-Castro, C; Walker, D; Alves, D; Abalos, E; Moises, ECD; Vieira, EM; Duarte, G; Perdona, G; Gurol-Urganci, I; Takahiko, K; Moscovici, L; Campodonico, L; Oliveira-Ciabati, L; Laopaiboon, M; Danansuriya, M; Nakamura-Pereira, M; Costa, ML; Torloni, MR; Kramer, MR; Borges, P; Olkhanud, PB; Pérez-Cuevas, R; Agampodi, SB; Mittal, S; Serruya, S; Bataglia, V; Li, Z; Temmerman, M; Gülmezoglu, AM

    2016-01-01

    Objective To generate a global reference for caesarean section (CS) rates at health facilities. Design Cross-sectional study. Setting Health facilities from 43 countries. Population/Sample Thirty eight thousand three hundred and twenty-four women giving birth from 22 countries for model building and 10 045 875 women giving birth from 43 countries for model testing. Methods We hypothesised that mathematical models could determine the relationship between clinical-obstetric characteristics and CS. These models generated probabilities of CS that could be compared with the observed CS rates. We devised a three-step approach to generate the global benchmark of CS rates at health facilities: creation of a multi-country reference population, building mathematical models, and testing these models. Main outcome measures Area under the ROC curves, diagnostic odds ratio, expected CS rate, observed CS rate. Results According to the different versions of the model, areas under the ROC curves suggested a good discriminatory capacity of C-Model, with summary estimates ranging from 0.832 to 0.844. The C-Model was able to generate expected CS rates adjusted for the case-mix of the obstetric population. We have also prepared an e-calculator to facilitate use of C-Model (www.who.int/reproductivehealth/publications/maternal_perinatal_health/c-model/en/). Conclusions This article describes the development of a global reference for CS rates. Based on maternal characteristics, this tool was able to generate an individualised expected CS rate for health facilities or groups of health facilities. With C-Model, obstetric teams, health system managers, health facilities, health insurance companies, and governments can produce a customised reference CS rate for assessing use (and overuse) of CS. PMID:26259689

  5. Stochastic Models of Tropical Rain-Rate Fields

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.

    2003-01-01

    Because of the extreme variability of rain rate in space and time and the difficulties with remote sensing methods of measuring rain rates, accurate determination of rainfall over large areas and time periods has long been a problem for hydrologists, meteorologists, and climatologists. A number of statistical models of rain have been developed in order to investigate the impact of rain variability on satellite remote sensing methods, validation of satellite rain products, and generation of rain maps with accompanying error estimates. These models may be useful in examining 'sub-grid scale' issues in representing precipitation in numerical models. A stochastic model will first be described which can generate time-dependent high-resolution spatial rain fields with space and time correlations similar to those seen in rain data, as well as representing the presence of areas with zero rain rate and log-normally distributed rain rates where there is rain. A simpler model derived from this, formulated in the spectral domain, seems to imply fractal-like rain statistics at small scales when fit to rain data.
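
    The "zero rain plus lognormal where raining" mixture mentioned above can be sketched per pixel, leaving out the space-time correlation machinery of the full model; parameter values are illustrative.

    ```python
    import numpy as np

    # Each pixel rains with probability p_rain; raining pixels draw lognormal rates.
    rng = np.random.default_rng(1)
    p_rain, mu, sigma = 0.1, 0.0, 1.0        # rain fraction; log-mean and log-std

    field = np.where(rng.random((128, 128)) < p_rain,
                     rng.lognormal(mu, sigma, (128, 128)), 0.0)   # rain rate [mm/h]
    print("area-average rain rate:", field.mean(), "mm/h")
    ```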

  6. Mixed Rasch Modeling of the Self-Rating Depression Scale

    ERIC Educational Resources Information Center

    Hong, Sehee; Min, Sae-Young

    2007-01-01

    In this study, mixed Rasch modeling was used on the Self-Rating Depression Scale (SDS), a widely used measure of depression, among a non-Western sample of 618 Korean college students. The results revealed three latent classes and confirmed the unidimensionality of the SDS. In addition, there was a significant effect for gender in terms of class…

  7. Markov Models and the Ensemble Kalman Filter for Estimation of Sorption Rates

    NASA Astrophysics Data System (ADS)

    Vugrin, E. D.; McKenna, S. A.; White Vugrin, K.

    2007-12-01

    Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve the accuracy of the rate estimation by as much as an order of magnitude. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. This work was supported under the Sandia Laboratory Directed Research and Development program.
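
    A minimal stochastic EnKF analysis step of the kind used to assimilate concentration data into the rate estimates; the state size, the linear observation operator, and the error levels below are placeholders for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_ens, n_state, n_obs = 50, 4, 2
    X = rng.normal(0.0, 1.0, (n_state, n_ens))      # forecast ensemble (e.g. log rates)
    H = rng.normal(0.0, 1.0, (n_obs, n_state))      # observation operator (assumed linear)
    R = 0.1 * np.eye(n_obs)                         # observation error covariance
    y = np.array([0.5, -0.3])                       # observed data

    A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
    Pf = A @ A.T / (n_ens - 1)                      # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    Xa = X + K @ (Y - H @ X)                        # analysis ensemble
    print("analysis mean:", Xa.mean(axis=1))
    ```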

  8. Estimating digital information throughput rates for radiology networks. A model.

    PubMed

    Cox, G G; Templeton, A W; Anderson, W H; Cook, L T; Hensley, K S; Dwyer, S J

    1986-02-01

    The design and implementation of a digital radiology image management system requires the definition, evaluation, and comparison of appropriate measures of system performance. The mean throughput rate is an important measure of the actual performance of a finished system. The mean throughput rate quantifies the transmission of digital information in either bits/second or tasks/second. It depends on software, database management, equipment interface designs, the number of users and display stations, and the communications media. The mean throughput rate can reveal resource allocation bottlenecks within a given system. A model for estimating the mean throughput rate and its application in helping us design our radiology digital image networks is described. PMID:3957590

  9. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2014-05-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. The hazardous consequences extend to national and continental scales. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents - both for research purposes and, more importantly, to determine the immediate threat to the population. However, assessments of regional radionuclide activity concentrations and of individual radiation dose exposure are subject to several uncertainties, for example the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the nuclear accident. The quantification of the source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to realise a feasible estimation of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are therefore combined with atmospheric transport models. The release rates of radionuclides at the accident site are then obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method, and hence of the resulting source term, depends among other factors on the availability, reliability and the resolution in time and space of the observations. Radionuclide activity concentrations are observed on a
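
    The inverse step can be sketched as regularized least squares: given a source-receptor matrix from the transport model and observed concentrations, solve for nonnegative release rates. The matrix, observations and weights below are synthetic placeholders.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    n_obs, n_release = 40, 8                        # observations, release periods
    M = rng.random((n_obs, n_release))              # source-receptor matrix (from model)
    x_true = np.array([0., 0., 5., 9., 3., 1., 0., 0.])
    y = M @ x_true + rng.normal(0.0, 0.1, n_obs)    # synthetic observations

    lam = 0.5                                       # Tikhonov regularization weight
    M_aug = np.vstack([M, lam * np.eye(n_release)])
    y_aug = np.concatenate([y, np.zeros(n_release)])  # first guess: zero release
    x_est, _ = nnls(M_aug, y_aug)                   # nonnegative least squares
    print("estimated release rates:", np.round(x_est, 2))
    ```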

  10. A master curve-mechanism based approach to modeling the effects of constraint, loading rate and irradiation on the toughness-temperature behavior of a V-4Cr-4Ti alloy

    SciTech Connect

    Odette, G.R.; Donahue, E.; Lucas, G.E.; Sheckherd, J.W.

    1996-10-01

    The influence of loading rate and constraint on the effective fracture toughness as a function of temperature [K_e(T)] of the fusion program heat of V-4Cr-4Ti was measured using subsized, three point bend specimens. The constitutive behavior was characterized as a function of temperature and strain rate using small tensile specimens. Data in the literature on this alloy were also analysed to determine the effect of irradiation on K_e(T) and the energy-temperature (E-T) curves measured in subsized Charpy V-notch tests. It was found that V-4Cr-4Ti undergoes "normal" stress-controlled cleavage fracture below a temperature marking a sharp ductile-to-brittle transition. The transition temperature is increased by higher loading rates, irradiation hardening and triaxial constraint. Shifts in a reference transition temperature due to higher loading rates and irradiation can be reasonably predicted by a simple equivalent yield stress model. These results also suggest that size and geometry effects, which mediate constraint, can be modeled by combining local critical stressed area σ*/A* fracture criteria with finite element method simulations of crack tip stress fields. The fundamental understanding reflected in these models will be needed to develop K_e(T) curves for a range of loading rates, irradiation conditions, structural size scales and geometries relying (in large part) on small specimen tests. Indeed, it may be possible to develop a master K_e(T) curve-shift method to account for these variables. Such reliable and flexible failure assessment methods are critical to the design and safe operation of defect tolerant vanadium structures.

  11. Predicting online ratings based on the opinion spreading process

    NASA Astrophysics Data System (ADS)

    He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo

    2015-10-01

    Predicting users' online ratings is always a challenging issue and has drawn much attention. In this paper, we present a rating prediction method that combines the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method produces a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both opinion sender and receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm has better accuracy than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method can further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, on the Movielens and Netflix data sets the corresponding accuracies (MAE and RMSE) are improved by 11.26% and 8.84%, and by 13.49% and 10.52%, respectively, compared to the item average method.
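
    For context, the user-based collaborative filtering baseline referred to above can be written compactly with cosine similarity; the tiny rating matrix is illustrative, with 0 denoting an unrated item.

    ```python
    import numpy as np

    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [1, 0, 4, 4]], dtype=float)

    def predict(R, user, item):
        rated = R[:, item] > 0                       # users who rated this item
        norms = np.linalg.norm(R[user]) * np.linalg.norm(R[rated], axis=1)
        sims = R[rated] @ R[user] / norms            # cosine similarity to `user`
        return sims @ R[rated, item] / sims.sum()    # similarity-weighted average

    print("predicted rating of item 2 by user 0:", round(predict(R, 0, 2), 2))
    ```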

  12. The Relationship Between Hospital Value-Based Purchasing Program Scores and Hospital Bond Ratings.

    PubMed

    Rangnekar, Anooja; Johnson, Tricia; Garman, Andrew; O'Neil, Patricia

    2015-01-01

    Tax-exempt hospitals and health systems often borrow long-term debt to fund capital investments. Lenders use bond ratings as a standard metric to assess whether to lend funds to a hospital. Credit rating agencies have historically relied on financial performance measures and a hospital's ability to service debt obligations to determine bond ratings. With the growth in pay-for-performance-based reimbursement models, rating agencies are expanding their hospital bond rating criteria to include hospital utilization and value-based purchasing (VBP) measures. In this study, we evaluated the relationship between the Hospital VBP domains--Clinical Process of Care, Patient Experience of Care, Outcome, and Medicare Spending per Beneficiary (MSPB)--and hospital bond ratings. Given the historical focus on financial performance, we hypothesized that hospital bond ratings are not associated with any of the Hospital VBP domains. This was a retrospective, cross-sectional study of all hospitals that were rated by Moody's for fiscal year 2012 and participated in the Centers for Medicare & Medicaid Services' VBP program as of January 2014 (N = 285). Of the 285 hospitals in the study, 15% had been assigned a bond rating of Aa, and 46% had been assigned an A rating. Using a binary logistic regression model, we found an association between MSPB only and bond ratings, after controlling for other VBP and financial performance scores; however, MSPB did not improve the overall predictive accuracy of the model. Inclusion of VBP scores in the methodology used to determine hospital bond ratings is likely to affect hospital bond ratings in the near term. PMID:26554267

  13. A mechanistic detachment rate model to predict soil erodibility due to fluvial and seepage forces

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The erosion rate of cohesive soils is typically computed using an excess shear stress model based on the applied fluvial shear stress. However, no mechanistic approaches are available for incorporating additional forces such as localized groundwater seepage forces into the excess shear stress model...
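
    For reference, the excess shear stress model mentioned above has the standard form: erosion rate E = kd * (tau - tau_c) when the applied stress tau exceeds the critical stress tau_c, and zero otherwise. The coefficient values below are illustrative.

    ```python
    def erosion_rate(tau, tau_c=2.0, kd=0.05):
        """Excess shear stress model; tau and tau_c in Pa, kd and E in
        consistent (illustrative) units."""
        return kd * max(tau - tau_c, 0.0)

    print(erosion_rate(5.0))   # 0.15 for tau = 5 Pa
    ```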

  14. Modeling anisotropic sensitivity in pentaerythritol tetranitrate using strain rate dependent reactive flow model

    NASA Astrophysics Data System (ADS)

    Kim, Kihong; Fried, Laurence; Yoh, Jack

    2013-06-01

    Initiation of detonation in some high explosives has shown strong anisotropic sensitivity under mechanical impact. Preferred directions of crystal orientation on shock initiation have been experimentally observed in pentaerythritol tetranitrate (PETN), resulting in a dramatic difference in the detonation sensitivity upon shock compression in different directions. The ignition and growth model, based on empirical observation of the pressure-dependent initiation of detonation, has been widely used to date. Since the model is independent of the direction of compression, it is impossible to address the sensitivity associated with preferred crystal orientation for establishing go/no-go criteria. In this paper, we propose a new reactive flow model that is consistent with available PETN experiments and atomistic calculations. A general tensor notation is utilized to fully address the three-dimensional effect of strain rate dependence on the anisotropic detonation of PETN. K. Kim was supported by a post-doctoral research fellowship from the National Research Foundation of Korea.

  15. Model-Based Reasoning

    ERIC Educational Resources Information Center

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  16. A high-rate PCI-based telemetry processor system

    NASA Astrophysics Data System (ADS)

    Turri, R.

    2002-07-01

    The high performance reached by satellite on-board telemetry generation and transmission will consequently require the design of ground facilities with higher processing capabilities at low cost, to allow wide deployment of such ground stations. The equipment normally used is based on complex, proprietary bus and computing architectures that prevent the systems from exploiting the continuous and rapid increase in computing power available on the market. PCI bus systems now allow processing of high-rate data streams in a standard PC system. At the same time, the Windows NT operating system supports multitasking and symmetric multiprocessing, giving the capability to process high data rate signals. In addition, high-speed networking, 64-bit PCI-bus technologies and the increase in processor power and software allow the creation of a system based on COTS products (which in future may be easily and inexpensively upgraded). In the frame of the EUCLID RTP 9.8 project, a specific work element was dedicated to developing the architecture of a system able to acquire telemetry data at up to 600 Mbps. Laben S.p.A - a Finmeccanica Company -, entrusted with this work, has designed a PCI-based telemetry system making possible the communication between a satellite down-link and a wide area network at the required rate.

  17. Estimation of Eddy Dissipation Rates from Mesoscale Model Simulations

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2012-01-01

    The Eddy Dissipation Rate is an important metric for representing the intensity of atmospheric turbulence and is used as an input parameter for predicting the decay of aircraft wake vortices. In this study, the forecasts of eddy dissipation rates obtained from the current state-of-the-art mesoscale model are evaluated for terminal area applications. The Weather Research and Forecast mesoscale model is used to simulate the planetary boundary layer at high horizontal and vertical mesh resolutions. The Bougeault-Lacarrère and the Mellor-Yamada-Janjić schemes implemented in the Weather Research and Forecast model are evaluated against data collected during the National Aeronautics and Space Administration's Memphis Wake Vortex Field Experiment. Comparisons with other observations are included as well.

  18. Eigen model with general fitness functions and degradation rates

    NASA Astrophysics Data System (ADS)

    Hu, Chin-Kun; Saakian, David B.

    2006-03-01

    We present an exact solution of Eigen's quasispecies model with a general degradation rate and fitness functions, including a square-root decrease of fitness with increasing Hamming distance from the wild type. The behavior found for the model with a degradation rate is analogous to that of a viral quasispecies under attack by the immune system of the host. Our exact solutions also revise the known results on neutral networks in quasispecies theory. To explain the existence of mutants with large Hamming distances from the wild type, we propose three different modifications of the Eigen model: mutation landscape, multiple adjacent mutations, and frequency-dependent fitness, in which the steady state solution shows a multi-center behavior.

  19. Model-based software design

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael

    1992-01-01

    Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.

  20. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks, which are generally omitted from hazard assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates makes them difficult to forecast even on time scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations, and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
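
    The ETAS conditional intensity used in such simulations has the standard temporal form below; the parameter values are illustrative, not the optimized ones described in the abstract.

    ```python
    import numpy as np

    # lambda(t) = mu + sum_{t_i < t} K * 10**(alpha*(M_i - M0)) / (t - t_i + c)**p
    mu, K, alpha, c, p, M0 = 0.2, 0.02, 1.0, 0.01, 1.2, 3.0   # rates in events/day

    def etas_intensity(t, event_times, event_mags):
        past = event_times < t
        dt = t - event_times[past]
        return mu + np.sum(K * 10.0 ** (alpha * (event_mags[past] - M0))
                           / (dt + c) ** p)

    times = np.array([1.0, 3.5, 3.6])   # event times [days]
    mags = np.array([4.2, 5.0, 3.4])    # magnitudes
    print("rate at t = 4 d:", etas_intensity(4.0, times, mags), "events/day")
    ```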

  1. Predation rates by North Sea cod (Gadus morhua) - Predictions from models on gastric evacuation and bioenergetics

    USGS Publications Warehouse

    Hansson, S.; Rudstam, L. G.; Kitchell, J.F.; Hilden, M.; Johnson, B.L.; Peppard, P.E.

    1996-01-01

    We compared four different methods for estimating predation rates by North Sea cod (Gadus morhua). Three estimates, based on gastric evacuation rates, came from an ICES multispecies working group and the fourth from a bioenergetics model. The bioenergetics model was developed from a review of literature on cod physiology. The three gastric evacuation rate models produced very different prey consumption estimates for small (2 kg) fish. For most size and age classes, the bioenergetics model predicted food consumption rates intermediate to those predicted by the gastric evacuation models. Using the standard ICES model and the average population abundance and age structure for 1974-1989, annual prey consumption by the North Sea cod population (age greater than or equal to 1) was 840 kilotons. The other two evacuation rate models produced estimates of 1020 and 1640 kilotons, respectively. The bioenergetics model estimate was 1420 kilotons. The major differences between models were due to consumption rate estimates for younger age groups of cod. (C) 1996 International Council for the Exploration of the Sea

  2. Principles of models based engineering

    SciTech Connect

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  3. Caesarean Delivery Rate Review: An Evidence-Based Analysis

    PubMed Central

    Degani, N; Sikich, N

    2015-01-01

    Background In 2007, caesarean deliveries comprised 28% of all hospital deliveries in Ontario. Provincial caesarean delivery rates increased with maternal age and varied by Local Health Integration Network. However, the accepted rate of caesarean delivery in a low-risk maternal population remains unclear. Objectives To review the literature to assess factors that affect the likelihood of experiencing a caesarean delivery, and to examine Ontario caesarean delivery rates to determine whether there is rate variation across the province. Data Sources Data sources included publications from OVID MEDLINE, OVID MEDLINE In-Process and Other Non-Indexed Citations, OVID Embase, EBSCO Cumulative Index to Nursing & Allied Health Literature (CINAHL), and EBM Reviews, as well as data from the Canadian Institute for Health Information Discharge Abstracts Database and the Better Outcomes and Registry Network. Review Methods A mixed-methods approach was used, which included a systematic review of the literature to delineate factors associated with the likelihood of caesarean delivery and an analysis of administrative and clinical data on hospital deliveries in Ontario to determine provincial caesarean delivery rates, variation in rates, and reasons for variation. Results Fourteen systematic reviews assessed 14 factors affecting the likelihood of caesarean delivery; 7 factors were associated with an increased likelihood of caesarean delivery, and 2 factors were associated with a decreased likelihood. Five factors had no influence. One factor provided moderate-quality evidence supporting elective induction policies in low-risk women. The overall Ontario caesarean delivery rate in a very-low-risk population was 17%, but varied significantly across Ontario hospitals. Limitations The literature review included a 5-year period and used only systematic reviews. The determination of Robson class for women is based on care received in hospital only, and the low-risk population may have

  4. Rapid calculation of functional maps of glucose metabolic rate and individual model rate parameters from serial 2-FDG images

    SciTech Connect

    Koeppe, R.A.; Holden, J.E.; Hutchins, G.D.

    1985-05-01

    The authors have developed a method for the rapid pixel-by-pixel estimation of glucose metabolic rate from a dynamic sequence of PCT images acquired over 40 minutes following venous bolus injection of 2-deoxy-2-fluoro-D-glucose (2-FDG). The calculations are based on the conventional four-parameter model. The dephosphorylation rate (k₄) cannot be reliably estimated from only 40 minutes of data; however, neglecting dephosphorylation can nonetheless introduce significant biases into the parameter estimation processes. In the authors' method, the rate is constrained to fall within a small range about a presumed value. Computer simulation studies show that this constraint greatly reduces the systematic biases in the other three fitted parameters and in the metabolic rate that arise from the assumption of no dephosphorylation. The parameter estimation scheme used is formally identical to one originally developed for dynamic methods of cerebral blood flow estimation. Estimation of metabolic rate and the individual model rate parameters k₁, k₂, and k₃ can be carried out for each pixel sequence of a 100 x 100 pixel image in less than two minutes on our PDP 11/60 minicomputer with floating point processor. While the maps of k₂ and k₃ are quite noisy, accurate estimates of average values can be attained for regions of a few cm². The maps of metabolic rate offer many advantages in addition to that of direct visualization. These include improved statistical precision and the avoidance of averaging failure in the fitting of heterogeneous regions.
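
    A minimal sketch of how a metabolic rate value would follow from the fitted rate constants, using the standard FDG influx expression MRglc = (Cp/LC) * K1*k3/(k2 + k3); all numerical values are illustrative assumptions, not the authors' data.

        def glucose_metabolic_rate(k1, k2, k3, cp, lc):
            """Regional glucose metabolic rate from the conventional FDG
            compartment model: MRglc = (cp / lc) * k1*k3 / (k2 + k3),
            where cp is plasma glucose and lc the lumped constant."""
            ki = k1 * k3 / (k2 + k3)   # net influx constant
            return cp * ki / lc

        # Illustrative values only (units assumed: min^-1, mM, dimensionless).
        mr = glucose_metabolic_rate(k1=0.102, k2=0.130, k3=0.062, cp=5.5, lc=0.52)
        print(f"MRglc ~ {mr:.3f} mM/min")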

  5. Examining rating scales using Rasch and Mokken models for rater-mediated assessments.

    PubMed

    Wind, Stephanie A

    2014-01-01

    A variety of methods for evaluating the psychometric quality of rater-mediated assessments have been proposed, including rater effects based on latent trait models (e.g., Engelhard, 2013; Wolfe, 2009). Although information about rater effects contributes to the interpretation and use of rater-assigned scores, it is also important to consider ratings in terms of the structure of the rating scale on which scores are assigned. Further, concern with the validity of rater-assigned scores necessitates investigation of these quality control indices within student subgroups, such as gender, language, and race/ethnicity groups. Using a set of guidelines for evaluating the interpretation and use of rating scales adapted from Linacre (1999, 2004), this study demonstrates methods that can be used to examine rating scale functioning within and across student subgroups with indicators from Rasch measurement theory (Rasch, 1960) and Mokken scale analysis (Mokken, 1971). Specifically, this study illustrates indices of rating scale effectiveness based on Rasch models and models adapted from Mokken scaling, and considers whether the two approaches to evaluating the interpretation and use of rating scales lead to comparable conclusions within the context of a large-scale rater-mediated writing assessment. Major findings suggest that indices of rating scale effectiveness based on a parametric and nonparametric approach provide related, but slightly different, information about the structure of rating scales. Implications for research, theory, and practice are discussed. PMID:24950531

  6. Residential air exchange rates for use in indoor air and exposure modeling studies.

    PubMed

    Pandian, M D; Ott, W R; Behar, J V

    1993-01-01

    Data on air exchange rates are important inputs to indoor air quality models. Indoor air models, in turn, are incorporated into the structure of total human exposure models. Fragmentary data on residential ventilation rates are available in various governmental reports, journal articles, and contractor reports. Most of the published papers present data on only a few homes to answer very specialized questions, and none of these publications summarize the ventilation rates of a large population of homes across the United States. Brookhaven National Laboratory (BNL) has conducted more than 4000 residential perfluorocarbon tracer (PFT) measurements and brought them together into a large data base from about 100 studies in the United States and elsewhere. This paper analyzes the BNL PFT data base to generate frequency distributions and summary statistics for different regions of the United States, different seasons, and different levels within the homes. The data analyses suggest that residential ventilation rates are similar in the northeastern and northwestern states but higher in the southwestern states. Winter and fall ventilation rates are similar, but the rates are slightly higher in spring, and much higher in summer. Multi-level residences have higher air exchange rates than single-level residences. Although the BNL data are not a representative sample of homes in the United States, these analyses give insight into the range of air exchange rates found in the United States under a great variety of conditions and are intended for use by developers of models of indoor air quality and total human exposure. PMID:8173341
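
    Distributions of this kind are conventionally summarized by geometric means and geometric standard deviations, consistent with a lognormal description; the sketch below computes those summary statistics for a hypothetical sample and is not derived from the BNL data base.

        import numpy as np

        def lognormal_summary(ach):
            """Geometric mean and geometric standard deviation of a sample
            of air exchange rates (h^-1), the usual summary statistics for
            such skewed distributions."""
            logs = np.log(ach)
            return np.exp(logs.mean()), np.exp(logs.std(ddof=1))

        # Hypothetical air exchange rates for one region/season stratum.
        ach = np.array([0.3, 0.5, 0.7, 1.2, 0.4, 0.9, 2.1, 0.6])
        gm, gsd = lognormal_summary(ach)
        print(f"GM = {gm:.2f} h^-1, GSD = {gsd:.2f}")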

  7. Rates of coalescence for common epidemiological models at equilibrium

    PubMed Central

    Koelle, Katia; Rasmussen, David A.

    2012-01-01

    Coalescent theory provides a mathematical framework for quantitatively interpreting gene genealogies. With the increased availability of molecular sequence data, disease ecologists now regularly apply this body of theory to viral phylogenies, most commonly in attempts to reconstruct demographic histories of infected individuals and to estimate parameters such as the basic reproduction number. However, with few exceptions, the mathematical expressions at the core of coalescent theory have not been explicitly linked to the structure of epidemiological models, which are commonly used to mathematically describe the transmission dynamics of a pathogen. Here, we aim to make progress towards establishing this link by presenting a general approach for deriving a model's rate of coalescence under the assumption that the disease dynamics are at their endemic equilibrium. We apply this approach to four common families of epidemiological models: standard susceptible-infected-susceptible/susceptible-infected-recovered/susceptible-infected-recovered-susceptible models, models with individual heterogeneity in infectivity, models with an exposed but not yet infectious class and models with variable distributions of the infectious period. These results improve our understanding of how epidemiological processes shape viral genealogies, as well as how these processes affect levels of viral diversity and rates of genetic drift. Finally, we discuss how a subset of these coalescent rate expressions can be used for phylodynamic inference in non-equilibrium settings. For the ones that are limited to equilibrium conditions, we also discuss why this is the case. These results, therefore, point towards necessary future work while providing intuition on how epidemiological characteristics of the infection process impact gene genealogies. PMID:21920961
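
    As one concrete instance of such an expression, the sketch below evaluates the widely used pairwise coalescence rate, twice the total transmission rate divided by the squared number of infecteds (e.g., Volz, 2009), at an assumed SIR-type endemic equilibrium; the paper's derivations for the other model families differ in detail.

        def pair_coalescence_rate(beta, S, I, N):
            """Rate at which two sampled lineages coalesce:
            lambda_pair = 2*f / I**2, with f = beta*S*I/N the total rate at
            which new infections (lineage 'births') occur. Parameter values
            are illustrative, not taken from the paper."""
            f = beta * S * I / N
            return 2.0 * f / I**2

        # Hypothetical endemic equilibrium of an SIR-type model.
        print(pair_coalescence_rate(beta=0.5, S=2000, I=150, N=10000))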

  8. Computational modeling of dynamic mechanical properties of pure polycrystalline magnesium under high loading strain rates

    NASA Astrophysics Data System (ADS)

    Li, Qizhen

    2015-09-01

    Computational simulations were performed to investigate the dynamic mechanical behavior of pure polycrystalline magnesium under different high loading strain rates with the values of 800, 1000, 2000, and 3600 s⁻¹. The Johnson-Cook model was utilized in the simulations based on finite element modeling. The results showed that the simulations provided well-matched predictions of the material behavior such as the strain rate-time history, the stress-strain curve, and the temperature increase. Under high loading strain rates, the tested material experienced linear strain hardening at the early stage of plastic deformation, increased strain hardening at the intermediate plastic deformation region, and decreased strain hardening at the region before fracture. The strain hardening rates for the studied high loading strain rate cases do not vary much with the change of strain rates.
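
    For reference, the Johnson-Cook flow stress used in such simulations has the closed form sketched below; the material constants in the example are generic placeholders rather than the calibrated magnesium parameters of the study.

        import numpy as np

        def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                                eps_rate0=1.0, T_room=293.0, T_melt=923.0):
            """Johnson-Cook flow stress:
            sigma = (A + B*eps**n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T***m),
            with homologous temperature T* = (T - T_room)/(T_melt - T_room).
            Constants below are placeholders, not fitted magnesium values."""
            t_star = (T - T_room) / (T_melt - T_room)
            return ((A + B * eps**n)
                    * (1.0 + C * np.log(eps_rate / eps_rate0))
                    * (1.0 - t_star**m))

        sigma = johnson_cook_stress(eps=0.05, eps_rate=2000.0, T=300.0,
                                    A=100e6, B=150e6, n=0.2, C=0.013, m=1.55)
        print(f"flow stress ~ {sigma / 1e6:.1f} MPa")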

  9. Video-rate volumetric optical coherence tomography-based microangiography

    NASA Astrophysics Data System (ADS)

    Baran, Utku; Wei, Wei; Xu, Jingjiang; Qi, Xiaoli; Davis, Wyatt O.; Wang, Ruikang K.

    2016-04-01

    Video-rate volumetric optical coherence tomography (vOCT) is relatively young in the field of OCT imaging but has great potential in biomedical applications. Due to the recent development of the MHz range swept laser sources, vOCT has started to gain attention in the community. Here, we report the first in vivo video-rate volumetric OCT-based microangiography (vOMAG) system by integrating an 18-kHz resonant microelectromechanical system (MEMS) mirror with a 1.6-MHz FDML swept source operating at ~1.3 μm wavelength. Because the MEMS scanner can offer an effective B-frame rate of 36 kHz, we are able to engineer vOMAG with a video rate up to 25 Hz. This system was utilized for real-time volumetric in vivo visualization of cerebral microvasculature in mice. Moreover, we monitored the blood perfusion dynamics during stimulation within mouse ear in vivo. We also discussed this system's limitations. Prospective MEMS-enabled OCT probes with a real-time volumetric functional imaging capability can have a significant impact on endoscopic imaging and image-guided surgery applications.

  10. Cardiac rate detection method based on the beam splitter prism

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Liu, Xiaohua; Liu, Ming; Zhao, Yuejin; Dong, Liquan; Zhao, Ruirui; Jin, Xiaoli; Zhao, Jingsheng

    2013-09-01

    A new cardiac rate measurement method is proposed. Through the beam splitter prism, a common-path optical system for transmitting and receiving signals is achieved. By the focusing effect of the lens, the small-amplitude motion artifact is inhibited and the signal-to-noise ratio is improved. The cardiac rate is obtained based on PhotoPlethysmoGraphy (PPG). We use an LED as the light source and a photoelectric diode as the receiving tube. The LED and the photoelectric diode are on different sides of the beam splitter prism and together they form the optical system. The signal processing and display unit is composed of the signal processing circuit, a data acquisition device and a computer. The light emitted by the modulated LED is collimated by the lens and irradiates the measurement target through the beam splitter prism. The light reflected by the target is focused on the receiving tube through the beam splitter prism and another lens. The signal received by the photoelectric diode is processed by the analog circuit and captured by the data acquisition device. Through filtering and the Fast Fourier Transform, the cardiac rate is obtained. We get the real-time cardiac rate by the moving average method. We experimented with 30 volunteers of different genders and ages. We compared the signals captured by this method to a conventional PPG signal captured concurrently from a finger. The results of the experiments are all in relatively good agreement and the biggest deviation is about 2 bpm.
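
    A minimal sketch of the signal path described above: band-pass filter a PPG trace to the physiological band and read the cardiac rate off the dominant FFT peak. Filter order, band edges, and the synthetic test signal are assumptions, not details taken from the paper.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def heart_rate_from_ppg(signal, fs):
            """Estimate cardiac rate (bpm) from a PPG trace: band-pass to
            0.7-3.5 Hz (42-210 bpm), then locate the dominant FFT peak."""
            b, a = butter(2, [0.7, 3.5], btype="band", fs=fs)
            filtered = filtfilt(b, a, signal)
            spectrum = np.abs(np.fft.rfft(filtered))
            freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
            band = (freqs >= 0.7) & (freqs <= 3.5)
            return 60.0 * freqs[band][np.argmax(spectrum[band])]

        # Synthetic 75-bpm pulse sampled at 100 Hz for 10 s, with noise.
        fs = 100.0
        t = np.arange(0.0, 10.0, 1.0 / fs)
        ppg = np.sin(2 * np.pi * 1.25 * t) + 0.3 * np.random.randn(t.size)
        print(f"estimated rate: {heart_rate_from_ppg(ppg, fs):.0f} bpm")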

  11. A count rate based contamination control standard for electron accelerators

    SciTech Connect

    May, R.T.; Schwahn, S.O.

    1996-12-31

    Accelerators of sufficient energy and particle fluence can produce radioactivity as an unwanted byproduct. The radioactivity is typically imbedded in structural materials but may also be removable from surfaces. Many of these radionuclides decay by positron emission or electron capture; they often have long half lives and produce photons of low energy and yield making detection by standard devices difficult. The contamination control limit used throughout the US nuclear industry and the Department of Energy is 1,000 disintegrations per minute. This limit is based on the detection threshold of pancake type Geiger-Mueller probes for radionuclides of relatively high radiotoxicity, such as cobalt-60. Several radionuclides of concern at a high energy electron accelerator are compared in terms of radiotoxicity with radionuclides commonly found in the nuclear industry. Based on this comparison, a count-rate based contamination control limit and associated measurement strategy is proposed which provides adequate detection of contamination at accelerators without an increase in risk.

  12. SFT based cosmological models

    NASA Astrophysics Data System (ADS)

    Koshelev, Alexey S.

    2010-11-01

    We consider the appearance of multiple scalar fields in SFT inspired non-local models with a single scalar field at late times. In this regime all the scalar fields are free. This system, minimally coupled to gravity, is the main subject analyzed in this note. We build one exact solution to the equations of motion. We consider an exactly solvable model which admits a simple exact solution of the Friedmann equations in the cosmological context and reproduces the behavior expected from SFT in the asymptotic regime.

  13. Queuing model of a traffic bottleneck with bimodal arrival rate

    NASA Astrophysics Data System (ADS)

    Woelki, Marko

    2016-06-01

    This paper revisits the problem of tuning the density in a traffic bottleneck by reduction of the arrival rate when the queue length exceeds a certain threshold, studied recently for variants of totally asymmetric simple exclusion process (TASEP) and Burgers equation. In the present approach, a simple finite queuing system is considered and its contrasting “phase diagram” is derived. One can observe one jammed region, one low-density region and one region where the queue length is equilibrated around the threshold. Despite the simplicity of the model the physics is in accordance with the previous approach: The density is tuned at the threshold if the exit rate lies in between the two arrival rates.
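
    A toy continuous-time simulation of the mechanism described above, with the arrival rate switching between two values at a queue-length threshold, reproduces the equilibration of the queue around the threshold; all rates here are illustrative.

        import random

        def simulate_threshold_queue(lam_low, lam_high, mu, threshold, steps):
            """Gillespie-style simulation of a single-server queue whose
            arrival rate drops from lam_high to lam_low once the queue
            length exceeds the threshold. Returns the time-averaged
            queue length."""
            n, area, t = 0, 0.0, 0.0
            for _ in range(steps):
                lam = lam_low if n > threshold else lam_high
                total = lam + (mu if n > 0 else 0.0)
                dt = random.expovariate(total)
                area += n * dt
                t += dt
                if random.random() < lam / total:
                    n += 1          # arrival
                elif n > 0:
                    n -= 1          # service completion
            return area / t

        random.seed(1)
        print(simulate_threshold_queue(lam_low=0.4, lam_high=0.9, mu=0.7,
                                       threshold=10, steps=200_000))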

  14. [Design of Oxygen Saturation, Heart Rate, Respiration Rate Detection System Based on Smartphone of Android Operating System].

    PubMed

    Zhu, Mingshan; Zeng, Bixin

    2015-03-01

    In this paper, we designed an oxygen saturation, heart rate, and respiration rate monitoring system based on a smartphone running the Android operating system, with the physiological signals acquired by an MSP430 microcontroller and transmitted by a Bluetooth module. PMID:26524782

  15. Nonlinear modeling on rate dependent ferroelectric and ferroelastic response of 1-3 piezocomposites

    NASA Astrophysics Data System (ADS)

    Jayendiran, R.; Arockiarajan, A.

    2016-06-01

    The effect of loading rate on the ferroelectric and ferroelastic behavior of 1-3 piezocomposites is presented in this work. Experiments are conducted for various loading rates under different loading conditions, such as electrical and electromechanical, to measure the rate-dependent response of 1-3 piezocomposites compared with bulk piezoceramics. A thermodynamics-based rate-dependent domain switching criterion is proposed to predict the ferroelectric and ferroelastic behavior of homogenized 1-3 piezocomposites. In this model, the volume fractions of six distinct uni-axial variants are used as internal variables to describe the microscopic state of the material. A plasticity-based kinematic hardening parameter is introduced as a function of the internal variables to describe the grain boundary effects. Homogenized 1-3 piezocomposite material properties are obtained from finite element (FE) resonator models using the commercially available FE tool Abaqus. To evaluate the possible modes of vibration of the 1-3 piezocomposite, four different configurations of FE resonators are modeled. The FE resonator model is validated with the impedance spectra obtained experimentally for the length-extensional and thickness-extensional resonator models. The predicted effective properties from the resonance-based technique are incorporated in the proposed rate-dependent macromechanical model to study the behavior of 1-3 piezocomposites. The simulated results are compared with the experimental observations.

  16. On the figure of merit model for SEU rate calculations

    SciTech Connect

    Barak, J.; Reed, R.A.; LaBel, K.A.

    1999-12-01

    Petersen has introduced a one-parameter characterization of a device by the Figure of Merit (FOM). It was claimed that this parameter is sufficient to estimate the SEU rate in almost all orbits. The present paper presents an analytic study of the FOM concept and compares the FOM model with other empirical models. It is found that the FOM parameter indeed gives, in most cases, good agreement with the rates found using the full SEU cross section plots of the devices. The agreement is poorer in cases where a high portion of the proton flux comes from low-energy protons and for very SEU-hard devices. This is demonstrated for certain devices (FPGAs) where the FOM predicted from proton data may be smaller by an order of magnitude than the FOM from heavy ions.

  17. Assessment on the rates and potentials of soil organic carbon sequestration in agricultural lands in Japan using a process-based model and spatially explicit land-use change inventories - Part 2: Future potentials

    NASA Astrophysics Data System (ADS)

    Yagasaki, Y.; Shirato, Y.

    2014-08-01

    Future potentials of the sequestration of soil organic carbon (SOC) in agricultural lands in Japan were estimated using a simulation system we recently developed to simulate SOC stock change at country-scale under varying land-use change, climate, soil, and agricultural practices, in a spatially explicit manner. Simulation was run from 1970 to 2006 with historical inventories, and subsequently to 2020 with future scenarios of agricultural activity comprised of various agricultural policy targets advocated by the Japanese government. Furthermore, the simulation was run subsequently until 2100 while forcing no temporal changes in land-use and agricultural activity to investigate duration and course of SOC stock change at country scale. A scenario with an increased rate of organic carbon input to agricultural fields by intensified crop rotation in combination with the suppression of conversion of agricultural lands to other land-use types was found to have a greater reduction of CO2 emission by enhanced soil carbon sequestration, but only under a circumstance in which the converted agricultural lands will become settlements that were considered to have a relatively lower rate of organic carbon input. The size of relative reduction of CO2 emission in this scenario was comparable to that in another contrasting scenario (business-as-usual scenario of agricultural activity) in which a relatively lower rate of organic matter input to agricultural fields was assumed in combination with an increased rate of conversion of the agricultural fields to unmanaged grasslands through abandonment. Our simulation experiment clearly demonstrated that net-net-based accounting on SOC stock change, defined as the differences between the emissions and removals during the commitment period and the emissions and removals during a previous period (base year or base period of Kyoto Protocol), can be largely influenced by variations in future climate. Whereas baseline-based accounting, defined

  18. Infrared imaging based hyperventilation monitoring through respiration rate estimation

    NASA Astrophysics Data System (ADS)

    Basu, Anushree; Routray, Aurobinda; Mukherjee, Rashmi; Shit, Suprosanna

    2016-07-01

    A change in the skin temperature is used as an indicator of physical illness which can be detected through infrared thermography. Thermograms or thermal images can be used as an effective diagnostic tool for monitoring and diagnosis of various diseases. This paper describes an infrared thermography based approach for detecting hyperventilation caused by stress and anxiety in human beings by computing their respiration rates. The work employs computer vision techniques for tracking the region of interest from thermal video to compute the breath rate. Experiments have been performed on 30 subjects. Corner feature extraction using the Minimum Eigenvalue (Shi-Tomasi) algorithm and registration using the Kanade-Lucas-Tomasi algorithm have been used here. The thermal signature around the extracted region is detected and subsequently filtered through a band-pass filter to compute the respiration profile of an individual. If the respiration profile shows an unusual pattern and exceeds the threshold, we conclude that the person is stressed and tending to hyperventilate. Results obtained are compared with standard contact-based methods and have shown significant correlations. It is envisaged that the thermal image based approach not only will help in detecting hyperventilation but can also assist in regular stress monitoring, as it is a non-invasive method.
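
    A minimal sketch of the tracking stage using OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade (KLT) tracker, returning the per-frame mean intensity around the tracked points as the raw respiration signal; the frame format and all parameter values are assumptions.

        import cv2
        import numpy as np

        def track_roi_signal(frames):
            """frames: list of 8-bit grayscale thermal images (assumed).
            Tracks Shi-Tomasi corners with KLT and returns the mean image
            intensity at the tracked points per frame; a band-pass filter
            would then be applied to this signal."""
            p0 = cv2.goodFeaturesToTrack(frames[0], maxCorners=20,
                                         qualityLevel=0.01, minDistance=5)
            signal = []
            for prev, curr in zip(frames[:-1], frames[1:]):
                p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
                p0 = p1[st.flatten() == 1].reshape(-1, 1, 2)
                xs = p0[:, 0, 0].astype(int).clip(0, curr.shape[1] - 1)
                ys = p0[:, 0, 1].astype(int).clip(0, curr.shape[0] - 1)
                signal.append(float(curr[ys, xs].mean()))
            return np.array(signal)

        # Tiny synthetic demo: a bright square drifting downward.
        frames = []
        for shift in range(12):
            img = np.zeros((64, 64), np.uint8)
            img[20 + shift // 4:30 + shift // 4, 20:30] = 255
            frames.append(img)
        print(track_roi_signal(frames).round(1))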

  19. Effects of distribution of infection rate on epidemic models

    NASA Astrophysics Data System (ADS)

    Lachiany, Menachem; Louzoun, Yoram

    2016-08-01

    A goal of many epidemic models is to compute the outcome of the epidemics from the observed infected early dynamics. However, often, the total number of infected individuals at the end of the epidemics is much lower than predicted from the early dynamics. This discrepancy is argued to result from human intervention or nonlinear dynamics not incorporated in standard models. We show that when variability in infection rates is included in standard susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models the total number of infected individuals in the late dynamics can be orders of magnitude lower than predicted from the early dynamics. This discrepancy holds for SIS and SIR models, where the assumption that all individuals have the same sensitivity is eliminated. In contrast with network models, fixed partnerships are not assumed. We derive a moment closure scheme capturing the distribution of sensitivities. We find that the shape of the sensitivity distribution does not affect R0 or the number of infected individuals in the early phases of the epidemics. However, a wide distribution of sensitivities reduces the total number of removed individuals in the SIR model and the steady-state infected fraction in the SIS model. The difference between the early and late dynamics implies that in order to extrapolate the expected effect of the epidemics from the initial phase of the epidemics, the rate of change in the average infectivity should be computed. These results are supported by a comparison of the theoretical model to the Ebola epidemics and by numerical simulation.

  20. Effects of distribution of infection rate on epidemic models.

    PubMed

    Lachiany, Menachem; Louzoun, Yoram

    2016-08-01

    A goal of many epidemic models is to compute the outcome of the epidemics from the observed infected early dynamics. However, often, the total number of infected individuals at the end of the epidemics is much lower than predicted from the early dynamics. This discrepancy is argued to result from human intervention or nonlinear dynamics not incorporated in standard models. We show that when variability in infection rates is included in standard susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models the total number of infected individuals in the late dynamics can be orders of magnitude lower than predicted from the early dynamics. This discrepancy holds for SIS and SIR models, where the assumption that all individuals have the same sensitivity is eliminated. In contrast with network models, fixed partnerships are not assumed. We derive a moment closure scheme capturing the distribution of sensitivities. We find that the shape of the sensitivity distribution does not affect R0 or the number of infected individuals in the early phases of the epidemics. However, a wide distribution of sensitivities reduces the total number of removed individuals in the SIR model and the steady-state infected fraction in the SIS model. The difference between the early and late dynamics implies that in order to extrapolate the expected effect of the epidemics from the initial phase of the epidemics, the rate of change in the average infectivity should be computed. These results are supported by a comparison of the theoretical model to the Ebola epidemics and by numerical simulation. PMID:27627337
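
    A quick numerical illustration of the paper's central effect, assuming a deterministic multi-group SIR model in which groups differ only in susceptibility (mean held near 1, so R0 is essentially unchanged): widening the susceptibility distribution lowers the final epidemic size.

        import numpy as np

        def sir_final_size(susceptibility, beta, gamma, days, dt=0.05):
            """Forward-Euler integration of a multi-group SIR model with
            per-group susceptibility weights; groups have equal sizes.
            Returns the final epidemic size. Parameters are illustrative."""
            k = susceptibility.size
            S = np.full(k, 1.0 / k)
            I = 1e-4
            for _ in range(int(days / dt)):
                new_inf = beta * susceptibility * S * I * dt
                S -= new_inf
                I += new_inf.sum() - gamma * I * dt
            return 1.0 - S.sum()

        rng = np.random.default_rng(0)
        homogeneous = np.ones(1000)
        wide = rng.gamma(shape=0.5, scale=2.0, size=1000)  # mean ~1, high CV
        for name, s in [("homogeneous", homogeneous), ("wide", wide)]:
            print(name, round(sir_final_size(s, beta=0.4, gamma=0.2, days=300), 3))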

  1. Can we properly model the neutron monitor count rate?

    NASA Astrophysics Data System (ADS)

    Gil, Agnieszka; Usoskin, Ilya G.; Kovaltsov, Gennady A.; Mishev, Alexander L.; Corti, Claudio; Bindi, Veronica

    2015-09-01

    Neutron monitors provide continuous measurements of secondary nucleonic particles produced in the atmosphere by the primary cosmic rays and form the main tool to study the heliospheric modulation of cosmic rays. In order to study cosmic rays using the world network of neutron monitors, one needs to be able to model the neutron monitor count rate. Earlier this was difficult because of the poorly known yield function, which has been essentially revisited recently. We present a verification of the new yield function of the standard neutron monitor (NM) using recently released data on direct in situ measurements of the galactic cosmic ray energy spectrum during 2006-2009 (the period of the record high cosmic ray flux) by the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) spaceborne spectrometer, and on NM latitude surveys performed during the period of 1994-2007, including periods of high solar activity. We found a very good agreement between the measured count rates of sea level NMs and the modeled ones in very different conditions: from low to high solar activity and from polar to tropical regions. This implies that the count rate of a sea level neutron monitor can be properly modeled in all conditions, using the new yield function.
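
    In outline, the modeled count rate is the galactic cosmic ray spectrum folded with the detector yield function above the local geomagnetic cutoff. The sketch below sets up that integral with schematic placeholders for both functions; it does not reproduce the revisited yield function itself.

        import numpy as np

        def nm_count_rate(energies, spectrum, yield_fn, e_cutoff):
            """Count rate N = integral over E >= E_cutoff of Y(E) * J(E) dE,
            where J is the primary spectrum and Y the yield function."""
            mask = energies >= e_cutoff
            return np.trapz(yield_fn(energies[mask]) * spectrum[mask],
                            energies[mask])

        E = np.logspace(0, 3, 500)                           # energy grid, GeV
        J = 1.8e4 * E**-2.7                                  # toy power-law spectrum
        Y = lambda e: 1e-4 * e**1.2 / (1.0 + (10.0 / e)**2)  # toy yield function
        print(f"count rate ~ {nm_count_rate(E, J, Y, e_cutoff=5.0):.2f} (arb. units)")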

  2. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.

  3. A cloud model-radiative model combination for determining microwave TB-rain rate relations

    NASA Technical Reports Server (NTRS)

    Szejwach, Gerard; Adler, Robert F.; Jobard, Esabelle; Mack, Robert A.

    1986-01-01

    The development of a cloud model-radiative transfer model combination for computing average brightness temperature, T(B), is discussed. The cloud model and radiative transfer model used in this study are described. The relations between rain rate, cloud and rain water, cloud and precipitation ice, and upwelling radiance are investigated. The effects of the rain rate relations on T(B) under different climatological conditions are examined. The model-derived T(B) results are compared to the 92 and 183 GHz aircraft observations of Hakkarinen and Adler (1984, 1986) and the radar-estimated rain rate of Hakkarinen and Adler (1986); good agreement between the data sets is found.

  4. Dose-rate and irradiation temperature dependence of BJT SPICE model rad-parameters

    SciTech Connect

    Montagner, X.; Briand, R.; Fouillat, P.; Touboul, A.; Schrimpf, R.D.; Galloway, K.F.; Calvet, M.C.; Calvel, P.

    1998-06-01

    A method to predict low dose rate degradation of bipolar transistors using high dose-rate, high temperature irradiation is evaluated, based on an analysis of four new rad-parameters that are introduced in the BJT SPICE model. This improved BJT model describes the radiation-induced excess base current with great accuracy. The low-level values of the rad-parameters are good tools for evaluating the proposed high-temperature test method because of their high sensitivity to radiation-induced degradation.

  5. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2015-04-01

    Severe accidents in nuclear power plants such as the historical accident in Chernobyl 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011 have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase of regional radionuclide activity concentrations, which results from nuclear accidents, is subject to a multiplicity of uncertainties. One of the most significant uncertainties is the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant. Precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends amongst others on the availability, reliability and the resolution in time and space of the observations used. Radionuclide activity concentrations are observed on a relatively sparse grid and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution and therefore provide a wider basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach, which combines an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The
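
    A generic sketch of the inversion step, assuming the dispersion model has been condensed into a source-receptor matrix H: the release-rate vector is then recovered by Tikhonov-regularized least squares. The paper's actual cost function, constraints, and a priori treatment may differ.

        import numpy as np

        def invert_source_term(H, y, alpha):
            """Solve min ||H x - y||^2 + alpha * ||x||^2 for release rates x,
            given observations y (activity concentrations and/or gamma dose
            rates) and source-receptor sensitivities H from a dispersion
            model. Non-negativity of x is ignored here for brevity."""
            n = H.shape[1]
            return np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y)

        # Toy problem: 3 release periods observed at 5 receptors.
        rng = np.random.default_rng(42)
        H = rng.uniform(0.0, 1.0, size=(5, 3))
        x_true = np.array([2.0, 0.5, 1.0])
        y = H @ x_true + 0.01 * rng.standard_normal(5)
        print(np.round(invert_source_term(H, y, alpha=1e-3), 2))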

  6. Modeling the Growth Rates of Tetragonal Lysozyme Crystal Faces

    NASA Technical Reports Server (NTRS)

    Li, Meirong; Nadarajah, Arunan; Pusey, Marc L.

    1998-01-01

    with respect to its concentration at saturation in order to apply growth rate models to this process. The measured growth rates were then compared with the predicted ones from several dislocation and 2D nucleation growth models, employing tetramer and octamer growth units in polydisperse solutions and monomer units in monodisperse solutions. For the (110) face, the calculations consistently showed that the measured growth rates followed the expected model relations with octamer growth units. For the (101) face, it is not possible to obtain a clear agreement between the predicted and measured growth rates for a single growth unit as done for the (110) face. However, the calculations do indicate that the average size of the growth unit is between a tetramer and an octamer. This suggests that tetramers, octamers and other intermediate size growth units all participate in the growth process for this face. These calculations show that it is possible to model the macroscopic protein crystal growth rates if the molecular level processes can be accounted for, particularly protein aggregation processes in the bulk solution. Our recent investigations of tetragonal lysozyme crystals employing high resolution atomic force microscopy scans have further confirmed the growth of these crystals by aggregate growth units corresponding to 4₃ helices.

  7. Crack growth rate in core shroud horizontal welds using two models for a BWR

    NASA Astrophysics Data System (ADS)

    Arganis Juárez, C. R.; Hernández Callejas, R.; Medina Almazán, A. L.

    2015-05-01

    An empirical crack growth rate correlation model and a predictive model based on the slip-oxidation mechanism for Stress Corrosion Cracking (SCC) were used to calculate the crack growth rate in a BWR core shroud. In this study, the crack growth rate was calculated by accounting for the environmental factors related to the aqueous environment, neutron irradiation to high fluence and the complex residual stress conditions resulting from welding. In estimating the SCC behavior, crack growth measurement data from a Boiling Water Reactor (BWR) plant are used as reference, and the stress intensity factor vs crack depth throughout the thickness is calculated using a generic weld residual stress distribution for a core shroud, with a 30% stress relaxation induced by neutron irradiation. Quantitative agreement is shown between the measurements of SCC growth rate and the predictions of the slip-oxidation mechanism model for relatively low fluences (5 × 10²⁴ n/m²), and the empirical model predicted the SCC growth rate better than the slip-oxidation model for high fluences (>1 × 10²⁵ n/m²). The relevance of the models' predictions for SCC growth rate behavior depends on knowing the model parameters.

  8. Networks in financial markets based on the mutual information rate

    NASA Astrophysics Data System (ADS)

    Fiedor, Paweł

    2014-05-01

    In the last few years there have been many efforts in econophysics studying how network theory can facilitate understanding of complex financial markets. These efforts consist mainly of the study of correlation-based hierarchical networks. This is somewhat surprising, as the underlying assumption of research looking at financial markets is that they are complex systems and thus behave in a nonlinear manner, which is confirmed by numerous studies, making the use of correlations, which inherently capture only linear dependencies, baffling. In this paper we introduce a way to incorporate nonlinear dynamics and dependencies into hierarchical networks to study financial markets using mutual information and its dynamical extension: the mutual information rate. We show that this approach leads to different results than the correlation-based approach used in most studies, on the basis of 91 companies listed on the New York Stock Exchange 100 between 2003 and 2013, using minimal spanning trees and planar maximally filtered graphs.
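
    The sketch below conveys the construction on toy data, substituting plain mutual information between discretized return series for the Lempel-Ziv-estimated mutual information rate used in the paper, and an ad hoc monotone mapping from similarity to distance before extracting the minimal spanning tree.

        import numpy as np
        from sklearn.metrics import mutual_info_score
        from scipy.sparse.csgraph import minimum_spanning_tree

        def mi_distance_matrix(returns, bins=10):
            """Pairwise mutual information between discretized series,
            mapped to a distance in (0, 1]; higher MI -> smaller distance."""
            digitized = [np.digitize(r, np.histogram(r, bins)[1])
                         for r in returns]
            n = len(returns)
            d = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    mi = mutual_info_score(digitized[i], digitized[j])
                    d[i, j] = d[j, i] = 1.0 / (1.0 + mi)
            return d

        rets = np.random.default_rng(0).standard_normal((4, 500))
        print(minimum_spanning_tree(mi_distance_matrix(rets)).toarray().round(3))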

  9. LS-DYNA Implementation of Polymer Matrix Composite Model Under High Strain Rate Impact

    NASA Technical Reports Server (NTRS)

    Zheng, Xia-Hua; Goldberg, Robert K.; Binienda, Wieslaw K.; Roberts, Gary D.

    2003-01-01

    A recently developed constitutive model is implemented into LS-DYNA as a user defined material model (UMAT) to characterize the nonlinear strain rate dependent behavior of polymers. By utilizing this model within a micromechanics technique based on a laminate analogy, an algorithm to analyze the strain rate dependent, nonlinear deformation of a fiber reinforced polymer matrix composite is then developed as a UMAT to simulate the response of these composites under high strain rate impact. The models are designed for shell elements in order to ensure computational efficiency. Experimental and numerical stress-strain curves are compared for two representative polymers and a representative polymer matrix composite, with the analytical model predicting the experimental response reasonably well.

  10. Mass Customization Production Planning System by Advance Demand Information Based on Unfulfilled-order-rate II

    NASA Astrophysics Data System (ADS)

    Ueno, Nobuyuki; Kadomoto, Kiyotaka; Okuhara, Koji

    In the previous paper, we proposed a Mass Customization Production Planning & Management System (MCPS) based on the unfulfilled-order-rate, using Advance Demand Information, which is called the 'Naiji System', a unique cooperation between a maker and suppliers in Japan, and 3 indicators to estimate the unfulfilled-order-rate. Applying these indicators to the model, we investigated the behavior of the unfulfilled-order-rate at the final period of the planning horizon. In this paper, we propose a new model for purchasing, and investigate the unfulfilled-order-rate at each period and the impact on total inventory. We find that total inventories decrease by 5.9%-20.0% when using SOn rather than SOn(0). We also enhance the base-stock policy to a new multi-period version. We prove that the MCPS model for purchasing using SOn(0) is equivalent to the base-stock policy with multi-period under a specified condition. Under this condition, the proposed model using SOn decreases inventories more than the base-stock policy with multi-period.

  11. Category Rating Is Based on Prototypes and Not Instances: Evidence from Feedback-Dependent Context Effects

    ERIC Educational Resources Information Center

    Petrov, Alexander A.

    2011-01-01

    Context effects in category rating on a 7-point scale are shown to reverse direction depending on feedback. Context (skewed stimulus frequencies) was manipulated between and feedback within subjects in two experiments. The diverging predictions of prototype- and exemplar-based scaling theories were tested using two representative models: ANCHOR…

  12. A model for predicting wear rates in tooth enamel.

    PubMed

    Borrero-Lopez, Oscar; Pajares, Antonia; Constantino, Paul J; Lawn, Brian R

    2014-09-01

    It is hypothesized that wear of enamel is sensitive to the presence of sharp particulates in oral fluids and masticated foods. To this end, a generic model for predicting wear rates in brittle materials is developed, with specific application to tooth enamel. Wear is assumed to result from an accumulation of elastic-plastic micro-asperity events. Integration over all such events leads to a wear rate relation analogous to Archard's law, but with allowance for variation in asperity angle and compliance. The coefficient K in this relation quantifies the wear severity, with an arbitrary distinction between 'mild' wear (low K) and 'severe' wear (high K). Data from the literature and in-house wear-test experiments on enamel specimens in lubricant media (water, oil) with and without sharp third-body particulates (silica, diamond) are used to validate the model. Measured wear rates can vary over several orders of magnitude, depending on contact asperity conditions, accounting for the occurrence of severe enamel removal in some human patients (bruxing). Expressions for the depth removal rate and number of cycles to wear down occlusal enamel in the low-crowned tooth forms of some mammals are derived, with tooth size and enamel thickness as key variables. The role of 'hard' versus 'soft' food diets in determining evolutionary paths in different hominin species is briefly considered. A feature of the model is that it does not require recourse to specific material removal mechanisms, although processes involving microplastic extrusion and microcrack coalescence are indicated. PMID:24953823
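
    For orientation, the Archard-type relation underlying the model can be sketched as follows; the wear coefficient, load, sliding speed, and hardness are illustrative stand-ins rather than the paper's fitted enamel values.

        def archard_wear_rate(K, load, sliding_speed, hardness):
            """Archard-type wear relation: volume removed per unit time,
            V_dot = K * F * v / H, where K is the dimensionless severity
            coefficient that the model generalizes."""
            return K * load * sliding_speed / hardness

        vdot = archard_wear_rate(K=1e-4,              # severity (assumed)
                                 load=10.0,           # N
                                 sliding_speed=0.01,  # m/s
                                 hardness=3e9)        # Pa, enamel-like
        print(f"wear rate ~ {vdot:.2e} m^3/s")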

  13. Modelling airborne concentration and deposition rate of maize pollen

    NASA Astrophysics Data System (ADS)

    Jarosz, Nathalie; Loubet, Benjamin; Huber, Laurent

    2004-10-01

    The introduction of genetically modified (GM) crops has reinforced the need to quantify gene flow from crop to crop. This requires predictive tools which take into account meteorological conditions, canopy structure as well as pollen aerodynamic characteristics. A Lagrangian Stochastic (LS) model, called SMOP-2D (Stochastic Mechanistic model for Pollen dispersion and deposition in 2 Dimensions), is presented. It simulates wind dispersion of pollen by calculating individual pollen trajectories from their emission to their deposition. SMOP-2D was validated using two field experiments where airborne concentration and deposition rate of pollen were measured within and downwind from different sized maize (Zea mays) plots together with micrometeorological measurements. SMOP-2D correctly simulated the shapes of the concentration profiles but generally underestimated the deposition rates in the first 10 m downwind from the source. Potential explanations of this discrepancy are discussed. Incorrect parameterisation of turbulence in the transition from the crop to the surroundings is probably the most likely reason. This demonstrates that LS models for particle transfer need to be coupled with air-flow models under complex terrain conditions.

  14. A model of northern pintail productivity and population growth rate

    USGS Publications Warehouse

    Flint, P.L.; Grand, J.B.; Rockwell, R.F.

    1998-01-01

    Our objective was to synthesize individual components of reproductive ecology into a single estimate of productivity and to assess the relative effects of survival and productivity on population dynamics. We used information on nesting ecology, renesting potential, and duckling survival of northern pintails (Anas acuta) collected on the Yukon-Kuskokwim Delta (Y-K Delta), Alaska, 1991-95, to model the number of ducklings produced under a range of nest success and duckling survival probabilities. Using average values of 25% nest success, 11% duckling survival, and 56% renesting probability from our study population, we calculated that all young in our population were produced by 13% of the breeding females, and that early-nesting females produced more young than later-nesting females. Further, we calculated, on average, that each female produced only 0.16 young females/nesting season. We combined these results with estimates of first-year and adult survival to examine the growth rate (λ) of the population and the relative contributions of these demographic parameters to that growth rate. Contrary to aerial survey data, the population projection model suggests our study population is declining rapidly (λ = 0.6969). The relative effects on population growth rate were 0.1175 for reproductive success, 0.1175 for first-year survival, and 0.8825 for adult survival. Adult survival had the greatest influence on λ for our population, and this conclusion was robust over a range of survival and productivity estimates. Given published estimates of annual survival for adult females (61%), our model suggested nest success and duckling survival need to increase to approximately 40% to achieve population stability. We discuss reasons for the apparent discrepancy in population trends between our model and aerial surveys in terms of bias in productivity and survival estimates.
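
    The reported growth rate is consistent with a two-stage, female-only projection, lambda = adult survival + (young females per female) x first-year survival; the first-year survival below is an assumed value chosen to match the reported lambda, not a number from the study.

        def pintail_lambda(adult_survival, young_per_female, juv_survival):
            """Asymptotic growth rate of a two-stage female-based model:
            lambda = S_ad + F * S_juv."""
            return adult_survival + young_per_female * juv_survival

        # S_ad = 0.61 and F = 0.16 from the abstract; S_juv = 0.54 assumed.
        print(round(pintail_lambda(0.61, 0.16, 0.54), 4))   # ~0.6964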

  15. High strain rate modeling of ceramics and ceramic composites

    NASA Astrophysics Data System (ADS)

    Espinosa, H. D.

    1994-07-01

    The high strain rate response of an AlN/AlN/Al composite manufactured by Lanxide Armor Products, Inc., has been studied through normal and pressure-shear plate impact experiments. Differences in the measured shear resistance, as a function of the impact configuration, motivated the examination of the material response by means of a microcracking multiple-plane model and a continuum elasto-viscoplastic constitutive model. Numerical simulations of the normal impact experiments do not support microcracking as the dominant inelastic mechanism. By contrast, an elasto-viscoplastic description of the material behavior predicts the main features of the normal stress history and the thickness dependence of the Hugoniot elastic limit. Nonetheless, the elasto-viscoplastic model cannot reproduce both the normal and pressure-shear experiments with a single set of model parameters. The inadequacy of the continuum elasto-viscoplastic model seems to result from the isotropic assumption embodied in its formulation. The shear resistance measured in the pressure-shear experiments is adequately predicted by a microcracking multiple-plane model. The agreement seems to hinge on the continuous shearing of the material in a micro-localized fashion, i.e. only one orientation becomes dominant and controls the inelastic shear deformation rate. This event does not occur in the normal impact configuration, in which the amount of inelasticity is primarily controlled by the elastic compressibility of the material. These findings explain the higher sensitivity to damage and microplasticity observed in the pressure-shear configuration, as well as the softer material response recorded in this configuration.

  16. Vertical distribution, migration rates, and model comparison of actinium in a semi-arid environment.

    PubMed

    McClellan, Y; August, R A; Gosz, J R; Gann, S; Parmenter, R R; Windsor, M

    2006-01-01

    Vertical soil characterization and migration of radionuclides were investigated at four radioactively contaminated sites on Kirtland Air Force Base (KAFB), New Mexico to determine the vertical downward migration of radionuclides in a semi-arid environment. The surface soils (0-15 cm) were intentionally contaminated with Brazilian sludge (containing ²³²Th and other radionuclides) approximately 40 years ago, in order to simulate the conditions resulting from a nuclear weapons accident. Site grading consisted of manually raking or machine disking the sludge. The majority of the radioactivity was found in the top 15 cm of soil, with retention ranging from 69 to 88%. Two models, a compartment diffusion model and a leach rate model, were evaluated to determine their capabilities and limitations in predicting radionuclide behavior. The migration rates of actinium were calculated with the diffusion compartment and the leach rate models for all sites, and ranged from 0.009 to 0.1 cm/yr, increasing with depth. The migration rates calculated with the leach rate models were similar to those using the diffusion compartment model and did not increase with depth (0.045-0.076, 0.0 cm/yr). The research found that the physical and chemical properties governing transport processes of water and solutes in soil provide a valid radionuclide transport model. The evaluation also showed that the physical model has fewer limitations and may be more applicable to this environment. PMID:16243414

  17. Model based vibration monitoring

    SciTech Connect

    Esat, I.; Paya, B.; Badi, M.N.M.

    1996-11-01

    The principal source of vibratory excitation of a gear system is the unsteady component of the relative angular motion of a pair of meshing spur gears. This vibratory excitation is described by the transmission error. The transmission error presents itself as a varying force at the contact point of the meshing gear teeth. The varying force is also influenced by the varying tooth stiffness due to the change of orientation of the teeth relative to each other during the contact phase of each pair. Such a varying force produces both lateral and torsional excitation of the gear system. This paper presents an analytical formulation of a simple two-gear meshing system as a three-mass system (18 DOF). The mathematical model also incorporates an analytical formulation of the tooth stiffness. The analytical results are compared with the experimental results. At this stage of the analysis the procedure developed for handling the nonlinear influences of the tooth geometry is not fully implemented, and the tooth stiffness is taken as a constant value representing the average tooth stiffness. The comparison between the analytical and experimental results is encouraging, as the three main frequencies obtained from the FFT of the experimental results correlate very closely with the analytical results.
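
    As a companion to the lumped-parameter description above, the sketch below extracts the torsional natural frequencies of a reduced three-inertia chain from the generalized eigenproblem K v = w^2 J v; the inertias and stiffnesses are illustrative, with the tooth stiffness held at a constant average as in the paper.

        import numpy as np
        from scipy.linalg import eigh

        # Inertias (kg m^2) and shaft/mesh stiffnesses (N m/rad), assumed.
        J = np.diag([0.02, 0.05, 0.02])
        k1, k2 = 1.0e5, 1.0e5
        K = np.array([[ k1,   -k1,     0.0],
                      [-k1,  k1 + k2, -k2],
                      [ 0.0,  -k2,     k2]])

        w2, _ = eigh(K, J)                  # solves K v = w^2 J v
        freqs_hz = np.sqrt(np.clip(w2, 0.0, None)) / (2.0 * np.pi)
        print(freqs_hz.round(1))            # first mode is rigid-body (~0 Hz)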

  18. A Minimalistic Resource Allocation Model to Explain Ubiquitous Increase in Protein Expression with Growth Rate

    PubMed Central

    Keren, Leeat; Segal, Eran; Milo, Ron

    2016-01-01

    Most proteins show changes in level across growth conditions. Many of these changes seem to be coordinated with the specific growth rate rather than the growth environment or the protein function. Although cellular growth rates, gene expression levels and gene regulation have been at the center of biological research for decades, there are only a few models giving a base line prediction of the dependence of the proteome fraction occupied by a gene with the specific growth rate. We present a simple model that predicts a widely coordinated increase in the fraction of many proteins out of the proteome, proportionally with the growth rate. The model reveals how passive redistribution of resources, due to active regulation of only a few proteins, can have proteome wide effects that are quantitatively predictable. Our model provides a potential explanation for why and how such a coordinated response of a large fraction of the proteome to the specific growth rate arises under different environmental conditions. The simplicity of our model can also be useful by serving as a baseline null hypothesis in the search for active regulation. We exemplify the usage of the model by analyzing the relationship between growth rate and proteome composition for the model microorganism E.coli as reflected in recent proteomics data sets spanning various growth conditions. We find that the fraction out of the proteome of a large number of proteins, and from different cellular processes, increases proportionally with the growth rate. Notably, ribosomal proteins, which have been previously reported to increase in fraction with growth rate, are only a small part of this group of proteins. We suggest that, although the fractions of many proteins change with the growth rate, such changes may be partially driven by a global effect, not necessarily requiring specific cellular control mechanisms. PMID:27073913

  19. A Minimalistic Resource Allocation Model to Explain Ubiquitous Increase in Protein Expression with Growth Rate.

    PubMed

    Barenholz, Uri; Keren, Leeat; Segal, Eran; Milo, Ron

    2016-01-01

    Most proteins show changes in level across growth conditions. Many of these changes seem to be coordinated with the specific growth rate rather than the growth environment or the protein function. Although cellular growth rates, gene expression levels and gene regulation have been at the center of biological research for decades, there are only a few models giving a base line prediction of the dependence of the proteome fraction occupied by a gene with the specific growth rate. We present a simple model that predicts a widely coordinated increase in the fraction of many proteins out of the proteome, proportionally with the growth rate. The model reveals how passive redistribution of resources, due to active regulation of only a few proteins, can have proteome wide effects that are quantitatively predictable. Our model provides a potential explanation for why and how such a coordinated response of a large fraction of the proteome to the specific growth rate arises under different environmental conditions. The simplicity of our model can also be useful by serving as a baseline null hypothesis in the search for active regulation. We exemplify the usage of the model by analyzing the relationship between growth rate and proteome composition for the model microorganism E.coli as reflected in recent proteomics data sets spanning various growth conditions. We find that the fraction out of the proteome of a large number of proteins, and from different cellular processes, increases proportionally with the growth rate. Notably, ribosomal proteins, which have been previously reported to increase in fraction with growth rate, are only a small part of this group of proteins. We suggest that, although the fractions of many proteins change with the growth rate, such changes may be partially driven by a global effect, not necessarily requiring specific cellular control mechanisms. PMID:27073913

  20. Semiparametric Stochastic Modeling of the Rate Function in Longitudinal Studies

    PubMed Central

    Zhu, Bin; Taylor, Jeremy M.G.; Song, Peter X.-K.

    2011-01-01

    In longitudinal biomedical studies, there is often interest in the rate functions, which describe the functional rates of change of biomarker profiles. This paper proposes a semiparametric approach to model these functions as the realizations of stochastic processes defined by stochastic differential equations. These processes are dependent on the covariates of interest and vary around a specified parametric function. An efficient Markov chain Monte Carlo algorithm is developed for inference. The proposed method is compared with several existing methods in terms of goodness-of-fit and, more importantly, the ability to forecast future functional data in a simulation study. The proposed methodology is applied to prostate-specific antigen profiles for illustration. Supplementary materials for this paper are available online. PMID:22423170
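
    As a rough illustration of a rate function defined by a stochastic differential equation, the sketch below simulates an Ornstein-Uhlenbeck-type rate that varies around a hypothetical parametric mean; it does not reproduce the paper's model, covariate structure, or MCMC inference, and all parameter values are assumptions.

        # Illustrative sketch only: a biomarker whose rate of change follows an
        # OU-type SDE around a parametric mean rate mu(t).
        import numpy as np

        rng = np.random.default_rng(0)
        T, n = 10.0, 1000
        dt = T / n
        t = np.linspace(0.0, T, n + 1)
        mu = 0.5 + 0.1 * t                 # hypothetical parametric mean rate
        theta, sigma = 1.0, 0.2            # mean-reversion and diffusion parameters

        rate = np.empty(n + 1)
        rate[0] = mu[0]
        for i in range(n):
            # Euler-Maruyama step: d(rate) = theta*(mu - rate)*dt + sigma*dW
            dw = np.sqrt(dt) * rng.standard_normal()
            rate[i + 1] = rate[i] + theta * (mu[i] - rate[i]) * dt + sigma * dw

        biomarker = np.concatenate(([0.0], np.cumsum(rate[:-1]) * dt))  # integrated rate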

  1. A method for calculating strain energy release rate based on beam theory

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Pandey, R. K.

    1993-01-01

    The Timoshenko beam theory was used to model cracked beams and to calculate the total strain energy release rate. The root rotations of the beam segments at the crack tip were estimated based on an approximate 2D elasticity solution. By including the strain energy released due to the root rotations of the beams during crack extension, the strain energy release rate obtained using beam theory agrees very well with the 2D finite element solution. Numerical examples were given for various beam geometries and loading conditions. Comparisons with existing beam models were also given.
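
    For orientation, beam-theory estimates of this kind build on the standard compliance (Irwin-Kies) relation for the total strain energy release rate, under the usual linear-elastic assumptions; the paper's root-rotation correction enters through the compliance C:

        G = \frac{P^{2}}{2b}\,\frac{\mathrm{d}C}{\mathrm{d}a}

    where P is the applied load, b the specimen width, a the crack length, and C the load-point compliance.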

  2. Total dose and dose rate models for bipolar transistors in circuit simulation.

    SciTech Connect

    Campbell, Phillip Montgomery; Wix, Steven D.

    2013-05-01

    The objective of this work is to develop a model for total dose effects in bipolar junction transistors for use in circuit simulation. The components of the model are an electrical model of device performance that includes the effects of trapped charge on device behavior, and a model that calculates the trapped charge densities in a specific device structure as a function of radiation dose and dose rate. Simulations based on this model are found to agree well with measurements on a number of devices for which data are available.

  3. Modelling of percolation rate of stormwater from underground infiltration systems.

    PubMed

    Burszta-Adamiak, Ewa; Lomotowski, Janusz

    2013-01-01

    Underground or surface stormwater storage tank systems that enable the infiltration of water into the ground are basic elements used in Sustainable Urban Drainage Systems (SUDS). So far, the design methods for such facilities have not taken into account the phenomenon of ground clogging during stormwater infiltration. Top layer sealing of the filter bed influences the infiltration rate of water into the ground. This study presents an original mathematical model describing the variability of the infiltration rate during the filling and emptying phases of storage and infiltration tank systems, which enables the determination of the degree of top ground layer clogging. The input data for modelling were obtained from studies conducted on experimental sites on objects constructed on a semi-technological scale. The experiment conducted has proven that the application of the model developed for the phase of water infiltration enables us to estimate the degree of module clogging. However, this method is more suitable for reservoirs embedded in more permeable soils than for those located in cohesive soils. PMID:24292460

  4. Predicting aeolian sand transport rates: A reevaluation of models

    NASA Astrophysics Data System (ADS)

    Sherman, Douglas J.; Li, Bailiang

    2012-01-01

    Eight aeolian sand-transport models are evaluated using a field data set and a new approach to estimating shear velocity. The models are those of Bagnold (1937), Kawamura (1951), Zingg (1953), Owen (1964), Kadib (1965), Hsu (1971), Lettau and Lettau (1978) and Sørensen (2004). All of these models predict transport as a function of shear velocity to the third power. Shear velocities are estimated using wind profile data (log-linear slope) with the von Kármán constant and with the apparent von Kármán parameter, and the results of the different approaches are evaluated based on comparison of regression statistics and RMS error. The models were not adjusted to account for sediment moisture content or local surface slope effects. All of the models have about the same statistical explanatory power, so evaluations were made by comparing slopes and intercepts of best-fit (least-squares) lines and RMSE. On this basis, we conclude that predictions made with the Bagnold (1937) model best match our observations, with the models of Kadib (1965) and Hsu (1971) performing nearly as well. The predictions of the Lettau and Lettau (1978) and Kawamura (1951) models match observations least.
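
    For concreteness, the sketch below implements the Bagnold (1937)-type cubic law that these models share, q = C * sqrt(d/D) * (rho/g) * u_*^3. The constants are the commonly quoted textbook values, not parameters fitted in this study.

        import math

        def bagnold_transport(u_star, d=0.00025, D=0.00025, C=1.8,
                              rho=1.22, g=9.81):
            """Mass transport rate q (kg m^-1 s^-1) for shear velocity u_star (m/s).

            d: grain diameter (m); D: reference diameter (0.25 mm);
            C: empirical constant (~1.8 for naturally graded sand);
            rho: air density (kg m^-3); g: gravity (m s^-2).
            """
            return C * math.sqrt(d / D) * (rho / g) * u_star ** 3

        print(bagnold_transport(0.4))  # e.g. u_* = 0.4 m/s gives q ~ 0.014 kg/m/s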

  5. Effects of Population Based Screening for Chlamydia Infections in The Netherlands Limited by Declining Participation Rates

    PubMed Central

    Schmid, Boris V.; Over, Eelco A. B.; van den Broek, Ingrid V. F.; Op de Coul, Eline L. M.; van Bergen, Jan E. A. M.; Fennema, Johan S. A.; Götz, Hannelore M.; Hoebe, Christian J. P. A.; de Wit, G. Ardine; van der Sande, Marianne A. B.; Kretzschmar, Mirjam E. E.

    2013-01-01

    Background A large trial to investigate the effectiveness of population based screening for chlamydia infections was conducted in the Netherlands in 2008–2012. The trial was register based and consisted of four rounds of screening of women and men in the age groups 16–29 years in three regions in the Netherlands. Data were collected on participation rates and positivity rates per round. A modeling study was conducted to project screening effects for various screening strategies into the future. Methods and Findings We used a stochastic network simulation model incorporating partnership formation and dissolution, aging and a sexual life course perspective. Trends in baseline rates of chlamydia testing and treatment were used to describe the epidemiological situation before the start of the screening program. Data on participation rates was used to describe screening uptake in rural and urban areas. Simulations were used to project the effectiveness of screening on chlamydia prevalence for a time period of 10 years. In addition, we tested alternative screening strategies, such as including only women, targeting different age groups, and biennial screening. Screening reduced prevalence by about 1% in the first two screening rounds and leveled off after that. Extrapolating observed participation rates into the future indicated very low participation in the long run. Alternative strategies only marginally changed the effectiveness of screening. Higher participation rates as originally foreseen in the program would have succeeded in reducing chlamydia prevalence to very low levels in the long run. Conclusions Decreasing participation rates over time profoundly impact the effectiveness of population based screening for chlamydia infections. Using data from several consecutive rounds of screening in a simulation model enabled us to assess the future effectiveness of screening on prevalence. If participation rates cannot be kept at a sufficient level, the effectiveness

  6. Metabolically Derived human ventilation rates: A revised approach based upon oxygen consumption rates (Final Report) 2009

    EPA Science Inventory

    The purpose of this report is to provide a revised approach for calculating an individual's ventilation rate directly from their oxygen consumption rate. This revised approach will be used to update the ventilation rate information in the Exposure Factors Handbook, which serve as...

  7. Ultra High-Rate Germanium (UHRGe) Modeling Status Report

    SciTech Connect

    Warren, Glen A.; Rodriguez, Douglas C.

    2012-06-07

    The Ultra-High Rate Germanium (UHRGe) project at Pacific Northwest National Laboratory (PNNL) is conducting research to develop a high-purity germanium (HPGe) detector that can provide both the high resolution typical of germanium and high signal throughput. Such detectors may be beneficial for a variety of potential applications ranging from safeguards measurements of used fuel to material detection and verification using active interrogation techniques. This report describes some of the initial radiation transport modeling efforts that have been conducted to help guide the design of the detector as well as a description of the process used to generate the source spectrum for the used fuel application evaluation.

  8. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  9. Model-Based Safety Analysis

    NASA Technical Reports Server (NTRS)

    Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.

    2006-01-01

    System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

  10. Developing models for the prediction of hospital healthcare waste generation rate.

    PubMed

    Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe

    2016-01-01

    An increase in the number of health institutions, along with frequent use of disposable medical products, has contributed to the increase of healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generation beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, there is no mathematical model developed for Ethiopian hospitals to predict healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of a healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition, generation rate and develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R² = 0.965), and a weak one with the number of outpatients (R² = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching and private). In these models, the number of inpatients and outpatients were revealed to be significant factors on the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies at different hospitals. Therefore, different models were developed based on the types of hospitals. PMID:26437681
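
    A minimal sketch of the kind of regression such predictive models involve is shown below, with placeholder counts rather than the study's data.

        # Illustrative multiple linear regression of daily waste generation on
        # inpatient and outpatient counts; all numbers are hypothetical.
        import numpy as np

        inpatients  = np.array([120, 150, 180, 210, 260])
        outpatients = np.array([300, 320, 280, 400, 390])
        waste_kg    = np.array([95, 118, 140, 168, 205])

        X = np.column_stack([np.ones_like(inpatients), inpatients, outpatients])
        (b0, b1, b2), *_ = np.linalg.lstsq(X, waste_kg, rcond=None)
        print(f"waste ~ {b0:.1f} + {b1:.3f}*inpatients + {b2:.3f}*outpatients")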

  11. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).

  12. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

  13. Optimization of surface acoustic wave-based rate sensors.

    PubMed

    Xu, Fangqian; Wang, Wen; Shao, Xiuting; Liu, Xinlu; Liang, Yong

    2015-01-01

    The optimization of a surface acoustic wave (SAW)-based rate sensor incorporating metallic dot arrays was performed by using the approach of partial-wave analysis in layered media. The optimal sensor chip designs, including the material choice of piezoelectric crystals and metallic dots, dot thickness, and sensor operation frequency, were determined theoretically. The theoretical predictions were confirmed experimentally by using the developed SAW sensor composed of differential delay line-oscillators and a metallic dot array deposited along the acoustic wave propagation path of the SAW delay lines. A significant improvement in sensor sensitivity was achieved when 128° YX LiNbO₃, a thicker Au dot array, and a low operation frequency were used to structure the sensor. PMID:26473865

  14. Optimization of Surface Acoustic Wave-Based Rate Sensors

    PubMed Central

    Xu, Fangqian; Wang, Wen; Shao, Xiuting; Liu, Xinlu; Liang, Yong

    2015-01-01

    The optimization of a surface acoustic wave (SAW)-based rate sensor incorporating metallic dot arrays was performed by using the approach of partial-wave analysis in layered media. The optimal sensor chip designs, including the material choice of piezoelectric crystals and metallic dots, dot thickness, and sensor operation frequency, were determined theoretically. The theoretical predictions were confirmed experimentally by using the developed SAW sensor composed of differential delay line-oscillators and a metallic dot array deposited along the acoustic wave propagation path of the SAW delay lines. A significant improvement in sensor sensitivity was achieved when 128° YX LiNbO₃, a thicker Au dot array, and a low operation frequency were used to structure the sensor. PMID:26473865

  15. Constraint Based Modeling Going Multicellular

    PubMed Central

    Martins Conde, Patricia do Rosario; Sauter, Thomas; Pfau, Thomas

    2016-01-01

    Constraint based modeling has seen applications in many microorganisms. For example, there are now established methods to determine potential genetic modifications and external interventions to increase the efficiency of microbial strains in chemical production pipelines. In addition, multiple models of multicellular organisms have been created including plants and humans. While initially the focus here was on modeling individual cell types of the multicellular organism, this focus recently started to switch. Models of microbial communities, as well as multi-tissue models of higher organisms have been constructed. These models thereby can include different parts of a plant, like root, stem, or different tissue types in the same organ. Such models can elucidate details of the interplay between symbiotic organisms, as well as the concerted efforts of multiple tissues and can be applied to analyse the effects of drugs or mutations on a more systemic level. In this review we give an overview of the recent development of multi-tissue models using constraint based techniques and the methods employed when investigating these models. We further highlight advances in combining constraint based models with dynamic and regulatory information and give an overview of these types of hybrid or multi-level approaches. PMID:26904548

  16. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  17. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    EPA Science Inventory

    A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  18. Rain cell size statistics as a function of rain rate for attenuation modeling

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1983-01-01

    Rain cell size statistics as a function of rain rate have been deduced by employing a radar data base of rain reflectivity data acquired over a three-year period at Wallops Island, VA. These cell statistics have important applications in slant path rain attenuation modeling and remote sensing of the earth's surface from space at frequencies above 10 GHz.

  19. Regression rate behaviors of HTPB-based propellant combinations for hybrid rocket motor

    NASA Astrophysics Data System (ADS)

    Sun, Xingliang; Tian, Hui; Li, Yuelong; Yu, Nanjia; Cai, Guobiao

    2016-02-01

    The purpose of this paper is to characterize the regression rate behavior of hybrid rocket motor propellant combinations, using hydrogen peroxide (HP), gaseous oxygen (GOX), or nitrous oxide (N2O) as the oxidizer and hydroxyl-terminated poly-butadiene (HTPB) as the base fuel. In order to complete this research by experiment and simulation, a hybrid rocket motor test system and a numerical simulation model are established. A series of hybrid rocket motor firing tests are conducted burning different propellant combinations, and several of those are used as references for numerical simulations. The numerical simulation model is developed by combining the Navier-Stokes equations with the turbulence model, one-step global reaction model, and solid-gas coupling model. The distribution of regression rate along the axis is determined by applying the simulation model to predict the combustion process and heat transfer inside the hybrid rocket motor. The time-space averaged regression rate shows good agreement between the numerical values and experimental data. The results indicate that the N2O/HTPB and GOX/HTPB propellant combinations have a higher regression rate, with the enhancement effect of the latter being significant due to its higher flame temperature. Furthermore, the inclusion of aluminum (Al) and/or ammonium perchlorate (AP) in the grain does enhance the regression rate, mainly due to more energy being released inside the chamber and heat feedback to the grain surface from the aluminum combustion.

  20. Relationship between soil erodibility and modeled infiltration rate in different soils

    NASA Astrophysics Data System (ADS)

    Wang, Guoqiang; Fang, Qingqing; Wu, Binbin; Yang, Huicai; Xu, Zongxue

    2015-09-01

    The relationship between soil erodibility, which is hard to measure, and modeled infiltration rate has rarely been researched. Here, the soil erodibility factors (K and Ke in the USLE, Ki and K1 in the WEPP) were calculated and the infiltration rates were modeled based on the designed laboratory simulation experiments and proposed infiltration model, in order to establish their relationship. The impacts of compost amendment on the soil erosion characteristics and relationship were also studied. Two contrasting agricultural soils (bare and cultivated fluvo-aquic soils) were used, and different poultry compost contents (control, low and high) were applied to both soils. The results indicated that the runoff rate, sediment yield rate and soil erodibility of the bare soil treatments were generally higher than those of the corresponding cultivated soil treatments. The application of composts generally decreased sediment yield and soil erodibility but did not always decrease runoff. The comparison of measured and modeled infiltration rates indicated that the model represented the infiltration processes well, with a Nash-Sutcliffe coefficient of 0.84 for overall treatments. Significant negative logarithmic correlations have been found between final infiltration rate (FIR) and the four soil erodibility factors, and the relationship between USLE-K and FIR demonstrated the best correlation. The application of poultry composts would not influence the logarithmic relationship between FIR and soil erodibility. Our study provided a useful tool to estimate soil erodibility.

  1. Measurement and Modeling of Respiration Rate of Tomato (Cultivar Roma) for Modified Atmosphere Storage.

    PubMed

    Kandasamy, Palani; Moitra, Ranabir; Mukherjee, Souti

    2015-01-01

    Experiments were conducted to determine the respiration rate of tomato at 10, 20 and 30 °C using a closed respiration system. Oxygen depletion and carbon dioxide accumulation in the system containing tomato were monitored. The respiration rate was found to decrease with increasing CO2 and decreasing O2 concentration. A Michaelis-Menten type model based on enzyme kinetics was evaluated using experimental data generated for predicting the respiration rate. The model parameters obtained from the respiration rate at different O2 and CO2 concentration levels were used to fit the model against the storage temperatures. The fitting was fair (R² = 0.923 to 0.970) when the respiration rate was expressed as a function of O2 concentration. Since the inhibition constant for CO2 concentration tended towards negative values, the model was modified as a function of O2 concentration only. The modified model was fitted to the experimental data and showed good agreement (R² = 0.998) with the experimentally estimated respiration rate. PMID:26078087
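
    A minimal sketch of the modified Michaelis-Menten form described above (rate as a function of O2 concentration only) follows; vm and km are hypothetical values, not the fitted parameters from the study.

        def respiration_rate(o2_pct, vm=12.0, km=5.0):
            """Respiration rate (e.g. mL CO2 kg^-1 h^-1) at O2 concentration (%).

            vm: maximum rate; km: O2 concentration at half-maximal rate.
            """
            return vm * o2_pct / (km + o2_pct)

        # Rate falls as O2 is depleted, as in modified-atmosphere storage
        for o2 in (21.0, 10.0, 5.0, 2.0):
            print(o2, round(respiration_rate(o2), 2))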

  2. Decay Rate for Travelling Waves of a Relaxation Model

    NASA Astrophysics Data System (ADS)

    Liu, Hailiang; Woo, Ching Wah; Yang, Tong

    1997-03-01

    A relaxation model was proposed in [Shi Jin and Zhouping Xin, Comm. Pure Appl. Math. 48 (1995), 555-563] to approximate the hyperbolic systems numerically under the subcharacteristic condition introduced in [T. P. Liu, Comm. Math. Phys. 108 (1987), 153-175]. The stability of travelling waves with strong shock profile and integral zero was proved in [H. L. Liu, J. H. Wang, and T. Yang, Stability in a relaxation model with nonconvex flux, preprint, 1996; H. L. Liu and J. Wang, Asymptotic stability of travelling wave solutions of a hyperbolic system with relaxation terms, preprint, 1995] when the original system is scalar. In this paper, we study the asymptotic convergence rate of these travelling wave solutions. The analysis applies to the case of a nonconvex flux and when the shock speed coincides with the characteristic speed of the state at infinity. The decay rate is obtained by applying the energy method and is shown to be the same as the one for the viscous conservation law [A. Matsumura and K. Nishihara, Comm. Math. Phys. 165 (1994), 83-96].

  3. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt market changes and noise. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve the assessment of portfolio market risk. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  4. Theoretical Model of the Relationship between Single Embryo Transfer Rate and Multiple Pregnancy Rate in Japan

    PubMed Central

    Ooki, Syuichi

    2012-01-01

    The purpose of the present study was to examine the effect of single embryo transfer (SET) in assisted reproductive technology (ART) on the reduction of the multiple pregnancy rate. We also indirectly estimated the monozygotic (MZ) twinning rates according to the diffusion of SET. A reverse sigmoid curve was assumed and examined using nationwide data on SET from 2007 to 2009 in Japan. In the regression approximation, the multiple pregnancy rate decreased almost linearly where the SET pregnancy rate was between about 40% and 80%. The linear approximation overestimated multiple pregnancy rates in the early period and underestimated multiple pregnancy rates in the final period. The multiple pregnancy rate seemed to be influenced by the improvement of the total pregnancy rate of ART in the early period and by MZ twinning after SET in the final period. The estimated MZ twinning rate after SET was around 2%. PMID:22900185
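
    One way to write down a reverse sigmoid of the assumed kind is sketched below; the parameters (upper plateau, floor near the roughly 2% MZ twinning rate, midpoint, slope) are illustrative choices, not the values fitted to the Japanese data.

        import numpy as np

        def multiple_pregnancy_rate(set_rate, top=20.0, floor=2.0, x0=60.0, k=0.08):
            """Declining logistic: high plateau at low SET uptake (%), floor set
            near the MZ twinning rate that persists even under universal SET."""
            return floor + (top - floor) / (1.0 + np.exp(k * (set_rate - x0)))

        for x in (0.0, 40.0, 60.0, 80.0, 100.0):
            print(x, round(float(multiple_pregnancy_rate(x)), 2))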

  5. Improved model for the angular dependence of excimer laser ablation rates in polymer materials

    SciTech Connect

    Pedder, J. E. A.; Holmes, A. S.; Dyer, P. E.

    2009-10-26

    Measurements of the angle-dependent ablation rates of polymers that have applications in microdevice fabrication are reported. A simple model based on Beer's law, including plume absorption, is shown to give good agreement with the experimental findings for polycarbonate and SU8, ablated using the 193 and 248 nm excimer lasers, respectively. The modeling forms a useful tool for designing masks needed to fabricate complex surface relief by ablation.
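
    The sketch below gives a Beer's-law etch-depth estimate with a cosine fluence correction for incidence angle; the plume-absorption effect is represented by a simple exponential factor with a hypothetical coefficient, which only approximates the paper's model, and all parameter values are assumptions.

        import math

        def etch_depth_per_pulse(fluence, angle_deg, alpha=2.0e4, f_th=0.05,
                                 plume_mu=0.0):
            """Etch depth per pulse (cm) for incident fluence (J cm^-2).

            alpha: effective absorption coefficient (cm^-1); f_th: threshold
            fluence (J cm^-2); plume_mu: optional plume attenuation exponent.
            """
            f_eff = fluence * math.cos(math.radians(angle_deg)) * math.exp(-plume_mu)
            if f_eff <= f_th:
                return 0.0
            return (1.0 / alpha) * math.log(f_eff / f_th)

        print(etch_depth_per_pulse(0.5, 30.0))  # ~1e-4 cm, i.e. ~1 micron per pulse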

  6. Low-dose-rate extrapolation using the multistage model

    SciTech Connect

    Portier, C.; Hoel, D.

    1983-12-01

    The distribution of the maximum likelihood estimates of virtually safe levels of exposure to environmental chemicals is derived by using large-sample theory and Monte Carlo simulation according to the Armitage-Doll multistage model. Using historical dose-response data, we develop a set of 33 two-stage models upon which we base our conclusions. The large-sample distributions of the virtually safe dose are normal for cases in which the multistage-model parameters have nonzero expectation, and are skewed in other cases. The large-sample theory does not provide a good approximation of the distribution observed for small bioassays when Monte Carlo simulation is used. The constrained nature of the multistage-model parameters leads to bimodal distributions for small bioassays. The two modes are the direct result of estimating the linear parameter in the multistage model; the lower mode results from estimating this parameter to be nonzero, and the upper mode from estimating it to be zero. The results of this research emphasize the need for incorporation of the biological theory in the model-selection process.
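
    For concreteness, here is a sketch of a two-stage multistage dose-response, P(d) = 1 - exp(-(q0 + q1*d + q2*d^2)), with a numerical solve for a virtually safe dose at an extra risk of 1e-6; the q parameters are hypothetical, not estimates from the bioassays discussed.

        import math

        def p_tumor(dose, q0=1e-3, q1=2e-2, q2=5e-3):
            """Two-stage multistage model: P(d) = 1 - exp(-(q0 + q1*d + q2*d^2))."""
            return 1.0 - math.exp(-(q0 + q1 * dose + q2 * dose ** 2))

        def extra_risk(dose):
            p0 = p_tumor(0.0)
            return (p_tumor(dose) - p0) / (1.0 - p0)

        # Bisection for the virtually safe dose (extra risk = 1e-6);
        # extra_risk is monotone increasing in dose, so this converges.
        lo, hi = 0.0, 1.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if extra_risk(mid) < 1e-6 else (lo, mid)
        print(f"virtually safe dose ~ {hi:.3e}")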

  7. Incorporating a Time Horizon in Rate-of-Return Estimations: Discounted Cash Flow Model in Electric Transmission Rate Cases

    SciTech Connect

    Chatterjee, Bishu; Sharp, Peter A.

    2006-07-15

    Electric transmission and other rate cases use a form of the discounted cash flow model with a single long-term growth rate to estimate rates of return on equity. It cannot incorporate information about the appropriate time horizon for which analysts' estimates of earnings growth have predictive powers. Only a non-constant growth model can explicitly recognize the importance of the time horizon in an ROE calculation. (author)

  8. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    PubMed

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-01

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of hot-spot ignition, low-pressure slow burning and high-pressure fast reaction terms for shock initiation of multi-component Plastic Bonded Explosives (PBX). Specifically, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term as well as its reaction rate is obtained through a "mixing rule" of the explosive components; new expressions for both the low-pressure slow burning term and the high-pressure fast reaction term are also obtained by establishing the relationships between the reaction rate of the multi-component PBX explosive and that of its explosive components, based on the low-pressure slow burning term and the high-pressure fast reaction term of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and the PBXC10 multi-component PBX explosives, and the numerical results of the pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data. PMID:27258213

  9. Modeling temperature and strain rate history effects in OFHC Cu

    NASA Astrophysics Data System (ADS)

    Tanner, Albert Buck

    Accurate material behavior prediction during large deformations is essential. For the U.S. Army, explosively formed projectiles (EFP), penetrators, and vehicle armor are applications which will benefit from a better understanding of and ability to predict material behavior when subjected to high and varying strain rates and temperatures. Linking macro-scale material behavior with the evolution of microstructure has proven effective in obtaining an appropriate mathematical structure for constitutive relationships. Incorporation of strain rate, temperature, and deformation path history effects is especially critical to accurately predict material responses for arbitrary nonisothermal, variable strain rate conditions. Material constitutive equations contain numerous parameters which must be determined experimentally, and often are not fully optimized. The goal of this research was to develop more physically descriptive kinematics and kinetics models for large strain deformation based on internal state variable (ISV) evolution laws which include strain rate and temperature history dependence. A unique and comprehensive set of experiments involving sequences of different strain rates, temperatures, and deformation paths, as well as constant strain rate, isothermal experiments and experiments characterizing restoration processes, was conducted on OFHC Cu. Microstructural examinations found that recrystallization occurs and has a significant influence on the flow stress. The performance of various models, including state-of-the-art models such as the BCJ (Sandia), MTS (Los Alamos), and McDowell models, was correlated and compared to experimental data. A novel hybrid optimization strategy was used to obtain the optimum parameter set possible corresponding to each model form. To account for the observed flow stress softening, an internal state variable representing the "softened" recrystallized state was incorporated into the hardening evolution equations in the BCJ and Mc

  10. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  11. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  12. Non-contact Laser-based Human Respiration Rate Measurement

    NASA Astrophysics Data System (ADS)

    Scalise, L.; Marchionni, P.; Ercoli, I.

    2011-08-01

    At present, the majority of the instrumentation used in clinical environments to measure human respiration rate is based on invasive, contact-based devices. The spirometer, which requires direct contact and the patient's cooperation, is considered the gold-standard instrument and is widely used. The Laser Doppler Vibrometer (LDVi) is an optical, non-contact measurement system for the assessment of a surface's velocity and displacement. LDVi has already been used for the measurement of cardiac activity and of chest-wall displacements. The aims of this work are to select the best measurement point on the thoracic surface for LDVi monitoring of the respiration rate (RR) and to compare measured data with the RR values provided by the spirometer. The measurement system is composed of an LDV system and a data acquisition board installed on a PC. Tests were made on 10 different points of the thorax for each patient. The patient population was composed of 33 subjects (17 male and 16 female). The optimal measurement point was chosen considering the maximum peak-to-peak value of the displacement measured by LDV. Before extracting RR we used a special wavelet decomposition for better selection of the expiration peaks. A standard spirometer was used for validation of the data. Tests showed that the optimal measurement point is located on the inferior part of the thoracic region (left, front side). We obtained a close correlation between the RR values measured by the spirometer and those measured by the proposed method: a difference of 14±211 ms on the RR value is reported for the entire population of 33 subjects. Our method allows non-contact measurement of lung activity (respiration period), reducing electrical and biological risks. Moreover, it allows measurement in critical environments, such as during MRI or on burned skin, where it is difficult or impossible to apply electrodes.

  13. A comparison between computer-controlled and set work rate exercise based on target heart rate

    NASA Technical Reports Server (NTRS)

    Pratt, Wanda M.; Siconolfi, Steven F.; Webster, Laurie; Hayes, Judith C.; Mazzocca, Augustus D.; Harris, Bernard A., Jr.

    1991-01-01

    Two methods are compared for observing the heart rate (HR), metabolic equivalents, and time in target HR zone (defined as the target HR ± 5 bpm) during 20 min of exercise at a prescribed intensity of the maximum working capacity. In one method, called set-work rate exercise, the information from a graded exercise test is used to select a target HR and to calculate a corresponding constant work rate that should induce the desired HR. In the other method, the work rate is controlled by a computer algorithm to achieve and maintain a prescribed target HR. It is shown that computer-controlled exercise is an effective alternative to the traditional set work rate exercise, particularly when tight control of cardiovascular responses is necessary.
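
    A minimal sketch of the closed-loop idea, adjusting work rate in proportion to the heart-rate error, is shown below. The gain, limits, and toy first-order HR response are illustrative assumptions, not the study's algorithm.

        def update_work_rate(work_rate, hr_measured, hr_target,
                             gain=0.5, wr_min=25.0, wr_max=250.0):
            """Next work rate (W) from the current heart-rate error (bpm)."""
            work_rate += gain * (hr_target - hr_measured)
            return min(max(work_rate, wr_min), wr_max)

        wr, hr = 100.0, 120.0                     # initial work rate and HR (toy values)
        for _ in range(60):
            hr += 0.05 * (60.0 + 0.6 * wr - hr)   # toy first-order HR response
            wr = update_work_rate(wr, hr, hr_target=140.0)
        print(f"final work rate {wr:.0f} W, heart rate {hr:.0f} bpm")  # converges gradually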

  14. Evaporation rate of emulsion and oil-base emulsion pheromones

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Knowledge of pheromone evaporation rate is critical to distribute pheromone containers effectively in the forest, orchard and field. There are several factors influencing the pheromone evaporation rate that include wind speed, container size and porosity, release area, temperature, humidity, pherom...

  15. A simulation model for the determination of tabarru' rate in a family takaful

    NASA Astrophysics Data System (ADS)

    Ismail, Hamizun bin

    2014-06-01

    The concept of tabarru' that is incorporated in family takaful serves to eliminate the element of uncertainty in the contract, as a participant agrees to relinquish as donation a certain portion of his contribution. The most important feature of family takaful is that it does not guarantee a definite return on a participant's contribution, unlike its conventional counterpart, where a premium is paid in return for a guaranteed amount of insurance benefit. In other words, the investment return on the funds contributed by the participants is based on actual investment experience. The objective of this study is to set up a framework for the determination of the tabarru' rate by simulation. The model is based on a binomial death process. Specifically, a linear tabarru' rate and a flat tabarru' rate are introduced. The results of the simulation trials show that the linear assumption on the tabarru' rate has an advantage over the flat counterpart as far as the risk of the investment accumulation at maturity is concerned.
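
    An illustrative Monte-Carlo sketch of a flat tabarru' rate under a binomial death process follows; all parameters (pool size, mortality rate q, benefit, contribution, tabarru' rate) are hypothetical, and the simulation only conveys the flavor of the framework.

        import numpy as np

        rng = np.random.default_rng(1)

        def shortfall_probability(n=1000, years=20, q=0.005, benefit=10000.0,
                                  contribution=500.0, tabarru_rate=0.12,
                                  trials=2000):
            """Fraction of trials in which donations fail to cover death benefits."""
            shortfalls = 0
            for _ in range(trials):
                alive, fund = n, 0.0
                for _ in range(years):
                    fund += alive * contribution * tabarru_rate  # donation inflow
                    deaths = rng.binomial(alive, q)              # binomial death process
                    fund -= deaths * benefit
                    alive -= deaths
                if fund < 0:
                    shortfalls += 1
            return shortfalls / trials

        print(shortfall_probability())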

  16. A multiscale strength model for tantalum over an extended range of strain rates

    NASA Astrophysics Data System (ADS)

    Barton, N. R.; Rhee, M.

    2013-09-01

    A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].

  17. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    PubMed

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection Metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891

  18. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates

    PubMed Central

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H. Irene

    2016-01-01

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection Metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891

  19. Models explaining motor vehicle death rates in the United States.

    PubMed

    Zlatoper, T J

    1989-04-01

    This paper is a selective survey of models explaining motor vehicle death rates in the United States. First, it reviews Peltzman's 1975 study of the effect of automobile safety regulation and critiques of the study. Then it summarizes several subsequent statistical studies of highway fatalities. The surveyed studies are typically regression analyses of the impact of various factors on motor vehicle deaths. They are categorized in this paper according to which of three types of data they utilized: time-series; cross-sectional; or pooled time-series, cross-sectional. This paper notes what can be inferred collectively from the surveyed studies regarding the impacts of various factors on highway fatalities. It also discusses certain shortcomings of the studies in general along with possible remedies, and makes recommendations regarding future research. Tabular summaries of the statistical studies surveyed in this paper are included in the Appendix. PMID:2785390

  20. Thermodynamic Development of Corrosion Rate Modeling in Iron Phosphate Glasses

    SciTech Connect

    Schlesinger, Mark; Brow, Richard

    2011-10-31

    A two-year research program investigated links between the thermodynamic properties of phosphate glasses and their corrosion rates in different solutions. Glasses in the Na2O-CaO-P2O5 and Na2O-Fe2O3-P2O5 systems were prepared and characterized. These glasses were then exposed in bulk and powder form to acidic (0.1M HCl), basic (0.1M KOH) and neutral (deionized water) solutions at varying exposure times and temperatures. Analysis of the solution and the glass after exposure determined the rate and type of corrosion that occurred. Simultaneously, efforts were made to determine the thermodynamic properties of solid iron phosphate compounds. This included measurement of low-temperature (5-300 K) heat capacities at Brigham Young University; the attempted use of a Parr calorimeter to measure ambient-temperature enthalpies of formation; and the attempted measurement of high-temperature heat capacities. Only the first of the three tasks was successfully accomplished. In lieu of experimental measurement of enthalpies of formation, first-principles calculation of enthalpies of formation was performed at Missouri S&T; these results will be used in subsequent modeling efforts.

  1. Sketch-based geologic modeling

    NASA Astrophysics Data System (ADS)

    Rood, M. P.; Jackson, M.; Hampson, G.; Brazil, E. V.; de Carvalho, F.; Coda, C.; Sousa, M. C.; Zhang, Z.; Geiger, S.

    2015-12-01

    Two-dimensional (2D) maps and cross-sections, and 3D conceptual models, are fundamental tools for understanding, communicating and modeling geology. Yet geologists lack dedicated and intuitive tools that allow rapid creation of such figures and models. Standard drawing packages produce only 2D figures that are not suitable for quantitative analysis. Geologic modeling packages can produce 3D models and are widely used in the groundwater and petroleum communities, but are often slow and non-intuitive to use, requiring the creation of a grid early in the modeling workflow and the use of geostatistical methods to populate the grid blocks with geologic information. We present an alternative approach to rapidly create figures and models using sketch-based interface and modelling (SBIM). We leverage methods widely adopted in other industries to prototype complex geometries and designs. The SBIM tool contains built-in geologic rules that constrain how sketched lines and surfaces interact. These rules are based on the logic of superposition and cross-cutting relationships that follow from rock-forming processes, including deposition, deformation, intrusion and modification by diagenesis or metamorphism. The approach allows rapid creation of multiple, geologically realistic, figures and models in 2D and 3D using a simple, intuitive interface. The user can sketch in plan- or cross-section view. Geologic rules are used to extrapolate sketched lines in real time to create 3D surfaces. Quantitative analysis can be carried out directly on the models. Alternatively, they can be output as simple figures or imported directly into other modeling tools. The software runs on a tablet PC and can be used in a variety of settings including the office, classroom and field. The speed and ease of use of SBIM enables multiple interpretations to be developed from limited data, uncertainty to be readily appraised, and figures and models to be rapidly updated to incorporate new data or concepts.

  2. Multi-Target Tracking Based on Multi-Bernoulli Filter with Amplitude for Unknown Clutter Rate

    PubMed Central

    Yuan, Changshun; Wang, Jun; Lei, Peng; Bi, Yanxian; Sun, Zhongsheng

    2015-01-01

    Knowledge of the clutter rate is of critical importance in multi-target Bayesian tracking. However, estimating the clutter rate is a difficult problem in practice. In this paper, an improved multi-Bernoulli filter based on random finite sets for multi-target Bayesian tracking accommodating non-linear dynamic and measurement models, as well as unknown clutter rate, is proposed for radar sensors. The proposed filter incorporates the amplitude information into the state and measurement spaces to improve discrimination between actual targets and clutters, while adaptively generating the new-born object random finite sets using the measurements to eliminate reliance on prior random finite sets. A sequential Monte-Carlo implementation of the proposed filter is presented, and simulations are used to demonstrate the proposed filter’s improvements in estimation accuracy of the target number and corresponding multi-target states, as well as the clutter rate. PMID:26690148

  3. Multi-Target Tracking Based on Multi-Bernoulli Filter with Amplitude for Unknown Clutter Rate.

    PubMed

    Yuan, Changshun; Wang, Jun; Lei, Peng; Bi, Yanxian; Sun, Zhongsheng

    2015-01-01

    Knowledge of the clutter rate is of critical importance in multi-target Bayesian tracking. However, estimating the clutter rate is a difficult problem in practice. In this paper, an improved multi-Bernoulli filter based on random finite sets for multi-target Bayesian tracking accommodating non-linear dynamic and measurement models, as well as unknown clutter rate, is proposed for radar sensors. The proposed filter incorporates the amplitude information into the state and measurement spaces to improve discrimination between actual targets and clutters, while adaptively generating the new-born object random finite sets using the measurements to eliminate reliance on prior random finite sets. A sequential Monte-Carlo implementation of the proposed filter is presented, and simulations are used to demonstrate the proposed filter's improvements in estimation accuracy of the target number and corresponding multi-target states, as well as the clutter rate. PMID:26690148

  4. Influence of satellite-derived photolysis rates and NOx emissions on Texas ozone modeling

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D. S.; Pour-Biazar, A.; Lamsal, L. N.; White, A. T.; Xiao, X.; Zhou, W.; Henderson, B. H.; Lash, B. F.

    2015-02-01

    Uncertain photolysis rates and emission inventories impair the accuracy of state-level ozone (O3) regulatory modeling. Past studies have separately used satellite-observed clouds to correct the model-predicted photolysis rates, or satellite-constrained top-down NOx emissions to identify and reduce uncertainties in bottom-up NOx emissions. However, the joint application of multiple satellite-derived model inputs to improve O3 state implementation plan (SIP) modeling has rarely been explored. In this study, Geostationary Operational Environmental Satellite (GOES) observations of clouds are applied to derive the photolysis rates, replacing those used in Texas SIP modeling. This changes modeled O3 concentrations by up to 80 ppb and improves O3 simulations by reducing modeled normalized mean bias (NMB) and normalized mean error (NME) by up to 0.1. A sector-based discrete Kalman filter (DKF) inversion approach is incorporated with the Comprehensive Air Quality Model with extensions (CAMx)-decoupled direct method (DDM) model to adjust Texas NOx emissions using a high-resolution Ozone Monitoring Instrument (OMI) NO2 product. The discrepancy between OMI and CAMx NO2 vertical column densities (VCDs) is further reduced by increasing modeled NOx lifetime and adding an artificial amount of NO2 in the upper troposphere. The region-based DKF inversion suggests increasing NOx emissions by 10-50% in most regions, deteriorating the model performance in predicting ground NO2 and O3, while the sector-based DKF inversion tends to scale down area and nonroad NOx emissions by 50%, leading to a 2-5 ppb decrease in ground 8 h O3 predictions. Model performance in simulating ground NO2 and O3 is improved using sector-based inversion-constrained NOx emissions, with 0.25 and 0.04 reductions in NMBs and 0.13 and 0.04 reductions in NMEs, respectively. Using both GOES-derived photolysis rates and OMI-constrained NOx emissions together reduces modeled NMB and NME by 0.05, increases the model

  5. Can organic matter flux profiles be diagnosed using remineralisation rates derived from observed tracers and modelled ocean transport rates?

    NASA Astrophysics Data System (ADS)

    Wilson, J. D.; Ridgwell, A.; Barker, S.

    2015-09-01

    The average depth in the ocean at which the majority of sinking organic matter particles remineralise is a fundamental parameter in the ocean's role in regulating atmospheric CO2. Observed spatial patterns in sinking fluxes and relationships between the fluxes of different particles in the modern ocean have widely been used to invoke controlling mechanisms with important implications for CO2 regulation. However, such analyses are limited by the sparse spatial sampling of the available sediment trap data. Here we explore whether model ocean circulation rates, in the form of a transport matrix, can be used to derive remineralisation rates and infer sinking particle flux curves from the much more highly resolved observations of dissolved nutrient concentrations. Initially we show an example of the method using a transport matrix from the MITgcm model and demonstrate that there are a number of potential uncertainties associated with the method. We then use the Earth system model GENIE to generate a synthetic tracer data set to explore the method and its sensitivity to key sources of uncertainty arising from errors in the tracer observations and in the model circulation. We use a 54-member ensemble of different, but plausible, estimates of the modern circulation to explore errors associated with model transport rates. We find that reconstructed remineralisation rates are very sensitive to both errors in observations and model circulation rates, such that a simple inversion cannot provide a robust estimate of particulate flux profiles. Estimated remineralisation rates are particularly sensitive to differences between the "observed" and modelled circulation because remineralisation rates are 3-4 orders of magnitude smaller than transport rates. We highlight a potential method of constraining the uncertainty associated with using modelled circulation rates, although its success is limited by the observations currently available. Finally, we show that there are additional
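
    The core diagnosis can be stated compactly: at steady state a tracer c obeys T c + q = 0, so the net source/sink field q follows directly from the transport matrix and the observed tracer, with remineralisation where q is positive. A minimal sketch; the small random operator below is a stand-in, not a real ocean circulation.

        import numpy as np

        # Hypothetical small system: T is an (n, n) transport operator (1/yr)
        # acting on a nutrient tracer c (mmol/m3). At steady state T @ c + q = 0,
        # so the source/sink field q is diagnosed from the observed tracer alone.
        rng = np.random.default_rng(0)
        n = 5
        T = -np.eye(n) + 0.2 * rng.random((n, n))   # stand-in transport matrix
        c_obs = rng.random(n) + 1.0                 # "observed" tracer

        q = -T @ c_obs                  # diagnosed net source/sink field
        remin = np.clip(q, 0.0, None)   # remineralisation where q > 0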

  6. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project 'Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for
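
    A hedged sketch of the spatial-density combination described above, using scipy's Gaussian KDE with default bandwidths rather than the study's optimised kernels; per the abstract, the weight w on the earthquake density would depend on the magnitude range considered.

        import numpy as np
        from scipy.stats import gaussian_kde

        def hybrid_density(eq_xy, fault_xy, fault_moment, w, grid_xy):
            """Weighted sum of earthquake- and fault-moment-based spatial densities.

            eq_xy, fault_xy, grid_xy : arrays of shape (2, n) of map coordinates;
            fault_moment weights the fault samples by moment rate. Bandwidths
            are scipy defaults, not the optimised kernels of the study."""
            d_eq = gaussian_kde(eq_xy)(grid_xy)
            d_fault = gaussian_kde(fault_xy, weights=fault_moment)(grid_xy)
            return w * d_eq + (1.0 - w) * d_fault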

  7. Fast entropy-based CABAC rate estimation for mode decision in HEVC.

    PubMed

    Chen, Wei-Gang; Wang, Xun

    2016-01-01

    High efficiency video coding (HEVC) seeks the best code tree configuration, the best prediction unit division and the prediction mode, by evaluating the rate-distortion functional in a recursive way and using a "try all and select the best" strategy. Further, HEVC supports only context adaptive binary arithmetic coding (CABAC) as the entropy coder, which has the disadvantage of being highly sequential and having strong data dependencies. So, the development of a fast rate estimation algorithm for CABAC-based coding has great practical significance for mode decision in HEVC. There are three elementary steps in the CABAC encoding process: binarization, context modeling, and binary arithmetic coding. Typical approaches to fast CABAC rate estimation simplify or eliminate the last two steps, but leave the binarization step unchanged. To maximize the reduction of computational complexity, we propose a fast entropy-based CABAC rate estimator in this paper. It eliminates not only the modeling and the coding steps, but also the binarization step. Experimental results demonstrate that the proposed estimator is able to reduce the computational complexity of the mode decision in HEVC by 9-23% with negligible PSNR loss and BD-rate increment, and therefore exhibits applicability to practical HEVC encoder implementation. PMID:27386240
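
    The entropy-based idea reduces, per bin, to charging -log2 of the bin's context-model probability instead of running the arithmetic coder. A minimal sketch with a single illustrative context probability; real CABAC contexts are per-syntax-element state machines updated bin by bin.

        import math

        def entropy_rate_bits(bins, p_mps=0.8):
            """Entropy-based rate estimate: charge -log2(P(bin)) per binarized bin.

            bins  : iterable of (bin_value, mps_value) pairs
            p_mps : probability of the most probable symbol (a single
                    illustrative scalar, not a real context state)."""
            bits = 0.0
            for b, mps in bins:
                p = p_mps if b == mps else 1.0 - p_mps
                bits += -math.log2(p)
            return bits

        # four bins, three of them matching their MPS:
        print(entropy_rate_bits([(1, 1), (0, 0), (1, 0), (0, 0)]))  # ≈ 3.29 bits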

  8. Moving base simulation evaluation of translational rate command systems for STOVL aircraft in hover

    NASA Technical Reports Server (NTRS)

    Franklin, James A.; Stortz, Michael W.

    1996-01-01

    Using a generalized simulation model, a moving-base simulation of a lift-fan short takeoff/vertical landing fighter aircraft has been conducted on the Vertical Motion Simulator at Ames Research Center. Objectives of the experiment were to determine the influence of system bandwidth and phase delay on flying qualities for translational rate command and vertical velocity command systems. Assessments were made for precision hover control and for landings aboard an LPH type amphibious assault ship in the presence of winds and rough seas. Results obtained define the boundaries between satisfactory and adequate flying qualities for these design features for longitudinal and lateral translational rate command and for vertical velocity command.

  9. Metabolically-Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates (External Review Draft)

    EPA Science Inventory

    EPA has released a draft report entitled, Metabolically-Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates, for independent external peer review and public comment. NCEA published the Exposure Factors Handbook in 1997. This comprehens...

  10. Sexually violent predators: toward reasonable estimates of recidivism base rates.

    PubMed

    Neller, Daniel J; Petris, Giovanni

    2013-01-01

    The sexual recidivism rate of sex offenders is a controversial issue. Perhaps as controversial is the sexual recidivism rate of the select group of sex offenders who are examined pursuant to sexually violent predator (SVP) statutes. At present, reliable estimates of SVP recidivism are unavailable. We propose that reasonable estimates of SVP recidivism can be reached by considering three available pieces of data: (i) a likely recidivism rate of the general population of sex offenders; (ii) procedures typically followed by jurisdictions that civilly commit sex offenders; and (iii) classification accuracy of procedures. Although sexual recidivism rates vary across jurisdictions, the results of our analyses suggest sex offenders referred for examination pursuant to SVP statutes recidivate at substantially higher rates than typical sex offenders. Our results further suggest that sex offenders recommended for commitment as SVPs recidivate at even greater rates than SVP respondents who are not recommended for commitment. We discuss practice and policy implications of these findings. PMID:23620130
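
    The three pieces of data the authors combine interact through Bayes' theorem. A worked sketch with invented numbers (not the paper's estimates): a base rate plus the sensitivity and specificity of commitment procedures yields the recidivism rate among offenders recommended for commitment.

        def committed_recidivism_rate(base_rate, sensitivity, specificity):
            """P(recidivist | recommended for commitment), by Bayes' theorem."""
            true_pos = base_rate * sensitivity
            false_pos = (1.0 - base_rate) * (1.0 - specificity)
            return true_pos / (true_pos + false_pos)

        # illustrative only: a 20% base rate with 75% sensitivity/specificity
        print(committed_recidivism_rate(0.20, 0.75, 0.75))  # ≈ 0.43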

  11. Implications of two Holocene time-dependent geomagnetic models for cosmogenic nuclide production rate scaling

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel

    2016-01-01

    The geomagnetic field is a major influence on in situ cosmogenic nuclide production rates at a given location (in addition to atmospheric pressure and, to a lesser extent, solar modulation effects). A better understanding of how past fluctuations in these influences affected production rates should allow more accurate application of cosmogenic nuclides. As such, this work explores the cosmogenic nuclide production rate scaling implications of two recent time-dependent spherical harmonic geomagnetic models spanning the Holocene. Korte and Constable (2011, Phys. Earth Planet. Inter. 188, 247-259) and Korte et al. (2011, Earth Planet. Sci. Lett. 312, 497-505) recently updated earlier spherical harmonic paleomagnetic models with new paleomagnetic data from sediment cores in addition to new archeomagnetic and volcanic data. These updated models offer improved resolution and accuracy over the previous versions, in part due to increased temporal and spatial data coverage. In addition, Pavón-Carrasco et al. (2014, Earth Planet. Sci. Lett. 388, 98-109) developed another time-dependent spherical harmonic model of the Holocene geomagnetic field, based solely on archeomagnetic and volcanic paleomagnetic data from the same underlying paleomagnetic database as the Korte et al. models, but extending to 14 ka. With the new models as input, trajectory-traced estimates of effective vertical cutoff rigidity (RC - the standard method for ordering cosmic ray data) yield significantly different time-integrated scaling predictions when compared to each other and to results using the earlier models. In addition, predictions of each new model using RC are tested empirically using recently published production rate calibration data for both 10Be and 3He, and compared to predictions using corresponding time-varying geocentric dipolar RC formulations and a static geocentric axial dipole (GAD) model. Results for the few calibration sites from geomagnetically sensitive regions suggest that the
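
    For orientation, the static geocentric axial dipole (GAD) case mentioned above has a widely used closed form for the vertical cutoff rigidity; the time-varying models replace this with trajectory tracing through the full spherical-harmonic field. A sketch, with dipole-moment scaling m_ratio as the only time dependence:

        import math

        def gad_vertical_cutoff(lat_deg, m_ratio=1.0):
            """Stormer vertical cutoff rigidity (GV) for a geocentric axial dipole.

            RC ≈ 14.9 * (M/M0) * cos^4(geomagnetic latitude), with m_ratio the
            dipole moment relative to a reference epoch. A closed-form stand-in
            for the trajectory tracing used with the full field models."""
            return 14.9 * m_ratio * math.cos(math.radians(lat_deg)) ** 4

        print(gad_vertical_cutoff(40.0))  # ≈ 5.1 GV at 40° geomagnetic latitude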

  12. Mechanical strength model for plastic bonded granular materials at high strain rates and large strains

    SciTech Connect

    Browning, R.V.; Scammon, R.J.

    1997-07-01

    Modeling impact events on systems containing plastic bonded explosive materials requires accurate models for stress evolution at high strain rates out to large strains. For example, in the Steven test geometry reactions occur after strains of 0.5 or more are reached for PBX-9501. The morphology of this class of materials and properties of the constituents are briefly described. We then review the viscoelastic behavior observed at small strains for this class of material, and evaluate large strain models used for granular materials such as cap models. Dilatation under shearing deformations of the PBX is experimentally observed and is one of the key features modeled in cap style plasticity theories, together with bulk plastic flow at high pressures. We propose a model that combines viscoelastic behavior at small strains but adds intergranular stresses at larger strains. A procedure using numerical simulations and comparisons with results from flyer plate tests and low rate uniaxial stress tests is used to develop a rough set of constants for PBX-9501. Comparisons with the high rate flyer plate tests demonstrate that the observed characteristic behavior is captured by this viscoelastic based model.

  13. A one-dimensional strain-rate-dependent constitutive model for superelastic shape memory alloys

    NASA Astrophysics Data System (ADS)

    Ren, Wenjie; Li, Hongnan; Song, Gangbing

    2007-02-01

    Recently, there has been increasing interest in using superelastic shape memory alloys (SMAs) in civil, mechanical and aerospace engineering, attributed to their large recoverable strain range (up to 6-8%), high damping capacity, and excellent fatigue properties. In this research, an improved Graesser's model is proposed to model the strain-rate-dependent hysteretic behavior of superelastic SMA wires. Cyclic loading tests of superelastic SMA wires are first performed to determine their hysteresis properties. The effects of the strain amplitude and the loading rate on the mechanical properties are studied and formulated using the least-squares method. Based on Graesser's model, an improved model is developed. The improved model divides the full loop into three parts: the loading branch, the unloading branch before the completion of the reverse transformation, and the elastic unloading branch after the completion of the reverse transformation, where each part adopts its respective parameters. Numerical simulations are conducted using both the original and the improved Graesser's models. Comparisons indicate that the improved Graesser's model accurately reflects the hysteresis characteristics and provides a better prediction of the SMAs' actual hysteresis behavior than the original Graesser's model at varying levels of strain and loading rate.

  14. Rate-Dependent Embedded Discontinuity Approach Incorporating Heterogeneity for Numerical Modeling of Rock Fracture

    NASA Astrophysics Data System (ADS)

    Saksala, Timo

    2015-07-01

    In this paper, the embedded discontinuity approach is applied in finite element modeling of rock in compression and tension. To this end, a rate-dependent constitutive model based on the (strong) embedded displacement discontinuity model is developed to describe the mode I, mode II and mixed mode fracture of rock. The constitutive model describes the bulk material as linear elastic until reaching the elastic limit. Beyond the elastic limit, the rate-dependent exponential softening law governs the evolution of the displacement jump. Rock heterogeneity is incorporated in the present approach by random description of the mineral texture of rock. Moreover, the initial microcrack population always present in natural rocks is accounted for as randomly-oriented embedded discontinuities. In the numerical examples, the model properties are extensively studied in uniaxial compression. The effect of loading rate and confining pressure is also tested in the 2D (plane strain) numerical simulations. These simulations demonstrate that the model captures the salient features of rock in confined compression and uniaxial tension. The developed method has the computational efficiency of continuum plasticity models. However, it also has the advantage, over these models, of accounting for the orientation of introduced microcracks. This feature is crucial with respect to the fracture behavior of rock in compression as shown in this paper.

  15. A method for retrieving the cumulus entrainment rate from ground based observations

    NASA Astrophysics Data System (ADS)

    Wagner, Timothy J.

    2011-12-01

    The entrainment of drier environmental air into cumulus clouds affects the impact that these clouds have on the environment by modifying their radiative, microphysical, and thermodynamic characteristics. Entrainment is a difficult parameter to observe directly, and heretofore has been obtained from occasional aircraft penetrations. To increase the number of cumulus entrainment rate observations under a wide range of atmospheric conditions, an algorithm for retrieving the cumulus entrainment rate from ground-based remote sensing observations has been developed. This algorithm, called the Entrainment Rate In Cumulus Algorithm (ERICA), uses the suite of instruments at the Southern Great Plains (SGP) site of the United States Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility as inputs into a Gauss-Newton optimal estimation scheme. The forward model in this algorithm is the Explicit Mixing Parcel Model (EMPM), a cloud parcel model that treats entrainment as a series of discrete mixing events. Output from EMPM is used to calculate quantities that can be observed from the surface, including effective radius and liquid water path. The entrainment rate in EMPM is adjusted iteratively until the modeled output converges to the observations. Sensitivity testing and error and information content analysis show that ERICA is a robust method for obtaining accurate estimates of entrainment rate without the drawbacks of aircraft observations. Results from a three-month trial of ERICA show significant variability of the entrainment rate of clouds in a single day and from one day to the next. The mean value from this analysis corresponds well with prior knowledge of the entrainment rate.
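
    A skeletal Gauss-Newton retrieval loop of the kind ERICA's optimal-estimation scheme builds on, with EMPM abstracted as a generic forward model returning observables and their Jacobian. Names are illustrative, and a full optimal-estimation form would add a priori terms, which are omitted here.

        import numpy as np

        def gauss_newton_retrieval(forward, y_obs, Sy_inv, x0, tol=1e-4, max_iter=20):
            """Gauss-Newton iteration matching a forward model to observations.

            forward : function x -> (y_model, jacobian K); x is the state
                      (e.g. entrainment rate), y the observables (e.g. effective
                      radius, liquid water path); Sy_inv is the inverse
                      observation-error covariance."""
            x = np.atleast_1d(np.asarray(x0, dtype=float))
            for _ in range(max_iter):
                y, K = forward(x)
                dx = np.linalg.solve(K.T @ Sy_inv @ K,
                                     K.T @ Sy_inv @ (y_obs - y))
                x = x + dx
                if np.max(np.abs(dx)) < tol:   # converged to the observations
                    break
            return x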

  16. Can organic matter flux profiles be diagnosed using remineralisation rates derived from observed tracers and modelled ocean transport rates?

    NASA Astrophysics Data System (ADS)

    Wilson, J. D.; Ridgwell, A.; Barker, S.

    2015-03-01

    The average depth in the ocean at which the majority of sinking organic matter particles remineralise is a fundamental parameter in the ocean's role in regulating atmospheric CO2. Observed spatial patterns in sinking fluxes and relationships between the fluxes of different particles in the modern ocean have widely been used to invoke controlling mechanisms with important implications for CO2 regulation. However, such analyses are limited by the sparse spatial sampling of the available sediment trap data. Here we explore whether model ocean circulation rates, in the form of a transport matrix, can be used to derive remineralisation rates and sinking particle flux curves from the much more highly resolved observations of dissolved nutrient concentrations. Initially we use the Earth system model GENIE to generate a synthetic tracer dataset to explore the method and its sensitivity to key sources of uncertainty arising from errors in the tracer observations and in the model circulation. We use a perturbed physics ensemble to generate 54 different estimates of circulation to explore errors associated with model transport rates. We find that reconstructed remineralisation rates are highly sensitive to both errors in observations and our ensemble uncertainty in model circulation rates, such that a simple inversion does not provide a robust estimate of particulate flux profiles. Inferred remineralisation rates are particularly sensitive to differences between the "observed" and modelled transport fluxes because remineralisation rates are 3-4 orders of magnitude smaller than circulation rates. We also find that when inferring particle flux curves from remineralisation rates, the cycling of dissolved organic matter also creates biases that have a similar magnitude and spatial variability to flux curves diagnosed using sediment trap data. We end with a discussion of the potential future directions and pitfalls of estimating remineralisation rates using model circulation schemes.

  17. The influence of model structure on groundwater recharge rates in climate-change impact studies

    NASA Astrophysics Data System (ADS)

    Moeck, Christian; Brunner, Philip; Hunkeler, Daniel

    2016-08-01

    Numerous modeling approaches are available to provide insight into the relationship between climate change and groundwater recharge. However, several aspects of how hydrological model choice and structure affect recharge predictions have not been fully explored, unlike the well-established variability of climate model chains—combination of global climate models (GCM) and regional climate models (RCM). Furthermore, the influence on predictions related to subsoil parameterization and the variability of observation data employed during calibration remain unclear. This paper compares and quantifies these different sources of uncertainty in a systematic way. The described numerical experiment is based on a heterogeneous two-dimensional reference model. Four simpler models were calibrated against the output of the reference model, and recharge predictions of both reference and simpler models were compared to evaluate the effect of model structure on climate-change impact studies. The results highlight that model simplification leads to different recharge rates under climate change, especially under extreme conditions, although the different models performed similarly under historical climate conditions. Extreme weather conditions lead to model bias in the predictions and therefore must be considered. Consequently, the chosen calibration strategy is important and, if possible, the calibration data set should include climatic extremes in order to minimise model bias introduced by the calibration. The results strongly suggest that ensembles of climate projections should be coupled with ensembles of hydrogeological models to produce credible predictions of future recharge and with the associated uncertainties.

  18. The influence of model structure on groundwater recharge rates in climate-change impact studies

    NASA Astrophysics Data System (ADS)

    Moeck, Christian; Brunner, Philip; Hunkeler, Daniel

    2016-02-01

    Numerous modeling approaches are available to provide insight into the relationship between climate change and groundwater recharge. However, several aspects of how hydrological model choice and structure affect recharge predictions have not been fully explored, unlike the well-established variability of climate model chains—combination of global climate models (GCM) and regional climate models (RCM). Furthermore, the influence on predictions related to subsoil parameterization and the variability of observation data employed during calibration remain unclear. This paper compares and quantifies these different sources of uncertainty in a systematic way. The described numerical experiment is based on a heterogeneous two-dimensional reference model. Four simpler models were calibrated against the output of the reference model, and recharge predictions of both reference and simpler models were compared to evaluate the effect of model structure on climate-change impact studies. The results highlight that model simplification leads to different recharge rates under climate change, especially under extreme conditions, although the different models performed similarly under historical climate conditions. Extreme weather conditions lead to model bias in the predictions and therefore must be considered. Consequently, the chosen calibration strategy is important and, if possible, the calibration data set should include climatic extremes in order to minimise model bias introduced by the calibration. The results strongly suggest that ensembles of climate projections should be coupled with ensembles of hydrogeological models to produce credible predictions of future recharge and with the associated uncertainties.

  19. Numerical solution of the Penna model of biological aging with age-modified mutation rate.

    PubMed

    Magdoń-Maksymowicz, M S; Maksymowicz, A Z

    2009-06-01

    In this paper we present results of numerical calculation of the Penna bit-string model of biological aging, modified for the case of an age-dependent mutation rate m(a), where a is the parent's age. The mutation rate m(a) is the probability per bit of an extra bad mutation introduced in the offspring's inherited genome. We assume that m(a) increases with age a. As compared with the reference case of the standard Penna model based on a constant mutation rate m, the dynamics of the population growth shows distinct changes in the age distribution of the population. Here we concentrate on mortality q(a), the fraction of items eliminated from the population when we go from age a to a+1 in the simulated transition from time t to the next time t+1. The experimentally observed q(a) dependence essentially follows the Gompertz exponential law for a above the minimum reproduction age. Deviation from the Gompertz law is however observed for the very old items, close to the maximal age. This effect may also result from an increase in mutation rate m with age a, as discussed in this paper. The numerical calculations are based on an analytical solution of the Penna model, presented in a series of papers by Coe et al. [J. B. Coe, Y. Mao, and M. E. Cates, Phys. Rev. Lett. 89, 288103 (2002)]. Results of the numerical calculations are supported by data obtained from computer simulation based on the solution by Coe et al. PMID:19658536
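
    A compact simulation sketch of a Penna bit-string model with an age-dependent per-bit mutation rate m(a). The genome length, thresholds and the linear form of m(a) are illustrative choices, not the paper's parameters; mortality q(a) can be tabulated from the survivor counts this produces.

        import numpy as np

        GENOME_BITS = 32     # one locus per year of life
        THRESHOLD = 3        # expressed bad mutations that kill
        REPRO_AGE = 8        # minimum reproduction age
        CAPACITY = 10_000    # Verhulst population limit

        def m_of_a(age):
            """Age-dependent per-bit mutation rate m(a) (illustrative linear form)."""
            return min(1.0, 0.01 + 0.002 * age)

        def step(pop, rng):
            """One time step: genetic and Verhulst deaths, aging, reproduction."""
            survivors = []
            for genome, age in pop:
                expressed = bin(genome & ((1 << age) - 1)).count("1")
                if expressed < THRESHOLD and rng.random() > len(pop) / CAPACITY:
                    survivors.append((genome, age + 1))
            children = []
            for genome, age in survivors:
                if age >= REPRO_AGE:
                    child = genome
                    for bit in range(GENOME_BITS):
                        if rng.random() < m_of_a(age):   # extra bad mutation
                            child |= 1 << bit
                    children.append((child, 0))
            return survivors + children

        rng = np.random.default_rng(0)
        pop = [(0, 0)] * 1000        # perfect genomes, age zero
        for t in range(200):
            pop = step(pop, rng)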

  20. Mathematical Modeling of Radiocesium Migration and Air Dose Rate Changes in Eastern Fukushima Prefecture

    NASA Astrophysics Data System (ADS)

    Kitamura, A.; Sakuma, K.; Kurikami, H.; Malins, A.; Okumura, M.; Itakura, M.; Yamada, S.; Machida, M.

    2015-12-01

    Radioactive cesium that was deposited over Fukushima Prefecture after the accident at the Fukushima Daiichi nuclear power plant is one of the major concerns regarding health physics today. Its migration is primarily by soil erosion and sediment transport within surface water during times of heavy rainfall and flooding. In order to predict the future distribution of radioactive cesium and the resulting air dose rate at any location in Fukushima, we have integrated a number of mathematical models covering different time and spatial scales. In this presentation we report our overall scheme of prediction, starting from sediment and radioactive cesium movement and the resulting long-term air dose rate changes. Specifically, we present simulation results of sediment movement and radioactive cesium migration using semi-empirical and physics-based watershed models, and of sediment and radioactive cesium behavior in a dam reservoir using one- and two-dimensional river simulation models. The models' results are compared with ongoing field monitoring.

  1. Does overall reinforcer rate affect discrimination of time-based contingencies?

    PubMed

    Cowie, Sarah; Davison, Michael; Blumhardt, Luca; Elliffe, Douglas

    2016-05-01

    Overall reinforcer rate appears to affect choice. The mechanism for such an effect is uncertain, but may relate to reinforcer rate changing the discrimination of the relation between stimuli and reinforcers. We assessed whether a quantitative model based on a stimulus-control approach could be used to account for the effects of overall reinforcer rate on choice under changing time-based contingencies. On a two-key concurrent schedule, the likely availability of a reinforcer reversed when a fixed time had elapsed since the last reinforcer, and the overall reinforcer rate was varied across conditions. Changes in the overall reinforcer rate produced a change in response bias, and some indication of a change in discrimination. These changes in bias and discrimination always occurred quickly, usually within the first session of a condition. The stimulus-control approach provided an excellent account of the data, suggesting that changes in overall reinforcer rate affect choice because they alter the frequency of reinforcers obtained at different times, or in different stimulus contexts, and thus change the discriminated relation between stimuli and reinforcers. These findings support the notion that temporal and spatial discriminations can be understood in terms of discrimination of reinforcers across time and space. PMID:27151836

  2. Estimating mental fatigue based on electroencephalogram and heart rate variability

    NASA Astrophysics Data System (ADS)

    Zhang, Chong; Yu, Xiaolin

    2010-01-01

    The effects of a long-term mental arithmetic task on psychology are investigated by subjective self-reporting measures and action performance tests. Based on electroencephalogram (EEG) and heart rate variability (HRV), the impacts of prolonged cognitive activity on the central nervous system and autonomic nervous system are observed and analyzed. Wavelet packet parameters of EEG and power spectral indices of HRV are combined to estimate the change of mental fatigue. Then wavelet packet parameters of EEG which change significantly are extracted as the features of brain activity in different mental fatigue states, and the support vector machine (SVM) algorithm is applied to differentiate two mental fatigue states. The experimental results show that the long-term mental arithmetic task induces mental fatigue. The wavelet packet parameters of EEG and power spectral indices of HRV are strongly correlated with mental fatigue. The predominant activity of the autonomic nervous system of subjects shifts from parasympathetic to sympathetic activity after the task. Moreover, the slow waves of EEG increase, while the fast waves of EEG and the degree of disorder of the brain decrease compared with the pre-task state. The SVM algorithm can effectively differentiate the two mental fatigue states, achieving a maximum classification accuracy of 91%. The SVM algorithm could be a promising tool for the evaluation of mental fatigue. Fatigue, especially mental fatigue, is a common phenomenon in modern life and a persistent occupational hazard for professionals. Mental fatigue is usually accompanied by a sense of weariness, reduced alertness, and reduced mental performance, which can lead to accidents, decreased workplace productivity, and harm to health. Therefore, the evaluation of mental fatigue is important for occupational risk protection, productivity, and occupational health.
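
    The classification step reduces to a standard supervised pipeline. A sketch with synthetic stand-in features (wavelet-packet energies plus HRV spectral indices) in place of real recordings:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: one row per trial -- e.g. wavelet-packet energies of EEG bands plus
        # HRV spectral indices (LF, HF, LF/HF); y: 0 = pre-task, 1 = post-task.
        # Synthetic stand-in data; the study used real recordings.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.0, 1.0, (60, 8)),
                       rng.normal(0.8, 1.0, (60, 8))])
        y = np.array([0] * 60 + [1] * 60)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy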

  3. PHYSIOLOGICALLY-BASED PHARMACOKINETIC MODELING

    EPA Science Inventory

    Physiologically-based pharmacokinetic (PB-PK) models attempt to provide both a realistic anatomic description of the animal to which a drug or toxic chemical has been administered and a biologically accurate representation of the physiological pathways for chemical storage, metab...

  4. Genome Informed Trait-Based Models

    NASA Astrophysics Data System (ADS)

    Karaoz, U.; Cheng, Y.; Bouskill, N.; Tang, J.; Beller, H. R.; Brodie, E.; Riley, W. J.

    2013-12-01

    Trait-based approaches are powerful tools for representing microbial communities across both spatial and temporal scales within ecosystem models. Trait-based models (TBMs) represent the diversity of microbial taxa as stochastic assemblages with a distribution of traits constrained by trade-offs between these traits. Such representation with its built-in stochasticity allows the elucidation of the interactions between the microbes and their environment by reducing the complexity of microbial community diversity into a limited number of functional 'guilds' and letting them emerge across spatio-temporal scales. From the biogeochemical/ecosystem modeling perspective, the emergent properties of the microbial community could be directly translated into predictions of biogeochemical reaction rates and microbial biomass. The accuracy of TBMs depends on the identification of key traits of the microbial community members and on the parameterization of these traits. Current approaches to inform TBM parameterization are empirical (i.e., based on literature surveys). Advances in omic technologies (such as genomics, metagenomics, metatranscriptomics, and metaproteomics) pave the way to better-initialize models that can be constrained in a generic or site-specific fashion. Here we describe the coupling of metagenomic data to the development of a TBM representing the dynamics of metabolic guilds from an organic carbon stimulated groundwater microbial community. Illumina paired-end metagenomic data were collected from the community as it transitioned successively through electron-accepting conditions (nitrate-, sulfate-, and Fe(III)-reducing), and used to inform estimates of growth rates and the distribution of metabolic pathways (i.e., aerobic and anaerobic oxidation, fermentation) across a spatially resolved TBM. We use this model to evaluate the emergence of different metabolisms and predict rates of biogeochemical processes over time. We compare our results to observational

  5. Estimating taxonomic diversity, extinction rates, and speciation rates from fossil data using capture-recapture models

    USGS Publications Warehouse

    Nichols, J.D.; Pollock, K.H.

    1983-01-01

    Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models which should be useful in paleobiological analyses and discuss the assumptions which underlie them. They illustrate these models with examples and discuss aspects of study design.
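
    As a minimal concrete instance of the capture-recapture machinery, the two-sample Lincoln-Petersen estimator (in Chapman's bias-corrected form) estimates total richness from two fossil "sampling occasions". The numbers in the usage comment are invented.

        def chapman_estimate(n1, n2, m2):
            """Chapman's bias-corrected Lincoln-Petersen abundance estimator.

            n1: taxa 'marked' in the first sample, n2: second-sample size,
            m2: taxa found in both. A two-sample special case of the
            multi-occasion models the paper describes."""
            return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

        # e.g. 40 taxa in horizon A, 45 in horizon B, 30 shared:
        print(chapman_estimate(40, 45, 30))  # ≈ 60 estimated total taxa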

  6. The Use of Leaf Functional Traits for Modeling the Timing and Rate of Canopy Development

    NASA Astrophysics Data System (ADS)

    Savoy, P.; Mackay, D. S.

    2015-12-01

    Leaves vary in their habit, with some being short lived and possessing high intrinsic photosynthetic rates and others being long lived with lower photosynthetic capacity. Longer lived leaves will thus tend to cost more to produce and be able to assimilate carbon over a longer period of time. The timing and seasonality of forest canopies is a cost-benefit strategy for the exploitation of favorable environmental conditions and avoidance of unfavorable conditions. Because of the selective pressure for plants to gather a return on leaf investment in relation to their leaf habit, we propose that there is a relationship between plant functional traits and the timing and rate of canopy development. In a recent study it was shown that errors in predicted canopy dynamics could be reduced via a single parameter (τ) which modified the timing and rate of canopy development (Savoy & Mackay 2015). If τ is related to underlying mechanisms of plant physiology then it should vary predictably. To test this we will first examine the relationship between τ and observable biophysical variables which vary in ecologically meaningful ways. Then we will develop a model based on leaf traits which will regulate the timing and rate at which vegetation reaches peak rates of assimilation. The model will then be tested at eddy covariance sites which span a range of environmental conditions. Preliminary results demonstrate a strong relationship (R2 = 0.58) between estimated values of τ and leaf carbon to nitrogen ratio, which is important for representing the costs of leaf construction and nitrogen investment into photosynthetic machinery of leaves. By developing a canopy seasonality model based on plant functional traits and rooted in the framework of leaf economics it is possible to have a more flexible and generalized model. Such a model will be more adept at making predictions under novel environmental conditions than purely correlative empirical models.

  7. Tuberculosis reinfection rate as a proportion of total infection rate correlates with the logarithm of the incidence rate: a mathematical model

    PubMed Central

    Uys, Pieter W; van Helden, Paul D; Hargrove, John W

    2008-01-01

    In a significant number of instances, an episode of tuberculosis can be attributed to a reinfection event. Because reinfection is more likely in high incidence regions than in regions of low incidence, more tuberculosis (TB) cases due to reinfection could be expected in high-incidence regions than in low-incidence regions. Empirical data from regions with various incidence rates appear to confirm the conjecture that, in fact, the incidence rate due to reinfection only, as a proportion of all cases, correlates with the logarithm of the incidence rate, rather than with the incidence rate itself. A theoretical model that supports this conjecture is presented. A Markov model was used to obtain a relationship between incidence and reinfection rates. It was assumed in this model that the rate of reinfection is a multiple, ρ (the reinfection factor), of the rate of first-time infection, λ. The results obtained show a relationship between the proportion of cases due to reinfection and the rate of incidence that is approximately logarithmic for a range of values of the incidence rate typical of those observed in communities across the globe. A value of ρ is determined such that the relationship between the proportion of cases due to reinfection and the logarithm of the incidence rate closely correlates with empirical data. From a purely theoretical investigation, it is shown that a simple relationship can be expected between the logarithm of the incidence rates and the proportions of cases due to reinfection after a prior episode of TB. This relationship is sustained by a rate of reinfection that is higher than the rate of first-time infection and this latter consideration underscores the great importance of monitoring recovered TB cases for repeat disease episodes, especially in regions where TB incidence is high. Awareness of this may assist in attempts to control the epidemic. PMID:18577502
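
    A deliberately minimal two-state version of the Markov argument can reproduce the qualitative result: with first-infection rate λ, reinfection rate ρλ and population turnover μ, the reinfection share of incident cases grows roughly with log(λ). All parameter values below are illustrative, not the paper's.

        import numpy as np

        def reinfection_proportion(lam, rho, mu=1 / 70):
            """Share of TB cases attributable to reinfection.

            States: never-infected (first infection at rate lam) and previously
            infected (reinfection at rate rho * lam); mu is population turnover.
            A two-state stand-in for the paper's Markov model."""
            u = mu / (mu + lam)    # steady-state never-infected fraction
            r = lam / (mu + lam)   # previously infected fraction
            return rho * lam * r / (lam * u + rho * lam * r)

        for lam in [0.0005, 0.002, 0.008, 0.032]:   # low to high incidence
            print(f"{lam:7.4f}  {reinfection_proportion(lam, rho=0.7):.3f}")
        # the proportion rises steadily with log(lam), echoing the paper's curve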

  8. Tuberculosis reinfection rate as a proportion of total infection rate correlates with the logarithm of the incidence rate: a mathematical model.

    PubMed

    Uys, Pieter W; van Helden, Paul D; Hargrove, John W

    2009-01-01

    In a significant number of instances, an episode of tuberculosis can be attributed to a reinfection event. Because reinfection is more likely in high incidence regions than in regions of low incidence, more tuberculosis (TB) cases due to reinfection could be expected in high-incidence regions than in low-incidence regions. Empirical data from regions with various incidence rates appear to confirm the conjecture that, in fact, the incidence rate due to reinfection only, as a proportion of all cases, correlates with the logarithm of the incidence rate, rather than with the incidence rate itself. A theoretical model that supports this conjecture is presented. A Markov model was used to obtain a relationship between incidence and reinfection rates. It was assumed in this model that the rate of reinfection is a multiple, rho (the reinfection factor), of the rate of first-time infection, lambda. The results obtained show a relationship between the proportion of cases due to reinfection and the rate of incidence that is approximately logarithmic for a range of values of the incidence rate typical of those observed in communities across the globe. A value of rho is determined such that the relationship between the proportion of cases due to reinfection and the logarithm of the incidence rate closely correlates with empirical data. From a purely theoretical investigation, it is shown that a simple relationship can be expected between the logarithm of the incidence rates and the proportions of cases due to reinfection after a prior episode of TB. This relationship is sustained by a rate of reinfection that is higher than the rate of first-time infection and this latter consideration underscores the great importance of monitoring recovered TB cases for repeat disease episodes, especially in regions where TB incidence is high. Awareness of this may assist in attempts to control the epidemic. PMID:18577502

  9. Mechanical strength model for plastic bonded granular materials at high strain rates and large strains

    SciTech Connect

    Browning, R.V.; Scammon, R.J.

    1998-07-01

    Modeling impact events on systems containing plastic bonded explosive materials requires accurate models for stress evolution at high strain rates out to large strains. For example, in the Steven test geometry reactions occur after strains of 0.5 or more are reached for PBX-9501. The morphology of this class of materials and properties of the constituents are briefly described. We then review the viscoelastic behavior observed at small strains for this class of material, and evaluate large strain models used for granular materials such as cap models. Dilatation under shearing deformations of the PBX is experimentally observed and is one of the key features modeled in cap style plasticity theories, together with bulk plastic flow at high pressures. We propose a model that combines viscoelastic behavior at small strains but adds intergranular stresses at larger strains. A procedure using numerical simulations and comparisons with results from flyer plate tests and low rate uniaxial stress tests is used to develop a rough set of constants for PBX-9501. Comparisons with the high rate flyer plate tests demonstrate that the observed characteristic behavior is captured by this viscoelastic based model. © 1998 American Institute of Physics.

  10. Financial modeling in medicine: cash flow, basic metrics, the time value of money, discount rates, and internal rate of return.

    PubMed

    Lexa, Frank James; Berlin, Jonathan W

    2005-03-01

    In this article, the authors cover tools for financial modeling. Commonly used time lines and cash flow diagrams are discussed. Commonly used but limited terms such as payback and breakeven are introduced. The important topics of the time value of money and discount rates are introduced to lay the foundation for their use in modeling and in more advanced metrics such as the internal rate of return. Finally, the authors broach the more sophisticated topic of net present value. PMID:17411805
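
    The metrics the authors introduce are short formulas. A sketch of net present value and a bisection-based internal rate of return, with an invented cash-flow example:

        def npv(rate, cashflows):
            """Net present value of cashflows[t] received at the end of year t."""
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
            """Internal rate of return by bisection: the rate where NPV = 0.

            Assumes one sign change in the cash-flow stream (a single IRR)."""
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if npv(mid, cashflows) > 0:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2

        # e.g. a $100k scanner generating $30k/yr for 5 years (invented numbers):
        flows = [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000]
        print(npv(0.08, flows))   # ≈ 19,781 at an 8% discount rate
        print(irr(flows))         # ≈ 0.152, i.e. a 15.2% internal rate of return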

  11. The activity-based anorexia mouse model.

    PubMed

    Klenotich, Stephanie J; Dulawa, Stephanie C

    2012-01-01

    Animals housed with running wheels and subjected to daily food restriction show paradoxical reductions in food intake and increases in running wheel activity. This phenomenon, known as activity-based anorexia (ABA), leads to marked reductions in body weight that can ultimately lead to death. Recently, ABA has been proposed as a model of anorexia nervosa (AN). AN affects about 8 per 100,000 females and has the highest mortality rate among all psychiatric illnesses. Given the reductions in quality of life, high mortality rate, and the lack of pharmacological treatments for AN, a better understanding of the mechanisms underlying AN-like behavior is greatly needed. This chapter provides basic guidelines for conducting ABA experiments using mice. The ABA mouse model provides an important tool for investigating the neurobiological underpinnings of AN-like behavior and identifying novel treatments. PMID:22231828

  12. Volatile organic compound emission rates from mechanical surface aerators: Mass-transfer modeling

    SciTech Connect

    Chern, J.M.; Chou, S.R.

    1999-08-01

    In wastewater treatment plants, many operation units such as equalization and aeration involve oxygen transfer between wastewater and air. While oxygen is transferred from air to wastewater, volatile organic compounds (VOCs) are stripped from wastewater to air. Because of increasingly stringent environmental regulations, wastewater treatment operators have to perform a VOC inventory of their facilities. A new mass-transfer model has been developed to predict the VOC emission rates from batch and continuous aeration tanks with mechanical surface aerators. The model takes into consideration that the VOC mass transfer occurs in two separate mass-transfer zones instead of lumping the overall VOC transfer in the whole aeration tank as is done in the conventional ASCE-based model. The predictive capabilities of the two-zone and the ASCE-based models were examined by calculating the emission rates of 10 priority pollutants from aeration tanks. The effects of the hydraulic retention time, the Henry's law constant, gas-phase resistance, and the water and air environmental conditions on the VOC emission rates were predicted by the two models.

  13. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    NASA Technical Reports Server (NTRS)

    McKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.
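
    A sketch of the model form described (an intercept plus one fixed-power term per regression variable), with invented powers and synthetic data standing in for the DFW/ASOS inputs:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Illustrative shapes only: each predictor enters as a single term raised
        # to a fixed "optimal" power, as the report describes; powers are made up.
        powers = {"wind_mean": 1.5, "wind_var": 1.0, "temp_var": 0.5,
                  "dir_var": 1.0, "cloud": 1.0}

        def design_matrix(raw):
            """raw: dict of 1-D arrays keyed as in 'powers' (30-min averages)."""
            return np.column_stack([raw[k] ** p for k, p in powers.items()])

        rng = np.random.default_rng(1)
        raw = {k: rng.random(200) + 0.1 for k in powers}
        log_edr = 0.3 * raw["wind_mean"] ** 1.5 + rng.normal(0, 0.05, 200)

        # night model regresses on ln(EDR); the day model would use EDR itself
        model = LinearRegression().fit(design_matrix(raw), log_edr)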

  14. A growth inhibitory model with SOx influenced effective growth rate for estimation of algal biomass concentration under flue gas atmosphere

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A theoretical model for the prediction of biomass concentration under real flue gas emission has been developed. The model considers the CO2 mass transfer rate, the critical SOx concentration and its role on pH based inter-conversion of bicarbonate in model building. The calibration and subsequent v...

  15. MODELING THE RATE-CONTROLLED SORPTION OF HEXAVALENT CHROMIUM.

    USGS Publications Warehouse

    Grove, D.B.; Stollenwerk, K.G.

    1985-01-01

    Sorption of chromium VI on the iron-oxide- and hydroxide-coated surface of alluvial material was numerically simulated with rate-controlled reactions. Reaction kinetics and diffusional processes, in the form of film, pore, and particle diffusion, were simulated and compared with experimental results. The use of empirically calculated rate coefficients for diffusion through the reacting surface was found to simulate experimental data; pore or particle diffusion is believed to be a possible rate-controlling mechanism. The use of rate equations to predict conservative transport and rate- and local-equilibrium-controlled reactions was shown to be feasible.

  16. Residual layer effects on the modeling of convective boundary layer growth rates with a slab model using FIFE data

    NASA Astrophysics Data System (ADS)

    Freire, Livia S.; Dias, Nelson L.

    2013-12-01

    The ability of slab models for the growth of the convective boundary layer (CBL) to work in the presence of residual layers above is analyzed in detail with a large data set from the First International Satellite Land Surface Climatology Project Field Experiment (FIFE)-87 and FIFE-89 experiments. We confirm that the critical element that allows the models to predict the growth as the CBL coalesces with the residual layer is the adoption of a variable lapse rate above the growing CBL. This is not a new finding per se, as several previous studies in the literature have adopted this strategy. Different parameterizations of a slab model, based on the turbulence kinetic energy budget at the top of the CBL, were tested, and the inclusion of the storage and dissipation terms did not improve the model. Among the alternatives for choosing the lapse rate as the CBL grows, fair results are obtained with a simple choice of two values, with absolute mean errors of the order of 270 m for cases with and without a residual layer. Both lapse rates, representative of the residual layer and the free atmosphere, are obtained from the first morning sounding. Finally, an alternative model for the CBL growth into a residual layer that assumes a zero-lapse rate above the CBL did not perform well with the FIFE data.
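
    The zero-order slab-model idea, with the paper's key ingredient of a lapse rate that switches from a residual-layer value to a free-atmosphere value as the CBL deepens, can be sketched as follows. The closure dh/dt = (1 + 2*beta) * F / (gamma * h) with entrainment ratio beta is a textbook form, not necessarily the exact parameterizations tested.

        import numpy as np

        def grow_cbl(h0, heat_flux, gamma_rl, gamma_fa, h_rl_top,
                     dt=60.0, beta=0.2):
            """Slab-model CBL growth with a height-dependent lapse rate.

            heat_flux : surface kinematic heat flux series (K m/s), one per dt
            gamma_rl / gamma_fa : lapse rates (K/m) in the residual layer and
            free atmosphere; the switch at h_rl_top is the key ingredient."""
            h = h0
            out = [h]
            for F in heat_flux:
                gamma = gamma_rl if h < h_rl_top else gamma_fa
                h += dt * (1.0 + 2.0 * beta) * max(F, 0.0) / (gamma * h)
                out.append(h)
            return np.array(out)

        # e.g. a weak residual-layer lapse rate lets the CBL deepen rapidly
        # until it reaches h_rl_top, after which growth slows:
        h = grow_cbl(200.0, [0.1] * 360, 0.001, 0.006, 1200.0)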

  17. On a sparse pressure-flow rate condensation of rigid circulation models.

    PubMed

    Schiavazzi, D E; Hsia, T Y; Marsden, A L

    2016-07-26

    Cardiovascular simulation has shown potential value in clinical decision-making, providing a framework to assess changes in hemodynamics produced by physiological and surgical alterations. State-of-the-art predictions are provided by deterministic multiscale numerical approaches coupling 3D finite element Navier-Stokes simulations to lumped parameter circulation models governed by ODEs. Development of next-generation stochastic multiscale models whose parameters can be learned from available clinical data under uncertainty constitutes a research challenge made more difficult by the high computational cost typically associated with the solution of these models. We present a methodology for constructing reduced representations that condense the behavior of 3D anatomical models using outlet pressure-flow polynomial surrogates, based on multiscale model solutions spanning several heart cycles. Relevance vector machine regression is compared with maximum likelihood estimation, showing that sparse pressure/flow rate approximations offer superior performance in producing working surrogate models to be included in lumped circulation networks. Sensitivities of outlet flow rates are also quantified through a Sobol' decomposition of their total variance encoded in the orthogonal polynomial expansion. Finally, we show that augmented lumped parameter models including the proposed surrogates accurately reproduce the response of the multiscale models they were derived from. In particular, results are presented for models of the coronary circulation with closed loop boundary conditions and the abdominal aorta with open loop boundary conditions. PMID:26671219

  18. Modeling the 1992 Landers Earthquake with a Rate and State Friction Model.

    NASA Astrophysics Data System (ADS)

    Mohammedi, H.; Madariaga, R.; Perrin, G.

    2002-12-01

    We study rupture propagation in realistic earthquake models under rate- and state-dependent friction and apply it to the modeling of the 28 June 1992 Landers earthquake. In our simulations we use a modified version of rate and state proposed by Perrin, Rice and Zheng, the so-called PRZ law. Full inversion with PRZ is not yet possible because of the much higher numerical cost of modeling a fault under rate and state than with slip-weakening friction laws (SW). PRZ also has a larger number of independent parameters than slip weakening. We obtain reasonable initial models through the use of the ratio κ between available strain energy and energy release rate. Because PRZ friction has more parameters than SW, we have not yet been able to identify all relevant non-dimensional numbers that control rupture in this model, but a very important one is a logarithmic map that controls whether unstable slip may occur or not. This map has the form log(Ḋ/v0) = λ Ḋ/v0, where λ is a nondimensional number akin to κ. It includes the parameters of the friction law and the characteristic length of the initial stress, velocity or state fields. Ḋ is the slip velocity and v0 a reference speed that defines the initial stress field. Using the results of dynamic inversion from Peyrat et al., we find reasonable rupture models for the initiation of the Landers earthquake. The slip-weakening distance in rate and state, Dc, as defined by Bizarri and Cocco, is of the order of a few tens of cm. Dc is determined from L, the relaxation length in rate and state, as a subproduct of the logarithmic map cited above.

  19. Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling

    SciTech Connect

    Flach, G. P.

    2015-05-12

    Recent Special Analysis modeling of Saltstone Disposal Units consider sulfate attack on concrete and utilize degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.

  20. [In vitro dissolution rate of Liuwei Wuling tablet based on biological potency and integrated dissolution].

    PubMed

    Zheng, Juan; Cheng, Ling; Shen, Cheng-ying; Li, Juan-juan; Qiu, Ling; Shen, Gang; Han, Jin; Yuan, Hai-long

    2015-11-01

    To explore the feasibility of chemical and biological methods for evaluating the in vitro dissolution rate of Liuwei Wuling tablet (LWT), this experiment investigated the inhibitory effect of LWT dissolving solutions on LX-2 hepatic stellate cells in 0.1% SDS dissolution medium over different dissolving periods. From these results, the cumulative dissolution rate of LWT was obtained based on the cell inhibitory rate. The dissolution rates of deoxyschizandrin, phillyrin, and specnuezhenide were determined by an HPLC method. A novel self-defined weighting-coefficient approach was created to establish the integrated dissolution rate model. The f2 similarity factor method was then used to evaluate the agreement between the two methods. The results showed that the f2 values for deoxyschizandrin, phillyrin, specnuezhenide, and the integrated dissolution were 61, 43, 61 and 75 respectively, indicating that the dissolution of multi-component integration can fully reflect the biological potency of the whole recipe. The dissolution evaluation method for multicomponent integration based on biological activity is expected to be one of the effective means for in vitro dissolution testing of LWT. PMID:27097413
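
    The f2 similarity factor used for the comparison has a standard closed form. A sketch with invented dissolution profiles:

        import numpy as np

        def f2_similarity(ref, test):
            """FDA/EMA f2 similarity factor for two cumulative dissolution profiles.

            ref, test: percent dissolved at the same time points; f2 >= 50 is
            the usual criterion for declaring two profiles similar."""
            ref, test = np.asarray(ref, float), np.asarray(test, float)
            msd = np.mean((ref - test) ** 2)
            return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

        # identical profiles give f2 = 100; an average gap of ~10% gives f2 ≈ 50
        print(f2_similarity([20, 45, 70, 90], [18, 42, 75, 88]))  # ≈ 73.5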

  1. Philosophy of the Spike: Rate-Based vs. Spike-Based Theories of the Brain

    PubMed Central

    Brette, Romain

    2015-01-01

    Does the brain use a firing rate code or a spike timing code? Considering this controversial question from an epistemological perspective, I argue that progress has been hampered by its problematic phrasing. It takes the perspective of an external observer looking at whether those two observables vary with stimuli, and thereby misses the relevant question: which one has a causal role in neural activity? When rephrased in a more meaningful way, the rate-based view appears as an ad hoc methodological postulate, one that is practical but with virtually no empirical or theoretical support. PMID:26617496

  2. Modeling of atmospheric OH reaction rates using newly developed variable distance weighted zero order connectivity index

    NASA Astrophysics Data System (ADS)

    Markelj, Jernej; Pompe, Matevž

    2016-04-01

    A new variable distance weighted zero order connectivity index was used to develop a structure-activity relationship for modeling the reactivity of the OH radical with alkanes and non-conjugated alkenes in the atmosphere. The proposed model is based on the assumptions that the total reaction rate can be obtained by summing all partial reaction rates and that all reaction sites are interrelated, influencing each other. The results suggest that these assumptions are justified. The model was compared with the EPA-implemented model in the studied application domain and showed superior prediction capabilities. Further, the optimized values of the weights used in our model permit some insight into the mechanisms that govern the OH + alkane/alkene reaction. The most important conclusion is that the branching degree of the forming radical seems to play a major role in site-specific reaction rates. Relative qualitative structural interpretation is possible; e.g., an allylic site is suggested to be much more reactive than even a tertiary sp3 carbon. The novel modeling software MACI, which was developed in our lab and is now available for research purposes, was used for the calculations. Various variable topological indices, which are again starting to be recognized because of their great potential (simplicity, fast calculation, very good correlations, and structural information), were implemented in the program.

  3. A Dynamic Feedback Model for High Repetition Rate LINAC-Driven FELS

    SciTech Connect

    Mellado Munoz, M.; Doolittle, L.; Emma, P.; Huang, G.; Ratti, A.; Serrano, C.; Byrd, J. M.

    2012-05-20

    One of the concepts for the next generation of linac-driven FELs is a CW superconducting linac driving an electron beam with MHz repetition rates. One of the challenges for next-generation FELs is to improve the stability of the X-ray pulses by improving the shot-to-shot stability of the energy, charge, peak current, and timing jitter of the electron beam. A high repetition rate FEL with a CW linac presents an opportunity to use a variety of broadband feedbacks to stabilize the beam parameters. To understand the performance of such a feedback system, we are developing a dynamic model of the machine with a focus on the longitudinal beam properties. The model is being developed as an extension of the LITrack code and includes the dynamics of the beam-cavity interaction, RF feedback, beam-based feedback, and multibunch effects. In this paper, we present a detailed description of this model.

  4. Constraint of the limited information content of discharge measurements on the benefits of rating curve models with increased complexities

    NASA Astrophysics Data System (ADS)

    Van Eerdenbrugh, Katrien; Verhoest, Niko; De Mulder, Tom

    2015-04-01

    Discharge assessment through rating curves is a widespread technique in the field of hydrologic monitoring. In practical applications, this technique often consists of the use of one or multiple power laws, based on rather stringent assumptions concerning the nature of the prevailing flow conditions. In reality, those assumptions are regularly violated, inducing considerable uncertainties in the calculated discharges. It is thus important to estimate the effect of those simplifications when performing an overall uncertainty analysis of rating curve discharges. In this study, different rating curve formulations are compared based on both the results of a hydrodynamic model and measured water levels and discharges. The results of the hydrodynamic model are used to justify the applicability of several rating curve models with increased complexity as alternatives to a single power law equation. With the hydrodynamic model, situations are simulated that correspond to steady state conditions and to minimal effect of downstream boundaries. Comparison of these simulation results with those of measurement-driven simulations leads to an increased understanding of the rating curve dynamics and allows for evaluation of rating curve formulations accounting for the hysteresis and backwater effects that are neglected in power law rating curves. Subsequently, the performance of those rating curve models and the identifiability of their parameters are assessed based on available stage-discharge measurements and their accompanying uncertainties as described in the literature. This assessment is performed with Generalised Likelihood Uncertainty Estimation (GLUE). Rejection criteria to distinguish behavioural from non-behavioural models are defined by uncertainties on both water levels and discharge measurements that envelop measured data points. The results of the hydrodynamic model clearly indicate benefits of adding complexity to the rating curve model, mainly by
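
    A minimal sketch of the single power-law rating curve Q = a*(h - h0)**b that the more complex formulations are measured against, fitted here to hypothetical stage-discharge pairs; a least-squares fit stands in for the GLUE analysis, which is not reproduced:

        import numpy as np
        from scipy.optimize import curve_fit

        def rating_curve(h, a, h0, b):
            """Single power-law rating curve Q = a * (h - h0)**b."""
            return a * (h - h0) ** b

        stage = np.array([0.8, 1.0, 1.3, 1.7, 2.2, 2.8])         # m (hypothetical)
        discharge = np.array([2.1, 4.0, 8.2, 16.5, 31.0, 55.0])  # m3/s (hypothetical)

        (a, h0, b), _ = curve_fit(rating_curve, stage, discharge,
                                  p0=(8.0, 0.3, 1.8),
                                  bounds=((0.0, 0.0, 0.5), (100.0, 0.7, 4.0)))
        print(f"Q = {a:.2f} * (h - {h0:.2f})**{b:.2f}")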

  5. A case mix simulation decision support system model for negotiating hospital rates.

    PubMed

    Hatcher, M E; Connelly, C

    1988-12-01

    The institution of prospective payment systems by many health care insurers has drawn increased attention to case-based financial planning in hospitals. When hospital revenues are directly linked to patient diagnoses rather than to the types and quantities of services supplied to patients, managers must be aware of the financial implications of different case mixes and must be prepared to influence insurers' price structures. A case-based financial planning model is presented here for the purpose of assisting managerial decision making in the strategic areas of case mix planning and pricing. The computerized model characterizes hospitals as product manufacturers, the product being discharged patients. Diagnosis serves to differentiate the "products"; however, diagnoses are grouped by payor and similar treatment cost experiences to create a limited set of managerially meaningful case types. Diagnostic and treatment costs are also aggregated to facilitate the modeling of the hospital production process. The computerized model projects the number of patients of each case-type and total patient volume, based on estimated patient volume growth rates. The model also projects prices and contribution margins for each case-type, as well as total contribution to hospital overhead. Testing the model with a hypothetical example of a hospital strategic planning problem demonstrates the model's potential as a decision-making aid in case mix planning and case-type pricing. It also reveals several model shortcomings that require further developmental effort. PMID:3148679

  6. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H.; Lehman, Sean K.; Goodman, Dennis M.

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  7. Influence of satellite-derived photolysis rates and NOx emissions on Texas ozone modeling

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D. S.; Pour-Biazar, A.; Lamsal, L. N.; White, A.; Xiao, X.; Zhou, W.; Henderson, B. H.; Lash, B.

    2014-09-01

    Uncertain photolysis rates and emission inventories impair the accuracy of state-level ozone (O3) regulatory modeling. Past studies have separately used satellite-observed clouds to correct model-predicted photolysis rates, or satellite-constrained top-down NOx emissions to identify and reduce uncertainties in bottom-up NOx emissions. However, the joint application of multiple satellite-derived model inputs to improve O3 State Implementation Plan (SIP) modeling has rarely been explored. In this study, Geostationary Operational Environmental Satellite (GOES) observations of clouds are applied to derive the photolysis rates, replacing those used in Texas SIP modeling. This changes modeled O3 concentrations by up to 80 ppb and improves O3 simulations by reducing modeled normalized mean bias (NMB) and normalized mean error (NME) by up to 0.1. A sector-based discrete Kalman filter (DKF) inversion approach is incorporated with the Comprehensive Air Quality Model with extensions (CAMx)-Decoupled Direct Method (DDM) model to adjust Texas NOx emissions using a high-resolution Ozone Monitoring Instrument (OMI) NO2 product. The discrepancy between OMI and CAMx NO2 vertical column densities (VCD) is further reduced by increasing the modeled NOx lifetime and adding an artificial amount of NO2 in the upper troposphere. The sector-based DKF inversion tends to scale down area and non-road NOx emissions by 50%, leading to a 2-5 ppb decrease in ground 8 h O3 predictions. Model performance in simulating ground NO2 and O3 is improved using the inverted NOx emissions, with reductions in NMB of 0.25 and 0.04 and reductions in NME of 0.13 and 0.04, respectively. Using both GOES-derived photolysis rates and OMI-constrained NOx emissions together reduces modeled NMB and NME by 0.05, increases the model correlation with ground measurements in O3 simulations, and makes O3 more sensitive to NOx emissions in the O3 non-attainment areas.

  8. Robust linear regression model of Ki-67 for mitotic rate in gastrointestinal stromal tumors

    PubMed Central

    KEMMERLING, RALF; WEYLAND, DENIS; KIESSLICH, TOBIAS; ILLIG, ROMANA; KLIESER, ECKHARD; JÄGER, TARKAN; DIETZE, OTTO; NEUREITER, DANIEL

    2014-01-01

    Risk stratification of gastrointestinal stromal tumors (GISTs) by tumor size, lymph node and metastasis status is crucially affected by mitotic activity. To date, no studies have quantitatively compared mitotic activity in hematoxylin and eosin (H&E)-stained tissue sections with immunohistochemical markers, such as phosphohistone H3 (PHH3) and Ki-67. According to the TNM guidelines, the mitotic count on H&E sections and immunohistochemical PHH3-stained slides has been assessed per 50 high-power fields of 154 specimens of clinically documented GIST cases. The Ki-67-associated proliferation rate was evaluated on three digitalized hot spots using image analysis. The H&E-based mitotic rate was found to correlate significantly better with Ki-67-assessed proliferation activity than with PHH3-assessed proliferation activity (r=0.780; P<0.01). A linear regression model (analysis of variance; P<0.001) allowed reliable predictions of the H&E-associated mitoses based on the Ki-67 expression alone. Additionally, the Ki-67-associated proliferation revealed a higher and significant impact on the recurrence and metastasis rate of the GIST cases than by the classical H&E-based mitotic rate. The results of the present study indicated that the mitotic rate may be reliably and time-efficiently estimated by immunohistochemistry of Ki-67 using only three hot spots. PMID:24527082
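
    A minimal sketch of the paper's central idea, predicting the H&E mitotic count from Ki-67 alone; ordinary least squares on hypothetical paired values stands in for the robust regression actually used:

        import numpy as np

        # Hypothetical paired measurements per GIST case:
        # Ki-67 hot-spot proliferation rate (%) vs. H&E mitoses per 50 HPF
        ki67 = np.array([1.0, 2.5, 4.0, 6.0, 9.5, 14.0, 20.0])
        mitoses = np.array([1, 3, 4, 7, 11, 15, 22])

        slope, intercept = np.polyfit(ki67, mitoses, 1)
        r = np.corrcoef(ki67, mitoses)[0, 1]
        print(f"mitoses ~ {slope:.2f} * Ki67 + {intercept:.2f} (r = {r:.3f})")
        print(round(slope * 8.0 + intercept, 1))  # predicted count for Ki-67 = 8%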

  9. High frame-rate multichannel beam-scanning microscopy based on Lissajous trajectories

    PubMed Central

    Sullivan, Shane Z.; Muir, Ryan D.; Newman, Justin A.; Carlsen, Mark S.; Sreehari, Suhas; Doerge, Chris; Begue, Nathan J.; Everly, R. Michael; Bouman, Charles A.; Simpson, Garth J.

    2014-01-01

    A simple beam-scanning optical design based on Lissajous trajectory imaging is described for achieving up to kHz frame-rate optical imaging on multiple simultaneous data acquisition channels. In brief, two fast-scan resonant mirrors direct the optical beam on a circuitous trajectory through the field of view, with the trajectory repeat-time given by the least common multiplier of the mirror periods. Dicing the raw time-domain data into sub-trajectories combined with model-based image reconstruction (MBIR) 3D in-painting algorithms allows for effective frame-rates much higher than the repeat time of the Lissajous trajectory. Since sub-trajectory and full-trajectory imaging are simply different methods of analyzing the same data, both high-frame rate images with relatively low resolution and low frame rate images with high resolution are simultaneously acquired. The optical hardware required to perform Lissajous imaging represents only a minor modification to established beam-scanning hardware, combined with additional control and data acquisition electronics. Preliminary studies based on laser transmittance imaging and polarization-dependent second harmonic generation microscopy support the viability of the approach both for detection of subtle changes in large signals and for trace-light detection of transient fluctuations. PMID:25321997
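
    A minimal sketch of the trajectory geometry, assuming integer mirror frequencies so that the full-pattern repeat time is the least common multiple of the two mirror periods; dicing the samples into contiguous sub-trajectories mirrors the high-frame-rate analysis, while the MBIR in-painting itself is omitted:

        import math

        fx, fy = 7, 11                        # hypothetical mirror frequencies (Hz)
        repeat_time = 1.0 / math.gcd(fx, fy)  # LCM(1/fx, 1/fy) = 1 s here

        def beam_position(t, ax=1.0, ay=1.0):
            """Beam position at time t for two sinusoidally driven mirrors."""
            return (ax * math.sin(2 * math.pi * fx * t),
                    ay * math.sin(2 * math.pi * fy * t))

        # One full Lissajous cycle, diced into 10 sparse sub-frames
        n = 1000
        samples = [beam_position(i * repeat_time / n) for i in range(n)]
        sub_frames = [samples[i * 100:(i + 1) * 100] for i in range(10)]
        print(repeat_time, len(sub_frames), len(sub_frames[0]))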

  10. Radiocarbon Based Ages and Growth Rates: Hawaiian Deep Sea Corals

    SciTech Connect

    Roark, E B; Guilderson, T P; Dunbar, R B; Ingram, B L

    2006-01-13

    The radial growth rates and ages of three different groups of Hawaiian deep-sea 'corals' were determined using radiocarbon measurements. Specimens of Corallium secundum, Gerardia sp., and Leiopathes glaberrima were collected from 450 ± 40 m at the Makapuu deep-sea coral bed using a submersible (PISCES V). Specimens of Antipathes dichotoma were collected at 50 m off Lahaina, Maui. The primary source of carbon to the calcitic C. secundum skeleton is in situ dissolved inorganic carbon (DIC). Using bomb ¹⁴C time markers we calculate radial growth rates of ~170 µm/yr and ages of 68-75 years on specimens as tall as 28 cm of C. secundum. Gerardia sp., A. dichotoma, and L. glaberrima have proteinaceous skeletons and labile particulate organic carbon (POC) is their primary source of architectural carbon. Using ¹⁴C we calculate a radial growth rate of 15 µm/yr and an age of 807 ± 30 years for a live-collected Gerardia sp., showing that these organisms are extremely long lived. Inner and outer ¹⁴C measurements on four sub-fossil Gerardia spp. samples produce similar growth rate estimates (range 14-45 µm/yr) and ages (range 450-2742 years) as observed for the live-collected sample. Similarly, with a growth rate of <10 µm/yr and an age of ~2377 years, L. glaberrima at the Makapuu coral bed is also extremely long lived. In contrast, the shallow-collected A. dichotoma samples yield growth rates ranging from 130 to 1,140 µm/yr. These results show that Hawaiian deep-sea corals grow more slowly and are older than previously thought.

  11. Automated Prediction of Catalytic Mechanism and Rate Law Using Graph-Based Reaction Path Sampling.

    PubMed

    Habershon, Scott

    2016-04-12

    In a recent article [J. Chem. Phys. 2015, 143, 094106], we introduced a novel graph-based sampling scheme which can be used to generate chemical reaction paths in many-atom systems in an efficient and highly automated manner. The main goal of this work is to demonstrate how this approach, when combined with direct kinetic modeling, can be used to determine the mechanism and phenomenological rate law of a complex catalytic cycle, namely cobalt-catalyzed hydroformylation of ethene. Our graph-based sampling scheme generates 31 unique chemical products and 32 unique chemical reaction pathways; these sampled structures and reaction paths enable automated construction of a kinetic network model of the catalytic system when combined with density functional theory (DFT) calculations of free energies and resultant transition-state theory rate constants. Direct simulations of this kinetic network across a range of initial reactant concentrations enables determination of both the reaction mechanism and the associated rate law in an automated fashion, without the need for either presupposing a mechanism or making steady-state approximations in kinetic analysis. Most importantly, we find that the reaction mechanism which emerges from these simulations is exactly that originally proposed by Heck and Breslow; furthermore, the simulated rate law is also consistent with previous experimental and computational studies, exhibiting a complex dependence on carbon monoxide pressure. While the inherent errors of using DFT simulations to model chemical reactivity limit the quantitative accuracy of our calculated rates, this work confirms that our automated simulation strategy enables direct analysis of catalytic mechanisms from first principles. PMID:26938837
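
    A minimal sketch of the direct-kinetic-modeling step on a toy A <-> B -> C network: Eyring transition-state-theory rate constants are derived from assumed free-energy barriers and fed to an ODE integration. Barrier values are hypothetical and unrelated to the actual hydroformylation cycle:

        import numpy as np
        from scipy.integrate import solve_ivp

        KB_OVER_H = 2.083661e10  # Boltzmann/Planck, s^-1 K^-1
        R = 8.314462             # gas constant, J mol^-1 K^-1

        def tst_rate(dg_kj, T=298.15):
            """Eyring TST rate constant from a free-energy barrier in kJ/mol."""
            return KB_OVER_H * T * np.exp(-dg_kj * 1e3 / (R * T))

        k1, k1r, k2 = tst_rate(70.0), tst_rate(80.0), tst_rate(75.0)

        def rhs(t, y):
            a, b, c = y
            return [-k1 * a + k1r * b, k1 * a - (k1r + k2) * b, k2 * b]

        sol = solve_ivp(rhs, (0.0, 3600.0), [1.0, 0.0, 0.0], method="LSODA")
        print(sol.y[:, -1])  # species fractions after one hour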

  12. Model-based safety assessments

    SciTech Connect

    Carlson, D.D.; Jones, T.R.

    1998-04-01

    Sandia National Laboratories performs systems analysis of high risk, high consequence systems. In particular, Sandia is responsible for the engineering of nuclear weapons, exclusive of the explosive physics package. In meeting this responsibility, Sandia has developed fundamental approaches to safety and a process for evaluating safety based on modeling and simulation. These approaches provide confidence in the safety of our nuclear weapons. Similar concepts may be applied to improve the safety of other high consequence systems.

  13. High removal rate laser-based coating removal system

    DOEpatents

    Matthews, Dennis L.; Celliers, Peter M.; Hackel, Lloyd; Da Silva, Luiz B.; Dane, C. Brent; Mrowka, Stanley

    1999-11-16

    A compact laser system that removes surface coatings (such as paint, dirt, etc.) at a removal rate as high as 1000 ft²/hr or more without damaging the surface. A high repetition rate laser with multiple amplification passes propagating through at least one optical amplifier is used, along with a delivery system consisting of a telescoping and articulating tube which also contains an evacuation system for simultaneously sweeping up the debris produced in the process. The amplified beam can be converted to an output beam by passively switching the polarization of at least one amplified beam. The system also has a personal safety system which protects against accidental exposures.

  14. Mass Customization Production Planning System by Advance Demand Information Based on Unfulfilled-order-rate

    NASA Astrophysics Data System (ADS)

    Ueno, Nobuyuki; Kawasaki, Masaya; Okuhara, Koji

    In this paper, we model the ‘Naiji System’, a unique form of cooperation between a manufacturer and its suppliers in Japan. We propose a Mass Customization Production Planning & Management System (MCPS) based on the unfulfilled-order rate, using Advance Demand Information known as ‘Naiji’. The model is formulated as a nonlinear stochastic programming problem that minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and several linear production constraints. We propose the new upper bound SOn (ρmin) to estimate the unfulfilled-order rate more strictly. A procedure to find a good solution is developed by repeatedly solving a linear programming problem, following the basic solution strategy of relaxation. The computational load of obtaining a solution with the proposed indicator is shown to be very small. Finally, the practical availability of the procedure is demonstrated.

  15. Calibration of reaction rates for the CREST reactive-burn model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline

    2015-06-01

    In recent years, the hydrocode-based CREST reactive-burn model has had success in modelling a range of shock initiation and detonation propagation phenomena in polymer bonded explosives. CREST uses empirical reaction rates that depend on a function of the entropy of the non-reacted explosive, allowing the effects of initial temperature, porosity and double-shock desensitisation to be simulated without any modifications to the model. Until now, the sixteen reaction-rate coefficients have been manually calibrated by trial and error, using hydrocode simulations of a subset of sustained-shock initiation gas-gun experiments and the detonation size-effect curve for the explosive. This paper will describe the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using the well-established Particle Swarm Optimisation (PSO) technique. The automatic method submits multiple hydrocode simulations for each "particle" and analyses the results to determine the "misfit" to gas-gun and size-effect data. Over ~40 "generations", the PSO code finds a best set of reaction-rate coefficients that minimises the misfit. The method will be demonstrated by developing a new CREST model for EDC32, a conventional high explosive.
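
    A minimal generic PSO sketch of the calibration loop; the misfit here is a stand-in distance to a known optimum, where the real workflow would instead run hydrocode simulations and score them against gas-gun and size-effect data:

        import random

        def pso(misfit, bounds, n_particles=20, n_gens=40, w=0.7, c1=1.5, c2=1.5):
            """Particle swarm minimisation of misfit(x) within box bounds."""
            dim = len(bounds)
            xs = [[random.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(n_particles)]
            vs = [[0.0] * dim for _ in range(n_particles)]
            pbest = [list(x) for x in xs]
            pbest_f = [misfit(x) for x in xs]
            g = min(range(n_particles), key=lambda i: pbest_f[i])
            gbest, gbest_f = list(pbest[g]), pbest_f[g]
            for _ in range(n_gens):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vs[i][d] = (w * vs[i][d]
                                    + c1 * r1 * (pbest[i][d] - xs[i][d])
                                    + c2 * r2 * (gbest[d] - xs[i][d]))
                        lo, hi = bounds[d]
                        xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
                    f = misfit(xs[i])
                    if f < pbest_f[i]:
                        pbest[i], pbest_f[i] = list(xs[i]), f
                        if f < gbest_f:
                            gbest, gbest_f = list(xs[i]), f
            return gbest, gbest_f

        target = [0.3, 1.7]  # pretend these are the "true" coefficients
        best, err = pso(lambda x: sum((a - b) ** 2 for a, b in zip(x, target)),
                        bounds=[(0.0, 1.0), (0.0, 5.0)])
        print([round(v, 3) for v in best], round(err, 6))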

  16. Availability analysis of subsea blowout preventer using Markov model considering demand rate

    NASA Astrophysics Data System (ADS)

    Kim, Sunghee; Chung, Soyeon; Yang, Youngsoon

    2014-12-01

    The availability of subsea blowout preventers (BOPs) in the Gulf of Mexico Outer Continental Shelf (GoM OCS) is investigated using a Markov method. An updated β-factor model by SINTEF is used for common-cause failures in multiple redundant systems. Coefficient values of failure rates for the Markov model are derived using the β-factor model of the PDS (reliability of computer-based safety systems, Norwegian acronym) method. The blind shear ram preventer system of the subsea BOP incorporates a demand rate to better reflect reality. Markov models considering the demand rate for one or two components are introduced. Two data sets are compared for the GoM OCS. The results show that three or four pipe ram preventers give similar availabilities, but redundant blind shear ram preventers or annular preventers enhance the availability of the subsea BOP. Sensitivity analysis also shows that the control systems (PODs) and connectors are the components contributing most to improving the availability of subsea BOPs.
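
    A minimal continuous-time Markov sketch in the same spirit: a duplicated component with independent failures, a β-factor common-cause fraction, and repair. The three-state structure and all rates are illustrative, not the paper's BOP model:

        import numpy as np

        lam, beta, mu = 1e-4, 0.05, 1e-2  # hypothetical per-hour rates

        # States: 0 = both units up, 1 = one up, 2 = both down
        Q = np.array([
            [-(2 * (1 - beta) * lam + beta * lam), 2 * (1 - beta) * lam, beta * lam],
            [mu, -(mu + lam), lam],
            [0.0, mu, -mu],
        ])

        # Stationary distribution: solve pi Q = 0 with sum(pi) = 1
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(round(pi[0] + pi[1], 6))  # availability: at least one unit up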

  17. Theoretical modeling of the effect of polymer chain immobilization rates on holographic recording in photopolymers.

    PubMed

    Mackey, Dana; O'Reilly, Paul; Naydenova, Izabela

    2016-05-01

    This paper introduces an improved mathematical model for holographic grating formation in an acrylamide-based photopolymer, which consists of partial differential equations derived from physical laws. The model is based on the two-way diffusion theory of [Appl. Opt. 43, 2900 (2004), doi:10.1364/AO.43.002900], which assumes short polymer chains are free to diffuse, and generalizes a similar model presented in [J. Opt. Soc. Am. B 27, 197 (2010), doi:10.1364/JOSAB.27.000197] by introducing an immobilization rate governed by chain growth and cross-linking. Numerical simulations were carried out in order to investigate the behavior of the photopolymer system for short and long exposures, with particular emphasis on the effect of recording parameters (such as illumination frequency and intensity), as well as material permeability, on refractive index modulation, refractive index profile, and grating distortion. The model reproduces many well-known experimental observations, such as the decrease of refractive index modulation at high spatial frequencies and appearance of higher harmonics in the refractive index profile when the diffusion rate is much slower than the polymerization rate. These properties are supported by a theoretical investigation which uses perturbation techniques to approximate the solution over various time scales. PMID:27140889

  18. Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison

    SciTech Connect

    Dahlen, Oda; van Erp, Titus S.

    2015-06-21

    Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences compared to previous studies that only consider DNA homopolymers and DNA sequences containing an equal amount of weak AT- and strong GC-base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC-base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that present parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next generation models that have higher predictive power than present ones.

  19. A review of air exchange rate models for air pollution exposure assessments.

    PubMed

    Breen, Michael S; Schultz, Bradley D; Sohn, Michael D; Long, Thomas; Langstaff, John; Williams, Ronald; Isaacs, Kristin; Meng, Qing Yu; Stallings, Casson; Smith, Luther

    2014-11-01

    A critical aspect of air pollution exposure assessments is estimation of the air exchange rate (AER) for various buildings where people spend their time. The AER, which is the rate of exchange of indoor air with outdoor air, is an important determinant for entry of outdoor air pollutants and for removal of indoor-emitted air pollutants. This paper presents an overview and critical analysis of the scientific literature on empirical and physically based AER models for residential and commercial buildings; the models highlighted here are feasible for exposure assessments as extensive inputs are not required. Models are included for the three types of airflows that can occur across building envelopes: leakage, natural ventilation, and mechanical ventilation. Guidance is provided to select the preferable AER model based on available data, desired temporal resolution, types of airflows, and types of buildings included in the exposure assessment. For exposure assessments with some limited building leakage or AER measurements, strategies are described to reduce AER model uncertainty. This review will facilitate the selection of AER models in support of air pollution exposure assessments. PMID:23715084
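
    A minimal sketch of why the AER drives exposure: in a well-mixed single-zone mass balance with no indoor sources or sinks, dC_in/dt = AER*(C_out - C_in), so the indoor concentration relaxes toward the outdoor concentration at the air exchange rate (all values hypothetical):

        import math

        def indoor_concentration(t_hours, aer, c_out, c0=0.0):
            """C_in(t) = C_out + (C0 - C_out) * exp(-AER * t)."""
            return c_out + (c0 - c_out) * math.exp(-aer * t_hours)

        # Outdoor pollutant (40 ug/m3) penetrating a tight vs. leaky building
        for aer in (0.2, 1.5):  # air changes per hour
            print(aer, round(indoor_concentration(2.0, aer, c_out=40.0), 1))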

  20. Determination of the polymerisation rate of a low-toxicity diacetone acrylamide-based holographic photopolymer using Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Cody, D.; Mihaylova, E.; O'Neill, L.; Naydenova, I.

    2015-10-01

    The polymerisation rate of a low-toxicity Diacetone Acrylamide (DA)-based photopolymer has been measured for the first time using Raman spectroscopy. A polymerisation rate of 0.020 s⁻¹ was obtained for the DA photopolymer by modelling the polymerisation reaction dynamics as a stretched exponential, or Kohlrausch, decay function. This is significantly lower than the polymerisation rate of 0.100 s⁻¹ measured for the well-known Acrylamide (AA)-based photopolymer composition. The effect of the additive glycerol on the polymerisation rate of the DA-based photopolymer has also been investigated. The inclusion of glycerol is observed to increase the rate of polymerisation of the DA photopolymer by up to 60%. It is also observed that the polymerisation rate of the DA photopolymer is less dependent on the recording intensity when glycerol is present.
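
    A minimal sketch of extracting a polymerisation rate by fitting a stretched-exponential (Kohlrausch) decay, a*exp(-(k*t)**beta), to normalised monomer-band intensities; the data points here are invented for illustration:

        import numpy as np
        from scipy.optimize import curve_fit

        def kohlrausch(t, a, k, beta):
            """Stretched-exponential decay of the unreacted-monomer signal."""
            return a * np.exp(-((k * t) ** beta))

        t = np.array([0, 10, 20, 40, 60, 90, 120], dtype=float)  # s
        intensity = np.array([1.00, 0.84, 0.70, 0.52, 0.40, 0.29, 0.22])

        (a, k, beta), _ = curve_fit(kohlrausch, t, intensity, p0=(1.0, 0.02, 1.0),
                                    bounds=((0.5, 1e-4, 0.2), (1.5, 1.0, 2.0)))
        print(f"k = {k:.3f} s^-1, stretch exponent beta = {beta:.2f}")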

  1. A liquid cooled garment temperature controller based on sweat rate

    NASA Technical Reports Server (NTRS)

    Chambers, A. B.; Blackaby, J. R.

    1972-01-01

    An automatic controller for liquid cooled space suits is reported that utilizes human sweat rate as the primary input signal. The controller is so designed that the coolant inlet temperature is inversely proportional to the subject's latent heat loss as evidenced by evaporative water loss.

  2. An Empirical Approach to Determining Employee Deviance Base Rates.

    ERIC Educational Resources Information Center

    Slora, Karen B.

    Employee deviance may reflect either acts of employee theft or of production deviance. Employee theft refers to the unauthorized taking of cash, merchandise, or property. Production deviance refers to counterproductive activities which serve to slow the rate or quality of output, such as intentionally doing slow or sloppy work or using drugs on…

  3. What Explains Usage of Mobile Physician-Rating Apps? Results From a Web-Based Questionnaire

    PubMed Central

    Terlutter, Ralf; Röttl, Johanna

    2014-01-01

    Background Consumers are increasingly accessing health-related information via mobile devices. Recently, several apps to rate and locate physicians have been released in the United States and Germany. However, knowledge about what kinds of variables explain usage of mobile physician-rating apps is still lacking. Objective This study analyzes factors influencing the adoption of and willingness to pay for mobile physician-rating apps. A structural equation model was developed based on the Technology Acceptance Model and the literature on health-related information searches and usage of mobile apps. Relationships in the model were analyzed for moderating effects of physician-rating website (PRW) usage. Methods A total of 1006 German patients who had visited a general practitioner at least once in the 3 months before the beginning of the survey were randomly selected and surveyed. A total of 958 usable questionnaires were analyzed by partial least squares path modeling and moderator analyses. Results The suggested model yielded a good model fit. We found that perceived ease of use (PEOU) of the Internet to gain health-related information, the sociodemographic variables age and gender, and the psychographic variables digital literacy, feelings about the Internet and other Web-based applications in general, patients’ value of health-related knowledgeability, as well as the information-seeking behavior variables regarding the amount of daily private Internet use for health-related information, frequency of using apps for health-related information in the past, and attitude toward PRWs significantly affected the adoption of mobile physician-rating apps. The sociodemographic variable age, but not gender, and the psychographic variables feelings about the Internet and other Web-based applications in general and patients’ value of health-related knowledgeability, but not digital literacy, were significant predictors of willingness to pay. Frequency of

  4. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.

  5. High Strain Rate Deformation Modeling of a Polymer Matrix Composite. Part 2; Composite Micromechanical Model

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Stouffer, Donald C.

    1998-01-01

    Recently applications have exposed polymer matrix composite materials to very high strain rate loading conditions, requiring an ability to understand and predict the material behavior under these extreme conditions. In this second paper of a two part report, a three-dimensional composite micromechanical model is described which allows for the analysis of the rate dependent, nonlinear deformation response of a polymer matrix composite. Strain rate dependent inelastic constitutive equations utilized to model the deformation response of a polymer are implemented within the micromechanics method. The deformation response of two representative laminated carbon fiber reinforced composite materials with varying fiber orientation has been predicted using the described technique. The predicted results compare favorably to both experimental values and the response predicted by the Generalized Method of Cells, a well-established micromechanics analysis method.

  6. Analytical Modeling of the High Strain Rate Deformation of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Roberts, Gary D.; Gilat, Amos

    2003-01-01

    The results presented here are part of an ongoing research program to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. State variable constitutive equations originally developed for metals have been modified in order to model the nonlinear, strain rate dependent deformation of polymeric matrix materials. To account for the effects of hydrostatic stresses, which are significant in polymers, the classical J2 plasticity theory definitions of effective stress and effective plastic strain are modified by applying variations of the Drucker-Prager yield criterion. To verify the revised formulation, the shear and tensile deformation of a representative toughened epoxy is analyzed across a wide range of strain rates (from quasi-static to high strain rates) and the results are compared to experimentally obtained values. For the analyzed polymers, both the tensile and shear stress-strain curves computed using the analytical model correlate well with values obtained through experimental tests. The polymer constitutive equations are implemented within a strength of materials based micromechanics method to predict the nonlinear, strain rate dependent deformation of polymer matrix composites. In the micromechanics, the unit cell is divided into a number of independently analyzed slices, and laminate theory is then applied to obtain the effective deformation of the unit cell. The composite mechanics are verified by analyzing the deformation of a representative polymer matrix composite (composed using the representative polymer analyzed for the correlation of the polymer constitutive equations) for several fiber orientation angles across a variety of strain rates. The computed values compare favorably to experimentally obtained results.

  7. A model study with light-dependent mortality rates of copepod stages

    NASA Astrophysics Data System (ADS)

    Neumann, Thomas; Kremp, Christine

    2005-06-01

    This paper is based on an advanced ecosystem model of the Baltic Sea (ERGOM [J. Mar. Sys. 25 (3-4) (2005) 405]), but with an increased resolution of the zooplankton stage variable [J. Plankton Res. 23 (2001) 1217; ICES Marine Science 219 (2003) 208]. The model copepods are represented by five stages: eggs, an aggregated variable of nauplii, two aggregated groups of copepodites, and adults. The transfer among the stages, i.e., hatching, molting and reproduction, is controlled by food availability and temperature. As usual, the model food web is truncated at the level of zooplankton. The study explores the effects of different parametrizations of zooplankton mortality and looks in particular at light-dependent rates. The light climate may serve as a proxy for the effects of visual feeding by fish larvae and fish. Different choices of the mortality parameters can result in remarkable differences in abundance and biomass of the model zooplankton and in the timing of its development. It is found that the different choices of mortality affect the development of populations in several ways: relatively small initial differences in abundance at the beginning of the spring bloom are important for the development of the model populations. Higher mortality rates matter less under food-rich conditions than under scarce resources. At low phytoplankton levels, the individual development of the copepods through the stages can be faster for elevated mortality rates, because fewer animals then have to share the available food.

  8. Study of tissue oxygen supply rate in a macroscopic photodynamic therapy singlet oxygen model

    PubMed Central

    Zhu, Timothy C.; Liu, Baochang; Penjweini, Rozhin

    2015-01-01

    Abstract. An appropriate expression for the oxygen supply rate (Γs) is required for the macroscopic modeling of the complex mechanisms of photodynamic therapy (PDT). It is unrealistic to model the actual heterogeneous tumor microvascular networks coupled with the PDT processes because of the large computational requirement. In this study, a theoretical microscopic model based on uniformly distributed Krogh cylinders is used to calculate Γs = g(1 − [3O2]/[3O2]0), which can replace the complex modeling of blood vasculature while maintaining a reasonable resemblance to reality; g is the maximum oxygen supply rate and [3O2]/[3O2]0 is the volume-average tissue oxygen concentration normalized to its value prior to PDT. The model incorporates kinetic equations of oxygen diffusion and convection within capillaries and oxygen saturation from oxyhemoglobin. Oxygen supply to the tissue is via diffusion from the uniformly distributed blood vessels. Oxygen can also diffuse along the radius and the longitudinal axis of the cylinder within tissue. The relations of Γs to [3O2]/[3O2]0 are examined for a biologically reasonable range of the physiological parameters for the microvasculature and several light fluence rates (ϕ). The results show a linear relationship between Γs and [3O2]/[3O2]0, independent of ϕ and photochemical parameters; the obtained g ranges from 0.4 to 1390 μM/s. PMID:25741665
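
    A minimal sketch of the fitted linear supply term, with a maximum supply rate picked arbitrarily from inside the reported 0.4-1390 μM/s range:

        def oxygen_supply_rate(o2_ratio, g_max=700.0):
            """Gamma_s = g * (1 - [3O2]/[3O2]0); g_max here is hypothetical."""
            return g_max * (1.0 - o2_ratio)

        # Supply vanishes at baseline oxygenation, is maximal at full depletion
        for ratio in (1.0, 0.5, 0.0):
            print(ratio, oxygen_supply_rate(ratio))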

  9. Elliptic-blending second-moment turbulence closure using an algebraic anisotropic dissipation rate tensor model

    NASA Astrophysics Data System (ADS)

    Shin, Jong-Keun; Seo, Jeong-Sik; Choi, Young-Don

    2009-06-01

    This study describes the amendment of an algebraic anisotropic dissipation rate model (ADRM) and its application to various turbulent flows to test the model's performance. Modeling anisotropies for the turbulence dissipation rate is considered by an analysis of the exact transport equation for the dissipation rate tensor. The second-moment closure, which is based on the explicit amended ADRM, is proposed and it is closely linked to the elliptic-blending model that is used for the prediction of Reynolds stresses. To develop and calibrate the present elliptic-blending second-moment closure that uses the amended ADRM, firstly, the distributions of both the mean velocity and Reynolds stress are solved for flows in a fully developed non-rotating channel and a straight square duct. And then, the fully developed turbulent flows in a rotating channel and a rotating straight square duct are predicted to test the ability of the explicit amended ADRM that is combined with the rotation effect. The prediction results are directly compared with the DNS and the large-eddy simulation (LES) to assess the performance of the new model predictions and to show their reasonable agreement with the DNS and LES data for all the flow fields that are analyzed for the present study. This paper is a modified version of the original article from the Proceedings of the 5th International Symposium on Turbulence and Shear Flow Phenomena held in Munich, Germany on 27-29 August 2007.

  10. High removal rate laser-based coating removal system

    SciTech Connect

    Matthews, D.L.; Celliers, P.M.; Hackel, L.; Da Silva, L.B.; Dane, C.B.; Mrowka, S.

    1999-11-16

    A compact laser system is disclosed that removes surface coatings (such as paint, dirt, etc.) at a removal rate as high as 1,000 ft²/hr or more without damaging the surface. A high repetition rate laser with multiple amplification passes propagating through at least one optical amplifier is used, along with a delivery system consisting of a telescoping and articulating tube which also contains an evacuation system for simultaneously sweeping up the debris produced in the process. The amplified beam can be converted to an output beam by passively switching the polarization of at least one amplified beam. The system also has a personal safety system which protects against accidental exposures.

  11. An Evaluation of Lightning Flash Rate Parameterizations Based on Observations of Colorado Storms during DC3

    NASA Astrophysics Data System (ADS)

    Basarab, B.; Fuchs, B.; Rutledge, S. A.

    2013-12-01

    Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare

  12. Infant breathing rate counter based on variable resistor for pneumonia

    NASA Astrophysics Data System (ADS)

    Sakti, Novi Angga; Hardiyanto, Ardy Dwi; La Febry Andira R., C.; Camelya, Kesa; Widiyanti, Prihartini

    2016-03-01

    Pneumonia is one of the leading causes of death in newborn babies in Indonesia. According to WHO in 2002, the breathing rate is a very important index in identifying symptoms of pneumonia. In Community Health Centers, nurses count breaths with a stopwatch for exactly one minute. Miscalculations occur because of the long period of concentration required and the need to focus on two objects at once. These calculation errors can cause a baby who should be admitted to the hospital to be attended only at home. Therefore, an accurate breathing rate counter at the Community Health Center level is necessary. In this work, the resistance change of a variable resistor is used as a breathing rate counter. A resistance change in a voltage divider produces a voltage change; if the variable resistance moves periodically, the voltage changes periodically too. The voltage changes are counted by software in the microcontroller. Each millimetre of shift at the variable resistor produces an average voltage change of 0.96. The software counts the number of waves generated by the shifting resistor.
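
    A minimal sketch of the counting step the microcontroller software performs: tally rising crossings of a midpoint threshold in the sampled voltage-divider signal. The simulated signal, sampling rate, and threshold are all hypothetical:

        import math

        def count_breaths(voltages, threshold):
            """Count rising crossings of the threshold."""
            return sum(1 for prev, cur in zip(voltages, voltages[1:])
                       if prev < threshold <= cur)

        fs, duration_s, breaths_per_min = 50, 60, 42
        signal = [2.5 + 0.96 * math.sin(2 * math.pi * (breaths_per_min / 60.0) * i / fs)
                  for i in range(fs * duration_s)]
        print(count_breaths(signal, threshold=2.5))  # close to breaths_per_min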

  13. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.

  15. On Optimizing H. 264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics

    NASA Astrophysics Data System (ADS)

    Zhu, Zhongjie; Wang, Yuer; Bai, Yongqiang; Jiang, Gangyi

    2010-12-01

    The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved in two respects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, drawing on existing physiological and psychological research findings on human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS), such as contrast sensitivity, multichannel theory, and the masking effect. Experimental results show that the improved algorithm simultaneously enhances the overall subjective visual quality and improves the rate control precision effectively.

  16. Modeling changes in glucose and glycerol rates of appearance when true basal rates of appearance cannot be readily determined.

    PubMed

    Pyle, Laura; Bergman, Bryan C; Nadeau, Kristen J; Cree-Green, Melanie

    2016-03-01

    Advancing diabetes care requires accurate physiological assessments. Hyperinsulinemic clamps with stable isotope tracers can simultaneously measure insulin's ability to suppress lipolysis and hepatic glucose release. Traditionally, these methods require an assessment of basal glucose and glycerol rate of appearance (Ra). Basal Ra is challenging to measure in insulin-dependent diabetes, where exogenous insulin required to maintain normoglycemia can raise peripheral insulin concentrations sufficiently to suppress basal Ra. Thus we identified two alternative statistical approaches to describe changes in glucose and glycerol Ra that are less reliant on basal assessments. Sixteen youths (4 type 1 diabetic, 4 type 2 diabetic, 4 lean controls, and 4 obese nondiabetic) underwent a four-phase ("basal" and 10, 16, and 80 mU·m⁻²·min⁻¹) hyperinsulinemic euglycemic clamp with glucose and glycerol tracers. Glucose and glycerol Ra were calculated per phase. A statistical method, the standard two-stage (STS) algorithm, was applied to the individual log insulin vs. Ra curves to calculate a single predicted Ra value. A population-based mixed-effects model (MEM) compared the group average Ra with log insulin curves and described individual deviations from group means and was used to calculate individual predicted Ra. Both models were applied to the participant data, and predicted Ra values at the mean insulin concentration per phase (10 for glycerol, 16 for glucose) were calculated, with good agreement between observed and predicted values. In our data set, the MEM was better able to detect group differences. Both STS and MEM can model lipolysis and endogenous glucose release in insulin-dependent states when basal Ra cannot be accurately measured. PMID:26714848

  17. Process-Based Modeling of Constructed Wetlands

    NASA Astrophysics Data System (ADS)

    Baechler, S.; Brovelli, A.; Rossi, L.; Barry, D. A.

    2007-12-01

    Constructed wetlands (CWs) are widespread facilities for wastewater treatment. In subsurface flow wetlands, contaminated wastewater flows through a porous matrix, where oxidation and detoxification phenomena occur. Despite the large number of working CWs, system design and optimization are still mainly based upon empirical equations or simplified first-order kinetics. This results from an incomplete understanding of the system functioning, and may in turn hinder the performance and effectiveness of the treatment process. As a result, CWs are often considered not suitable to meet high water quality-standards, or to treat water contaminated with recalcitrant anthropogenic contaminants. To date, only a limited number of detailed numerical models have been developed and successfully applied to simulate constructed wetland behavior. Among these, one of the most complete and powerful is CW2D, which is based on Hydrus2D. The aim of this work is to develop a comprehensive simulator tailored to model the functioning of horizontal flow constructed wetlands and in turn provide a reliable design and optimization tool. The model is based upon PHWAT, a general reactive transport code for saturated flow. PHWAT couples MODFLOW, MT3DMS and PHREEQC-2 using an operator-splitting approach. The use of PHREEQC to simulate reactions allows great flexibility in simulating biogeochemical processes. The biogeochemical reaction network is similar to that of CW2D, and is based on the Activated Sludge Model (ASM). Kinetic oxidation of carbon sources and nutrient transformations (nitrogen and phosphorous primarily) are modeled via Monod-type kinetic equations. Oxygen dissolution is accounted for via a first-order mass-transfer equation. While the ASM model only includes a limited number of kinetic equations, the new simulator permits incorporation of an unlimited number of both kinetic and equilibrium reactions. Changes in pH, redox potential and surface reactions can be easily incorporated
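
    A minimal sketch of the Monod-type kinetics used for the carbon and nutrient transformations, integrated for one hypothetical substrate-biomass pair (all parameter values invented):

        from scipy.integrate import solve_ivp

        def monod_rhs(t, y, mu_max=6.0, K_s=20.0, Y=0.5):
            """Substrate S consumed by biomass X at a Monod rate."""
            S, X = y
            growth = mu_max * S / (K_s + S) * X  # biomass growth, mg/L/day
            return [-growth / Y, growth]         # yield Y converts S to X

        # 100 mg/L COD and 10 mg/L biomass, integrated over one day
        sol = solve_ivp(monod_rhs, (0.0, 1.0), [100.0, 10.0])
        S_end, X_end = sol.y[:, -1]
        print(round(S_end, 2), round(X_end, 2))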

  18. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For this investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived with photogrammetric processing software simply by using images from the community, without visiting the site.

  19. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    NASA Astrophysics Data System (ADS)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is carried out based on the results.

  20. Predicting key malaria transmission factors, biting and entomological inoculation rates, using modelled soil moisture in Kenya.

    PubMed

    Patz, J A; Strzepek, K; Lele, S; Hedden, M; Greene, S; Noden, B; Hay, S I; Kalkstein, L; Beier, J C

    1998-10-01

    While malaria transmission varies seasonally, large inter-annual heterogeneity of malaria incidence occurs. Variability in entomological parameters, biting rates and entomological inoculation rates (EIR) has been strongly associated with attack rates in children. The goal of this study was to assess the weather's impact on weekly biting and EIR in the endemic area of Kisian, Kenya. Entomological data collected by the U.S. Army from March 1986 through June 1988 at Kisian, Kenya were analysed with concurrent weather data from nearby Kisumu airport. A soil moisture model of surface-water availability was used to combine multiple weather parameters with landcover and soil features to improve disease prediction. Modelling soil moisture substantially improved prediction of biting rates compared to rainfall; soil moisture lagged two weeks explained up to 45% of An. gambiae biting variability, compared to 8% for raw precipitation. For An. funestus, soil moisture explained 32% of the variability, peaking after a 4-week lag. The interspecies difference in response to soil moisture was significant (P < 0.00001). A satellite normalized differential vegetation index (NDVI) of the study site yielded a similar correlation (r = 0.42 for An. gambiae). Modelled soil moisture accounted for up to 56% of the variability of An. gambiae EIR, peaking at a lag of six weeks. The relationship between temperature and An. gambiae biting rates was less robust; the correlation was r = -0.20 for maximum temperature and r = 0.12 for minimum temperature after lagging one week. The benefits of hydrological modelling are compared to raw weather parameters and to satellite NDVI. These findings can improve both current malaria risk assessments and those based on El Niño forecasts or global climate change model projections. PMID:9809915

  1. A kinetic model for estimating net photosynthetic rates of cos lettuce leaves under pulsed light.

    PubMed

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2015-04-01

    Time-averaged net photosynthetic rate (Pn) under pulsed light (PL) is known to be affected by the PL frequency and duty ratio, even though the time-averaged photosynthetic photon flux density (PPFD) is unchanged. This phenomenon can be explained by considering that photosynthetic intermediates (PIs) are pooled during light periods and then consumed by partial photosynthetic reactions during dark periods. In this study, we developed a kinetic model to estimate Pn of cos lettuce (Lactuca sativa L. var. longifolia) leaves under PL based on the dynamics of the amount of pooled PIs. The model inputs are average PPFD, duty ratio, and frequency; the output is Pn. The rates of both PI accumulation and consumption at a given moment are assumed to be dependent on the amount of pooled PIs at that point. Required model parameters and three explanatory variables (average PPFD, frequency, and duty ratio) were determined for the simulation using Pn values under PL based on several combinations of the three variables. The model simulation for various PL levels with a wide range of time-averaged PPFDs, frequencies, and duty ratios further demonstrated that Pn under PL with high frequencies and duty ratios was comparable to, but did not exceed, Pn under continuous light, and also showed that Pn under PL decreased as either frequency or duty ratio was decreased. The developed model can be used to estimate Pn under various light environments where PPFD changes cyclically. PMID:25736464
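
    A minimal sketch of the pooled-intermediate mechanism: a saturable PI pool fills during light periods and its consumption serves as the Pn proxy. The saturation form and all rate constants are illustrative, not the paper's fitted model:

        def mean_pn(avg_ppfd, frequency, duty, k_in=0.01, k_out=2.0,
                    pool_max=1.0, t_total=20.0, dt=1e-4):
            """Time-averaged PI consumption under pulsed light."""
            period = 1.0 / frequency
            pulse_ppfd = avg_ppfd / duty  # PPFD during the light phase
            pool = consumed = 0.0
            for i in range(int(t_total / dt)):
                t = i * dt
                light_on = (t % period) < duty * period
                inflow = (k_in * pulse_ppfd * (1.0 - pool / pool_max)
                          if light_on else 0.0)
                outflow = k_out * pool
                pool += (inflow - outflow) * dt
                consumed += outflow * dt
            return consumed / t_total

        # Same time-averaged PPFD and duty ratio: Pn falls as frequency drops
        for f_hz in (100.0, 1.0, 0.1):
            print(f_hz, round(mean_pn(avg_ppfd=200.0, frequency=f_hz, duty=0.5), 3))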

  2. 75 FR 72581 - Assessments, Assessment Base and Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ...The FDIC is proposing to amend its regulations to implement revisions to the Federal Deposit Insurance Act made by the Dodd-Frank Wall Street Reform and Consumer Protection Act regarding the definition of an institution's deposit insurance assessment base; alter the unsecured debt adjustment in light of the changes to the assessment base; add an adjustment for long-term debt held by an insured......

  3. Particle-based model for skiing traffic

    NASA Astrophysics Data System (ADS)

    Holleczek, Thomas; Tröster, Gerhard

    2012-05-01

    We develop and investigate a particle-based model for ski slope traffic. Skiers are modeled as particles with mass, exposed to social and physical forces that define their riding behavior during descents. We also report position and speed data of 21 skiers recorded with GPS-equipped cell phones on two ski slopes. A comparison of these data with the trajectories resulting from computer simulations of our model shows good correspondence. A study of the relationship among the density, speed, and flow of skiers reveals that congestion does not occur even with arrival rates of skiers exceeding the maximum ski lift capacity. In a sensitivity analysis, we identify the kinetic friction coefficient of skis on snow, the skier mass, the range of repelling social forces, and the arrival rate of skiers as the crucial parameters influencing the simulation results. Our model allows for the prediction of speed zones and skier densities on ski slopes, which is important in the prevention of skiing accidents.
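
    A toy sketch of the social-force approach for slope traffic: each skier feels a downslope driving force, kinetic friction, and an exponentially decaying repulsion from nearby skiers. All coefficients are illustrative, not the calibrated values from the paper:

      import numpy as np

      G, MU, SLOPE = 9.81, 0.05, np.radians(20)   # gravity, friction, slope angle

      def step(pos, vel, soc_range=5.0, soc_strength=2.0, dt=0.1):
          # One explicit-Euler step for N skiers; pos and vel are (N, 2) arrays
          # in slope-plane coordinates with +y pointing downslope.
          drive = np.array([0.0, G * (np.sin(SLOPE) - MU * np.cos(SLOPE))])
          acc = np.tile(drive, (len(pos), 1))
          for i in range(len(pos)):
              d = pos[i] - pos                          # vectors from others to i
              dist = np.linalg.norm(d, axis=1)
              near = (dist > 0) & (dist < 3 * soc_range)
              rep = soc_strength * np.exp(-dist[near] / soc_range)
              acc[i] += (rep[:, None] * d[near] / dist[near, None]).sum(axis=0)
          return pos + vel * dt, vel + acc * dt

      pos, vel = np.random.rand(30, 2) * 20, np.zeros((30, 2))
      for _ in range(100):
          pos, vel = step(pos, vel)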

  4. REFINED PHOTOLYSIS RATES FOR ADVANCED AIR QUALITY MODELING SYSTEM

    EPA Science Inventory

    Accurate modeling of photochemistry is critical and fundamental to reducing the uncertainty in air quality model predictions. Almost all chemical reactions in the atmosphere are initiated by the photodissociation of a number of trace gases. Direct measure of this photodissociation ...

  5. Model-based Utility Functions

    NASA Astrophysics Data System (ADS)

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment, so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that, if provided with the possibility to modify their utility functions, agents will not choose to do so under some usual assumptions.

  6. Moving from gamma passing rates to patient DVH-based QA metrics in pretreatment dose QA

    SciTech Connect

    Zhen, Heming; Nelms, Benjamin E.; Tome, Wolfgang A.

    2011-10-15

    Purpose: The purpose of this work is to explore the usefulness of the gamma passing rate metric for per-patient, pretreatment dose QA and to validate a novel patient-dose/DVH-based method, including its accuracy and correlation. Specifically, the following are analyzed: (1) correlations between gamma passing rates for three 3D dosimeter detector geometries and clinically relevant patient DVH-based metrics; (2) correlations between gamma passing rates of whole patient dose grids and DVH-based metrics; (3) correlations between gamma passing rates filtered by region of interest (ROI) and DVH-based metrics; and (4) the capability of a novel software algorithm that estimates corrected patient dose-DVH based on conventional phantom QA data. Methods: Ninety-six unique "imperfect" step-and-shoot IMRT plans were generated by applying four different types of errors to 24 clinical head/neck patients. The 3D patient doses as well as the dose to a cylindrical QA phantom were then recalculated using an error-free beam model to serve as a simulated measurement for comparison. Resulting deviations of the planned vs simulated measured DVH-based metrics were generated, as were gamma passing rates for a variety of difference/distance criteria covering dose-in-phantom comparisons and dose-in-patient comparisons, with the in-patient results calculated both over the whole grid and per-ROI volume. Finally, patient dose and DVH were predicted using the conventional per-beam planar data as input into a commercial "planned dose perturbation" (PDP) algorithm, and the results of these predicted DVH-based metrics were compared to the known values. Results: A range of weak to moderate correlations were found between clinically relevant patient DVH metrics (CTV D95, parotid Dmean, spinal cord D1cc, and larynx Dmean) and both 3D detector and 3D patient gamma passing rates (3%/3 mm, 2%/2 mm) for dose-in-phantom along with dose-in-patient for both the whole patient volume and filtered per-ROI. There was considerable scatter in the gamma passing rate
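
    For orientation, a simplified one-dimensional gamma computation; the study evaluates full 3D dose grids, but the dose-difference/distance-to-agreement combination is the same idea:

      import numpy as np

      def gamma_passing_rate(ref, ev, dx, dd=0.03, dta=3.0):
          # 1D global gamma: ref and ev are dose profiles on a grid with
          # spacing dx (mm); dd is the dose criterion as a fraction of the
          # maximum reference dose, dta the distance criterion (mm).
          dmax = ref.max()
          x = np.arange(len(ref)) * dx
          gammas = np.empty(len(ref))
          for i in range(len(ref)):
              dose_term = ((ev - ref[i]) / (dd * dmax)) ** 2
              dist_term = ((x - x[i]) / dta) ** 2
              gammas[i] = np.sqrt((dose_term + dist_term).min())
          return 100.0 * (gammas <= 1.0).mean()

      ref = np.exp(-((np.arange(100) - 50.0) ** 2) / 200.0)
      ev = ref * 1.02                               # 2% systematic dose error
      print(gamma_passing_rate(ref, ev, dx=1.0))    # percent of points passing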

  7. Variable-rate colour image quantization based on quadtree segmentation

    NASA Astrophysics Data System (ADS)

    Hu, Y. C.; Li, C. Y.; Chuang, J. C.; Lo, C. C.

    2011-09-01

    A novel variable-sized block encoding with threshold control for colour image quantization (CIQ) is presented in this paper. In CIQ, the colour palette used has a great influence on the reconstructed image quality. Typically, higher image quality and a larger storage cost are obtained when a larger palette is used in CIQ. To cut down the storage cost while preserving the quality of the reconstructed images, a threshold control policy for quadtree segmentation is used in this paper. Experimental results show that the proposed method adaptively provides the desired bit rates while achieving better image quality compared to CIQ with multiple palettes of different sizes.
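
    A compact sketch of the threshold-controlled quadtree idea: a block is split while its colour variance exceeds a threshold, so smooth regions are coded as large blocks (fewer bits) and detailed regions as small ones; raising the threshold lowers the bit rate. The recursive formulation below is an assumption for illustration, not the paper's exact encoder:

      import numpy as np

      def quadtree(img, x, y, size, thr, min_size=2):
          # Yield (x, y, size) leaf blocks of the quadtree segmentation.
          block = img[y:y + size, x:x + size].reshape(-1, 3)
          if size > min_size and block.var(axis=0).sum() > thr:
              h = size // 2
              for ox, oy in ((0, 0), (h, 0), (0, h), (h, h)):
                  yield from quadtree(img, x + ox, y + oy, h, thr, min_size)
          else:
              yield (x, y, size)

      img = np.random.randint(0, 256, (64, 64, 3)).astype(float)
      print(len(list(quadtree(img, 0, 0, 64, thr=1500.0))), "leaf blocks")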

  8. Global scale analysis of the stream power law parameters based on worldwide 10Be denudation rates

    NASA Astrophysics Data System (ADS)

    Harel, Marie-Alice; Mudd, Simon; Attal, Mikael

    2015-04-01

    The stream power law, expressed as E = K A^m S^n, where E is erosion rate [L T^-1], K is erodibility [T^-1 L^(1-2m)], A is drainage area [L^2], S is channel gradient [L/L], and m and n are constants, is the most widely used model for bedrock channel incision. Despite its simplicity and limitations, the model has proved useful for a large number of applications such as topographic evolution, knickpoint migration, palaeotopography reconstruction, and the determination of uplift patterns and rates. However, the unknown parameters K, m and n are often fixed arbitrarily or are based on assumptions about the physics of the erosion processes that are not always valid, which considerably alters the use and interpretation of the model. In this study, we compile published 10Be basin-wide erosion rates (n = 1335) in order to assess the m/n ratio (or concavity index), the slope exponent n and the erodibility coefficient K using the integral method of channel profile analysis. These three parameters are calculated for 66 areas and allow for a global scale analysis in terms of climatic, tectonic and environmental settings. Our results suggest that (i) many sites are too noisy or do not have enough data to predict n and K with a satisfying level of confidence; (ii) the slope exponent is predominantly greater than one, meaning that the relationship between erosion rate and channel gradient is non-linear, supporting the idea that incision is a threshold-controlled process. Furthermore, a multi-regression analysis and the calculation of n and K using a reference concavity index m/n = 0.45 demonstrate that (iii) many intuitive or previously demonstrated local-scale trends, such as the correlation between erosion rate and climate, do not appear at a global scale.

  9. Global Scale Analysis of the Stream Power Law Parameters based on Worldwide 10Be Denudation Rates

    NASA Astrophysics Data System (ADS)

    Harel, M. A.; Mudd, S. M.; Attal, M.

    2015-12-01

    The stream power law, expressed as E = K A^m S^n, where E is erosion rate [L T^-1], K is erodibility [T^-1 L^(1-2m)], A is drainage area [L^2], S is channel gradient [L/L], and m and n are constants, is the most widely used model for bedrock channel incision. Despite its simplicity and limitations, the model has proved useful for a large number of applications such as topographic evolution, knickpoint migration, palaeotopography reconstruction, and the determination of uplift patterns and rates. However, the unknown parameters K, m and n are often fixed arbitrarily or are based on assumptions about the physics of the erosion processes that are not always valid, which considerably alters the use and interpretation of the model. In this study, we compile published 10Be basin-wide erosion rates (N = 1423) in order to assess the m/n ratio (or concavity index), the slope exponent n and the erodibility coefficient K using the integral method of channel profile analysis. These three parameters are calculated for 67 areas and allow for a global scale analysis in terms of climatic, tectonic and environmental settings. Our results suggest that (i) many sites are too noisy or do not have enough data to predict n and K with a satisfying level of confidence; (ii) the slope exponent is predominantly greater than one, meaning that the relationship between erosion rate and channel gradient is non-linear, supporting the idea that incision is a threshold-controlled process. Furthermore, a multi-regression analysis and the calculation of n and K using a reference concavity index m/n = 0.45 demonstrate that (iii) many intuitive or previously demonstrated local-scale trends, such as the correlation between erosion rate and climate, do not appear at a global scale.
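
    The integral method mentioned in both abstracts can be sketched compactly: chi is the upstream integral of (A0/A)^(m/n) along the channel, and the concavity that makes the elevation-chi relation most linear is preferred. An illustrative implementation, not the authors' code:

      import numpy as np

      def chi_profile(dist, area, m_over_n, a0=1.0):
          # Integrate chi = sum (A0/A)^(m/n) dx from the outlet upstream;
          # dist is along-channel distance from the outlet, area is drainage area.
          integrand = (a0 / area) ** m_over_n
          dx = np.diff(dist, prepend=dist[0])
          return np.cumsum(integrand * dx)

      def best_concavity(dist, area, elev, candidates=np.arange(0.1, 1.0, 0.05)):
          # Pick the m/n giving the most linear elevation-chi relation (max R^2).
          def r2(mn):
              chi = chi_profile(dist, area, mn)
              return np.corrcoef(chi, elev)[0, 1] ** 2
          return max(candidates, key=r2)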

  10. Identifiability of altimetry-based rating curve parameters in function of river morphological parameters

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; André Garambois, Pierre; Calmant, Stéphane; Paiva, Rodrigo; Walter, Collischonn; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Bonnet, Marie-Paule; Seyler, Frédérique; Monnier, Jérôme

    2016-04-01

    Estimating river discharge for ungauged river reaches from satellite measurements is not straightforward, given the nonlinearity of flow behavior with respect to measurable and non-measurable hydraulic parameters. As a matter of fact, current satellite datasets do not give access to key parameters such as river bed topography and roughness. A unique set of almost one thousand altimetry-based rating curves was built by fitting ENVISAT and Jason-2 water stages to discharges obtained from the MGB-IPH rainfall-runoff model in the Amazon basin. These rated discharges were successfully validated against simulated discharges (Ens = 0.70) and in-situ discharges (Ens = 0.71) and are not mission-dependent. The rating curve is written Q = a(Z - Z0)^b sqrt(S), with Z the water surface elevation and S its slope, both gained from satellite altimetry, a and b the power law coefficient and exponent, and Z0 the river bed elevation such that Q(Z0) = 0. For several river reaches in the Amazon basin where ADCP measurements are available, the Z0 values are fairly well validated, with a relative error lower than 10%. The present contribution aims at relating the identifiability and the physical meaning of a, b and Z0 to various hydraulic and geomorphologic conditions. Synthetic river bathymetries sampling a wide range of rivers and inflow discharges are used to perform twin experiments. A shallow water model is run to generate synthetic satellite observations, and rating curve parameters are then determined for each river section using an MCMC algorithm. These twin experiments show that a rating curve formulation including water surface slope, i.e. closer to the Manning equation form, improves parameter identifiability. The compensation between parameters is limited, especially for reaches with little water surface variability. Rating curve parameters are analyzed for riffles and pools, for small to large rivers, and for different river slopes and cross section shapes. It is shown that the river bed
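
    A minimal sketch of the rating-curve fit described above, estimating a, b and Z0 by nonlinear least squares on hypothetical stage-slope-discharge triples (the study itself fits by MCMC):

      import numpy as np
      from scipy.optimize import curve_fit

      def rating(zs, a, b, z0):
          # Q = a * (Z - Z0)^b * sqrt(S); Z is stage, S water surface slope.
          z, s = zs
          return a * np.clip(z - z0, 1e-6, None) ** b * np.sqrt(s)

      # hypothetical altimetric stages (m), slopes (-) and model discharges (m3/s)
      z = np.array([10.2, 11.0, 12.4, 13.1, 14.0])
      s = np.array([4e-5, 5e-5, 6e-5, 6e-5, 7e-5])
      q = np.array([800.0, 1300.0, 2600.0, 3400.0, 4600.0])
      (a, b, z0), _ = curve_fit(rating, (z, s), q, p0=(100.0, 1.5, 9.0))
      print(a, b, z0)   # Z0 approximates the effective river bed elevation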

  11. A Bayesian Hierarchical Modeling Scheme for Estimating Erosion Rates Under Current Climate Conditions

    NASA Astrophysics Data System (ADS)

    Lowman, L.; Barros, A. P.

    2014-12-01

    Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale-dependent, often valid for a single point-wise location or averaging over large areal extents and over periods with both intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over a 14-year period between 1998 and 2011. We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-seconds) digital elevation model (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations, and 1-2 orders of magnitude larger than most millennial and million-year timescale estimates from thermochronology and cosmogenic nuclides.

  12. Chlorination and dechlorination rates in a forest soil - A combined modelling and experimental approach.

    PubMed

    Montelius, Malin; Svensson, Teresia; Lourino-Cabana, Beatriz; Thiry, Yves; Bastviken, David

    2016-06-01

    Much of the total pool of chlorine (Cl) in soil consists of naturally produced organic chlorine (Clorg). The chlorination of bulk organic matter at substantial rates has been experimentally confirmed in various soil types. The subsequent fate of Clorg is important for ecosystem Cl cycling and residence times. As most previous research into dechlorination in soils has examined either single substances or specific groups of compounds, we lack information about overall bulk dechlorination rates. Here we assessed bulk organic matter chlorination and dechlorination rates in coniferous forest soil based on a radiotracer experiment conducted under various environmental conditions (additional water, labile organic matter, and ammonium nitrate). The experimental results were used to develop a model to estimate specific chlorination (i.e., the fraction of Cl^- transformed to Clorg per unit time) and specific dechlorination (i.e., the fraction of Clorg transformed to Cl^- per unit time) rates. The results indicate that chlorination and dechlorination occurred simultaneously under all tested environmental conditions. Specific chlorination rates ranged from 0.0005 to 0.01 d^-1 and were hampered by nitrogen fertilization but were otherwise similar among the treatments. Specific dechlorination rates were 0.01-0.03 d^-1 and were similar among all treatments. This study finds that soil Clorg levels result from a dynamic equilibrium between the chlorination and rapid dechlorination of some Clorg compounds, while another Clorg pool is dechlorinated more slowly. Altogether, this study demonstrates highly active Cl cycling in soils. PMID:26950634
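
    The reported specific rates imply a simple two-way first-order exchange between the inorganic and organic pools; a sketch of that dynamic equilibrium, with rate constants taken from the ranges quoted above and hypothetical pool sizes:

      def simulate(cl_in=100.0, clorg=50.0, k_cl=0.005, k_de=0.02, days=365):
          # Daily first-order exchange: chlorination Cl- -> Clorg at k_cl (d^-1),
          # dechlorination Clorg -> Cl- at k_de (d^-1).
          for _ in range(days):
              chlorinated = k_cl * cl_in
              dechlorinated = k_de * clorg
              cl_in += dechlorinated - chlorinated
              clorg += chlorinated - dechlorinated
          return cl_in, clorg

      # the pools approach an equilibrium with Clorg/Cl- = k_cl/k_de
      print(simulate())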

  13. An Administrative Claims Model for Profiling Hospital 30-Day Mortality Rates for Pneumonia Patients

    PubMed Central

    Bratzler, Dale W.; Normand, Sharon-Lise T.; Wang, Yun; O'Donnell, Walter J.; Metersky, Mark; Han, Lein F.; Rapp, Michael T.; Krumholz, Harlan M.

    2011-01-01

    Background Outcome measures for patients hospitalized with pneumonia may complement process measures in characterizing quality of care. We sought to develop and validate a hierarchical regression model using Medicare claims data that produces hospital-level, risk-standardized 30-day mortality rates useful for public reporting for patients hospitalized with pneumonia. Methodology/Principal Findings Retrospective study of fee-for-service Medicare beneficiaries age 66 years and older with a principal discharge diagnosis of pneumonia. Candidate risk-adjustment variables included patient demographics, administrative diagnosis codes from the index hospitalization, and all inpatient and outpatient encounters from the year before admission. The model derivation cohort included 224,608 pneumonia cases admitted to 4,664 hospitals in 2000, and validation cohorts included cases from each of years 1998–2003. We compared model-derived state-level standardized mortality estimates with medical record-derived state-level standardized mortality estimates using data from the Medicare National Pneumonia Project on 50,858 patients hospitalized from 1998–2001. The final model included 31 variables and had an area under the Receiver Operating Characteristic curve of 0.72. In each administrative claims validation cohort, model fit was similar to the derivation cohort. The distribution of standardized mortality rates among hospitals ranged from 13.0% to 23.7%, with 25th, 50th, and 75th percentiles of 16.5%, 17.4%, and 18.3%, respectively. Comparing model-derived risk-standardized state mortality rates with medical record-derived estimates, the correlation coefficient was 0.86 (Standard Error = 0.032). Conclusions/Significance An administrative claims-based model for profiling hospitals for pneumonia mortality performs consistently over several years and produces hospital estimates close to those using a medical record model. PMID:21532758

  14. Modeling climate effects on hip fracture rate by the multivariate GARCH model in Montreal region, Canada.

    PubMed

    Modarres, Reza; Ouarda, Taha B M J; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre

    2014-07-01

    Changes in extreme meteorological variables and the demographic shift towards an older population have made it important to investigate the association of climate variables and hip fracture by advanced methods, in order to determine the climate variables that most affect hip fracture incidence. The nonlinear autoregressive moving average with exogenous variable-generalized autoregressive conditional heteroscedasticity (ARMAX-GARCH) and multivariate GARCH (MGARCH) time series approaches were applied to investigate the nonlinear association between hip fracture rate in female and male patients aged 40-74 and 75+ years and climate variables in the period 1993-2004 in Montreal, Canada. The models describe 50-56% of daily variation in hip fracture rate and identify snow depth, air temperature, day length and air pressure as the variables influencing the time-varying mean and variance of the hip fracture rate. The conditional covariance between climate variables and hip fracture rate increases exponentially, showing that the effect of climate variables on hip fracture rate is most acute when rates are high and climate conditions are at their worst. In Montreal, climate variables, particularly snow depth and air temperature, appear to be important predictors of hip fracture incidence. The association of climate variables and hip fracture does not seem to change linearly with time, but increases exponentially under harsh climate conditions. The results of this study can be used to provide an adaptive climate-related public health program and to guide allocation of services for avoiding hip fracture risk. PMID:23722925

  15. Modeling climate effects on hip fracture rate by the multivariate GARCH model in Montreal region, Canada

    NASA Astrophysics Data System (ADS)

    Modarres, Reza; Ouarda, Taha B. M. J.; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre

    2014-07-01

    Changes in extreme meteorological variables and the demographic shift towards an older population have made it important to investigate the association of climate variables and hip fracture by advanced methods, in order to determine the climate variables that most affect hip fracture incidence. The nonlinear autoregressive moving average with exogenous variable-generalized autoregressive conditional heteroscedasticity (ARMAX-GARCH) and multivariate GARCH (MGARCH) time series approaches were applied to investigate the nonlinear association between hip fracture rate in female and male patients aged 40-74 and 75+ years and climate variables in the period 1993-2004 in Montreal, Canada. The models describe 50-56% of daily variation in hip fracture rate and identify snow depth, air temperature, day length and air pressure as the variables influencing the time-varying mean and variance of the hip fracture rate. The conditional covariance between climate variables and hip fracture rate increases exponentially, showing that the effect of climate variables on hip fracture rate is most acute when rates are high and climate conditions are at their worst. In Montreal, climate variables, particularly snow depth and air temperature, appear to be important predictors of hip fracture incidence. The association of climate variables and hip fracture does not seem to change linearly with time, but increases exponentially under harsh climate conditions. The results of this study can be used to provide an adaptive climate-related public health program and to guide allocation of services for avoiding hip fracture risk.
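
    For intuition, a univariate GARCH(1,1) conditional-variance filter of the kind underlying the ARMAX-GARCH/MGARCH analysis (illustrative parameters; the study's models add exogenous climate regressors and multivariate covariance terms):

      import numpy as np

      def garch_variance(resid, omega=0.1, alpha=0.1, beta=0.8):
          # Conditional variance h_t = omega + alpha*e_{t-1}^2 + beta*h_{t-1}.
          h = np.empty_like(resid)
          h[0] = resid.var()
          for t in range(1, len(resid)):
              h[t] = omega + alpha * resid[t - 1] ** 2 + beta * h[t - 1]
          return h

      e = np.random.normal(size=500)     # placeholder residual series
      print(garch_variance(e)[:5])       # time-varying variance estimates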

  16. A medical cloud-based platform for respiration rate measurement and hierarchical classification of breath disorders.

    PubMed

    Fekr, Atena Roshan; Janidarmian, Majid; Radecka, Katarzyna; Zilic, Zeljko

    2014-01-01

    The measurement of human respiratory signals is crucial in cyberbiological systems. A disordered breathing pattern can be the first symptom of different physiological, mechanical, or psychological dysfunctions. Therefore, real-time monitoring of respiration patterns, as well as respiration rate, is a critical need in medical applications. There are several methods for respiration rate measurement. However, despite their accuracy, these methods are expensive and cannot be integrated into a body sensor network. In this work, we present a real-time cloud-based platform for both monitoring the respiration rate and classifying breath patterns remotely. The proposed system is designed particularly for patients with breathing problems (e.g., respiratory complications after surgery) or sleep disorders. Our system includes a calibrated accelerometer sensor, Bluetooth Low Energy (BLE) and a cloud-computing model. We also suggest a procedure to improve the accuracy of the respiration rate for patients at rest positions. The overall error in the respiration rate calculation is 0.53%, considering the SPR-BTA spirometer as the reference. Five types of respiration disorders, Bradypnea, Tachypnea, Cheyne-Stokes, Kussmaul, and Biot's breathing, are classified based on a hierarchical Support Vector Machine (SVM) with seven different features. We have evaluated the performance of the proposed classification when it is individualized to every subject (case 1) as well as across all subjects (case 2). Since the selection of the kernel function is a key factor in SVM performance, three different kernel functions are evaluated in this paper. The experiments are conducted with 11 subjects, and an average accuracy of 94.52% for case 1 and an accuracy of 81.29% for case 2 are achieved based on the Radial Basis Function (RBF). Finally, a performance evaluation has been done for normal and impaired subjects considering sensitivity, specificity and G-mean parameters of different kernel

  17. A Medical Cloud-Based Platform for Respiration Rate Measurement and Hierarchical Classification of Breath Disorders

    PubMed Central

    Fekr, Atena Roshan; Janidarmian, Majid; Radecka, Katarzyna; Zilic, Zeljko

    2014-01-01

    The measurement of human respiratory signals is crucial in cyberbiological systems. A disordered breathing pattern can be the first symptom of different physiological, mechanical, or psychological dysfunctions. Therefore, real-time monitoring of respiration patterns, as well as respiration rate, is a critical need in medical applications. There are several methods for respiration rate measurement. However, despite their accuracy, these methods are expensive and cannot be integrated into a body sensor network. In this work, we present a real-time cloud-based platform for both monitoring the respiration rate and classifying breath patterns remotely. The proposed system is designed particularly for patients with breathing problems (e.g., respiratory complications after surgery) or sleep disorders. Our system includes a calibrated accelerometer sensor, Bluetooth Low Energy (BLE) and a cloud-computing model. We also suggest a procedure to improve the accuracy of the respiration rate for patients at rest positions. The overall error in the respiration rate calculation is 0.53%, considering the SPR-BTA spirometer as the reference. Five types of respiration disorders, Bradypnea, Tachypnea, Cheyne-Stokes, Kussmaul, and Biot's breathing, are classified based on a hierarchical Support Vector Machine (SVM) with seven different features. We have evaluated the performance of the proposed classification when it is individualized to every subject (case 1) as well as across all subjects (case 2). Since the selection of the kernel function is a key factor in SVM performance, three different kernel functions are evaluated in this paper. The experiments are conducted with 11 subjects, and an average accuracy of 94.52% for case 1 and an accuracy of 81.29% for case 2 are achieved based on the Radial Basis Function (RBF). Finally, a performance evaluation has been done for normal and impaired subjects considering sensitivity, specificity and G-mean parameters of different kernel
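
    A minimal sketch of the RBF-kernel SVM stage in scikit-learn terms (the feature matrix and labels below are placeholders; the paper uses seven respiration features and a hierarchical arrangement of classifiers):

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X = np.random.rand(200, 7)                 # 7 features per breath window
      y = np.random.randint(0, 5, 200)           # 5 breathing-pattern classes

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      print(cross_val_score(clf, X, y, cv=5).mean())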

  18. Rate and Pulse Based Plasticity Governed by Local Synaptic State Variables

    PubMed Central

    Mayr, Christian G.; Partzsch, Johannes

    2010-01-01

    Classically, action-potential-based learning paradigms such as the Bienenstock-Cooper-Munro (BCM) rule for pulse rates or spike timing-dependent plasticity for pulse pairings have been experimentally demonstrated to evoke long-lasting synaptic weight changes (i.e., plasticity). However, several recent experiments have shown that plasticity also depends on the local dynamics at the synapse, such as membrane voltage, calcium time course and level, or dendritic spikes. In this paper, we introduce a formulation of the BCM rule which is based on the instantaneous postsynaptic membrane potential as well as the transmission profile of the presynaptic spike. While this rule incorporates only simple local voltage and current dynamics and is thus neither directly rate- nor timing-based, it can replicate a range of experiments, such as various rate and spike pairing protocols, combinations of the two, as well as voltage-dependent plasticity. A detailed comparison of current plasticity models with respect to this range of experiments also demonstrates the efficacy of the new plasticity rule. All experiments can be replicated with a limited set of parameters, avoiding the overfitting problem of more involved plasticity rules. PMID:21423519
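
    For reference, the classical rate-based BCM rule that the paper reformulates in terms of local membrane dynamics; a sketch with a sliding modification threshold (parameter values are illustrative):

      import numpy as np

      def bcm_step(w, x, theta, lr=1e-3, tau_theta=100.0):
          # One BCM update: dw = lr * y * (y - theta) * x, with the threshold
          # theta sliding toward the recent average of y^2.
          y = float(np.dot(w, x))            # postsynaptic rate (linear neuron)
          w = w + lr * y * (y - theta) * x
          theta = theta + (y ** 2 - theta) / tau_theta
          return w, theta

      w, theta = np.random.rand(10) * 0.1, 1.0
      for _ in range(1000):
          w, theta = bcm_step(w, np.random.rand(10), theta)
      print(w.round(3), round(theta, 3))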

  19. 48 CFR 1616.7002 - Clause-contracts based on cost analysis (experience rated).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... cost analysis (experience rated). 1616.7002 Section 1616.7002 Federal Acquisition Regulations System... based on cost analysis (experience rated). The clause at section 1652.216-71 shall be inserted in all FEHBP contracts based on cost analysis (experience rated)....

  20. 44 CFR 61.12 - Rates based on a flood protection system involving Federal funds.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 44 Emergency Management and Assistance 1 2014-10-01 2014-10-01 false Rates based on a flood... EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.12 Rates based on a flood protection system...

  1. 44 CFR 61.12 - Rates based on a flood protection system involving Federal funds.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 44 Emergency Management and Assistance 1 2013-10-01 2013-10-01 false Rates based on a flood... EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.12 Rates based on a flood protection system...

  2. 44 CFR 61.12 - Rates based on a flood protection system involving Federal funds.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 44 Emergency Management and Assistance 1 2012-10-01 2011-10-01 true Rates based on a flood... EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.12 Rates based on a flood protection system...

  3. 44 CFR 61.12 - Rates based on a flood protection system involving Federal funds.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Rates based on a flood... EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.12 Rates based on a flood protection system...

  4. Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach

    NASA Astrophysics Data System (ADS)

    Tsai, Bi-Huei; Chang, Chih-Huei

    2009-08-01

    Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift according to economic prosperity, rather than remain fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon the two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further focuses on macroeconomic factors and applies the rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using the training sample from 1987 to 2004, and their levels of accuracy are compared with the test sample from 2005 to 2007. As for the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than that without macroeconomic factors. This suggests that accuracy is not improved for one-stage models that pool the firm-specific and macroeconomic factors together. In regard to the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. After the two-stage models employ the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error becomes lower than that of the one-stage models. The two-stage models presented in this paper have incremental usefulness in predicting financial distress.
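
    On firm-period panel data, the first stage reduces to a logistic regression of the distress indicator on the covariates; a sketch with placeholder data (the second-stage cut-off shift from the rating transition matrix is not shown):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # placeholder firm-period panel: rows are (firm, year) observations
      X = np.random.rand(1000, 4)                     # ratios, market factors
      y = (np.random.rand(1000) < 0.05).astype(int)   # 1 = distress event

      hazard = LogisticRegression().fit(X, y)
      p = hazard.predict_proba(X)[:, 1]               # per-period hazard estimate
      cutoff = 0.05          # in the two-stage method this shifts with the cycle
      print((p > cutoff).sum(), "firm-periods flagged as distressed")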

  5. Refining emission rate estimates using a coupled receptor-dispersion modeling approach

    NASA Astrophysics Data System (ADS)

    Selvaraju, N.; Pushpavanam, S.

    2010-10-01

    Receptor modeling techniques like chemical mass balance are used to attribute pollution levels at a point to different sources. Here we analyze the composition of particulate matter and use the source profiles of sources prevalent in a region to estimate quantitative source contributions. In dispersion modeling, on the other hand, the emission rates of various sources together with meteorological conditions are used to determine the concentration levels at a point or in a region. The predictions from these two approaches are often inconsistent. In this work these differences are attributed to errors in the emission inventory. An algorithm for coupling receptor and dispersion models is proposed to reduce the differences between the two predictions and determine the emission rates accurately. The proposed combined approach helps reconcile the differences arising when the two approaches are used in a stand-alone mode. This work assumes that the models are perfect and uses a model-to-model comparison to illustrate the concept.
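
    The receptor side can be written as a non-negative least-squares problem: measured species concentrations equal source profiles times source contributions. A sketch with hypothetical profiles:

      import numpy as np
      from scipy.optimize import nnls

      # rows: chemical species; columns: source profiles (species fractions)
      profiles = np.array([[0.40, 0.05, 0.10],
                           [0.10, 0.50, 0.05],
                           [0.05, 0.10, 0.45],
                           [0.20, 0.15, 0.10]])
      measured = np.array([12.0, 8.0, 6.0, 7.0])   # ambient concentrations

      contributions, _ = nnls(profiles, measured)
      print(contributions)   # mass attributed to each source at the receptor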

  6. Model-based reasoning: Troubleshooting

    NASA Astrophysics Data System (ADS)

    Davis, Randall; Hamscher, Walter C.

    1988-07-01

    To determine why something has stopped working, it's useful to know how it was supposed to work in the first place. That simple observation underlies some of the considerable interest generated in recent years in the topic of model-based reasoning, particularly its application to diagnosis and troubleshooting. This paper surveys the current state of the art, reviewing areas that are well understood and exploring areas that present challenging research topics. It views the fundamental paradigm as the interaction of prediction and observation, and explores it by examining three fundamental subproblems: generating hypotheses by reasoning from a symptom to a collection of components whose misbehavior may plausibly have caused that symptom; testing each hypothesis to see whether it can account for all available observations of device behavior; and then discriminating among the ones that survive testing. We analyze each of these independently at the knowledge level, i.e., attempting to understand what reasoning capabilities arise from the different varieties of knowledge available to the program. We find that while a wide range of apparently diverse model-based systems have been built for diagnosis and troubleshooting, they are for the most part variations on the central theme outlined here. Their diversity lies primarily in the varying amounts and kinds of knowledge they bring to bear at each stage of the process; the underlying paradigm is fundamentally the same.

  7. Strengthened PAN-based carbon fibers obtained by slow heating rate carbonization

    PubMed Central

    Kim, Min-A; Jang, Dawon; Tejima, Syogo; Cruz-Silva, Rodolfo; Joh, Han-Ik; Kim, Hwan Chul; Lee, Sungho; Endo, Morinobu

    2016-01-01

    Large efforts have been made over the last 40 years to increase the mechanical strength of polyacrylonitrile (PAN)-based carbon fibers (CFs) using a variety of chemical or physical protocols. In this paper, we report a new method to increase CFs mechanical strength using a slow heating rate during the carbonization process. This new approach increases both the carbon sp3 bonding and the number of nitrogen atoms with quaternary bonding in the hexagonal carbon network. Theoretical calculations support a crosslinking model promoted by the interstitial carbon atoms located in the graphitic interlayer spaces. The improvement in mechanical performance by a controlled crosslinking between the carbon hexagonal layers of the PAN based CFs is a new concept that can contribute further in the tailoring of CFs performance based on the understanding of their microstructure down to the atomic scale. PMID:27004752

  8. Strengthened PAN-based carbon fibers obtained by slow heating rate carbonization

    NASA Astrophysics Data System (ADS)

    Kim, Min-A.; Jang, Dawon; Tejima, Syogo; Cruz-Silva, Rodolfo; Joh, Han-Ik; Kim, Hwan Chul; Lee, Sungho; Endo, Morinobu

    2016-03-01

    Large efforts have been made over the last 40 years to increase the mechanical strength of polyacrylonitrile (PAN)-based carbon fibers (CFs) using a variety of chemical or physical protocols. In this paper, we report a new method to increase CFs mechanical strength using a slow heating rate during the carbonization process. This new approach increases both the carbon sp3 bonding and the number of nitrogen atoms with quaternary bonding in the hexagonal carbon network. Theoretical calculations support a crosslinking model promoted by the interstitial carbon atoms located in the graphitic interlayer spaces. The improvement in mechanical performance by a controlled crosslinking between the carbon hexagonal layers of the PAN based CFs is a new concept that can contribute further in the tailoring of CFs performance based on the understanding of their microstructure down to the atomic scale.

  9. Strengthened PAN-based carbon fibers obtained by slow heating rate carbonization.

    PubMed

    Kim, Min-A; Jang, Dawon; Tejima, Syogo; Cruz-Silva, Rodolfo; Joh, Han-Ik; Kim, Hwan Chul; Lee, Sungho; Endo, Morinobu

    2016-01-01

    Large efforts have been made over the last 40 years to increase the mechanical strength of polyacrylonitrile (PAN)-based carbon fibers (CFs) using a variety of chemical or physical protocols. In this paper, we report a new method to increase CFs mechanical strength using a slow heating rate during the carbonization process. This new approach increases both the carbon sp3 bonding and the number of nitrogen atoms with quaternary bonding in the hexagonal carbon network. Theoretical calculations support a crosslinking model promoted by the interstitial carbon atoms located in the graphitic interlayer spaces. The improvement in mechanical performance by a controlled crosslinking between the carbon hexagonal layers of the PAN based CFs is a new concept that can contribute further in the tailoring of CFs performance based on the understanding of their microstructure down to the atomic scale. PMID:27004752

  10. Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models

    PubMed Central

    Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.

    2008-01-01

    Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
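
    A sketch of the simulation idea: resample individuals to estimate vital rates, rebuild the projection matrix, and take λ as its dominant eigenvalue; the bias and spread shrink as sample size grows. The two-stage structure and rates below are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(1)
      true_surv = np.array([0.5, 0.7])         # stage-specific survival
      fecundity = 1.2

      def lam(surv):
          # Population growth rate: dominant eigenvalue of the projection matrix.
          A = np.array([[0.0, fecundity], [surv[0], surv[1]]])
          return np.max(np.real(np.linalg.eigvals(A)))

      for n in (10, 50, 500):                  # individuals sampled per stage
          est = [lam(rng.binomial(n, true_surv) / n) for _ in range(1000)]
          print(n, round(float(np.mean(est)), 3))   # compare with lam(true_surv)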

  11. Simulation of Fine Resist Profile Formation by Electron Beam Drawing and Development with Solubility Rate Based on Energy Deposition Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Komori, Takuya; Zhang, Yulong; Yin, You; Hosaka, Sumio

    2013-12-01

    We proposed a model for calculating the resist profile in electron beam drawing. The model predicts the solubility rate on the basis of the energy deposition distribution (EDD) for the development of latent patterns in the resist. By unifying the exposure dose D (via experiments) and EDDs (via calculations), we roughly determined solubility rates for three-dimensional EDDs and established the proposed model. The development simulation was achieved by the sequential calculation of solubility rates based on the EDD, which was calculated by Monte Carlo simulation. By determining a suitable EDD region to achieve good patterning, we obtained a sharp nanodot pattern in the resist. These simulation results are in good agreement with the experimental results obtained using a combination of 2.3 wt % tetramethylammonium hydroxide (TMAH) and 4 wt % NaCl as the developer. The model was demonstrated to be useful for predicting resist profiles with different experimental solubility rates of developers.

  12. Ductile damage modeling based on void coalescence and percolation theories

    SciTech Connect

    Tonks, D.L.; Zurek, A.K.; Thissell, W.R.

    1995-09-01

    A general model for ductile damage in metals is presented. It includes damage induced by shear stress as well as damage caused by volumetric tension. Spallation is included as a special case. Strain-induced damage is also treated. Void nucleation and growth are included and give rise to strain rate effects. Strain rate effects also arise in the model through elastic release wave propagation between damage centers. The underlying physics of the model is the nucleation, growth, and coalescence of voids in a plastically flowing solid. The model is intended for hydrocode-based computer simulation. An experimental program is underway to validate the model.

  13. Modelling Oxygen Dynamics in an Intermittently Stratified Estuary: Estimation of Process Rates Using Field Data

    NASA Astrophysics Data System (ADS)

    Borsuk, M. E.; Stow, C. A.; Luettich, R. A.; Paerl, H. W.; Pinckney, J. L.

    2001-01-01

    The relationship between bottom water dissolved oxygen concentration, vertical stratification, and temperature was investigated for the Neuse River estuary, North Carolina, a shallow, intermittently mixed estuary, using approximately 10 years of weekly/biweekly mid-channel data. A generalized additive model (GAM) was used to initially explore the major relationships among observed variables. The results of this statistical model guided the specification of a process-based model of oxygen dynamics that is consistent with theory yet simple enough to be parameterized using available field data. The nonlinear optimization procedure employed allows for the direct estimation of microbial oxygen consumption and physical reoxygenation rates, including the effects of temperature and vertical stratification. These estimated rates may better represent aggregate system behaviour than closed-chamber measurements made in the laboratory and in situ. The resulting model describes 79% of the variation in dissolved oxygen concentration and is robust when compared across separate locations and time periods. Model predictions suggest that the spatial extent and duration of hypoxia in the bottom waters of the Neuse are controlled by the balance between the net oxygen depletion rate and the frequency of vertical mixing events. During cool months, oxygen consumption rates remain low enough to keep oxygen concentration well above levels of concern even under extended periods of stratification. A concentration below 4 mg l^-1 is only expected under extended periods without vertical mixing when bottom water temperature exceeds 15 °C, while a concentration below 2 mg l^-1 is only expected when water temperature exceeds 20 °C. To incorporate the effects of parameter uncertainty, model error, and natural variability on model prediction, we used Monte Carlo simulation to generate distributions for the predicted number of days of hypoxia during the summer season. The expected number of days with

  14. An Item Response Unfolding Model for Graphic Rating Scales

    ERIC Educational Resources Information Center

    Liu, Ying

    2009-01-01

    The graphic rating scale, a measurement tool used in many areas of psychology, usually takes a form of a fixed-length line segment, with both ends bounded and labeled as extreme responses. The raters mark somewhere on the line, and the length of the line segment from one endpoint to the mark is taken as the measure. An item response unfolding…

  15. MICROBIAL VOLATILE ORGANIC COMPOUND EMISSION RATES AND EXPOSURE MODEL

    EPA Science Inventory

    This paper presents the results from a study that examined microbial volatile organic compound (MVOC) emissions from six fungi and one bacterial species (Streptomyces spp.) commonly found in indoor environments. Data are presented on peak emission rates from inoculated agar plate...

  16. A Latent-Variable Causal Model of Faculty Reputational Ratings.

    ERIC Educational Resources Information Center

    King, Suzanne; Wolfle, Lee M.

    A reanalysis was conducted of Saunier's research (1985) on sources of variation in the National Research Council (NRC) reputational ratings of university faculty. Saunier conducted a stepwise regression analysis using 12 predictor variables. Due to problems with multicollinearity and because of the atheoretical nature of stepwise regression,…

  17. Sensitivity analysis of the Wetland Accretion Rate Model for Ecosystem Resilience (WARMER)

    NASA Astrophysics Data System (ADS)

    Swanson, K.; Drexler, J. Z.; Schoellhamer, D. H.; Thorne, K.; Spragens, K.; Takekawa, J.

    2010-12-01

    The San Francisco Estuary contains the largest extent of tidal marsh in the western United States. It is home to several state and federally listed species that are threatened or endangered. Climate change is a potential threat to these tidal marsh habitats through accelerated sea-level rise. The Wetland Accretion Rate Model for Ecosystem Resilience, or WARMER, is a 1-D vertical model of elevation at a point representative of target wetland habitat. WARMER incorporates both biological and physical components of vertical marsh accretion processes based on previous wetland models and is modified to incorporate mechanistic organic matter and inorganic deposition and the predicted sea-level rise curve for the San Francisco Estuary. Processes that are currently being modified include relative sea-level rise, inorganic sediment deposition, organic matter production, decomposition, and compaction. The model will be applied to marshes across the San Francisco Estuary, and results will be used to evaluate the extent to which sea-level rise will reduce the functional habitat of the threatened black rail (Laterallus jamaicensis coturniculus), the endangered California clapper rail (Rallus longirostris obsoletus), and the endangered salt marsh harvest mouse (Reithrodontomys raviventris). Here we present a sensitivity analysis of key model parameters. Previous studies have noted that inorganic sediment deposition, initial elevation and pore space are the most sensitive parameters. Consistent with these studies, sensitivity analysis shows that pore space is the most sensitive parameter in the current model and that the modified inorganic sediment deposition subroutine is particularly sensitive to the parameterization of settling velocity. Perturbations to initial elevation, the rate of sea-level rise, organic matter input rates and percent refractory organic matter had small impacts on the modeled final elevation. Proper characterization of marsh sediment pore space and temporally variable sediment

  18. Integrating topography, hydrology and rock structure in weathering rate models of spring watersheds

    NASA Astrophysics Data System (ADS)

    Pacheco, Fernando A. L.; Van der Weijden, Cornelis H.

    2012-03-01

    Summary: Weathering rate models designed for watersheds combine chemical data of discharging waters with morphologic and hydrologic parameters of the catchments. At the spring watershed scale, evaluation of morphologic parameters is subjective due to difficulties in conceiving the catchment geometry. Besides, when springs emerge from crystalline massifs, rock structure must be accounted for in formulas describing the area of minerals exposed to the percolating fluids, for a realistic evaluation of the rates. These particular features are not included in the available approaches, and for that reason a new model was developed, coined the THROW model. This is a lumped approach that integrates (T)opography, (H)ydrology, (RO)ck structure and (W)eathering in a single algorithm. The study area comprises several stream watersheds and spring sites of the Vouga River basin (northern Portugal), shaped on granites. Firstly, the THROW model couples a terrain modeling analysis with hydrologic models based on discharge rates, to determine hydraulic conductivities (K), effective porosities (ne) and annual recharges (Vr) at the stream watershed scale. Subsequently, these parameters are used in a water balance model to estimate concomitant groundwater travel times (t). The mean K [(4.7 ± 3.2) × 10^-7 m s^-1] and ne [(2.0 ± 1.3) × 10^-2] values are adopted as proxies for the spring watersheds, and a firm regression equation is defined between time and stream watershed area (A). Secondly, two more runs of terrain modeling analysis are executed to extrapolate morphologic parameters for the spring watersheds. The first run hinges on scaling properties of the drainage networks, known as Horton laws, and is used to scale watershed areas across stream orders (i). The scaling function is described by another regression equation. The second run evaluates the order of a spring watershed, defined as the equivalent order (ieq) and equated to the mean order of the surrounding stream watersheds. Having

  19. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    NASA Astrophysics Data System (ADS)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

    The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high-volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect its system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of these data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with
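
    A sketch of the hybrid approach in statsmodels terms: fit a log-log flow-turbidity rating curve, then model its residuals as an ARMA(1,2) process to carry event persistence into the forecast (series names and values are hypothetical):

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.tsa.arima.model import ARIMA

      n = 365                                  # hypothetical daily series
      flow = np.exp(np.random.normal(3.0, 0.8, n))             # m3/s
      turb = np.exp(1.2 * np.log(flow) - 2.0 + np.random.normal(0, 0.4, n))

      rating = sm.OLS(np.log(turb), sm.add_constant(np.log(flow))).fit()
      arma = ARIMA(rating.resid, order=(1, 0, 2)).fit()        # ARMA(1,2)

      # one-day-ahead turbidity: rating-curve prediction plus residual forecast
      print(np.exp(rating.predict()[-1] + arma.forecast(1)[0]))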

  20. Distortion-rate models for entropy-coded lattice vector quantization.

    PubMed

    Raffy, P; Antonini, M; Barlaud, M

    2000-01-01

    The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization, while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have proven to outperform the well-known EZW algorithm in terms of rate-distortion tradeoff. In this paper, we focus our attention on the modeling of the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) on fixed-rate cubic quantizers to lattices under a high-rate assumption. Second, we derive new rate models for ECLVQ, efficient at low bit rates without any high-rate assumptions. Simulation results prove the precision of our models. PMID:18262939

  1. CMOS low data rate imaging method based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Xiao, Long-long; Liu, Kun; Han, Da-peng

    2012-07-01

    Complementary metal-oxide semiconductor (CMOS) technology enables the integration of image sensing and image compression processing, making improvements in overall system performance possible. We present a CMOS low data rate imaging approach implementing compressed sensing (CS). Within the CS framework, the image sensor projects the image onto a separable two-dimensional (2D) basis set and measures the corresponding coefficients. First, the electrical current outputs from the pixels in a column are combined, with weights specified by voltage, in accordance with Kirchhoff's law. The second computation is performed in an analog vector-matrix multiplier (VMM). Each element of the VMM takes the total value of each column as its input and multiplies it by a unique coefficient. Both weights and coefficients are reprogrammable through analog floating-gate (FG) transistors. The image can be recovered from a percentage of these measurements using an optimization algorithm. This percentage, which can be altered flexibly by programming the hardware circuit, determines the image compression ratio. These novel designs facilitate image compression during the image-capture phase, before storage, and have the potential to reduce power consumption. Experimental results demonstrate that the proposed method achieves a large image compression ratio and ensures imaging quality.
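
    A sketch of the recovery step: given measurements y = Phi x of a sparse signal, iterative soft thresholding (ISTA) reconstructs x, and the number of rows of Phi sets the compression ratio. A toy 1D case, not the sensor's separable 2D implementation:

      import numpy as np

      def ista(phi, y, lam=0.05, iters=300):
          # Minimize 0.5*||y - phi x||^2 + lam*||x||_1 by soft thresholding.
          step = 1.0 / np.linalg.norm(phi, 2) ** 2
          x = np.zeros(phi.shape[1])
          for _ in range(iters):
              x = x + step * phi.T @ (y - phi @ x)         # gradient step
              x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)
          return x

      n, m, k = 256, 64, 8                 # 4:1 compression of a k-sparse signal
      x_true = np.zeros(n)
      x_true[np.random.choice(n, k, replace=False)] = np.random.randn(k)
      phi = np.random.randn(m, n) / np.sqrt(m)
      print(np.linalg.norm(ista(phi, phi @ x_true) - x_true))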

  2. Strain Rate Behavior of HTPB-Based Magnetorheological Materials

    NASA Astrophysics Data System (ADS)

    Stoltz, Chad; Seminuk, Kenneth; Joshi, Vasant

    2013-06-01

    It is of particular interest to determine whether the mechanical properties of binder systems can be manipulated by adding ferrous or magnetostrictive particulates. The strain rate response of two HTPB/Fe (hydroxyl-terminated polybutadiene/iron) compositions under electromagnetic fields has been investigated using a split Hopkinson pressure bar arrangement equipped with aluminum bars. Two HTPB/Fe compositions were developed, the first without plasticizer and the second containing plasticizer. Samples were tested with and without the application of a 0.01 Tesla magnetic field coil. Strain gauge data from the split Hopkinson pressure bar were used to determine which mechanical properties were changed by inducing a mild electromagnetic field in each sample. The data reduction method used to obtain stress-strain plots included dispersion corrections for deciphering minute changes due to compositional alterations. The data indicate changes in the mechanical stress-strain curves and suggest that the impedance of a binder system can be altered by means of a magnetic field. We acknowledge the Defense Threat Reduction Agency for funding.

  3. A vorticity dynamics based model for the turbulent dissipation: Model development and validation

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liou, William W.; Shabbir, Aamir; Yang, Zhigang; Zhu, Jian

    1994-01-01

    A new model dissipation rate equation is proposed based on the dynamic equation of the mean-square vorticity fluctuation for large-Reynolds-number turbulence. The advantage of working with the vorticity fluctuation equation is that the physical meanings of its terms are clearer than those in the dissipation rate equation; hence, model development based on the vorticity fluctuation equation is more straightforward. The resulting form of the model equation is consistent with the spectral energy cascade analysis introduced by Lumley. The proposed model dissipation rate equation is numerically well behaved and can be applied to any level of turbulence modeling. It is applied here to a realizable eddy viscosity model. Flows that are examined include: rotating homogeneous shear flows; free shear flows; a channel flow and flat-plate boundary layers with and without pressure gradients; and backward-facing step separated flows. In most cases, the present model predictions show considerable improvement over the standard k-epsilon model.

  4. Quantitative extraction of bedrock exposed rate based on unmanned aerial vehicle data and TM image in Karst Environment

    NASA Astrophysics Data System (ADS)

    wang, hongyan; li, qiangzi; du, xin; zhao, longcai

    2016-04-01

    In the karst regions of Southwest China, rocky desertification is one of the most serious land degradation problems. The bedrock exposed rate is one of the important indexes used to assess the degree of rocky desertification in karst regions. Because of its inherent advantages of large spatial coverage, frequency, efficiency, and synthesis, remote sensing is a promising method for monitoring and assessing karst rocky desertification at large scales. However, field measurement of the bedrock exposed rate is difficult, and existing remote sensing methods cannot directly extract the bedrock exposed rate owing to the high complexity and heterogeneity of karst environments. Therefore, based on UAV and TM data, this paper selected Xingren County as the research area and developed a quantitative extraction of the bedrock exposed rate from multi-scale remote sensing data. First, we used an object-oriented method to carry out an accurate classification of the UAV image, and based on the results of rock extraction, the bedrock exposed rate was calculated at a 30 m grid scale. Part of the calculated samples served as training data, and the remaining samples were used for model validation. Second, in each grid cell the band reflectance of the TM data was extracted, and a variety of rock and vegetation indices (NDVI, SAVI, etc.) were calculated. Finally, a network model was established to extract the bedrock exposed rate; the correlation coefficient (R) of the network model was 0.855, the correlation coefficient (R) of the validation was 0.677, and the root mean square error (RMSE) was 0.073. Based on the quantitative inversion model, a distribution map of the bedrock exposed rate in Xingren County was obtained. Keywords: bedrock exposed rate, quantitative extraction, UAV and TM data, karst rocky desertification.
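
    As a concrete illustration of the per-grid regression step, the sketch below computes NDVI and SAVI from red and near-infrared reflectances and fits exposure fractions against them; a plain least-squares fit stands in for the paper's network model, and all data are synthetic placeholders:

```python
# Per-grid index computation and exposure regression (toy version).
import numpy as np

rng = np.random.default_rng(1)
n_grid = 500

red = rng.uniform(0.05, 0.4, n_grid)   # TM band 3 reflectance (assumed)
nir = rng.uniform(0.10, 0.5, n_grid)   # TM band 4 reflectance (assumed)

ndvi = (nir - red) / (nir + red)
L = 0.5                                # SAVI soil-adjustment factor
savi = (1 + L) * (nir - red) / (nir + red + L)

# Hypothetical "true" exposure fraction decreasing with vegetation cover.
exposure = np.clip(0.8 - 1.2 * ndvi + rng.normal(0, 0.05, n_grid), 0, 1)

# Train/validation split, mirroring the UAV-derived sample grids.
X = np.column_stack([np.ones(n_grid), red, nir, ndvi, savi])
train, val = np.arange(0, 350), np.arange(350, n_grid)
beta, *_ = np.linalg.lstsq(X[train], exposure[train], rcond=None)

pred = X[val] @ beta
r = np.corrcoef(pred, exposure[val])[0, 1]
rmse = np.sqrt(np.mean((pred - exposure[val]) ** 2))
print(f"validation R = {r:.3f}, RMSE = {rmse:.3f}")
```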

  5. Improved community model for social networks based on social mobility

    NASA Astrophysics Data System (ADS)

    Lu, Zhe-Ming; Wu, Zhen; Luo, Hao; Wang, Hao-Xian

    2015-07-01

    This paper proposes an improved community model for social networks based on social mobility. The relationship between the group distribution and the community size is investigated in terms of communication rate and turnover rate. The degree distributions, clustering coefficients, average distances and diameters of networks are analyzed. Experimental results demonstrate that the proposed model possesses the small-world property and can reproduce social networks effectively and efficiently.

  6. [Assessment of the distance between categories in rating scales by using the item response model].

    PubMed

    Wakita, Takafumi

    2004-10-01

    This study aimed to assess the distance between adjacent categories of rating scales. It is common practice to treat ordinal variables as interval-scaled variables in the analysis of rating scales. Strictly speaking, however, ordinal scale data should be treated as such, since there is little reason or assurance that they are equivalent to interval variables. In view of this practice, this study proposes a method to assess the intervals of rating scales, and analyzes empirical data in order to examine the results obtained by the method. This method is based upon the generalized partial credit model, which is one of the item response theory (IRT) models. The experiment was carried out on two data sets that differed only in the verbal phrasing of the rating categories. The main results of the study were: 1) the difference in item content (positive or negative) affects the width of the neutral category; and 2) the distance between categories differs significantly, reflecting the difference in verbal phrasing. PMID:15747553
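
    The generalized partial credit model underlying the analysis assigns category k a probability proportional to exp(sum over j<=k of a(theta - b_j)); the spacing of the step parameters b_j is what quantifies the distance between adjacent categories. A minimal sketch with illustrative parameter values:

```python
# Category-response probabilities under the generalized partial credit
# model (GPCM). Parameter values are illustrative assumptions.
import numpy as np

def gpcm_probs(theta, a, b):
    """P(X = k | theta) for k = 0..K, discrimination a, step parameters b."""
    # Cumulative sums of a*(theta - b_j); category 0 has an empty sum of 0.
    z = np.concatenate([[0.0], np.cumsum(a * (theta - np.asarray(b)))])
    ez = np.exp(z - z.max())           # numerically stable softmax
    return ez / ez.sum()

a = 1.2
b = [-1.5, -0.2, 0.4, 1.6]             # 5-category item, 4 steps (assumed)
for theta in (-2.0, 0.0, 2.0):
    print(theta, np.round(gpcm_probs(theta, a, b), 3))
```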

  7. Agreement in cardiovascular risk rating based on anthropometric parameters

    PubMed Central

    Dantas, Endilly Maria da Silva; Pinto, Cristiane Jordânia; Freitas, Rodrigo Pegado de Abreu; de Medeiros, Anna Cecília Queiroz

    2015-01-01

    Objective To investigate the agreement in evaluation of risk of developing cardiovascular diseases based on anthropometric parameters in young adults. Methods The study included 406 students, whose weight, height, and waist and neck circumferences were measured; the waist-to-height ratio and the conicity index were calculated. The kappa coefficient was used to assess agreement in risk classification for cardiovascular diseases. The positive and negative specific agreement values were calculated as well. The Pearson chi-square (χ2) test was used to assess associations between categorical variables (p<0.05). Results Many of the parameters assessed (44%) showed slight (k=0.21 to 0.40) and/or poor agreement (k<0.20), with low values of negative specific agreement. The best agreement was observed between waist circumference and waist-to-height ratio, both for the general population (k=0.88) and between sexes (k=0.93 and 0.86). There was a significant association (p<0.001) between the risk of cardiovascular diseases and females when using waist circumference and conicity index, and with males when using neck circumference. This resulted in a wide variation in the prevalence of cardiovascular disease risk (5.5%-36.5%), depending on the parameter and the sex that was assessed. Conclusion The results indicate variability in agreement in assessing risk for cardiovascular diseases based on anthropometric parameters, which also seems to be influenced by sex. Further studies in the Brazilian population are required to better understand this issue. PMID:26466060
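
    For reference, the kappa statistic used for the agreement analysis can be computed from a 2x2 cross-classification of risk ratings in a few lines; the counts below are made-up numbers for illustration:

```python
# Cohen's kappa for agreement between two binary risk classifications.
import numpy as np

def cohens_kappa(table):
    table = np.asarray(table, dtype=float)
    n = table.sum()
    po = np.trace(table) / n                          # observed agreement
    pe = (table.sum(0) * table.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# rows: waist circumference (risk, no risk);
# cols: waist-to-height ratio (risk, no risk). Hypothetical counts.
table = [[60, 10],
         [12, 324]]
print(f"kappa = {cohens_kappa(table):.2f}")
```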

  8. Evaluating range-expansion models for calculating nonnative species' expansion rate

    PubMed Central

    Preuss, Sonja; Low, Matthew; Cassel-Lundhagen, Anna; Berggren, Åsa

    2014-01-01

    Species range shifts associated with environmental change or biological invasions are increasingly important study areas. However, quantifying range expansion rates may be heavily influenced by methodology and/or sampling bias. We compared expansion rate estimates of Roesel's bush-cricket (Metrioptera roeselii, Hagenbach 1822), a nonnative species currently expanding its range in south-central Sweden, from range statistic models based on distance measures (mean, median, 95th gamma quantile, marginal mean, maximum, and conditional maximum) and an area-based method (grid occupancy). We used sampling simulations to determine the sensitivity of the different methods to incomplete sampling across the species' range. For periods when we had comprehensive survey data, range expansion estimates clustered into two groups: (1) those calculated from range margin statistics (gamma, marginal mean, maximum, and conditional maximum: ~3 km/year), and (2) those calculated from the central tendency (mean and median) and the area-based method of grid occupancy (~1.5 km/year). Range statistic measures differed greatly in their sensitivity to sampling effort; the proportion of sampling required to achieve an estimate within 10% of the true value ranged from 0.17 to 0.9. Grid occupancy and median were most sensitive to sampling effort, and the maximum and gamma quantile the least. If periods with incomplete sampling were included in the range expansion calculations, this generally lowered the estimates (range 16–72%), with the exception of the gamma quantile, which was slightly higher (6%). Care should be taken when interpreting rate expansion estimates from data sampled from only a fraction of the full distribution. Methods based on the central tendency will give rates approximately half that of methods based on the range margin. The gamma quantile method appears to be the most robust to incomplete sampling bias and should be considered as the method of choice when sampling the entire
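
    The contrast between margin-based and central-tendency estimators is easy to reproduce: simulate occurrence records whose range front advances at a fixed rate, compute each distance statistic per year, and regress against time. The simulation below is a toy with assumed numbers, not the bush-cricket data:

```python
# Expansion-rate estimates from different range statistics (toy data).
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(2000, 2011)
true_rate = 3.0                          # km/year advance at the margin

stats = {"mean": [], "median": [], "q95": [], "max": []}
for i, yr in enumerate(years):
    # Distances (km) of records from the introduction point: the front
    # advances ~3 km/year while the core range stays occupied.
    d = rng.uniform(0, 10 + true_rate * i, size=200)
    stats["mean"].append(d.mean())
    stats["median"].append(np.median(d))
    stats["q95"].append(np.quantile(d, 0.95))
    stats["max"].append(d.max())

# Slope of each statistic against time = that method's expansion rate.
for name, vals in stats.items():
    slope = np.polyfit(years, vals, 1)[0]
    print(f"{name:>6}: {slope:.2f} km/year")
```

    The central-tendency statistics recover roughly half the margin rate, mirroring the pattern reported above.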

  9. Exposure Modeling of Residential Air Exchange Rates for NEXUS Participants.

    EPA Science Inventory

    Due to cost and participant burden of personal measurements, air pollution health studies often estimate exposures using local ambient air monitors. Since outdoor levels do not necessarily reflect personal exposures, we developed the Exposure Model for Individuals (EMI) to improv...

  10. Exposure Modeling of Residential Air Exchange Rates for NEXUS Participants

    EPA Science Inventory

    Due to cost and participant burden of personal measurements, air pollution health studies often estimate exposures using local ambient air monitors. Since outdoor levels do not necessarily reflect personal exposures, we developed the Exposure Model for Individuals (EMI) to improv...

  11. Enhanced algorithm based on persistent scatterer interferometry for the estimation of high-rate land subsidence

    NASA Astrophysics Data System (ADS)

    Sadeghi, Zahra; Valadan Zoej, Mohammad Javad; Dehghani, Maryam; Chang, Ni-Bin

    2012-01-01

    Persistent scatterer interferometry (PSI) techniques using amplitude analysis and considering a temporal deformation model for PS pixel selection are unable to identify PS pixels in rural areas lacking human-made structures. In contrast, high rates of land subsidence lead to significant phase-unwrapping errors in a recently developed PSI algorithm (StaMPS) that applies phase stability and amplitude analysis to select the PS pixels in rural areas. The objective of this paper is to present an enhanced algorithm based on PSI to estimate the deformation rate in rural areas undergoing high and nearly constant rates of deformation. The proposed approach integrates the strengths of all of the existing PSI algorithms in PS pixel selection and phase unwrapping. PS pixels are first selected based on the amplitude information and phase-stability estimation as performed in StaMPS. The phase-unwrapping step, including the deformation rate and phase-ambiguity estimation, is then performed using least-squares ambiguity decorrelation adjustment (LAMBDA). The atmospheric phase screen (APS) and the nonlinear deformation contribution to the phase are estimated by applying a high-pass temporal filter to the residuals derived from the LAMBDA method. The final deformation rate and the ambiguity parameter are re-estimated after subtracting the APS and the nonlinear deformation from the initial phase. The proposed method is applied to 22 ENVISAT ASAR images of the southwestern Tehran basin captured between 2003 and 2008. A quantitative comparison with the results obtained from leveling and GPS measurements demonstrates the significant improvement achieved by the proposed PSI technique.

  12. Modeling the influence of decomposing organic solids on sulfate reduction rates for iron precipitation.

    PubMed

    Hemsi, Paulo S; Shackelford, Charles D; Figueroa, Linda A

    2005-05-01

    The influence of decomposing organic solids on sulfate (SO4(2-)) reduction rates for metals precipitation in sulfate-reducing systems, such as in bioreactors and permeable reactive barriers for treatment of acid mine drainage, is modeled. The results are evaluated by comparing the model simulations with published experimental data for two single-substrate and two multiple-substrate batch experiments. The comparisons are based on the temporal trends in SO4(2-), ferrous iron (Fe2+), and hydrogen sulfide (H2S) concentrations, as well as on rates of sulfate reduction. The temporal behaviors of organic solid materials, dissolved organic substrates, and different bacterial populations also are simulated. The simulated results using Contois kinetics for polysaccharide decomposition, Monod kinetics for lactate-based sulfate reduction, instantaneous or kinetically controlled precipitation of ferrous iron monosulfide (FeS), and partial volatilization of H2S to the gas phase compare favorably with the experimental data. When Contois kinetics of polysaccharide decomposition is replaced by first-order kinetics to simulate one of the single-substrate batch experiments, a comparatively poorer approximation of the rates of sulfate reduction is obtained. The effect of sewage sludge in boosting the short-term rate of sulfate reduction in one of the multiple-substrate experiments also is approximated reasonably well. The results illustrate the importance of the type of kinetics used to describe the decomposition of organic solids for metals precipitation in sulfate-reducing systems, as well as the potential application of the model as a predictive tool for assisting in the design of similar biochemical systems. PMID:15926572
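
    A toy forward integration of the kinetic structure described above (Contois decomposition of organic solids feeding dual-Monod sulfate reduction) reproduces the qualitative behavior; every rate constant below is an illustrative assumption, not a fitted value from the study:

```python
# Euler integration of Contois decomposition + Monod sulfate reduction.
# All parameter values are illustrative assumptions.
dt, T = 0.01, 60.0                           # time step and horizon, days
S_org, lac, SO4, H2S = 10.0, 0.5, 8.0, 0.0   # mmol/L
X_dec, X_srb = 0.1, 0.1                      # biomass proxies

mu_dec, K_contois = 0.3, 2.0                 # decomposer kinetics
mu_srb, K_lac, K_so4, Y = 0.6, 0.2, 0.3, 0.1 # sulfate-reducer kinetics

for _ in range(int(T / dt)):
    # Contois: rate depends on substrate available per unit biomass.
    r_dec = mu_dec * X_dec * S_org / (K_contois * X_dec + S_org)
    # Dual-Monod sulfate reduction on the lactate released.
    r_srb = mu_srb * X_srb * (lac / (K_lac + lac)) * (SO4 / (K_so4 + SO4))
    S_org += -r_dec * dt
    lac   += (r_dec - r_srb) * dt
    SO4   += -0.5 * r_srb * dt               # 0.5 mol SO4 per mol lactate
    H2S   += 0.5 * r_srb * dt                # before FeS precipitation/loss
    X_dec += Y * r_dec * dt
    X_srb += Y * r_srb * dt

print(f"after {T:.0f} d: SO4 = {SO4:.2f} mM, H2S produced = {H2S:.2f} mM")
```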

  13. Empirical Study of User Preferences Based on Rating Data of Movies

    PubMed Central

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings. PMID:26735847

  14. Empirical Study of User Preferences Based on Rating Data of Movies.

    PubMed

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings. PMID:26735847

  15. Modeling Grain Size and Strain Rate in Linear Friction Welded Waspaloy

    NASA Astrophysics Data System (ADS)

    Chamanfar, Ahmad; Jahazi, Mohammad; Gholipour, Javad; Wanjara, Priti; Yue, Stephen

    2013-09-01

    The high-temperature deformation behavior of the Ni-base superalloy Waspaloy was investigated using uniaxial isothermal compression testing at temperatures above the γ' solvus (1333 K, 1373 K, and 1413 K; 1060 °C, 1100 °C, and 1140 °C), for constant true strain rates of 0.001, 0.01, 0.1, and 1 s-1 and up to a true strain of 0.83. Flow softening and microstructural investigation indicated that dynamic recrystallization took place during deformation. For the investigated conditions, the strain rate sensitivity factor and the activation energy of hot deformation were 0.199 and 462 kJ/mol, respectively. Constitutive equations relating the dynamically recrystallized grain size to the deformation temperature and strain rate were developed and used to predict the grain size and strain rate in linear friction-welded (LFWed) Waspaloy. The predictions were validated against experimental findings and data reported in the literature. It was found that the equations can reliably predict the grain size of LFWed Waspaloy. Furthermore, the estimated strain rate was in agreement with finite element modeling data reported in the literature.
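
    The constitutive route from deformation conditions to recrystallized grain size is usually written through the Zener-Hollomon parameter, Z = strain rate * exp(Q/RT); inverting a power-law grain-size relation d = C * Z^(-m) then yields an estimated local strain rate from a measured weld grain size. In the sketch below, only Q = 462 kJ/mol comes from the record; C and m are assumed fit constants for illustration:

```python
# Zener-Hollomon-based DRX grain-size relation and its inversion.
import numpy as np

R = 8.314           # J/(mol K)
Q = 462e3           # J/mol, activation energy reported above
C, m = 5e3, 0.15    # assumed fit constants, d in micrometres

def grain_size(strain_rate, T):
    """Predicted DRX grain size (um) at temperature T (K)."""
    Z = strain_rate * np.exp(Q / (R * T))
    return C * Z ** (-m)

def strain_rate_from_d(d, T):
    """Invert d = C * Z^(-m) to estimate the local strain rate (1/s)."""
    Z = (C / d) ** (1.0 / m)
    return Z * np.exp(-Q / (R * T))

T = 1373.0           # K
for rate in (0.001, 0.01, 0.1, 1.0):
    print(f"rate {rate:>6} 1/s -> d = {grain_size(rate, T):.1f} um")
print(f"inferred rate for d = 10 um: {strain_rate_from_d(10.0, T):.2f} 1/s")
```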

  16. Comparison of CALPUFF and ISCST3 models for predicting downwind odor and source emission rates

    NASA Astrophysics Data System (ADS)

    Wang, Lingjuan; Parker, David B.; Parnell, Calvin B.; Lacey, Ronald E.; Shaw, Bryan W.

    The CALPUFF and ISCST3 Gaussian dispersion models were evaluated for predicting downwind odor concentrations and back-calculating area-source odor emission rates. The comparison between the predicted and field-sampled downwind concentrations indicates that the CALPUFF model predicted average downwind odor concentrations fairly well. However, ISCST3 tended to underpredict downwind odor concentrations relative to the measured concentrations. Both the CALPUFF and ISCST3 models failed to predict peak odor concentrations using a constant average emission rate. Odor emission rates obtained by back-calculating fluxes using the CALPUFF and ISC models with the same field measurements of downwind odor concentrations are significantly different, indicating that back-calculated emission rates are model-specific. The modeled emission rates tended to be higher than flux chamber source sampling results; the flux chamber protocol may underestimate odor emission rates.
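
    Because a Gaussian-plume concentration is linear in the emission rate, back-calculation amounts to scaling the modeled unit-emission concentration to the field measurement, which is also why the back-calculated rate inherits the biases of whichever model is chosen. A minimal sketch with crude power-law dispersion coefficients (not the ISCST3 or CALPUFF parameterizations):

```python
# Gaussian plume forward model and one-step emission back-calculation.
import numpy as np

def plume_conc(Q, x, u, H=0.0):
    """Ground-level centreline concentration at downwind distance x (m)."""
    sig_y = 0.08 * x ** 0.90          # assumed dispersion coefficients
    sig_z = 0.06 * x ** 0.85
    return Q / (np.pi * u * sig_y * sig_z) * np.exp(-H**2 / (2 * sig_z**2))

u = 3.0               # wind speed, m/s (assumed)
x = 300.0             # receptor distance, m (assumed)
measured = 12.0       # field odor concentration, OU/m3 (assumed)

# The model is linear in Q, so back-calculation is a single division.
Q_back = measured / plume_conc(1.0, x, u)
print(f"back-calculated emission rate: {Q_back:.1f} OU/s")
```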

  17. A pheromone-rate-based analysis on the convergence time of ACO algorithm.

    PubMed

    Huang, Han; Wu, Chun-Guo; Hao, Zhi-Feng

    2009-08-01

    Ant colony optimization (ACO) has widely been applied to solve combinatorial optimization problems in recent years. There are few studies, however, on its convergence time, which reflects how many iterations ACO algorithms need to converge to the optimal solution. Based on the absorbing Markov chain model, we analyze the ACO convergence time in this paper. First, we present a general result for the estimation of convergence time that reveals the relationship between convergence time and pheromone rate. This general result is then extended to a two-step analysis of the convergence time, which includes the following: 1) the iteration time that the pheromone rate spends on reaching the objective value and 2) the convergence time that is calculated with the objective pheromone rate in expectation. Furthermore, four brief ACO algorithms are investigated by using the proposed theoretical results as case studies. Finally, the conclusion of the case studies, namely that the pheromone rate and its deviation determine the expected convergence time, is numerically verified with the experimental results of four one-ant ACO algorithms and four ten-ant ACO algorithms. PMID:19380276

  18. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference

    PubMed Central

    Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J.

    2015-01-01

    The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an ‘intelligent coincidence detector’, which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888
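
    A toy generalized Lotka-Volterra network shows how an asymmetric inhibition matrix yields the sequential firing-rate patterns the model builds on; the connectivity, noise level, and unit count below are illustrative assumptions, not the fitted antennal-lobe parameters:

```python
# Winnerless-competition Lotka-Volterra dynamics producing a sequence
# of firing-rate peaks. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
N, dt, steps = 6, 0.01, 12000

# Strong mutual inhibition everywhere except onto each unit's
# successor, which drives the activity sequence x0 -> x1 -> ... .
rho = np.full((N, N), 2.0)
np.fill_diagonal(rho, 1.0)
for i in range(N):
    rho[(i + 1) % N, i] = 0.5        # successor only weakly inhibited

x = np.full(N, 1e-3)
x[0] = 0.1                            # seed the first unit
trace = np.empty((steps, N))
for t in range(steps):
    # dx_i/dt = x_i (1 - sum_j rho_ij x_j), plus a small noise floor.
    x = x + dt * x * (1.0 - rho @ x) + 1e-3 * dt * rng.random(N)
    x = np.clip(x, 1e-9, None)
    trace[t] = x

first_hit = (trace > 0.5).argmax(axis=0)   # first half-maximum crossing
print("sequence of activation:", np.argsort(first_hit))
```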

  19. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference.

    PubMed

    Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J

    2015-10-01

    The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888

  20. Identifying sleep apnea syndrome using heart rate and breathing effort variation analysis based on ballistocardiography.

    PubMed

    Weichao Zhao; Hongbo Ni; Xingshe Zhou; Yalong Song; Tianben Wang

    2015-08-01

    Sleep apnea syndrome (SAS) is regarded as one of the most common sleep-related breathing disorders, which can severely affect sleep quality. Since SAS is usually accompanied by cyclical heart rate variation (HRV), many studies have used heart rate (HR) to identify it at an earlier stage. While most related work is based on clinical devices or signals (e.g., polysomnography (PSG), electrocardiography (ECG)), in this paper we focus on the ballistocardiographic (BCG) signal, which is obtained in a non-invasive way. Moreover, as the precision and reliability of the BCG signal are not as good as those of PSG or ECG, we propose a fine-grained feature extraction and analysis approach for SAS recognition. Our analysis takes both the basic HRV features and the breathing effort variation into consideration during different sleep stages rather than over the whole night. The breathing effort refers to the mechanical interaction between respiration and the BCG signal when SAS events occur, which is independent of autonomic nervous system (ANS) modulations. Specifically, a novel method named STC-Min is presented to extract the breathing effort variation feature. The basic HRV features depict the ANS modulations of HR, and Sample Entropy and Detrended Fluctuation Analysis are applied for their evaluation. All the extracted features, along with personal factors, are fed into a knowledge-based support vector machine (KSVM) classification model, where the prior knowledge is based on the dataset distribution and domain knowledge. Experimental results on 42 subjects over 3 nights validate the effectiveness of the methods and features in identifying SAS (90.46% precision rate and 88.89% recall rate). PMID:26737303

  1. Sensor-based interior modeling

    SciTech Connect

    Herbert, M.; Hoffman, R.; Johnson, A.; Osborn, J.

    1995-02-01

    Robots and remote systems will play crucial roles in future decontamination and decommissioning (D&D) of nuclear facilities. Many of these facilities, such as uranium enrichment plants, weapons assembly plants, research and production reactors, and fuel recycling facilities, are dormant; there is also an increasing number of commercial reactors whose useful lifetime is nearly over. To reduce worker exposure to radiation, occupational and other hazards associated with D&D tasks, robots will execute much of the work agenda. Traditional teleoperated systems rely on human understanding (based on information gathered by remote viewing cameras) of the work environment to safely control the remote equipment. However, removing the operator from the work site substantially reduces his efficiency and effectiveness. To approach the productivity of a human worker, tasks will be performed telerobotically, in which many aspects of task execution are delegated to robot controllers and other software. This paper describes a system that semi-automatically builds a virtual world for remote D&D operations by constructing 3-D models of a robot's work environment. Planar and quadric surface representations of objects typically found in nuclear facilities are generated from laser rangefinder data with a minimum of human interaction. The surface representations are then incorporated into a task space model that can be viewed and analyzed by the operator, accessed by motion planning and robot safeguarding algorithms, and ultimately used by the operator to instruct the robot at a level much higher than teleoperation.

  2. Assessment of chloroethene degradation rates based on ratios of daughter/parent compounds in groundwater plumes

    NASA Astrophysics Data System (ADS)

    Höhener, Patrick

    2014-05-01

    Chlorinated solvent spills at industrial and urban sites create groundwater plumes in which tetrachloro- and trichloroethene may degrade to their daughter compounds: dichloroethenes, vinyl chloride and ethane. The assessment of degradation and natural attenuation at such sites may be based on the analysis and inverse modelling of concentration data, on the calculation of mass fluxes in transects, and/or on the analysis of stable isotope ratios in the ethenes. Relatively little work has investigated the possibility of using ratios of concentrations to gain information on degradation rates. The use of ratios has the advantage that dilution of a single sample with contaminant-free water does not matter. It will be shown that molar ratios of daughter to parent compounds measured along a plume streamline are a rapid and robust means of determining whether degradation rates increase or decrease along the degradation chain, and furthermore allow a quantification of the relative magnitude of degradation rates compared to the rate of the parent compound. Furthermore, ratios of concentrations become constant in zones where degradation is absent, and this allows sketching the extension of actively degrading zones. The assessment is possible for pure sources and also for mixed sources. A quantification method is proposed in order to estimate first-order degradation rates in zones of constant degradation activity. This quantification method includes corrections that are needed due to longitudinal and transversal dispersivity. The method was tested on a number of real field sites from the literature. At the majority of these sites, the first-order degradation rates decreased along the degradation chain from tetrachloroethene to vinyl chloride, meaning that the latter often reached substantial concentrations. This is bad news for site owners due to the increased toxicity of vinyl chloride compared to its parent compounds.
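
    For a sequential first-order chain with travel time t = x/v along the streamline, the Bateman solution gives the concentrations analytically, and the daughter/parent ratios follow directly; the rate constants and velocity below are illustrative, and dispersion corrections are omitted:

```python
# Daughter/parent ratios along a streamline for a first-order chain
# PCE -> TCE -> DCE -> VC. Illustrative parameters, no dispersion.
import numpy as np

k = np.array([0.8, 0.5, 0.3, 0.1])   # 1/yr for PCE, TCE, DCE, VC (assumed)
v = 30.0                             # seepage velocity, m/yr (assumed)
x = np.linspace(1.0, 300.0, 7)       # distances along the streamline, m
t = x / v                            # travel times, yr

def chain_conc(t, k, c0=1.0):
    """Bateman solution for a unit parent concentration at t = 0.
    Requires distinct rate constants."""
    tt = np.atleast_1d(t)
    n = len(k)
    c = np.zeros((n, tt.size))
    for i in range(n):
        pref = np.prod(k[:i])                      # k_0 * ... * k_(i-1)
        for j in range(i + 1):
            den = np.prod([k[m] - k[j] for m in range(i + 1) if m != j])
            c[i] += np.exp(-k[j] * tt) / den
        c[i] *= pref
    return c0 * c

c = chain_conc(t, k)
for name, ratio in zip(("TCE/PCE", "DCE/TCE", "VC/DCE"), c[1:] / c[:-1]):
    print(name, np.round(ratio, 2))
```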

  3. A Comparison of Model-Data Fit for Parametric and Nonparametric Item Response Theory Models Using Ordinal-Level Ratings

    ERIC Educational Resources Information Center

    Dyehouse, Melissa A.

    2009-01-01

    This study compared the model-data fit of a parametric item response theory (PIRT) model to a nonparametric item response theory (NIRT) model to determine the best-fitting model for use with ordinal-level alternate assessment ratings. The PIRT Generalized Graded Unfolding Model (GGUM) was compared to the NIRT Mokken model. Chi-square statistics…

  4. Model-based vision for car following

    NASA Astrophysics Data System (ADS)

    Schneiderman, Henry; Nashman, Marilyn; Lumia, Ronald

    1993-08-01

    This paper describes a vision processing algorithm that supports autonomous car following. The algorithm visually tracks the position of a 'lead vehicle' from the vantage of a pursuing 'chase vehicle.' The algorithm requires a 2-D model of the back of the lead vehicle. This model is composed of line segments corresponding to features that give rise to strong edges. There are seven sequential stages of computation: (1) Extracting edge points; (2) Associating extracted edge points with the model features; (3) Determining the position of each model feature; (4) Determining the model position; (5) Updating the motion model of the object; (6) Predicting the position of the object in the next image; (7) Predicting the location of all object features from the prediction of object position. All processing is confined to the 2-D image plane. The 2-D model location computed in this processing is used to determine the position of the lead vehicle with respect to a 3-D coordinate frame affixed to the chase vehicle. This algorithm has been used as part of a complete system to drive an autonomous vehicle, a High Mobility Multipurpose Wheeled Vehicle (HMMWV), such that it follows a lead vehicle at speeds up to 35 km/hr. The algorithm runs at an update rate of 15 Hz and has a worst case computational delay of 128 ms. The algorithm is implemented under the NASA/NBS Standard Reference Model for Telerobotic Control System Architecture (NASREM) and runs on a dedicated vision processing engine and a VME-based multiprocessor system.
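
    Stages (5)-(7) amount to a simple predictor-corrector on the 2-D model position; an alpha-beta tracker captures the idea. The gains and synthetic measurements below are assumptions; only the 15 Hz update interval comes from the record:

```python
# Alpha-beta tracking of an image-plane position: update the position
# and velocity estimate from each detection, then predict the next frame.
import numpy as np

dt = 1.0 / 15.0                 # 15 Hz update rate (from the text)
alpha, beta = 0.85, 0.3         # assumed tracker gains

pos = np.zeros(2)               # image-plane position estimate (pixels)
vel = np.zeros(2)               # velocity estimate (pixels/s)

for k in range(30):
    z = np.array([100.0 + 2.0 * k, 50.0 + 0.5 * k])  # synthetic detection
    pred = pos + vel * dt if k else z                # stage (6): predict
    resid = z - pred                                 # innovation
    pos = pred + alpha * resid                       # stages (4)-(5): update
    vel = vel + (beta / dt) * resid

print("next-frame prediction:", np.round(pos + vel * dt, 1))
```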

  5. Modeling irradiation creep of graphite using rate theory

    NASA Astrophysics Data System (ADS)

    Sarkar, Apu; Eapen, Jacob; Raj, Anant; Murty, K. L.; Burchell, T. D.

    2016-05-01

    We have examined irradiation induced creep of graphite in the framework of transition state rate theory. Experimental data for two grades of nuclear graphite (H-337 and AGOT) have been analyzed to determine the stress exponent (n) and activation energy (Q) for plastic flow under irradiation. We show that the mean activation energy lies between 0.14 and 0.32 eV with a mean stress-exponent of 1.0 ± 0.2. A stress exponent of unity and the unusually low activation energies strongly indicate a diffusive defect transport mechanism for neutron doses in the range of 3-4 × 10^22 n/cm^2.
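
    The reported n and Q come from fitting the standard power-law creep form, rate = A * sigma^n * exp(-Q/kT), which is linear in log space; a sketch of such a fit on synthetic data with assumed true parameters:

```python
# Multilinear fit of stress exponent n and activation energy Q from
# steady-state creep rates. Data and parameters are synthetic.
import numpy as np

kB = 8.617e-5                                # Boltzmann constant, eV/K
rng = np.random.default_rng(4)

# Synthetic creep rates with n = 1 and Q = 0.2 eV, mimicking the
# diffusive regime inferred above.
n_true, Q_true, lnA = 1.0, 0.2, -8.0
stress = rng.uniform(5.0, 30.0, 40)          # MPa
T = rng.uniform(500.0, 900.0, 40)            # K
ln_rate = (lnA + n_true * np.log(stress) - Q_true / (kB * T)
           + rng.normal(0, 0.05, 40))        # measurement scatter

# ln(rate) = lnA + n*ln(sigma) - Q/(kB*T): ordinary least squares.
X = np.column_stack([np.ones_like(T), np.log(stress), -1.0 / (kB * T)])
coef, *_ = np.linalg.lstsq(X, ln_rate, rcond=None)
print(f"fitted n = {coef[1]:.2f}, Q = {coef[2]:.2f} eV")
```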

  6. Differential Geometry Based Multiscale Models

    PubMed Central

    Wei, Guo-Wei

    2010-01-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier–Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson–Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson–Nernst–Planck equations that

  7. A Model for Estimation of Rain Rate on Tropical Land from TRMM Microwave Imager Radiometer Observations

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Kim, Kyu-Myong

    2004-01-01

    Over tropical land regions, observations of the 85 GHz brightness temperature (T(sub 85v)) made by the TRMM Microwave Imager (TMI) radiometer, when analyzed with the help of the rain rate (R(sub PR)) deduced from the TRMM Precipitation Radar (PR), indicate that there are two maxima in rain rate. One strong maximum occurs when T(sub 85v) has a value of about 220 K and the other, weaker one when T(sub 85v) is much colder, approx. 150 K. With the help of earlier studies based on airborne Doppler radar observations and radiative transfer simulations, we infer that the maximum near 220 K is a result of relatively weak scattering due to supercooled raindrops and water-coated ice hydrometeors associated with a developing thunderstorm (Cb) that has a strong updraft. The other maximum is associated with strong scattering due to ice particles that are formed when the updraft collapses and the rain from the Cb is transitioning from convective type to stratiform type. Incorporating these ideas, and with a view to improving the estimation of rain rate from the existing operational method applicable to tropical land areas, we have developed a rain retrieval model. This model utilizes two parameters, each with a horizontal scale of approx. 20 km, deduced from the TMI measurements at 19, 21 and 37 GHz (T(sub 19v), T(sub 21v), T(sub 37v)). The third parameter in the model, namely the horizontal gradient of brightness temperature within the 20 km scale, is deduced from TMI measurements at 85 GHz. Utilizing these parameters, our retrieval model is formulated to yield instantaneous rain rates on a scale of 20 km and seasonal averages on a mesoscale that agree well with those of the PR.

  8. An analytical prediction of pilot ratings utilizing human pilot model

    NASA Technical Reports Server (NTRS)

    Tanaka, K.; Washizu, K.

    1982-01-01

    In order to analytically predict pilot ratings, an evaluation method of a manual control system which consists of an aircraft and a human pilot, is proposed and examined. The method is constructed upon the assumptions that the control mission determines the critical frequency the pilot should bring to his focus, and that the degree of closed-loop stability and the human compensation necessary to attain the stability determine the human subjective evaluation of the system. As a result, a simple evaluation chart is introduced. The chart enables prediction of the subjective evaluation, if the controlled element dynamics and the mission are given. The chart is in good accord with almost all of the existing results of pilot ratings. This method has the following advantages: (1) simplicity, in a sense that the method needs to evaluate only two typical controlled element parameters, namely, the gain slope and the phase at the critical control frequency; (2) applicability to unstable controlled elements; (3) predictability of controllability limits of manual control; (4) possibility of estimating human compensatory dynamics.

  9. RESIDENTIAL AIR EXCHANGE RATES FOR USE IN INDOOR AIR AND EXPOSURE MODELING STUDIES

    EPA Science Inventory

    Data on air exchange rates are important inputs to indoor air quality models. Indoor air models, in turn, are incorporated into the structure of total human exposure models. Fragmentary data on residential ventilation rates are available in various governmental reports, journal art...

  10. Development of a pheromone elution rate physical model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A first principle modeling approach is applied to available data describing the elution of semiochemicals from pheromone dispensers. These data include field data for 27 products developed by several manufacturers, including homemade devices, as well as laboratory data collected on three semiochemi...

  11. Rate of neuronal fallout in a transsynaptic cerebellar model.

    PubMed

    Triarhou, L C

    1998-10-01

    Quantitative analyses of transsynaptic granule cell death subsequent to the genetically determined degeneration of Purkinje cells in the cerebellum of pcd/pcd mutant mice show that granule neuron fallout follows a typical mathematical pattern of elemental decay. Biological and theoretical connotations are discussed in light of the empirical observations and a simulation model. PMID:9865853
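
    An "elemental decay" pattern means N(t) = N0 * exp(-lambda * t), so a log-linear fit of counts against age recovers the fallout rate constant; the counts below are made-up placeholders, not the pcd/pcd mutant data:

```python
# Exponential-decay fit of cell counts versus age.
import numpy as np

t = np.array([30, 60, 120, 180, 270, 365])                  # age, days
N = np.array([9.1e5, 8.0e5, 6.3e5, 5.0e5, 3.4e5, 2.3e5])    # cell counts

# ln N = ln N0 - lambda * t: a degree-1 polynomial fit in log space.
lam, lnN0 = np.polyfit(t, np.log(N), 1) * np.array([-1.0, 1.0])
print(f"rate constant = {lam:.4f} /day, half-life = {np.log(2)/lam:.0f} days")
print(f"extrapolated N0 = {np.exp(lnN0):.2e} cells")
```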

  12. Improving consumption rate estimates by incorporating wild activity into a bioenergetics model.

    PubMed

    Brodie, Stephanie; Taylor, Matthew D; Smith, James A; Suthers, Iain M; Gray, Charles A; Payne, Nicholas L

    2016-04-01

    Consumption is the basis of metabolic and trophic ecology and is used to assess an animal's trophic impact. The contribution of activity to an animal's energy budget is an important parameter when estimating consumption, yet activity is usually measured in captive animals. Developments in telemetry have allowed the energetic costs of activity to be measured for wild animals; however, wild activity is seldom incorporated into estimates of consumption rates. We calculated the consumption rate of a free-ranging marine predator (yellowtail kingfish, Seriola lalandi) by integrating the energetic cost of free-ranging activity into a bioenergetics model. Accelerometry transmitters were used in conjunction with laboratory respirometry trials to estimate kingfish active metabolic rate in the wild. These field-derived consumption rate estimates were compared with those estimated by two traditional bioenergetics methods. The first method derived routine swimming speed from fish morphology as an index of activity (a "morphometric" method), and the second considered activity as a fixed proportion of standard metabolic rate (a "physiological" method). The mean consumption rate for free-ranging kingfish measured by accelerometry was 152 J g^-1 day^-1, which lay between the estimates from the morphometric method (μ = 134 J g^-1 day^-1) and the physiological method (μ = 181 J g^-1 day^-1). Incorporating field-derived activity values resulted in the smallest variance in log-normally distributed consumption rates (σ = 0.31), compared with the morphometric (σ = 0.57) and physiological (σ = 0.78) methods. Incorporating field-derived activity into bioenergetics models probably provided more realistic estimates of consumption rate compared with the traditional methods, which may further our understanding of trophic interactions that underpin ecosystem-based fisheries management. The general methods used to estimate active metabolic rates of free-ranging fish
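
    The bioenergetic bookkeeping behind such estimates is a simple budget: consumption must cover standard metabolism, activity, and growth, divided by an assimilation efficiency. The numbers below are illustrative assumptions, not the study's values:

```python
# Back-of-envelope daily energy budget for a consumption-rate estimate.
smr       = 60.0    # J/g/day, standard metabolic rate (assumed)
activity  = 45.0    # J/g/day, field activity cost, e.g. accelerometry-derived
growth    = 12.0    # J/g/day, somatic growth (assumed)
assim_eff = 0.8     # fraction of ingested energy assimilated (assumed)

consumption = (smr + activity + growth) / assim_eff
print(f"consumption ~ {consumption:.0f} J/g/day")
```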

  13. Sensitivity study of forecasted aftershock seismicity based on Coulomb stress calculation and rate- and state-dependent frictional response (Invited)

    NASA Astrophysics Data System (ADS)

    Cocco, M.; Hainzl, S.; Woessner, J.; Enescu, B.; Catalli, F.; Lombardi, A.

    2009-12-01

    It is nowadays well established that both Coulomb stress perturbations and the rate- and state-dependent frictional response of fault populations are needed to model the spatial and temporal evolution of seismicity. This represents the most popular physics-based approach to forecasting the rate of earthquake production, and its performance has to be verified against alternative statistical methods. Despite the numerous applications of Coulomb stress interactions, a rigorous validation of the forecasting capabilities is still missing. In this work, we use the Dieterich (1994) physics-based approach to simulate the spatio-temporal evolution of seismicity caused by stress changes applied to an infinite population of nucleating patches modelled through a rate- and state-dependent friction law. According to this model, seismicity rate changes depend on the amplitude of the stress perturbation, the physical constitutive properties of faults (represented by the parameter Aσ), the stressing rate and the background seismicity rate of the study area. In order to apply this model in a predictive manner, we need to understand the variability of the input physical model parameters and their correlations. We first discuss the impact of uncertainties in model parameters and, in particular, in computed coseismic stress perturbations on the seismicity rate changes forecasted through the frictional model. We aim to understand how the variability of Coulomb stress changes affects the correlation between predicted and observed changes in the rate of earthquake production. We use the aftershock activity following the 1992 M 7.3 Landers (California) earthquake as one of our case studies. We analyze the variability of stress changes resulting from the use of different published slip distributions. We find that the standard deviation of the uncertainty is of the same size as the absolute stress change and that their ratio, the coefficient of variation (CV), is approximately constant in
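
    Dieterich's (1994) response of the seismicity rate to a sudden Coulomb stress step has a closed form, which makes the sensitivity to Aσ and to the stress-change amplitude easy to probe; the parameter values below are illustrative assumptions:

```python
# Dieterich (1994) seismicity-rate response to a Coulomb stress step:
# R(t) = r / (1 + (exp(-dCFS/(A*sigma)) - 1) * exp(-t/ta)),
# with aftershock duration ta = A*sigma / (background stressing rate).
import numpy as np

r       = 1.0      # background rate, events/day (assumed)
Asigma  = 0.04     # MPa, constitutive parameter A*sigma (assumed)
tau_dot = 1e-4     # MPa/day, background stressing rate (assumed)
dCFS    = 0.5      # MPa, coseismic Coulomb stress step (assumed)

ta = Asigma / tau_dot                    # aftershock decay time, days
gamma_jump = np.exp(-dCFS / Asigma)      # state-variable jump factor

for t in (1.0, 10.0, 100.0, ta, 10 * ta):
    R = r / (1.0 + (gamma_jump - 1.0) * np.exp(-t / ta))
    print(f"t = {t:8.0f} d  ->  R/r = {R / r:10.2f}")
```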

  14. Finite Element Modeling of the Behavior of Armor Materials Under High Strain Rates and Large Strains

    NASA Astrophysics Data System (ADS)

    Polyzois, Ioannis

    For years, high-strength steels and alloys have been widely used by the military for making armor plates. Advances in technology have led to the development of materials with improved resistance to penetration and deformation. Until recently, characterization of the behavior of these materials under high strain rates and large strains has been primarily based on laboratory testing using the Split Hopkinson Pressure Bar apparatus. With the advent of sophisticated computer programs, computer modeling and finite element simulations are being developed to predict the deformation behavior of these metals for a variety of co